Tuesday, September 27, 2005

Presentation by Paul Lippert.

Seven Useful Concepts from and for Understanding Media
Paul Lippert

You can tell a lot about a person by the way he or she reacts to a book like Understanding Media. As an outstanding work of genius, it shakes things up--shakes us up--and simply does not permit a complacent reading. It forces us to make judgments, to follow hints, to assess open possibilities of meaning. I once predicted that the ultimate conflict in human history would be not that between rich and poor, left and right, East and West, or North and South, but between what I call high ambiguity-tolerance and low ambiguity-tolerance personalities. That the two types have not yet assembled into opposing armies and annihilated each other, I believe, is eloquent testimony both to the fact that they live in such separate worlds of experience and, ironically, to the essential complementary role that each plays for the other in society.
And yet in reading Understanding Media, or any of Marshall McLuhan's other works, it is not enough merely to be one of the former type. In this club, the critic William Empson's famous Seven Types of Ambiguity would seem to be just getting started. Not whether, but how we embrace the carefully crafted openness of McLuhan's writing will determine not only what we will make of it, but also what one can make of us.

This, of course, suggests quite a number of sensitive questions. And, of course, the more we depend on a simple "What is he saying?", the more we show our taste for the comforts of clarity and certainty. But this would be to insist on closure. This would be to focus on content apart from form. This would be to deny our own role in the process of reading McLuhan. For reading McLuhan is the kind of process in which the question "What is he saying?" must dialectically interact with the question "What am I saying?" By this I mean that McLuhan's statements always provoke us to reexamine the assumptions, expectations, even worldview that we bring to the text. What is usually passive and implicit in us must be made active and explicit as we struggle for a method that will lead toward enlightenment.
As we all immediately discover, there is very little explicitly about method in the content of Understanding Media. In fact, it is the conspicuous gaps or absences in its content that invite our participation. And yet we get the impression throughout the work that it is all about method. "The Medium Is the Message" is a statement that applies to this book itself, as well as to its content. But it is not a question of merely taking up something that fills the void of our ignorance. What is much more difficult is that we must modify, give up, or replace the more limited concepts of method to which we ourselves have become accustomed. Teaching film to university students in the United States, I see this all the time. Having seen, for the most part, nothing but Hollywood films, they find it hard to understand the industry's common value system and dramatic limitations. As McLuhan would say, they are like fish in water. It is hard for them to accept at first that in order to understand the movies they grew up with they need to see some foreign ones that they initially find quite baffling. For example, I find Vittorio De Sica's neo-realist classic The Bicycle Thief to be near-indispensable in helping them to see the absurdity of the inevitable happy ending or the limitations of the polar opposition of hero and villain. To understand Hollywood--or De Sica--they must learn to change something in themselves. And like those ambiguous drawings that the viewer can see alternately as representing one figure or another quite different one but never both at once, it is an all-or-nothing process.

In the case of Understanding Media, we might even say that this involves something like a religious conversion. To borrow the title of Northrop Frye's famous analysis of the Bible, it is The Great Code of media ecology. Quite simply, it represents a whole new form of sensibility. The question, of course, is, How do we get it?
This leads us to a classic conundrum: How do we use the form to understand the content, when, initially, we had hoped to use the content to understand the form? In the philosophy of scientific inquiry, this is the kind of problem that emerges out of the discovery of the interdependence of fact and theory. Theories are dependent on facts for their articulation and reference, yet facts are dependent on theories for their very conceptualization. This mutual determination makes empirical verification an unreliable method. In anthropology, the cultural context and the emphasis upon interpretation increase the difficulty still more. A great gulf will always lie between general theories and specific observations of a culture. The anthropologist Clifford Geertz often employs the nautical metaphor of "tacking" in suggesting a means of coping. Like a ship sailing into the wind, we tack laterally between the abstract and the concrete as we attempt to progress in our knowledge. The stiffer the wind, the more frequently we shift our direction from side to side. In traditional folk wisdom, we might see it as a type of chicken-and-egg problem. This has the advantage of focusing our attention on the question of what comes first. In Geertz's figuration, it would be to ask, From which port do we set sail?
This would depend on where we are coming from. Geertz stresses the primacy of what he calls local knowledge, which comes from living contexts of human activity. Though restricted in scope, it has a practical relevance to its context. Correlated with other localities of knowledge, it suggests patterns which point to broader, more abstract understandings.
What I would like to do here is to start with two localities of knowledge: yours and mine. I have a list of seven concepts taken from Understanding Media that have been particularly useful to me in trying to grasp the monumental changes of our present age, and that I would like to discuss with you today. Although I do aim to some degree here to "explain McLuhan," what I believe would be more fruitful would be for you to hold my interpretations up against those produced in your own work and life. This is not to produce agreement or disagreement, acceptance or rejection--a point of view, as McLuhan would say--but to search for patterns which lead to better understanding media.
The first, and I think the most significant, concept to be found in Understanding Media is that of environment, along with the complementary concept of antienvironment. Media, and all technologies, surround us. Like a house or clothing, their significant influence on us is neither isolated nor specifically noticeable. Instead, they influence us more generally and pervasively, affecting the way we interact with everything around us. Their impact is formal, in the sense of Aristotle's concept of formal causation, which is so clearly explained by Eric McLuhan in the newly-published Book of Probes. In assessing the environmental qualities of a given medium, the crucial dynamic to grasp is that to the degree that it is capable of determining the scale and proportions of our perception it must itself be imperceptible, "naturalized," to use Roland Barthes' term for the related effects of mythology. Think of a lens: The more it facilitates your vision with its particular focus, the more it is invisible to you. The medium as environment is the ground which is necessary to give distinct perceptual form to any figure, or object, of our attention. The human nervous system, you see, is incapable of perceiving anything in a vacuum. Instead, we see contrasts in stimuli. Just how we perceive any particular object of our attention is a matter of how it contrasts with its environment. And for this contrast to amount to something perceived about the object itself, it is necessary for the environment to be, in a sense, taken for granted.
This figure/ground, or object/environment, relationship characterizes not just media but all perception. Consider its relevance to this elementary problem in physics. A train is traveling at 120 km./hr. due north. A man in the first car gets up and walks at 5 km./hr. toward the rear of the train. What is the velocity? I usually give my students a few minutes to think this over, but I think you can probably see right away that the question is meaningless until we specify what is the figure and what is the ground. If the figure is the man and the ground is the surface of the earth, then the velocity is 115 km./hr. north. If the figure is the man but the ground is the train itself, then the velocity is 5 km./hr. south. Change the ground to the surface of the sun, and we are talking about a spiraling orbit at supersonic speed. What makes this example anything less than self-evident is our tendency to assume the surface of the earth--what in English we usually refer to literally as the 'ground'--as the ground without really noticing it. As invisible environment, this assumed ground allows the contrasted figure to emerge clearly and distinctly as an object for our perception.
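Since the arithmetic does all the work in that example, it may help to see it restated as a small computational sketch. This is only an illustration of the problem above, not anything of McLuhan's; the frame labels and the sign convention (north positive, in km/hr) are my own assumptions.

```python
# Figure/ground restated as frames of reference.
# Sign convention (an assumption of this sketch): positive = due north, in km/hr.

TRAIN_RELATIVE_TO_EARTH = 120   # the train travels north at 120 km/hr
MAN_RELATIVE_TO_TRAIN = -5      # the man walks toward the rear at 5 km/hr

def velocity_of_man(ground: str) -> int:
    """Velocity of the same figure (the man) measured against different grounds."""
    if ground == "train":
        return MAN_RELATIVE_TO_TRAIN                             # 5 km/hr south
    if ground == "earth":
        return TRAIN_RELATIVE_TO_EARTH + MAN_RELATIVE_TO_TRAIN   # 115 km/hr north
    raise ValueError("The question has no answer until a ground is specified.")

print(velocity_of_man("earth"))   # 115  -> the ground we assume without noticing
print(velocity_of_man("train"))   # -5   -> same figure, different ground, different answer
```

The figure never changes; only the taken-for-granted ground does, and with it the answer.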
It may seem ironic, but it is just when a medium stops being noticeable as an object in itself--when we begin to take it for granted--that it begins to have its most significant influence. Rather than the medium producing specific, measurable effects that link it linearly to individual objects and events, everything around us appears to change. This is why this insight has for so long eluded the advocates of "objective" observation.
And yet we may legitimately ask: if the medium is environmental to the extent that it is invisible, how do we study it in this capacity? The answer points to the validity of what McLuhan so often said about the blindness of the specialist in this regard. One cannot study a medium's environment from within. Instead, one must look from the perspective of another medium, which functions as an antienvironment. An antienvironment is an environment that we have not yet taken for granted, and so is highly perceptible to us. This may be because it is new to us, or simply because we are presently accustomed to the conditions of another environment
which serves to throw the new one into focus. Antienvironment is environment as highly perceptible object and thus not functioning as environment. Though it is not conditioning what we perceive, its characteristics that would do this conditioning stand out to us as seeming unnatural, different from what we take for granted. The more we accustom ourselves to these characteristics of an antienvironment, the more it begins to function for us as a new environment by means of which we can by contrast perceive our old environment as a new antienvironment whose characteristics suddenly seem unnatural and thus highly noticeable. For example, an understanding of the cultural peculiarities of the modernist concept of intellectual property and its genesis in the typographic environment emerges as we contrast it with the very different conditions of the earlier chirographic environment for which the concept was largely nonexistent. That Martin Luther expressed a comic ambivalence over the pirating of his works by Dutch publishers is a sign that he lived in a time of transition between media environments. Today we also live in a time of transition, as the emerging environment of digital technologies places into question the future of this same concept. This practice of contrasting, or interfacing, the object/environment relationship of one cultural context with that of another is the basic method of McLuhan's probes in Understanding Media. Worked into the equation, figure(1)/ground(1) = figure(2)/ground(2), it is the basis of the four parts that comprise his later Laws of the Media.
The second, closely related, concept that I would like to take from Understanding Media is that of sensory bias. Media function as environments because they favor the use of our senses in certain ratios which determine different modes of perception. Words and other forms of thought are conceived along the lines of "sensory analogues," which are symptomatic of their roots in experience. Nowhere else is McLuhan's thought more deeply rooted in the phenomenology of the different senses, from the ancient Greek-Hebrew debate over whether thought itself is something seen or something heard to modern developments in cognitive psychology.
Perhaps the most fundamental phenomenological distinction between vision and hearing is that while vision has an intrinsic relationship to spatial perception, hearing is intrinsically related to our perception of time. Sound, as McLuhan's close colleague Walter Ong phrases it, exists only when it is going out of existence. It is tied to the moment and serves as the index of that moment or sequence in time. It is also always the symptom of some type of movement or activity. Sound registers events or happenings as they progress through time as opposed to pointing to the mere locations of inert objects. If there is sound, something is definitely going on, and right now. Vision, by contrast, is better suited to the analysis of static space. Motion is harder to focus on, but most of the objects of spatial perception, which are mute to the sense of hearing, are capably grasped by vision. Vision is more directional in its focus than hearing and is thus more analytical, or schematic, as it precisely distinguishes different points in space. These phenomenological connections between vision and space and between hearing and time are the primary basis of the intellectual connection between McLuhan and his inspiring colleague Harold Innis. A political economist whose interests progressed from the study of staple industries and transportation systems to theories about the role of technology in the development of civilizations, Innis' concepts of time-biased and space-biased technologies led directly to McLuhan's ideas about the sensory biases of media.
Because different media present us with different proportions, or ratios, of access to these different sense-worlds of experience, they have the capacity to shape our worldview. So rooted in the biases of specific sensory channels, our knowledge takes on the logic or logics peculiar to the sensory worlds from which they are derived. Habituated to these ratios which characterize our media environments, we tend to adopt them as our own perceptual biases, making us all relatively aural or visual personalities. These ratios of sensory bias within ourselves are the
bases of sensory analogues, or metaphors, that are the basic components used in conceptualization. Tracing thought backwards from its outward expression in linguistic practice to its origins in experience, metaphor analysis reveals how epistemologies are determined by phenomenologies.
The best examples for comparing visual- and auditory-biased media are probably speech and print. Dominance by one or the other of these media is seen by McLuhan and others as the basis of the essential differences between literate and oral, or modern and traditional, cultures. Though accounting for such broad patterns of history is complex, at its core is the fact that the spoken word is an event while the printed word is an object. The word as sound immerses us in a dynamic world of events in time. It is an animated, agonistic world, full of voices, human presences, and feeling. Unlike the printed word, it is decidedly unobjective. Committed to the static space of the printed page, expression of even the most abstract or transitory of phenomena appears in object form. Think of all the nouns you know that are neither persons, places, nor things. In an oral culture, most of these concepts would more likely be expressed actively. Allegory is necessary to act out exemplars of abstract concepts which may be dimly formed in oral cultures, whereas in print culture they are objectively analyzed.
Our habit of perceiving the world through language turned aural or visual has an effect on the relative authority of the senses. In print culture, seeing is believing, as the saying goes. How many sources of specific scientific evidence can you think of that are not visual? By contrast, in oral culture the voice of authority resides in personalities who speak or in traditional stories that are spoken. And, caught up in the flow of events which, like sound itself, "exists only when it is going out of existence," oral cultures are time-biased conservatively toward preservation of
tradition. In the visual space of print culture, we are biased spatially toward building, expanding, progressing. Today’s digital technologies dramatize the tension between eye and ear as never before, largely through user interfaces which put the two into intense interaction.
The third concept from Understanding Media that I would like to discuss is that of hot and cool media. This dichotomous concept serves to further characterize the sensory bias of a medium, culture, or concept. Although the general definition of these terms is fairly simple and forthright, understanding just how they apply to specific cultural analyses has proven most frustrating to many of McLuhan's critics. Rather than promising to lift the veil from this subject completely, I would like to limit myself here to an examination of how the terms apply to two different types of literacy that have emerged out of different media and historical contexts.
A hot medium is a medium that extends a single sense in high definition or intensity. A cool medium extends one or more senses in low definition or intensity and depends upon the user to "participate" in the synthetic interplay of the senses, even to the extent of calling for the imaginative use of senses not actually extended. Unlike the hot auditory quality of radio or the hot visuals of movies, television is cool because its users participate by filling in what is incomplete in the low-definition sounds and images, largely, McLuhan says, through the imaginative use of our tactile sense. But I said I was going to talk about literacy.
In all that one hears about literacy these days, and in spite of various ideas about the variability of the reader's response, there is very little about how different literacies emerge in response to different media and cultural environments. Even scholars in the McLuhan tradition often write about literacy as if it were a monolithic medium with homogenous effects, most obviously when they refer to three basic media-cultures: oral, literate, and electronic. This lumps everything from cuneiform on clay tablets to pulp fiction in paperback into the same cultural type. It does not categorically distinguish between the modern and the medieval or the
ancient. It ignores, to borrow the title of Elizabeth Eisenstein's excellent history, the printing press as an agent of social change, or, as McLuhan would put it, the Gutenberg galaxy.
Perhaps it is that this modern, typographic type of literacy is merely taken for granted. That it is a hard-won achievement, built out of a very different earlier form of literacy and now in reversion or some other form of transition, seems scarcely noticed. Just how peculiar typographic literacy is to the modern age becomes much clearer if we place it in contrast to the age of chirographic, or handwritten, literacy which preceded it. In many ways, chirographic literacy and culture stand in stark opposition to the qualities found in typographic literacy and culture. This would lead us to posit four basic media-cultures, as well as to investigate the specific characteristics of these fundamentally different literacies.
In terms of hot and cool, chirographic culture is cool and typographic culture is hot. At the most concrete level the printing press is the original mass production machine, designed with the principle of retooling for an infinite variety of highly specialized tasks at its core. It is ideal for creating a whole new world of starkly visual, decontextualized language. Under this industrially-controlled visualization, a new machine-like efficiency and precision was achieved with language. By contrast, in chirographic culture visual decontextualization of language was incomplete. Traces of the author, a scribal tradition, and the old oral tradition as context remain in the manuscript. More important, a new visual context of spatial semantics and explicit prose style was unable to develop. Because of this incompleteness, chirographic literacy was constrained by the need for the reader's participation in filling in the gaps. As McLuhan said, while typographic culture is consumer-oriented, chirographic culture is producer-oriented. Walter Ong has devoted much of his scholarship to investigating the cool qualities of chirographic culture, mostly in the context of examining the various "interfaces" between oral
and literate sensibilities and social classes within the confines of chirographic societies. But rather than adopting McLuhan's terminology, Ong speaks of the chirographic culture's inability to achieve full closure in its visualization of discourse. This lack of closure, Ong argues, is the basis of a pervasive insecurity and closed hostility characteristic of the age of rhetoric. With print, full closure is achieved through the more complete visualization of discourse. Ironically, this closure leads to a new, secure cultural openness characteristic of the age of romanticism. This romantic openness, Ong explains, has helped to bring us into the present age of electronic media.
But isn't this new age largely one of cool media, like television and the internet? Are we as a society maintaining the level of closure necessary for maintaining the openness of typographic culture? Isn't a hot medium like print necessary to preserve the social literacy that supports the open society? In our predominantly cool media environment, are we not slipping into something akin to the chirographic literacy of the past?
My fourth concept from Understanding Media is one that has been immensely popular and yet equally puzzling: that of retribalization in the electronic culture. While not quite the popular cliche that the related concept of the global village has become, the idea of reverting to the form of traditional oral culture within a technological environment has captivated the imagination of the latter-day romantics from the 1960s. As I recall, this is when we began cheering for the Indians instead of the cowboys. It is part of that openness from the typographic culture carried over into the new age. In the spirit of literate openness, McLuhan is usually neutral to euphemistic about the process. Yet, to the alert reader, he makes quite clear that retribalization means the end of the cherished open qualities of the literate culture.
Even when it comes to Enlightenment values, I suppose, there is no accounting for taste. And there are no doubt many values from oral culture that can beneficially complement or even liberate us from the pitfalls of our typographic ways. But I think that a more fundamental question here is that of how this new tribalism, or secondary orality as it is elsewhere called, is to coexist and interact with the accumulated surviving elements of the typographic culture. As the term secondary orality implies, it is an orality mediated by other technologies. As such, it is a tribalism that lives in a world that plays by very different rules than those that held sway in the preliterate, pretechnological era.
A good first observation is that while the concept colorfully applies to all sorts of popular phenomena in society today, there are also quite a number of things and people that certainly do not seem to be going tribal in the least; quite the opposite, in fact. The idea that some of us are retribalizing in some ways while others are even becoming more typographic is amply accommodated by Walter Ong's concept of the media/cultural interface. What this concept recognizes is that, historically, media are cumulative rather than merely successive, and that after the initial stage of primary orality the environments and cultures of more than one medium can coexist and interact with each other within the same society. And this interface between media can coincide with the interface between social classes.
Chirographic culture, for example, is an interface culture. Chirographic literacy is for the most part restricted to a small elite and applied to the institutions and interests controlled by that group. And much, if not most, of what is written is intended for oral recitation to nonliterates who reside in what Ong calls residual orality. Rhetoric is an art developed by literates for interfacing with oral subcultures.
The printing press created the possibility of a literacy that is universal, for all citizens and for addressing all topics. Typographic culture creates all sorts of universalizing ideals, among them the absorption of the residually oral into its ranks. But electronic media seem by their very nature to encourage and thrive on interface. Like classical rhetoric, they are produced and operated by highly literate people for the consumption of increasingly nonliterate people. And as we can hear from the tone of their discourse, best exemplified by the pervasiveness of advertising and other propaganda, they involve a very different sensibility at the sending end than they do at the receiving end.
The residually oral subcultures of the chirographic culture were composed of residually tribal people. But they were tribal people living in a social context in which the tribe was no longer a sovereign political or cultural unit. In a sense they lived in a world that was no longer of their own making, and in which their culture was at a disadvantage. Today's tribal culture also seems to exist in such a captive state. The continued, even increasing, dominance of society by a literate subculture should make us question just how global this village is. Aided by digital technologies, the phenomena that McLuhan associated with retribalization are many and are rapidly increasing in number. And yet, at least for the few, these technologies are augmenting the literate culture as well. Perhaps, along with Umberto Eco and others, we should ask if we are indeed witnessing retribalization, or the return of the middle ages.
Contrasting with this is my fifth concept from Understanding Media, which is that of a complementary relationship between specialization and standardization in typographic culture. In contemporary intellectual and popular culture alike, the idea of standardization is not one that stirs much excitement. And yet it seems that nearly everything these days is considered by someone to be somehow special. The dependence of the special on the standard often goes unnoticed. Although the concept receives scant explicit treatment in Understanding Media, the
careful reader is ineluctably drawn to it as he/she alternates between McLuhan's descriptions of typographic culture and literacy in general as being either standardizing or specializing. The more one is able to synthesize these observations into a more comprehensive view of the relation of either writing or print to its environment, the more the superficial appearance of contradiction is resolved.
The most broadly relevant example of standardization facilitating specialization actually does not come from the environments of print or writing but from that of speech. A mere forty-five or so phonemes are sufficient to express an infinite number of ideas. The ability to articulate the atypical is in part dependent on the faithful, rule-bound enunciation of this fixed set of phonemes. Pronouncing them differently does not make the meaning more singular; it makes it less clear. The less clear the message, the more context and listener expectation will fill in to lead to a more typical interpretation.
With writing, this becomes more obvious. Logographic writing, which evolved out of pictures, is limited in its capacity to express novelty efficiently. Because individual words or phrases tend to require their own separate symbols, it is a cumbersome system numbering in the thousands or even tens of thousands. I say "tend to" because in order to reduce the sheer burden of memorizing so many different shapes, various modifications in the system are often made to allow certain shapes to serve double duty. The addition of phonetic indicators helps but also introduces a whole different system of representation to one that is already quite complex. And since separate symbols harken back to their pictorial origins, individual expressiveness in how they are drawn is highly valued. None of this aids in standardization. Repeated attempts at script reform illustrate the difficulty.
With a writing system like this, the number of words that one can read or write is dependent upon the number of symbols one knows. This tends toward the expression of the typical, since novel concepts require additional symbols which must be memorized. New or foreign concepts have an especially hard time, since they cannot be communicated in this type of writing until a suitable symbol has been invented and taught to the writer and all potential readers. In spite of various efforts to standardize logographic writing, the essentially individual nature of the word-symbol works against individuality of verbal expression.
The alphabet eliminates this problem. Just over two dozen simple, highly standardized symbols are all that are necessary to express anything that can be said. The rules of phoneticization are so simple that an average individual can become literate on a level coextensive with his/her speech competence in a relatively short time. As McLuhan's colleague Eric Havelock has so eloquently shown, the standard efficiency of the alphabet enabled the emergence out of an oral culture of a new social literacy through which ancient Greece's singular contributions to Western civilization were made.
Print further standardizes to achieve greater individuality. As an industrial medium, it is based upon repetition and exchangeable parts; all the mechanical traits we usually associate with tedium and drudgery. Think of rows of boxes, full of hundreds of little metal A's, B's and C's. The letters themselves lose the individual character of handwriting, as they are assembled into unaesthetic, uniform lines which develop an efficient if colorless spatial semantic. With print come national languages, standardized spelling, page numbers, alphabetized indexes, uniform pricing, and a host of other quasi-mechanical systematizing devices. For some, it even led to the quest for a single, mechanical method for producing all knowledge. But more significantly, it
led to the forms of discourse and thought which created modern individualism and a culture oriented toward novelty and progressive change.
Just how this culture developed, of course, is the story of modernity itself. But the crux of how it is based on standardization can be explained in terms of a conceptually simple shift in symbolic bias. With analogic symbols like pictures, in which the signified in some way naturally resembles the signifier, what is true of the former is also true of the latter. A standard schematic, icon, or illustration will produce a relatively standard meaning. Individually expressive pictorial technique produces relatively individual meaning. But in both cases, the accomplishment of objectives is limited. Standard meanings are never so standard, individual meanings are never so precisely articulated as with digital symbols. With digital symbols like words the signifier is related to the signified only by convention, which facilitates this complementary relationship at the level of the basic mechanics of the symbol. It is from the dull, standard surface of the printed page that the most special, exciting things emerge. Calligraphic or other visual embellishment of the signifier is counterproductive. Print works best when the letters do not call attention to themselves as letters. Increasing digitalization of the word from speech to writing to print institutionalizes the dynamic of standardized means toward specialized ends for an expanding range of social phenomena and for an increasing proportion of the population. New media like television and the internet which re-introduce analogic symbolic forms, subvert and erode this dynamic. They lead to the ideal of just doing one's own thing. They bring to mind George Bernard Shaw’s reaction to New York’s brightly lit Times Square, “What a beautiful thing, for someone who cannot read.”
The sixth concept that I would like to take from Understanding Media to discuss with you is that of the cultural center-margin dynamic and the related idea of monopolies of knowledge. For most of my students who have grown up going to public schools and who are habituated to the rhetoric of an avowedly democratic society, it would seem that knowledge is not only free-flowing, it is positively forced upon you. It never occurs to them that a major function of developments in media, and technology in general, is to restrict the dissemination of knowledge. In chirographic culture, the complex, baroque qualities of logographic writing systems briefly outlined earlier work toward the restriction of written knowledge to a small elite. In alphabetic pre-print cultures, scholarly languages like medieval Latin and regionally varying elaborate calligraphic styles and peculiar systems of abbreviation stunted the universalizing and democratizing effects of the Greek invention. It is this tendency to monopolize knowledge that provides the plot dynamic for Umberto Eco's medieval detective novel, The Name of the Rose.
To a large degree, this works even without conscious design. Media form centers of knowledge/power where their influence is strongest. The environment created by a dominant medium or other technology makes ways of knowing peculiar to it seem obvious to those acculturated to it, though they would seem incomprehensible to those who are not. In fact, the more dominant the medium, the more the culture seems locked in to those particular ways of knowing and is unable to innovate or adopt alternate perspectives. At the margins of an empire, where the dominant medium's influence wanes, a cultural interface occurs. This is a region where the medium's influence is not monopolistic. Its environmental impact is weak; it is neither fully environment nor antienvironment. It interfaces with the similarly weak influence of a bordering medium's environment. This is where innovations and revolutions are born.
In the works of McLuhan's mentor Harold Innis, the histories of civilizations can be seen in terms of the constant struggles between the monopolizing powers of technological environments at the centers of empires and the innovative oppositional forces exerted by new technologies that form at the margins. Ancient Egypt and Babylon, for example, were empires based on monopolies of knowledge held by theocratic elites by means of various technologies, including
logographic writing. As the inflexibility of the logographic environment led to conflict and weakness, innovation was encouraged at the margins of these societies in the form of phoneticization of writing. The Phoenicians, who lived on the outskirts of these empires, developed a script that was easier for more people to learn and that was more easily adaptable to the exigencies of long-distance trade. They systematized the principle of phoneticization, which had been dimly known to these earlier cultures but never fully developed.
The Phoenicians built a powerful trading empire which developed its own rigidities, owing significantly to the tendency of their syllabic script to favor conventional forms of expression. So long as their script represented actual speech syllables, of which there are hundreds in any language, with a mere few dozen symbols, one pretty much had to stick to business as usual if one wanted to be understood. On the margins of this empire was an oral people who had barely been exposed to literate cultures: the Achaeans. Unlike other oral cultures whose exposure to literacy was through conquest, they were close enough to be influenced but not dominated by the Phoenician empire and its media environment. Adapting the Phoenician syllabary to the sounds of their own purely oral language, the ancient Greeks completed the phoneticization process by abstractly splitting the syllable into vowels and consonants and creating the full-fledged alphabet.
The demands of empire and warfare put strains on the implementation of Greek literacy, which led to still more innovations at the margins of the alphabet's influence. Out of the Etruscan backwater sprang the Roman empire, based in large part on its use of papyrus for administrative and military communications over an excellent system of roads. After the fall of the empire, it was at the margins of Roman conquest and the Roman Catholic Church's influence along the Rhine and extending into the Alps that further innovation and resistance to monopoly
appeared once again. Gutenberg's printing press sparked the Protestant Reformation, and power, wealth, and knowledge shifted from southern to northern Europe.
In the course of the twentieth century, the European culture's North American margin has developed into the culturally-dominant United States. New York and Los Angeles appear today as the twin centers of an electronic media/cultural empire. But where are today's margins? Much of the thrill that comes with watching television or logging on to the internet is the sense of you yourself being at the center of things. The electronic culture's populism and commercialism are constantly reworking their formulas to integrate the marginal into the center, at least symbolically. This is part of what McLuhan means when he says that electronic media are decentralizing. Later theorists of the postmodern call it decentering. Does this mean that the concept of the center-margin dynamic no longer applies?
McLuhan says in Understanding Media that there are no margins in electronic culture "so far as the time and space of this planet are concerned" (p. 273). I think that this is significant. Electronic technology obsolesces the old mechanical age through its instantaneous triumph over the physical impediment of time and space. But have the divisions between cultural insiders and outsiders ever been based purely on physical distance? McLuhan goes on to predict a new equality and subversion of hierarchy that I think we all still await. However, I think his qualification leaves open the possibility that center-margin dynamics still exist in the conceptual spaces of our cultural world. This would explain the existence of omnipresent centers in the midst of similarly omnipresent margins. In the electronic culture, it would seem, monopoly no longer need rely upon spatial location to insure its control.
The last concept that I want to talk about is one that subsumes all of the others I have so far discussed. Though its meaning is so broad as to seem vague, I think it is a good example of the kind of idea that I discussed at the beginning of this paper as “provoking” us to reexamine our point of view and fundamental understandings. It is the notion of techne. Related to the contemporary concept of technology, it is more general, more abstract. Rather than the way to accomplish some mechanical task, this premodern term refers to the “way,” or style, of an entire culture. In ancient times it was often used to refer to artistic style or other nonmaterial entities. For us, I believe, it can perform the important function of reconnecting in our culture much that has been split apart through the excesses of modernity. Work and leisure, art and industry, logic and feeling, economics and culture, the practical and the ideal, all of these dichotomies are the products of the splitting off of one type of techne favored by modernity which became the dominant ‘technology’ of today. It is a split that is a primary cause of that quintessential modern illness: alienation. It is a split that alienates us from, among other things, an understanding of our nonmechanical relations to technology. At a time when so many new technologies mediate various nonmaterial cultural products, it is at once ironic and tragic that the so-called “information age” is so culturally numb, as McLuhan would say, to its own products.
The solution, I think, is not to spurn technology, but to re-integrate it critically into the rest of life. So repatriated, the ways that it echoes and is echoed by the culture around it can be heard more clearly and the broader sense of the techne can emerge in consciousness. The whole culture can become more self-conscious and more self-actualized in a wider portion of its being. Karl Marx’s scholarly career is especially enigmatic in this regard. Founding an entire theory of historical and social evolution on the basis of modes of material production, he nonetheless saw this industrial base as being determinative of and interactive with the social/cultural superstructure. And yet, in spite of the “dialectical” nature of his materialism, his theory is perhaps weakest in accounting for this superstructure. For example, his Labor Theory of Value explains persuasively how the value of a commodity is determined by the amount and type of labor socially necessary to produce it. But in accounting for how the value of labor itself is determined culturally (e.g., What is “normal” lifestyle?, What is a “necessity”?, What is a “decent” standard of living?), it is notably less “scientific.”
The concept of techne allows us to see the technical and the cultural not only as mutually determinative, but as seamlessly interrelated. This is not the type of insight to suit the low ambiguity-tolerance personality, to be sure. But it will help us at least to begin to understand that most perennially intractable intellectual subject of all, ourselves.
