Heim (Electric Language) – The Phenomenon of Word Processing

The screen is like a lens that moves at random over a text but is unable to apprehend the entire thing—like the Hindu allegory about the blind men who investigated an elephant and gave totally different descriptions: none of them saw God whole. —Remark by a computer user

How do you gain access to the new phenomenon on first encounter? Through a new language, of course. You learn to address yourself to unheard-of entities. You learn to speak of files that possess no apparent physical dimensions, menus offering a selection of nonedibles, and monitors that provide a certain vigilance over your own words. You learn to navigate with wraparound and with a cursor—which some would more appropriately dub cursee as it becomes the recipient of their profanities. You may even learn the rudiments of RAM and ROM memory, mouse compatibility, and the ASCII code. At the very least, you must address yourself to floppies and to windows, to function keys and program documentation (read: instruction manual). As you write, you learn to edit simultaneously: you learn to address yourself to block moves, hyphenation zones, and soft spaces versus hard spaces. The vocabulary of the editor’s cut and paste, which manipulates print on paper, becomes your own in electronic form. You learn not only to delete but also to unerase. Then to search and replace, and onward to globally search and replace. Automatic formatting and reformatting enter the daily routine of writing.

As you learn your way around the System, you come to feel literate in a new way—or, at least, so others would put it when, after a frenzy of frustration lasting anywhere from several weeks to several months (depending on your patience), you acquire enough skill to use word processing in your daily writing. Computer literacy is the terminology some people, especially educators, use when they want to indicate that something important is learned in the human-computer interface. The notion of computer literacy is no more than an awkward attempt, by borrowing a notion from another symbolic element, to bring computers rightfully within the purview of schools and scholarly consideration. Yet much of the new knowledge amounts to little more than the savvy required of a user when the device used is still in a primitive stage of development. The personal computer in the 1980s is still a very crude facsimile of a useful device, something like the Model T Ford with its hand-cranked starter and its user-coaxed engine.

Nonetheless, a certain self-masking is intrinsic to the phenomenon of word processing. It is more a built-in dissemblance feature than something which will pass away with the acquisition of skills and with advances in technology. The built-in dissemblance makes the phenomenon more elusive than other symbolic elements for writing. For dissemblance, or self-concealing, belongs to the phenomenon of word processing itself. To recognize the inherent self-masking feature of word processing is to begin the description.

Masking takes place in two ways. First, the very nature of our mode of apprehending the radically new masks its break with previous expectations. We necessarily grasp the new through metaphor, as was noted at the end of chapter 3. Human ingenuity taps intuitions we have already assimilated from our exposure to other procedures. Metaphors indicate newly emergent interpretations of existence; they are acts of freedom which also correspond to the demands of new situations. By applying metaphors to what is new, through an act of human ingenium, we become one with the new, involved in it, and we exercise a finite freedom, a limited infinity, in that we have addressed, in a specific and irrevocable way, a process of continual unfolding. Our freedom is tied to previous practices, yet tied only in order to be loosened and projected anew. The electronic environment re-calls the older print technology by invoking its language. We assimilate the new electronic element of language through the older technology of print-on-paper writing, and even through technologies far older than print. This falls under the general cultural imperative to understand things by interpreting them; cultural life is inherently hermeneutical, a process of renewed interpretation.

When the new language teaches us to scroll through the text we write, then we are addressing the new by way of the ancient. Scrolls, unlike book pages, are continuous texts and are therefore addressed as an unfolding whole. Yet the nature of the electronic whole does not at all unfold the way a papyrus scroll unrolls. Here’s how the WordStar manual describes horizontal scrolling: “The screen acts like a window to your document. The window, or screen, moves over the document to give you a full view of documents wider than 80 columns.” Window, page, and scrolling serve to get some hold on the radically new by using the handles of the familiar and even of the long past. This way of access, however, hides the calculational capacity of computers which makes it possible to assign pages to the text in an infinite variety of formats, before or during printout. And, of course, this way of access, through pagination, also seems to suggest that digital writing remains a permanent servant or tool for the purposes of print culture.
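The window metaphor the WordStar manual invokes can be made concrete with a short sketch: the screen is a fixed-width slice that slides over a document line wider than 80 columns. The line contents and widths here are illustrative, not taken from any actual program.

```python
# A toy model of the WordStar "window" metaphor quoted above: the
# screen is a fixed-width view that slides over a document wider
# than 80 columns.

document_line = "col:" + "".join(f"{n:>4}" for n in range(1, 31))  # 124 chars wide

def window(text: str, offset: int, width: int = 80) -> str:
    """Return the slice of the line currently visible on screen."""
    return text[offset:offset + width]

print(window(document_line, 0))    # the leftmost 80 columns
print(window(document_line, 44))   # scrolled right: the rest comes into view
```

Nothing in the stored line changes as the window moves; only the visible slice does, which is precisely why the metaphor both helps and hides: the "document" is a single continuous datum, not a stack of pages.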

While different possibilities are opened up by the new element, our first grasp on it begins by covering up whatever goes beyond the familiar. What we highlight through metaphor simultaneously hides what we perceive in the phenomenon. This is clear today in the infant stages of the computer revolution when very few working writers actually think of their words as residing on magnetic media in digital format. Most still continue to print out their work at all stages of composition, save hard copy drafts, and even do revisions on the hard copy and transcribe them back to the computer.

Scrolling text on a computer screen differs from reading through a stack of manuscript pages as greatly as watching television differs from reading a book. You cannot juxtapose two finished pages and read them together when you are working with screen copy. A long manuscript is indeed a kind of video papyrus where a feel for the distinct steps and linear stages of thought is, of necessity, minimized. Certainly the sense of finalized sections is marginal as the transfer of portions (blocks) is always inviting. Automation of the writing element enhances the sense of unified document, in which you can, at will, search and replace any expression throughout the entire manuscript, regardless of what the written expression may denote. When a document is visualized as an entirety, you think of it as a single unit and can shift material back and forth without hesitation. As you become accustomed to the digital text, you grow inevitably impatient when trying to look up by hand certain passages in a printed book; it is so much more convenient to invoke automated searches when seeking out passages in digital text. The metaphor of scrolling takes us back centuries while propelling thought into wholly new relationships to language.
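The global search the paragraph above describes depends on the document being one continuous character stream rather than discrete pages. A minimal sketch, with an invented sample text, shows why a single operation can touch every occurrence at once:

```python
# A sketch of global search-and-replace over a unified document:
# the text is one continuous string, so one operation reaches every
# occurrence, regardless of where printed "pages" would fall.

document = (
    "The scroll unrolls as a whole. "
    "A scroll, unlike a page, is continuous. "
    "Scrolling text recalls the scroll."
)

def find_all(text: str, term: str) -> list[int]:
    """Return the character offset of every occurrence of term."""
    positions, start = [], 0
    while (i := text.find(term, start)) != -1:
        positions.append(i)
        start = i + 1
    return positions

print(find_all(document, "scroll"))        # offsets of each lowercase hit
print(document.replace("scroll", "roll"))  # global replace in one step
```

The search is indifferent to what the expression denotes, exactly as the text notes: it matches characters, not meanings, which is both the power and the danger of replacing "globally."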

The metaphors for word processing are mixed and become explicitly, awkwardly, “ways of speaking”: “The cursor is the blinking square or underline on the screen. The position of the cursor marks the ‘point of action’ where text or commands are entered or deleted. The shape of the cursor is a square in Insert Mode and an underline in Overstrike Mode.” The universal complaint about bad documentation for computer programs belies the struggle inherent in the articulation of the unprecedented, and the very fact of the user having to consult documentation—instead of, say, an instruction manual—indicates the novelty of the situation: the electronic element is the marriage of writing symbols and scientific technology. The old language breaks down before the direct experience of the new; playing around with the writing element supersedes speech about it—for a time, at least. And thus the profound truth contained in the simple advice of one pithy piece of computer documentation: “This is one of those computer gizmos that’s easier to use than it is to explain. Just play with Up and Down; you’ll get the idea.” The event of a new sense of reality calls forth the primary human learning response: play. Play belongs intrinsically to ontological discovery, to the defining of realities emergent from chaos, to the finesse of free discovery contained within the limiting constraints of our world, as we saw in chapter 1.
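The quoted documentation on the cursor's two modes can be restated as a toy model: the cursor marks the "point of action," and the mode decides whether a typed character pushes existing text aside or writes over it. The function name and strings here are illustrative, not drawn from any real editor.

```python
# A toy model of the cursor behavior quoted above: Insert Mode
# (square cursor) pushes text to the right; Overstrike Mode
# (underline cursor) replaces the character under the cursor.

def type_char(text: str, cursor: int, ch: str, mode: str = "insert") -> str:
    if mode == "insert":        # push existing text rightward
        return text[:cursor] + ch + text[cursor:]
    if mode == "overstrike":    # write over what is there
        return text[:cursor] + ch + text[cursor + 1:]
    raise ValueError(f"unknown mode: {mode}")

line = "wrd processing"
line = type_char(line, 1, "o", mode="insert")      # -> "word processing"
print(line)
line = type_char(line, 0, "W", mode="overstrike")  # -> "Word processing"
print(line)
```

Playing with such a model, rather than reading about it, is exactly the learning path the pithy documentation recommends.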

A second way in which the phenomenon is intrinsically self-masking is inherent in the nature of the technological framework itself. The intelligibility of, say, electrical light switches is both accessible (because we know it is man-made) and inaccessible (because it is connected to a highly complex electronic infrastructure). The automobile provides limited access to its operations, as does the airplane. But as the latter devices become computerized, they too will offer increasingly masked intelligibility and even more limited access to underlying operations. There will be fewer up-front gauges that encourage human assessment of what is happening, and they will invite a lesser degree of human intervention. Computational systems represent the apotheosis of automation insofar as decision making becomes converted to digital format. John Seely Brown has called the inherent self-concealment of computational systems the “system opacity.”

System opacity is the fundamental disparity between the user and the engineered setup of the interface. No matter how much human skill becomes accommodated to word processing, the phenomenon will always remain partially hidden. Dissimulation is internal to the phenomenon because informational systems, unlike mechanical systems in general, are largely opaque, their function not being inferable from the underlying, chiefly invisible substructure. The types of physical cues that naturally help a user make sense out of mechanical movements and mechanical connections are simply not available in the electronic element. There are far more clues to the underlying structural processes for the person riding a bicycle than there are for the person writing on a computer screen. Physical signs of the ongoing process, the way the responses of the person are integrated into the operation of the system, the source of occasional blunders and delays, all these are hidden beneath the surface of the activity of digital writing. No pulleys, springs, wheels, or levers are visible; no moving carriage returns indicate what the user’s action is accomplishing and how that action is related to the end product; and there is no bottle of white-out paint complete with miniature touch-up brush to betoken the industrial chore of correcting errors by imposing one material substance over another. The writer has no choice but to remain on the surface of the system underpinning the symbols. As we shall see when discussing manipulation, there are different levels to which writers can be drawn into the process of automation. But no level offers experiential penetration into the underlying opacity of the system; natural language and thinking in natural language are simply incompatible with the binary digits used on the level of machine language—as any assembly-language programmer can testify.

As programs become powerful enough to exercise something of the discernment needed for automating more and more of the writing task, the system will acquire greater opacity. With present-day software, the writer must absorb acronyms and prompts as they are provided by the writing program: “When you see the familiar mark CM along with PRMPT, you know you’re in XyWrite—these two markers are always present in XyWrite. The CM is where you talk to XyWrite, and the PRMPT is where it talks back to you.” With the eventual introduction of human voice input and synthesized responses, the above description may become literal: you will address the system and the system will respond to you. The disparity between automated physical symbol and human writing process will grow. In any case, current research on computer interface has shown that it is necessary for the user to develop a mental model or set of inferences concerning the underlying movement of the system. However crude and unsophisticated it may be, a mental model allows the user to build some basis on which experiences can be collected and from which the user can respond to the interactive processes of automated writing. A metaphor or sense-endowing map of the system is not provided ready-made by the technology, as was frequently the case with mechanical operations. Because of the indefinite number of its operations and because of the flexibility of any given software, the user can never wholly rely on a so-called idiot-proof system; it will always be necessary to manage problems as the system is applied to different tasks in the flow of information in thought and writing.

The human user, then, confronts the opacity of the system by building a set of metaphors for making operational guesses at the underlying structure. These are not, of course, explanations in any strict scientific sense. They provide the basis around which to organize responses for continued interaction with the system. To counter system opacity, the user comes to visualize, on a conscious or subconscious level of awareness, the system as one or another kind of flow of information. When learning to write on a computer, for instance, you begin to imagine physical storage areas, such as disk drives, RAM drives, and read-only memory. In order to save and organize files securely, you learn to conceive of them as physical locations, as imaginative places, however phantomlike they may seem. Otherwise, disaster is near. The most elementary case is the neophyte watching in astonishment as the text disappears when scrolled off the screen; some primitive model of storage begins to replace the first sense of irretrievable loss as the user learns to handle the vanishing writing. Users devise increasingly complex stages of building up a model, as the following example shows.

As I am writing with a word-processing program, I decide that the current additions to a chapter are exploratory and may not be appropriate for the final version. I want to keep going, however, in order to get down the ideas that are occurring to me here and now. I want to use sections of what I have written in an entirely different way and I want to add new ideas to them. The resulting material might be valuable in another context. So, I visualize what is on the screen as a second file, existing apart from the first material I was working on. In order to interact properly with the system to achieve my goals, I must first hit a save button to preserve what I have already written, identify the file I am working on with a new and different name, and then remember to save the second file and retrieve the first when I want to proceed with the chapter. Saving the file means noticing the light on the disk drive go on, hearing the whir of the floppy disk, or seeing the program return a saved message with a storage drive and subdirectory location assigned to the hard drive. If, on trying to save the explorations, the error message “File already exists” comes up, I must have formed something of a model about the way a computer stores information by using a set of unique characters to identify and keep track of a volume of information. If I have developed no such model, it will be a total surprise for me to discover later that there is only one file stored with my chapter on it, and the version I wanted to save is gone, vanished forever. This insight implies no technical knowledge of the File Allocation Table (FAT) or the bits set for file identifiers on the level of machine-language bytes, nor does it require awareness of the tracking system on the disk drive. What I do need is a sense of the unique file identifiers accompanying any block of information to be stored. Otherwise my second-draft explorations might obliterate the more satisfactory version I wanted to keep.
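The mental model this example requires, storage keyed by a unique file name, can be sketched in a few lines. The dictionary stands in for a disk; the file names and the overwrite flag are invented for illustration and reflect nothing of a real file system's internals, FAT or otherwise.

```python
# A sketch of the file-naming model described above: storage is keyed
# by a unique name, and saving under an existing name either raises a
# warning or silently destroys the earlier version. The dict stands in
# for a disk drive.

storage: dict[str, str] = {}

def save(name: str, contents: str, overwrite: bool = False) -> None:
    if name in storage and not overwrite:
        raise FileExistsError(f"File already exists: {name}")
    storage[name] = contents

save("chapter4", "the version I want to keep")
save("chapter4-explorations", "tentative new material")  # distinct name: both survive

try:
    save("chapter4", "tentative new material")  # same name: refused
except FileExistsError as e:
    print(e)
```

Without the warning (or with `overwrite=True`), the second draft would simply replace the first under the same identifier, which is exactly the "vanished forever" surprise awaiting the user who has formed no model of unique file names.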

Needless to say, this type of learning is usually of the hardest kind: trial-and-error experiences. Most likely, error and then trial. Being restricted to the surface of technological devices, the user generally develops an operational interpretation of the system’s inner workings only after first mistaking the system’s procedures. Recovering from errors is the primary resource for learning how to interact with the computer. This gap of necessary misunderstanding highlights the second sense in which the phenomenon of word processing is self-masking.

There is something reassuring, then, about those word-processing programs that emphasize What-You-See-Is-What-You-Get, affectionately called WYSIWYG (wizzywig) by the computer industry. WYSIWYG programs help stabilize—and conceal—the untamed power of digital writing by approximating to a high degree the printing metaphor as it applies to word processing. Such software focuses almost exclusively on the polishing and production of a final document, that is, of a text formatted in pages and then printed on paper. By perfecting the conjunction of computer technology with stand-alone automated printers, programs such as Multimate, WordPerfect, XyWrite, and the older WordStar synthesize the metaphor of writing on a computer as writing on a high-tech typewriter.

Before WordStar and the later WYSIWYG programs, computer terminals were regarded as “glass teletypes.” When you type a page on a typewriter, what you see is what you get. This is not necessarily true in the case of word processing, where a text can be formatted for hard copy in any number of ways. In the electronic element, the computer text in itself is not at all graphically visible to human eyes nor is it essentially inscribed. Text remains resident in computer memory at varying locations and in various degrees of volatility. So-called screen-oriented word-processing programs, such as Leading Edge or Volkswriter Deluxe or the others just mentioned, go to great lengths to make smooth the transition from computer text to clean, pagelike views of the text on screen so that the real, final, or printed copy mirrors a pagelike text on screen. (Beginners on word processors are troubled by the fact that the viewing area of the monitor cannot accommodate the full size of a printed page.) Some programs, such as Professional QWERTY, even attempt to emulate the typewriter with an interface designed to ease the transition from mechanical to electronic elements. Considerable time and skill were needed to develop programs that preserve the correspondence between text in the electric element and the physical end product of the printing process. Not only must correspondence between the radically different elements be internally adjusted by the software, but the commands to the mechanical printer must also be somehow contained (embedded) in the text without actually appearing. Simultaneously, the hidden commands to the mechanical printer must be accessible at all times to the writer of the text. The complexity of such a feat, achieved in varying degrees by currently available software, points to one of the three aspects of the new writing element, its distinctive way of manipulating symbols.
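The embedding problem described above, printer directives that travel inside the text stream without appearing in it, can be sketched as follows. The control characters chosen here are invented for illustration, loosely in the spirit of early word processors, and do not reproduce any actual program's codes.

```python
# A sketch of embedded printer commands: formatting directives ride
# inside the stored text, are hidden from the screen view, and are
# expanded into printer escape codes only at printout. The specific
# codes below are hypothetical.

BOLD_ON, BOLD_OFF = "\x02", "\x03"   # invented in-text markers

stored_text = f"The {BOLD_ON}final{BOLD_OFF} copy mirrors the screen."

def for_screen(text: str) -> str:
    """What the writer sees: the commands are stripped from the display."""
    return text.replace(BOLD_ON, "").replace(BOLD_OFF, "")

def for_printer(text: str) -> str:
    """What the printer receives: the commands expand into escape codes."""
    return text.replace(BOLD_ON, "\x1b[1m").replace(BOLD_OFF, "\x1b[0m")

print(for_screen(stored_text))   # reads as plain prose on screen
```

One stored text thus yields two different outputs, and the writer can still reach the hidden commands by editing the stored form, which is the double requirement, invisible yet accessible, that the paragraph identifies.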