There is a concept known as the Akashic record, a record of all things that were and are and, by some definitions, are to be. Sanskrit in origin, the notion suggests a repository of all knowledge, whether known or unknown, that takes up no space per se (the akasha is an alternate dimension) but may be accessed at will, by the initiated, to reveal all things. No, it wasn't in our readings for today, which are of a more scientific variety. This record is no more real than Aladdin's lamp; it is a myth, an idea revised (and expanded in ways that changed its original meaning) for English speakers in the 19th century. As a concept, it served to provide a source of authority to mystics who made claims without material substantiation. The Western tradition ascribes omniscience to God; divine revelation was a credible source for more of history than it wasn't; this is simply another variation on that theme. Joan of Arc claimed she heard St. Michael, St. Catherine, and St. Margaret; certain theosophists claimed the ability to tap into the akasha, or to know someone who could. Nothing to do with science. As a myth, though, it tells us something about human needs and desires. We have dreamed, over and over again, of knowing everything, of being able to handle vast quantities of information, as a means to augment ourselves, to reach beyond our limits.
Our readings focus on manifestations of this same aspiration.
VANNEVAR BUSH - 1945
We begin with Dr. Vannevar Bush's 1945 essay "As We May Think," in which he calls for "a new relationship between thinking man and the sum of our knowledge." Bush's optimism and positive outlook are contagious – even in the war, and in the most terrible weapons ever devised, he finds something to celebrate: evidence of the power of collaboration in a common cause. His is not an analysis of science as a double-edged sword; it is rather a celebration of what science has brought us (lasting benefits: control over the environment; better food, clothing, and shelter; greater security; release from bondage; longer life spans; better health; greater freedom from disease; the promise of improved mental health) and what it will bring us in the future. His reasoning grants him incredible foresight, and he suggests the possibility of all sorts of things: swifter communication; information storage devices; optical scanning; "an age of cheap complex devices of great reliability"; always-carried devices that can take photos (as so many cellular phones do today, though he envisioned them as glasses – maybe that is to come); Polaroid-style and digital cameras that let the photographer view the photo instantly; voice recognition systems; search engines; new systems of indexing; supercomputers; time stamps; voice and video records; spreadsheets; centralized electronic billing systems; miniaturized, portable data; personal computers; personal digital assistants; new forms of the encyclopedia (like Wikipedia); user-annotated records; data aggregators; even the internet itself. He saw even further, suggesting the possibility of interfacing such systems with our nervous systems.
All these things, he suggests, can be harnessed to improve ourselves, to reduce the time from problem to solution, to advance our knowledge and understanding ever more quickly. We can and should, he advocates, turn our peacetime efforts in this direction, for the betterment of scientists and of all humankind.
He is, of course, grounded in his time despite his prescience. Acutely aware of how things work, he never surrenders to the magical "someone will discover something new that allows this" thinking so common in science fiction ("warp drive" thinking), and that limits his predictions. His Memex, for example – the computer-like device for keeping a record and making connections between concepts – is the size of a desk rather than a small box sitting on top of one. Though charming, this groundedness and his optimism are also his vulnerabilities. Not a postmodernist at all, he never stops to consider that in constantly documenting our moments (photographically or otherwise), or the connections we form between ideas, we might lose something of the moment itself, or obscure what we were trying to find in the first place. Untempered acceptance of innovative technology as good would soon lead to the age of "DDT is good for me!" and thalidomide babies. He is, however, no tyrant; his vision is a kindly, noble, and inspirational one.
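Bush's "trails" are concrete enough to sketch. Here is a minimal toy version, in Python, of how I read the idea: a named sequence of documents tied together by the reader, not by an indexer. The document names paraphrase Bush's own Turkish-bow example; the data structure itself is my illustration, not anything specified in the essay.

    # A toy sketch of a Memex-style "trail" as I read Bush: the user, not a
    # cataloguer, ties items together under a name of their own choosing.
    # The structure and names below are my own illustration of the idea.

    trails = {}  # trail name -> ordered list of documents

    def add_to_trail(name, document):
        # Append one more item to a trail the reader is building.
        trails.setdefault(name, []).append(document)

    add_to_trail("turkish bow", "encyclopedia: the bow and arrow")
    add_to_trail("turkish bow", "history: the crusades")
    add_to_trail("turkish bow", "textbook: elasticity of materials")
    print(trails["turkish bow"])

The point of the sketch is who holds the pen: every link is the user's own.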
D. C. ENGELBART – 1962
Engelbart quotes Bush extensively. His paper on "human augmentation" documents a project to improve "the intellectual effectiveness of the individual human being." It realistically anticipates, proposes, and documents the use of new tools and systems for accomplishing more work, and drawing on more information, faster. Again, he is bound to his time when it comes to things like storage media – and also in his attitudes.
He uses "an aborigine" as an example of a person who cannot "grapple directly with the sort of complex situation in which we seek to give him help." Inherent in Engelbart's argument is the tenet that "our background of indirect knowledge and procedure" is good and represents an augmentation, while other views and ways of being are lesser and need fixing. (What if this individual does not wish to drive a car through traffic? It's dangerous, after all.) "Our culture has evolved means for us to organize the little things we can do with our basic capabilities so that we can derive comprehension from truly complex situations, and accomplish the processes of deriving and implementing problem solutions," he writes. A critic might point out that we seem to have invented as many problems as solutions, and still don't really get everything done or know what's going on; that critic might add that our culture has evolved just as surely as a means for us to get what we want from others, whether they wish to offer it or not, and whether we know what we are doing or not.
By the time I reached the "source of intelligence" hierarchies on p. 13, I was a little concerned about what might happen if we all just went along with Engelbart. Essentially, in the name of efficiency, someone would decide for us, in advance, which information sources we should trust; we would then receive information primarily from those sources and have to actively reject them in order to reach others, presuming alternative sources even make it into the system. (A cynic might say "ah, graduate school," but I don't believe it's so, or should be so.) The attribution of intelligence to synergism – many things (cells in the human brain, lines of code in a program) working together – is all well and good, but collective action does not necessarily imply centralization of power and authority. It is possible that the implication of "human augmentation" would be human control: a system that, in the name of advancing science, removed the individual from the process of reasoning about and evaluating information. That runs counter to what Bush suggested – he insisted there be a record, but allowed individual users to form their own "trails," their own connections between ideas, and to access all available knowledge without restriction (Engelbart criticizes this as "spending so much time in lower level processes of manipulation" on p. 39). The "augmented human" might be better at following orders faster and achieving desired results sooner, but the cost might be his or her individuality and freedom. Something which, incidentally, an experiment that involves writing with a pencil tied to a brick does nothing whatsoever to justify.
"Well designed symbol structures" (30) could restrict more than they facilitate. Breaking knowledge down into kernels may unnecessarily split related concepts – and who will be drawing the lines? (On p. 45 there is the suggestion of self-altering code and computers, so maybe the computers will tell us…) The coordinate-indexing descriptors, which are not well described, may be far worse than the Dewey decimal system. What he's suggesting does represent a step toward accepting information as belonging to certain categories rather than judging for oneself (is this a liberal idea, or a conservative one?). "Master code structure" – yikes – for cognition? Engelbart admits that "developing the conceptual structure represents a sweeping synthesis job full of personal constructs from smatterings picked up in many places" (49), and he hopes for something that will replace this. Replace personal experience? Grant a uniform one in place of many diverse ones? Are you sure you want this augmentation?
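To make the indexing worry concrete: coordinate indexing, as the term is generally used, files each record under a flat set of descriptors and retrieves by intersecting those sets. A minimal Python sketch under that assumption follows (the paper, as noted, does not spell its scheme out, and the records and descriptors here are invented examples). Whoever fixes the descriptor vocabulary is, in effect, the one drawing the lines.

    # A minimal sketch of coordinate indexing as generally understood: each
    # record carries a flat set of descriptors, and a query returns the
    # records filed under ALL the descriptors asked for (set intersection).
    # Records and descriptors are invented examples, not from the paper.

    from collections import defaultdict

    index = defaultdict(set)  # descriptor -> set of record ids

    def file_record(record_id, descriptors):
        for d in descriptors:
            index[d].add(record_id)

    def retrieve(*descriptors):
        sets = [index[d] for d in descriptors]
        return set.intersection(*sets) if sets else set()

    file_record("memo-1", {"augmentation", "symbol structures"})
    file_record("memo-2", {"augmentation", "hierarchy"})
    print(retrieve("augmentation", "symbol structures"))  # {'memo-1'}

A record tagged with the "wrong" descriptors is, for practical purposes, invisible – which is the sense in which a kernel split badly stays split.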
His system would, however, allow us to deal with vast quantities of information. The description of how that would look ("a natural position," my eye!) is admittedly reminiscent of some computing setups today (multi-screen, multi-interface). It seems like a leap to go from "word processing and mathematical computation are augmented by computers" to "let's let computers frame all of our logic from now on," but he is trying to escape the "90% maintenance / 10% progress" paradigm.
LICKLIDER-TAYLOR - 1968
After Engelbart, I was primed to dispute whatever these next two were selling – namely, that there will soon be "more effective communication via machine than face-to-face" – but I settled down and had to admit that it's quite likely so. I certainly prefer an online syllabus with hyperlinks to resources over having the professor simply tell me verbally what he expects me to read. The example of the technical project meeting held "face to face through a computer" is attractive, and does represent, more or less, what happens when everything is working properly for all parties. The requirement that individual models be synchronized for communication to occur ("cooperative modeling," p. 22) sounds a bit sinister, but it's a much gentler suggestion than the one in the earlier paper.
Also, the cartoons are fun. On page 26, though, it's unclear whether the man's rather poor drawing, improved by transmission through the computer so as to influence the woman, is actually good for her. Is she going to fall in love with a computer-generated fantasy "artist" and wind up with an accountant who can barely draw? Or is the suggestion that it is his idea – the heart with their initials in it, which doesn't seem very original or personal, but whatever – that catches her fancy, and that its transformation via technology is irrelevant to the bond they form as a result? I also noticed that only men were at the important meeting, and that this woman is acted upon rather than acting. Hmm.
Again, this is a prescient source. To handle the vast quantities of information, they suggest a system of nodes. While these operate with a little more human-like logic than the internet actually does today, it is a very nice representation of an information network. They predict ‘flame wars’ without using the word, and have a pretty clear idea of what interactive communities will be like. All this in 1968.
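As a rough illustration of the node idea – my own toy, not their design; the names, topology, and routing strategy are all invented – each message processor knows only its neighbors, and a message is relayed node to node until it arrives:

    # A toy store-and-forward network: each node lists only its neighbors,
    # and a message hops from node to node. Topology and names are invented
    # for illustration; the 1968 paper does not prescribe this algorithm.

    from collections import deque

    links = {
        "boston": ["chicago"],
        "chicago": ["boston", "denver", "houston"],
        "denver": ["chicago", "seattle"],
        "houston": ["chicago"],
        "seattle": ["denver"],
    }

    def route(src, dst):
        # Breadth-first search over the link table: one path a message
        # might take; returns None if the nodes are not connected.
        frontier = deque([[src]])
        seen = {src}
        while frontier:
            path = frontier.popleft()
            if path[-1] == dst:
                return path
            for nxt in links[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])

    print(route("boston", "seattle"))  # ['boston', 'chicago', 'denver', 'seattle']

In the real thing the search would be distributed rather than computed from a global table, but the shape – messages relayed hop by hop, no direct line required – is already in the 1968 paper.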
Is it okay, do you think, for computers to "know who is prestigious in your eyes and buffer from a demanding world" – even granting that spam is an evil? Are you sure this is what you want?
MYERS – 1998
Brad Myers brings us much closer to the present, and his source list is a who’s who of important names in HCI.
Myers is looking backward, for the most part, and presents us with the ways that the problem of dealing with massive quantities of information has been, if not solved, greatly ameliorated.
In the forward-looking section, his suggestions for what's next are: gesture recognition; multimedia; virtual and augmented reality; three-dimensional interfaces; computer-supported cooperative work; natural language and speech. We have since seen the amazing growth of computer-supported cooperative work ("Web 2.0"), and the others do exist today, though they're not perfected or in widespread use (except for multimedia), so I can ascribe prescience to him as well.
SUMMARY
Our readings for today trace a history of the idea of handling vast quantities of information in better ways. We have been moving in the directions these theorists suggested. Though creatures of their times, they each predicted the future accurately in many respects.
I am personally grateful for much of that progress – I enjoy the ability to manipulate images, acquire information, work with math I couldn't handle otherwise, and so on – but I do encourage you to think about the assumptions implicit in some of these ideas. I don't believe "Google is making us stupid" at all, but I do think the most efficient path to solutions often lies in improved critical thinking, in examining the nuts and bolts of a problem, and in viewing things from multiple perspectives ("thinking outside the box"). Systems that allege to reduce work by removing the user from the analysis of information quality entirely are no more reliable, in my view, than trusting someone who claims to read the Akashic record.