"Three Laws of a Popular Visionary"
Before I was accepted into Rensselaer Polytechnic Institute's Technical Communication program, I did not even know what the acronym "HCI" stood for. One would have been hard-pressed to divine any information about the field of human-computer interaction from me. My mind was a blank on the subject -- or so I thought. Yet if that same person had asked me about the prophetic implications of science fiction for modern science, the floodgates would have burst open, inundating her with ideas drawn from the imaginations of so many authors. Little did I know how deeply those prophetic visions would reach into reality.
Popular culture is something that social scientists and ethnographers have long studied to understand how a mass audience thinks about various realms of knowledge. Not surprisingly, science is one such realm. Media darlings such as Carl Sagan and Michio Kaku have shaped the popular notion of what science is. Their ideas advance the assertion that science is a part of life and will not fall away so long as the public imagination remains captured, and their credentials lead the public to believe what they say, because, after all, they are experts in their fields. But something else permeates the public imagination even more thoroughly than the scientists themselves. While the Sagans and Kakus of the world bring forth "real" science, the imagination is most truly touched by the fiction author. Arthur C. Clarke, Robert Heinlein, Frank Herbert, and their ilk all contribute to the speculative realm of popular culture, a realm that in turn impacts science itself. Not surprisingly, when you ask the "common man or woman" what they see in the future, something from science fiction often emerges.
Within the domains of science and fiction, there is contention. Yet the world was blessed on January 2, 1920, with the birth of the truest exemplar of both: Isaac Asimov. Born in Russia, Asimov grasped the public image of what it means to be a legitimate scientist and a legitimate fictional prophet more fully than any other single figure. His pioneering fiction brought new, radical ideas into the common understanding. He hybridized science and fiction into something digestible by the average person and respected by the advanced scientist.
Accounts of Asimov's life often focus on the prolific authorship that helped spread the importance of science, and rightfully so. Having written or edited nearly 500 books, along with an estimated 90,000 letters and postcards, he was indeed something of a literary fanatic. Yet he was always a scientist at heart. He graduated from Columbia University in 1939 and earned his Ph.D. in chemistry from the same institution in 1948. In the decades that followed, he honed his craft toward humanistic science fiction that lifted the genre out of the "pulp-magazine" domain and into the domain of widely respected fiction. Oddly, he worked in a void, separated from twentieth-century literature. As New York Times reporter Mervyn Rothstein recorded in his April 7, 1992, obituary of the author:
"I make no effort to write poetically or in a high literary style," he said in 1984. "I try only to write clearly and I have the very good fortune to think clearly so that the writing comes out as I think, in satisfactory shape. I never read Hemingway or Fitzgerald or Joyce or Kafka," he once wrote. "To this day I am a stranger to 20th-century fiction and poetry, and I have no doubt that it shows in my writing."
Regardless of this separation from the works of his contemporaries, Asimov's brilliance sustains a legacy that continues to influence thinkers to this day.
One area Asimov's writing has influenced is human-computer interaction (the aforementioned "HCI"). His most famous contribution to the field is his "Three Laws of Robotics," first articulated in the early 1940s and popularized in his 1950 collection I, Robot. These laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
It was, in fact, Asimov who coined the term "robotics," which first appeared in his 1942 story "Runaround."
Robots, and automata in general, have been part of the popular imagination for hundreds of years. Mary Shelley's Frankenstein: The Modern Prometheus offers a horrific account of the dangers of automating and granting independence to machines and machine-like creations, and automata and clockwork technology existed long before Shelley's masterpiece. "During the twentieth century, several new technologies moved automata into the utilitarian realm...[the] clay model, water clock, golem, homunculus, android, and cyborg [all] culminated in the contemporary concept of the robot," writes Roger Clarke of the Australian National University. He then defines the robot in more precise terms, as a machine that must "exhibit three key elements":
1. Programmability ("a robot is a computer")
2. Mechanical Capability ("a robot is a machine")
3. Flexibility
Clarke concludes that "we can conceive of a robot, therefore, as either a computer enhanced machine or as a computer with sophisticated input/output devices." He goes on to note that the distinction between "robot" and "computer" is becoming increasingly arbitrary, a notion that suggests that Asimov's optimistic visions of robotics will become more relevant as time progresses. Clarke does note, however, that "Many facets of Asimov’s fiction are clearly inapplicable to real information technology or too far in the future to be relevant to contemporary applications." Yet he continues,
"Some matters, however, deserve our consideration. For example, Asimov’s fiction could help us assess the practicability of embedding some appropriate set of general laws into robotic designs. Alternatively, the substantive content of the laws could be used as a set of guidelines to be applied during the conception, design, development, testing, implementation, use, and maintenance of robotic systems."
Examples that follow Asimov's laws are many. Consider, for instance, the implementation of a standardized human-robot interface across several cultural borders. Clarke asserts that real technology requires a flexibility that breaks from Asimov's rigidity: it must adapt to its context.
Questions of what is and what ought to be follow as well. "The greater a technology's potential to promote change, the more carefully a society should consider the desirability of each application," writes Clarke.
Ultimately, one of Clarke's strongest points following Asimov's laws concerns the need for a code of ethics surrounding interaction between humans and computers or computer-like interfaces. Information technology professionals must examine the ethical implications of their creations and their interfaces. Humans will not always accept robots, computers, and the like; to put users at ease, ethical lines must be drawn so that people can trust that an interface does not violate tacit codes of conduct and understanding. Perhaps the rigidity of Asimov's laws will help guide engineers toward such an understanding.
Jean-Luc Doumont, a senior member of the Institute of Electrical and Electronics Engineers, agrees with this notion. He writes:
"The search for fundamental laws, unfortunately, has seldom, if ever, been applied to professional communication. Most how-to books on the subject seem content with long lists of phenomenological principles. Useful as each of these might be, a long list of them will always be hard to assimilate, at least without some perception of a simpler underlying logic."
Doumont argues that with this kind of rigidity, one can communicate ideas effectively through a small set of fundamental laws. He defines "Three Laws of Professional Communication" that echo Asimov's: adapt to your audience, reduce noise, and use effective redundancy. Doumont's laws differ greatly from Asimov's in scope and applicability, but they carry the same inspiration.
In a similar vein, Dror G. Feitelson uses Asimov's Three Laws of Robotics to derive three laws that apply to software itself, another fundamental element of HCI. In his reformulation of Asimov's first law, Feitelson asserts that the user's production -- a text or the like -- is the sacred element that must not come to harm, even though current trends in programming allow software to discard that record. The user's data and experience must be held sacred as well: if software breaks this contract, the user cannot successfully employ it. "These changes," he writes of ideas and programs that protect the user's experience, "might seem small to the developers, who spend considerable time considering the best ways to perform various tasks, but they can be devastating to users who just want to perform their work with as little hassle as possible."
Feitelson's second law parallels Asimov's as well: "Software must obey orders given by its users," functioning "according to its specs." Likewise, his third law holds that software, like the robot, must protect its own existence. It should not crash or regularly malfunction but should continue to work properly, even when the user lacks the know-how to operate gracefully within its parameters. Feitelson acknowledges that his adaptation of Asimov's laws presents a "trade-off" for designers, but one that will improve performance and enhance the user's experience. "It's time for software developers to be more accountable for their products and to remember that their software is there to serve its users -- just like Asimov's robots," he writes.
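A small example helps ground Feitelson's first law. The sketch below -- again in Python, with a hypothetical function name (safe_save) and a backup scheme of my own invention rather than anything taken from Feitelson's article -- shows one way a program might refuse to harm the user's work: it backs up the existing file and writes the new contents through an atomic rename, so a crash mid-save never destroys what the user has already produced.

# Illustrative sketch only: the function name and backup scheme are assumptions.
import os
import tempfile

def safe_save(path: str, new_text: str) -> None:
    # Write new_text to path without ever losing the previous contents.
    directory = os.path.dirname(os.path.abspath(path))

    # Keep a backup of the user's existing work before touching anything.
    if os.path.exists(path):
        with open(path, "r", encoding="utf-8") as f:
            old_text = f.read()
        with open(path + ".bak", "w", encoding="utf-8") as f:
            f.write(old_text)

    # Write the new contents to a temporary file in the same directory,
    # then atomically replace the original; a crash mid-write leaves the
    # original file (and its .bak copy) intact.
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(new_text)
        os.replace(tmp_path, path)
    except Exception:
        os.unlink(tmp_path)
        raise

safe_save("draft.txt", "Chapter 1: The Three Laws...\n")

The trade-off Feitelson describes is visible even here: the extra copy and the temporary file cost the developer a few lines and a little disk space, but they spare the user the catastrophic loss of a document.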
In the end, Asimov is not responsible for an HCI revolution; he saw himself as nothing more than a humble humanist with a few good words to share. Yet unlike so many others, his ideas -- especially his Three Laws of Robotics -- have been a launch pad for innovators, essayists, and thinkers moving into a new realm of understanding regarding HCI. Without his rigid laws, there would be nothing to emulate and nothing to break so effectively.
WORKS REFERRED
Clarke, Roger. "Asimov's Laws of Robotics: Implications for Information Technology (Part 1)." Computer. Dec 1993: 55. IEEE Xplore. Web. 22 Sept, 2009.
Clarke, Roger. "Asimov's Laws of Robotics: Implications for Information Technology (Part 2)." Computer. Jan 1994: 62. IEEE Xplore. Web. 22 Sept, 2009.
Doumont, Jean-Luc. "Three Laws of Professional Communication." IEEE Transactions of Professional Communication. Dec 2002: 291. IEEE Xplore. Web. 22 Sept, 2009.
Feitelson, Dror G. "Asimov's Laws of Robotics Applied to Software." IEEE Software. July/Aug 2007: 112. IEEE Xplore Web. 22 Sept, 2009.
"Isaac Asimov Biography and Notes." 2009. Web. 22 Sept, 2009.
"Robotics Research Group." University of Texas at Austin. Web. 22 Sept, 2009.
Rothstein, Mervyn. "Isaac Asimov, Whose Thoughts and Books Traveled the Universe, is Dead at 72." 7 April, 1992. Web. 22 Sept, 2009.
For a footnoted version of the essay (in which the works will be directly cited within the text), send me a private message.