Wednesday, September 23, 2009

Snow White and the 'Thinking Machines'


In 1954, just 17 years after Walt Disney's Snow White premiered at the Carthay Circle Theater, a man lay dead of cyanide poisoning, a half-eaten apple at his bedside; the inquest ruled it suicide. Forty-five years after his controversial death, in 1999, Time magazine named Alan Turing among the 100 most important people of the 20th century for his influence on mathematics and computation. And recently, on September 10th, 2009, the UK government issued an official apology for its treatment of the now beloved historical figure, who in 1952 was convicted of 'gross indecency' (Turing was homosexual). The punishment: the revocation of his security clearance, which barred him from continuing his cryptographic work for the government, and a course of hormone injections accepted in lieu of prison, a treatment widely believed to have contributed to his suicide. The outcome: the loss of a mind that undeniably changed the world in ways he could never have imagined, most notably in military intelligence, cryptography and cryptanalysis, mathematics, computation, physics, and philosophy. In the interest of Human-Computer Interaction, we shall examine most closely his contributions to computation and the philosophy of artificial intelligence ("Alan Turing," 2009).

Alan Mathison Turing was born in London, England on June 23, 1912. Clearly brilliant, he grasped the work of Albert Einstein at the age of 16, even extrapolating Einstein's questioning of Newton's laws of motion from a passage in which it was never made explicit. In 1936, reformulating Kurt Gödel's results on the limits of proof and computation, he presented the paper "On Computable Numbers," in which he introduced the concept of the Turing machine: the central object of study in the theory of computation and the theoretical foundation on which the modern digital computer (microchip included) rests. Pure brilliance aside, perhaps it was also Turing's environment and context that shaped him into the influential contributor we recall today. Turing was in his intellectual prime during World War II and made a name for himself as a senior codebreaker, famously helping to break the ciphers of the German Enigma machine. His demonstrated wartime prowess in cryptanalysis and electronics brought him to the National Physical Laboratory, where he presented one of the first detailed designs for a stored-program computer, the Automatic Computing Engine; a pilot version of it ran its first program in 1950. His stored-program design consisted of a logic unit, a control unit, inputs and outputs, and memory. It was essentially a real-life Turing machine, the likes of which he had envisioned in his 1936 paper ("Alan Turing," 2009).


From the aforementioned Time magazine article: "So many ideas and technological advances converged to create the modern computer that it is foolhardy to give one person the credit for inventing it. But the fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine" (Gray, 1999). Yet the significance runs deeper than that. Some of our most prized endeavors rest entirely on Turing's computational contributions: the sequencing of the human genome, the mapping of much of our universe, not to mention our brief presence on the Moon. Truth be told, the computer now stands behind the curtain of most day-to-day human activity. Technology fundamentally exists to aid the human condition, and the digital computer is arguably its single most important leap, where history had previously offered only small steps. I don't think I need to belabor the significance of the digital computer. It is worth reminding ourselves, however, that Alan Turing lived in a different time, a time when logic ran wild like beasts in an enchanted forest. Yet he was able to harness logic into a tool that humankind could use to solve any problem that can be expressed as a computation. This is such a powerful thought, and yet we take it for granted almost every minute of every day.

In a 1948 paper, Turing summarized the machine he had conceived in 1936: "…an infinite memory capacity obtained in the form of an infinite tape marked out into squares on each of which a symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol and its behavior is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behavior of the machine. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine. Any symbol on the tape may therefore eventually have an innings." Turing had conceived of this machine, and of a universal machine capable of simulating any other, purely as a thought experiment, but it takes only a modicum of inspection to see that the thought experiment foreshadowed the stored-program computer and its central processing unit ("Turing Machine," 2009).
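To make that quoted description a bit more concrete, here is a minimal sketch of a Turing machine in Python. It is purely illustrative and my own invention (the rule format and the little "flip the bits" program are not Turing's notation), but it shows the essentials from the quote: a tape of squares, a single scanned symbol, and behavior determined only by the current state and that symbol.

# A minimal, illustrative Turing machine simulator. The machine sees only the
# scanned symbol; each rule maps (state, scanned symbol) to
# (symbol to write, head movement, next state).

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))          # sparse tape: square index -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        scanned = cells.get(head, blank)   # the one symbol "in the machine"
        write, move, state = rules[(state, scanned)]
        cells[head] = write                # alter the scanned symbol
        head += 1 if move == "R" else -1   # move the tape back or forth
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Example rule table: flip every 0 and 1 on the tape, then halt at the blank.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_rules, "0110"))   # prints 1001_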

After the war, Turing became deputy director of the Computing Machine Laboratory at the University of Manchester and swiftly moved on to the problem of artificial intelligence, a socio-cultural and philosophical problem as much as a technological one. Building on his ideas about computational machines, he broadened his thinking toward thinking machines, proposing that a machine could learn from its own experience and modify its own instructions. This train of thought leads to self-reference, which, according to minds like Douglas Hofstadter, is arguably what allows systems to acquire meaning despite being built from 'meaningless' elements. Self-reference and meaning are fundamental notions in the study of consciousness, an area not well understood even today. What is consciousness? What does intelligence mean? What does it mean for a computer to be intelligent? Is that the same as, or even comparable to, biological intelligence? What are the consequences of considering a computer to be intelligent, or even conscious?
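On the mechanical side of that train of thought, the notion that instructions are just data the machine itself can rewrite is easy to demonstrate. Below is a toy sketch of a stored-program machine whose code and data share one memory; the tiny instruction set (PRINT, REWRITE, HALT) is invented purely for illustration and is not any real machine's.

# A toy stored-program machine in which instructions and data share one
# memory, so a running program can rewrite its own instructions.

memory = [
    ("PRINT", "step 0: the instruction in cell 2 is about to be replaced"),
    ("REWRITE", 2, ("PRINT", "step 2: this line was written by the program itself")),
    ("PRINT", "step 2: this line never runs in its original form"),
    ("HALT",),
]

pc = 0                                  # program counter
while True:
    op, *args = memory[pc]
    if op == "PRINT":
        print(args[0])
    elif op == "REWRITE":               # self-modification: store a new instruction
        target, new_instruction = args
        memory[target] = new_instruction
    elif op == "HALT":
        break
    pc += 1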

We often deem a piece of computational technology intelligent when it behaves in certain ways: when it is extraordinarily or surprisingly helpful, or seems to have planned ahead and anticipated our needs. Just as quickly, though, we dismiss that intelligence as belonging to the humans who developed the technology; the designers and developers are the intelligent ones, having specified and programmed the behavior right into the computer as a crystallization of their own intelligence. This dismissal is becoming harder and harder to stomach, since many systems today exhibit behavior not accounted for in the designer's designs or the programmer's code. From Turing's harnessing of logic into computational elements, we arrive at the opposite of harnessing: emergence. At what point will we be able to comfortably admit that these machines are indeed participating in thought and showing genuine (yet artificial?) intelligence?

Turing considered the question, "Can machines think?" and realized it is likely impossible to answer, because 'thinking' is difficult to define in the first place. In fact, more than 50 years later the definition of 'thinking' is still under debate, rendering the question as slippery as ever. So instead of trying to prove whether or not computers could think, he focused on something that could be measured: a machine's ability to appear human, regardless of what is actually going on inside it. In 1950, Turing proposed what he called the "imitation game," later known as the "Turing test": a human judge engages in a natural-language conversation with one human and one machine, each of which tries to appear human. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test ("Turing Test," 2009). The Turing test is a standard benchmark in the field of artificial intelligence, but it does not address understanding, among other important cognitive elements such as emotion. In other words, even if a computer fools a human by manipulating symbolic representations and producing appropriate human language (see John Searle's Chinese Room), that does not show the computer actually understands any of it, or experiences any of it in a conscious manner. This remains a burning topic in philosophy.
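The structure of the test itself is simple enough to sketch in a few lines of Python. Both respondents below are stand-ins of my own devising (a canned-reply function for the machine, console input for the hidden human); the point is only the blinded protocol: anonymous labels, free-form questions, and a judge forced to guess.

# A minimal sketch of the blinded structure of Turing's imitation game.
# A real test would pit a genuine conversational program against a remote human.

import random

def machine_player(question):
    # Placeholder for a program trying to appear human in conversation.
    return "Good question -- I'd have to think about that for a moment."

def human_player(question):
    return input(f"(hidden human, please answer) {question}\n> ")

def imitation_game(questions):
    # Hide the players behind anonymous labels so the judge can tell them
    # apart only through conversation.
    players = [("machine", machine_player), ("human", human_player)]
    random.shuffle(players)
    labels = dict(zip("AB", players))

    for question in questions:
        print(f"Judge asks: {question}")
        for label, (_, player) in labels.items():
            print(f"  {label}: {player(question)}")

    guess = input("Judge: which respondent is the machine, A or B? ").strip().upper()
    actual = next(label for label, (kind, _) in labels.items() if kind == "machine")
    passed = guess != actual
    print("The machine passes." if passed else "The judge spotted the machine.")
    return passed

# imitation_game(["What is your favorite childhood memory?",
#                 "Compose a short poem about winter."])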

Artificial intelligence (AI), though not as popular as it was before the turn of the 21st century, has generated much fanfare and has contributed greatly to the field of HCI both directly and indirectly. The most notable indirect route has been through media and entertainment. Science fiction has stirred both delight and apprehension in the hearts of the masses, and AI often serves as an important component of the plot in film narratives (think "2001: A Space Odyssey" and Asimov's "I, Robot"). It has seeded the sensibilities of today's HCI designers: the ideas that machines can learn and correct their own behavior, and that they can take a proactive part in aiding or performing human activities, are foundational dispositions HCI designers possess. Turing suggested that rather than building a program to simulate the adult mind, it would be better to produce a simpler one simulating a child's mind and then subject it to a course of education (see the sketch below). Today it is accepted that for technology to truly aid the human condition, it must be able to learn and to match wits with the user as far as possible. The saturation of AI in media partly accounts for our current, expectant relationship with technology. Without knowing it, Turing provided an arena in which our imagination has danced for decades, resulting in an atmosphere where we not only question the intelligence of our computers; we expect it.
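As a toy illustration of that "child machine" idea (my own invention, not a reconstruction of anything Turing built), the sketch below starts from an almost empty program and shapes its behavior through a short course of education, using a crude reward-and-punishment scheme of the sort Turing mentioned when describing how such a machine might be taught.

# A deliberately tiny "child machine": it starts with no knowledge and is
# shaped by a short course of education using reward (+1) and punishment (-1).

import random
from collections import defaultdict

class ChildMachine:
    def __init__(self, possible_answers):
        self.possible_answers = possible_answers
        self.scores = defaultdict(float)    # (question, answer) -> learned score

    def answer(self, question):
        # Prefer answers that past education has rewarded; otherwise guess.
        best = max(self.possible_answers,
                   key=lambda a: self.scores[(question, a)])
        if self.scores[(question, best)] > 0:
            return best
        return random.choice(self.possible_answers)

    def educate(self, question, given_answer, reward):
        # The teacher rewards or punishes the answer the machine just gave.
        self.scores[(question, given_answer)] += reward

# A short course of education.
pupil = ChildMachine(possible_answers=["yes", "no"])
lessons = {"Is fire hot?": "yes", "Is ice hot?": "no"}
for _ in range(20):
    for question, correct in lessons.items():
        given = pupil.answer(question)
        pupil.educate(question, given, +1 if given == correct else -1)

print(pupil.answer("Is fire hot?"))   # almost certainly "yes" after its education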

When the life of Alan Turing ended, a piece of our humanity was arguably stolen with it. At just 41 years of age, Turing could have added much more to an already astounding collection of worldly contributions. He left us quickly and poetically, playing both the witch and the princess in an act right out of a classic Walt Disney film. The symbolism of that final act may not be lost to the ages, either: though most deem it an urban legend, some still point to the Apple logo as a tribute to his last scene. Whatever the case, Turing has undoubtedly left us with more than we could ever ask of one individual. Many disciplines have been heavily influenced by him, and some even created because of him, but the field of HCI in particular has benefited greatly from his work in computational theory and artificial intelligence. If Mr. Turing were around today, I think he would be proud to see the progress our society has made in accepting our human differences, and prouder yet to see what we've taken from his work and run with in the effort to aid the human condition.

Sources

Wikipedia articles:

Alan Turing. (2009, September 21). In Wikipedia, the free encyclopedia. Retrieved September 22, 2009, from http://en.wikipedia.org/wiki/Alan_Turing

Turing Machine. (2009, September 17). In Wikipedia, the free encyclopedia. Retrieved September 22, 2009, from http://en.wikipedia.org/wiki/Turing_machine

Turing Test. (2009, September 11). In Wikipedia, the free encyclopedia. Retrieved September 22, 2009, from http://en.wikipedia.org/wiki/Turing_test

Other:

Gray, P. (1999, March 29). Time 100: Alan Turing. Time.com. Retrieved September 22, 2009.
