Tuesday, October 13, 2009

Human-computer capabilities and limitations

by David F. Bello

While it is of course relevant literature and a highly important field of inquiry within the study of human-computer interaction, I can't help but feel that computational imitations of human cognition are always already falsely deterministic of the way our human brains actually work. Perhaps exhibiting myself as a pessimist, I do not see how a discrete state machine of any sort might one day replicate the actual, internal functionality of a human brain. I am not a neuroscientist, nor have I ever claimed to be (except that one night at a bar in 2006, but let's not get into that...). It does, however, appear to me that the neurons in one's mind are not and cannot be fully represented by the electronic circuitry of a computer. The cognitive architectures discussed here act within software systems at a much higher level of abstraction than binary code, and they represent higher levels of cognitive activity than the fundamental units of neuron-to-neuron activity. Thus, any mapping of thought onto code must take place in some middle ground between cognitive architecture and software architecture, the very ground being sought out in the fifth chapter of our textbook, "Cognitive Architecture," by Michael D. Byrne.

Douglas Hofstadter's books always come to mind when considering these more theoretical topics of HCI. For instance, in I Am a Strange Loop, he analyzes the structures of abstraction that neuroscientists study (from amino acids, to synapses, to columns and cortices, and finally hemispheres), and compares these to cognitive structures which are the root of thought and language (from the concept "dog," to various types of memory, to memes and the ego, to a sense of humor, and finally the concept of "I") (Hofstadter 26). We don't always consider the relationship between these categories because the mappings between them have not yet been discovered as such. For instance, in his other book, Gödel, Escher, Bach: An Eternal Golden Braid, he presents a fictional dialogue about an ant colony, whose individual ants carry no meaning in themselves, but serve to create meaning through large-scale systematic shifts in movement and action. This is related to the idea of software, because though we do not often comprehend the direct relationship between executing a program and the binary code beneath that layer of interface, it is always there. Just as it would be completely without meaning to see an ant walk up a stalk of grass, it is completely without meaning to see a single switch move from on to off and back without the fuller understanding of a complex system of computational programming in context. It is the nature of neurology to follow this paradigm as well, which leads to the idea that a single neuron has no meaning unto itself; rather, it must be the entire system of neurons firing in real time and in context which provides meaning to the self and to the body.

But once we move forward from the levels of binary code and neural activity, there must be some middle ground through which computational cognitive models are able to accurately represent human cognition at an appropriate level. Mustn't there be? Perhaps software can be written to model the neural activity of the brain in its entirety. Or, moving upward through layers of mental abstraction, software which represents language and/or visual information in its entirety. Simpler than that, can we develop a program to accurately represent the entirety of human thought at the level of the self? That is, could a single person's neural activity ever be mapped completely into an implementation of computer software? If so, what are the implications for the relationships of death and birth, belief and memory, language and the senses? Can we ever provide a working cognitive model to "react" to a Beatles song or a painting by Goya?

To move away from my thus far tangential whimsy on the human condition, let us look at the more relevant limitations of human beings, rather than the limitations of computers to become like human beings. Cognitive Complexity Theory (CCT) perfectly exhibits the way that human cognition and computer software are layered by abstraction. The task mentioned in the chapter, deleting a word from a paragraph, is something to be done, separate (computationally) from the interface itself. The interface, which in this case can be swapped out and replaced with an alternative means of interacting with the deletion of that word, exists "on top of" the underlying code representing the word, its placement, and the structures of both on the computer's hard drive and in its memory. Likewise, when we speak that paragraph out loud, we may be thinking in units of abstract thought, allowing our brain's language capacities to perform the work. Or we might be examining our phrases as units, rather than words; words, rather than syllables; and syllables, rather than phonemes. I consider this telescopic view of language much like the layers of abstraction in software: graphical user interface, code objects and classes, programming language architecture, operating system architecture, machine language, hardware processing, etc.
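The separation described above, between the task (delete a word) and the layer at which it is carried out, can be sketched in a few lines of code. The sketch below is my own illustration, not anything from Byrne's chapter: the function names and the two "levels" are hypothetical, but the same deletion is expressed once against a high-level abstraction (words as units) and once against the underlying character sequence, scanned one position at a time, much closer to how the machine actually works.

```python
# Illustrative sketch: the same task ("delete a word from a paragraph")
# expressed at two levels of abstraction. All names are hypothetical.

def delete_word_high_level(paragraph: str, word: str) -> str:
    """High-level view: treat words as the units of meaning."""
    return " ".join(w for w in paragraph.split() if w != word)

def delete_word_low_level(paragraph: str, word: str) -> str:
    """Lower-level view: scan the raw byte sequence position by
    position, checking word boundaries by hand."""
    data = bytearray(paragraph, "utf-8")
    target = word.encode("utf-8")
    out = bytearray()
    i = 0
    while i < len(data):
        at_start = (i == 0 or data[i - 1] == ord(" "))
        at_end = (i + len(target) == len(data)
                  or (i + len(target) < len(data)
                      and data[i + len(target)] == ord(" ")))
        if data[i:i + len(target)] == target and at_start and at_end:
            i += len(target)
            if i < len(data) and data[i] == ord(" "):
                i += 1          # skip the following space
            elif out and out[-1] == ord(" "):
                out.pop()       # word was last: drop the space before it
        else:
            out.append(data[i])
            i += 1
    return out.decode("utf-8")

text = "the quick brown fox jumps"
print(delete_word_high_level(text, "brown"))  # the quick fox jumps
print(delete_word_low_level(text, "brown"))   # the quick fox jumps
```

Both functions produce the same result, yet the second is far longer and far more fragile: the higher layer hides exactly the kind of bookkeeping (boundaries, spaces, encodings) that the lower layer must manage explicitly, which is the point of the telescoping analogy.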

What does this matter? Well, for one, the way that we view computing systems informs our thoughts about software and what we can do with computers as users. Also, the very fact that we use computational cognitive models to gauge how we think about ourselves and our own thinking can bring about great sociological inferences on not only how we use computers, but how we "use" everything else: how we interact with one another and our environment, how we learn and grow as human beings, how we believe and how we think, etc. It is important to know that there is much we don't understand both about the human mind and the digital technology that surrounds us. While there are a great many programmers, there are fewer and fewer who go deeper and deeper into the levels of abstraction in code. This is due in part to the growing complexity of software, but also to a loss of interest in the seemingly "mundane" aspects of binary coding, machine language, assembly code, etc., in favor of higher-level programming systems which afford greater rewards, in the form of complete software objects developed in much shorter periods of time and with a far gentler learning curve. Just as it is unnecessary for the average language user to think of every single phoneme that comes from their mouth and the oral and vocal structures behind that production of meaning, not every computer user is willing to take the time to understand what every element of hardware and software is doing while they check their email and add friends to their Facebook profiles. There is much being left behind, but there are also further directions to go. Unlike the human mind, the computer is forever extensible: capable of upgrades in memory, processing power, and storage far beyond those of the human mind. Where will these abstractions take us?

Byrne, M. D. (2007). Cognitive architecture. In A. Sears & J. Jacko (Eds.), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications (2nd ed., pp. 93-114). Lawrence Erlbaum.

Hofstadter, D. (1999). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books.

Hofstadter, D. (2007). I Am a Strange Loop. Basic Books.
