Sunday, October 5, 2008

Human Capabilities Capture Attention

The material covered in the readings was not at all what I expected when I chose the “Human Capabilities” topic for my short response essay. (If I had known this, I might have chosen differently!) I had expected to read about how computer technologies extend human capabilities, but was instead introduced to a variety of complex, albeit fascinating, theories and frameworks shaping the field of HCI, from perspectives as varied as perceptual-motor interaction, neuroergonomics, and cognitive architecture. Truthfully, neuroergonomics and situated cognition (which I do know a little bit about from reading James Paul Gee’s work) captured my attention more than some of the other theories covered in the readings. However, because, as Welsh et al. write in Chapter 1, “there needs to be a greater consideration of attention in the selection and execution of action” (30), the best course of action here might be a discussion of “attention” and what it means in terms of some of the theoretical frameworks presented in this week’s Handbook readings.


Welsh et al. examine perceptual-motor interaction within the Information Processing framework. The authors argue that as HCI moves into the realms of “virtual and augmented reality, teleoperation, gestural, and haptic interfaces, among others, the dynamic nature of perceptual-motor interactions are even more evident” and the “assessment of actual movement required to engage such interfaces [sic] more revealing” (29). They also, as mentioned above, argue that more emphasis should be placed on the role of attention as it relates to action. Some of the questions they believe are critical to understanding that role are: What is attention? What does it mean to pay attention? What influences the direction of our attention? The answers to these questions are important for a greater understanding of how we interact with our environment.


Attention, according to the authors, is “the process through which information enters into the working memory” (30). Attention, they argue, is selective; focus can be shifted from one source of information to another and can be divided between more than one source of information at a time. This is, they say, the “cocktail party” phenomenon (a vodka and tonic, please). At a party, for example, Welsh et al. explain, a person can listen to more than one conversation at a time, though in doing so will make diminished contributions to the primary conversation as he or she dedicates more resources to the secondary one. The implications for design, they write, are threefold: as designers work to limit stress on individual information processing systems, they need to a) create interfaces that assist in the selection of the most appropriate information, b) be knowledgeable about the types of attention shifts that occur and how to use (or not use) them, and c) when attention must be divided between a series of tasks, design each task to facilitate automatic performance in order to avoid conflicts (30).
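To help me think about what “limited resources” actually implies, here is a little toy sketch of my own, not anything from the chapter; the pool size, the demands, and the 10% figure for “automatic” tasks are all invented:

```python
# A toy sketch (mine, not Welsh et al.'s): attention as a single limited
# resource pool shared across concurrent tasks. All numbers are invented.

POOL = 1.0  # total attentional capacity, in arbitrary units


def fraction_of_demand_met(demands, automatic=frozenset()):
    """Return the fraction of total attentional demand that can be met.

    demands: dict mapping task names to attentional demand (arbitrary units).
    automatic: tasks assumed to run automatically, needing only 10% of their
    nominal demand (design implication c above).
    """
    needed = sum(d * 0.1 if name in automatic else d for name, d in demands.items())
    return round(min(1.0, POOL / needed), 2) if needed else 1.0


if __name__ == "__main__":
    talk = {"primary conversation": 0.7, "secondary conversation": 0.6}
    print(fraction_of_demand_met(talk))  # 0.77 -- both conversations suffer
    print(fraction_of_demand_met(talk, automatic={"secondary conversation"}))  # 1.0
```

Crude as it is, it captures the trade-off: the primary conversation can be fully served only when the secondary one stops demanding attention or becomes automatic.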


Although the reading does not concern itself with value sensitive design, it seems there are “values” at play in these implications. First, the creation of an interface that assists in the selection of “the most appropriate information” suggests that control resides in the machine, diminishing human agency; second, designs that incorporate deliberate attention-shifting activities could be considered manipulative. While these might be perfectly acceptable design considerations under most conditions, it behooves us to consider how our attention and our ability to receive information are molded by interactions with computer technologies.


As many of our interactions with technologies are now visual-auditory interactions, it also becomes increasingly important to understand how these will impact attention, write Welsh et al. The authors focus most of their attention on “visual attention.” Visual attention, they explain, is typically dedicated to information received by the fovea and the perifoveal, color-sensitive cone cells of our eyes (31). Also shaping visual attention are rapid eye movements, or saccades, which are thought to be necessary to “derive a detailed representation of the environment” (31). Visual attention, according to the authors, is akin to a “spotlight or zoom lens that constantly scans the environment” (31). Two coding systems are presented for understanding visual attention: the “Spotlight Coding System,” described with the example of reading text on a page, and the “Object Coding System,” “which appreciates the context of an object in a scene or the most interactive surface of the object” (31). These coding systems can be enhanced or disrupted by either exogenous (external) stimuli beyond our control or endogenous (internal) shifts of attention within our control. There are benefits and drawbacks to both.
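Here is another toy of my own contrasting the two coding systems; nothing in the chapter is implemented this way, and the grid, the “mug,” and the weights are invented. The spotlight fades with distance from the fixation point, while object-based selection covers an entire object wherever it sits:

```python
# Toy contrast between location-based ("spotlight") and object-based attention.
# Grid, objects, and weights are invented for illustration; not from Welsh et al.

from math import dist


def spotlight_weight(point, fixation, radius=2.0):
    """Location-based: attention falls off with distance from the fixation point."""
    return max(0.0, 1.0 - dist(point, fixation) / radius)


def object_weight(point, attended_object):
    """Object-based: every location belonging to the attended object is selected."""
    return 1.0 if point in attended_object else 0.0


if __name__ == "__main__":
    fixation = (0, 0)
    mug = {(3, 0), (3, 1), (4, 0)}  # a hypothetical object in the scene
    for p in [(0, 1), (3, 0), (4, 0)]:
        print(p, round(spotlight_weight(p, fixation), 2), object_weight(p, mug))
    # Near the fixation the spotlight dominates; the far-off mug is still fully
    # selected under object coding even though it lies outside the spotlight.
```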


The ability to exogenously shift attention by design can “quickly draw one’s [visual] attention to the location of important information” (31). Endogenous shifts are performer-driven and result from a greater variety of stimuli (e.g., arrows, numbers, or words). Though a more subtle approach to cueing people, endogenous shifts may require an act of interpretation to understand the cue, thereby requiring “a portion of limited information processing capacity that can interfere with, or be interfered by, concurrent cognitive activity” (31). In other words, visual cues are all well and good until they become distracting and deflect attention from the task at hand.
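A toy sketch of that trade-off, again entirely my own invention with made-up numbers, in which only the endogenous cue carries an interpretation cost that grows with whatever else is already occupying the mind:

```python
# Toy sketch of cueing costs. An exogenous cue (a flash at the target location)
# is assumed to capture attention with no interpretation step; an endogenous cue
# (an arrow or word) must first be decoded. All timings are invented.

def time_to_orient(cue_type, concurrent_load=0.0):
    """Illustrative time (ms) to orient attention after a cue appears.

    concurrent_load (0.0-1.0) stands in for how busy cognition already is;
    it slows only the interpretation step of endogenous cues.
    """
    BASE_SHIFT = 100.0  # hypothetical cost of the attention shift itself
    if cue_type == "exogenous":
        return BASE_SHIFT  # automatic capture, no interpretation needed
    if cue_type == "endogenous":
        INTERPRET = 150.0  # hypothetical cost of decoding the arrow or word
        return BASE_SHIFT + INTERPRET * (1.0 + concurrent_load)
    raise ValueError(cue_type)


if __name__ == "__main__":
    print(time_to_orient("exogenous"))                        # 100.0
    print(time_to_orient("endogenous"))                       # 250.0
    print(time_to_orient("endogenous", concurrent_load=0.5))  # 325.0 -- interference
```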


“Attention” comes up again in Chapter 2, where Proctor and Vu survey methods used to study human information processing. Unlike Welsh et al., who focus on visual cues, Proctor and Vu begin their discussion with Cherry’s experiment (1953), an auditory study that tested attention in information processing by playing two messages into both ears simultaneously. For the most part, the results showed that people could repeat only one of the messages, though they might remember a physical characteristic of the other, such as the gender of the speaker.


In 1958, Broadbent introduced his “Filter Theory,” which attempts to explain the phenomenon Cherry discovered. Filter Theory holds that the nervous system is a single-channel processor, which implies that when two messages are played at the same time, the unattended message cannot be identified. In 1964, however, Treisman introduced her “Filter-attenuation Theory,” which suggested that an attenuated signal may be sufficient to allow identification of the second message if it is one with a low identification threshold (e.g., a person’s name or an unexpected event). Deutsch and Deutsch had, in 1963, proposed “Late Selection Theory,” which, according to Proctor and Vu, holds that unattended stimuli “are always identified, but a bottleneck occurs in later or reflective processing” (54). The difference between these theories is that “Late Selection” assumes meaning is fully analyzed, while the “Filter” and “Filter-attenuation” theories do not.
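A toy way to see how the three accounts differ, using invented thresholds and signal strengths of my own rather than anything from Proctor and Vu:

```python
# Toy comparison of the three selection accounts sketched above. Thresholds and
# signal values are invented; this only illustrates where the theories disagree
# about whether an unattended message can be identified.

def identified(theory, unattended_signal, threshold):
    """Can an unattended auditory message be identified under each account?

    unattended_signal: strength of the unattended message after any attenuation.
    threshold: how little evidence the item needs to be recognized (low for
    one's own name or an unexpected event).
    """
    if theory == "filter":             # Broadbent: single channel, message blocked
        return False
    if theory == "filter-attenuation": # Treisman: a weakened signal may still
        return unattended_signal >= threshold  # clear a low threshold
    if theory == "late-selection":     # Deutsch & Deutsch: always identified;
        return True                    # the bottleneck comes later
    raise ValueError(theory)


if __name__ == "__main__":
    own_name, random_word = 0.3, 0.3  # same attenuated signal strength
    print(identified("filter", own_name, threshold=0.2))                 # False
    print(identified("filter-attenuation", own_name, threshold=0.2))     # True
    print(identified("filter-attenuation", random_word, threshold=0.6))  # False
    print(identified("late-selection", random_word, threshold=0.6))      # True
```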


A more recent theoretical framework examined by Proctor and Vu is the “Load Theory of Attention” (2004) (54). This theory “resolves the early versus late selection debate by introducing two selective attention mechanisms: perceptual selection and cognitive control” (55). When perceptual load is high, that is, when great demands are being made on the perceptual system, irrelevant stimuli are excluded from processing altogether. When working memory load is high, on the other hand, it is not possible to exclude or suppress irrelevant information at the cognitive level. This means that interference from distractions is reduced under conditions of high perceptual load but increases under high working memory load. Another theory, one that extends Load Theory, is “Multiple Resource Theory,” which suggests that “multiple task performance is typically better when tasks use different input-output modes than when they use the same mode” (55).
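One more toy sketch of my own: the rule of thumb from the reading (high perceptual load screens distractors out; high working memory load lets them interfere) is the only part taken from the chapter, and the scoring function itself is invented:

```python
# Toy sketch of Load Theory's two mechanisms as described above.

def distractor_interference(perceptual_load, memory_load):
    """Illustrative interference score (0 = none, 1 = maximal); loads are 0.0-1.0."""
    # Perceptual selection: under high perceptual load, irrelevant stimuli are
    # never processed, so less of the distractor gets through.
    leakage = 1.0 - perceptual_load
    # Cognitive control: under high memory load, whatever leaked through cannot
    # be suppressed, so it interferes more.
    suppression = 1.0 - memory_load
    return round(leakage * (1.0 - 0.8 * suppression), 2)


if __name__ == "__main__":
    print(distractor_interference(perceptual_load=0.9, memory_load=0.2))  # 0.04, low
    print(distractor_interference(perceptual_load=0.1, memory_load=0.9))  # 0.83, high
```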


On a final note, Proctor and Vu write in Chapter 2 that “the central metaphor of the information processing approach is that a human is like a computer” (44). Although I am not going to address it in my response, I think it worth mentioning that I prefer Byrne’s (2007) cognitive architecture framework for design over the information processing paradigm. It suggests that “cognitive architectures are software artifacts constructed by human programmers, designed to simulate human architectures in a humanlike way (Newell, 1990)” (94). This says to me that computer technologies – through software design – are becoming more like humans, rather than the other way around. It seems more natural this way. A bit more neuroergonomic, if you will.
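To make the “software artifact” idea concrete for myself, here is a minimal, purely hypothetical sketch of the kind of production-rule cycle that the architectures Byrne surveys (ACT-R, for example) are built around; the rules and “memory” are invented, and no real architecture is anywhere near this simple:

```python
# Hypothetical sketch of a production-rule cycle: repeatedly fire the first
# rule whose condition matches working memory, letting its action update memory.

def run(memory, rules, max_cycles=10):
    """Run a tiny production system over a dict-based working memory."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition(memory):
                action(memory)  # the rule's action updates working memory
                break
        else:
            break               # no rule matched: the model halts
    return memory


if __name__ == "__main__":
    # A two-rule "model" of noticing a cue and then clicking a button.
    rules = [
        (lambda m: m.get("cue_visible") and not m.get("target_found"),
         lambda m: m.update(target_found=True)),
        (lambda m: m.get("target_found") and not m.get("clicked"),
         lambda m: m.update(clicked=True)),
    ]
    print(run({"cue_visible": True}, rules))
    # {'cue_visible': True, 'target_found': True, 'clicked': True}
```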


Readings Cited:


Welsh, T. N., Chua, R., Weeks, D. J., & Goodman, D. (2007). Perceptual-motor interaction: Some implications for HCI. In A. Sears & J. Jacko (Eds.), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications (2nd ed., pp. 27-42). Lawrence Erlbaum.


Proctor, R. W., & Vu, K.-P. L. (2007). Human information processing: An overview for human-computer interaction. In A. Sears & J. Jacko (Eds.), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications (2nd ed., pp. 43-62). Lawrence Erlbaum.


Byrne, M. D. (2007). Cognitive architecture. In A. Sears & J. Jacko (Eds.), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications (2nd ed., pp. 93-114). Lawrence Erlbaum.