Tuesday, November 11, 2008

On robots and morals

Responding to articles written by my own professors two weeks in a row: awful or awesome?

The articles on children and technology this week, especially the two studies asking what can be seen in interactions between children and personified technologies (admittedly also colored by our recent study of value-sensitive design), raised some interesting questions, some of which fall outside the realm of human-computer interaction and some of which extend beyond the topic of children and technology. Instead of writing a straightforward essay attempting to encompass all of these questions, I'd like to break them down one by one with a brief discussion.

1. Do moral judgments in interactions with "personified agents" generalize to all interactions with technology?

This question was inspired by Freier's use of a Half-Life 2 model for his morality study. The model was given a voice separate from the game's and played a game of Tic-Tac-Toe with a researcher. Children judged the researcher's insults toward the model to be immoral, especially when the digital player spoke up for her own rights and feelings. The question above can be asked specifically in this case: if, shortly after, the children taking part in the study played Half-Life 2, and the model who had previously played Tic-Tac-Toe was now a gun-toting member of the opposite team, would the child have an issue with shooting her? Would the moral reaction to the character carry over from one context to another?

More generally, once the child has encountered enough digital models that advocate for themselves, would the child start to ascribe morals to relations with all digital models? After all, characters in video games often cower, protest, or try to fight back as we carjack them, shoot at them, stab them, or steal from them. Within the context of the game, we see these as conventional, rather than moral, matters. Yet the children in the study maintained that the acts were morally wrong even when presented with the possibility that other cultures did not consider them so. Would the cultural rules of a video game be perceived in the same way?

2. Are these moral reactions limited to children of the current generation, who are growing up surrounded by technology?

I would be interested to see how the AIBO study might be replicated among members of different generations. My own interactions with similar lifelike robots have been a strange mixture of curiosity and revulsion. I'm interested in the biological mimicry, but at the same time, such...creatures?...still fail to transcend something like Mori's "uncanny valley." Stuffed dogs, in my mind, are inanimate objects that we can comfortably project our moods upon. Robot dogs that need regular stimulation to maintain "mood," "health," or "happiness" are a different...well, they're a different animal entirely.

I will always prefer a real dog to a robot dog, and I might assume that others of my generation would feel the same. Part of the appeal of a real pet is that their dependency upon their owner forges a relationship with consequences. Feeding a dog regularly makes it like me, which makes me like it in return. If I forget to feed my robot dog, I change the batteries and it is fine (or, in the case of a fellow student with a Webkinz, I can starve it, give it a spa treatment, and get more points than I would have if I fed it regularly). Note my use of "it" as a pronoun, unconscious until a second readthrough.

However, children who grow up surrounded by technology in a world where a cellphone is a necessary appendage, where more knowledge comes from the internet than from teachers, and where hearts over the head of a Tamagotchi equal reciprocated love, may inevitably see such anibots differently. So I think two further studies would be interesting: one of older people interacting with the AIBO, and one of children of the same generation a few years from now, as teenagers, interacting with similar technologies.

3. Are moral reactions to "personified agents" fair or valid?

A different way to ask this might be: can the child subjects of these studies see the strings and the hands of the puppetmaster? All of the personification in avatars, robots, and digital people comes from real researchers and programmers. The technologies are not human and cannot relay values or emotions that were not implanted in them by their creators. Are children aware of this at all in their interactions with such technologies? Are they responding naturally, or with the knowledge that they are being manipulated? Do children really think the AIBO gets pleasure from eating, or are they playing along? Or does it matter?

Another question might be how children would react if the study of morals were reversed. What if the computer were in charge of placing the X's and O's and cheated the researcher? Would the children perceive that as a moral violation? Do these children believe that the personified agent is capable of making moral decisions as well, or only that real people should behave morally in their interactions with it? What qualifications would have to be met to describe a complete moral relationship?

4. Once we ascribe morals to our interactions with technology, can they still function as tools?

A friend of mine is living in Hollywood working on spec scripts, and I recently worked with him on punching up a scene in which [copyright / trademark / stealing prohibited!] a man is too embarrassed to ask his female-voiced GPS for directions to an adult video store, so he asks how to get to a convenience store across the street. When he turns left into the parking lot of the video store, the GPS voice seems to admonish him for his trickery. Comedy aside, various technologies are used to perform morally grey tasks that some would argue are necessary. If we get to the point where computers respond to natural speech and talk back to us, as demonstrated in the Apple commercial we viewed in class, would it be morally wrong to, for instance, have a "personified agent" who mediated the process of putting a bolt through a cow's skull at a slaughterhouse? Who controlled the process of lethal injection? Who wielded the weapons system on a tank? Who fired nuclear missiles? Would we program such technologies with voices and reactions appropriate to their tasks? Would we make them so that we could lessen our guilt over performing such tasks ourselves?

5. What will this lead to in the crazy science fiction world that will inevitably be our future?

OK, so perhaps some of the last question fits more into this category. What future are we working toward in conducting our research? On the surface, these studies wanted to ascertain what effect technologies have on the development of children, but the questions they raised went beyond that for me. Freier offers in his conclusion the following observation: "The implications of the alternative design [digital models that cannot self-advocate their own rights] are that children will come of age engaging in a significant number of social interactions that lack any moral feature possibly increasing the likelihood that children will not construct a rich understanding of the intimate relationship that exists between social reciprocity and morality." While the study offers evidence that digital interactions can be designed to develop morality, as the above questions show, I wonder about the other half of the equation, in which we begin as a culture to ascribe moral agency to our own digital creations.

Do computers have moral rights, or are they limited to the morals we program them to have? Do digital representations of people online have the same rights as their offline counterparts? Can relationships with digital people and animals offer the same benefits as real interactions? Do personified agents dream of AIBO sheep?

Monday, November 10, 2008

Someone Please Think of the Children

As technology progresses and social norms change with it, we must often adapt in order to continue our development. Such is the case with children and technology. Children today have more information at their fingertips than ever before possible; they are constantly connected and constantly informed of their current circumstances. As designers of technology, and as conduits of technology and information, we must take into account the increasing degree of exposure that the youth of the world has. I am in no way suggesting that material be censored; however, that does not mean we should ignore the potential differences and implications between children's access to technology and adults'.

One of the questions Livingstone poses is whether the Internet is a distinctive technology. This is a perfectly understandable question, as the Internet is not a physical device or really anything specific. The Internet is simply a series of protocols that allows users to connect to servers and to other users for the purpose of exploring content and information. We may not interact with it physically, but we do tend to treat it as if it were a tangible object, which can perhaps be attributed to the interaction involved with the modern computer. In many cases, the computer is viewed less as a piece of technology in itself and more as a means to an end, functioning as an interface or point of contact through which the user interacts with other pieces of technology, such as software applications or the Internet. As such, it can be argued that the Internet is indeed its own technology, one that can be interacted with through the intermediary of the modern computer (and many other devices as well), bringing the user the potential for a broad range of interactions.

Livingstone also asks whether children constitute a specific "group," noting that some feel they might be "accounted for" within other demographics, or by responses given by their parents. As Internet access in the household becomes more common, it is only natural that everyday interaction with this technology adapt to its ever-presence. It can also be said that children have a great potential to learn about new technology, and to interact with it in a much more natural manner than adults, as in many cases it is a technology they are "growing up with." Learning about newer technology at a young age, when the brain is still sponge-like, facilitates the intuitive and natural interaction that children often have with technology. Livingstone reports that "In the UK, recent surveys show that among 7–16-year-olds, 75 percent have used the Internet, a figure which doubled the adult population figure of 38 percent". With children using the Internet at nearly twice the rate of the average adult, it is perhaps no surprise that interactions on the Internet are frequently geared towards the fast comprehension and browsing habits of children.

Not only do children frequent the Internet more than the average adult, but their habits while online typically differ as well. As we can see in Livingstone, "BMRB's Youth TGI (2001) showed that the most common uses are studying/homework (73%), email (59%), playing games (38%), chat sites (32%) and hobbies and interests (31%)." For adults, by contrast, "Looking for information and using email were the two most common online activities of Internet users in 2006. These were done by 85 per cent and 81 per cent of adult users respectively in the three months before interview in 2006" (NSO). This points to a much larger share of recreational use among child users, and such interactions should be planned for accordingly. For example, a site like MSNBC.com, which is much more likely to be frequented by adults, contains a wealth of information but is not necessarily aesthetically appealing, as it follows function over form. A recreational site used by a typically younger audience, such as Facebook.com, places much more emphasis on a cohesive, aesthetically pleasing interaction between itself and all of its members. Browsing styles and habits are not the only differences between adults and children; levels of trust differ as well. To many adults, the Internet is still a relatively new technology, which breeds a certain distrust of it. Children, by contrast, having grown up with the Internet, bring to it a certain naive expectation of trust, which can, unfortunately, be easily exploited. As Livingstone points out, "in the UK, NOP's Kids.net survey found that 29 percent of children using the internet would give out their home address and 14 percent their email address". This level of trust is startling in a day and age when such information can so easily be used in ways the user never desired or intended, even if that only means receiving more spam.

An idea that seems finally to be gaining some recognition in the world of computing and web design is that, indeed, "children are the future." They are the forerunners: they don't just spot upcoming trends, they create them. "Children themselves play a key role in establishing emerging internet-related practices" (Livingstone). Druin suggests that children can take on four roles in the design process: user, tester, informant, and design partner. The latter two roles, informant and design partner, are perhaps the most important of the four. While the former two give us as designers a framework to design around, the latter give us actual feedback on the interaction and design of the technology we are attempting to implement. Although informant and design partner may be the more difficult roles for both the adult and the child to fulfill, the information, designs, and implementations that result from them can be quite rewarding.

So what can we look forward to in the future? I would wager that much more technology and software will become "child-centric." As the current generation of children grows into positions of power in society, their norms of Internet and technology usage will become the norms, and we must prepare for this. Additionally, even amongst adult users, technology that is first introduced as children's technology, such as UI design within video games or movies, has a way of eventually being commercialized for an adult audience. This transition once again reinforces the link between what may start as technology intended for children and what becomes technology for everyone.


Sources:

Druin, A. (2002). The role of children in the design of new technology. Behaviour and Information Technology, 21(1) 1-25.

Livingstone, S. (2003). Children’s use of the internet: Reflections on the emerging research agenda. New Media & Society, 5(2), 147-166.

National Statistics Online (NSO). Usage of Internet. Retrieved from http://www.statistics.gov.uk/CCI/nugget.asp?ID=1711

Tuesday, November 4, 2008

White Space Vote and HCI

As I mentioned in class, the FCC just voted on the White Space issue.  This is important:

FCC Expands Use of Airwaves


Preachers on the pulpit, Guns N' Roses and others who fear their wireless microphones would be disrupted by widespread public access to certain unused airwaves were drowned out by high-tech titans Google and Microsoft in a federal ruling yesterday.
The Federal Communications Commission approved a plan that would allow those airwaves, called white spaces, to be used by gadgets such as cellphones and laptops connected to the Internet once that spectrum becomes available after the national transition from analog to digital television in February.

Monday, November 3, 2008

Ubiquitous Computing Still Hasn't Disappeared

Abowd and Mynatt's (2000) fascinating and prescient article, "Charting Past, Present, and Future Research in Ubiquitous Computing," offers insight not just for ubiquitous computing (ubicomp) researchers but for anyone with an interest in technology design for "everyday living," as they put it.  Their essay examines ubicomp work during the 90s and offers a series of useful guidelines for thinking about ubicomp in context as well as recommendations for future research. 

The remarkable diffusion of computing into our physical world represents more than just easily available technology; rather, "it suggests new paradigms of interaction inspired by constant access to information and computational capabilities" (28).  These new interactions are easy to spot, especially in the last few years as mobile platforms like the iPhone have changed the way many people relate to their data as well as how they act in social situations (e.g., taking calls when with a friend, texting while someone is trying to talk to you, or my favorite: the play-with-my-phone-to-avoid-the-pain-of-this-awkward-silence game). 

So, how far have we come since 2000?  Abowd and Mynatt argue that, "current systems focus their interaction on the identity of one particular user, rarely incorporating identity information about other people in the environment.  As human beings, we tailor our activities and recall events from the past based on the presence of other people" (37).  I agree with the statement, but times have changed.  We are now deeply involved in the context of others, though often in ways that can be construed as superficial.  Social networks like Twitter and Facebook allow us to broadcast our feelings and daily adventures.  However, is this what the authors had in mind?  Though we have made much "progress" in blending others into our digital lives, most of this information is focused on the present, and in the case of Twitter, it is the micro-present.  Our dominant, seemingly ubiquitous social networks are designed for the present.  Twitter, in particular, is designed to encourage micro-updates of 140 characters maximum. 

Much of our technology is designed for the now, but where is our past represented in the digital ecology?  I argue that we do have an astonishing digital past in the form of email and instant messaging archives.  (Text messages are not saved to a central server by default, so they seem, unfortunately, to live in a state of constant disappearance.)  Arguably, there has never been a time in history when the historical record has been more complete or rich.  People who would never write thousands of letters do write as many emails, and they are (hopefully) preserved in archives.  The question I have for ubicomp is this: How can we design ubiquitous devices and software to somehow harness the power of our rapidly growing personal archives?  In other words, how (or even should we) incorporate the past into our ubiquitous digital lives?  What could we learn about ourselves if we had a way (if we chose) to harness and easily visualize our past using ubicomp techniques?  The mobile phone seems to be the key.  What if you bought a pack of cigarettes and, instead of your phone showing a warning that smoking kills, it popped up with a quote from an email you received three months earlier from your wife: "Please don't smoke.  I don't want you to get sick."  This would incorporate our present environment, our past, and our future health, all using ubicomp.   

This particular scene can be usefully broken down using Abowd and Mynatt's five-point framework for thinking about context in ubicomp: 

  1. Who: “Current systems focus their interaction on the identity of one particular user, rarely incorporating identity information about other people in the environment" (37). My scenario brings the user and other people in the user's life into context.  Although it's not a "real-time" interaction with your wife, why does it have to be?
  2. What: “The interaction in current systems either assumes what the user is doing or leaves the question open” (37). With GPS-enabled phones and the emerging use of mobile phones as credit card devices, the device will not have to assume what you are doing.  It will know where you are and what you bought (of course, you could just pay cash).
  3. Where: “In many ways, the 'where' component of context has been explored more than the others" (37). Obviously, GPS finally solves the problem of “where.”  The key to understanding the importance of “where” depends upon how well our ubiquitous technology appears to us at critical moments and steps in to help.
  4. When: “...most context-driven applications are unaware of the passage of time” (37).  Linking context to time is crucial for developing truly aware applications.  In my scenario, the mobile device could use the time of day and GPS to send a warning before the user buys cigarettes.  Say, for example, it is two o'clock in the morning on a Friday night and the user enters a convenience store.  The device, based upon a baseline of past activity, might try to warn the user not to buy cigarettes.
  5. Why: “Even more challenging than perceiving 'what' a person is doing is understanding 'why' that person is doing it" (37).  Trying to ask “why” a person is doing something does not, to me, seem like a fruitful question for computers to ponder.  Instead, humans should ask these questions about themselves.  However, in my scenario, the computer simply prompts the user using the emotionally charged form of personal email to facilitate reflection about why they are doing what they are doing--right now.  This seems to me the best use of ubicomp and computing in general:  Rather than giving answers, computers should ask better questions and let humans do their own answering. 
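The who/what/where/when dimensions above can be sketched as a simple context record, with the cigarette-purchase scenario as a rule that matches the sensed activity against a personal archive. This is purely a hypothetical illustration; the `Context` class, the `reminder` function, and the archive format are all invented here, not anything from the paper.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Context:
    """One sensed observation along the who/what/where/when dimensions."""
    who: str        # identity of the user (and potentially others present)
    what: str       # the activity the system inferred, e.g. a purchase
    where: str      # location, e.g. derived from GPS
    when: datetime  # timestamp of the observation

def reminder(ctx: Context, archive: dict) -> Optional[str]:
    """Surface an archived personal message that matches the sensed activity.

    `archive` maps activity keywords to (sender, quote) pairs drawn from
    the user's email past -- the 'past' half of the scenario.
    """
    match = archive.get(ctx.what)
    if match is None:
        return None  # nothing in the archive speaks to this activity
    sender, quote = match
    return f'{sender} once wrote: "{quote}"'
```

The "why" dimension deliberately stays with the human: the device only retrieves and displays the quote; the reflection it prompts is the user's own.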
As the authors note, HCI tends to design for closure, but everyday computing recognizes that daily activities "rarely have a clear beginning or end" (43).  This is a critical observation.  Life ebbs and flows, and our technology ought to accommodate our human reality, not constrain us inside of a designer's assumption box.  

Mark Weiser (1991) wrote that, “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it” (1).  This is profound guidance for all designers, not just in ubicomp.  My sense is that today, too much of my technology is in my face, so to speak.  I want my technology to quietly fade in when called upon, and I want it to leave me alone unless I need it. 

The ironic challenge for ubicomp is not to make more stuff but to make more stuff disappear. 

References

Abowd, G. & Mynatt, E. (2000). Charting past, present, and future research in ubiquitous computing. ACM Transactions on Human-Computer Interaction, 7(1), 29-58.

Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3), 94-104.

Sunday, November 2, 2008

Ubiquitous Computing

Ubiquitous Computing: A Short Response Essay

The article "Charting past, present, and future research in ubiquitous computing" (Abowd & Mynatt, 2000), cited below as "(Chart, 1997)," provides an excellent framework for the topics discussed in the other papers. For this reason, this short response essay will be structured around Abowd and Mynatt's piece.


Technical Premises

The first topic to consider is the use of natural interfaces for computing interaction. This refers to employing more specialized and meaningful artifacts in greater numbers, rather than a few very general-purpose (and consequently less intuitive) interfaces such as the keyboard, mouse, and fixed-screen design underlying 'modern' computers (Chart, 1997). If interactions are carried out by artifacts that have a close coupling with "first-class natural data types," such as using a pen to simply mark on a pad, the system is more usable than if it tries to abstract the input away, for example by converting the handwriting into text (Chart, 1997). Abowd and Mynatt also note that recognition-based interaction is inherently error-prone. Since recognition tasks are necessary at times, this problem should be addressed in three stages. First, designers should strive to refine and improve interfaces wherever possible to reduce errors in the first place (Chart, 1997). Where these efforts are insufficient, the system should notify the user of the errors it discovers (detection can be fed by historical statistics, explicit rules, or confidence-threshold triggers) (Chart, 1997). Once the user is apprised of the error, there must be a reasonable error-recovery infrastructure through which they can produce the desired input (Chart, 1997).
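The three stages above can be sketched as a small wrapper around a recognizer: stage 1 lives inside the recognizer itself (better models and interfaces), stage 2 is a confidence check that surfaces likely errors, and stage 3 is a recovery path that lets the user supply the correct input. The function names, the callback shapes, and the threshold value are all invented for illustration.

```python
# Stage 2: below this confidence, notify the user rather than silently accept.
CONFIDENCE_THRESHOLD = 0.8

def recognize(strokes, recognizer, ask_user):
    """Return text for handwritten `strokes`, handling low-confidence results.

    `recognizer(strokes)` -> (text, confidence in [0, 1]); improving this
    function is stage 1 (reducing errors up front).
    `ask_user(candidate)` -> corrected text; this is stage 3, the
    error-recovery path, shown the candidate so correction is cheap.
    """
    text, confidence = recognizer(strokes)
    if confidence >= CONFIDENCE_THRESHOLD:
        return text  # confident enough to accept silently
    # Stages 2 + 3: flag the suspect result and let the user fix it.
    return ask_user(text)
```

In a real system `ask_user` would be an interface affordance (an n-best list, an inline correction widget) rather than a blocking prompt, but the control flow is the same.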

The second major topic is context-awareness. Context-awareness entails knowledge of who the user is, what the current interaction is, where it is taking place, the current time, and what other things are temporally proximal to the interaction (Chart, 1997). With these cues, an ideal system would be able to establish the most important (as well as most difficult) element of context: why a user is doing what they are doing (Chart, 1997).

Establishing context raises a new, non-trivial question: how does one uniformly represent context (Chart, 1997)? Abowd and Mynatt suggest that "context fusion" provides the right solution, drawing on disparate systems depending on the availability, reliability, and relevance of the constituents in each context (Chart, 1997).

Context is Key, by Coutaz, Crowley, Dobson, and Garlan (2005) (cited as "(Context, 2005)"), focuses entirely on the topic of context. The authors first point out that context is not a state; rather, it is entangled in processes (Context, 2005). Failing to regard changes in state can produce surprising and undesirable results, such as a moving person finding a printout spread across every printer he passed, because each was the nearest during the transmission of its respective pages (Context, 2005). This suggests that a holistic view of context is important (Context, 2005). In such a model, the printers could estimate where the person would be as the printout completed and route all pages to that printer (Context, 2005). A third concern is the potential for user models to deviate from system models (Context, 2005).

To address these issues, Coutaz et al. propose a "Conceptual Framework for Context-Aware Systems." The basis of the framework is a set of finite-state automata in which each state (i.e., each node) represents a context and each transition (i.e., each edge) corresponds to a shift in context. This FSA is driven by a system modeled on three levels of abstraction (Context, 2005). The lowest, the "sensing" hardware, feeds data to the next, the "perception layer," which in turn produces data for the top layer, the "situation and context identification layer" (Context, 2005). By drawing upon both the current state reported by the lower layers and history (as well as other systems), the model can produce a good, useful context in process (Context, 2005).
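The state-machine core of this framework can be sketched in a few lines: contexts are nodes, and events arriving from the perception layer trigger transitions between them. The class, the example contexts, and the event names below are invented for illustration (echoing the roaming-printout scenario); the paper's actual framework is richer, layering sensing and perception beneath this identification step.

```python
class ContextFSA:
    """Minimal finite-state model of context: nodes are contexts,
    edges are context shifts triggered by perceived events."""

    def __init__(self, initial, transitions):
        # `transitions` maps (current context, event) -> next context.
        self.context = initial
        self.transitions = transitions
        self.history = [initial]  # the model may also draw on history

    def observe(self, event):
        """Shift context if the event matches a known transition;
        unrecognized events leave the current context unchanged."""
        nxt = self.transitions.get((self.context, event))
        if nxt is not None:
            self.context = nxt
            self.history.append(nxt)
        return self.context
```

A system built on this could, for instance, hold a print job while the user is in the "walking" context and release it only on reaching "at_printer", avoiding the page-per-printer failure described above.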


Key Applications

Assuming this "context fusion" is properly constructed, one possible application is "augmented reality," in which real-time information is streamed to a user in response to the environment (Chart, 1997).

The natural extension of this ubiquitous computing application is automated recording of, capture of, and access to live experience (Chart, 1997). Omnipresent video input devices enable hassle-free recording whenever desired, and live exchange of these video feeds can power tele-brainstorming or formal idea exchanges (Chart, 1997). When the video is fed into a computer, it can power a real-time overlay in the user's view featuring labeling/status, reference diagrams, or even easy-to-follow situational instructions (Chart, 1997).


Further Considerations

Using computing like this every day raises special considerations. First, many daily uses lack a discernible beginning or end; communication in general is a life-long activity, and sub-activities should respect this nature, either by forgoing discrete-activity-oriented tasks or by minimizing cognitive load so as to avoid interfering with the user's flow of activity (Chart, 1997).

In Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms, Ishii and Ullmer (1997) (cited as "(Tangible, 1997)") devote considerable space to this problem of peripheral awareness. The authors explore couplings between digital states and reality, including a vibrating string that represents packets of network traffic; heated metal seats that represent the presence of another person near a microphone/speaker link; and a room equipped with light, water flow, and ambient sound as peripheral cues (Tangible, 1997).

For computing to accommodate universality, it must be able to cope with task interruption, which is common and probably unavoidable (Chart, 1997). This means at least saving the state of incomplete tasks for later resumption and reminding users of unfinished tasks when necessary (Chart, 1997). Relatedly, support for multiple concurrent activities is essential; users are variously conscious of elements of their environments, and a computer must leverage this to integrate well into a user's mental space (Chart, 1997).

When organizing information, it is important to provide associative models when users are not working on well-defined tasks (which integrate well with hierarchical models) (Chart, 1997). There was an interesting paper on namesys.com (before Hans Reiser went to jail) that discussed the importance of not imposing artificial structure on data, because doing so destroys accessibility: the user must learn a structure that is, for them, arbitrary in order to access the data, which is unreasonable. Unfortunately, the paper seems to have disappeared in recent months.

Ubiquitous computing raises a number of social questions. Ownership and control of information that must span numerous contexts, physical and digital, suggests serious conflicts between utility and privacy (Chart, 1997). Additionally, the likely preservation of any recordable action can have a chilling effect on freedom of speech and public participation (Chart, 1997).

A final consideration is how to evaluate ubiquitous systems. Designers must form a compelling user narrative of fulfilling a perceived or real need, both to justify the system and, more importantly, to serve as a metric by which the system's impact can be measured (Chart, 1997). Further, establishing an authentic context of use for an evaluation is exceedingly difficult given the cutting-edge nature of ubiquitous computing, which can confound testing (Chart, 1997). Because daily use has no finite span (as discussed above), task-oriented evaluation techniques are clearly not appropriate tools (Chart, 1997).


Personal Thoughts

Having quite embarrassed myself with my last short response's erroneous criticisms, I have decided to focus on giving an overview of the readings, followed by these very tempered questions:

The idea of peripheral context information seems interesting, but I would like to see some research on the psychological effects of exposure to such extra stimuli compared to being given a quiet, calm workspace.

Also, I wonder if there isn't an advantage to mastering generalized controls for efficiency over the more "natural" physical interfaces. For example, I can type far faster than I can write, and coupled with the shortcuts and functions of vim, I believe I can trounce a user who is tied to physical manipulations to interact with an editor. Perhaps an approach more like the one in Vernor Vinge's Rainbows End is closer to the ideal.

    Monday, October 27, 2008

    It’s All About Performance

    Short Response Essay by Lillian Spina-Caza

    Computer Technologies Designed for Performance

    Whether it be the performance of particular tasks using mainframes in the 1960s for “filling airline seats…or printing payroll checks” (Grudin, 19), or improving the performance of individuals through word processing or spreadsheet applications on minicomputers in the 1970s, computer technologies have always been designed with performance in mind. It is a given that for organizations to perform well, all of the systems, processes, and people supporting them need to perform well, thus making optimal performance a critical organizational goal. It wasn’t until the mid-1980s and the advent of computer-supported cooperative work (CSCW), however, that the meaning of performance shifted -- from performance as task or functionality -- to performance as social or group endeavor. Once dialogue between people became privileged over dialogue between systems, communication evolved as a critical component of cooperative work; thus it is no surprise that technologies such as the Internet, email, video and audio conferencing, and text tools like instant messaging (IM) and chat were quickly adopted for business applications.

    As Olson and Olson (2007) write in the Handbook on page 546, “groupware [as] software designed to run over a network in support of the activities of a group or organization for carrying out activities,” was originally made to provide greater geographic and temporal flexibility. It was also created with new modes of socializing in mind. Some of the new social communication technologies that emerged were successful (email, IM, and chat) while others – like video conferencing – were not as widely embraced (548). The reason some communication technologies are better received than others is, I would argue (following Olson and Olson), directly tied to performance. A/V problems associated with poor audio, poor video, camera placement, ambient noise interference, and delay issues have all resulted in poor acceptance of video conferencing as a technology. Ironically, as Olson and Olson point out, video does not produce the “being there” phenomenon that many had hoped for (553). The expense and effort of producing quality video currently outweigh the benefits of using this technology to establish social presence. Performance is critical to the successful adoption of any technology, new or old.

    Social Actors and their Props

    With the emergence of groupware applications in the mid-1980s, “the social, motivational, and political aspects of workplaces become crucial” (Grudin, 22); in other words, one might say the drama behind the scenes took center stage. Whereas the mainframe was once the only stage for all of the critical action, now it was individuals performing in concert with each other – using technology – who found themselves in the limelight. Grudin (1994) points out when the move to networked PCs and workstations became widespread, new markets opened up for groupware to support communication and coordination. This move resulted in a paradigm shift away from off the shelf, single-user products to computer support for groups, requiring developers to consider “group dynamics for the first time” (22). Instead of placing computers in leading roles (i.e., as “mainframes” or main characters), people interacting with computers came to be viewed as actors, the tools they work with taking on secondary or supporting roles. It is understandable why the “actors” at this new stage of technological development also became the central focus of new product design centered on activity practices.

    Christiansen (1996) suggests activity “is the term for the process through which a person creates meaning in her practice…” (177). Once activity became tied to meaning-making, it also became important to realize how tools as artifacts are situated or contextualized within activity to create meaning. As Christiansen explains, “You may observe and interview actors in a community of practice, but you will not come to understand why they use the artifacts the way they do until you have come to understand what kinds of activity are used in their practice” (177); this type of understanding is at the heart of participatory design. It is no wonder then, if people are viewed as “acting” with technology, that the performance metaphor stretches to include designers, who like the technologies they create, play key supporting roles. In addition to learning from “economists, social psychologists, anthropologists, organizational theorists, educators, and anyone else who shed light on group activity” as Grudin suggests, product developers also began following the performance metaphor along its natural trajectory, turning to drama and theater to gain new insights about human actors and the activities they engage in when using computer technologies.

    Performance Sets the Stage for Participatory Design

    It is not surprising that the performance metaphor shapes a “third space” or place where product developers can learn more about the practices associated with activity. As Muller explains in “Participatory Design,” third space experiences are hybrid experiences or “practices that challenge assumptions, are open to reciprocal learning, and facilitate polyvocal or many-voiced discussions across and through differences” (1062). Gaining insight into why people use artifacts the way they do requires, as Christiansen points out, a better understanding of the kinds of activity used in their practice – the use of drama and videos, the tools of performance, can aid in this type of exploration. Muller suggests a number of techniques borrowed from theatre that are valuable for bringing to light activity practices, including ideas from Boal’s Theatre of the Oppressed (1974, 1992) that help a group or community find its voice(s) and articulate its position(s) (Muller, 1071).

    Other helpful dramatic tools used for creating third space experiences to inform design practices include: “Forum theater” or a type of theater where non-professional actors perform skits with less than desirable outcomes in front of interested parties, and audience members become authors and directors who can alter skits to achieve desired outcomes (1071). “Tableau” is a technique where performers are told to freeze during play and are asked to describe what they are “doing, thinking, planning, and hoping” (1071). “Interface Theatre,” created by Muller et al. in 1994, has software professionals act out user interfaces in a large auditorium, using the theatrical stage as the screen where each actor plays the role of a concrete interface component (i.e., Kim the Cursor, Marty the Menubar, and so forth). Another participatory design practice adopting “performance” as a method for improving design is the “Situated and Participative Enactment of Scenario” that asks designers to take part in a “projective series of improvisations with 'the magic' thing in users’ homes and workplaces” (Muller, 1071). All of these techniques are performance-driven, much like the people and technologies they are used to describe. Performance, it can be claimed, is a metaphor that comes full circle in the realm of HCI.


    Sources:

    Christiansen, E. (1996). “Tamed by a Rose: Computers as Tools in Human Activity.” In Nardi, B. A. Context and Consciousness: Activity theory and human-computer interaction. (pp. 175-198). Cambridge: MIT Press.

    Grudin, J. “Computer-Supported Cooperative Work: History and focus.” IEEE. May 1994.

    Muller, M.J. (2007). “Participatory Design: The third space in HCI.” In Sears, A. & Jacko, J. (Eds.). The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, 2nd Edition. (pp. 1061-1082). Lawrence Erlbaum.

    Olson G. & Olson J. (2007). Groupware and Computer-Supported Cooperative Work. In Sears, A. & Jacko, J. (Eds.). The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, 2nd Edition. (pp. 545-558). Lawrence Erlbaum.

    CSCW: Computer-Supported Cooperative Work

    The topic of Computer-Supported Cooperative Work (CSCW) is derived from an earlier system of office automation. The notion that we can work cooperatively within groups has been around since the 1970s. We want to learn from others and to get and share ideas. Successful companies know that teamwork is imperative to their success, and that each team player has at least one thing to contribute. The study of the field's history and focus written by Jonathan Grudin, University of California, Irvine, shows us how approaches to CSCW vary by culture.

    I can understand how CSCW gains insight from anthropologists, educators, and others who study group activity. We subscribe to so many applications today: we use Facebook, MySpace, OpenOffice, Gmail documents, Skype, and many other sources of communication. These are not just single-user applications; though we use them individually, we use them to connect to a specific group.
    According to Grudin, group work was not supported by the technology created in the 1970s; how, then, did technology come to accommodate human behavior? Jay has created a short story for this topic and has given us the option to add to the story, to comment on it, or even not to participate. For this lesson in group work, I think it is great to have the opportunity to participate, so thanks, Jay, for this insightful chance. The story can be totally twisted, which is awesome, and I hope it goes in that direction.

    To focus only on the 'work' effort of CSCW is overly general. Since CSCW supports the small-group effort, it is relevant to most concentrations. Not only that, but many areas influence the development of such products. Information Systems people are familiar with the social dynamics of networked PCs, individual workstations, and the greater good of organizations. They can help create small groups of workstations by fostering a sense of community. When we share key goals and direction, we cut down on the friction of being too general.

    Grudin explains that in large IS environments, decades of experience serving the vast majority of users shed light on the non-technical problems. Is this really because the users bring this insight to attention? I can understand that within a small group of users, technological problems can become an issue, as the users may not have the experience to use the programs.

    Another difference in the CSCW area is how it varies by country. Grudin cites many differences between the way we approach CSCW here in the U.S. and the way European countries do. One of the differences lies in funding. In the United States, research and development are supported by and more interwoven with universities, so funding comes from more varied sources (independent research, private research grants, and endowments), whereas in Europe funding is more government-sponsored. Research in Europe also focuses on large-scale systems development. While I'm not exactly sure what that may mean, I would guess it means that they are progressing faster than we are here. In the U.S., it almost seems like we think backward: according to Grudin, many U.S. researchers build technology and then look for ways to use it. Wouldn't this be a waste of time and effort? I think many developers in HCI would like their ideas to be used on first introduction, but I would hope they would perform in-depth user studies to find the need first. Culture plays an integral part in this effort. In Europe, the many cultures play a part in the need for a group-work social dynamic. During conferences and social gatherings you can tell that the Europeans are professionals who would like to share their research, experience, and current results; at conferences, most attendees will present their work. Compare this with U.S. culture, where presenters address larger audiences, are more polished, and emphasize results.
    It is interesting that we share group work with people all over the world. Some of the correspondence seems effortless, while other exchanges are prevented by firewalls. Designers creating these applications have so many issues to think about; it is amazing that we communicate so much.

    Work cited:
    Grudin, Jonathan (University of California, Irvine). "Computer-Supported Cooperative Work: History and Focus," May 1994.

    Tuesday, October 21, 2008

    Jason Grigely
    VSD Conceptual Investigation


    Socio-Technical System
    The system or environment involving our problem space may be described as a health center, gym, or home; however, it can be extended to any area that contains a treadmill for the sole purpose of exercise. The existence of a treadmill within this space constitutes a problem by the very nature and behavior of the treadmill. Treadmill technology has not changed much in quite some time, and perhaps because of this, large communities of runners are converging on the same observation. Simply put, running outdoors is “easier” than running indoors on a treadmill.

    Values
    It is a common experience amongst runners that there is a great degree of variance between running outdoors and running indoors on a treadmill. While outdoor running is said to be of greater physical difficulty, running indoors on a treadmill is commonly accepted as being more of a psychological battle and less physically demanding. However, given how easily a treadmill can be adapted – adjustable incline, additional impact cushioning (reducing stress on ankle and knee joints), and the ability to easily control your pace and speed – perhaps we should think about bringing the perceived ease of running outdoors to the treadmill.

    As it is currently designed, repetitive and continued use of the treadmill requires a certain degree of willpower; the user must at times force themselves to use a treadmill, if only for convenience’s sake. This is in part due to “running ‘on the spot,’” which “can lead to an earlier onset of boredom and mental fatigue” (Anderson): the treadmill is stationary, the view never changes, and the user simply feels that he or she isn’t going anywhere. This adds an unnecessary level of stress to an activity that is meant not only to get or keep the user in shape, but also to function as a stress reliever. The increase in stress not only affects the user's motivation and attitude toward the treadmill, but can also have extended effects on the people around them, as the stress may manifest in other ways.

    Physiological Advantages
    A study done by Jennifer R. Abramczak et al. suggests that the physical act of running outdoors is more strenuous than running on a treadmill. This is substantiated by the mechanics of treadmill running: the belt quite literally moves beneath the runner’s feet, requiring less effort to propel oneself forward, in stark contrast to outdoor running, where runners must push themselves forward and lean forward a bit, which tends to put a greater strain on the back. Theoretically, the difference in strain should allow the user to run much greater distances on a treadmill; however, Abramczak et al. suggest that this may not be so. Even though runners in the study reached a higher heart rate outdoors, their Rating of Perceived Exertion (RPE) was almost always lower when running outdoors. This means that while running outdoors was shown to be more physically demanding, as runners reached a higher heart rate, they often felt less fatigue, or exhibited fewer signs of fatigue, than treadmill runners.

    Another advantage of running outdoors is airflow. While the drag force experienced outdoors does require more energy from the runner at higher speeds, the “absence of wind resistance on a treadmill leads to a significantly lower oxygen consumption compared to running outside” (Anderson). As you continue to exercise, your body requires more and more oxygen in your blood to prevent the buildup of lactic acid, which is why your muscles begin to ache after exercising for extended periods. Therefore, if we were able to bring the additional airflow (and perhaps even the drag force) of outdoor running to the treadmill, it would ultimately benefit the user.
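
    To put a rough number on that missing wind resistance, here is a back-of-the-envelope sketch of my own; the air density, drag coefficient, and frontal area are generic illustrative values, not figures from Anderson.

```python
def drag_power(v, rho=1.2, cd=0.9, area=0.5):
    """Aerodynamic drag power in watts: P = 1/2 * rho * Cd * A * v^3."""
    return 0.5 * rho * cd * area * v ** 3

# At roughly 4 m/s (about a 4:10 min/km pace), the outdoor runner supplies
# on the order of 17 W of extra work that the treadmill runner never does.
print(f"{drag_power(4.0):.1f} W")  # 17.3 W
```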

    What approach can we take to solve this problem?
    In order to solve the issues many face with the treadmill, we must augment and elevate its design. In doing so, we can combine the positive elements of both environments while sacrificing little, if anything, from the user’s workout. The first step would be a visual display that encompasses the runner’s surroundings and mimics outdoor terrain chosen by the user. These visuals would immerse the user in a virtual environment. Infrared sensors could be used to adjust the display height, as the size of potential treadmill users is likely to vary greatly. The environment would reflect perspective changes based on elements such as running speed/pace, as well as the slope/incline of the treadmill. Mimicking an outdoor environment would serve as a stress reducer and give the user something to focus on while running, keeping their concentration on the act of running rather than on distractions like music, television, or a fixed spot in the room.
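
    The perspective-update idea can be sketched in a few lines: advance a virtual camera each frame from the treadmill's belt speed and incline. All of the names and numbers below are my own illustration, not part of any real treadmill API.

```python
import math
from dataclasses import dataclass

@dataclass
class Camera:
    distance: float = 0.0  # horizontal meters traveled along the trail
    height: float = 0.0    # meters of virtual climb

def update(cam, belt_speed, incline_deg, dt):
    """Advance the camera by one frame of dt seconds."""
    angle = math.radians(incline_deg)
    cam.distance += belt_speed * math.cos(angle) * dt
    cam.height += belt_speed * math.sin(angle) * dt

cam = Camera()
for _ in range(60):  # one simulated second at 60 frames per second
    update(cam, belt_speed=3.0, incline_deg=5.0, dt=1 / 60)
print(round(cam.distance, 2), round(cam.height, 2))  # 2.99 0.26
```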

    In addition to the virtual display, we could implement a supplementary ventilation system that provides airflow mimicking an outdoor environment. It would also be possible to mimic wind conditions (for example, if you’re running north and there is a 5 mph easterly gust, the user would feel the effects of the crosswind), as well as the drag force encountered when running outdoors. Not only would these changes in airflow assist in recreating the outdoor running experience, they would also help increase the user's oxygen intake. This, in theory, would allow the user to maintain a reasonably higher heart rate for longer periods, as their body would be better able to combat the buildup of lactic acid.
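
    The fan-control arithmetic this example implies is simple vector math: the airflow the user should feel is the wind's velocity minus the runner's own. The conventions below (headings in degrees clockwise from north, wind given by the direction it blows from) are my assumptions for the sketch.

```python
import math

def apparent_wind(run_speed, run_heading_deg, wind_speed, wind_from_deg):
    """Return (headwind, crosswind) in m/s as felt by the runner.

    Positive headwind comes from dead ahead; positive crosswind
    comes from the runner's right.
    """
    h = math.radians(run_heading_deg)
    w = math.radians(wind_from_deg)
    # Velocity vectors in (east, north) coordinates.
    run = (run_speed * math.sin(h), run_speed * math.cos(h))
    wind = (-wind_speed * math.sin(w), -wind_speed * math.cos(w))
    # Air velocity relative to the runner.
    rel = (wind[0] - run[0], wind[1] - run[1])
    headwind = -(rel[0] * math.sin(h) + rel[1] * math.cos(h))
    crosswind = -(rel[0] * math.cos(h) - rel[1] * math.sin(h))
    return headwind, crosswind

# Running north at 3 m/s with a ~5 mph (2.2 m/s) gust from the east:
hw, cw = apparent_wind(3.0, 0.0, 2.2, 90.0)
print(round(hw, 1), round(cw, 1))  # 3.0 2.2
```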

    Preliminary Conclusions
    By solving the psychological issues involved with running on a treadmill, we open the door to greatly expanding the capacity for athletic training indoors. Most personal trainers and elite athletes agree that when training for an outdoor event (such as a 5K race or marathon), it is best to train outdoors; however, they also acknowledge that this is not always an available option for everyone. In addition, with these proposed changes it may be possible to increase the effectiveness of treadmill training from both a physiological and a psychological perspective, thereby surpassing outdoor training in terms of effectiveness.


    Sources

    “The Physiological Differences of Outdoor Trail Running Versus Indoor Treadmill Running.” Jennifer R. Abramczak, LeAnn M. Hayes, Christopher A. Johnson. University of Wisconsin-Eau Claire, Eau Claire, WI.
    Retrieved 10/19/2008.


    “The Debate.” George Anderson, Steve Barrett. FitPro Network.
    Retrieved 10/19/2008.

    Avatars for the wheelchair-bound: The value of inclusion in digital spaces

    In brief --

    Sociotechnical problem space: Any digital space that uses avatars that reflect the appearance of the user (specifically Yahoo! and Second Life here).

    Implicated value: Inclusion

    Direct stakeholders: Users with disabilities (specifically the wheelchair bound here)

    Indirect stakeholders: Able users, avatar artists, programmers



    Avatars and inclusion

    Avatars are the representation of the user within digital spaces, and can range from flat, non-animated pictures to pseudo-3D models that explore virtual worlds. In this essay, I'll be analyzing the potential effect that limitations in avatar creation might have on a user's self-image and sense of inclusion. To make a specific examination of the topic, I will reduce my scope to users who are wheelchair-bound and to the avatars available through Yahoo! Avatars and the virtual world Second Life. I want to be explicit in drawing the distinction that, though I'm discussing the disabled community, inclusion is an issue separate from accessibility. Accessibility describes a disabled user's ability to utilize technology as readily as non-disabled users; inclusion describes the equal accommodation of users who have disabilities, without pity or discrimination.

    Digital spaces allow for great malleability of identities because of the general lack of accountability. This has aspects both liberating--the ability to literally carve out a niche for one's self as a troll with a giant battle axe--and dangerous--middle-aged men posing as teen-age boys to lure unsuspecting minors into illegal sexual encounters. Even in cases where actual photographs of users are being used, for example on social networking sites, the user is able to manipulate or recontextualize the photo for the benefit of the image they are creating. Within the disabled community, there exists a range of approaches to creating online identities. Some choose to create fully-abled avatars for themselves, manufacturing an image that they were precluded from in real life. Others represent themselves more literally, choosing to bring the disabilities they face in real life to their avatars as well.

    I believe this choice should be left to the user, and do not wish to debate, as some have within the disabled community, the authenticity of either choice. Whether or not a disabled user chooses to represent themselves as such through an avatar, the digital communities that provide the means to create such avatars should allow for the possibility. To not do so sends the message that such users are unwelcome or unwanted in the digital community.

    Sociotechnological spaces -- two examples

    Yahoo! is an example of a site that utilizes avatars meant to allow physical representation of the user. Through a series of menus, a user can select different skin colors, facial features, hairstyles, clothing and accessories. The amount of choices, especially in the latter two categories, is expansive. Recently, for example, they offered both pro-McCain and pro-Obama t-shirts for avatars to wear.

    Despite the myriad fashion options, Yahoo! Avatars did not offer accessories for the disabled--crutches or wheelchairs--for the first three years of service, from 2004-2007. While able users could debate the unimportant choice of a plaid scarf versus a grey one, wheelchair-bound users were unable to represent a major part of their identity through their avatar. Today, wheelchair-bound users are still limited to three options per gender. For instance, males can choose between a suit-wearing avatar, a green-shirt-and-jeans-wearing avatar, or an avatar standing beside a wheelchair as though magically healed. Clothing options available to able users, such as the pro-candidate shirts mentioned above, are not available for an avatar seated in a wheelchair. In essence, while others are able to more fully represent themselves online, wheelchair-bound users are forced to choose between representing their disability or their personality.

    A line of wheelchairs in Second Life, via Second Edition

    Within the realm of Second Life, users are able to manufacture new accessories and appearances for their characters, and so users with disabilities are able to have greater control over aspects of their own inclusion. One successful group, Wheelies, serves as a positive example of such users representing their handicaps through avatars and building a sense of community. The group works out of a virtual nightclub, and distributes to new members a welcome package that includes a virtual wheelchair. The virtual dance floor at the Wheelies Nightclub finds avatars performing dance moves in their chairs alongside able-bodied avatars. Simon Stevens, the group's founder, has been featured in Newsweek and received an award from British Prime Minister Gordon Brown for his work in creating the group.

    Direct stakeholders -- users with disabilities

    People with disabilities often struggle to determine how their handicaps impact their identities. In addition, most go through life experiencing, at the least, curious stares, or worse, mockery and ridicule. In creating their online identities through avatars, if certain options are not available to them, the limitations may reinforce negative perceptions they have in the analogue world: disabilities are a repugnant aspect of appearance and identity, people with disabilities are disregarded by the able-bodied population, and that people with disabilities are unable to take part in any sort of community other than those made up of others like them.

    Through allowing users with disabilities to include those disabilities as part of their avatar identity, websites and other digital spaces enable users to create a positive self-image of themselves as they are, and give them a sense of inclusion in the digital community.

    Indirect stakeholders

    Obvious indirect stakeholders include the artists and programmers who create the avatars used by the disabled community. In Yahoo!'s case, these programmers work on the corporation's side and must create new options for users with disabilities. This will involve research, both in studying the designs of wheelchairs and other assistive devices, and in talking with the disabled community to discover what options are needed.

    The less obvious, but more important, group of indirect stakeholders, are other able-bodied users. In education, the concept of inclusion works both ways. As students with disabilities experience equal education opportunities, able students are able to interact with a class of people who give them insights on equality, diversity and non-discrimination. In the digital world, the wheelchair is only a visual representation of a user's identity, and in a sense, the physical limitations of the wheelchair-bound user are negated. Typical users may find it easier to approach those with physical disabilities within the virtual world, whereas in real life, they may experience discomfort or anxiousness when confronted with a person in a wheelchair. In these digitally-mediated spaces, typical users may find it easier to adopt the previously mentioned insights on equality, diversity and non-discrimination. Those views might then generalize to real-life interactions with people who have disabilities.

    Designing Transparency Tools for PC Platforms

    By Lillian Spina-Caza

    I. Brief overview of sociotechnical problem space

    According to the Computer Industry Almanac, as of September, 2007, PCs per capita in the United States “topped 80 percent in 2006 and will reach 98 percent in 2012” (c-i-a, ¶2). An estimated one billion personal computers are currently in use worldwide, making the number of children who have access to computers in the home at an all time high. Even children who do not live in households with computers can still encounter PCs in the homes of friends or relatives, schools, public libraries, and other community settings.

    Initiatives such as One Laptop Per Child (OLPC) have made a commitment to placing computing technology in the hands of children worldwide at very little cost. A number of other inexpensive computers have been developed in direct competition with OLPC’s XO-1 and XO-2 models for global markets, thus making computers more accessible now than ever before. For this reason alone – the sheer number of children who are and/or will soon be exposed to computing technologies – it is imperative that transparency be a critical value in technology design whenever children are involved.

    The sociotechnical problem space addressed here is PC use by young people or novice users who are not yet able to articulate the nuances of information technologies, and who do not, in most instances, have a clear understanding of how PCs and software systems operate; through no fault of their own they simply do not speak the language of programmers or designers. The problem space is also one where children (and oftentimes their parents and teachers) place a good deal of trust in PC technologies because they are, for the most part, fairly simple to use and/or easily learned, and provide a source for information and entertainment. But despite ease of use, very few children and adult caregivers comprehend, except at the most basic level, how these systems work.

    One might argue that most of us do not need or want to slide under our cars to understand how a suspension system works; all we need or want to know is that the car rides smoothly over bumps in the road. However, it is one thing to put our faith in a car, and quite another to place blind trust in information technologies imbued with functions and values determined by system developers and software designers in a profit-motivated industry. The problem space identified here is one with far-reaching consequences, which makes it essential that we be able to understand the tools and capabilities that PCs and associated software afford us: to take them apart, examine them, and become consciously aware of them.

    II. The value of transparency in interactions with personal computers

    If we use PC software products without thinking very deeply about how these things work, then to what extent will they determine, limit or extend our capacity to learn? Transparency – as a value, and as it is being described here – is at the core of other human values such as empowerment, autonomy, creativity, self-esteem, and learning. Opacity, on the other hand, is antithetical to all of these, especially to learning. Transparency as it relates to learning will be the focal point of this conceptual investigation. Transparency marks vivid differences between open and closed source software: the right to know what it is we are doing (and conversely what is being done for us), how it is done, and most importantly, how to exert control and do it ourselves if we so choose. “Proprietary software keeps users divided and helpless. Its functioning is secret, so it is incompatible with the spirit of learning. Teaching children to use a proprietary (non-free) system [sic] puts them under the power of the system's developer – perhaps permanently” (Stallman, ¶7).

    What I am proposing here is an interface that would eliminate secrecy, which I am calling a “Zoominable,” designed to help us understand and analyze systems. It would, in essence, answer questions such as “How does this work?” and “What can I do with it?” in ways that enable a more in-depth understanding of underlying design protocols. A Zoominable is not a tutorial, because it will not be designed to teach a young person or novice user how to perform a task using a PC. It will be designed to act more like a magnifying device that permits young people (or a parent and child together) to zoom in on how something works, to try it on for size, take it apart, put it back together, or experiment with it. A Zoominable will be a virtual tour guide through the “foreign language” of code, making visible and learnable what is usually hidden or not easily viewed in software.
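
    As a thought experiment, here is a tiny mock-up of how a Zoominable prompt might behave. Every feature name, level, and explanation below is invented for illustration; no such tool exists.

```python
# Levels of "code-cracking complexity" the child can choose between.
LEVELS = ("beginner", "intermediate", "expert")

# Hypothetical explanations for one feature, at increasing depth.
EXPLANATIONS = {
    "spell-check": {
        "beginner": "The program compares each word against a word list.",
        "intermediate": "Words are looked up in a dictionary structure; "
                        "near misses are ranked by edit distance.",
        "expert": "Jump into the checker's source: dictionary lookup "
                  "plus a Levenshtein-distance suggester.",
    },
}

def zoominable(feature, level):
    """Answer 'How does this work?' at the depth the child chooses."""
    if level not in LEVELS:
        raise ValueError(f"level must be one of {LEVELS}")
    return EXPLANATIONS[feature][level]

print(zoominable("spell-check", "beginner"))
```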

    Transparency in and of itself is not a new idea. Tanimoto (2005) writes, “Transparency is a valued attribute in software use because it demonstrates how things work, which in turn creates trust, allows for error detection, and promotes learning about how software systems work” (2). What I am suggesting here is that transparency in PC environments should be paramount in interaction design where a) products intended for adults will also be used by young people, and b) new products are designed specifically with children in mind.

    Transparency assumes a constructivist or active approach to learning versus a passive, consumptive approach. It is a form of transparency that OLPC originally envisioned with its laptop initiative: “While we do not expect every child to become a programmer, we do not want any ceiling imposed on those children who choose to modify their machines. We are using open-document formats for much the same reason: transparency is empowering. The children—and their teachers—will have the freedom to reshape, reinvent, and reapply their software, hardware, and content” (laptop.org, ¶1).

    Dr. Steven Tanimoto, who values transparency as an attribute, also argues it “can beget a desire to control,” viewing this as a potential weakness (Tanimoto, p. 4). He explains,

    Exposing the intricacies of complex software systems to users can overwhelm or confuse them. Revealing a system's decision-making rules may invite users to game the system and lead them away from the main goals of their interaction. Facilities that interpret or explain the system may also rob users' attention that would otherwise be invested in achieving their primary goal. Transparency mechanisms may therefore need to be flexible and adaptable both to accommodate different users and to accommodate user growth (Tanimoto, ¶1).

    Though Tanimoto’s concerns are valid and must be taken into account when designing for transparency, the desire to control need not be thought of as a weakness if it occurs in a proactive way. Control that emerges from learning the language of code could be viewed as empowering. A Zoominable would be designed to afford such control: adaptable for different users, and designed to accommodate user growth and experience. It would be non-intrusive and accessible at the start of a function. For example, the Zoominable icon or a pop-up might appear asking, “Do you want to know how this works?”, offering levels of code-cracking complexity from beginner to expert. It would then be at the discretion of a young person (working alone or with a parent and/or teacher) to choose to learn more about a particular piece of software or operation, and to select the depth and breadth of what is revealed. The Zoominable would be designed with the goal of making PC technologies transparent, thus empowering children to play around with computer code and see what they can do with it without causing irreparable damage to systems or software.
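    To make the interaction concrete, here is a minimal sketch of how such a layer might be modeled. All names here are illustrative assumptions, not an existing API: each software feature registers explanations of increasing depth, the user opts in, and the layer itself is read-only, so experimenting with it cannot harm the underlying software.

```python
# Hypothetical sketch of a "Zoominable" transparency layer.
# Class and method names are invented for illustration only.

LEVELS = ("beginner", "intermediate", "expert")

class Zoominable:
    def __init__(self):
        # Maps a feature name to its explanations, keyed by depth level.
        self._layers = {}

    def register(self, feature, beginner, intermediate, expert):
        """Attach explanations of increasing depth to a feature."""
        self._layers[feature] = {
            "beginner": beginner,
            "intermediate": intermediate,
            "expert": expert,
        }

    def zoom(self, feature, level="beginner"):
        """Answer 'How does this work?' at the requested depth."""
        if level not in LEVELS:
            raise ValueError(f"level must be one of {LEVELS}")
        layers = self._layers.get(feature)
        if layers is None:
            return f"No transparency layer registered for {feature!r}."
        return layers[level]

# Example: a child zooms in on how "Save" works, then goes deeper.
z = Zoominable()
z.register(
    "save",
    beginner="Your work is copied from memory onto the disk so it isn't lost.",
    intermediate="The program serializes the document and writes it to a file.",
    expert="open(path, 'wb') is called and the byte stream is flushed to disk.",
)
print(z.zoom("save"))                  # beginner view
print(z.zoom("save", level="expert"))  # full depth
```

    The key design choice, in the spirit of the essay, is that depth is chosen by the user rather than imposed: the same feature carries every level of explanation, and nothing is revealed until the user asks.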

    The ability to understand and, therefore, control our experiences with technologies is important if we are not to be merely passive consumers of those technologies. Placing a greater value on the kind of transparency that leads to fluency in the language of code can enhance and encourage critical interaction between humans and computers. That transparency should, however, be flexible enough that it does not inhibit or interfere with freedom of use, and designed so that it can be turned on and off at the user's discretion.

    III. Direct and indirect stakeholders

    While it could be argued that anyone who designs or uses PC technologies has a stake in whether or not transparency is built into software or interface design, some will benefit more directly than others, some will benefit indirectly, and some may not view transparency as beneficial to the current business model for producing software. All three types of potential stakeholders are addressed below.

    Direct stakeholders who stand to gain the most from a Zoominable interface include young people, parents, and educators. Children benefit by having a tool that allows them to unpack unfamiliar knowledge, learn how things work, and learn how to make them work themselves – to actively participate in the learning process. Parents are also direct stakeholders, as they, too, can learn more about the technologies they are allowing or encouraging their children to use. Educational administrators benefit by being in a better position to evaluate educational software for use in schools. With real transparency, teachers given mandates to infuse technology into curricula are better able to understand how it works and to demonstrate it to students. While it is unlikely that all stakeholders will be interested in getting to the bottom of how systems and software work, it is important that the option to do so remains open to them.

    Indirect stakeholders who might benefit from transparency include government agencies, corporate policymakers, and small business operators who are considering new systems and would like to make more informed comparisons between what they have and what they might purchase. Employees who work for these agencies and businesses may also benefit by being able to customize their experiences with technologies.

    Finally, the direct stakeholders benefiting least from the introduction of a Zoominable-type interface are the corporate entities and individuals that create and market software for PCs and stand to lose proprietary information if code is open to anyone who wants to see it. This problem, however, will continue to plague the market whether or not transparency is built into system design. Open source software is not going away any time soon, and new and innovative ways to develop and sell products will evolve as things continue to shake out. One way to work around this problem might be to set time limits on proprietary software, so that it can be opened up and made transparent only after a certain period on the market. Another workaround might be for software companies to offer “for fee” services that support the use of open-code software. The concerns of those who stand to lose the most will need to be addressed if transparency is to become an integral value in the design of PC technologies targeted at or used by young people.


    Sources:

    Blankenhorn, D. (2005). “Open Source Transparency.” Corante. http://mooreslore.corante.com/archives/2005/04/19/open_source_transparency.php

    Computer Industry Almanac, Inc. (2007). http://www.c-i-a.com/pr0907.htm

    One Laptop Per Child. (2008). http://laptop.org/en/laptop/software/

    Stallman, R. (2008). “Can we rescue OLPC from Windows?” Free Software Foundation. http://www.fsf.org/blogs/rms/can-we-rescue-olpc-from-windows

    Tanimoto, S. (2005). Proceedings of the International Workshop on Learner Modelling for Reflection, to Support Learner Control, Metacognition and Improved Communication between Teachers and Learners, in conjunction with AIED2005, Amsterdam, July 2005. pp. 2–4. http://www.cs.washington.edu/ole/111tanimoto.pdf

    Tanimoto, S. (2005). “Transparent Interfaces to Complex Software: Helping Users Understand Their Tools.” p. 4. http://viscomp.utdallas.edu/vlhcc05/speakers.htm