Tuesday, November 4, 2008

White Space Vote and HCI

As I mentioned in class, the FCC just voted on the White Space issue.  This is important:

FCC Expands Use of Airwaves

Preachers on the pulpit, Guns N' Roses and others who fear their wireless microphones would be disrupted by widespread public access to certain unused airwaves were drowned out by high-tech titans Google and Microsoft in a federal ruling yesterday.
The Federal Communications Commission approved a plan that would allow those airwaves, called white spaces, to be used by gadgets such as cellphones and laptops connected to the Internet once that spectrum becomes available after the national transition from analog to digital television in February.

Monday, November 3, 2008

Ubiquitous Computing Still Hasn't Disappeared

Abowd and Mynatt's (2000) fascinating and prescient article, "Charting Past, Present, and Future Research in Ubiquitous Computing," offers insight not just for ubiquitous computing (ubicomp) researchers but for anyone with an interest in technology design for "everyday living," as they put it.  Their essay examines ubicomp work during the 1990s and offers a series of useful guidelines for thinking about ubicomp in context, as well as recommendations for future research.

The remarkable diffusion of computing into our physical world represents more than just easily available technology; rather, "it suggests new paradigms of interaction inspired by constant access to information and computational capabilities" (28).  These new interactions are easy to spot, especially in the last few years as mobile platforms like the iPhone have changed the way many people relate to their data as well as how they act in social situations (e.g., taking calls while with a friend, texting while someone is trying to talk to you, or my favorite: the play-with-my-phone-to-avoid-the-pain-of-this-awkward-silence game).

So, how far have we come since 2000?  Abowd and Mynatt argue that "current systems focus their interaction on the identity of one particular user, rarely incorporating identity information about other people in the environment.  As human beings, we tailor our activities and recall events from the past based on the presence of other people" (37).  I agree with the statement, but times have changed.  We are now deeply involved in the context of others, though that involvement is often superficial.  Social networks like Twitter and Facebook allow us to broadcast our feelings and daily adventures.  However, is this what the authors had in mind?  Though we have made much "progress" in blending others into our digital lives, most of this information is focused on the present, and in the case of Twitter, the micro-present.  Our dominant, seemingly ubiquitous social networks are designed for the present.  Twitter, in particular, encourages micro-updates of 140 characters at most.

Much of our technology is designed for the now, but where is our past represented in the digital ecology?  I argue that we do have an astonishing digital past in the form of email and instant messaging archives.  (Text messages are not saved to a central server by default, so they seem, unfortunately, to live in a state of constant disappearance.)  Arguably, there has never been a time when the historical record has been more complete or rich.  People who would never write thousands of letters do write that many emails, and those emails are (hopefully) preserved in archives.  The question I have for ubicomp is this: How can we design ubiquitous devices and software to harness the power of our rapidly growing personal archives?  In other words, how (or even should we) incorporate the past into our ubiquitous digital lives?  What could we learn about ourselves if we had a way, should we choose, to harness and easily visualize our past using ubicomp techniques?  The mobile phone seems to be the key.  What if you bought a pack of cigarettes and, instead of your phone showing a warning that smoking kills, it popped up with a quote from an email you received three months earlier from your wife: "Please don't smoke.  I don't want you to get sick."  This would incorporate our present environment, our past, and our future health, all using ubicomp.  (I sketch a toy version of this rule after the list below.)

This particular scene can be usefully broken down using Abowd and Mynatt's five-point framework for thinking about context in ubicomp: 

  1. Who: "Current systems focus their interaction on the identity of one particular user, rarely incorporating identity information about other people in the environment" (37). My scenario brings the user and other people in the user's life into context.  Although it's not a "real-time" interaction with your wife, why does it have to be?
  2. What: "The interaction in current systems either assumes what the user is doing or leaves the question open" (37). With GPS-enabled phones and the emerging use of mobile phones as credit card devices, the device will not have to assume what you are doing.  It will know where you are and what you bought (of course, you could just pay cash).
  3. Where: "In many ways, the 'where' component of context has been explored more than the others" (37). GPS goes a long way toward solving the problem of "where."  What matters, though, is how well our ubiquitous technology surfaces at critical moments and steps in to help.
  4. When: "...most context-driven applications are unaware of the passage of time" (37). Linking context to time is crucial for developing truly aware applications.  In my scenario, the mobile device could use the time of day and GPS to send a warning before the user buys cigarettes.  Say, for example, it is two o'clock on a Saturday morning and the user walks into a convenience store.  The device, drawing on a baseline of past activity, might warn the user not to buy cigarettes.
  5. Why: "Even more challenging than perceiving 'what' a person is doing is understanding 'why' that person is doing it" (37). Asking "why" a person is doing something does not, to me, seem like a fruitful question for computers to ponder.  Instead, humans should ask these questions about themselves.  In my scenario, however, the computer simply prompts the user, using the emotionally charged form of personal email, to reflect on why they are doing what they are doing--right now.  This seems to me the best use of ubicomp and computing in general: rather than giving answers, computers should ask better questions and let humans do their own answering.
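
To make the scenario concrete, here is a minimal sketch in Python of the kind of rule it implies.  Everything here is invented for illustration--the purchase hook, the archive format, the keyword matching--no phone platform I know of exposes an interface like this.

from datetime import datetime

# A tiny stand-in for a personal email archive.
email_archive = [
    {"sender": "wife", "date": datetime(2008, 8, 1),
     "body": "Please don't smoke.  I don't want you to get sick."},
]

def on_purchase(item, when):
    # Called (hypothetically) whenever the phone learns of a purchase,
    # e.g. through a phone-as-credit-card transaction.
    if "cigarettes" not in item:
        return None
    # Surface a relevant message from the past instead of a generic warning.
    for msg in email_archive:
        if "smoke" in msg["body"]:
            return "From {} ({:%b %d}): {}".format(
                msg["sender"], msg["date"], msg["body"])
    return None

print(on_purchase("pack of cigarettes", datetime(2008, 11, 3, 2, 0)))

A real version would obviously need the user's consent and far smarter matching than a keyword search, but the point stands: the archive, not the designer, supplies the message.
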
As the authors note, HCI tends to design for closure, but everyday computing recognizes that daily activities "rarely have a clear beginning or end" (43).  This is a critical observation.  Life ebbs and flows, and our technology ought to accommodate our human reality, not constrain us inside a designer's assumption box.

Mark Weiser (1991) wrote that, "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it" (1).  This is profound guidance for all designers, not just those in ubicomp.  My sense is that today too much of my technology is in my face, so to speak.  I want my technology to quietly fade in when called upon, and I want it to leave me alone unless I need it.

The ironic challenge for ubicomp is not to make more stuff but to make more stuff disappear.

References

Abowd, G. D., & Mynatt, E. D. (2000). Charting past, present, and future research in ubiquitous computing. ACM Transactions on Computer-Human Interaction, 7(1), 29-58.

Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3), 94-104.

Sunday, November 2, 2008

Ubiquitous Computing

Ubiquitous Computing: A Short Response Essay

The article "Charting Past, Present, and Future Research in Ubiquitous Computing" (Abowd & Mynatt, 2000; cited below as "(Chart, 2000)") from this week's readings provides an excellent framework for the topics discussed in the other papers. For this reason, this short response essay is structured around Abowd and Mynatt's piece.


Technical Premises

The first topic to consider is the use of natural interfaces for computing interaction. This refers to employing more specialized and meaningful artifacts in greater numbers, rather than a few very general-purpose (and consequently less intuitive) interfaces such as the keyboard, mouse, and fixed screen underlying "modern" computers (Chart, 2000). If interactions are carried out by artifacts that have a close coupling with "first-class natural data types," such as a pen used simply to mark on a pad, the system is more usable than if it tries to abstract the input away, for example by converting the handwriting into text (Chart, 2000). Abowd and Mynatt also note that recognition-based interaction is inherently error-prone. Since recognition tasks are necessary at times, this problem should be addressed in three stages. First, designers should strive to refine and improve interfaces wherever possible to reduce errors in the first place (Chart, 2000). Where these efforts are insufficient, the system should notify the user of the errors it discovers (discoveries that can be fed by historical statistics, explicit rules, or confidence-threshold triggers) (Chart, 2000). Once the user is apprised of the error, there must be a reasonable error-recovery infrastructure through which they can produce the desired input (Chart, 2000).
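
Abowd and Mynatt offer no pseudocode for these stages, but a toy Python version might look like the sketch below; the recognizer, the threshold value, and the prompts are all my own assumptions, not anything from the paper.

CONFIDENCE_THRESHOLD = 0.8  # below this, the system suspects its own output

def recognize(ink):
    # Stand-in for a real handwriting recognizer: returns ranked
    # (candidate, confidence) pairs for the given pen strokes.
    return [("hello", 0.62), ("hollo", 0.21), ("hells", 0.09)]

def interpret(ink):
    candidates = recognize(ink)
    best, confidence = candidates[0]
    if confidence >= CONFIDENCE_THRESHOLD:
        return best  # stage 1: recognition is trusted, accept silently
    # Stage 2: notify the user of a suspected error (the trigger here is a
    # confidence threshold; historical statistics or explicit rules also work).
    print("Did you write '{}'? (confidence {:.0%})".format(best, confidence))
    # Stage 3: error recovery -- offer the ranked alternatives for correction.
    for i, (alt, _) in enumerate(candidates[1:], start=1):
        print("  {}. {}".format(i, alt))
    return best

interpret("<pen strokes>")
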

The second major topic is context-awareness. Context-awareness entails knowledge of who the user is, what the current interaction is, where it is taking place, the current time, and what other things are temporally proximal to the interaction (Chart, 2000). With these cues, an ideal system would be able to establish the most important (as well as most difficult) element of context: why a user is doing what they are doing (Chart, 2000).
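
The five cues fall naturally into a record like the Python sketch below; the field types are my own guesses at reasonable representations, not anything the paper prescribes.

from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional, Tuple

@dataclass
class Context:
    who: List[str]                # identities present, not just the primary user
    what: str                     # the activity being performed
    where: Tuple[float, float]    # e.g. latitude/longitude
    when: datetime                # time of day, and by extension elapsed time
    why: Optional[str] = None     # the hardest cue; often simply unknowable

ctx = Context(who=["alice", "bob"], what="printing",
              where=(33.77, -84.40), when=datetime.now())
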

Establishing context raises a new, non-trivial question: how does one uniformly represent context (Chart, 2000)? Abowd and Mynatt suggest that "context fusion" provides the right solution, drawing on disparate systems depending on the availability, reliability, and relevance of the constituents in each context (Chart, 2000).
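
The paper discusses context fusion at the architectural level only, so the toy Python below is just my reading of the idea, with made-up sources and confidence scores.

def fuse(reports):
    # reports: (value, confidence, source) triples from disparate systems.
    # Keep only sources that are currently available, then trust the most
    # reliable one. Real fusion would also weigh relevance to the task.
    available = [r for r in reports if r[1] > 0.0]
    return max(available, key=lambda r: r[1]) if available else None

fuse([("office_31", 0.9, "wifi_triangulation"),
      ("hallway", 0.4, "badge_reader")])
# -> ("office_31", 0.9, "wifi_triangulation")
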

"Context is Key" by Coutaz, Crowley, Dobson, and Garlan (2005) (cited below as "(Context, 2005)") focuses entirely on the topic of context. The authors first point out that context is not a state; rather, it is entangled in processes (Context, 2005). Failing to account for changes in state can produce surprising and undesirable results, such as a moving person finding a printout spread across every printer he passed, because each was the nearest at the moment its pages were transmitted (Context, 2005). This suggests that a holistic view of context is important (Context, 2005). Under such a model, the printers could estimate where the person would be when the printout completed and route all pages to that printer (Context, 2005). A third concern is the potential for the user's mental model to deviate from the system's model (Context, 2005).

To address these issues, Coutaz et al. propose a "Conceptual Framework for Context-Aware Systems." The basis of the framework is a set of finite-state automata in which each state (i.e., each node) represents a context and each transition (i.e., each edge) corresponds to a shift in context. This FSA is driven by a system modeled on three levels of abstraction (Context, 2005). The lowest, the "sensing" hardware, feeds data to the next, the "perception layer," which in turn produces data for the top layer, the "situation and context identification layer" (Context, 2005). By drawing upon both the current state reported by the lower layers and history (as well as other systems), the model can produce a good, useful context in process (Context, 2005).
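
As I read it, the automaton itself is the easy part; the hard work lives in the layers that feed it. Here is a minimal Python sketch with invented states, reusing the printer scenario from above.

class ContextFSA:
    def __init__(self, initial, transitions):
        self.state = initial            # the current context (a node)
        self.transitions = transitions  # (context, situation) -> new context

    def observe(self, situation):
        # A situation reported by the perception/identification layers
        # either shifts the context (follows an edge) or leaves it unchanged.
        self.state = self.transitions.get((self.state, situation), self.state)
        return self.state

fsa = ContextFSA(
    initial="working_at_desk",
    transitions={
        ("working_at_desk", "user_stands_up"): "leaving_office",
        ("leaving_office", "enters_hallway"): "walking_down_hall",
        ("walking_down_hall", "arrives_at_printer"): "collecting_printout",
    },
)
fsa.observe("user_stands_up")  # -> "leaving_office"
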


Key Applications

Assuming this "context fusion" is properly constructed, one possible application is "augmented reality," in which real-time information is streamed to the user in response to the environment (Chart, 2000).

A natural extension of this application is the automated capture of, and access to, live experiences (Chart, 2000). Omnipresent video input devices enable hassle-free recording whenever desired, and live exchange of these video feeds can power tele-brainstorming or formal idea exchanges (Chart, 2000). Fed into a computer, the same video can power a real-time overlay in the user's view featuring labels and status, reference diagrams, or even easy-to-follow situational instructions (Chart, 2000).


Further Considerations

When computing is used like this every day, there are special considerations to note. First, many daily uses lack a discernible beginning or end; communication, in general, is a life-long activity, and sub-activities should respect this nature, either by forgoing discrete-activity-oriented tasks or by minimizing cognitive load so as to avoid interfering with the user's flow of activity (Chart, 2000).

In "Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms" by Ishii and Ullmer (1997) (cited below as "(Tangible, 1997)"), the authors devote considerable space to this problem of peripheral awareness. They explore couplings between digital states and reality, including a vibrating string that represents packets of network traffic; heated metal seats that represent the presence of another person near a microphone/speaker link; and a room equipped with light, water flow, and ambient sound as peripheral cues (Tangible, 1997).

For computing to accommodate universality, it must be able to cope with task interruption, which is common and probably unavoidable (Chart, 2000). This means at least saving the states of incomplete tasks for later resumption and reminding users of unfinished tasks when necessary (Chart, 2000). Relatedly, support for multiple concurrent activities is essential; users are variously conscious of elements of their environments, and a computer must leverage this to integrate well into a user's mental space (Chart, 2000).
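
The paper states the requirement without an implementation; a bare-bones "save state, resume later, remind if stale" might look like the Python sketch below (the file format and names are mine).

import json
import time

class TaskLedger:
    # Persists incomplete tasks so they survive interruption.
    def __init__(self, path="tasks.json"):
        self.path = path
        try:
            with open(path) as f:
                self.tasks = json.load(f)
        except FileNotFoundError:
            self.tasks = {}

    def suspend(self, task_id, state):
        self.tasks[task_id] = {"state": state, "suspended_at": time.time()}
        self._flush()

    def resume(self, task_id):
        entry = self.tasks.pop(task_id, None)
        self._flush()
        return entry["state"] if entry else None

    def stale(self, older_than_s=86400):
        # Tasks suspended long enough to warrant reminding the user.
        now = time.time()
        return [tid for tid, e in self.tasks.items()
                if now - e["suspended_at"] > older_than_s]

    def _flush(self):
        with open(self.path, "w") as f:
            json.dump(self.tasks, f)
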

When organizing information, it is important to provide associative models for users who are not working on well-defined tasks (which lend themselves to hierarchical models) (Chart, 2000). There was an interesting paper on namesys.com (before Hans Reiser went to jail) that discussed the importance of not imposing artificial structure on data, because doing so destroys accessibility: the user must learn a structure that is arbitrary (to them) in order to access the data, which is unreasonable. Unfortunately, the paper seems to have disappeared in recent months.
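
To illustrate the contrast (my own toy example in Python, not from either paper): a hierarchy gives each item exactly one path, while an associative index lets any remembered cue retrieve it.

hierarchy = {"projects": {"ubicomp": ["notes.txt"]}}  # one rigid path per item

associative = {}  # many cues per item, no single "correct" location

def tag(item, *cues):
    for cue in cues:
        associative.setdefault(cue, set()).add(item)

tag("notes.txt", "ubicomp", "class", "november")
associative["november"]  # -> {"notes.txt"}
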

Ubiquitous computing raises a number of social questions. Ownership and control of information that must span numerous contexts, physical and digital, suggests serious conflicts between utility and privacy (Chart, 2000). Additionally, the likely preservation of any recordable action can have a chilling effect on freedom of speech and public participation (Chart, 2000).

A final consideration is how to evaluate ubiquitous systems. Designers must form a compelling user narrative of a perceived or real need, both to justify the system and, more importantly, to serve as a metric by which the system's impact can be measured (Chart, 2000). Further, establishing an authentic context of use for an evaluation is exceedingly difficult given the cutting-edge nature of ubiquitous computing, which can confound testing (Chart, 2000). Because daily use has no finite span (as discussed above), task-oriented evaluation techniques are clearly not the appropriate tools (Chart, 2000).


Personal Thoughts

Having quite embarrassed myself with the erroneous criticisms in my last short response, I have decided to stick to an overview of the readings, followed by these very tempered questions:

The idea of peripheral context information seems interesting, but I would like to see research on the psychological effects of exposure to that kind of extra stimulus, compared with working in a quiet, calm workspace.

    Also, I wonder if there isn't an advantage to mastering generalized controls for efficiency over the more "natural" physical interfaces. For example, I can type far faster than I can write, and coupled with the shortcuts/functions of vim I believe I can trounce a user who is tied to physical manipulations to interact with an editor. Perhaps an approach more like what's in Vernor Vinge's Rainbows End is closer to the ideal.