Wednesday, November 11, 2009

Ubiquitous Computing

Ubiquitous computing has changed the way we interact with computers as they become an integral part of how we negotiate the world around us. This is a shift from the previous, more traditional paradigm of computer interaction: computers are now embedded in almost every aspect of our lives, well beyond the desktop and laptop machines we use for work and recreation.

In his article “The Computer for the 21st Century,” Mark Weiser states, “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” As we move to smaller, more pervasive devices and integrated technologies, this statement has certainly proven true, particularly where computers are concerned.

In their article “Charting Past, Present, and Future Research in Ubiquitous Computing,” Abowd and Mynatt reference Weiser’s original project at Xerox PARC and carry his ideas into the present and beyond. They note that Weiser’s vision included:

  • People and environments augmented with computational resources that provide information and services when and where desired, and
  • New applications that emerge and leverage these devices and infrastructure.

They address three themes in ubiquitous computing: natural interfaces, context-awareness, and the ability to automate the capture of live experiences and provide later access to those experiences. The dimensions of time and space play critical roles, since the goal of ubicomp must account for both environments and people; time poses a particular challenge because these systems are expected to be available at all times.
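As a toy illustration of the capture-and-access theme (the names and structure below are my own sketch, not anything prescribed by the article), a system might index captured events by timestamp so they can be replayed later by time range:

    import bisect
    import time

    class CaptureStore:
        """Toy capture-and-access store: events are captured live with a
        timestamp and retrieved later by time range (hypothetical names)."""

        def __init__(self):
            self._times = []   # sorted capture timestamps
            self._events = []  # (timestamp, event) pairs, kept in time order

        def capture(self, event, timestamp=None):
            # Record an event as it happens, indexed by time.
            t = time.time() if timestamp is None else timestamp
            i = bisect.bisect(self._times, t)
            self._times.insert(i, t)
            self._events.insert(i, (t, event))

        def access(self, start, end):
            # "Later access": replay everything captured in [start, end].
            lo = bisect.bisect_left(self._times, start)
            hi = bisect.bisect_right(self._times, end)
            return self._events[lo:hi]

    # Capture moments from a meeting, then replay the first hour.
    store = CaptureStore()
    store.capture("meeting started", timestamp=0)
    store.capture("whiteboard snapshot", timestamp=600)
    store.capture("action items recorded", timestamp=3500)
    for t, event in store.access(0, 3600):
        print(t, "s:", event)

Real capture systems record richer streams, such as audio and ink, but the time index is the piece that makes later access possible.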

Ubiquitous computing has become increasingly pervasive within the context of daily human life. It is no longer confined to the way we work but is embedded in how we live, relate, and communicate. The Internet provides a broad, contextually rich platform for our current existence. It allows us to transcend time and space and to connect and interact with one another in ways never possible before its inception. Information, even esoteric information, is available on demand via various search engines, allowing us to expand our knowledge base immediately. Social networking has changed the way we view human connections and made these connections more (or less) rich, depending on one’s individual definition of “human connection.”

Natural interfaces are becoming more of a reality as interface metaphors drive design in that direction. The development of multi-touch devices, the portability of technology, and on-demand computing also demonstrate the proliferation of computing technology designed to be integrated into daily life.

My concern, however, is this: at what cost to traditional, organic human development and cognition is this proliferation happening? What happens to us if the infrastructure fails? Is our reliance on technology dangerous to our ability to survive? At the very least, is it dangerous to our development?

Even in a time when much research is being conducted in virtual reality and artificial intelligence, both remain artificial; unlike us, they have no organic basis. In that sense, current technologies are still distinguishable from the fabric of our lives. Ubiquitous computing, therefore, in my mind, has value in both simplifying and augmenting the human experience, but we should never become completely reliant on it, because errors do occur: “In fact, it is endemic to the design of computer systems that attempt to mimic human abilities” (Abowd & Mynatt, 34).

References

Abowd, G. & Mynatt, E. (2000). Charting Past, Present, and Future Research in Ubiquitous Computing. ACM Transactions on Human-Computer Interaction, 7(1), 29-58.

Weiser, M. (1991). The Computer for the 21st Century. Scientific American, 265(3), 94-104.

Sunday, November 8, 2009

On Ubiquitous Computing

by David F. Bello

Earlier this semester, I researched augmented reality applications in order to compare their use to Plato’s Allegory of the Cave. I found that one of the crucial requirements in developing an AR system is enabling interaction in realtime (Azuma). Being context-aware implies that the user has the ability to change that context, and the system must react accordingly. If there is delay, the illusion that the system is actually “augmenting” reality fails. This qualifies as a breakdown: the suspension of disbelief that what the computer displays is actually part of the real world is gone. This implication of time conflicts with the statement Abowd and Mynatt make about time in context-aware systems:


With the exception of using time as an index into a captured record or summarizing how long a person has been at a particular location, most context-driven applications are unaware of the passage of time. Of particular interest is understanding relative changes in time as an aid for interpreting human activity. For example, brief visits at an exhibit could be indicative of a general lack of interest. Additionally, when a baseline of behavior can be established, action that violates a perceived pattern would be of particular interest. For example, a context-aware home might notice when an elderly person deviated from a typically active morning routine (Abowd and Mynatt, 37).


These examples consider time to be that abstract construction of the human mind which chunks activity into seconds, minutes, and hours. In practice, time must be considered at a deeper level: the system takes time to process information and to power on and off, and the user always takes an unpredictable amount of time to actually perform tasks. The idea that a context-aware system is not directly impacted by the realtime aspect of its circumstance and context is false. If an existing structure, perhaps an RFID-tagged piece of clothing, burns up in a fire, is torn to shreds by rabid dogs, or disappears for any reason, the context-aware system, if it is to be considered truly “context-aware,” must recognize this and shift its internal information structure to reflect this reality. If there is delay, its use breaks down.
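To make that concrete, here is a minimal Python sketch (my own construction, not anything from the cited papers) of a context tracker that only trusts a sighting for a fixed window of real time, so that a tag which stops being re-sighted ages out of the context on its own:

    import time

    # Toy context tracker illustrating the point above: context is only
    # trustworthy relative to real elapsed time. Names here are my own
    # invention. Each tagged object (say, an RFID-tagged garment) must be
    # re-sighted within STALE_AFTER seconds; otherwise the system stops
    # asserting its presence.
    STALE_AFTER = 5.0  # freshness deadline, in seconds

    class ContextTracker:
        def __init__(self):
            self.last_seen = {}  # tag id -> timestamp of last sighting

        def sighting(self, tag, now=None):
            # A reader reports the tag; refresh its timestamp.
            self.last_seen[tag] = time.time() if now is None else now

        def present(self, now=None):
            # Only tags sighted within the deadline count as "in context".
            # A tag that burned, tore, or walked away simply stops being
            # re-sighted, and ages out here without any explicit event.
            now = time.time() if now is None else now
            return {t for t, seen in self.last_seen.items()
                    if now - seen <= STALE_AFTER}

    tracker = ContextTracker()
    tracker.sighting("jacket-42", now=0.0)
    tracker.sighting("jacket-42", now=4.0)   # still fresh at t=4
    print(tracker.present(now=4.0))          # {'jacket-42'}
    print(tracker.present(now=12.0))         # set(): aged out, no update came

The deadline is the whole design: presence is never stored as a standing fact, only as a timestamped observation that expires.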

However, ubiquity doesn't necessarily imply augmenting and representing the existing environment; often “ubiquitous” systems are created by introducing new environmental elements, such as the whiteboard. Though the whiteboard is simply software which is projected (and therefore directly augments an existing technology and physical surface), the physical infrastructure needed to support it, whether the visually coded boards themselves, the immense prospect of precisely maneuvering projectors and mirrors, or even the architecting of rooms around a whiteboard system, is going to alter the foundation of the environment. Mobile phones, on the other hand, rely on the invisible infrastructure of the wireless network. They fit into the pockets of pants that can just as easily hold keys or money. The ubiquity of the mobile phone is fundamentally different from that of the whiteboard and other shared technologies.

This is not to say that the infrastructure of wireless networks is wholly intangible. As Wendy Chun argues throughout her work, the fiber optic networks which underlie all communicative computing determine much of that computing in and of itself. Cell phone towers in the wilderness can be stumbled upon by the outdoorsman, and the radiation of carrier coverage could, over the long term, manifest in equally invisible yet efficiently malignant cancer cells. “More bars in more places” speaks metaphorically to this incredible reliance on the notion of place.

The mobile phone is an actor in the invisible technology of wide networked space; the whiteboard becomes the space itself. It is important to consider this element of context when weighing the scale of these ubiquitous devices. The whiteboard is, in effect, a small, centralized, and immovable object which must be approached by users; a wireless network allows the mobile phone to be used in any physical location within range. It is this portability that allows the phone to be studied as a function of time, and relegates the whiteboard to unified space.

The goal of the natural interface, according to Abowd and Mynatt, is to move “off the desktop” (32). If this is the case, why would it seem that much different to replace the desktop with simply another fixed point? The static altars of the terminal, whiteboard, and wall-embedded appliance are ubiquitous if and only if the user has entered that specific physical space; this is ubiquity to a much smaller degree, tantamount to creating a huge desk and a huge desktop PC that the user pretends fills his entire environment. Real ubicomp comes from the entrance of computing technology into everyday life unbound by any locality: ubiquity on a global/personal scale. All bars in all places.

Shouldn't this then be expanded to all bars in all places at all times? That would be truly ubiquitous at the personal scale. I believe that is what Abowd and Mynatt propose. Not necessarily to inundate the user with constant attention requests and immutable ringtones, but to provide constant availability and, I believe they do use the word, "companionship." The question then becomes, do we want more ubiquity in the design of our computing devices?

"We" can be considered in terms of scale to be any number of individual groups or populations. I've created a bulleted list to pose a series of questions that range along this variable of user population:


  • Would the medical community benefit from constant access to databases of treatment references?
  • Would the suicidal teenager be served better with a constant connection to loved ones and congenial authority figures?
  • Would the parents of children benefit from the perpetual surveillance of GPS-tracked pedophiles?
  • Would the child like constant streaming of entertainment and/or educational material which may contain dubious amounts of advertising?
  • Would the traveler prefer to have his or her movement tracked across the planet in order to receive notifications of delayed airplanes and awareness of baggage?
  • Would the IRS benefit from RFID tagging of all purchased items?
  • Would a single government benefit from having its military coordinate attacks based on Twitter data?
  • Would society as a whole benefit from any of the situations mentioned?
  • Would large corporations be able to capitalize on them?
  • Would the individual business-owner suffer from the standardization of scaled applications such as these?


Cloud-based computing already offers the ubiquity of information, and mobile devices with continuous access to that information already exist. This is the stuff of science fiction, yet we live in this world. The flying car and other crushed dreams of cyberpunk have been outmoded by true ubiquitous computing in the form of Google Docs, the iPhone, and 3G data plans.

WORKS CITED


Abowd, G. & Mynatt, E. (2000). Charting past, present, and future research in ubiquitous computing. ACM Transactions on Human-Computer Interaction, 7(1), 29-58.

Azuma, R. T. (1997). A survey of augmented reality. Presence: Teleoperators and Virtual Environments, 6(4), 355-385.

Chun, W. (2006). Control and Freedom: Power and Paranoia in the Age of Fiber Optics. Cambridge: MIT Press.