Earlier this semester, I researched augmented reality applications in order to compare their use to Plato's Allegory of the Cave. I found that one of the crucial requirements in developing an AR system is enabling interaction in real time (Azuma). Being context-aware implies that the user has the ability to change that context, and the system must react accordingly. If there is delay, the illusion that the system is actually "augmenting" reality fails. This qualifies as a breakdown: the suspension of disbelief that what the computer displays is actually part of the real world is gone. This implication of time in context-aware systems conflicts with what Abowd and Mynatt say about time in such systems:
With the exception of using time as an index into a captured record or summarizing how long a person has been at a particular location, most context-driven applications are unaware of the passage of time. Of particular interest is understanding relative changes in time as an aid for interpreting human activity. For example, brief visits at an exhibit could be indicative of a general lack of interest. Additionally, when a baseline of behavior can be established, action that violates a perceived pattern would be of particular interest. For example, a context-aware home might notice when an elderly person deviated from a typically active morning routine (Abowd and Mynatt 37).
These examples treat time as that abstract construction of the human mind which chunks activity into seconds, minutes, and hours. In practice, time must be considered at a deeper level: the system takes time to process information and to power on and off, and the user always takes an unpredictable amount of time to actually perform tasks. The idea that a context-aware system is not directly impacted by the real-time aspect of its circumstance and context is false. If an existing structure, perhaps an RFID-tagged piece of clothing, burns up in a fire, is torn to shreds by rabid dogs, or disappears for any reason, the context-aware system, if it is to be considered truly "context-aware," must recognize this and shift its internal information structure to reflect this reality, as the sketch below illustrates. If there is delay, its use breaks down.
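To make the point concrete, here is a minimal, purely illustrative sketch in Python of a context model that treats the passage of time as part of context itself. The names (ContextStore, STALE_AFTER, observe) and the five-second threshold are my own invention, not anything proposed by Abowd and Mynatt or Azuma: an RFID-tagged item that stops being read simply ages out of the system's picture of the world rather than lingering as stale state.

```python
import time

# Illustrative threshold (invented): a tag not re-observed within this many
# seconds no longer counts as part of the present context.
STALE_AFTER = 5.0

class ContextStore:
    """Hypothetical context model in which time is part of context."""

    def __init__(self):
        self._last_seen = {}  # tag id -> timestamp of the most recent RFID read

    def observe(self, tag_id: str) -> None:
        """Record that a tagged object was just sensed."""
        self._last_seen[tag_id] = time.monotonic()

    def present_items(self) -> set:
        """Return only items seen recently enough to still count as context."""
        now = time.monotonic()
        return {tag for tag, t in self._last_seen.items()
                if now - t <= STALE_AFTER}

    def missing_items(self) -> set:
        """Items the system once knew about but can no longer confirm."""
        return set(self._last_seen) - self.present_items()
```

The particular timeout is arbitrary; the point is only that without some such time-sensitive expiry, the system's internal structure and the actual environment drift apart, and the breakdown described above follows.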
However, ubiquity doesn't necessarily imply augmenting and representing the existing environment; often, "ubiquitous" systems are created by introducing new environmental elements, such as the whiteboard. Though the "whiteboard" is simply software which is projected (and therefore directly augments an existing technology and physical surface), the physical infrastructure required to support it, whether that be the visually coded boards themselves, the immense prospect of precisely maneuvering projectors and mirrors, or even architecting rooms around the implementation of a whiteboard system, is going to alter the foundation of the environment. Mobile phones, on the other hand, rely on the invisible infrastructure of the wireless network. They fit into the pockets of pants that can just as easily hold keys or money. The ubiquity of the mobile phone is fundamentally different from that of the whiteboard and other shared technologies.
This is not to say that the infrastructure of wireless networks is wholly intangible. As Wendy Chun argues throughout her work, the fiber optic networks which underlie all communicative computing determine much of that computing in and of themselves. The cell phone towers in the wilderness can be stumbled upon by the outdoorsman, and the radiation of carrier coverage could, over the long term, manifest in equally invisible, yet efficiently malignant, cancer cells. "More bars in more places" metaphorically calls attention to the network's incredible reliance on the notion of place.
The mobile phone is an actor in the invisible technology of wide networked space; the whiteboard becomes the space itself. It is important to consider this element of context when considering the scale of these ubiquitous devices. The whiteboard is, in effect, a small, centralized, and immovable object which must be approached by users; a wireless network allows the mobile phone to be used in any physical location within range. It is this portability that allows the phone to be studied as a function of time, and relegates the whiteboard to unified space.
The goal of the natural interface, according to Abowd and Mynatt, is to move "off the desktop" (32). If this is the case, why would it be all that different to replace the desktop with simply another fixed point? The static altars of the terminal, whiteboard, and wall-embedded appliance are ubiquitous if and only if the user has entered that specific physical space; this is ubiquity to a much smaller degree, tantamount to creating a huge desk and a huge desktop PC that the user pretends fills his entire environment. Real ubicomp comes from the entrance of computing technology into everyday life unbound by any locality: ubiquity on a global/personal scale. All bars in all places.
Shouldn't this then be expanded to all bars in all places at all times? That would be truly ubiquitous at the personal scale. I believe that is what Abowd and Mynatt propose. Not necessarily to inundate the user with constant attention requests and immutable ringtones, but to provide constant availability and, I believe they do use the word, "companionship." The question then becomes, do we want more ubiquity in the design of our computing devices?
"We" can be considered in terms of scale to be any number of individual groups or populations. I've created a bulleted list to pose a series of questions that range along this variable of user population:
- Would the medical community benefit from constant access to databases of treatment references?
- Would the suicidal teenager be served better with a constant connection to loved ones and congenial authority figures?
- Would the parents of children benefit from the perpetual surveillance of GPS-tracked pedophiles?
- Would the child like constant streaming of entertainment and/or educational material which may contain dubious amounts of advertising?
- Would the traveler prefer to have his or her movement tracked across the planet in order to receive notifications of delayed airplanes and awareness of baggage?
- Would the IRS benefit from RFID tagging of all purchased items?
- Would a single government benefit from having its military coordinate attacks based on Twitter data?
- Would society as a whole benefit from any of the situations mentioned?
- Would large corporations be able to capitalize on them?
- Would the individual business-owner suffer from the standardization of scaled applications such as these?
Cloud-based computing already offers the ubiquity of information. Mobile devices with continuous access to that information already exist. This is the stuff of science fiction, yet we live in this world. The flying car and other crushed dreams of cyberpunk have been outmoded by true ubiquitous computing in the form of Google Docs, the iPhone, and 3G data plans.
Abowd, G. & Mynatt, E. (2000). Charting past, present, and future research in ubiquitous computing. ACM Transactions on Human-Computer Interaction, 7(1), 29-58. (pdf)
Azuma, Ronald T. A Survey of Augmented Reality. Presence: Teleoperators and Virtual Environments 6, 4 (August 1997), 355 - 385. pdf)
Chun, W., (2006). Control and Freedom: Power and Paranoia in the Age of Fiber Optics. Cambridge: MIT Press. (book site)