Panel on Augmented Reality at Virtual Worlds Expo

I’m in LA this week for the Virtual Worlds Expo.

LA Convention Center

Tomorrow, as part of the track on the Future of Virtual Worlds, I will be moderating a panel on Augmented Reality.

Augmented Reality: Virtual Interfaces to Tangible Spaces

Augmented reality is an emerging platform with new application areas for museums, edutainment, home entertainment, research, and industry. Novel approaches have taken augmented reality beyond traditional eye-worn or hand-held displays, creating links between the real and virtual worlds. Join this panel of experts as they guide you to where the augmented world is headed next.

I’m joined by:

  • Marc Goodman, Director, Alcatel-Lucent
  • Eric Rice, Producer, Slackstreet Studios
  • Blair MacIntyre, Associate Professor, School of Interactive Computing, Director, GVU Center Augmented Environments Lab, Georgia Institute of Technology
  • David Orban, Founder & Chief Evangelist, WideTag, Inc.

I might mention the Radio 1 ‘Band in your Hand’ project.

David Orban has already shared a summary of what’s on his mind in this space.

Blair has been doing interesting work using the open source Second Life client, augmenting reality with live embedded scenes from Second Life.

If you won’t be there (room 406AB, LA Convention Center, 10–11am tomorrow), what ideas would you like to see thrown into the discussion, and what questions would you like me to ask the panel?

One reply on “Panel on Augmented Reality at Virtual Worlds Expo”

  1. My research is essentially on the experience of ‘presence’ or ‘transportation’ (Lombard and Ditton), and whether it is a useful concept in AR.

    What has interested me most recently is the importance of attention in AR interfaces. In fully mediated environments, it doesn’t matter if virtual objects steal or capture your attention. In the real world, it can be dangerous if augmentations take or try to keep your attention. In fact, what we want are augmentations that behave like real objects: just by being there, they tell you something about their function and their environment, at almost zero cognitive load. Real objects convey a lot of ambient information without the user ever having to think much about them. They also give up more information in response to more attention, but you can shift your attention away at any time – they don’t demand concentration. These are characteristics we should demand of augmentations and AR interfaces.

    Typical AR interfaces at the moment are very like RoboCop or Terminator – objects get isolated, and the visual field fills with useless trivia. It’s important that this is not the road we go down.

    Another important aspect is that the equipment through which we perceive the augmentations must not be socially isolating, as head-mounted displays are. People need to see your eyes in the real world. Mobile phones have a lot of promise, although they’re awkward to use for AR. Projectors and mobile projectors are perhaps the least socially isolating, although they have disadvantages of their own.

    So I suppose my questions to the panel are: how do they envisage organising virtual content in real spaces so that the user isn’t completely overwhelmed? What technologies do they hope to see that will mitigate the display problem (the lovely magic ultrasound haptics system demonstrated recently, perhaps)? A key part of the definition of AR has always involved registration – making sure virtual things happen in the right place – do they worry about that? What sits at the exact centre of Milgram et al.’s virtuality continuum (reality – augmented reality – augmented virtuality – virtuality)? And, perhaps more philosophically, what do they think “realness” is?
