Back from two days in Milton Keynes for ReLIVE08, the Open University’s conference on Researching and Living in Virtual Worlds.
The abstract said that
Roo Reynolds has offered to not pre-prepare any slides for his closing keynote, but instead create a short presentation on the fly during the other sessions. Drawing on the notes and photographs taken
during the conference, he’ll act as a virtual cheat-sheet for the event.
He’ll share his notes, including what he found most interesting and what he’ll take away from it, wrapping up the two days by distilling any key themes and considering what we’ve learned about learning. Perhaps he can pull the threads together into something which will make sense. It makes predicting what he’s going to say particularly tricky, but it could be fun.
The results from this afternoon are embedded below. I’ll let you decide how well I met my (scary, self-imposed) brief. I would say that I didn’t take as many photos as I planned (I either need a better camera or a portable lighting rig), and I ended up trawling my own back catalog of photos to illustrate certain points. Also, I was a smidgen more didactic than I’d intended. I was (and am) very tired. In fact, I was up at 2:30 am this morning pulling together my notes from yesterday. Four hours sleep is not enough for me and perhaps being tired made me more challenging – and less congratulatory – than I could have been.
More importantly, my apologies for only drawing on a very small selection of the papers presented at the conference. With 4 or 5 streams running at once (and especially with the rooms spread across the campus) it just wasn’t possible to see everything. Much of what I did see really impressed me and I really enjoyed the conference.
Update: a video of the presentation, with the slides nicely inter-cut, is now online.
- Videos from Day 1 (be sure to watch Ted Castronova’s opening keynote) and Day 2
- Schome is the project that made me shed a tear
- Simon Bignell aka Milton Broom has some excellent work in psychology. See especially Problem-based Learning in Virtual Interactive Educational Worlds for Psychology (PREVIEW-Psych)
- Sarah aka Intellagirl and her great presentation
- ‘Second Life is not the only fruit’ is the one-sentence summary of the latest snapshot of virtual world activity in UK Higher and Further Education by Virtual World Watch and the Eduserv Foundation. Well worth reading
- More blogs: ReLIVE08 live blog, Niall Sclater, Daniel Livingstone, …
- relive08: Flickr photos, Delicious bookmarks, Twitter search
I’d hoped to take notes during the ARG panel I was moderating at Virtual Worlds London this week, and even planned to create the slides on the fly. No chance. Being on stage seems to reduce my IQ by at least 20 points. While I can pay attention to what is being said and take notes (I did quite a bit of that on both days of the conference) it’s not possible (for me, at least) to do all of that and ask questions.
Instead, I recorded the audio, took some notes on paper and threw some summary slides together on the way home. Here they are.
Back for day 2. Here’s the schedule.
My (live-blogged via Cover It Live) notes from day 2 of the conference are embedded below…
Here are my notes from day 1 (embedded below)…
Update: day 2 is now live.
ARGs (Alternate Reality Games) have become a hot topic in recent months. It’s hard not to think of an ARG as a virtual world in which the interfaces (including websites, email, text message, even telephones) are those we know from everyday life. Is there even more to them than that? Recent franchise tie-ins raise startling questions about business models, while war-stories about user engagement will be of interest to any virtual world designer. Do virtual worlds have anything to learn from ARGs? Find out from a selection of real-life ARG designers, developers and experts.
I’m going to be joined by Dan Hon (Co-founder and CEO, Six to Start [bio]), Kim Plowright (Production Manager, Oil Productions Ltd [bio]) and Fiona ‘Foe’ Romeo (Head of Digital Media, National Maritime Museum and Royal Observatory [bio]).
Just some of the questions I’m likely to ask include:
- Why do people play ARGs? Does there need to be a prize?
- How do you balance getting someone through the story vs keeping an interesting challenge?
- How do you maintain a believable universe?
- How do/can ARGs make money?
- To what extent can ARGs be user-created?
(I already have many more ideas than we’ll get through during the panel, but if you’d like to suggest topics for conversation or questions then I’m all ears.)
I’m planning to upload slides and notes asap after it’s all over. I don’t have any pre-prepared slides, but will be creating them on the fly in a similar way to the Augmented Reality panel I moderated in LA recently.
See you on Tuesday afternoon? According to the schedule, the ARG panel is 4:45 – 5:45pm on Tuesday. I’ll also be catching some former colleagues hard at work, including Rob’s panel and Ian’s talk on Tuesday morning.
David Orban recorded our panel, and has put the video online.
Here’s a slideshare of the accompanying videos and images we used to illustrate the panel.
And finally, from the live blog that David was maintaining, here are the notes taken and shared during the panel…
11:02 David Orban – I am now sitting on the podium with Roo on the Augmented Reality panel:
– Marc Goodman, Director, Alcatel-Lucent
– Eric Rice, Producer, Slackstreet Studios
– Blair MacIntyre, Associate Professor, School of Interactive Computing, Director, GVU Center Augmented Environments Lab, Georgia Institute of Technology
– David Orban, Founder & Chief Evangelist, WideTag, Inc.
– Andrew (Roo) Reynolds, Portfolio Executive for Social Media, BBC Vision (moderator)
11:03 David Orban – People are still streaming in, as certainly the parallel sessions are putting a strain on everybody’s decision centers. They are nice, and complementary, but the choice is still sometimes not easy.
11:05 David Orban – We are introducing ourselves with 25 seconds each. Laughter as Marc from Alcatel starts saying “I was born in Illinois at a very young age…”
11:08 David Orban – Starting from the Wikipedia page we talk about how each of us defines Augmented Reality. Blair says that it is where you are strictly registering media with the world. Eric says “There is so much data around us we just don’t see it.”
11:11 David Orban – Blair is describing the ‘magic lens’ and ‘magic mirror’ terminology as Roo is showing short YT clips to illustrate the background of the concepts. The flow is pretty good and fresh. I hope that the audience is also feeling like this.
11:18 David Orban – Eric says that the best application of AR is gaming today.
11:23 David Orban – I described the difference between having the data (important), and visualizing the data (possible only after the first). And segued into the description of semantic understanding in synthetic worlds, folksonomies, and the necessity of relying on sensor networks to collect data that people won’t. The more autonomous the better. Blair agreed.
11:23 [Comment From Grace McDunnough] David, can you share the YT links? Thanks :)
11:25 David Orban – Grace, I will ask Roo for them on the fly. Now he is speaking, and asking us about the potential of these technologies to overcome the social isolation that visualization technologies can cause.
11:26 [Comment From Mal Burns] oh – good question
11:26 David Orban – The panel is describing how looking at our phones all the time is normal, and how this is going to impact our social behavior more and more.
11:27 David Orban – YouTube link: http://www.youtube.com/watch?v=O2i-W9ncV_0
11:31 David Orban – I remarked that it will be all right, because as a society we will work out what makes sense and what doesn’t, as we originally did with using mobile phones in restaurants, etc.
11:31 [Comment From Roo] More links: http://uk.youtube.com/watch?v=NLahYcb7Ppg and http://uk.youtube.com/watch?v=ODgZtriNYoc
11:31 David Orban – Thanks, Roo, for helping out here…
11:33 David Orban – Marc from Alcatel is describing a mixed reality application on a mobile phone.
11:33 David Orban – Roo is asking about interoperability and “is this as messy as in Virtual Worlds?”
11:34 David Orban – Blair says that you have to deal with these issues, and develop standards. There was a proposal for ARML, combining geotagging with semantics.
11:36 Roo – Pessimism from Eric on interoperability. “We will not learn” from our mistakes elsewhere
11:37 Roo – David comparing email w/ X400 and getting philosophical. “It’s a mistake to put reality on a pedestal”.
11:38 Roo – David: info should flow, and it should, but can only do so if interop. is possible.
11:38 Roo – David: Culture, Rights and Laws all need to be considered
11:40 Roo – Halting State by S
11:41 David Orban – Postsingular by Rudy Rucker. Freely available on http://www.rudyrucker.com/postsingular/
11:42 Roo – Halting State (Charles Stross)
11:43 Roo – Plus of course, Sterling and Spime
11:45 David Orban – Halting State url: http://en.wikipedia.org/wiki/Halting_State
Vernor Vinge: Rainbows End http://en.wikipedia.org/wiki/Rainbows_End
11:45 David Orban – Quote from Postsingular:
“Craigor was a kind of fisherman as well; that is, he earned money by trapping iridescent Pharaoh cuttlefish, an invasive species native to the Mergui Archipelago of Burma and now flourishing in the climate-heated waters of the San Francisco Bay. The chunky three-kilogram cuttlefish brought in a good price apiece from AmphiVision, Inc., a San Francisco company that used organic rhodopsin from cuttlefish chromatophores to dope the special video-displaying contact lenses known as web-eyes. All the digirati were wearing webeyes to overlay heads-up computer displays upon their visual fields. Webeyes also acted as cameras; you could transmit whatever you saw. Along with earbud speakers, throat mikes, and motion sensors, the webeyes were making cyberspace into an integral part of the natural world.”
11:46 David Orban – Blair: “I like to think of Second Life like hypertext systems were before HTML and the web”
11:47 David Orban – Eric adds: “We also need metadata for experiences.”
11:49 Roo – Question from Tish re Internet of Things: social purpose. Instrumentation of reality (Cory Doctorow), implications on the public space.
11:49 David Orban – Tish Shute of Ugotrade is asking relative to Bruce Sterling’s remark that this technology is also for social good. The instrumentation of reality instead of surveillance. We need to understand what public space is.
11:51 Roo – David says: it’s important. Web was not properly planned. OpenSpime working with EFF to find best practice. Yet some people think of Spime as Slime. Being more aware of our environment can be a very good thing
11:52 Roo – Eric: “totalitarianism is also user generated now”.
11:54 David Orban – Blair is describing how buildings can become intelligent without having full details, and achieve results nonetheless.
11:55 David Orban – Blair: “If it is centralized, who can decide who puts up an AR sign in front of the Staples Center?”
11:57 Roo – Question: How will AR be picked up by commercial space?
11:59 David Orban – Collaboration, telework, modeling, are some of the application areas that are being mentioned.
12:28 David Orban – The panel ended. A lot of questions from the people coming up to the podium.
I’m in LA this week for the Virtual Worlds Expo.
Tomorrow, as part of the track on the Future of Virtual Worlds, I will be moderating a panel on Augmented Reality.
Augmented Reality: Virtual Interfaces to Tangible Spaces
Augmented reality is an emerging platform with new application areas for museums, edutainment, home entertainment, research, and industry. Novel approaches have taken augmented reality beyond traditional eye-worn or hand-held displays, creating links between the real and virtual worlds. Join this panel of experts as they guide you to where the augmented world is headed next.
I’m joined by:
- Marc Goodman, Director, Alcatel-Lucent
- Eric Rice, Producer, Slackstreet Studios
- Blair MacIntyre, Associate Professor, School of Interactive Computing, Director, GVU Center Augmented Environments Lab, Georgia Institute of Technology
- David Orban, Founder & Chief Evangelist, WideTag, Inc.
I might mention the Radio 1 ‘Band in your Hand’ project.
Blair has been doing interesting work using the open source Second Life client, augmenting reality with live embedded scenes from Second Life.
If you won’t be there (room 406AB, LA Convention Center, 10 – 11am tomorrow) what ideas would you like to see thrown into the discussion, and what questions would you like me to ask the panel?
My boss’s boss Luba recently asked me to put together a short video for an internal conference on the future of applications. I didn’t have long, so I wandered around Hursley with my camera and my laptop for an afternoon, thinking out loud about the near future for virtual worlds.
For anyone following virtual worlds, none of this will come as a surprise. It’s just a very quick summary covering some subjects I tend to talk about a lot anyway.
In putting it together, I distracted myself by buying the full version of ScreenFlow, which made the gratuitous picture-in-picture stuff from 2:09 onwards stupidly easy and is generally a lot of fun.
Hello. My name’s Roo and I’m here in IBM Hursley.
There’s a number of virtual worlds projects now. Not all of them are external. Not all of them are public-facing, although there are some of those as well. There’s a variety of recruitment events and conferences – public outreach is definitely a big thing for IBM in virtual worlds – but unlike most companies it’s not the only thing we do. We’re also exploring collaboration. There are a number of different projects now, inside IBM’s firewall, exploring what it means to come together and work as a team when you’re using a virtual world. Is it different to Instant Messaging? Is it different to using a teleconference? And the answer seems to be that yes, it is different.
In the last 12 or 18 months there have been a lot of people meeting and talking and signing deals and agreeing to interoperate and open up a lot more. Linden Lab have made a joint press release with IBM in which we talk about avatar portability and being able to move your avatar between virtual worlds. A lot of people hear that and they get confused. They start thinking, well, I don’t want my Dwarf from World of Warcraft to move into my Second Life space, that would be nonsensical. And indeed it would be. There’s very limited appeal for that kind of interoperability. I think what people really could be thinking of instead is more like: could I bring my friends list with me? Could I bring my contact list? Could I bring my wallet? Could I bring my inventory? What are the standards, and what are the services, that are going to be required to make true interoperability between virtual worlds make sense?
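To picture what “bring my friends list, wallet and inventory with me” might mean in practice, here is a purely illustrative sketch of a portable, world-agnostic profile. The field names and the `export_profile`/`import_profile` functions are my own invention for illustration, not any proposed standard:

```python
import json

# Purely illustrative: the slice of an avatar's identity that could
# travel between worlds, independent of any one world's 3D assets.
def export_profile(friends, inventory, wallet_balance):
    """Serialise the world-agnostic parts of an avatar's identity."""
    return json.dumps({
        "friends": sorted(friends),   # contact list
        "inventory": inventory,       # item identifiers, not meshes
        "wallet": wallet_balance,     # some agreed unit of exchange
    })

def import_profile(payload):
    """A receiving world re-hydrates whichever fields it understands."""
    return json.loads(payload)

profile = export_profile({"epredator", "eightbar"}, ["hat-001"], 250)
print(import_profile(profile)["wallet"])  # 250
```

The hard part, of course, isn’t the serialisation; it’s agreeing on what the fields mean across worlds, which is exactly the standards question above.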
Bringing together different services, APIs and data sources in the intranet and visualising them, and allowing people to come together and collaborate around those things. It’s all SOA. It’s all just Service Oriented Architecture. We’re simply treating a virtual world as another endpoint – another way of consuming and composing different services and bringing them together.
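The “virtual world as just another endpoint” idea amounts to composing existing services and handing the result to whichever client asks for it. A minimal sketch, where the two service functions are stand-ins for intranet calls rather than any real IBM API:

```python
# Stand-in "services": in a real deployment these would be HTTP/SOA
# calls into the intranet; plain functions keep the sketch runnable.
def inventory_service(part):
    return {"widget": 42}.get(part, 0)

def pricing_service(part):
    return {"widget": 9.99}.get(part, 0.0)

def compose_dashboard(part):
    """Compose several services into one view. The consumer might be a
    web page, a spreadsheet, or a virtual world rendering 3D objects -
    the composition layer neither knows nor cares."""
    return {
        "part": part,
        "stock": inventory_service(part),
        "price": pricing_service(part),
    }

print(compose_dashboard("widget"))
```

The point of the sketch is the shape, not the data: the virtual world is just one more consumer of the composed result.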
Once you get into the idea of a mobile device with a screen and a camera and sufficient processing power to do some interesting things then augmented reality starts to rear its head as well. This idea of dynamic overlays on top of the real world, and holding up your mobile phone and looking through the screen and using the camera and the onboard processing to display real-time information about the real world.
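The dynamic overlay described above ultimately rests on one piece of geometry: projecting a point in the real world into the camera’s screen coordinates. A toy pinhole-camera sketch, with made-up focal length and screen-centre values:

```python
# Toy pinhole projection: map a 3D point in camera space (metres)
# to 2D screen pixels - the core geometry behind AR overlays.
def project(point, focal_px=800, cx=320, cy=240):
    x, y, z = point
    if z <= 0:
        return None              # behind the camera: nothing to draw
    u = cx + focal_px * x / z    # horizontal pixel
    v = cy + focal_px * y / z    # vertical pixel
    return (u, v)

# A label 2m ahead and 0.5m to the right lands right of screen centre:
print(project((0.5, 0.0, 2.0)))  # (520.0, 240.0)
```

A real AR system adds camera pose tracking and lens distortion on top, but the overlay position always comes down to a projection like this.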
I don’t like making predictions, but I think I can pretty confidently say that we should pay attention to augmented reality. I think it’s going to be a pretty important theme in the next generation of applications.
A presentation I gave recently for a British charity. Ren Reynolds (no relation) suggested an alternative title for it, “Third Sector in Third Spaces”. That’s much better and I’d use something like that next time.
I’ve trimmed it down to an hour, but included most of the Q&A/discussion at the end, which included:
- How do virtual currencies and real money interrelate?
- How did IBM get started in virtual worlds?
- How do virtual worlds compare with teleconferencing and videoconferencing?
- What do you think we should be focusing on in the next 12 months? What kind of applications?
- User generated content vs keeping control
Richard Wallis of Talis recently asked some questions on his blog.
- Do you think Web 3.0 will be the label of the next technology wave?
- Will the next wave be based on Semantic Web technologies?
- Does it matter what we call it?
I have a bit of an affinity with Talis, since they also employ Ian Davis. Back in July 2005, Ian wrote a wonderful essay on Web 2.0, which contains a quote I use a lot. I have come to rely on it to get technically-oriented (or technically-baffled) people out of the assumption that Web 2.0 is a standard, or a set of technologies. It’s not. In Ian’s words…
Web 2.0 is an attitude not a technology. It’s about enabling and encouraging participation through open applications and services. By open I mean technically open with appropriate APIs but also, more importantly, socially open, with rights granted to use the content in new and exciting context
Do you think Web 3.0 will be the label of the next technology wave?
I’ve wondered the same thing. In fact, I’ve half-jokingly used it as an assumption. Something like “if we understand Web 2.0, what will Web 3.0 be?”. Outside of that usage – the logical next wave of the web – I tend to avoid using it as a term. I don’t think it helps, and I can’t see it ever sticking. If that next wave does turn out to be based on something specific, like Semantic Web technologies, then the very fact that people already use ‘Semantic Web’ to mean that bundle of stuff should suggest that ‘Web 3.0’ would be an ugly and unnecessary alternative.
In November 2006, Nova Spivack said
“while we probably don’t need another label I would at least say that “Web 3.0” is less intimidating than the term “Semantic Web” to many. On the other hand, I can see a potential confusion arising from terms like Web X.0 as well.”
I think he’s right that we don’t need another label, and wrong that “Web 3.0” is less intimidating than ‘Semantic Web’. It’s less meaningful, which means people will still want to use “Semantic” to describe it.
But anyway, that brings us on to the second question:
Will the next wave be based on Semantic Web technologies?
Maybe. It’s one possibility, and they may at least play a part. Of course, the Wikipedia page on Web 3.0 contains a good roundup of some of the other potential directions, which include a move towards the web as a database, AI, open identity, even 3D Internet.
That last one has long been a passion of mine. I’m not just talking about the real-time participatory nature of virtual worlds, but the interoperable universe of virtual worlds, the standards and conventions which tie them together and the interconnection between virtual worlds and the 2D web. Inspired by people like Mark Wallace and ‘3.D’ (a hybrid of 3.0 and 3D), I’ve begun talking about Enterprise 3.D. I don’t know (or care) if it sticks, and I don’t even spend long explaining it. If people like it, they might use it. Otherwise I’ll have to find a better way of explaining myself.
Does it matter what we call it?
Well no. Not really. The one thing that I can’t possibly agree with is Nova’s proposal that
“Web x.0 terminology be used to index the decades of the Web since 1990. Thus we are now in the tail end of Web 2.0 and are starting to lay the groundwork for Web 3.0, which fully arrives in 2010”
I don’t think that’s at all useful. There is no advantage to being able to say that ‘Web 4.0’ officially begins at the start of 2020 and Web 5.0 in 2030, ad nauseam.
I think these things tend to be spotted, rather than invented. Placing arbitrary dates on it doesn’t help, and ‘Web 3.0’ will probably never become common usage in the way Web 2.0 has. I’m continuing to keep my eyes open.