<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; virtual communities</title>
	<atom:link href="http://www.ugotrade.com/category/participatory-culture/virtual-communities/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>The Game is about the World not Dragons: Talking with Will Wright about Augmented Reality</title>
		<link>http://www.ugotrade.com/2010/03/03/the-game-is-about-the-world-not-dragons-talking-with-will-wright/</link>
		<comments>http://www.ugotrade.com/2010/03/03/the-game-is-about-the-world-not-dragons-talking-with-will-wright/#comments</comments>
		<pubDate>Thu, 04 Mar 2010 03:29:23 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Life]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[3D Mapping]]></category>
		<category><![CDATA[alternate reality games]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[Blaise Aguera y Arcas]]></category>
		<category><![CDATA[crowd sourced intelligence]]></category>
		<category><![CDATA[DARPA AI]]></category>
		<category><![CDATA[Engage]]></category>
		<category><![CDATA[FourSquare]]></category>
		<category><![CDATA[Games for Learning]]></category>
		<category><![CDATA[Games for Learning Institute]]></category>
		<category><![CDATA[high dynamic lighting photographs]]></category>
		<category><![CDATA[hyper-local experiences]]></category>
		<category><![CDATA[hyper-local search]]></category>
		<category><![CDATA[immersive games]]></category>
		<category><![CDATA[open augmented reality]]></category>
		<category><![CDATA[open distributed augmented reality]]></category>
		<category><![CDATA[proximity based social networks]]></category>
		<category><![CDATA[siri]]></category>
		<category><![CDATA[smart things]]></category>
		<category><![CDATA[Stupid Fun Club]]></category>
		<category><![CDATA[The Sims]]></category>
		<category><![CDATA[The Sims2]]></category>
		<category><![CDATA[Wii]]></category>
		<category><![CDATA[Will Wright]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5171</guid>
		<description><![CDATA[&#8220;The game is about the world not dragons,&#8221; Will Wright, Founder and Chief Executive of Stupid Fun Club, Creator of Spore and The Sims. I had a brief chat with Will Wright after his talk at Engage!, and I was delighted to hear that augmented reality is high on his agenda at the moment: &#8220;a lot [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><strong><a href="http://www.stupidfunclub.com" target="_blank"><img class="alignnone size-medium wp-image-5200" title="Screen shot 2010-02-22 at 12.26.12 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/Screen-shot-2010-02-22-at-12.26.12-PM-300x289.png" alt="Screen shot 2010-02-22 at 12.26.12 PM" width="300" height="289" /></a><br />
</strong></p>
<p><strong>&#8220;The game is about the world not dragons,&#8221; Will Wright, Founder and Chief Executive of <a href="http://www.stupidfunclub.com" target="_blank">Stupid Fun Club</a>, Creator of <a href="http://www.spore.com/" target="_blank">Spore</a> and <a href="http://thesims2.ea.com/" target="_blank">The Sims</a>.</strong></p>
<p>I had a brief chat with <a href="http://en.wikipedia.org/wiki/Will_Wright_%28game_designer%29" target="_blank">Will Wright</a> after his talk at <a href="http://www.engageexpo.com/ny2010/" target="_blank">Engage!</a>, and I was delighted to hear that augmented reality is high on his agenda at the moment:</p>
<p><strong>&#8220;a lot of our stuff is kind of in the experimental format right now, but definitely one of our strong interests is AR.&#8221; </strong></p>
<p>Will Wright will be coming to speak at <a href="http://augmentedrealityevent.com/speakers/" target="_blank">Augmented Reality Event</a>, Santa Clara, CA, June 2nd&#8211;3rd, 2010. But, for now, here are a few hints at some of the directions that are intriguing him, e.g., the game potential of 3D mapping like <a href="http://www.ted.com/talks/blaise_aguera.html" target="_blank">Blaise Aguera y Arcas&#8217;s demo of augmented reality maps at TED</a> &#8211; see the full conversation below.</p>
<p>There has been a vital shift, Will Wright points out. Before the Wii, immersive was understood as how much we were pulled into the world of the game. Now immersive is how much the game pulls us deeper into our world, e.g., our relationship with the people we are playing with, as in Rock Band, or engaging with other people&#8217;s crazy antics when playing Wii games.</p>
<h3><strong>&#8220;Computers are imagination amplifiers and toys are imagination constructors.&#8221;</strong></h3>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/computerareimaginatinamplifiers.jpg"><img class="alignnone size-medium wp-image-5183" title="computerareimaginatinamplifiers" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/computerareimaginatinamplifiers-300x290.jpg" alt="computerareimaginatinamplifiers" width="300" height="290" /></a><br />
</strong></p>
<p><em>The slide above is from Will Wright&#8217;s talk at <a href="http://www.engageexpo.com/ny2010/" target="_blank">Engage!</a> </em></p>
<p>Will Wright&#8217;s talk was extraordinary: dense, layered, and deeply thought provoking.</p>
<p>I have picked out a few samples from Will Wright&#8217;s vast tome of slides here. They are just a glimpse of the many insights he offered. If you are still wondering what will transform augmented reality into a mainstream experience, I suggest studying this talk carefully (I think the audio will be posted on the <a href="http://www.engageexpo.com/ny2010/" target="_blank">Engage! web site</a>). Also watch Will Wright&#8217;s <a href="http://g4li.org/" target="_blank">Games For Learning Institute</a> talk at NYU, February 17th, 2010, <a href="http://g4li.org/archives/1986" target="_blank">archived here</a>.</p>
<p>Will Wright and <a href="http://www.stupidfunclub.com/home.html">Stupid Fun Club</a> are getting ready to take us to the next level of imagination amplification and construction.</p>
<h3><strong>&#8220;Smart&#8221; things can make us dumber by overriding our instincts</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/replacingourinstincts.jpg"><img class="alignnone size-medium wp-image-5182" title="replacingourinstincts" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/replacingourinstincts-300x199.jpg" alt="replacingourinstincts" width="300" height="199" /></a></p>
<p>Just one of the many wonderful anecdotes Will Wright told was the story of his experiences with a new &#8220;smart&#8221; car (he bought this car with the intent of exploring the pinnacle of the &#8220;smart&#8221; car experience). <em>The slide above is from Will Wright&#8217;s talk at <a href="http://www.engageexpo.com/ny2010/" target="_blank">Engage!</a></em></p>
<p>Increasingly, artifacts are being designed to send us more and more data, and this car was endowed with an array of sensors supplying data aimed at assisting parallel parking &#8211; a notoriously challenging aspect of driving. But the car failed miserably at helping. While parallel parking had been easy for him prior to being deluged with all this data, Will Wright pointed out that, ironically, he had to learn to ignore this stuff to park the &#8220;smart&#8221; car.</p>
<p>Instinctively, we filter the incoming information down to the stuff relevant to parking. This kind of pre-conscious filtering is a key challenge for augmented reality, and one that Will Wright, as a game designer, has given a great deal of thought to.</p>
<p>As Will Wright pointed out, a lot of our ideas about augmented reality and sensor-enabled artifacts are rooted in trying to give us more data, to &#8220;take over our instincts.&#8221; Not only does this extra data, as in the case of the HUD for parallel parking, get in the way of our own highly effective intuitive instincts; as Will Wright also noted, these artifacts can also deploy that data independently to override our instincts, e.g., the car detecting that your head has turned back to talk to a passenger and applying the brakes!</p>
<h3><strong>&#8220;Toys Encourage Agency&#8221;</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/Screen-shot-2010-02-19-at-3.14.53-AM.png"><img class="alignnone size-medium wp-image-5188" title="Screen shot 2010-02-19 at 3.14.53 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/Screen-shot-2010-02-19-at-3.14.53-AM-300x200.png" alt="Screen shot 2010-02-19 at 3.14.53 AM" width="300" height="200" /></a></p>
<p>Toys can be the antidote to instinct-blocking &#8220;smart things.&#8221; In contrast to &#8220;smart&#8221; data-spitting cars that &#8220;take over&#8221; our instincts, toys encourage agency. Will Wright gave the example of high dynamic lighting photographs that make the world &#8220;toy like&#8221; and make us want to reach in and play with it (<a href="http://hdrcreme.com/photos/36-Sunset" target="_blank"><em>photo above from HDRCreme</em></a>).</p>
<h3>&#8220;What Computers are really good at is harvesting human intelligence&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/HiveMind1.jpg"><img class="alignnone size-medium wp-image-5194" title="HiveMind" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/HiveMind1-300x199.jpg" alt="HiveMind" width="300" height="199" /></a></p>
<p>Another key insight that Will Wright explored in depth in his talk was the significance of crowd sourced intelligence (<em>the slide above is from Will Wright&#8217;s talk at <a href="http://www.engageexpo.com/ny2010/" target="_blank">Engage!</a>)</em>. If the crowd is training the filter, he suggested to me, this might provide the kind of context we need to build meaningful augmented reality experiences (for more on this see the conversation below).</p>
<h3>Talking with Will Wright at Engage!, NYC, 2010</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/WillWright2.jpg"><img class="alignnone size-medium wp-image-5174" title="WillWright2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/WillWright2-277x300.jpg" alt="WillWright2" width="277" height="300" /></a></p>
<p><strong>Tish Shute:</strong> I was very interested by the idea you put out that this deluge of information gathered by sensors is not necessarily a kind of nirvana for augmented reality; in fact it can be just the opposite. In the embryonic world of augmented reality, we have two streams at the moment. One is the idea of a hyper-local nirvana imagined for AR, in which we get information relevant to us, when and where we need it. But you talked about some of the problems in realizing this, didn&#8217;t you? The other strand is the emerging stream of play which you are exploring&#8230;</p>
<p><strong>Will Wright:</strong> Right. I think part of it is like what I was talking about: the way our senses are set up to know how to filter out 99% of what is coming into them. That is why they work, and that is what is beneficial. I think that is why AR needs to focus on&#8230;</p>
<p>You look at what I can find out on Google or whatever, the amount of information is just astronomical. The hard part, the intelligent part, is how do you figure out that one tenth of 1% that I actually care about at this given second?</p>
<p><strong>Tish Shute: </strong> Yes. Have you seen any examples of AR beginning to do that?</p>
<p><strong>Will Wright: </strong> No, not at all. I think that you have to have a contextual understanding of where I am at, where my mindset is, what my situation is, what my goal state is on a moment by moment basis. And then it is still a complex task. But the very first thing we need is more context for building a filter. See, that filter is changing every few minutes, you know; what I am filtering into my senses is changing, and my context is changing moment to moment.</p>
<p><strong>Tish Shute: </strong> I really liked your emphasis on crowd sourced intelligence as the key power of a networked world. Is this the seed&#8230;?</p>
<p><strong>Will Wright:</strong> Well, you can imagine crowd sourcing that filter&#8230;it would affect a million people and get a sense of what mental context they were in and what filter they turned on. And so, in a sense, the crowd is training the filter.</p>
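<p>Wright&#8217;s &#8220;the crowd is training the filter&#8221; idea can be sketched in miniature. The toy code below is purely my own illustration (the names, the keep/hide voting signal, and the threshold are all assumptions, not anything Wright described): record what users in a given context chose to keep visible, and let that aggregate tally become the default filter for that context.</p>

```python
# Toy sketch of a crowd-trained relevance filter. Everything here (names,
# the keep/hide voting signal, the threshold) is an illustrative assumption,
# not a system Will Wright described.
from collections import defaultdict

class CrowdFilter:
    def __init__(self):
        # (context, item) -> [times kept visible, times seen]
        self.votes = defaultdict(lambda: [0, 0])

    def record(self, context, item, kept):
        # One user, in one context, either kept an annotation or hid it.
        tally = self.votes[(context, item)]
        if kept:
            tally[0] += 1
        tally[1] += 1

    def relevance(self, context, item):
        kept, seen = self.votes[(context, item)]
        return kept / seen if seen else 0.0

    def filter(self, context, items, threshold=0.5):
        # Default view: only what the crowd, in this context, tended to keep.
        return [i for i in items if self.relevance(context, i) >= threshold]

f = CrowdFilter()
for _ in range(9):
    f.record("commuting", "transit-times", kept=True)
f.record("commuting", "transit-times", kept=False)
for _ in range(10):
    f.record("commuting", "banner-ad", kept=False)
print(f.filter("commuting", ["transit-times", "banner-ad"]))  # ['transit-times']
```

<p>A real system would of course need far richer context signals than a single label, which is exactly Wright&#8217;s point about needing a million people involved.</p>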
<p><strong>Tish Shute:</strong> Yes. The problem with projects like <a href="http://siri.com/" target="_blank">SIRI</a>, which is driven by the big DARPA AI project, CALO, is that it is centralized &#8211; although I am not sure what they intend to do in terms of crowd sourced corrections. But if it was all open and we could crowd source as well, that would be interesting. But in the end we need a framework for AR that is as open as the internet, don&#8217;t we?</p>
<p><strong>Will Wright:</strong> Right. I think the technological infrastructure needs to be much lighter so that it can be grounded in more like a Twitter feed or something.</p>
<p><strong>Tish Shute:</strong> Yeah. I&#8217;m actually working on a project using the Wave Federation protocol as the basis for <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">an open communications framework for augmented reality, AR Wave</a> &#8211; not the Wave user interface, just the real-time federation protocol. But, of course, for it to become an open framework that could be a vehicle for crowd trained augmented reality it would need good take-up!</p>
<p><strong>Will Wright: </strong> Right. You really want a million people involved.</p>
<p><strong>Tish Shute:</strong> Yes, our dream is that the creation of augmented reality content will be as open, accessible and simple as making an html page, or contributing to a wiki.</p>
<p>So, in terms of AR games, what is interesting on the horizon? Presumably games also have to solve the problem of delivering a hyper-local experience. The car that you described in your talk tried hard to use augmented reality to solve the problem of parallel parking and ended up making it harder. So giving us the information we need, where we need it, when we need it, and specific to who we are is going to be a big challenge. But in terms of games, what kinds of hyper-local experiences will be most fun, and what have you seen that is interesting in terms of augmented reality games up to now?</p>
<p><strong>Will Wright: </strong> I&#8217;ve not actually seen much at all. I&#8217;ve seen people doing interesting stuff with like Google Maps. They aren&#8217;t really entertainment oriented, but I think you can start thinking about&#8230;</p>
<p>I mean I think for a lot of people, Google Street View is entertainment. But I haven&#8217;t really seen something that was really leaning into an entertainment application using existing technology and data that is already out there.</p>
<p>I mean I have seen some cool experiments &#8211; people playing Pac-Man in Washington Square and stuff like that &#8211; but nothing really serious.</p>
<p><strong>Tish Shute: </strong>Yeah. Of course I think one of the missing links is that the barrier of entry is way too high for creating social augmented experiences for smart phones, and as you point out in your talk, it is the social implications of the game that make it compelling.</p>
<p><strong>Will Wright: </strong> Also, I think using them [smart phones] as data aggregation devices rather than just data consumption devices&#8230;so that people out there are using their phone, cameras, microphones, or whatever to gather data and get an experience where they are rewarded for gathering data.</p>
<p><strong>Tish Shute: </strong> Like <a href="http://foursquare.com/" target="_blank">foursquare</a> where you get the badges, and people can become the mayor of like a cafe or something.</p>
<p><strong>Will Wright:</strong> Right. Yeah, you can imagine people using their phones to actually kind of pull information&#8230;</p>
<p><strong>Tish Shute: </strong> A Dutch developer/artist/game designer, Thomas Wrobel, <a href="http://www.lostagain.nl/" target="_blank">Lost Again</a>, came up with the original concept for the AR framework we are building on the Wave Federation protocol. Thomas and his partner Bertine van Hovell design alternate reality games, amongst other things they do&#8230;so they are deeply immersed in the potential of the world as game.</p>
<p><strong>Will Wright:</strong> Yeah, one of my programmers actually works in Amsterdam&#8230;there is a whole sub-community&#8230;<br />
Well, yeah. The possibilities are tremendous. And Wii is actually training us that way [to be as much engaged with the other players in the physical space as the virtual game], so it is going to happen.</p>
<p><strong>Tish Shute: </strong> What are the most exciting things you see at the moment, and for the next 12 months for augmented reality?</p>
<p><strong>Will Wright:</strong> Gosh. I mean I just think there is cool stuff happening in mapping, in general.</p>
<p><strong>Tish Shute:</strong> Like <a href="http://www.ted.com/talks/blaise_aguera.html" target="_blank">Blaise Aguera y Arcas&#8217;s demo of augmented reality maps at TED?</a></p>
<p><strong>Will Wright: </strong> Yeah, I thought the 3-D mapping with Microsoft&#8230;I think like the next level of that is going to be really compelling.</p>
<p><strong>Tish Shute:</strong> You see game potentials in that?</p>
<p><strong>Will Wright: </strong> Yeah. You start overlaying really cool game potential on top of that.</p>
<p><strong>Tish Shute:</strong> Might you get interested and do something?</p>
<p><strong>Will Wright:</strong> Oh, yeah. I mean in terms of games, that is one of my biggest interests, is AR.</p>
<p><strong>Tish Shute: </strong>Are you allowed to talk about anything specific at all?</p>
<p><strong>Will Wright:</strong> Not yet, no. I mean a lot of our stuff is kind of in the experimental format right now, but definitely one of our strong interests is AR.</p>
<p><strong>Tish Shute: </strong> Yeah, absolutely. We are over being tied to our desks to use computers &#8211; we want to be doing it anywhere, anytime, with anything&#8230;</p>
<p><strong>Will Wright: </strong> Now the game is about the world instead of about dragons. I love that.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/03/03/the-game-is-about-the-world-not-dragons-talking-with-will-wright/feed/</wfw:commentRss>
		<slash:comments>11</slash:comments>
		</item>
		<item>
		<title>Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook</title>
		<link>http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/</link>
		<comments>http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/#comments</comments>
		<pubDate>Sun, 17 Jan 2010 17:05:01 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR Commons]]></category>
		<category><![CDATA[AR Consortium]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[ardevcamp]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARNY Meetup]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[ARWave Wiki]]></category>
		<category><![CDATA[augmented reality conference]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality goggles]]></category>
		<category><![CDATA[augmented reality social commons]]></category>
		<category><![CDATA[brightkite]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Davide Carnivale]]></category>
		<category><![CDATA[distributed AR]]></category>
		<category><![CDATA[distributed augmented reality]]></category>
		<category><![CDATA[federated search]]></category>
		<category><![CDATA[FourSquare]]></category>
		<category><![CDATA[Games Alfresco]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[gowalla]]></category>
		<category><![CDATA[graffitigeo]]></category>
		<category><![CDATA[hacking maps]]></category>
		<category><![CDATA[Head Map manifesto]]></category>
		<category><![CDATA[imageDNS]]></category>
		<category><![CDATA[imagemarks]]></category>
		<category><![CDATA[imagewiki]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[Map Kibera]]></category>
		<category><![CDATA[Mikel Maron]]></category>
		<category><![CDATA[mobile internet]]></category>
		<category><![CDATA[mobile social]]></category>
		<category><![CDATA[mobile social interaction utility]]></category>
		<category><![CDATA[Muku]]></category>
		<category><![CDATA[neo-viridian]]></category>
		<category><![CDATA[Nokia's ImageSpace]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[open distributed AR]]></category>
		<category><![CDATA[OpenGeo]]></category>
		<category><![CDATA[paige saez]]></category>
		<category><![CDATA[photo-based positioning systems]]></category>
		<category><![CDATA[physical world platform]]></category>
		<category><![CDATA[placemarks]]></category>
		<category><![CDATA[Planetwork]]></category>
		<category><![CDATA[Platial]]></category>
		<category><![CDATA[point and find]]></category>
		<category><![CDATA[proximity based social networks]]></category>
		<category><![CDATA[snaptell]]></category>
		<category><![CDATA[social cartography]]></category>
		<category><![CDATA[social commons]]></category>
		<category><![CDATA[social search]]></category>
		<category><![CDATA[SpinnyGlobe]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[trust filters]]></category>
		<category><![CDATA[Viridian]]></category>
		<category><![CDATA[viridiandesign]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[Wave]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[WhereCamp]]></category>
		<category><![CDATA[whurley]]></category>
		<category><![CDATA[yelp]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5050</guid>
		<description><![CDATA[Visual search is heating up, and with it a key stage of turning the physical world into a platform is underway as images become hyperlinks to the world in applications like Google Goggles, Point and Find, and SnapTell &#8211; see this post by Katie Boehret. And while there may be no truly game changing augmented [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselmhook.jpg"><img class="alignnone size-medium wp-image-5051" title="anselmhook" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselmhook-300x225.jpg" alt="anselmhook" width="300" height="225" /></a></p>
<p>Visual search is heating up, and with it a key stage of turning the physical world into a platform is underway, as images become hyperlinks to the world in applications like <a href="http://www.google.com/mobile/goggles/#dc=gh0gg" target="_blank">Google Goggles</a>, <a href="http://pointandfind.nokia.com/" target="_blank">Point and Find</a>, and <a href="http://www.snaptell.com/" target="_blank">SnapTell</a> &#8211; <a href="http://solution.allthingsd.com/20100112/in-search-of-images-worth-1000-results/" target="_blank">see this post by Katie Boehret</a>. And while there may be no truly game changing augmented reality goggles for a while, make no mistake: key aspects of our augmented view, factors that will have a lot to do with what we will actually see when an augmented vision of the world is commonplace, are already in the works. And, as Anselm Hook (pic above <a href="http://www.flickr.com/photos/caseorganic/2994952828/" target="_blank">from @caseorganic&#8217;s flickr</a>) notes:</p>
<p><strong>&#8220;There is a real risk of our augmented reality world being owned by interests which are not our own. There is a real question of when you hold up that AR goggle, what are you going to see?&#8221;</strong></p>
<p>Cooperating services, e.g., Google Earth, Maps, Streetview, Google Goggles, and leaders in local search like Yelp (<a href="http://www.huffingtonpost.com/ramon-nuez/google-is-getting-ready-f_b_426493.html" target="_blank">see here</a>) would have an enormous ability to filter and control a mobile, social, context aware view of the physical world, and Google themselves see an ethical quandary.</p>
<p><strong>&#8220;A Google spokesperson says this app has the ability to use facial recognition with Goggles, but hasn&#8217;t launched this feature because it hasn&#8217;t been built into an app that would provide real value for users. The spokesperson also cites &#8220;some important transparency and consumer-choice issues we need to think through&#8221;</strong> (quote from Wall Street Journal column <a href="http://solution.allthingsd.com/20100112/in-search-of-images-worth-1000-results/" target="_blank">by Katie Boehret</a>).</p>
<p><a href="http://www.hook.org/" target="_blank">Anselm Hook</a> and <a href="http://paigesaez.org/" target="_blank">Paige Saez</a>, with great prescience, have been advocating a social commons for the placemarks and imagemarks to our physical world platform through a number of pioneering projects, including <a href="http://imagewiki.org/" target="_blank">imagewiki</a>. I have recently interviewed both Anselm and Paige (upcoming) in depth. My talk with Anselm was nearly three hours long! So I am publishing the transcript in two parts.</p>
<p>Understanding what it means to have a social commons for our physical world platform, and for augmented reality, is a key question for all of us to think about, and an especially important one for those of us involved in the emerging industry of augmented reality.</p>
<p>Anselm <a href="http://blog.makerlab.org/2009/11/augmentia-redux/">notes</a> :</p>
<p><strong>&#8220;The placemarks and imagemarks in our reality are about to undergo that same politicization and ownership that already affects DNS and content. Creative Commons, Electronic Frontier Foundation and other organizations try to protect our social commons. When an image becomes a kind of hyperlink &#8211; there&#8217;s really a question of what it will resolve to. Will your heads up display of McDonalds show tasty treats at low prices or will it show alternative nearby places where you can get a local, organic, healthy meal quickly? Clearly there&#8217;s about to be a huge ownership battle for the emerging imageDNS&#8221;</strong></p>
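<p>To make the image-as-hyperlink idea concrete, here is a hypothetical sketch (the fingerprinting shortcut and every name in it are my own, purely for illustration, not Anselm&#8217;s design): the same image can resolve to competing links, and which one your heads up display shows depends on whose records your resolver trusts &#8211; exactly the ownership question Anselm raises.</p>

```python
# Hypothetical sketch of the "imageDNS" idea: an image, reduced here to a
# fingerprint, resolves to competing links registered by different sources.
# The hashing shortcut and all names are illustrative assumptions.
import hashlib

def fingerprint(image_bytes):
    # Stand-in for real visual-feature matching; this only matches exact bytes.
    return hashlib.sha256(image_bytes).hexdigest()[:16]

class ImageResolver:
    def __init__(self):
        self.records = {}  # fingerprint -> list of (source, url)

    def register(self, image_bytes, source, url):
        self.records.setdefault(fingerprint(image_bytes), []).append((source, url))

    def resolve(self, image_bytes, trusted_sources):
        # Same storefront, different trusted sources, different links.
        entries = self.records.get(fingerprint(image_bytes), [])
        return [url for source, url in entries if source in trusted_sources]

resolver = ImageResolver()
storefront = b"fast-food-facade"
resolver.register(storefront, "brand", "http://example.com/tasty-treats")
resolver.register(storefront, "commons", "http://example.com/local-organic-nearby")
print(resolver.resolve(storefront, {"commons"}))  # ['http://example.com/local-organic-nearby']
```

<p>Whoever controls the default <code>trusted_sources</code> list controls what the goggles show, which is why a social commons for these records matters.</p>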
<p>The mobile internet is moving beyond the internet-in-your-pocket phase of mobility, with mobile, social, proximity-based, context aware networks like <a href="http://www.foursquare.com/">FourSquare</a>, <a href="http://gowalla.com/" target="_blank">Gowalla</a>, <a href="http://brightkite.com/" target="_blank">Brightkite</a> and <a href="http://www.geograffiti.com/">GraffitiGeo</a> (see <a href="http://smartdatacollective.com/Home/23811">Smart Data Collective</a>) likely, soon, to take precedence over other forms of social network.</p>
<p>Regardless of the timeline for true augmented reality &#8211; 3D images &amp; graphics tightly registered to the physical world &#8211; proximity-based social networking and real time search are already taking us into a hyper-local mode and the realm of augmented reality, which is <strong>&#8220;inherently about who you are, where you are, what you are doing, and what is around you&#8221;</strong> (<a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> &#8211; see <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">here</a>). The ground is being prepared for augmented reality now.</p>
<p>If you have been reading Ugotrade, you will know I have been actively involved in developing an open, distributed AR platform/mobile social interaction utility for geolocated data based on the Wave Federation Protocol &#8211; AR Wave, a.k.a. Muku, &#8220;crest of a wave&#8221; (see my posts <a href="http://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/" target="_blank">here</a>, <a href="http://www.ugotrade.com/2009/12/04/ar-wave-project-an-introduction-and-faq-by-thomas-wrobel/" target="_blank">here</a> and <a href="http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/" target="_blank">here</a> for more on this project, and the <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">AR Wave Wiki</a>). Federation is, I believe, one vital aspect of developing a social commons for augmented reality and the physical world platform.</p>
<p>Also, a bit of news: I am co-chairing the upcoming <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">Augmented Reality Event (are2010)</a> with <a href="http://gamesalfresco.com/about/" target="_blank">Ori Inbar</a> of <a href="http://gamesalfresco.com/" target="_blank">Games Alfresco</a> and <a href="http://ogmento.com/" target="_blank">Ogmento</a>, and <a href="http://whurley.com/" target="_blank">whurley</a>. Sean Lowery, <a href="http://www.innotechconference.com/pdx/Details/other.php" target="_blank">Prospera</a>, is the event organizer, and <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">are2010</a> has the support of the <a href="http://www.arconsortium.org/" target="_blank">AR Consortium</a>. The <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">are2010</a> web site is live and there is an <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">Open Call For Speakers</a>. You can submit your proposals and demos for one of the three tracks &#8211; business, technology, or production &#8211; <a href="http://augmentedrealityevent.com/speakers/call-for-proposals/" target="_blank">on the web site here</a>.</p>
<p><a href="http://augmentedrealityevent.com/" target="_blank"><img class="alignnone size-medium wp-image-5101" title="are2010" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/are20101-300x60.png" alt="are2010" width="300" height="60" /></a></p>
<p><a href="http://www.wired.com/beyond_the_beyond/" target="_blank">Bruce Sterling</a>, &#8220;prophet&#8221; of augmented reality and more, &#8220;will deliver the most anticipated <a href="http://augmentedrealityevent.com/speakers/" target="_blank">Augmented Reality keynote</a> of the year.&#8221;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/bruces-brasspost.jpg"><img class="alignnone size-medium wp-image-5105" title="bruces-brasspost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/bruces-brasspost-300x225.jpg" alt="bruces-brasspost" width="300" height="225" /></a></p>
<p>It didn&#8217;t surprise me when Anselm mentioned that Bruce Sterling was a key influence on his work on the geospatial web and augmented reality. Anselm explained:</p>
<p><strong>&#8220;I&#8217;d seen <a href="http://www.viridiandesign.org/notes/151-175/00155_planetwork_speech.html" target="_blank">a talk by Bruce Sterling</a> at an event called Planetwork [May, 2000]. And that event was, for me, a turning point where I decided to focus full time on exactly what I cared about instead of doing things that were kind of similar to what I cared about. So, his influence was a pretty significant one for me at that exact moment.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b.png"><img title="dhj5mk2g_490gcp7q6fn_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b-300x80.png" alt="dhj5mk2g_490gcp7q6fn_b" width="300" height="80" /></a></p>
<p>For more see <a id="q2or" title="viridiandesign.org" href="http://www.viridiandesign.org/About.htm">viridiandesign.org</a> &#8211; it seems it is time for a &#8220;Neo-Viridian&#8221; revival!</p>
<p>This <a href="http://www.wired.com/beyond_the_beyond/2009/05/spime-watch-pachube-feeds/" target="_blank">post by Bruce Sterling on Pachube Feeds</a>, and Thomas Wrobel&#8217;s <a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">prototype design for open distributed augmented reality on IRC</a>, were key inspirations for me when I began thinking about the potential of the Google Wave Federation Protocol for augmented reality. I had been exploring <a href="http://www.pachube.com/" target="_blank">Pachube</a> and was deeply interested in <a href="http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">the vision of Usman Haque</a>, but I had a real <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">aha moment</a> when I read this:</p>
<p><strong>&#8220;(((Extra credit for eager ubicomp hackers: combine this [pachube feeds] with Googlewave, then describe it in microsyntax. Hello, 2015!)))&#8221;</strong></p>
<p>I think the AR Wave group will earn the extra credit and more very soon! <a href="http://need2revolt.wordpress.com/about/" target="_blank">Davide Carnovale, need2revolt</a>, and <a href="http://www.lostagain.nl/" target="_blank">Thomas Wrobel</a> have been leading the coding charge, and there will be a very early AR Wave demo soon, perhaps as soon as the <a href="http://www.meetup.com/arny-Augmented-Reality-New-York/" target="_blank">Feb 16th ARNY Meetup</a>.</p>
<p>Open access to the creation of the view that will eventually find its way into AR goggles will depend on more than the power of an open, distributed platform for collaboration like the AR Wave project. Our augmented reality view will be constructed through complex &#8220;hybrid tracking and sensor fusion techniques&#8221; (Jarrell Pair), cooperating cloud data services, powerful search and computer vision algorithms, and apps that learn by context accumulation &#8211; and at the moment these kinds of resources, at least at scale, are for the most part in private hands.</p>
<p>In the interview below, Anselm discusses how trust filters, being able to publicly permission your searches so that other people can respond and can reach out to you, and the democratization of data in general are even more of a concern with augmented reality and hyper-local search. The task of understanding what it means to have a social commons for the outernet remains an open, and pressing, question.</p>
<p>Anselm explains (see full interview below):</p>
<p><strong>&#8220;as we move towards a physical internet where there&#8217;s no clicking and there&#8217;s no interface and the computer&#8217;s just telling you what it thinks you&#8217;re looking at &#8211; translating, you know, an image of a billboard to the name of the rock star who&#8217;s on that billboard, or translating the list of ingredients on a can of soup to the source outlets where it thinks those ingredients came from &#8211; when you have that kind of automated mediation, the question of trust definitely arises.</strong></p>
<p><strong>And we haven&#8217;t seen the Clay Shirkys or the Larry Lessigs of the world start to talk about this yet. I suspect that in the next four or five years the zero-click interface will become the primary interface &#8211; that we&#8217;ll come to assume that what we see, with the extra enhanced data projected onto our view, is the truth. Yet, at the same time, there is just no structure or mechanism even being considered for a democratic ownership of it.&#8221;</strong></p>
<h3>Augmented Reality will emerge through sensor fusion techniques &amp; cooperating cloud services</h3>
<p>In 2010, sensor fusion techniques &#8211; computer vision technology in conjunction with GPS and compass data &#8211; will create the data linking that can enable the kind of augmented reality that has been the stuff of imagination for nearly four decades (see <a href="http://laboratory4.com/2010/01/the-reality-of-augmented-reality/" target="_blank">Jarrell Pair&#8217;s post</a>).</p>
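As a rough illustration of the sensor-fusion idea &#8211; blending a noisy but drift-free compass with a smooth but drifting gyroscope &#8211; here is a toy complementary filter. All names and numbers are hypothetical; real AR trackers fuse many more sensors (GPS, accelerometers, vision) and use far more sophisticated estimators:

```python
def fuse_heading(compass_deg, gyro_rate_dps, prev_est_deg, dt, alpha=0.98):
    """Complementary filter: trust the integrated gyro short-term (smooth),
    and pull slowly toward the compass long-term (drift-free)."""
    gyro_est = prev_est_deg + gyro_rate_dps * dt      # smooth, but drifts
    # wrap the compass correction into [-180, 180) before blending
    err = (compass_deg - gyro_est + 180.0) % 360.0 - 180.0
    return (gyro_est + (1.0 - alpha) * err) % 360.0
```

Starting from a 90-degree estimate, a 10 deg/s turn over 0.1 s with a compass reading of 91.5 degrees yields an estimate of 91.01 degrees: mostly gyro, gently corrected by the compass.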
<p>Putting stuff in the world in 3D is of course key to the original vision of augmented reality, and one of its biggest challenges. Augmented reality is going to be implicated in a real-time mapping of the world at an unprecedented scale and granularity. We have barely an inkling of the implications of this now.</p>
<p>Anselm and Paige have been working in the heart of the social cartography movement for nearly a decade. The vision and experience of this community is vital to understanding how augmented reality and the world as a physical platform can evolve into something that benefits people and allows them &#8220;to have a better understanding of the opportunities around them.&#8221;</p>
<p>We have been hacking maps for millennia &#8211; &#8220;from conceptual story mapping, to colloquial mapping in European development and the cartographic renaissance created by the global voyages and rediscovery of Ptolemy&#8217;s maps&#8221; (<a href="http://highearthorbit.com/" target="_blank">Andrew Turner</a>). And, recently, initiatives on a public-provided GIS, like <a href="http://opengeo.org/" target="_blank">OpenGeo</a>, have led the way toward more open, interoperable geospatial data.</p>
<p>Mapping takes on a new and crucial role in augmented reality. <a href="http://www.slashgear.com/nokia-image-space-adds-augmented-reality-for-s60-3067185/" target="_blank">Nokia&#8217;s ImageSpace</a> is beginning to do what many thought Microsoft would do with Photosynth two years ago.</p>
<p>And if these kinds of projects develop into &#8220;photo-based positioning systems&#8221; &#8211; &#8220;3D models of the environment to cover every possible angle, and then software that can work out in reverse, based on a picture, precisely where you are and where you&#8217;re facing&#8221; (Thomas Wrobel) &#8211; augmented reality would leap forward overnight.</p>
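The &#8220;reverse lookup&#8221; Wrobel describes can be sketched as nearest-neighbour matching of a query photo&#8217;s feature vector against a database of reference images with known poses. This is a toy stand-in &#8211; the descriptors, pose strings, and the `locate` helper are all hypothetical; real systems match millions of SIFT-style or learned descriptors and then solve for full camera pose:

```python
import math

# Hypothetical reference database: (image feature vector, known camera pose).
REFERENCE_DB = [
    ([0.9, 0.1, 0.3], "5th & Main, facing north"),
    ([0.2, 0.8, 0.5], "Pioneer Square, facing east"),
    ([0.4, 0.4, 0.9], "Waterfront, facing west"),
]

def locate(query_descriptor):
    """Return the pose of the closest reference image in descriptor space --
    working out 'in reverse, based on a picture' roughly where you are."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    vec, pose = min(REFERENCE_DB, key=lambda entry: dist(entry[0], query_descriptor))
    return pose
```

A query descriptor close to the first entry resolves to that entry&#8217;s pose; the open question the post raises is who owns the reference database this lookup runs against.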
<p>It is time to take very seriously the vast opportunities and potential pitfalls of an augmented world.</p>
<p><strong>&#8220;when you are mediating the translation layer between the image and the data, then there is an opportunity for you to control it, and that opportunity is hard to resist. It is hard to choose not to own that opportunity. It is an advertising opportunity. It is a revenue opportunity. It is a chance to send a message and a tone.</strong></p>
<p><strong>I know that Google and companies like that are keenly aware of the kinds of roles they don&#8217;t want to hold, but it is sometimes seductive to think about them. And I am afraid that we, as a community, need to assert an ownership, kind of a commons, over how computers will translate what they see to information that we perceive.&#8221;</strong></p>
<p>There are some initiatives emerging. <a href="http://www.tonchidot.com/" target="_blank">Tonchidot</a> (who <a href="http://www.techcrunch.com/2009/12/08/tonchidot-sekai-camera-funding/" target="_blank">closed on $4 million of VC for augmented reality</a> last December) has helped create the <a href="http://translate.google.com/translate?client=tmpg&amp;hl=en&amp;u=http%3A%2F%2Fwww.arcommons.org%2F&amp;langpair=ja%7Cen" target="_blank">AR Commons</a> in Japan. <a href="http://www.tonchidot.com/corporate-profile.html" target="_blank">CFO of Tonchidot</a> <a href="http://www.linkedin.com/ppl/webprofile?action=vmi&amp;id=499984&amp;pvs=pp&amp;authToken=r8TF&amp;authType=name&amp;trk=ppro_viewmore&amp;lnk=vw_pprofile" target="_blank">Ken Inoue</a> explained in <a href="http://www.ugotrade.com/2009/09/17/tonchidot-taking-augmented-reality-beyond-lab-science-with-fearless-creativity-and-business-savvy/" target="_blank">an interview with me in September 2009</a>:</p>
<p>&#8220;<strong>We feel that public data, such as landmarks, government facilities, and public transport should be shared. We see an AR world where people can readily and easily access information by just seeing &#8211; quick, easy, and efficient. And because of this ease and intuitiveness, children, the elderly and handicapped will surely benefit. AR could help create a safer society. Warnings, alerts, and safety information could save lives and avoid disasters. These are what we, and <a href="http://translate.google.com/translate?client=tmpg&amp;hl=en&amp;u=http%3A%2F%2Fwww.arcommons.org%2F&amp;langpair=ja%7Cen" target="_blank">AR Commons</a>, would like to tackle in the not so distant future.&#8221;</strong></p>
<p>But the task of building a social commons for the physical world platform has only just begun.</p>
<h3>Interview with Anselm Hook</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselm31.jpg"><img class="alignnone size-medium wp-image-5085" title="anselm3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselm31-300x225.jpg" alt="anselm3" width="300" height="225" /></a></p>
<p><em>photo from <a href="http://www.flickr.com/photos/anselmhook/3832691280/in/set-72157621946362509/" target="_blank">Anselm&#8217;s Flickr stream here</a></em></p>
<p><strong>Tish Shute:</strong> We <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">first met last year at Wherecamp</a>. The start of 2009 was, I think, the &#8220;OMG finally&#8221; moment for augmented reality, and in less than a year AR, at least in proto forms, is breaking into the mainstream! You are one of the founding visionaries/philosophers/hackers of the geo web and you have been thinking about the geo web and AR for a long time &#8211; <a href="http://hook.org/headmap" target="_blank">all the way back to the legendary Headmap Manifesto</a>, and before. Most recently you led the way in the very successful <a href="http://www.ardevcamp.org/wiki/index.php?title=Main_Page" target="_blank">ARDevCamp</a> in Mountain View. Could you start by telling me a little bit about the history of your pioneering work with geolocated data?</p>
<p><strong>Anselm Hook: </strong>I am a long time Geo fanatic. I&#8217;m really interested in social cartography and what some people call public-provided GIS &#8211; that&#8217;s some language that people use. Anyway, my personal interest, when I talk to people who are non-technical (and it&#8217;s been a long term interest in the way I phrase it) is that I want to help people see through walls. So, the goal is very simple. I want people to have a better understanding of opportunities around them, the landscape around them. I always get frustrated when people make bad decisions because of a lack of information, especially when it&#8217;s related to their community and related to their environment. But, plainly put, I really just want &#8220;to help people see through walls&#8221;. It&#8217;s a very simple goal.</p>
<p><strong>Tish Shute:</strong> I know you worked on <a href="http://platial.com/" target="_blank">Platial</a>, which is really one of my favorite social mapping applications. It really broke new ground. What was the history of that? How did you get involved with Platial?</p>
<p><strong>Anselm Hook:</strong> That&#8217;s an interesting question. It actually started around 2000, when I saw Bruce Sterling talk. I had been writing video games for many years, and I was quite good at it, and I enjoyed it. But the reasons I was doing it diverged from why the industry was doing it. I was making video games because I liked to make shared spaces for my friends to play in and to share experience. I really enjoyed making shared environments. I worked on <a id="jrn-" title="BBS's" href="http://en.wikipedia.org/wiki/Bulletin_board_system">BBS&#8217;s</a> and my friends and I were always making these collaborative shared environments.</p>
<p>Once the video game industry started to take off, I started to do high-performance, 3D interactive video games and to make compelling shared spaces, and it was a lot of fun. But the frustration for me was that a huge industry grew around it and it became very commercial. Although it paid well, it started to diverge from my values, which were more centered around community environments and shared understanding.</p>
<p><strong>Tish Shute:</strong> Yes, very rapidly the big games kind of devolved from the social aspects and became more and more single-player really, didn&#8217;t they?</p>
<p><strong>Anselm Hook:</strong> It was that way, actually, because even though you were often in a many-player world, you weren&#8217;t collaborating; everyone else became just a target. I liked the idea of deep collaboration that recalls the kind of playful space you see in IRC, or in the real world, where people are solving real-world problems.</p>
<p>And I grew up in the Rockies, and I always had a lot of access to the outdoors. So, I saw shared spaces and collaboration as a way to protect our environment. [To step back] I think people use different metrics for measuring their choices in the world, and many people have a value system centered around minimization of harm: making sure that people are not hurt. But my value system is different. I personally believe that protecting the planet is more important: to maximize biodiversity. I feel like protecting the people around me comes from protecting the ecosystems they live in.</p>
<p><strong>Tish Shute:</strong> That&#8217;s interesting, isn&#8217;t it, because the history of Keyhole was really that, wasn&#8217;t it. Keyhole later became Google Earth, but it began out of a project to look at what was going on in the ecosystem over Africa at that time, didn&#8217;t it?<br />
<strong><br />
Anselm Hook:</strong> Yes, in fact many people&#8217;s projects stem from an environmental concern. <a id="zxy9" title="Mikel Maron's" href="http://brainoff.com/weblog/">Mikel Maron&#8217;s</a> work, for example &#8211; he&#8217;s doing <a id="euvm" title="Map Kibera" href="http://mapkibera.org/">Map Kibera</a>, and he also worked on OpenStreetMap.</p>
<p><strong>Tish Shute:</strong> Map Kibera &#8211; that is the new project?</p>
<p><strong>Anselm Hook:</strong> Oh, yes, his project is called <a id="r7ie" title="Map Kibera" href="http://mapkibera.org/">Map Kibera</a>. He&#8217;s mapping a city in Africa.<br />
[For more see <a id="ngn." title="Map Kibera's YouTube Channel" href="http://www.youtube.com/user/mapkibera">Map Kibera&#8217;s YouTube Channel</a> &#8211; <a id="amqx" title="photo below" href="http://www.flickr.com/photos/junipermarie/4098163856/" target="_blank">photo below</a> from <a href="http://www.flickr.com/photos/junipermarie/">ricajimarie</a>]</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_487qfcv76ft_b.jpg"><img class="alignnone size-medium wp-image-5052" title="dhj5mk2g_487qfcv76ft_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_487qfcv76ft_b-300x199.jpg" alt="dhj5mk2g_487qfcv76ft_b" width="300" height="199" /></a></p>
<p><strong>Tish Shute:</strong> Right, great!</p>
<p><strong>Anselm Hook:</strong> When I started to look at GIS and mapping I started to meet people who had a very similar background. What happened to me is I kind of stepped away from games around the year 2000. I&#8217;d seen a talk by Bruce Sterling at an event called <a id="e8dn" title="PlaNetwork" href="http://www.conferencerecording.com/newevents/pla20.htm">PlaNetwork</a>. And that event was, for me, a turning point where I decided to focus full time on exactly what I cared about instead of doing things that were kind of similar to what I cared about. So, his influence was a pretty significant one for me at that exact moment.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b.png"><img class="alignnone size-medium wp-image-5053" title="dhj5mk2g_490gcp7q6fn_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b-300x80.png" alt="dhj5mk2g_490gcp7q6fn_b" width="300" height="80" /></a></p>
<p>[For more see <a id="q2or" title="viridiandesign.org" href="http://www.viridiandesign.org/About.htm">viridiandesign.org</a> &#8211; it seems it is time for a &#8220;Neo-Viridian&#8221; revival.]</p>
<p><strong>Tish Shute:</strong> It&#8217;s interesting because now your paths are crossing again with augmented reality. You are on the same wavelength again.</p>
<p><strong>Anselm Hook:</strong> It&#8217;s funny, actually; I&#8217;ve had a couple of brief overlaps in that way. Well, so in 2000 I went to see this talk and I did a small project called &#8211; well, I called it <a id="bx3u" title="SpinnyGlobe" href="http://github.com/anselm/SpinnyGlobe">SpinnyGlobe</a>. What I did is I mapped protests from a number of websites onto a globe to show the level of community opposition to the pending war in Iraq. It was the first time there had been a protest before a war. So, it was very interesting to me. [See <a href="http://hook.org/headmap" target="_blank">http://hook.org/headmap</a>]<br />
<strong><br />
Tish Shute:</strong> That&#8217;s really fascinating. Do you have any pictures of that you could send me?</p>
<p><a href="http://www.flickr.com/photos/anselmhook/1747152617/sizes/m/in/set-72157602696188420/" target="_blank"><img class="alignnone size-medium wp-image-5054" title="dhj5mk2g_492ffct2df4_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_492ffct2df4_b-300x225.jpg" alt="dhj5mk2g_492ffct2df4_b" width="300" height="225" /></a></p>
<p>photo from <a id="j05v" title="anselm's flickrstream" href="http://www.flickr.com/photos/anselmhook/1747152617/sizes/m/in/set-72157602696188420/">anselm&#8217;s flickrstream</a></p>
<p><strong>Tish Shute:</strong> Yes, I&#8217;ll definitely look <a id="ua2l" title="SpinnyGlobe" href="http://github.com/anselm/SpinnyGlobe">SpinnyGlobe</a> up. It sounds very interesting. One of the aspects of your work on geolocated data projects like this and <a id="h.gx" title="Platial" href="http://platial.com/">Platial</a> is that you really started to develop this idea of a culture of place, of how people make place. This was the wake-up call to me regarding the power of networks combined with geo-data.</p>
<p>We are hoping to extend this idea into augmented reality with an open distributed platform for AR so that we can collaboratively map our worlds from the perspective of who we are, where we are, and what we are doing. I know you&#8217;ve done some work recently in augmented reality. I know you put the code up already.</p>
<p>By the way, I love the way you take your philosophy into the way you make code &#8211; the practice of making some code, trying some things out, making it all public and publishing your findings, you know, your comments on that experience. Perhaps you could recap how you recently picked up on the state of play with augmented reality, what aspects you looked at, and what came out of that experience?</p>
<p><strong>Anselm Hook:</strong> So, it&#8217;s a very simple trajectory. Coming out of the work I had done on <a id="cs18" title="Platial" href="http://platial.com/">Platial</a>, among other projects, I started to look at the hyper-local, and I suddenly realized that even those services weren&#8217;t really speaking to living, to how to really see and solve local problems. What was missing was a sense of context.</p>
<p>The map doesn&#8217;t know how you&#8217;re feeling, it doesn&#8217;t know if you&#8217;re in a hurry, it doesn&#8217;t know what you want; it&#8217;s very static. Even the web maps are very static. And augmented reality, I started to recognize, is a combination of &#8211; well, it&#8217;s probably a collision of many forces, many forces that we&#8217;re all a part of. We&#8217;ve also started to realize that the real-time web is really important; it&#8217;s part of what AR is about.</p>
<p>We have all started to realize that context is important. You know, your personal disposition, your needs, whether you want to be interrupted or not. That is the kind of thing that the ubiquitous computing crowd has talked about. We started to recognize that there are sensors everywhere, and the ambient sensing communities talked about that. So what is funny for me about augmented reality is I started realizing it is just a collision of many other trends into something bigger.</p>
<p>Everything else we thought was a separate thing is actually just part of this thing. Even things like Google Maps or the mapping systems we think are so great are really just an aspect of a hyper-local view. You actually don&#8217;t really care what is happening 10 blocks away or 100 blocks away. If you could satisfy those same interests and needs within a single block, one block away, you would probably be really happy. You really just want to satisfy needs and interests, find ways to contribute, or get yourself fed, or whatever it is you want. And AR seemed to be the playground to really explore the human condition.</p>
<p><strong>Tish Shute:</strong> Anyway, I think one of the things that has been very amazing this year is that we now have good mediating devices that, for the first time, give us compasses, GPS, and accelerometers. But one of the missing pieces with AR at the moment is [tracking, mapping, and registration] &#8211; the kind of thing colloquial mappings of the world could be of great help with.</p>
<p>We have seen mapping coming out of the Flickr data &#8211; e.g., the University of Washington put maps together from geotagged Flickr photos. Now if we could have that linked up with AR, then we would have the kind of mapping we need to really hook the geo-data onto the world in a way that goes beyond&#8230; you know, what compass and GPS can really deliver is pretty minimal at the moment.</p>
<p><strong>Anselm Hook</strong>: There is a real risk of our augmented reality world being owned by interests which are not our own. There is a real question of, when you hold up that AR goggle, what are you going to see? Are you going to see corporate advertising? Are you going to see your friends&#8217; comments or criticisms? Is it going to be an Iran or a democracy, right? It is unclear.</p>
<p>Right now there are some disturbing trends I have noticed. I am a big fan of Google Goggles. I think it is a great project. But when you are mediating the translation layer between the image and the data, then there is an opportunity for you to control it, and that opportunity is hard to resist. It is hard to choose not to own that opportunity. It is an advertising opportunity. It is a revenue opportunity. It is a chance to send a message and a tone.</p>
<p>I know that Google and companies like that are keenly aware of the kinds of roles they don&#8217;t want to hold, but it is sometimes seductive to think about them. And I am afraid that we, as a community, need to assert an ownership, kind of a commons, over how computers will translate what they see to information that we perceive.</p>
<p><strong>Tish Shute:</strong> Yes. And this is how we met, again, recently [over the project to create an open, distributed platform for AR using the Wave Federation Protocol]&#8230;</p>
<p>This is something I feel really deeply: basically, we need the physical internet to be as open as the end-to-end internet has been. Or more so, actually, because on the end-to-end internet the trend has been toward walled gardens. Facebook became an enormous walled garden which, despite our predictions, is really where the social experience on the web lives. It&#8217;s very much in walled gardens still, and I really feel that with the physical internet we need to make great efforts for it not to be just a series of small pockets of privately funded walled gardens.</p>
<p>There needs to be some kind of communications infrastructure that keeps it open, and that was when I got interested in looking at the Wave Federation Protocol, because it was an open, real-time protocol that could possibly be a basis for that. But the point you&#8217;ve just talked to &#8211; the mapping of the world and who has the &#8220;goggles&#8221;, i.e., the image data, the image databases, that make the world meaningful &#8211; that&#8217;s still a BIG question [i.e. who controls the view?].</p>
<p>When I saw <a id="ewxn" title="ImageWiki" href="http://imagewiki.org/">ImageWiki</a>, [I realized] that is a piece that is vital for augmented reality. We need a huge social effort to be involved in this &#8211; linking in and creating the physical internet, creating the image hyperlinks that will make it meaningful.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_493fv23rg33_b.png"><img class="alignnone size-medium wp-image-5055" title="dhj5mk2g_493fv23rg33_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_493fv23rg33_b-300x219.png" alt="dhj5mk2g_493fv23rg33_b" width="300" height="219" /></a></p>
<p><strong>Anselm Hook:</strong> I think that&#8217;s a great point. The search interface, the kind of Internet that we&#8217;re used to, the way we talk to the network now, is fundamentally open end to end. Yes, you can have your oligarchies inside of it, as we see with Facebook, but you can always start your own venture, and you can do a search on something and find that website and join it, or you can put up your own webpage and people can find it.</p>
<p>The translation layer &#8211; the idea of text search, and the discovery power and the serendipity and the openness of that discovery &#8211; is pretty open right now. We do have some serious boundaries of language, which is one of the reasons I was working at <a id="xg:8" title="Meedan.org" href="http://www.imug.org/events/past2007.htm#meadan">Meedan.org</a> [hybrid distributed, natural language translation] for a couple of years, trying to bridge that issue.</p>
<p>But here, as we move towards a physical internet where there&#8217;s no clicking and there&#8217;s no interface and the computer&#8217;s just telling you what it thinks you&#8217;re looking at &#8211; translating, you know, an image of a billboard to the name of the rock star who&#8217;s on that billboard, or translating the list of ingredients on a can of soup to the source outlets where it thinks those ingredients came from &#8211; when you have that kind of automated mediation, the question of trust definitely arises.</p>
<p>And we haven&#8217;t seen the Clay Shirkys or the Larry Lessigs of the world start to talk about this yet, although I suspect that in the next four or five years the zero-click interface will become the primary interface, and we&#8217;ll come to assume that what we see, with the extra enhanced data projected onto our view, is the truth. Yet, at the same time, there is no structure or mechanism even being considered for democratic ownership of it.</p>
<p><span id="fv3x" title="Click to view full content">We have with DNS, for example, the idea that you can register a domain name and people can search for it, find it, and go to it. There&#8217;s no such thing as an image DNS, or an image translation to DNS, right now. What does it mean when everything is just &#8220;magic&#8221;, when there&#8217;s no way for you to be a part of the conversation, where you&#8217;re just a consumer of what people tell you &#8211; or of what one company, right now, tells you &#8211; is reality? That&#8217;s a real concern.<br />
<strong><br />
Tish Shute: </strong>This, to me, is the most important question at the moment. It&#8217;s the big one, and it&#8217;s the place to put energy if you love the Internet [and what it can now become]. You&#8217;ve got to put a lot of energy into this, because this [a democratized view of the physical world as a platform] won&#8217;t just happen; there&#8217;s already a lot of momentum for it to be heavily privatized, partly because some of the computer vision algorithms that make sense of things like geotagged photographs are not open. For example, the beautiful maps that have been made at the University of Washington [from Flickr geotagged photo sets] &#8211; that work isn&#8217;t in the public domain.</span></p>
<p><strong>Anselm Hook:</strong> Right, Tish, and in fact you&#8217;re referring [with the maps from the Flickr photos] to ordinary maps, and we&#8217;ve already seen that maps lie; we&#8217;ve already seen how much maps reflect a certain truth that becomes the normative truth. Google Maps reflects roads, because it&#8217;s about roads and cars, right? Only recently have they thought about buses and walking. So the normative view that people assume is reality is showing off, you know, Starbucks, and roads, and cars; that becomes the default, and those prejudices are just assumed to be the truth. But they&#8217;re not the truth at all.</p>
<p>I was talking to a friend of mine in Montreal, [Renee Sieber], and she said that Indian portage routes bridge land and water: they don&#8217;t think of a piece of land and a piece of water as different things, they think of them as one thing, a route. It&#8217;s already a different kind of language, and we can&#8217;t even reflect it.</p>
<p>So not only is there this kind of formal, anthropological lie, in a sense, but there&#8217;s this way that we deceive ourselves because of our own prejudices.</p>
<p><strong>Tish Shute:</strong> Yes, I agree, and that&#8217;s why I think some of the things you had written on ImageWiki point so clearly to the need to create a social commons. We need a social commons for the real-time physical internet, and we need it for the image hyperlinks that make sense of it.</p>
<p>And it&#8217;s a complicated thing, in a sense, because we don&#8217;t actually have a good distributed infrastructure for AR yet. Exploring AR Wave, I found that at last we have the suggestion of an open, federated protocol for real-time communication &#8211; the Wave Federation Protocol. [Real-time communication is a very important part of AR.] It isn&#8217;t an actuality yet where lots of people are able to use it and set up their own servers, and there&#8217;s not a standard all the way through [there is not a standard for how data is sent between the client and the server].</p>
<p>But the Wave Federation Protocol does make truly distributed social AR possible. When I saw ImageWiki, I started thinking about bringing ImageWiki together with the social collaborative power of distributed AR. This really would be the basis of a social commons for augmented reality and the physical world as a platform &#8211; the <span id="np6x" title="Click to view full content">start of bottom-up, deeply social collaboration on how we create the augmented reality colloquial maps that can inform a hyper-local view of the world.</span></p>
<p><strong>Anselm Hook:</strong> Yes. When Paige Saez, John Wiseman, and myself, and a few other folks&#8230; you know, Benjamin Foote, Marlin Pohlmann, and a couple of other people started to play with this, we quickly found that&#8230; We started to realize, &#8220;Oh, this kind of thing will be at least as popular as IRC. There will be at least as many people doing this as chatting in little virtual spaces. There&#8217;ll be at least as many people decorating the world with augmented reality markup, and maybe using the real world as a kind of barcode for translating what you&#8217;re looking at into an artifact, a digital artifact.&#8221;</p>
<p>And<span id="csy2" title="Click to view full content"> the size of that space was going to be huge, basically. Maybe not quite as commodifiable as Twitter, but certainly very energetic.</span></p>
<p>Many of the projects we did were looking at these issues from an artistic, technical, and political point of view. We weren&#8217;t so much proposing complete solutions as using praxis to explore the idea with an implementation, as a foundation for this discussion. So I think we opened that can of worms for sure.</p>
<p><strong>Tish Shute:</strong> Did you actually set ImageWiki up to work as a location-based app yet?</p>
<p><strong>Anselm Hook:</strong> It is a location-based app. It collects your longitude, latitude, and the image, and stores them. And then it uses that as a way to translate that image into anything else. It could be a piece of text or a URL.<br />
<strong><br />
Tish Shute:</strong> So there is a smartphone app, but you didn&#8217;t take it as far as an AR app yet?</p>
<p><strong>Anselm Hook:</strong> No. We didn&#8217;t do a heads-up view. There are apps on the iPhone store that do that, but they don&#8217;t do the brute force image recognition that we were using. We used a third-party, off-the-shelf algorithm that we found on Wikipedia, downloaded the source code, and threw it on the server. And John Wiseman in LA wrote the scalable database backend so that we could scale the actual&#8230;<br />
<strong><br />
Tish Shute:</strong> So how did you set the iPhone app up to work?</p>
<p><strong>Anselm Hook</strong>: The iPhone side was very simple. You take a picture of something and it tells you what it is. That is all it did. We would take the location, but the client side, the iPhone side, just rendered what was returned to you. It said, &#8220;Someone said that this picture of a barking dog is an advertisement for a local band.&#8221;</p>
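<p>The flow Anselm describes &#8211; store a geo-tagged image with an annotation, then translate a new photo back to that annotation by matching fingerprints &#8211; can be sketched in miniature. This is an illustrative toy, not ImageWiki&#8217;s actual code: the average-hash fingerprint and every name here are invented stand-ins for the off-the-shelf matching algorithm the team actually used.</p>

```python
# Hypothetical sketch of the ImageWiki flow described above: store a
# geo-tagged image fingerprint with an annotation, then translate a new
# image back to that annotation by fingerprint matching. Real systems use
# far more robust descriptors; all names here are invented stand-ins.

def fingerprint(pixels):
    """Average-hash a tiny grayscale image (a list of rows of 0-255 ints)."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return tuple(1 if p > avg else 0 for p in flat)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

class ImageWiki:
    def __init__(self):
        self.entries = []  # (fingerprint, lat, lon, annotation)

    def add(self, pixels, lat, lon, annotation):
        self.entries.append((fingerprint(pixels), lat, lon, annotation))

    def lookup(self, pixels, max_distance=2):
        """Return the annotation of the closest stored fingerprint, if any."""
        if not self.entries:
            return None
        fp = fingerprint(pixels)
        best = min(self.entries, key=lambda e: hamming(e[0], fp))
        return best[3] if hamming(best[0], fp) <= max_distance else None

wiki = ImageWiki()
wiki.add([[200, 40], [180, 30]], 45.52, -122.68,
         "advertisement for a local band")
print(wiki.lookup([[190, 50], [170, 40]]))  # similar image, same annotation
```

<p>The point of the sketch is the division of labor the interview goes on to discuss: the <code>add</code> side is community curation; the <code>lookup</code> side is the industrial matching problem.</p>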
<p><strong>Tish Shute:</strong> Right. So basically it was geo-tagged?</p>
<p><strong>Anselm Hook:</strong> Yes. We are just collecting the geo information. Actually, there were a whole lot of technical challenges. The whole idea of ImageWiki is actually kind of beyond the technical ability of a small team like ours. It really does take a group like Google to do this kind of thing in a scalable way.<br />
<strong><br />
Tish Shute:</strong> Why is that?</p>
<p><strong>Anselm Hook:</strong> There are two sides. There is curating the images. I think that is the job of groups like us &#8211; open source groups who can curate images <span id="vxty" title="Click to view full content">that are owned by the community. And then there is the searching side, the algorithm side, where you are actually matching the fingerprint of one image against the images in your database; that is much more industrial. We did both sides, but ours is not a scalable solution. Mostly, proving that it could be done was what mattered.<br />
</span><br />
<span id="a3ou" title="Click to view full content"><strong>Tish Shute: </strong>In terms of hooking ImageWiki up to the collaborative possibilities of AR Wave, wouldn&#8217;t federation pose some interesting possibilities for scaling search algorithms and all that?</span></p>
<p><span id="vp27" title="Click to view full content"><strong>Anselm Hook:</strong> Yes. And what is funny also, incidentally, is that, nevertheless, we did look for some financial support for it, but we couldn&#8217;t&#8230;we just didn&#8217;t find the investors to scale it. Now, other companies like SnapTell took a shot at it. And they have an app in the iPhone store where you can point at a beer bottle and get back the name of the beer bottle.</span></p>
<p>The classic example everyone uses is a book. Amazon has the jacket images of all their books. You can point SnapTell at almost any book and get back links to buy it at Amazon, the price of the book, and user comments on it. So they are treating Amazon as the canonical voice of the book, for better or worse. That was the state of the art until Google Goggles came out a little while ago, which actually blows it out of the water. But that is where we are now.</p>
<p><strong>Tish Shute: </strong>Right. But the point you raise about how something like Amazon becomes the canonical voice of what a book is &#8211; this is the whole point, isn&#8217;t it?</p>
<p><strong>Anselm Hook:</strong> Is Amazon truth? It&#8217;s not bad. Jeff Bezos seems like a nice guy, but, you know.</p>
<p><strong>Tish Shute:</strong> And this is the point of having these open infrastructures. It should be obvious, in a way, but it comes back to what made the Internet great: even though, as you note, you get an oligarchy like Facebook, people could always just go off and do something else, right? Because the fundamental infrastructure was basically open and designed to be available to everyone. And many people have championed that and fought hard [to maintain this openness], haven&#8217;t they? They have devoted their lives to keeping it that way, even as the oligarchies have done their thing.<br />
<strong><br />
Anselm Hook:</strong> Yes. There are really some things underneath all of this that haven&#8217;t been solved yet.</p>
<p>One is that trust in social networks has not been built yet, so we can&#8217;t do peer-based recommendations very well. We can&#8217;t filter noise by peers. Twitter is kind of moving there, but I don&#8217;t just want to listen to my Twitter friends. I want to listen to my friends of friends. If I am getting truth from somebody, I want to get that truth from people my friends say they trust.</p>
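<p>The friend-of-friend filter Anselm is asking for can be sketched as a short walk over a trust graph. This is a hypothetical illustration of the idea, not any real service&#8217;s API; the graph and names below are made up.</p>

```python
# Toy sketch of peer-based trust filtering: accept information only from
# people within a couple of "trust hops" of you (friends, and people your
# friends trust). The graph and names are invented for illustration.

def trusted_sources(graph, me, depth=2):
    """Everyone reachable from `me` within `depth` trust hops."""
    frontier, trusted = {me}, set()
    for _ in range(depth):
        frontier = {f for node in frontier for f in graph.get(node, ())}
        trusted |= frontier
    trusted.discard(me)
    return trusted

trust = {
    "me": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["dave"],
    "dave": ["mallory"],
}
print(sorted(trusted_sources(trust, "me")))
# mallory is three hops out, so she is filtered at the default depth
```

<p>A recommendation would then be kept only if its author appears in <code>trusted_sources</code>, which is exactly the &#8220;filter noise by peers&#8221; step the interview says is still missing.</p>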
<p>Then the second problem is that there is a search business. My friend Ed Bice, who owns <a id="lir5" title="Meedan" href="http://beta.meedan.net/">Meedan</a>, always says that a search request is a publishing moment. It is an opportunity to say what you think. In the real world, if you are just hanging out with humans and you look somewhere, other people might follow your gaze and look at what you are looking at. Your gaze itself is a public act.</p>
<p>Gaze is a soft act, but it is one that is visible. With Google, the gaze<span id="zuat" title="Click to view full content"> of four billion people is invisible. We don&#8217;t know what people are looking at, and there is no opportunity to participate. Let me give you a real example. Say I have taken an image of the bust of a figure, or a statue. Why can&#8217;t the museum in Cairo look at my request and tell me, oh yeah, that is Tutankhamen, or that is Nefertiti, right? Why can&#8217;t they have a chance to participate in the search and respond to me?</span></p>
<p><span id="zuat" title="Click to view full content"> Right now the only one that responds when I do a search is Google. We need to invert the search pyramid and open up search, so that search is a democratic act, so that you can publicly permission your searches, so that other people can respond and reach out to you &#8211; not just you conducting a dialogue. </span></p>
<p><span id="zuat" title="Click to view full content">The common example of this &#8211; and we see it everywhere &#8211; is: I am looking for a slice of pizza, right? I am hungry, I want some pizza. I have to ask Google to find twelve websites, call twelve phone numbers, talk to each of the twelve stores, and ask them: are they open late, is the food organic, is the food any good, do my friends like it?</span></p>
<p>Whereas what I should be able to do is just say it&#8217;s a search moment and I am interested in pizza. If a pizza place meets my criteria &#8211; you know, my friends like it, it&#8217;s organic, it&#8217;s open &#8211; then that pizza place can call me. I have the money; why should I do the search? So the whole business of search, the whole structure of search, is predicated on a revenue model, but it&#8217;s a really short-sighted revenue model; it&#8217;s not a brokerage.</p>
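<p>The inverted model Anselm sketches &#8211; publish the search request, let matching providers respond &#8211; can be illustrated with a toy broker. Nothing here reflects a real system; all names and criteria are invented for the example.</p>

```python
# Toy sketch of "inverted search": the seeker publishes criteria once,
# and providers respond if their offer matches. Purely illustrative;
# no real service implements this brokerage today.

class SearchBroker:
    def __init__(self):
        self.requests = []  # list of (criteria dict, callback)

    def publish_request(self, criteria, callback):
        """A seeker announces what they want; providers may answer later."""
        self.requests.append((criteria, callback))

    def announce_offer(self, offer):
        """A provider responds to every open request its offer satisfies."""
        for criteria, callback in self.requests:
            if all(offer.get(k) == v for k, v in criteria.items()):
                callback(offer)

responses = []
broker = SearchBroker()
broker.publish_request(
    {"kind": "pizza", "organic": True, "open_late": True},
    responses.append,
)
broker.announce_offer({"name": "Slice Co-op", "kind": "pizza",
                       "organic": True, "open_late": True})
broker.announce_offer({"name": "MegaPizza", "kind": "pizza",
                       "organic": False, "open_late": True})
print([r["name"] for r in responses])  # only the matching place "calls me"
```

<p>The design choice mirrors the quote: the seeker does no per-provider querying at all; the pizza place, not the hungry person, does the work of answering.</p>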
<p>Search isn&#8217;t search; search is hand waving. These should be moments for us to have a discourse. So the problem we are seeing in AR with communicating the right information actually sits underneath AR, at the level of the whole infrastructure.</p>
<p>Search needs to be inverted, and trust filters need to be built. We need to democratically own our data institutions. We don&#8217;t right now. That will become more of a concern, especially with AR.</p>
<p><strong>Tish Shute: </strong>Yes, especially with AR, which is why I got all excited about federation. Do you think federation has the potential, an opportunity, to create [the new infrastructure you describe]?</p>
<p><strong>Anselm Hook:</strong> Absolutely, it&#8217;s absolutely what we must do. It is much harder to do. It is absolutely critical.</p>
<p><span id="lwzk" title="Click to view full content"><strong>Tish Shute:</strong> And why is it much harder to do? Could you explain that?</span></p>
<p><strong>Anselm Hook:</strong> Well, it&#8217;s very easy for a bunch of hackers to build a service that you log into and fetch some data from; it&#8217;s a single thing. They don&#8217;t have to talk to anybody, they can use their own protocols, they can hack it; it&#8217;s a big black box behind the scenes. There&#8217;s someone running back and forth in a giant Chinese room delivering manuscripts and scrolls to you. Whatever is behind the black box, you don&#8217;t care; it just works. But when you federate, you need to actually publish and have standards, and then you&#8217;re talking about semantics, and everyone starts getting really excited and waving their hands. It becomes a disaster. It&#8217;s at least an order of magnitude more difficult than DIY, build-it-yourself.</p>
<p><strong>Tish Shute:</strong> So, in terms of what Google Wave has done with its approach to federation, what do you think its achievements have been, and what are its obstacles? What do you think are the failings of Wave? Because it&#8217;s the first major-player-backed public approach to something federated in real time, isn&#8217;t it?</p>
<p><strong>Anselm Hook:</strong> Yes. I think the most important non-federated service on the planet today is Twitter. <a id="uhg3" title="Ident.ic.a" href="http://identi.ca/group/identica">Identi.ca</a> isn&#8217;t getting any traction with respect to Twitter, [even though] Identi.ca is a federated version of Twitter and is very good. [Identi.ca is now <a id="w05j" title="Status.net" href="http://status.net/">Status.net</a>.] So we see already there that small players aren&#8217;t being competitive. Then look at other services like IRC. IRC is the secret backbone of the Net. All the open source projects, all the teams, all the people that work on open source projects are on IRC. It&#8217;s the only way they get anything done.</p>
<p>With Google Wave, and the protocols underneath Google Wave, we see an attempt to build a similar kind of real-time but distributed protocol. I think it&#8217;s the right direction. I think people should pick up the offering and make their own servers. I think that protocol is really great; the fact that it is compressed, high performance, <span id="md2h" title="Click to view full content">and small &#8211; real-time blobs of data flying around &#8211; is all exactly the way it should be done. It is getting close to this kind of rewrite of the Internet that people keep talking about, because, you know, the net protocols are so bad; it is starting to treat intermittent exchanges as more transitory, volatile, and not heavy.</span></p>
<p><strong>&#8230;to be continued. Part 2 coming soon!<br />
</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/feed/</wfw:commentRss>
		<slash:comments>17</slash:comments>
		</item>
		<item>
		<title>AR Wave: Layers and Channels of Social Augmented Experiences</title>
		<link>http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/</link>
		<comments>http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/#comments</comments>
		<pubDate>Tue, 13 Oct 2009 18:52:42 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Amphibious Architecture]]></category>
		<category><![CDATA[AR Blip]]></category>
		<category><![CDATA[AR Browser]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[augmentation]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Channels and Social Augmented Realities]]></category>
		<category><![CDATA[city sensing]]></category>
		<category><![CDATA[citizen sensing]]></category>
		<category><![CDATA[Clayton Lilly]]></category>
		<category><![CDATA[cybernetics vs ecology and human waste]]></category>
		<category><![CDATA[distributed]]></category>
		<category><![CDATA[eco mapping]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[geoAR]]></category>
		<category><![CDATA[geospatial web]]></category>
		<category><![CDATA[geospatial web and augmented reality]]></category>
		<category><![CDATA[Google Wave Federation Protocol]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Google Wave as an AR enabler]]></category>
		<category><![CDATA[Google Wave enable augmented reality]]></category>
		<category><![CDATA[Google Wave Protocols]]></category>
		<category><![CDATA[green tech augmented reality]]></category>
		<category><![CDATA[immersive sight]]></category>
		<category><![CDATA[Jeremy Hight]]></category>
		<category><![CDATA[Joe Lamantia]]></category>
		<category><![CDATA[Layers]]></category>
		<category><![CDATA[layers and channels of augmented reality]]></category>
		<category><![CDATA[Life Clipper]]></category>
		<category><![CDATA[life streaming]]></category>
		<category><![CDATA[location based media]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[locative media]]></category>
		<category><![CDATA[locative narratives]]></category>
		<category><![CDATA[Mannahatta]]></category>
		<category><![CDATA[map based augmentation]]></category>
		<category><![CDATA[mapping]]></category>
		<category><![CDATA[modulated mapping]]></category>
		<category><![CDATA[multi-user]]></category>
		<category><![CDATA[narrative archaeology]]></category>
		<category><![CDATA[Natural Fuse]]></category>
		<category><![CDATA[neogeography]]></category>
		<category><![CDATA[networked urbanism]]></category>
		<category><![CDATA[non euclidian geometry]]></category>
		<category><![CDATA[open augmented reality framework]]></category>
		<category><![CDATA[Senseable Labs]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[shared augmented realities]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[social augmented reality experiences]]></category>
		<category><![CDATA[sound augmentation]]></category>
		<category><![CDATA[Thomas K. Carpenter]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Trash Track]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[virtual reality]]></category>
		<category><![CDATA[Wave as a platform for augmented reality]]></category>
		<category><![CDATA[Wave Blip]]></category>
		<category><![CDATA[Wave Bots]]></category>
		<category><![CDATA[Wave playback]]></category>
		<category><![CDATA[Wave playback feature]]></category>
		<category><![CDATA[Wave Robots]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4585</guid>
		<description><![CDATA[It is now nearly two weeks since the Google Wave preview launch and I am happy to say we have some AR Wave news. The diagram above shows Thomas Wrobel&#8217;s basic concept for a distributed, multi-user, open augmented reality framework based on the Google Wave Federation Protocol and servers (click on the image to see [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://lostagain.nl/tempspace/PrototypeDiagram3_wave.html" target="_blank"><img class="alignnone size-medium wp-image-4586" title="Screen shot 2009-10-12 at 2.40.39 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-12-at-2.40.39-PM-300x154.png" alt="Screen shot 2009-10-12 at 2.40.39 PM" width="300" height="154" /></a></p>
<p>It is now nearly two weeks since the <a href="http://wave.google.com/" target="_blank">Google Wave </a>preview launch and I am happy to say we have some AR Wave news. The diagram above shows Thomas Wrobel&#8217;s basic concept for a distributed, multi-user, open augmented reality framework based on the <a href="http://www.waveprotocol.org/" target="_blank">Google Wave Federation Protocol</a> and servers (click on the image to see the dynamic annotated sketch <a href="http://lostagain.nl/tempspace/PrototypeDiagram3_wave.html" target="_blank">or here</a>).</p>
<p>Even in the short time we have had to explore Wave, some very exciting possibilities are becoming clear. Thomas puts some of the virtues of Wave as an AR enabler succinctly when he writes:</p>
<p><strong>&#8220;Wave allows the advantages of real-time communication as well as the advantages of persistent hosting of data. It is both like IRC and like a wiki. It allows anyone to create a wave and share it with anyone else. It allows waves to be edited at the same time by many people, or used as a private reference for just one person.</strong></p>
<p><strong>These are all incredibly useful properties for any AR experience; more so, as Wave is open. Anyone can make a server or client for Wave. Better yet, these servers will exchange data with each other, providing a seamless world for the user&#8230; a single login will let you browse the whole world of public waves, regardless of who&#8217;s providing or hosting the data. Wave is also quite scalable and secure&#8230; data is only exchanged when necessary, and will stay local if no one else needs to view it.</strong></p>
<p><strong>Wave allows bots to run on it&#8230; allowing blips in a wave to be automatically updated, created, or destroyed based on any criteria the coders choose. Wave even allows the playback of all edits since the wave was created.</strong></p>
<p><strong>For all these reasons and more, Wave makes a great platform for AR.&#8221;</strong></p>
<p>There will be much more <span>coming soon on Wave enabled AR because the Google Wave invites have begun to flow out to a wider community now. This week, many of our small ad-</span>hoc group looking at the development challenges and implications of Google Wave for AR actually got into Wave for the first time.</p>
<p>Many thanks to all the people who have contributed to this discussion so far including: Thomas Wrobel, Thomas K. Carpenter, Jeremy Hight, Joe Lamantia, Clayton Lilly, Gene Becker and many others.</p>
<p>We will be setting up some public AR Framework Development Waves this week. If you have any trouble finding them, or adding yourself to them, please add Thomas and me to your contact list. I am tishshute@googlewave.com. Thomas is darkflame@googlewave.com. The first two are currently called:</p>
<p><strong><br />
AR Wave: Augmented Reality Wave Framework Development</strong> (developer forum)</p>
<p><strong>AR Wave: Augmented Reality Wave Development</strong> (for general discussion)</p>
<p>The discussion so far has been in two areas. On the one hand, it is gear-heady and focused on the <a href="http://www.waveprotocol.org/" target="_blank">Google Wave Federation Protocol</a>, code, development challenges, and interfacing to mobile, while on the other hand people have been looking at use cases and questions of user experience.</p>
<p>Distributed &#8220;shared augmented realities,&#8221; or &#8220;social augmented experiences&#8221; &#8211; which not only allow mashups &amp; multisource data flows, but dynamic overlays (not limited to 3D), created by users, linked to location/place/time, and distributed to other users who wish to engage with the experience by viewing and co-creating elements for their own goals and benefit &#8211; are something very new for us to think about.</p>
<p>As Joe Lamantia puts it:</p>
<p><strong>&#8220;there&#8217;s a feedback loop between which interactions are made easy by any given combo of device / hardware / software / connectivity, and the ways that people really work in real life (without any mediation / permeation by tech).&#8221;</strong></p>
<p>Joe Lamantia, whose term <strong>&#8220;social augmented experiences&#8221;</strong> I borrow for this post title, has done some thinking about <strong>&#8220;concepts and models for understanding and contributing to shared augmented experiences, such as the social scales for interaction, and the challenges attendant to designing such interactions.&#8221;</strong> Check out <a href="http://www.joelamantia.com/" target="_blank">Joe Lamantia&#8217;s blog</a> for more on this later this week.</p>
<p>It is very helpful, as Joe points out, to shift the focus back and forth between the experience and the medium.</p>
<p>It is super exciting to have clear evidence that shared augmented realities are no longer merely possible, but highly probable and actually do-able now.</p>
<p>I should be absolutely clear about what Google Wave does to enable AR, because obviously Wave plays no role in solving image recognition and tracking/registration issues. But Wave protocols and servers do provide a means to exchange, edit, and read data, and that enables distributed, social augmented realities.</p>
<p>Thomas explains how the newly named &#8220;AR Blip&#8221; works:</p>
<p><strong>&#8220;An AR Blip is simply a blip in a wave containing AR data. Typically this would be the positional and URL data telling an AR browser to position a 3D object at a location in space.</strong></p>
<p><strong>In more generic terms, an AR Blip allows data of various forms (meshes, text, sound) to be given a real-world position.&#8221;</strong></p>
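<p>Under the description above, an AR Blip payload might look something like the following sketch. No AR Blip wire format was ever standardized, so the field names and JSON encoding here are purely illustrative assumptions, not the real Wave data model.</p>

```python
# Minimal sketch of the "AR Blip" idea: a payload that gives a piece of
# media (a mesh URL, text, or sound) a real-world position. Field names
# and the JSON encoding are invented for illustration only.
import json
from dataclasses import dataclass, asdict

@dataclass
class ARBlip:
    lat: float        # latitude of the anchor point (WGS84)
    lon: float        # longitude
    alt: float        # metres above ground level
    media_type: str   # "mesh", "text", or "sound"
    media_url: str    # where an AR browser fetches the artifact

    def to_wire(self) -> str:
        """Serialize for embedding in a wave blip."""
        return json.dumps(asdict(self))

    @staticmethod
    def from_wire(data: str) -> "ARBlip":
        """Reconstruct a blip a client received over federation."""
        return ARBlip(**json.loads(data))

blip = ARBlip(40.7128, -74.0060, 2.0, "mesh",
              "http://example.com/statue.obj")
wire = blip.to_wire()
print(ARBlip.from_wire(wire) == blip)  # round-trips intact
```

<p>Because the payload is just structured data in a blip, everything Wave already provides &#8211; federation, concurrent editing, playback &#8211; would apply to it for free, which is the point Thomas is making.</p>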
<p>I have mentioned in other posts (<a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">here</a> and <a href="http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/" target="_blank">here</a>) that Wave can be used for AR as precise or as loose as the current generation of devices can handle. And as the hardware and software mature for the kind of AR that can put media out in the world and truly immerse you in a mixed space, the framework should be able to handle that too.</p>
<p>(A note on the Wave playback feature &#8211; this opens up a whole new world of possibilities. Check out <a href="http://snarkmarket.com/2009/3605" target="_blank">this post</a> on some of the implications of playback for writing!)</p>
<p>The use cases we have been coming up with are too numerous to go into in detail in this post<span>. The open nature of an AR framework/Wave standard will lead to many new applications we have barely begun to imagine. As Thomas points out, different client software can be made for browsing, potentially allowing for various specialist browsers, as well as more generic ones for typical use. T</span>he multitudes of different kinds of data input/output that could be integrated into an open AR framework as it evolves are mind-boggling.</p>
<p>But, for now, some obvious use cases do come to mind,<br />
e.g.:</p>
<p>- Historical environmental overlays showing how a city used to be/and how this vision may be constructed differently by different communities</p>
<p>- Proposed building work showing future changes to a structure and the negotiation of that future (both the public and professionals could submit comments on the plans in context), plus views of pipes, cables, and other invisible elements that help builders and engineers collaborate and do their work</p>
<p>- Skinning the world with interactive fantasies</p>
<p>I asked Thomas to help people understand how Wave enables new interactions with data by explaining how Wave could enable city sensing and citizen sensing projects (e.g. <a href="http://tinyurl.com/y97d5zr" target="_blank">this one being pioneered by Griswold</a>):</p>
<p><strong><strong>&#8220;Sensors, both mobile and static could contribute environmental data into city overlays;</strong></strong></p>
<div><strong><strong>&#8211; temperature, windspeed, air quality (amounts of certain particles), water quality, amount of sunlight, CO2 emissions could all be fed into different waves. The AR Wave framework makes it easy to see any combination of these at the same time.&#8221;</strong></strong></div>
<div><strong><strong><br />
</strong></strong></div>
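<p>The overlay combination Thomas describes can be sketched as a simple merge of per-location readings from separate sensor waves. The dictionaries below are invented stand-ins for real wave data structures, kept deliberately tiny.</p>

```python
# Toy sketch of combining sensor "waves" into one city overlay: each wave
# maps a (lat, lon) location to readings for one metric, and a client
# merges whichever waves it wants to view together. All data is made up.

def merge_overlays(*waves):
    """Merge several {location: {metric: value}} waves per location."""
    overlay = {}
    for wave in waves:
        for loc, readings in wave.items():
            overlay.setdefault(loc, {}).update(readings)
    return overlay

temperature = {(40.71, -74.00): {"temp_c": 21.5}}
air_quality = {(40.71, -74.00): {"pm25": 38},
               (40.73, -73.99): {"pm25": 12}}

city = merge_overlays(temperature, air_quality)
print(city[(40.71, -74.00)])  # both metrics visible at one location
```

<p>A viewer could call <code>merge_overlays</code> with any subset of waves, which is the &#8220;see any combination of these at the same time&#8221; property the quote highlights.</p>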
<p>Having these invisible aspects of the world made visible would create ways to improve sustainability, social equity, urban management, energy efficiency, and public health, and would allow communities to understand and become active participants in the ecosystems and infrastructure of their neighborhoods.</p>
<p>The key is reflecting this kind of data back to people &#8220;making it not back story but fore story,&#8221; right where we are, right where it happens, as well as having it available for analysis.</p>
<p>As well as creating new opportunities to interact with, respond to, and enhance data, making visible the invisible, as <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko&#8217;s</a> work on <a href="http://www.amphibiousarchitecture.net/" target="_blank">Amphibious Architecture</a> and <a href="http://www.haque.co.uk/" target="_blank">Usman Haque&#8217;s</a> project <a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a> show, can also create new connections and understandings between humans and the non-humans that share our world, e.g. fish, plants, waterways.</p>
<p>At a more prosaic level, potential buyers of property could see more clearly what they are buying, city planners could see better what needs to be worked on, and environmental researchers could see more clearly the impact people are having on an area.</p>
<p>Also, Wave can provide some of the framework necessary to begin to address the tricky problems of privacy. Sensitive data can be stored on private waves, e.g. medical data for doctors and researchers, but the analysis of the data could still benefit everyone, e.g., if it tied disease occurrences to locations and the relationships between environmental data and health were, quite literally, made visible.</p>
<p><strong>&#8220;The publication of energy consumption, and making it visible as overlays, could help influence the public into supporting more energy-efficient companies and businesses. It could also help citizens try to keep their own energy usage down, to try to keep their street in the green.&#8221;</strong></p>
<p>Thomas notes:</p>
<p><strong>&#8220;With all of the above, it becomes fairly trivial to write persistent Wave-bots that automatically send notice when certain criteria are met (pollutants over a certain level, for example). On publicly readable waves, anyone can use the data on their local computers, process it, and contribute results back on a new wave. Alternatively, persistent remote servers could run cron jobs, or other automated processing, using services such as App Engine to run wave robots.</strong></p>
<p><strong>All these possibilities become &#8220;free&#8221; when using Wave as a platform for geographically tied data.&#8221;</strong></p>
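<p>As a concrete miniature of the kind of threshold check such a persistent Wave-bot could run, here is a sketch in Python. The reading format, field names, and the PM2.5 limit are illustrative assumptions, not part of any real Wave API:</p>

```python
# Sketch of the alerting check a persistent wave-bot could run over
# geo-tagged sensor readings posted to a public wave. The dict layout
# and the threshold are hypothetical, chosen only for illustration.

PM25_LIMIT = 35.0  # assumed alert threshold, micrograms per cubic meter

def readings_over_limit(readings, limit=PM25_LIMIT):
    """Return the readings whose pollutant level exceeds the limit."""
    return [r for r in readings if r["pm25"] > limit]

if __name__ == "__main__":
    # Two fake geo-tagged readings, as a bot might see them on a wave.
    wave_readings = [
        {"lat": 40.71, "lon": -74.00, "pm25": 12.3},
        {"lat": 40.72, "lon": -74.01, "pm25": 48.9},
    ]
    for alert in readings_over_limit(wave_readings):
        print("pollutant over limit at (%.2f, %.2f): %.1f"
              % (alert["lat"], alert["lon"], alert["pm25"]))
```

<p>A real bot would subscribe to wave updates and reply in place; the filtering step itself is this simple.</p>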
<p>But of course this is just the beginning!</p>
<p><em>Recently, I talked at length with Jeremy Hight, who has been thinking about, designing and creating shared augmented realities that anticipate the kind of dynamic, real time, large scale architecture we now have available through Wave, for quite some time. This is exciting stuff.</em></p>
<h3><strong>Modulated Mapping:</strong> Talking with Jeremy Hight about Layers, Channels and Social Augmented Experiences</h3>
<p><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping5.jpg"><img class="alignnone size-medium wp-image-4611" title="modulatedmapping5" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping5-230x300.jpg" alt="modulatedmapping5" width="230" height="300" /></a><br />
</strong></strong></p>
<p><strong><strong><em><span>image from Volume Magazine (Hight/Wehby)</span></em></strong></strong></p>
<p><strong><strong>Tish Shute:</strong></strong> I know you have been involved in locative media from its early days. Perhaps we can talk about how AR continues the locative media journey?</p>
<p><a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> gave me this distinction, recently:<em> &#8220;AR is about systems that put media out in the world, and immerse you in a mixed space. Even the current &#8220;not really registered&#8221; mobile phone AR systems are still &#8220;sort of&#8221; AR (e.g., Layar, etc).</em></p>
<p><em>Locative media/ubicomp/etc are very different, in that they tend to display media on a device (phone screen) that is relevant to your context, but does not attempt to merge it with the world.<br />
The difference is significant, and making it clear helps people think about what they do and what they want to do, with their work. The locative media space though points toward future AR systems (when the technology catches up!).&#8221;</em></p>
<p><strong><strong>Jeremy Hight: The need is to finish the arc that locative media and early AR have started and to now truly return to the map itself, but as an internet of data, interactivity, channels of data, end user options like analog machines once were but in high end tools, a smart AI-ish ability for it to cull data for the user, and to allow social networking to be in real world places on the map both in building augmentation and in using and appreciating it..not hacks..which have their place&#8230;but a rhizome, a branched system with shared root, end user adjustable and variable..this is the key.</strong></strong></p>
<p><strong><strong>This takes AR and mapping and makes a possible world of channels in space and this eventually can be a kind of net we see in our field of vision with a selected percentage of visual field and placement so a geo-spatial net, a local to world wide fusion of lm into a tool and educational tool</strong></strong></p>
<p><strong><strong><span>VR [virtual reality] has greatly advanced, but in nodes as it has limitations&#8230;LM [locative media] is the same&#8230;AR [augmented reality] is the way..</span></strong><strong> it now has locative elements and aspects of VR integrated into its functionality and nodes&#8230;it is the best option with all of these elements, greater hybridity and data level potential as well as end user and community sourcing potential</strong></strong></p>
<p><strong><strong>I wrote an essay for Archis&#8217; Volume, the architecture magazine on a near future sense of some of this&#8230;.a visual net on the lens like ar but with smart objects and social networking and dissent.</strong></strong></p>
<p><strong><strong>I also wrote of these things for immersive graphic design, spatially aware museum augmentation, education through ar and lm, and a nod to the base interface of eye to cerebral cortex in layered and malleable augmentation in my essay <a href="http://www.neme.org/main/645/immersive-sight" target="_blank">&#8220;Immersive Sight&#8221;</a> a few years back</strong></strong></p>
<div id="gqg9" style="text-align: left;"><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_3dj7g8zf7_b.jpg"><img class="alignnone size-medium wp-image-4601" title="dgznj3hp_3dj7g8zf7_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_3dj7g8zf7_b-300x225.jpg" alt="dgznj3hp_3dj7g8zf7_b" width="300" height="225" /></a></strong></div>
<p><strong><strong>image [above] is simple illustration of a possible example on a screen or in front of eye where in a mondrian show..the graphic design of information actually builds as one moves</strong></strong></p>
<p><strong><strong>(key is calibrated spatial intervals and related layers of further augmentation which is logical due to location and proximity)</strong></strong></p>
<p><strong><strong>from immersive sight on immersive graphic design:</strong> <em>&#8220;The design can work with this in a way that creates an interactive supplemental set of information that is malleable, shifts based on location, builds and peels away as one moves closer to a work and plays with the forms of the works and the elements of the space itself. The sequence can contain many different elements and their interplay (both in the field of vision and in terms of context and layers of information). This is the model of sections of augmentation turning on and off at key points as individual spatial and concepts moments and nodes.</em></strong></p>
<p><strong><em>Another interesting possibility is that individual points of augmentation don&#8217;t turn off, but instead are designed to build as one moves in a direction toward a specific part of the exhibit. The design can work in a sequence both content wise and visually in terms of a delay powered compositional development and style in which each discrete layer of text and image does not fade out, but builds on each other into a final composition. This can form paintings similar to Mondrian perhaps if it is a show of similar works of that era, or it can form a much more metaphorical and open interpretation of the space and content, but utilizing a sense of emergence spatially in terms of the composition (pieces laid bare until final approach for effect). </em></strong></p>
<p><strong><em>Each section will be well designed, but they build in layers as one moves until finally forming the final composition both visually and in terms of scope of information or building immediacy. The effect can be akin to taking a painting and slicing it into onion skin layers laid out in the air at intervals, each the same dimensions, but only one section compositionally of the greater whole. This has many semiotic applications beyond its potential aesthetically and as spatialized information possessing a sense of inter-relationship as one moves.</em>&#8220;</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>One of the things I found very inspiring when I read your papers was that your ideas are not all dependent on a model of AR that would necessarily require goggles, backpacks and lots of CPU/GPU &#8211; not that that wouldn&#8217;t be nice &#8211; but that even using the &#8220;magic lens&#8221; AR of the kind smart phones have enabled, in an open distributed framework, would open up a lot of new possibilities for what you call modulated mapping, wouldn&#8217;t it? What kind of social augmented realities might be enabled by a distributed infrastructure like this [AR Wave]?</p>
<p><strong><strong>Jeremy Hight: right&#8230;.I see that as wayyy down the road&#8230;most important is the one you talk about as it is more immediate and thus more essential and needed. Eventually the goggles will be like a contact lens and a deep immersive ar version of this will come, that to me is certain, but a ways down the road. An incredible amount is possible now, and this is a more pragmatic move as opposed to the more theoretical of what is a few steps from here. Thus it is more important and essential now. Tools like Google Wave are taking what even 2 years ago were more theoretical discussions of what may be and instead introducing key elements to a more immediate, powerful, flexible level of augmentation. What have been hacks and isolated elements are to be integrated: social networking, task completion, shared tools, graphics building and geo-location.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>I think some people question what augmented reality has to bring to the continuum of location based experiences that other forms of interface/mapping do not?</p>
<p><strong><strong><span>Jeremy Hight: right&#8230;.and the schism between its commercial </span></strong><strong>flat self and tests with physics etc and in between&#8230;there are a lot of unfortunate assumptions it seems as to where ar and lm cross and how ar can be many things beyond deep immersion or the opposite pole of a hockey puck having a magic purple line etc&#8230;.like lm is seen as either car directions or situationist experiments with deep data&#8230;..the progression to me is deeply organic&#8230;.and now augmentation can be more malleable, variable and end user controlled.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yes, it is a really exciting time for AR. Historically, AR research has gone after the hard problems of image recognition, tracking and registration because we have not had dynamic, real time, large scale architectures like Wave available (until now!), so less work has been done on exploring the possibilities for distributed AR fully integrated with the internet and WWW, hasn&#8217;t it?</p>
<p>A distributed augmented reality framework such as we have envisaged on Wave would allow people to see many layers from many different people at the same time. And this kind of model has been part of your thinking and fundamental to your work for a while, hasn&#8217;t it? But it is a very new idea to most people to think about collaboratively editing layers on the world, and to be able to view augmented space through channels and networked communities. Could you explain some of the ways you have explored these ideas and how they could be explored further now to create meaningful experiences for people?</p>
<p><strong><strong><span>Jeremy Hight: right..exactly&#8230;modulated mapping to me can be an amazing tool for students&#8230;back end searching data visualizations and augmentations based on their needs&#8230;while they do something else on their computer or iphone&#8230;that can be amazing..and not deep </span></strong><strong>immersive. The map can be active, malleable, open source fed, and even, in a sense, intelligent and able to adapt. The possibility also exists for this map to have a function that, based on key words, will search databases on-line to find maps, animations, histories and stories etc. to place within it for your study and engagement. The map is thus a platform and yet is active. Community is possible as people can communicate graphically in works placed on the map and in building mode in the tool. All the tropes of locative media are to be in a mapping system of channels of augmentation and a spatial net. The software by design will allow development on the map and communication like programs such as Second Life, but in mapping itself.</strong></strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modultedmapping1.jpg"><img class="alignnone size-medium wp-image-4607" title="interactive 3d map copy" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modultedmapping1-246x300.jpg" alt="interactive 3d map copy" width="246" height="300" /></a></strong></p>
<p><strong><strong><em><strong><span>image from Parsons Journal of Information Mapping Volume 2 (Hight/Wehby)</span></strong></em></strong></strong></p>
<p><strong><strong><span>I wrote an essay a few years ago for the Sarai Reader questioning the traditional map and its semiotics and the need to reconsider &#8211; then did work looking into it and what those dynamics were, and they got into 2 group shows in museums in Russia&#8230;so it actually was my arc toward modulated mapping&#8230;an interesting way to it! But yes the map itself..this is a huge area of potential and non screen based alone navigation etc. I see now that my 2 dozen or so essays in lm, ar, interface design and augmentation have all also been leading in this direction for about 10 years now</span></strong></strong></p>
<p><strong><strong>Tish Shute: </strong>I love immersive visualization, but can we &#8220;return to the map &#8211; the internet of data&#8221; as you mentioned earlier and produce interesting augmentation experiences that go beyond locative media&#8217;s device display mode without having the goggles, for example through the magic lens of our smart phones?</strong></p>
<p><strong><strong>Jeremy Hight: yes, absolutely. the map in the older paradigm is an artifice born often of war and border dispute and not of the earth itself and its processes&#8230;the new mapping like google maps is malleable, can be open source, can read spaces and can be layers of info in the related space not plucked from it as in the past..this is amazing. the old map also was born of false semiotics/semantics like &#8220;discovery of new lands&#8221; or &#8220;pioneer&#8221; while the places were there already and names often were of empire&#8230;now this is no longer the case</strong></strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping2.jpg"><img class="alignnone size-medium wp-image-4608" title="jeremy map small2 copy" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping2-300x233.jpg" alt="jeremy map small2 copy" width="300" height="233" /></a></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>So geoAR is a better way to express a new social relationship to mapping? And how does this fit into the arc of locative media evolving into augmented reality?</p>
<p><strong><strong>Jeremy Hight:&#8230;early lm was mostly geocaching and drawing with gps..it took new paradigms to invigorate the field. A lot of folks focus on tools and what already is; cross pollination can ground ideas that are more radical&#8230;a metaphor in a sense to place what can be in a familiar context.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>One of the great disappointments in VR has been its isolation from networked computing, and also, up to now, augmented reality &#8211; to achieve an immersive experience with tight registration of media/graphics, you have had to create a separate system isolated from the internet and the power of the web.</p>
<p><strong><strong>Jeremy Hight: yes&#8230;.this will change. vr is to me an island but ar takes a part of it and shifts the paradigm and new things open this way. Do you know the project <a href="http://www.lifeclipper.net/EN/process.html" target="_blank">&#8220;life clipper&#8221;</a>? friends of mine..doing interesting things..they are a clear bridge between lm and ar&#8230;.and from vr</strong></strong></p>
<p><strong><strong>in ar augmentation and what is being augmented become fused or in collision or in complex interactions as a means to a larger contextualization and exploration of what is being augmented..this is true in immersive or non ar&#8230;.huge potential</strong></strong></p>
<p><strong><strong>vr is a space, now it can be surgery, which is amazing. but not layered interaction, thus an island. and graphic iconography on a location can use symbolic icons which opens up even more layers (graphic designer/information designer in me talking there I suppose..)</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yes! Talk to me more about layers and channels. I think this is one of the most interesting questions for me in augmented reality at the moment &#8211; what can we do with layers and channels, and what new possibilities for connections between people and environments can these create?</p>
<p>The ability for anyone to post something is critical to the distributed idea but one of the reasons I am so excited by Google Wave is I am fascinated by the playback function. How do you think this will enable new forms of collaborative locative narratives (<a href="http://snarkmarket.com/2009/3605" target="_blank">nice post on Wave playback here </a>).</p>
<p><strong><strong>Jeremy Hight: We are in an age of cartographic awareness unseen in hundreds of years. When was the last time that new mapping tools were sold in chain stores and installed in most vehicles? When was the last time that the augmentation of maps was done by millions (Google map hacks, etc)? The ubiquitous gps maps run in automobiles while people post pictures and graphic pins to denote specific places on on-line maps.</strong></strong></p>
<p><strong><strong>The need is for a tool that combines all of these new elements into an open source, intuitive, layered and rhizomatic map that is porous (like pumice, organic in form yet with &#8220;breathing room&#8221;), ventilated (i.e. adjustable, a flow in and out), and open (open source, open access, open spatialized dialog).</strong></strong></p>
<p><strong><strong><span>I wrote of this in my essay &#8220;Revising the Map: Modulated Mapping and the Spatial Interface&#8221; (</span></strong><a id="h0qr" href="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf"><span>http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf</span></a>).</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping3.jpg"><img class="alignnone size-medium wp-image-4609" title="jeremy map small2 copy" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping3-300x206.jpg" alt="jeremy map small2 copy" width="300" height="206" /></a></strong></p>
<p><strong><em><strong><span>image from Parsons Journal of Information Mapping (Hight/Wehby)</span></strong></em></strong></p>
<p><strong><strong>Tish Shute:</strong></strong> One mapping project I really like is <a href="http://themannahattaproject.org/" target="_blank">Mannahatta</a>.Â  How could distributed AR contribute to a project like <a href="http://themannahattaproject.org/" target="_blank">Mannahatta</a>?</p>
<p><strong><strong>Jeremy Hight: that is a good example..imagine taking manhattan and having channels of options to overlay, that being an excellent option, and imagine being able to even run a few at once with delineating icons..you can augment a space with history, data, erasure, narrative, scientific analysis, time line of architecture, infrastructure, archaeological record etc&#8230;.endless possibilities, and this agitates place and place on a map into an active field of information with end user control&#8230;and open options for new layers</strong></strong></p>
<p><strong><strong>Tish Shute: </strong></strong>and do you think we could do interesting things with AR on a project like Mannahatta even with the current mediating devices we have available &#8211; i.e. our smart phones &#8211; as obviously the rich pc experience Mannahatta has built for its web interface would not be available as AR at this point?</p>
<p><strong><strong>Jeremy Hight: yes&#8230;.k.i.s.s right? these projects do not have to only be immersive and graphic intensive&#8230;&#8230;take how people upload photos onto google maps&#8230;.just make that on a menu of options, there are some pretty cool hacks already..<br />
&#8230;options is key, a space can have a community as well, building on it in software, and others navigating it, i see it near future and down the road..always have with ar really</strong></strong></p>
<p><strong><strong><a href="../wp-content/uploads/2009/10/locativenarratives1.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/locativenarratives1.jpg"><img class="alignnone size-medium wp-image-4596" title="locativenarratives1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/locativenarratives1-230x300.jpg" alt="locativenarratives1" width="230" height="300" /></a><br />
</strong></strong></p>
<p><strong><em><strong><span>image from Volume Magazine (Hight/Wehby)</span></strong></em></strong></p>
<p><strong><strong>Jeremy Hight: and yes, a lot of people focus on ar&#8217;s limitations and see its processing power needs as a major road block</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>so do you see AR on smart phones adding any value to a project like Mannahatta?</p>
<p><strong><strong>Jeremy Hight: yes&#8230;that it can be integrated into other similar works and even disparate but cloud linked ones&#8230;so a place can be &#8220;read&#8221; in diff ways on the iphone&#8230;.beyond its map location, and more can be possible if you are there&#8230;others away, so it becomes channels of augmentation</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>AR, like locative media, puts who you are, where you are, what you are doing, and what is around you center stage in online experience, but it also &#8220;puts media out in the world&#8221; &#8211; people, I think, understand this well as a single user experience, but we are only just beginning to think about how this will manifest as a social experience &#8211; could you explain more about modulated mapping as an experience of social augmentation?</p>
<p><strong><strong>Jeremy Hight: Modulated Mapping is a tool that will allow channels to be run along the map itself. This will allow one to view different icons and augmentations both as systems on the map and in deeper layers of information (photos, videos, animations, visualizations, etc.) that can be turned on and off as desired. The different layers of icons and data may be history, dissent, artworks, spatialized narratives, and annotations developed that are communally based on shared interests, placed spatially and far beyond. The use of chat functionality in text or audio will be open in building mode and in mapping navigation/usage as desired. This also allows a community to develop or augment in the spaces on the earth. These nodes can be larger and open or small and set by groups in their channel. The end result is an open source sense of mapping that will also have a needed sense of user control, as one can select which layers of augmentation they wish to see and interact with at any time. It also will incorporate all the functionality of locative media in mapping software and mapping. In building mode and in map mode, icons will be coded to represent within channels (remember that the person using it has selected channels of augmentation from many based on their current interests and needs).
Icons will be coded as active to show work in progress in cities and the globe, both to invite participation and to further agitate the map from the sense of the static, as action is visible even in its icons as people are working and community is formed in common interest/need.</strong></strong></p>
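<p>To make the channel model concrete, here is a minimal Python sketch of layers toggled on and off over one shared set of geo-placed annotations. All field names and channel names are hypothetical, invented purely for illustration:</p>

```python
# Sketch: each map annotation belongs to a channel ("history",
# "dissent", "artworks", ...); a viewer sees only the annotations whose
# channel is in their currently enabled set. Field names are invented.

def visible_annotations(annotations, enabled_channels):
    """Filter geo-placed annotations down to the user's chosen channels."""
    return [a for a in annotations if a["channel"] in enabled_channels]

if __name__ == "__main__":
    annotations = [
        {"channel": "history",  "lat": 34.05, "lon": -118.24, "text": "1920s streetcar line"},
        {"channel": "artworks", "lat": 34.05, "lon": -118.25, "text": "mural, 1998"},
        {"channel": "dissent",  "lat": 34.06, "lon": -118.24, "text": "1968 protest route"},
    ]
    # A user who has turned on only the "history" and "dissent" channels:
    for a in visible_annotations(annotations, {"history", "dissent"}):
        print(a["channel"], "-", a["text"])
```

<p>Everything else Hight describes, building mode, chat, community nodes, layers on top of this, but the user-controlled view reduces to selecting which channels pass the filter.</p>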
<p><strong><strong>locative media got a buzz for &#8220;reading&#8221; places&#8230;when I helped create locative narrative that was what blew me away back in 2001&#8230;that we could give places a voice by placing data from research and icons on a map&#8230;&#8230;this meant lost history or augmentation was possible as kind of voices of a place and its layers&#8230;&#8230;.I called it &#8220;narrative archaeology.&#8221; We now have tools that can push these ideas and concepts farther..much farther&#8230;and with a range beyond what was before, and then the map was just a tool&#8230;.but now we are returning to the map itself&#8230;..and this as place as much as marker..this is where ar takes the ball to use a bad metaphor</strong></strong></p>
<p><strong><strong>also that project could only work if you came to our spot of a 4 block augmentation and with us there to lend you our gear&#8230;we are far beyond that now but it had its place</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>How do you see &#8220;in context&#8221; AR and something we might call &#8220;context aware&#8221; cloud computing models interacting?</p>
<p><strong><strong>Jeremy Hight: sure&#8230;and I must add that I have issues with cloud computing as much as it is a good idea&#8230;</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>because of loss of autonomy?</p>
<p><strong><strong>Jeremy Hight: tivo is simply a hard drive&#8230;but it keyword reads and gives suggestions..that is the cro-magnon link to what can be</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>The nice thing about Wave is that, because of the federation model, the cloud model and local &#8220;store your own data&#8221; models should work together.</p>
<p><strong><strong><span>Jeremy Hight: yes..that is better&#8230;..loss of autonomy also opens up the arbitrary, which is the flaw of search engines as we know them&#8230;even Bing fails to me in that sense</span></strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>how do you mean, could you explain?</p>
<p><span> </span><strong><strong><span>Jeremy Hight: spidersÂ  cull from wordsÂ  but cull like trawlers at sea â€¦. tested Bing with very specific requests.. it spat out the same mass of mostly off topic resultsâ€¦.</span><br />
<span> I wonder if there is a way to cull from key words and topics from a userâ€¦not O</span>rwellian back end of courseâ€¦but from their preferences, their searches etc..</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>did you see the discussion on search in the AR Framework doc? AR search will be a massively important thing that will take a lot of intelligence and all sorts of algorithm development won&#8217;t it?</p>
<p><strong><strong>Jeremy Hight: It also has one area of key functionality that moves into more intuitive software. Upon continued usage, the mapping software will &#8220;learn&#8221; and search based on key words used and spheres of interest the user is mapping or observing as mapped, and will integrate deeper data and types of animations, etc. into the map, or will have them waiting to be integrated upon user approval as desired. Over time the level of sophistication of additions and of search intuition will increase dramatically. The search can also, if the user wishes, run in the back end while working in the mapping program, or in off time as selected while doing other tasks. It also can simply never be used if one is not interested. One of the key elements of this mapping is that it is not composed of a closed set and does not need user hacks to augment; instead it is to evolve and deepen by user controls as designed. Pre-existing data, visualizations and augmentations can be integrated with relative ease.</strong></strong></p>
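<p>In the simplest possible terms, the &#8220;learning&#8221; search Hight imagines could weight candidate layers toward the key words the user has actually used. A toy Python sketch, with an invented scoring scheme and invented data shapes, purely for illustration:</p>

```python
from collections import Counter

# Toy sketch: rank candidate map layers by overlap with the key words
# the user has typed so far. A real system would be far richer; this
# only illustrates search leaning toward the user's spheres of interest.

def rank_layers(layers, user_keywords):
    """Order layers by how strongly their tags match the user's key words."""
    weights = Counter(user_keywords)  # words used repeatedly count more
    def score(layer):
        return sum(weights[tag] for tag in layer["tags"])
    return sorted(layers, key=score, reverse=True)

if __name__ == "__main__":
    layers = [
        {"name": "air quality",     "tags": ["environment", "sensors"]},
        {"name": "transit history", "tags": ["history", "transit"]},
    ]
    past_searches = ["history", "history", "transit", "sensors"]
    # "transit history" scores 3 (2 + 1); "air quality" scores 1.
    print([l["name"] for l in rank_layers(layers, past_searches)])
```

<p>Running the search over a user&#8217;s own past key words, rather than a global index, is exactly the non-Orwellian, preference-driven culling discussed above.</p>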
<p><strong><strong>Tish Shute: </strong></strong>One of the things that Joe Lamantia points out about social augmented experiences is that they will operate across a number of different scales &#8211; conversation &gt; product design &amp; build team &gt; neighborhood / town fixing potholes &gt; global community for causes. How do designs for channels and layers change across these different social scales?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> to quote myself&#8230; &#8220;The &#8216;frontier&#8217; is often defined as the space just ahead of the known edge and limit, and where it may be pushed out deeper into the previously unknown. The frontier in the world of ideas is not the warm comfort of what has been long assimilated; and the frontier in the landscape is not of maps, but of places beyond and before them.</strong></strong></p>
<p><strong><strong>The border along what has been claimed is not only that of maps &#8211; it is of concepts, functions, inventions and related emergent industries. Ideas and innovations are like the cloud shape that briefly forms around a jet breaking the sound barrier, tangible yet not fully mapped into measure. It is when things are nailed down into specific entities, calibrated and assessed, that the dangers may inflict themselves &#8211; greed, competition, imitation, anger, jealousy, a provincial sense of ownership either possessed or demanded.&#8221; (from an essay in the Sarai Reader). Otherwise, channels and augmentation do not have to be socio-economically stratifying or defined by them. We built 34n for almost nothing on older tools.</strong></strong></p>
<div id="yqjj" style="text-align: left;"><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_1g3svj8fq_b.jpg"><img class="alignnone size-medium wp-image-4599" title="dgznj3hp_1g3svj8fq_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_1g3svj8fq_b-300x225.jpg" alt="dgznj3hp_1g3svj8fq_b" width="300" height="225" /></a></strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_1g3svj8fq_b.jpg"><span> </span></a></strong></div>
<p><strong><em><strong><span>image from 34north 118west (Spellman/Hight/Knowlton)</span></strong></em></strong></p>
<p><strong><strong>The AR that is not deep immersion can be more readily available, and channels can be whatever end users need &#8211; like the diversity of chat rooms, or the range of Facebook users among us.</strong></strong></p>
<p><strong><strong>I had two moments yesterday that totally fit what we talked about. I went to the West Hollywood Book Fair and the traditional driving directions off of mapping were wrong and we got lost&#8230; our friend could only get a wireless signal to map on an iPod touch, and we had to roam neighborhoods; then we called a friend who Google-mapped it and we found we were a block away&#8230; so a fast geomapping overlay with an icon for the book fair, on some optional grid service or community, would have made it immediate. Then at the book fair I talked to a small press publisher who is trying to map works about Los Angeles by Los Angeles authors&#8230; she was stunned when I told her it could be a kind of Google Maps feature option.</strong></strong></p>
<p><strong><strong>It also has great potential to publish and place writing and art in places&#8230; both for commentary and access. Imagine reading Joyce in chapters where it was written about, and then another similar experience but with writers who published onto a service into their city.</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong></strong> The challenge of shared augmented realities is not just a matter of shipping bits around, but also of how we will use channels and layers &#8211; to create and negotiate different, distributed perspectives, and to understand a shared common core or expressions of dissent (this came up in an email conversation with <a href="http://www.oreillynet.com/pub/au/166" target="_blank">Simon St Laurent</a>).</p>
<p><strong><strong><strong>Jeremy Hight:</strong> well, my example earlier could have been communal in a way too&#8230; a tribe sort of augmentation channeling&#8230; like subscribing to listservs back in the day, but of augmentation communities/channels, and for folks to build and use in shared live form, coordinating too.</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong> </strong>One good thing, though, about building an open AR framework is that as bandwidth/CPU/hardware gets better, shared high-def immersive experiences could be supported by the same framework.</p>
<p><strong><strong>Jeremy Hight: excellent</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>Were you thinking of image recognition and tracking with this example?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> yeah&#8230; like scanning across a multi-channeled Google Map augmentation with different icons and their connected data&#8230; and possibly social networking and file sharing even in that mode&#8230; and rastering etc.&#8230; could be cool with Google Wave </strong><strong><span>- on the map&#8230; then zooming in a la Powers of Ten (the Eames film).</span></strong></strong></p>
<p><strong><strong>-</strong><strong><span>I have pictured variations of this for a few years now in my head, like the example of my friends and I yesterday&#8230; we could have correlated a destination by icons in different channels, one being lit events within a lit channel in an L.A. map&#8230; maybe things streaming on it too&#8230; remote info and video etc.&#8230; that would be awesome.</span></strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong></strong> So many of the ideas in your paper on modulated mapping (see <a href="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" target="_blank">here</a>) are brilliant use cases for shared augmented realities. Perhaps you could talk more about your ideas on locative narrative, because this is something I think is at the core of the kinds of experiences that a distributed AR framework would make possible?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> on the project &#8220;34 north 118 west&#8221; we mapped out a 4-block area for augmentation with sound files triggered by latitude and longitude on the GPS grid, and the map on the screen had pink rectangles that were the &#8220;hot spots&#8221; where the augmentation had been placed.</strong></strong></p>
<div id="nwc6" style="text-align: left;"><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_0gg994bf9_b.jpg"><img class="alignnone size-medium wp-image-4600" title="dgznj3hp_0gg994bf9_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_0gg994bf9_b-300x225.jpg" alt="dgznj3hp_0gg994bf9_b" width="300" height="225" /></a></strong></strong></div>
<p><strong><em><strong><span>image of interactive map with map based augmentation connected to audio augmentation on site for 34north 118west (Spellman/Hight/Knowlton)</span></strong></em></strong></p>
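<p><em>The trigger mechanism behind those pink rectangles &#8211; play a sound when the listener&#8217;s GPS fix falls inside a latitude/longitude &#8220;hot spot&#8221; &#8211; reduces to a point-in-box test. This is a sketch of the general technique, not the project&#8217;s actual code, and the coordinates shown are made up.</em></p>

```python
from dataclasses import dataclass

@dataclass
class HotSpot:
    """A rectangular GPS trigger zone tied to one audio augmentation."""
    name: str
    sound_file: str
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat: float, lon: float) -> bool:
        """True when a GPS fix falls inside this rectangle."""
        return (self.lat_min <= lat <= self.lat_max and
                self.lon_min <= lon <= self.lon_max)

def triggered_sounds(hotspots, lat, lon):
    """Sound files to play for the hot spots the listener is currently inside."""
    return [h.sound_file for h in hotspots if h.contains(lat, lon)]

# Example: a made-up hot spot near the 34N/118W area
spots = [HotSpot("rail siding", "siding.mp3",
                 34.020, 34.022, -118.236, -118.234)]
```

<p><em>Polling this check against the live GPS fix is also where the &#8220;bowling alley conundrum&#8221; shows up: each isolated rectangle resets like pins unless the zones are networked.</em></p>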
<p><strong><strong>We researched the history of the area and placed moments in time of what had been there at specific locations&#8230; I called this <a href="http://www.xcp.bfn.org/hight.html" target="_blank">&#8220;narrative archaeology&#8221;</a> as it allowed places to be &#8220;read&#8221; by their augmentations&#8230; info that was of the place beyond the immediate experience (different types of info) that otherwise would be lost or only found in books or web sites elsewhere. There now are locative narratives around the world, but they need to be linked. From humble origins, &#8220;narrative archaeology&#8221; went on to be recently named one of the 4 primary texts in locative media, which is pretty amazing to me&#8230; but it is growing.</strong></strong></p>
<p><strong><strong>- the limitations then were what I called the &#8220;bowling alley conundrum&#8221; &#8211; the specific data had to reset like pins&#8230; and was isolated&#8230; this led me to think about AR back then and up to now: how these could lead to much more from that point, data that would be more layered, variable, fluid&#8230; yet still augmented place and sense of place and social networking within data and software.</strong></strong></p>
<p><strong><strong><a href="http://34n118w.net/34N/" target="_blank">Lifeclipper</a> to me is a bridge.</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>But Lifeclipper is currently isolated from the internet, isn&#8217;t it?</p>
<p><strong><strong><span>Jeremy Hight: yes&#8230; ours was too&#8230; that is what Google Wave makes possible&#8230; our project only ran on our gear, in 4 blocks&#8230; with additional auxi</span>liary info online, and not malleable&#8230; but hey, 2001 and all&#8230;</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>so the sites for 34 north 118 west are still active though?</p>
<p><strong>Jeremy Hight: oh yeah!</strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>Nice! I really like sound augmentation &#8211; have you seen <a href="http://www.soundwalk.com/blog/tag/augmented-reality/" target="_blank">Soundwalk</a>?</p>
<p><strong><strong><span>Jeremy Hight: yes, very cool&#8230;</span> </strong><strong>we chose sound only as it fought the power of image&#8230; instead it caused a person to be, in a sense, in two places and times at once.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong></strong> and in 2001 that was definitely a visionary project!</p>
<p>You must be very excited that finally the pieces are coming together to make this stuff scale!</p>
<p><strong><strong><strong>Jeremy Hight:</strong> I can&#8217;t even tell you!! It is funny&#8230; I have known that this would come&#8230; just waited and waited&#8230;</strong></strong></p>
<p><strong><strong>..knew it needed the right people and tools..</strong></strong></p>
<p><strong><strong><span>..so the bowling alley conundrum led me to develop my project shortlisted for the ISS (International Space Station), as I thought a lot about how points and works are not to be isolated&#8230; but connected, and should be flowing in different parts of a map&#8230; to open up perspective and connected augmentations, but also to think about the map again&#8230; not as a base only. Then I moved into my work with new ways to visualize time, and it all really began to gel. The ideas were first published as an essay</span></strong><span> </span><a id="qw.2" title="(http://www.fylkingen.se/hz/n8/hight.html)" href="http://www.fylkingen.se/hz/n8/hight.html"><span>(http://www.fylkingen.se/hz/n8/hight.html)</span></a><span> </span><strong><span>and later on my project blog</span></strong><span> (</span><a id="bp.b" title="http://floatingpointsspace.blogspot.com/)" href="http://floatingpointsspace.blogspot.com/%29"><span>http://floatingpointsspace.blogspot.com/)</span></a></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>One thing I noticed when I was reading your paper is how you have been exploring non-Euclidean geometries. Could you explain how this is part of your idea of modulated mapping?</p>
<p><strong><strong><span>Jeremy Hight: Yes, this first came to me when my wife was reading to me from a book on the Poincar&#233; Conjecture, and I was hit with a new way to measure events in time; after months of sketches, schematics and research I came to see how it could also be connected to a geo-spatial web of projects and augmentations. It was published in the inaugural issue of Parsons School of Design&#8217;s Journal of Information Mapping, which was an exciting fit.</span></strong><span><strong> I call it &#8220;Immersive Event Time&#8221;</strong> (</span><a id="o3rt" title="http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf)" href="http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf%29"><span>http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf)</span></a></strong></p>
<p><span><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_4cxz57xgv_b.jpg"><img class="alignnone size-medium wp-image-4634" title="dgznj3hp_4cxz57xgv_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_4cxz57xgv_b-195x300.jpg" alt="dgznj3hp_4cxz57xgv_b" width="195" height="300" /></a></strong></span></p>
<p><span><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_5g68k9ggh_b.jpg"><img class="alignnone size-medium wp-image-4635" title="dgznj3hp_5g68k9ggh_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_5g68k9ggh_b-300x225.jpg" alt="dgznj3hp_5g68k9ggh_b" width="300" height="225" /></a><br />
</strong></span></p>
<p><strong><strong>so for the last 3 years I have been working on how it could all work as channels of augmentation, with building and navigation as open and communal in a sense, as well as AI capability &#8211; that was the time work especially: how time as experienced within an event is not a time &#8220;line&#8221; but points on and within a form&#8230; and how this model is better for visualizing events in time and documenting them. It actually sprang from reading a book on the Poincar&#233; conjecture, which popped a bunch of other stuff together, so one could visualize an event in time as like being in the belly of a whale&#8230; with time as the ribs, and our measure of time as the skin&#8230; and moving within it&#8230; hoping this will be used as an educational tool.</strong></strong></p>
<p><strong><strong>and this also can be tied to AR and the map again&#8230; how documentation of important events can be kept within icons on a Google Map&#8230; then you download varying visualizations based on bandwidth and desired format.</strong></strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>I have been thinking about the new forms of social interaction/agency that these kinds of augmentations of space/place/time will create. It seems there are two poles &#8211; one is the area Natalie Jeremijenko explores, of shifting social relations from institutions/statistics to real-time, location-based interactions and new forms of social agency. The other pole is more like cloud-based AI, and perhaps crowdsourced machine learning.</p>
<p>Your ideas explore the possibilities of both these poles. And certainly one of the big deals of distributed AR would be the possibilities it opened up, both for new forms of networked social relationships and for new ways to draw on network effects.</p>
<p><strong><strong><strong>Jeremy Hight:</strong> and cross-pollinations within&#8230; that is what my mind goes to.</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>The other night I met Assaf Biderman, MIT, from the <a href="http://senseable.mit.edu/trashtrack/" target="_blank">Trash Track</a> team. Trash Track doesn&#8217;t utilize AR, but I could see that there are possibilities there.<br />
What do you think?</p>
<p><strong><strong><span>Jeremy Hight: yes, absolutely,</span> </strong><strong>there can be sorts of skins on locations that user-end selection can yield&#8230; like channels of place&#8230; and they can range from a pragmatic core to art and play and places between&#8230; how this recalibrates the semiotics of the map&#8230; more than just augmentation seen as a kind of piggyback on the map &#8211; the map becomes interface and defanged platform, if you will. Interestingly, my more poetic/philosophic writing led me here too.</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong></strong> I know they are at very different poles of the system, but I do wonder how AR can bring some of the level of social agency/interaction that <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko</a> works on into a productive interaction with the kind of innovations in machine learning that Dolores Labs and others are pioneering?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> Natalie&#8217;s genius to me is in practical, functional tech that also opens deeper questions and even new openings of what is needed&#8230; amazing layers in her work that way&#8230; succinct yet deep&#8230; very deep.</strong></strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>Yes &#8211; I am just writing a post about her work &#8211; I find it deeply moving the way she has delved into the possibilities of using technology to open us up to our world. One of the reasons I find distributed AR so interesting is because it will make it possible for all kinds of people to create and use augmentation in their lives and communities.</p>
<p>So, to return: how could a distributed AR framework contribute to a project like Trash Track?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> what about using it for community, dissent and awareness raising, then? Like Natalie&#8217;s work, but building something like a communal work of multiple points &#8211; like the old adage of the elephant and the blind men&#8230; sorry, metaphor &#8211; like one of my points in immersive sight was how one could take augmentation as multiple works, sort of turning the faces of a thing or place&#8230; and how this would make a larger work even in such a flow, so people moving in a space could also build&#8230;</strong></strong></p>
<p><strong><strong>what of AR traces left as people move, calibrated to user traffic and trash as estimated in an urban space&#8230; it goes back to Chris Burden in the 70&#8242;s, making you know that as you turn the turnstile you are drilling into the foundation and may be the one that collapses the building?</strong></strong></p>
<p><strong><strong>so their movements leave trash. Natalie is all about raising awareness of cause and effect and data, space and ecology. love that. so maybe&#8230;<br />
a feedback loop, artifact and user-end responsibility can leave traces&#8230; trash&#8230;</strong></strong></p>
<p><strong><strong>.. cybernetics vs ecology and human waste</strong></strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>could you elaborate?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> brain fart&#8230; that the mass of trash people leave is a piece at a time&#8230; and it is like the Space Shuttle mission when, arguably, the first true cybernaut occurred &#8211; one cord to air for the astronaut, one for the computer on their back to fix the broken bay arm &#8211; if there is a way to build on that in relation to the topic&#8230; how this can go further, that machines do not waste as much&#8230; AR as a means to cybernetically raise awareness&#8230; eh&#8230; </strong><strong> sensors etc.&#8230; wearables too &#8211; could be eco awareness with data and machine and human.</strong></strong></p>
<p><strong><strong>what about a cloud computing system with a slight AI, in the sense of intuitive word-cloud and interest scans&#8230; so as one moves through, say, New York they can be offered new AI data and services as they move? It could also be of eco interest &#8211; concerns about urban farming, eco waste, air pollution etc.&#8230; perhaps with (Jeremijenko element here) sensors placed in locations, and these also giving data reads in public areas with no input but the hard data itself&#8230; hmm, could be interesting.</strong></strong></p>
<p><strong><strong>it can also give info on the carbon footprints (estimated, probably, unless the data is public record somehow) of chain businesses, and data on which are more eco friendly, as well as an iconography color-coded and icon-coded to the best places to go to support greening and eco-friendly business? And the companies could promote themselves on this service to attract eco-aware customers who would see them as kindred spirits and be helping the<br />
larger effort?</strong></strong></p>
<p><strong><strong>kind of eco mapping&#8230; and AR on a mobile app.</strong></strong></p>
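<p><em>The color-coded eco iconography could be as simple as bucketing each business&#8217;s estimated footprint into an icon color. The thresholds below are invented for illustration &#8211; they are not real emissions standards, and a real service would calibrate them per sector or from public-record data.</em></p>

```python
def eco_icon(footprint_kg_co2: float) -> str:
    """Map an estimated annual carbon footprint to a map-icon color.

    Thresholds are illustrative placeholders only.
    """
    if footprint_kg_co2 < 10_000:
        return "green"   # eco-friendly: highlight to eco-aware customers
    if footprint_kg_co2 < 50_000:
        return "yellow"  # middling footprint
    return "red"         # heavy footprint
```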
<p><strong><strong>what about sensors that read air pollution levels, levels of solar radiation (to aid with skin protection in shifting light values in a city space&#8230; i.e., put on some skin cream now&#8230;), light sensors that detect density and over-density in public spaces&#8230; to use the old trope in art of reading crowds in a space, but instead it could indicate overcrowding, failing infrastructure in public spaces (a congestion that leads to greater pollution levels as well as flaws in city planning over time), and perhaps a tie-in to wearables&#8230; worn sensors on smart clothes&#8230; this could form a node network of people in the crowds&#8230; and also send data while moving in a space&#8230;</strong></strong></p>
<p><strong><strong>here is a kooky thought&#8230; what of taking the computing power and data of people moving in a space, and not only getting eco data and making levels of<br />
data available to them, but making possibly a roving supercomputer&#8230; crunching the deeper data of people open to this&#8230; a hive crunching deeper analysis of the space, scan properties from sensors, and even a game-theory-esque algorithm of metadata &#8211; if, say, 40 people out of 50 hit on a certain spike or reading &#8211; and even their input&#8230; I worked in game theory for paleontology in this manner for a time as a teen&#8230; a private project&#8230; the reading can lead to a sort of meta read by what hits most consistently, as well as in their input: text of what they experienced, observed, postulated, even analyzed&#8230; this could be really interesting, even if just the last part from collected data and not from any complex branching of servers&#8230;</strong></strong></p>
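<p><em>The &#8220;40 people out of 50&#8221; idea &#8211; a meta read that fires only when enough of the roving crowd&#8217;s sensors agree on a spike &#8211; is essentially a quorum vote over readings. A minimal sketch, with the quorum fraction and spike threshold as assumed parameters:</em></p>

```python
def consensus_spike(readings, spike_threshold, quorum=0.8):
    """Flag a crowd-level 'meta read' when a quorum of sensors report a spike.

    readings: one value per participant (e.g. air-pollution sensor levels)
    spike_threshold: the level that counts as a hit for one sensor
    quorum: fraction of participants that must agree (0.8 = 40 of 50)
    """
    hits = sum(1 for r in readings if r >= spike_threshold)
    return hits / len(readings) >= quorum
```

<p><em>Below the quorum the function stays quiet, so isolated sensor noise never becomes a meta read; the same counting trick underlies the paleontology-paper idea in the next paragraph.</em></p>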
<p><strong><strong>I thought at 19 or so that the flaw in paleontology was in how so many larger theories were shifting exhibitions and larger senses of things &#8211; like, were there pre-historic birds that were mistaken for amphibian and then back again&#8230; so why not make a computer program and feed all the published papers into it and see what hits were counted in terms of an emerging meta theory, and a landscape of key points being agreed upon&#8230; this data would be in a sense both algorithmic and a sort of unspoken dialogue&#8230; it came from a lot of study of game theory one summer&#8230;</strong></strong></p>
<p><strong><strong>hope this makes some sense&#8230; I forgot to mention that I originally planned to be a research meteorologist, and my plan in middle school or so was to get a PhD and develop new software to have a global map and then run models of hypothetical storms across it in real-time animations of cloud forms, radar and wind analysis/fields, barometric pressure spaghetti charts etc.&#8230; and to also do 3D cutaway models of storm architectures&#8230; so I have been into visualizations of complex data and mapping for a long time!</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>Wow let me think about this one!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/feed/</wfw:commentRss>
		<slash:comments>18</slash:comments>
		</item>
		<item>
		<title>Augmented Reality &#8211; Bigger than the Web: Second Interview with Robert Rice from Neogence Enterprises</title>
		<link>http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/</link>
		<comments>http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/#comments</comments>
		<pubDate>Mon, 03 Aug 2009 23:24:12 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR Platform for Platforms]]></category>
		<category><![CDATA[ARConsortium]]></category>
		<category><![CDATA[ARToolkit]]></category>
		<category><![CDATA[Augmented Reality Browsers]]></category>
		<category><![CDATA[augmented reality platforms]]></category>
		<category><![CDATA[augmented reality SDKs]]></category>
		<category><![CDATA[augmented reality toolsets]]></category>
		<category><![CDATA[Dr Chevalier]]></category>
		<category><![CDATA[Gavin Starks]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Green Tech AR]]></category>
		<category><![CDATA[Imagination AR Engine]]></category>
		<category><![CDATA[iphone and augmented reality]]></category>
		<category><![CDATA[iphone augmented reality]]></category>
		<category><![CDATA[iphone Video API and augmented reality]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Lumus]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[markers and Webcam AR]]></category>
		<category><![CDATA[Mobile AR]]></category>
		<category><![CDATA[MoMo]]></category>
		<category><![CDATA[nathan freitas]]></category>
		<category><![CDATA[Neogence Enterprises]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[Unifeye Augmented Reality]]></category>
		<category><![CDATA[wearable displays for augmented reality]]></category>
		<category><![CDATA[Web Squared]]></category>
		<category><![CDATA[Wikitude]]></category>
		<category><![CDATA[World as a Platform]]></category>
		<category><![CDATA[World Browsers]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4184</guid>
<description><![CDATA[I first started talking to Robert Rice, CEO of Neogence Enterprises, Chairman of the AR Consortium, in 2008. Robert was already actively working on creating the world&#8217;s first global augmented reality network. But it took a few months before what Robert had said to me about the impending explosion of augmented reality into our lives really [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/whowhowhere.jpg"><img class="alignnone size-medium wp-image-4186" title="Questions and Answers signpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/whowhowhere-300x199.jpg" alt="Questions and Answers signpost" width="300" height="199" /></a></p>
<p>I first started talking to <a href="http://www.curiousraven.com/about-me/" target="_blank">Robert Rice</a>, CEO of <a href="http://www.neogence.com/#/home" target="_blank">Neogence Enterprises</a>, Chairman of the <a href="http://docs.google.com/AR%20Consortium"><span>AR Consortium</span></a><span>, in 2008. Robert was already actively working on creating the world&#8217;s first global augmented reality network. But it took a few months before what Robert had said to me about the impending explosion of augmented reality into our lives really sunk in &#8211; &#8220;this is going to be much bigger than the Web</span>!&#8221; he extolled.</p>
<p>By January 2009 I was convinced, and I posted my first interview with Robert, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it OMG Finally for Augmented Reality?..&#8221;</a> As I mentioned in the intro, I had recently tried out <a href="http://www.wikitude.org/" target="_blank">Wikitude</a> and <a title="Nat Mobile Meets Social DeFreitas" href="http://openideals.com/" target="_blank">Nathan Freitas&#8217;s</a> graffiti app on the streets of New York City, and I was impressed. Now, 7 months later, augmented reality has not disappointed: there is an explosion of new applications, and the arrival of some of the first commercial and practical toolsets, SDKs, and APIs for aspiring developers.</p>
<p>For more on this see my previous post, <a title="Permanent Link to Augmented Reality&#8217;s Growth is Exponential: Ogmento &#8211; &#8220;Reality Reinvented,&#8221; talking with Ori Inbar" rel="bookmark" href="../../2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/">Augmented Reality&#8217;s Growth is Exponential: Ogmento &#8211; &#8220;Reality Reinvented,&#8221; talking with Ori Inbar</a>, which is an introduction to my series of interviews with the key players in augmented reality and founding members of the <a href="http://www.arconsortium.org/" target="_blank">ARConsortium</a> &#8211; <a href="http://www.int13.net/en/" target="_blank">Int13</a>, <a href="http://www.metaio.com/" target="_blank">Metaio</a>, <a href="http://www.mobilizy.com/" target="_blank">Mobilizy</a>, <a href="http://www.neogence.com/" target="_blank">Neogence Enterprises</a>, <a href="http://ogmento.com/">Ogmento</a>, <a href="http://www.sprxmobile.com/" target="_blank">SPRXmobile</a>, <a href="http://www.tonchidot.com/" target="_blank">Tonchidot</a>, and <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a>.</p>
<p>As I mentioned before<span>, </span><a href="http://www.sprxmobile.com/about-us/" target="_blank"><span>Maarten Lens-FitzGerald</span></a><span> of </span><a href="http://www.sprxmobile.com/" target="_blank"><span>SPRXmobile</span></a><span> told me the other day that my first </span><a href="http://docs.google.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank"><span>Interview with Robert Rice</span></a><span>, in January of this year, was a key inspiration for SPRXmobile to get started on the development of </span><a href="http://layar.eu/" target="_blank"><span>Layar &#8211; a Mobile Augmented Reality Browser</span></a><span>. Much more on Layar and </span><span>Wikitude</span><span> &#8211; world browser &#8211; in my upcoming interviews with </span><a href="http://www.sprxmobile.com/about-us/" target="_blank"><span>Maarten Lens-FitzGerald</span></a><span> and <a href="http://www.mamk.net/" target="_blank">Mark A. M. Kramer</a>, respectively</span>.</p>
<p>Recently, both Layar and Wikitude earned a mention in the white paper by Tim O&#8217;Reilly and John Battelle, <a href="http://www.web2summit.com/web2009/public/schedule/detail/10194" target="_blank">Web Squared: Web 2.0 Five Years On</a>. Web Squared is essential reading not only because it covers the underlying technological shifts of &#8220;Web Meets World,&#8221; which augmented reality is a vital part of, but, crucially, because it focuses on how there is a new opportunity for us all:</p>
<p><strong>&#8220;The new direction for the Web, its collision course with the physical world, opens enormous new possibilities for business, and enormous new possibilities to make a difference on the world&#8217;s most pressing problems.&#8221;</strong></p>
<p>I am currently working on a post on Green Tech AR, one of the areas where augmented reality can play an important role &#8220;in solving the world&#8217;s most pressing problems.&#8221; Augmented reality has a lot to offer Green Tech development. As <a href="http://twitter.com/AgentGav" target="_blank">Gavin Starks</a> of <a href="http://www.amee.com/" target="_blank">AMEE</a> said at <a href="http://wiki.oreillynet.com/eurofoo06/index.cgi" target="_blank">Euro Foo in 2006</a>, &#8220;climate change would be much easier to solve if you could see CO2.&#8221;</p>
<p>But really useful Green Tech AR requires markerless object recognition (going beyond feature tracking and modified marker recognition), which is still hard to do, and a tight alignment of media/graphics with physical objects, in addition to quite a high level of instrumentation of the physical world. And for Green Tech AR to really shine, we are going to need innovators like Robert Rice who are working on, and solving, multiple really hard problems like:</p>
<p><strong> &#8220;</strong><strong>privacy, media persistence, spam, creating UI conventions, security, tagging and annotation standards, contextual search, intelligent agents, seamless integration and access of external sensors or data sources, telecom fragmentation, privilege and trust systems, and a variety of others</strong><strong>.&#8221;</strong></p>
<p>Recently Robert Rice <a id="ph56" title="presented" href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>presented</span></a><span> at </span><a href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>MoMo</span></a><span> Amsterdam. </span> Here is a drawing of him in action (<a href="http://www.flickr.com/photos/wilgengebroed/3591060729/" target="_blank">picture below</a> from <a title="Link to wilgengebroed's photostream" rel="dc:creator cc:attributionURL" href="http://www.flickr.com/photos/wilgengebroed/"><strong>wilgengebroed</strong></a>&#8216;s Flickr Stream).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRiceMoMOdrawing.jpg"><img class="alignnone size-medium wp-image-4185" title="RobertRiceMoMOdrawing" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRiceMoMOdrawing-300x184.jpg" alt="RobertRiceMoMOdrawing" width="300" height="184" /></a></p>
<p>In his Twitter feed ( <a href="http://twitter.com/robertrice" target="_blank">@RobertRice</a> ) Robert reminds us: &#8220;<span><span>By the way folks, what you see out there now as &#8220;augmented reality&#8221; is not what it is going to be in two years.&#8221; Robert plans to show the first public demo of his &#8220;platform for platforms&#8221; at <a href="http://gamesalfresco.com/ismar-2009/ismar-08/" target="_blank">ISMAR 2009</a>. </span></span></p>
<p>Robert is currently writing a series of white papers. I got a preview of the first, &#8220;The Future of Mobile &#8211; Ubiquitous Computing and Augmented Reality.&#8221; Robert points out, <strong>&#8220;AR through the lens of the mobile industry and ubiquitous computing is almost overwhelming compared to AR as a marker-based marketing campaign.&#8221;</strong></p>
<p>I asked Robert, &#8220;What are the key take-aways for investors interested in the augmented reality field at the moment?&#8221;</p>
<p><strong><span>&#8220;First, Mobile AR is going to be bigger than the web. Second, it is going to affect nearly every industry and aspect of life. Third, the emerging sector needs aggressive investment with long term returns. Get-rich-quick startups in this space will blow through money and ultimately fail. We need smart VCs to jump in now and do it right. Fourth, AR has the potential to create a few hundred thousand jobs and entirely new professions. You want to kick start the economy or relive the golden days of 1990s innovation? Mobile AR is it.</span></strong></p>
<p><strong><span> Don&#8217;t be misguided by the gimmicky marketing applications now. Look ahead, and pay attention to what the visionaries are talking about right now. Find the right idea, help build the team, fund them, and then sit back and watch the world change. Also, AR has long term implications for smart cities, green tech, education, entertainment, and global industry. This is serious business, but it has to be done right. I&#8217;m more than happy to talk to any venture capitalist, angel investor, or company executive that wants to get a handle on what is out there, what is coming, and what the potential is. Understanding these is the first step to leveraging them for a competitive edge and building a new industry. Lastly, AR is not the same as last decade&#8217;s VR.&#8221;</span></strong></p>
<h3>Talking with Robert Rice</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRicepic.jpg"><img class="alignnone size-medium wp-image-4195" title="RobertRicepic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRicepic-201x300.jpg" alt="RobertRicepic" width="201" height="300" /></a></p>
<p><em><a href="http://www.flickr.com/photos/vannispen/3586765514/in/set-72157619022379089/" target="_blank">Picture of Robert Rice</a> at <a href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>MoMo</span></a> from <a href="http://www.flickr.com/photos/vannispen/"><strong>Guido van Nispen</strong></a>&#8216;s Flickr Stream</em></p>
<p><strong>Tish Shute:</strong> So perhaps we better start with an update on state of play with Neogence?</p>
<p><strong>Robert Rice:</strong> Neogence is doing well actually. We don&#8217;t talk much about the fact that we are still a small startup and we face a lot of the usual obstacles related to that and being a small team. Fundraising has been extra difficult, mostly because people are just now beginning to see the potential in AR, but that is still colored by perceptions based on a lot of the gimmicky AR ad campaigns out there. Still, it is better than it was two years ago, when the idea of an AR startup was a bit of a joke to a lot of VCs we talked to. However, we do have an agreement from a new venture fund in Europe (which we can&#8217;t talk about yet) for our first round of funding, but we don&#8217;t expect to close that for several months.</p>
<p>If all goes well, we hope to debut our first public demo at ISMAR 2009 in Orlando to select individuals and a few press folks. We might release a few viral videos before then that are conceptual and about what we are building in the long run, <span>but that depends on how things go over the next several weeks</span>.</p>
<p>We are also very active in looking for and building strategic partnerships and relationships with other companies, and this is not restricted to the augmented reality or mobile sector. As I have said before, we are looking at this as a long term business venture and the industry as something that will be bigger than the web itself within ten years. We are doing typical contract work and custom AR solutions to keep the cash flow going and build up the corporate resume a bit. So, if you want something done, and better than the stuff you are seeing now with all of the generic &#8220;look at our brand in AR with markers and a webcam&#8221; you should definitely give us a call.</p>
<p style="margin-left: 0pt; margin-right: 0pt;"><strong>Tish Shute:</strong> Just to clarify, because most of the recent press has been about browser-type AR like Wikitude and Layar, which are not AR in the purist sense because they do not have graphics tightly linked to the physical world. Neogence, if I am correct, is focused on building a true AR platform in the sense I just described?</p>
<p><strong>Robert Rice: </strong>Hrm, I<span> </span><span> have argued with a few others about the actual definition of AR. Some</span> people prefer a narrow and limiting view (3D overlaid on video), but I think in terms of the market and the end-user, it is better to have a wider definition. In that sense, AR is purely the blend of real and virtual, with or without full 3D overlaid on video. If we go with that, then Wikitude, Layar, Sekai, NRU, and others all fit into the AR definition.</p>
<p>Anyway, you are correct. We are building a true <span>platform for AR, and this is quite different from what others are marketing as AR browser &#8220;platforms.&#8221;</span></p>
<p><span>There are a few problems with the &#8220;AR Browsers&#8221; approach that no one seems to be noticing. </span>One is that they are all trying to get people to build new applications for their browsers, when they should be trying to get people to create content that they can share and browse.</p>
<p>Second, someone using Layar is not going to see anything that is designed for Sekai or Wikitude.</p>
<p>Third, the experiences are generally for one user. While I love all of these guys and think each of the teams has some real talent on it, the model is flawed until someone using Wikitude can see the same thing that someone using Layar or Sekai Camera is seeing (provided they are in the same physical location).</p>
<p><span>While we are working on our own client side technologies that we hope will be useful and integrated with every mobile device and AR browser out there, our core focus is on connecting everything and everyone together, and facilitating the growth of the industry with the tools to create content, applications, and so forth. We want to solve the really difficult technical problems (some of which most people haven&#8217;t even considered yet, because of the perspective from which they are looking at the potential of AR), and make it easy for everyone else to do the cool stuff. We want to be the facilitators.</span></p>
<p>If you really want an idea of where we are going or some of what has inspired us, you have GOT to read Dream Park, Rainbows End, and The Diamond Age. If you have heard me speak anywhere or read my blog, you know that I am continually suggesting these and others.</p>
<p>Anyway, short answer, yes, we are building a true <span>platform for </span><span>ubiquitous mobile augmented reality, and we are absolutely the first to be doing so</span>.<span> I hope to demo some of this in October at ISMAR, with a full commercial launch next year (10/10/10 at 10:10am &#8211; hehe, seriously). We will probably launch a website soon for people to start signing up and building a community now (especially if you want in on the beta testing of the whole kibosh).</span></p>
<p><strong>Tish:</strong> So just to clarify, how will Neogence&#8217;s approach differ from and fit into the growing world of augmented reality tools that we have now, e.g., <a href="http://www.hitl.washington.edu/artoolkit/" target="_blank">ARToolkit</a>, <a href="http://www.imagination.at/en/?Projects:Scientific_Projects:MARQ_-_Mobile_Augmented_Reality_Quest" target="_blank">Imagination</a>, <a href="http://www.metaio.com/products/" target="_blank">Unifeye</a>?</p>
<p><strong>Robert:</strong> I guess you could say that we are trying to build the infrastructure for the global augmented reality network. This could be viewed as a service, or even a platform for platforms. If Neogence does its job right, anything you create using ARtoolkit, Unifeye, or Imagination would be applications you could <span>ultimately link to, integrate with, or deploy on or through</span>, what we are building, and not be tied to a specific set of hardware, browser, or walled garden.</p>
<p><strong>Tish: </strong><span>You mention Neogence is going to provide a platform for platforms. Without knowing the details, that sounds like a lot of centralization, which prompts the inevitable question: &#8220;Who owns the data?&#8221; Do you think other AR applications or provid</span>ers would resist a &#8220;Platform for Platforms?&#8221; I know the potential centralization power of Google Wave has already got people talking about these issues (one of the comments in my recent blog post was about how the Google Wave protocol may be interesting for at least some parts of augmented reality communication).</p>
<p><strong>Robert:</strong> It really depends on perception and how we end up <span>building it. We aren&#8217;t talking about creating a closed system. As far as who owns the data, it depends on what data we are talking about. For the most part, I think that if the end-user creates something, they should own it and have control over it. They should also be able to do what they want with it, independent of everything else. </span></p>
<p><span>This is one thing that proponents of the smart cloud and the thin/dumb client don&#8217;t like to talk about. It sounds great on paper, but when you start thinking about it, all that does is strip away power from the end user. Case in point&#8230;Amazon recently wiped every copy of George Orwell&#8217;s 1984 from all Kindle devices. They claimed they didn&#8217;t have the rights to distribute/publish it and it was available by accident. The scary thing, though, is that they literally went into every Kindle out there, found copies, and deleted them.</span></p>
<p><span> How would you like it if Microsoft suddenly decided to delete every copy of Microsoft Office? Or every file that had a .doc extension? That is a huge violation&#8230;we feel like we own what is on our computers. But with the whole cloud thing, your data is at the mercy of whoever is running the cloud servers. No privacy, no ownership, no control. And if the system breaks, all you will have is a pretty dumb device that can&#8217;t do much on its own. Now, that isn&#8217;t to say that the technical merits and benefits of a cloud model aren&#8217;t worth pursuing; they are.</span></p>
<p><span> But I think there needs to be some hybrid model. Don&#8217;t dumb down my computer or my smart phone; let&#8217;s keep pushing how much these devices can do. We should take full advantage of centralized and distributed systems, but in a hybrid mashup sense. That is what we are pursuing with our AR platform, while trying to protect the ownership and intellectual property rights of the end user.</span></p>
<p><strong>Tish: </strong>Earlier today I was telling you how impressed I was by Google Wave &#8211; it is quite mind blowing to experience massively multiplayer real time interaction on what will be an open internet wide platform &#8211; Wave is breaking new ground here and more than one person has mentioned its potential role in AR to me (see <a href="http://www.ugotrade.com/2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/" target="_blank">the comments to my recent post on Ogmento</a>).</p>
<p>I know you are a strong advocate of this kind of real time shared experience being part of AR. But we are only just beginning to see it emerge via Wave on the existing web &#8211; what will it take to have this kind of real time shared experience in AR? We got briefly into the thick client, thin client, cloud versus P2P discussions &#8211; what is your approach to delivering a massively shared real time experience that, like Wave, is not confined to a walled garden?</p>
<p><strong>Robert:</strong> I&#8217;<span>m not a fan of any of those models as being stand alone or mutually exclusive. Again, the hybrid model with the best of both worlds is key. In the early stages of the emerging industry, you are likely to see some walled gardens (or perhaps a walled garden of walled gardens&#8230;). </span></p>
<p><span>No one knows how things are going to turn out in the next five to ten years and few people are thinking about it actively. For us though, I favor Alan Kay&#8217;s quote (pardon the paraphrasing): &#8220;To accurately predict the future, invent it&#8221;. That&#8217;s what we are doing. In the short term, there will be plenty of experimentation in the industry and a lot of model testing.</span></p>
<p><strong>Tish: </strong>Do you think, though, Wave protocols might be useful as at least part of the picture for AR standards? As you point out, open standards and open protocols are going to be vital for shared experiences of AR. Is it important to build off existing protocols to get the ball rolling, and what do you see as being the important early protocols for AR?</p>
<p><strong>Robert:</strong> I think for now, we will use a lot of existing protocols for communications and whatnot, as well as the usual standards for things like 3D models, animation, and so forth. This is only natural. However, as the industry and technology evolves, we will need entirely new ones. As far as I know there is no existing market standard for anything like the Holographic Doctor from Star Trek Voyager, and that type of thing is definitely in the pipeline for the future (sooner than you would think).</p>
<p><strong>Tish:</strong> All the excitement at the arrival of the browser-like mobile reality developments has been really great &#8211; I feel people are getting a taste for what it means to compute with anyone/anything, anywhere and anytime.</p>
<p>Wikitude started the ball rolling. And with Wikitude.me it is the first to support user generated content. Now there is Layar, and Sekai Camera also. But as you mentioned to me in an earlier chat, with Layar and Wikitude opening up, &#8220;there are probably a half dozen other apps coming out in short order with similar functionality (even the AR twitter thing has some similarities).&#8221;</p>
<p>What has been most exciting to you about these developments up to this point? What will these apps/platforms need to do to stand out in a crowd? Up to now, these browser-like AR experiences do nothing with close-by objects. Do you see &#8220;world browsers&#8221; with near object recognition coming out in the near future? Could Wikitude do this with an integration of SRengine or Imagination?</p>
<p><strong>Robert:</strong> Yes, Wikitude<span> or Layar could do this (integrate with something else for &#8220;near&#8221; AR) and it would be a step in the right direction. Tagging things in the real world is the basic functionality that will grow from text tags to photos, videos, 3D objects, and all sorts of other types of data and meta data. This gets really fun when that data is generated by the object itself. First is just giving people the ability to tag something and share that tag with their friends, everything else grows from that. This sort of functionality is probably the most exciting in terms of near future advancement.</span></p>
<p><span>However, I think the idea of a stand-alone</span> browser platform is a bit awkward&#8230;unless you also consider Firefox a website browser platform. After all, you can create widgets (applications) for it. Anyway, the point is having access to the same data&#8230;if you put three people in a room, one for each browser, they should see and experience the same content, although the interface might be different (based on what browser and of course which hardware they are using). This means there needs to be some communication between whatever servers they are storing their data on (meaning, user tags) and some standard for how those tags are created.</p>
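<p><em>No such cross-browser tag standard existed at the time of this interview. Purely as an illustration of what Robert describes &#8211; a media payload anchored to a physical location that any client could render &#8211; a shared tag record might look something like the following sketch (all field names here are hypothetical):</em></p>

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ARTag:
    """Hypothetical browser-agnostic AR tag: a media payload anchored
    to a physical location, shareable across Wikitude/Layar-style clients."""
    lat: float        # WGS84 latitude in degrees
    lon: float        # WGS84 longitude in degrees
    alt_m: float      # altitude in meters
    media_type: str   # grows from "text" to "photo", "video", "model/3d", ...
    payload: str      # inline text, or a URL to the media
    author: str       # who created the tag

    def to_json(self) -> str:
        """Serialize for exchange between providers' tag servers."""
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, data: str) -> "ARTag":
        """Parse a tag received from another provider's server."""
        return cls(**json.loads(data))

# An "invisible sticky note" left at a restaurant, per Tish's example:
note = ARTag(52.3731, 4.8922, 0.0, "text", "Skip the fish special", "tish")
wire = note.to_json()              # what one provider would publish
same_note = ARTag.from_json(wire)  # what a different browser would ingest
```

<p><em>The point of the sketch is only that interoperability needs an agreed wire format: once the record survives a round trip between servers, which browser renders it stops mattering.</em></p>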
<p>Of course, if all they are doing is grabbing the GPS coordinates of the nearest subway station and telling you how far it is and in what direction, then they should all be able to see the same thing, regardless of the platform. But then, that isn&#8217;t really interesting, is it? I could get the same info on a laptop with Google Maps.</p>
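<p><em>The &#8220;how far and in what direction&#8221; computation Robert dismisses as uninteresting is indeed commodity math &#8211; a spherical-earth sketch (not any particular browser&#8217;s implementation):</em></p>

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Compass bearing (0-360, clockwise from north) from point 1 toward point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    y = math.sin(dlmb) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlmb)
    return math.degrees(math.atan2(y, x)) % 360

# London to Paris: about 343 km, bearing roughly southeast
d = haversine_km(51.5074, -0.1278, 48.8566, 2.3522)
b = initial_bearing_deg(51.5074, -0.1278, 48.8566, 2.3522)
```

<p><em>Any device with a GPS fix can do this; the differentiation Robert is after lies in the shared content layered on top, not in the geometry.</em></p>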
<p>This is part of the problem right now though&#8230;no one seems to be thinking about the bigger picture much. All of the effort is either on making the next cool ad campaign for a car or a movie, or creating a tool to tell you where the nearest thingamajig is, but in a really cool fashion on a mobile device.</p>
<p>No one is talking much about filtering data, privilege systems, standards, third party tools, interoperability, and so on. There is also little conversation about where hardware is going. Right now everyone is developing software based on what hardware is available. This needs to change where hardware is being developed to take advantage of new software coming out (this happened in the PC industry a while back and growth accelerated dramatically).</p>
<p>These are some of the reasons why I led the effort to start the AR Consortium. We brought CEOs from 8 different AR companies and startups together to start talking about these issues. We are still getting organized and have plans to expand the membership to other companies, but we want to do this right and we aren&#8217;t rushing things. The important thing is that we have started and there is at least a line of communication open now, where there wasn&#8217;t before.</p>
<p>I would expect to see the early movers expanding what they offer very soon, and they will probably lead the way in the short term. Definitely keep an eye on the companies involved in the AR Consortium. There are lots of very smart and motivated people there, and they are far ahead of all the experimental dabbling in AR we are beginning to see on youtube, twitter, and elsewhere.</p>
<p><strong>Tish: </strong>When we had a discussion earlier about what the basics for an AR platform and an AR browser were, you talked about the difference between tools, a platform, and an AR browser &#8211; like Wikitude and Layar &#8211; which should be about features/functionality, e.g. to create treasure hunts, AR geocaching, invisible AR yellow sticky notes you can leave at restaurants you don&#8217;t like, etc. Also you noted it should let you explore (browse) multiple formats and open content for AR &#8211; any data, information, or media that is linked to something in the real world, and the visualization/interaction with the same.</p>
<p>Wikitude<span> is a stepping stone to a true browser by your definition. But are we also seeing what you would define as an AR platform emerging &#8211; Unifeye, Wikitude (you can recap your definition if you like too)?</span></p>
<p>I think Wikitude hopes to provide the lego blocks for augmented reality readers, browsers, applications, tools, and platforms?</p>
<p><strong>Robert:</strong> I expect some segmentation among the various AR companies that are out now, as they find their individual strengths and focus on them. Some will emphasize the client software (the browser), others will develop robust tools for creating content, SDKs/APIs will advance and facilitate rapid development of applications, etc. Neogence is ultimately working on the glue in the middle that ties everything together, makes it massively multiuser, persistent, and ubiquitous. Things like Unity3D have the potential to fill a need in the middleware space.</p>
<p><strong>Tish:</strong> I know <a href="http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/" target="_blank">Blair MacIntyre</a> (see my interview with Blair here) and others are using Unity3D as an AR client. Could Unity3D become increasingly important?</p>
<p><strong>Robert:</strong> It has the potential to become a favored middleware for providing the rendering layer. It already works nicely in regular browsers, and on several mobile platforms. Why code all the graphics rendering stuff from scratch when you can just license something and extend its features with AR functionality?</p>
<p><strong>Tish:</strong> Now to ask your own question back to you! There seems to be a lot of reason to think that, eventually, there will be the kind of access to the iphone video API that augmented reality really requires &#8211; and by that I mean more than we will get with OS 3.1, which is rumored to deliver only about half of what we really need for AR on the iphone: &#8220;not truly useful when you want to align video with graphics.&#8221; So:</p>
<p><em>&#8220;The iphone&#8230;future or failure? Seemingly anti-developer stance regarding augmented reality, and only a sliver of the global market share. Are we letting the short term glitz of Apple and the iPhone fad pull us in the wrong direction? Shouldn&#8217;t we be focusing on symbian devices that have the lion&#8217;s share of the market? Or should we be looking more at either other OSs (winmobile, android) or not at all, and trying to create a new platform that is more MID and less smart phone, with a hardware partner?&#8221;</em></p>
<p><strong>Robert:</strong> Apple and the iphone are a bit problematic right now. There is no way I can go to a venture capitalist (at least in North America) and say hey we are building awesome AR applications for winmobile or symbian&#8230;they would either laugh or they simply wouldn&#8217;t get it. There is this false perception that the iphone is the ultimate mobile device, it is the sexiest, and the only thing that people want. Everyone wants a demo on the iphone, the media is mostly interested in iphone developments, and the apple fanatic market could give a fig about other devices. Other devices may have a larger market share or even better hardware, but we have to focus on the iphone right now at least in the demo stage to get any market attention and traction worth the time and effort.</p>
<p>In the future though, unless Apple changes its stance with their SDK and APIs, and starts adding hardware that is key for mobile AR (beyond what is there now), the market will move on without them. <span>This is a really easy decision to make given Apple&#8217;s draconian policies and the fact that their percentage of the global market is minuscule. The smart companies are looking at the whole picture and not putting all of their eggs in the Apple basket.</span></p>
<p>Of course, once the wearable displays are commercially viable everything changes. Wearable computers with small screens or even no screens are going to be what everyone wants. The interface will go from handheld touch screens to virtual holographic interfaces that you interact with using your bare hands.</p>
<p>So for now, <span>(the immediate short term), </span>it&#8217;s all about the iphone. Taking mobile ubiquitous AR to the global market and building for the future will be based on something else. Hardware risks becoming a commodity or a closed platform. Do you really want to buy the Apple iGlasses and only see AR content that is compatible, where your best friend has a pair of WinGlasses and sees something entirely different? No. The hardware, and the client software (what people are calling the AR browser now), will become common and it won&#8217;t matter what brand you use; they will all be accessing the same content.</p>
<p>But at least for the foreseeable future, we are building software for specific hardware, and the sexiest mobile on the block is the iphone. The second someone comes out with something much better and the paradigm shifts (software driving hardware instead of vice versa), everything changes.</p>
<p><strong>Tish:</strong> How is the quest for sexy AR eyewear going? I know we were checking out <a href="http://www.masunaga1905.jp/brand/teleglass/" target="_blank">the Japanese eyewear</a> with Adam Johnson from <a href="http://genkii.com/" target="_blank">Genkii</a> just now. For the Neogence project &#8211; as you are going for a fully developed model of AR &#8211; doesn&#8217;t this necessitate going beyond the iphone and getting the hardware companies moving on the eyewear?</p>
<p><strong>Robert:</strong> The guys making wearable displays really need to get off the pot and stop paying lip service to mobile AR. If they don&#8217;t do something quick, I,<span> and others, are</span> going to be scouring the planet looking for someone capable of building the lightweight, stylish wearable displays with transparent lenses we are begging for. We aren&#8217;t going to be waiting around for hardware anymore. The AR Pandora&#8217;s box has been opened. I should note that many of us (AR Consortium members) have had less than pleasant experiences or communications with the half dozen or so companies that are making wearable displays. Either their visual design is terrible, the materials feel flimsy, the field of view is limited, or the companies are preoccupied with other business and government contracts. Any attention to the growing AR market is an afterthought and in a few cases condescending. AR is going to be a billion dollar industry in a very short time, and these guys are just leaving money on the table. If they were smart, they would be begging the CEOs from the AR Consortium to fly out to their offices and collaborate on building a pair of wicked sick glasses. The smart phone manufacturers should be doing the same thing, but I have to say that they at least seem to have some ambition and zeal to create better devices, so I can&#8217;t really complain too much there.</p>
<p>Anyway, to answer the rest of your question, we have to assume that the hardware guys, especially regarding the eyewear, are going to take a long time to develop and release the things we need for the ultimate AR experience. So, our goal is to start building things now for what is available. That means scaling things down and handicapping what AR can do, so it works on the &#8220;sexy&#8221; iphone. The important thing though is to start creating applications -now- so when the glasses are commercially available, there will be a wealth of content for people to access and use on day one.</p>
<p>As long as Apple isn&#8217;t playing nice,<span> </span>it is going to hurt everyone. <span>Is it any surprise that they shut down Google Voice? </span> There is a huge opportunity for someone to step up and leapfrog the rest of the industry. Give us the hardware and we will create amazing software for it. Don&#8217;t compete with the iphone, surpass it.</p>
<p><strong>Tish: </strong>What is the state of play of current AR technology and toolkits?</p>
<p><strong>Robert:</strong> The current crop of AR technology and toolkits is absolutely critical for this stage of the industry, and everyone should be leveraging it as much as possible. I talk down marker and image based tracking a lot, but I also like to point out that it is the necessary baseline that the industry is going to be built on. The problem is that there is only so much you can do with marker driven apps, and as creative people and marketing types start conceptualizing about all sorts of cool stuff for the future, they risk setting the expectations too high. It is one thing to show someone the future; it is another to say this is the future and it&#8217;s happening right now. This is why I cringe every time I see a conceptual video presented as &#8220;our product DOES this&#8221; instead of &#8220;our product WILL DO this.&#8221; <span>Something that simple can still cause the butterfly effect of raising expectations too high and contribute to overhyping.</span></p>
<p><strong>Tish: </strong>One of the things that seems very exciting about the new <a href="http://ogmento.com/" target="_blank">Ogmento</a> partnership is that experienced content producers <a id="squu" title="Brad Foxhoven" href="http://www.blockade.com.nyud.net:8080/about/about-blockade" target="_blank">Brad Foxhoven</a> and <a id="odvk" title="Brian Selzer" href="http://brianselzer.com/">Brian Selzer</a> from <a id="xow_" title="Blockade" href="http://www.blockade.com/" target="_blank">Blockade</a> are now taking a leading role in AR. What are the most exciting directions for content that you see emerging for AR in the next 12 months?</p>
<p><strong>Robert:</strong> Virtual (well, augmented) pets, and multiuser mobile AR games (2-4 people) are probably going to lead in the next 12 months for content. Easy, accessible, engaging.</p>
<p><strong>Tish: </strong>And are you at Neogence also involved in content partnerships?</p>
<p><strong>Robert:</strong> Yes, we are in the process of finalizing some content partnerships with an eye for long term relationships. We are specifically looking for partners that want to find substantive ways to leverage AR technology, and not use it as a superficial gimmick or attraction that wears off after five minutes. I&#8217;m still cringing over the Procter &amp; Gamble Always campaign with AR.</p>
<p><strong>Tish:</strong> So back to your observation about some of the tricky problems re creating a true global massively multiuser, ubiquitous, mobile AR platform &#8211; what are some of the main obstacles to this mission in your view (aside from getting investment!)?</p>
<p><strong>Robert:</strong> Trying to explain it to people. The technical problems we can handle or have already solved. But trying to communicate what exactly we are doing is still tough. Not because it is overly complicated, but rather because it is so new and different. People are having a hard time grasping augmented reality beyond marker/webcam.</p>
<p><strong>Tish: </strong>Which AR tools are most important right now?</p>
<p><strong>Robert:</strong> Content is critical right now to show what the technology is capable of and to continue building the presence of augmented reality in the public mind. The big benefit of integrated/unified platforms now is speed of development for content. I think that the Flash combination of FLARToolKit plus Papervision3D is rocking the planet right now. It is accessible, easy to learn, and lets people create something very quickly. More tools and middleware are coming out, and this increases options for designers and developers.</p>
<p><strong>Tish: </strong>What are your favorite papervision apps?</p>
<p><strong>Robert: </strong>Hrm, I don&#8217;t have a favorite papervision app just yet, although I think the tech is solid. I expect to see a lot of stuff built on that platform in the near future, especially as more ad agencies get on the bandwagon and start telling their IT guys to learn how to program Flash so they can make something. Have you seen www.ronaldchevalier.com? Not so much for the actual AR stuff, but because the whole thing is just brilliant. It&#8217;s exactly what some cult-figure spiritual guru would do with AR. I wish I had thought of it first, actually. This is probably one of the best &#8211; seamless &#8211; implementations of AR in marketing, where it fits&#8230;it isn&#8217;t just jammed in there for the sake of saying they used AR.</p>
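<p>The marker pipeline being discussed here boils down to a per-frame loop: detect the printed marker&#8217;s corners in the webcam image, derive a placement from them, and hand that to the 3D renderer. The sketch below is purely illustrative Python, not FLARToolKit&#8217;s actual (ActionScript) API; the function name and the 80&#160;mm marker size are assumptions.</p>

```python
# Illustrative sketch of the placement step in a marker-driven AR app:
# given the 4 detected corner points of a square marker (in pixels),
# compute where and how big to draw the virtual model.

def marker_placement(corners, marker_size_mm=80.0):
    """Return the overlay anchor point (marker centroid, pixels)
    and a pixels-per-millimetre scale for the virtual model."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    # Anchor the virtual model at the marker's centroid.
    cx, cy = sum(xs) / 4.0, sum(ys) / 4.0
    # Average edge length in pixels gives the marker's apparent scale.
    edges = [((corners[i][0] - corners[(i + 1) % 4][0]) ** 2 +
              (corners[i][1] - corners[(i + 1) % 4][1]) ** 2) ** 0.5
             for i in range(4)]
    scale = (sum(edges) / 4.0) / marker_size_mm
    return (cx, cy), scale

# A marker seen head-on as a 100 px square centred at (320, 240):
anchor, scale = marker_placement([(270, 190), (370, 190), (370, 290), (270, 290)])
```

<p>A real toolkit recovers a full 3D pose (rotation plus translation) from the corners rather than a 2D centroid and scale, but the flow &#8211; corners in, transform out, renderer draws &#8211; is the same.</p>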
<p><strong>Tish:</strong> Do you think Apple is going to open the iPhone to the full potential of augmented reality anytime soon? A lot of expectations have been raised.</p>
<p><strong>Robert:</strong> Apple is like that guy who has a party at his house and owns this really awesome state-of-the-art home theater in his basement, but makes everyone watch a movie in the living room on a regular TV with a VCR.</p>
<p>They need to get over themselves and quit being a wet blanket. Otherwise, we are taking the beer and pizza we brought, and going to someone else&#8217;s house. <span>Sorry, the Apple thing is a bit of a sore point with me.</span></p>
<p><strong>Tish:</strong> But will people leave all that candy and soda at the appstore?</p>
<p><strong>Robert:</strong> I tell you what, though: there is an opportunity for certain mobile phone manufacturers to give me a call and start talking to Neogence and the other members of the Consortium. We have some ideas and specs that could have a radical impact on the mobile market and stuff the iPhone in a box. Hint hint.</p>
<p><strong>Tish:</strong> So what is your vision for the AR Consortium? I know it kicked off with a letter to Apple about the video API. What is the next step? There was a lot of hope that this year would be big for MIDs, but this really hasn&#8217;t happened yet &#8211; do you think there is hope for a MID takeoff despite the lousy economy?</p>
<p><strong>Robert: </strong>MIDs? No, not yet. Smartphones are too lucrative and too hot. It isn&#8217;t time yet for the MID to go mainstream. For that to happen, there needs to be a driving need (cough, ubiquitous AR, cough).</p>
<p>The AR consortium is mostly an informal affiliation. I expect that representatives from each member will probably meet at every significant conference to catch up over drinks. We are also going to be planning for our own members conference at least once a year. That will happen after we expand the membership though.</p>
<p>The main idea behind the consortium though was to open up a channel of communication between the CEOs so we could work together on standards, solving problems, collaborating, forming some partnerships, and using the collective to bang on the doors of companies like Apple and others. There is power in a group.</p>
<p><strong>Tish:</strong> You mentioned there is a whole long conversation we can have about getting the eyewear. As you point out, true AR eyewear changes everything. Can you give a little road map of where this has to go?</p>
<p><strong>Robert: </strong>There are essentially four or five main approaches, depending on whether you make the lenses special or leave them plain. You would normally want them plain so people with prescription lenses wouldn&#8217;t have problems and would have the option to switch them out. Some types use a more prismatic approach for top-down projection, or a corner piece mounts lasers and bounces them off the lens into the eye. Another approach is embedding OLEDs or something else into the lenses themselves.</p>
<p>I really like the <a href="http://www.lumus-optical.com/" target="_blank">Lumus</a> approach, but their product design isn&#8217;t quite there yet. If the wearables don&#8217;t look cool, people won&#8217;t use them. To be honest, if I had the money, I&#8217;d probably ask the Art Lebedev guys to design them based on someone else&#8217;s optical engineering. They designed the <a href="http://www.artlebedev.com/everything/optimus/" target="_blank">Optimus Maximus</a> OLED keyboard&#8230; brilliant industrial designers, loaded with engineers too. If these guys couldn&#8217;t build the glasses and make them look damn badass, I&#8217;d be shocked. Heck, I bet they could build the next-gen MID while they were at it.</p>
<p><strong>Tish: </strong>Getting the hardware innovation and software innovation feeding into each other would be really great.</p>
<p><strong>Robert</strong>: Absolutely.</p>
<p><strong>Tish</strong>: That would push the eyewear forward too wouldn&#8217;t it?</p>
<p><strong>Robert:</strong> All it takes is one, and then the competitive landscape would fire right up.</p>
<p><strong>Tish:</strong> What applications would accurate GPS enable?</p>
<p><strong>Robert:</strong> Everything. For example, say you know exactly where the phone is and which way it is facing. That means you can put it on a table and hit a button, then move it somewhere else and do the same thing. In a few minutes you have a nearly accurate &#8220;mental&#8221; model of the whole place. Now you go back and start dropping virtual flower pots everywhere.</p>
<p>This is one area where I think the smartphone guys are missing the boat and taking the cheap route. It is possible to have very accurate GPS (down to a six-inch area) with better chips and firmware, but it is cheaper to stick in old tech. Most apps today don&#8217;t need that hyper accuracy, so they aren&#8217;t bothering. Mobile AR, though &#8211; that&#8217;s a different story.</p>
<p>With that level of accuracy, you would know exactly where the mobile device is, so all you would need to know is the direction it is facing (orientation), and you could solve one of the problems of registering exactly where 3D objects and augmented media are (it is more complicated than I am describing it, but we don&#8217;t need to get into that much detail here). You wouldn&#8217;t need markers anymore.</p>
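<p>The registration idea Robert describes can be sketched roughly: with an accurate position fix plus a compass heading, a geo-anchored virtual object maps straight to a screen coordinate, no marker required. This is a hedged illustration, not any shipping SDK&#8217;s API; the function name, the 60&#176; field of view, and the flat-earth (equirectangular) approximation are all assumptions that only hold over short distances.</p>

```python
import math

# Minimal sketch: project a geo-anchored object onto the screen using only
# the device's GPS position and compass heading (equirectangular
# approximation, fine over tens of metres).

EARTH_R = 6371000.0  # mean Earth radius, metres

def screen_x(dev_lat, dev_lon, heading_deg, obj_lat, obj_lon,
             hfov_deg=60.0, screen_w=480):
    """Horizontal pixel at which to draw the object, or None if it is
    outside the camera's horizontal field of view."""
    # Object's offset from the device in metres (east, north).
    east = math.radians(obj_lon - dev_lon) * EARTH_R * math.cos(math.radians(dev_lat))
    north = math.radians(obj_lat - dev_lat) * EARTH_R
    # Compass bearing to the object, then relative to where the camera points.
    bearing = math.degrees(math.atan2(east, north)) % 360.0
    rel = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    if abs(rel) > hfov_deg / 2.0:
        return None  # behind the user or off to the side
    # Linear map of relative bearing onto the screen width.
    return int(round((rel / hfov_deg + 0.5) * screen_w))

# Device at the origin facing due north; an object ~11 m due north
# lands dead centre of a 480 px-wide screen:
centre = screen_x(0.0, 0.0, 0.0, 0.0001, 0.0)
```

<p>A full solution also needs device tilt and altitude for the vertical axis, which is why Robert notes it is more complicated than this.</p>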
<p><strong>Tish: </strong>Isn&#8217;t Wikitude doing this with Wikitude.me, their tagging app?</p>
<p><strong>Robert:</strong> Not really. That type of approach works on a very large scale, using the accelerometers, compass, and GPS to determine where you are and what is in the distance. They (and others like Layar) don&#8217;t handle &#8220;near&#8221; AR. They effectively poll your GPS and then check a database to see what is nearby and at what degree/distance, and then they draw a representation on the screen. They don&#8217;t even need a mobile device&#8217;s camera at all.</p>
<p>Even if they did things up close, it&#8217;s still based on finding landmarks or on things that are broadcasting their location. For example, if they were standing near me, they might get &#8220;robert, 37 degrees, 15 meters away,&#8221; but they wouldn&#8217;t be tracking me exactly as I walk around or have the ability to overlay graphics on ME.</p>
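<p>The polling approach described above reduces to a distance-and-bearing computation over a database of points. A minimal sketch follows; the point data and function names are made up for illustration (real browsers like Layar and Wikitude add altitude, caching, and much more), and the haversine formula is standard great-circle geometry, not either product&#8217;s actual code.</p>

```python
import math

# Sketch of the "poll GPS, check a database of nearby points" approach:
# for each stored point, compute its distance and compass bearing from
# the user's current fix, and keep the ones within range.

EARTH_R = 6371000.0  # mean Earth radius, metres

def distance_and_bearing(lat1, lon1, lat2, lon2):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    # Haversine great-circle distance.
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    dist = 2 * EARTH_R * math.asin(math.sqrt(a))
    # Initial bearing, degrees clockwise from north.
    y = math.sin(dlam) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
    return dist, math.degrees(math.atan2(y, x)) % 360.0

def nearby(user, points, radius_m=100.0):
    """Return 'name, N degrees, M meters away' strings for points in range."""
    hits = []
    for name, lat, lon in points:
        d, b = distance_and_bearing(user[0], user[1], lat, lon)
        if d <= radius_m:
            hits.append("%s, %.0f degrees, %.0f meters away" % (name, b, d))
    return hits

# User at the origin; one point ~11 m due east, one ~111 km away (filtered out):
hits = nearby((0.0, 0.0), [("cafe", 0.0, 0.0001), ("tower", 1.0, 0.0)])
```

<p>Note that, as Robert says, nothing here tracks the point itself &#8211; the output is only a direction and a distance, which is why graphics cannot be registered onto a moving person this way.</p>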
<p><strong>Tish:</strong> I retweeted your <a title="#ar" href="http://twitter.com/search?q=%23ar">#ar</a> marketing using ARToolkit + flash (markers/webcams) = Photoshop pagecurl &lt;six months. Bad design kills innovation. I know you like <a href="http://ronaldchevalier.com/" target="_blank">Dr Chevalier</a> though! What are some of the other AR marketing projects that you like? What would you like to see in terms of innovation in the next 6 months?</p>
<p><strong>Robert:</strong> The marker/webcam approach is already becoming overused and cliche (tremendously fast). Older readers will remember the ubiquitous photoshop page curl that adorned nearly every website and graphic on the internet back in the day. It was horrible. Yes, the Dr. Chevalier stuff cracks me up.</p>
<p>I want to see some big companies or ad agencies really try to do something different with AR, preferably mobile. Take some risks, do something different. Don&#8217;t follow the crowd. Innovation? I want to see some wearable displays with transparent lenses, I want a mobile device specifically designed for ubiquitous AR, I want to see some experimenting with AR in the green tech sector, and I&#8217;d like to see someone get that GiFi wireless technology from that researcher in Australia and jam it into a smart mobile. I would also like my flying car and lunar vacation now, thank you. It is almost 2010 and no one has found that black obelisk yet.</p>
<p><strong>Tish:</strong> So a few closing thoughts! What do you see as the next big thing? Hopes for the AR Consortium? Biggest obstacle for commercial AR? And what is the coolest thing you have seen this year?!</p>
<p><strong>Robert:</strong> The next big thing is what I&#8217;m working on hahaha. I hope the AR Consortium will grow and be the active catalyst in making AR mainstream, practical, and world changing.</p>
<p>The biggest obstacle is making sure that the right funding finds the right developers to develop the right technology and create kick ass applications.</p>
<p>The coolest thing I&#8217;ve seen this year would probably be <a href="http://vimeo.com/5595869" target="_blank">the facade projection stuff</a> (see below). Now, imagine that, but without the projector. That&#8217;s part of what I envision for AR in the future.</p>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="225" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=5595869&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /><embed type="application/x-shockwave-flash" width="400" height="225" src="http://vimeo.com/moogaloop.swf?clip_id=5595869&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/feed/</wfw:commentRss>
		<slash:comments>20</slash:comments>
		</item>
		<item>
		<title>Is it &#8220;OMG Finally&#8221; for Augmented Reality?: Interview with Robert Rice</title>
		<link>http://www.ugotrade.com/2009/01/17/is-it-%e2%80%9comg-finally%e2%80%9d-for-augmented-reality-interview-with-robert-rice/</link>
		<comments>http://www.ugotrade.com/2009/01/17/is-it-%e2%80%9comg-finally%e2%80%9d-for-augmented-reality-interview-with-robert-rice/#comments</comments>
		<pubDate>Sun, 18 Jan 2009 01:03:32 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[3D internet]]></category>
		<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[nanotechnology]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[Second Life]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[virtual economy]]></category>
		<category><![CDATA[virtual goods]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web 3D]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Web3.D]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[AR Geisha Doll]]></category>
		<category><![CDATA[compass in the android]]></category>
		<category><![CDATA[Denno Coil]]></category>
		<category><![CDATA[EEML]]></category>
		<category><![CDATA[hybrid augmented/virtual reality]]></category>
		<category><![CDATA[immersive mobile augmented reality]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[massively multiuser augmented reality]]></category>
		<category><![CDATA[minimally immersive augmented reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[Neogence]]></category>
		<category><![CDATA[next generation transparent wearable displays]]></category>
		<category><![CDATA[NYC Tech Meetup]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[socializing sensor data]]></category>
		<category><![CDATA[Unreal 3]]></category>
		<category><![CDATA[Web Alive]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=2620</guid>
		<description><![CDATA[Neogence is in stealth mode with an immersive mobile augmented reality platform &#8211; &#8220;tools, sdk, and infrastructure plus some applications.&#8221; They are probably six months away from YouTubing anything, according to CEO Robert Rice. But Robert rustled up this pic for me &#8211; a Google street view of Neogence R&#38;D labs: &#8220;the patio on the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><img class="alignnone size-full wp-image-2557" title="neogencesekrithqpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/neogencesekrithqpost.jpg" alt="neogencesekrithqpost" width="450" height="412" /></p>
<p><a id="zd89" title="Neogence" href="http://www.neogence.com/sekrets.html" target="_blank">Neogence</a> is in stealth mode with an immersive mobile augmented reality platform &#8211; &#8220;tools, sdk, and infrastructure plus some applications.&#8221; They are probably six months away from YouTubing anything, according to CEO <a id="rzgp" title="Robert Rice" href="http://curiousraven.squarespace.com/about-me/" target="_blank">Robert Rice</a>. But Robert rustled up this pic for me &#8211; a Google street view of Neogence R&amp;D labs: &#8220;the patio on the lower left is where I do a lot of pacing and smoking my pipe, and the porch and office upstairs is where a lot of meetings have been held.&#8221;</p>
<p><a id="rzgp" title="Robert Rice" href="http://curiousraven.squarespace.com/about-me/" target="_blank">Robert Rice</a> (<a id="x_:i" title="@RobertRice" href="http://twitter.com/RobertRice" target="_blank">@RobertRice</a>), CEO of <a id="zd89" title="Neogence" href="http://www.neogence.com/sekrets.html" target="_blank">Neogence</a>, recently tweeted:</p>
<p><em><strong>I&#8217;m changing my name to Robert Mobile Ubiquitous Geospatial Augmented Rice. I&#8217;m betting on radical changes in next 18 months.</strong></em></p>
<p>Although Robert&#8217;s new AR platform is still under wraps, I think you will get a good idea of what direction he is going in from this interview (full text at end of this post). Robert is the author of &#8220;<a id="c:rr" title="MMO Evolution" href="http://books.google.com/books?id=dkZ-6C5utz8C&amp;dq=MMO+Evolution&amp;printsec=frontcover&amp;source=bn&amp;hl=en&amp;sa=X&amp;oi=book_result&amp;resnum=4&amp;ct=result" target="_blank">MMO Evolution</a>&#8221; and is a key developer and thought leader in persistent immersive environments, simulations, virtual worlds, and massively multiplayer games, as well as large-scale communities and social networking.</p>
<h3>It is OMG finally, at least, for minimally immersive but truly useful AR.</h3>
<p>Since the launch of Android a new generation of useful augmented reality applications like <strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a></strong> are emerging.</p>
<p>After the last <a href="http://www.meetup.com/ny-tech/calendar/9466657/" target="_blank">NYC Tech Meetup</a>, my friend <a title="Nat Mobile Meets Social DeFreitas" href="http://openideals.com/" target="_blank">Nathan Freitas</a> (<a title="@NatDefreitas" href="http://twitter.com/natdefreitas" target="_blank">@NatDefreitas</a>), or rather Nathan Mobile Meets Social Freitas, demoed for me a cool graffiti app he has developed on Android. You leave a marker for your graffiti so other people can find, view, and add their own &#8211; a nice primal experience, like pissing on the lamp post to let your pack know where you&#8217;ve been. The graffiti app also taps into a long history of NYC street culture around tagging and graffiti art. For more cool mobile projects Nathan is working on &#8211; <a href="http://blog.twittervotereport.com/" target="_blank">Vote Report</a> and data collection for mass events, a guide to pubs and nightlife in New York City, and more &#8211; see his blog, &#8220;<a href="http://openideals.com/" target="_blank">Nathan&#8217;s OpenIdeals</a>.&#8221; With camera, GPS, compass, and accelerometer, and APIs on Android for temperature and light meters (no hardware yet), Nathan says Android:</p>
<p><em><strong>&#8220;seems to be the platform most likely to socialize the idea that sensor data could be a piece of every application.&#8221;</strong></em></p>
<p>As Nathan is fond of saying:</p>
<p><strong><em>The compass is a killer app enabler!</em></strong></p>
<p>Also see <a id="ixwx" title="OpenIntents" href="http://code.google.com/hosting/search?q=label:sensors" target="_blank">OpenIntents</a> for some interesting Android sensor projects.</p>
<p><img class="alignnone size-full wp-image-2558" title="wikitudepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/wikitudepost.jpg" alt="wikitudepost" width="450" height="356" /></p>
<p><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a></strong> was one of <em><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Thomas Wrobel</a>&#8217;s</strong></em> two top AR milestones for 2008 (see <a id="vwuu" title="Gamesalfresco" href="http://gamesalfresco.com/" target="_blank">Gamesalfresco</a>):</p>
<p><em><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> I think. Seems the first released, useful, AR software.</strong></em></p>
<p><a href="http://gamesalfresco.com/2008/07/20/want-your-own-augmented-reality-geisha/" target="_self">AR Geisha doll</a> is also a remarkable breakout for AR &#8211; but useful, nah.</p>
<p>I asked Robert if he also saw <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> and the <a href="http://gamesalfresco.com/2008/07/20/want-your-own-augmented-reality-geisha/" target="_self">AR Geisha doll</a> as significant breakthroughs:</p>
<p><em><strong>Yes, these are among the first attempts to get away from the novelty of simply rendering a 3D object based on a marker and making it interesting.</strong></em></p>
<p><em><strong>Remember, one of the biggest risks that AR has is being branded as &#8220;novelty,&#8221; which means &#8220;cool for five minutes but ultimately a waste of time.&#8221; I think we have a ways to go before something is truly useful, but as 2009 progresses we should start seeing some effort here. I&#8217;d guess 2010 before something really useful comes out&#8230;at least something practical.</strong></em></p>
<p><em><strong>Now, having said that, I should say that I expect entertainment and games to take the lead (as usual), although there are a few companies really trying to leverage AR and video/graphics compositing for marketing (brochures) and location-based methods (kiosks, large-screen projections, etc.).</strong></em></p>
<h3>So when is it &#8220;OMG finally!&#8221; for massively multiuser augmented reality?</h3>
<p><img class="alignnone size-full wp-image-2559" title="ar-guipost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/ar-guipost.jpg" alt="ar-guipost" width="450" height="360" /></p>
<p>The picture above is from <a id="kzm2" title="benjapo's portfolio" href="http://www.istockphoto.com/file_closeup/technology/computers/3919295-futuristic-computer-panel.php?id=3919295" target="_blank">benjapo&#8217;s portfolio</a> on iStockphoto &#8211; also see the <a id="cqhi" title="istock video here" href="http://www.istockphoto.com/file_closeup/technology/computers/3919295-futuristic-computer-panel.php?id=3919295" target="_blank">iStock video here</a>.</p>
<p><a id="ylpn" title="Alex Soojung-Kim Pang considers" href="http://www.endofcyberspace.com/2006/11/royal_college_o.html" target="_blank">Alex Soojung-Kim Pang</a> (who weighed in recently on the <a id="vr8o" title="twitter-baby" href="http://www.endofcyberspace.com/2008/12/twitter-baby.html" target="_blank">twitter-baby</a> debates &#8211; see my <a href="http://tishshute.com/twitter-baby-debates" target="_blank">KickBee Posterous</a> blog) challenges design assumptions for augmented reality that take as a given the user&#8217;s desire for numerous private enhancements to their reality.</p>
<p>Alex points out that less will probably be more, so that enhancements do not impinge on shared experience. See his write-up of a talk he gave at the Royal College of Art, <a id="bxx1" title="&quot;and the end of my own private Shibuya.&quot;" href="http://www.endofcyberspace.com/2006/11/royal_college_o.html" target="_blank">&#8220;and the end of my own private Shibuya.&#8221;</a> Photo below by <em>St&#233;fan</em>, &#8220;<em><a href="http://www.flickr.com/photos/st3f4n/130889444/in/pool-84787688@N00">Karaoke in Shibuya</a></em>&#8221;</p>
<p><em><strong>Part of the pleasure of these streetscapes is precisely that they&#8217;re collectively experienced, rather than individual visions: for even a brief period, we share with other postmodern, globe-hopping flaneurs and expatriates and temporary natives the light of the ABC-Mart sign and storefront.</strong></em></p>
<p><em><strong><img class="alignnone size-full wp-image-2560" title="karaokepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/karaokepost.jpg" alt="karaokepost" width="450" height="338" /><br />
</strong></em></p>
<p>It is collective experience of enhanced, augmented, virtual or real experiences that interests me too. This is one of the reasons I find <strong><em><a href="http://www.pachube.com/" target="_new">Pachube</a></em></strong> and the <a href="http://www.eeml.org/" target="_blank">EEML project </a>of Haque Design and Research so interesting.</p>
<p><strong><em>Extended Environments Markup Language (EEML), a protocol for sharing sensor data between remote responsive environments, both physical and virtual. It can be used to facilitate </em><em>direct connections between any two environments; it can also be used to facilitate many-to-many connections as implemented by the web service <a href="http://www.pachube.com/" target="_new">Pachube</a>, which enables people to tag and share real time sensor data from objects, devices and spaces around the world.</em></strong></p>
<h3>&#8220;Distinctions between virtual and real are as quaint and outmoded as distinctions between mind and body&#8221; (Usman Haque)</h3>
<p><img class="alignnone size-full wp-image-2603" title="chair1post1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/chair1post1.jpg" alt="chair1post1" width="150" height="150" /><img class="alignnone size-full wp-image-2602" title="remotechair-slpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/remotechair-slpost.jpg" alt="remotechair-slpost" width="150" height="150" /><img class="alignnone size-full wp-image-2604" title="chair2post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/chair2post.jpg" alt="chair2post" width="150" height="150" /></p>
<p>Usman Haque (founder of <a href="http://www.haque.co.uk/pachube.php" target="_blank">Pachube</a> and <a href="http://www.haque.co.uk/" target="_blank">Haque Design and Research</a>) points out this is an underlying premise of his work &#8211; and augmented reality (full interview coming up soon!).</p>
<p>The pictures above show the Haque Design project, <a href="http://www.haque.co.uk/remote.php" target="_blank">Remote</a>:</p>
<p>&#8216;<em><strong>Remote&#8217; connects together two spaces, one in Boston, the other in Second Life, and treats them as a single contiguous environment, bound together by the internet so that things that occur in one space affect things that happen in the other and vice versa &#8211; remotely controlling each other.</strong></em></p>
<p>There was a discussion on Twitter recently about how terms like Second Life, Exit Reality, and Virtual Worlds are misleading and outmoded. As Robert pointed out, we need:</p>
<p><em><strong>one word please&#8230;that sums up virtual and/or augmented reality, interactive, immersive, virtual worlds, mmorpgs, simulations, etc&#8230; also, I really don&#8217;t like the term &#8220;augmented reality&#8221; or &#8220;mixed reality&#8221;. Neither is all that great. And NO &#8220;matrix&#8221; or &#8220;metaverse&#8221; please</strong></em></p>
<p>Robert argues strongly that there is a stultification in virtual world technology &#8211; much of what we call virtual world technology was already, basically, where it is now in the mid-90s. And MMOGs have devolved into gameplay design &#8220;that emphasizes the single player experience and does nothing to take advantage of the potential of the massively connected internet.&#8221;</p>
<p>Robert suggested I take a cruise through a new virtual space &#8211; <a href="http://www.cooliris.com/">CoolIris</a> &#8211; to find some good pictures for this post (note the partnership between <a href="http://blog.cooliris.com/2009/01/14/cooliris-and-seesmic-streamline-video-blogging/" target="_blank">CoolIris and Seesmic to streamline video blogging</a>). I added the Cooliris plugin to Firefox, typed Augmented Reality into search, and soon I was cruising a highway of images and links. The Road Map image grabbed my attention (see below). It shows the continua that <a href="http://www.metaverseroadmap.org/" target="_blank">the Metaverse RoadMap</a> authors thought are likely to influence the ways in which the Metaverse unfolds. It is &#8220;a map of the spectrum of technologies and applications ranging from augmentation to simulation; and the spectrum ranging from intimate (identity-focused) to external (world-focused).&#8221;</p>
<p><img class="alignnone size-full wp-image-2561" title="metaverseroadmap" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/metaverseroadmap.jpg" alt="metaverseroadmap" width="452" height="427" /></p>
<p>Quite to my surprise, when I clicked out of <a href="http://www.cooliris.com/">CoolIris</a> to the source for the image, I found it had been drawn from a post I wrote in May 2007, <em><strong><a id="jv.r" title="Hybridized Digital/Physical Worlds: Where Pop and Corporate Cultures Mingle." href="../../2007/05/22/hybridized-digitalphysical-worlds-where-pop-and-corporate-cultures-mingle/" target="_blank">Hybridized Digital/Physical Worlds: Where Pop and Corporate Cultures Mingle.</a> </strong></em>My post talks about a number of hybridization experiments that were bringing together lifelogging, sensors everywhere, simulation, virtual worlds, and augmentation.</p>
<p>The striking difference from 2007 to now is that we have definitely moved on from mere experimentation. And the poles of the continua <em><strong>intimate/external, augmentation/simulation</strong></em> as expressed in the Metaverse Roadmap are now becoming entwined (note the picture above seems to be slightly different from the one used in the road map as <a id="vdcf" title="posted here" href="http://www.metaverseroadmap.org/overview/" target="_blank">published here</a> &#8211; perhaps I had an early version?).</p>
<h3>&#8220;Augmented Reality is not just about overlaying data&#8230;&#8221; (Robert Rice)</h3>
<p><img class="alignnone size-full wp-image-2562" title="totalimmersion" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/totalimmersion.jpg" alt="totalimmersion" width="450" height="332" /></p>
<p>Th<em>e </em>screenshot above is from <a id="c7vm" title="TotalImmersions video" href="http://www.t-immersion.com/en,video-gallery,36.html#">TotalImmersions video</a> demoing Augmented Reality with 3D Cell Phones.<em> Also see <a id="tvca" title="video of their immersive games" href="http://www.t-immersion.com/en,video-gallery,36.html#" target="_blank">video of their immersive games</a>, and FutureScope kiosks <a id="eje0" title="here" href="http://www.t-immersion.com/en,video-gallery,36.html#" target="_blank">here</a> and <a id="h-:s" title="here" href="http://www.t-immersion.com/en,video-gallery,36.html#" target="_blank">here</a>.<br />
</em><br />
<a id="vwuu" title="Gamesalfresco" href="http://gamesalfresco.com/">Gamesalfresco</a> noted that Will Wright delivered the best <a href="http://www.pocketgamer.co.uk/r/Various/Spore+Origins/news.asp?c=8725" target="_blank">augmented reality quote</a> of the year. Describing AR as the way of the future for games, Will Wright said:</p>
<p><em><strong>&#8220;Games could increase our awareness of our immediate environment, rather than distract us from it.&#8221;</strong></em></p>
<p>Robert points out in this interview that the term Augmented Reality itself has become associated with a very limited understanding of what &#8220;enhancing your specific reality&#8221; is really about. Robert notes:</p>
<p><em><strong>it is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.</strong></em></p>
<p><em><strong>When I talk about AR, I try to expand the definition a little bit. Usually, when you talk to someone about augmented reality, the first thing that comes to mind is overlaying 3D graphics on a video stream. I think, though, that it should more properly be any media that is specific to your location and the context of what you are doing (or want to do)&#8230;augmenting or enhancing your specific reality.</strong></em></p>
<p><strong><em>In this sense, anything that at least knows who you are (your ID, mobile phone #, etc.), where you are (GPS coord or a specific place like a cafe), and gives you relevant data, information, or media = augmented reality. Sure, you can make things more interactive or immersive, but that is the minimum.</em></strong></p>
<p><strong><em>So, in this case, yes, I think there will be networked applications in the next 18 months…mostly things that are enhanced by friends lists (you are here, your friend is over there). These will be *application specific*. My team at Neogence is already going beyond this, building a platform and infrastructure for other applications to be developed on…all networked through the same backbone. Now, in this context (the science fiction AR that we all dream about), no, I do not see anyone else trying to leap a generation or two ahead of the industry to build a massively multiuser shared AR space. Expect to see things like multi-user AR games, virtual pets, kiosk marketing, magic books, “gee whiz” presentations (tradeshow booths, entertainment parks, etc.), and so forth.</em></strong></p>
<h3>Goggles Are Not The Secret Sauce…</h3>
<p><strong><em><img class="alignnone size-full wp-image-2563" title="ar-catpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/ar-catpost.jpg" alt="ar-catpost" width="137" height="150" /><img class="alignnone size-full wp-image-2564" title="goggles-avatarpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/goggles-avatarpost.jpg" alt="goggles-avatarpost" width="150" height="150" /><br />
</em></strong></p>
<p>AR Cat (left) and Robert Rice (right)</p>
<p>In the popular imagination, the term Augmented Reality has come to mean 3D graphics projected over markers &#8211; an idea forever waiting for the advent of “wicked next generation transparent wearable displays,” nirvana for augmented reality. While such displays may be nirvana for AR (and they could be with us in less than twenty-four months), goggles are not the “secret sauce” of AR, as Robert points out.<strong><em><br />
</em></strong></p>
<p><em><strong>All the glasses are is another display device. At the end of the day, it doesn't matter if you are looking at an LCD monitor, an iPhone, a head mounted display, or a pair of wicked next generation transparent wearable displays that magically draw directly on your retinas.</strong></em><br />
<em><strong><br />
The real tricky stuff is what happens on the backend…making it all persistent, massively multiuser, intelligent, interoperable, realistic, etc. etc.</strong></em></p>
<p><em><strong><img class="alignnone size-full wp-image-2585" title="vuzix" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/vuzix.jpg" alt="vuzix" width="450" height="318" /><br />
</strong></em></p>
<p>There has been quite <a href="http://www.realwire.com/release_detail.asp?ReleaseID=10934" target="_blank">a buzz going around</a> about the new <a href="http://www.vuzix.com/iwear/products_wrap920av.html" target="_blank">Vuzix Eyewear</a>, and recently Robert talked with Vuzix and checked the Wrap 920AV eyewear out:</p>
<p><em><strong>Vuzix is not alone in pursuing the ultimate in hardware, at least as far as wearable displays. However, I think they are much farther ahead of the rest of the pack in vision, roadmap, and execution. They have put together a team with a sense of urgency and ambition that will blow the industry away. After talking to them, I got the feeling that they really know what they are doing, and there is a lot of mind blowing stuff in their pipeline. I'm sure they are one of the few companies that really gets it and has a clear vision of the future. Definitely my first choice to work with.</strong></em></p>
<h3>Hybrid Augmented/Virtual Reality</h3>
<p><img class="alignnone size-full wp-image-2566" title="qa_2post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/qa_2post.jpg" alt="qa_2post" width="450" height="347" /></p>
<p><a id="va0_" title="Cory Ondrejka posted" href="http://ondrejka.blogspot.com/2009/01/anybots-telepresence-robot.html" target="_blank">Cory Ondrejka posted</a> this picture of the Anybots telepresence robot and “congrats to <a href="http://www.tlb.org/">Trevor Blackwell</a> and the rest of the <a href="http://anybots.com/">Anybots</a> team on the launch of <a href="http://anybots.com/abouttherobots.html">QA at CES</a>.” Cory (one of the founders and former CTO of Second Life) also made some predictions for Virtual Worlds, some optimistic and some less so, including “the increasing need to be able to diversify the Second Life product offering to begin truly rebuilding the code base.”</p>
<p>Robert is unabashedly irritated with the state of play in Virtual Worlds and MMOGs:<br />
<em><strong><br />
</strong><strong>Unless both industries (Virtual Worlds and MMOGs) have some serious upheaval or radical new approaches, they will quickly be eclipsed by AR, which will eventually evolve into something hybrid…AR/VR depending on your level of access and hardware.</strong></em></p>
<p><em><strong></strong><strong>I'd like to see someone grab an engine like Offset, Crytek, HERO, or Unreal 3, and smack on a fat MMO server infrastructure (Eve or BigWorld)…toss in the right tools, and you would see a revolution and a renaissance occur at the same time in the virtual world space. All the puzzle pieces are there; just no one is putting them together the right way.</strong></em></p>
<p>I did just find out that Nortel's <a id="qkxv" title="WebAlive is powered by the Unreal 3 engine" href="http://www2.nortel.com/go/news_detail.jsp?cat_id=-8055&amp;oid=100251105&amp;locale=en-US" target="_blank">WebAlive is powered by the Unreal 3 engine</a>. You <a id="xqbw" title="can try WebAlive" href="http://www.lenovo.com/elounge" target="_blank">can try WebAlive</a> out here.</p>
<p>Robert points out how rare it has become to see people really push virtual worlds technology and MMOGs into entirely new directions. Although, of course, there are exceptions. I managed to engage some interest from Robert in the possibilities the <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">opensource modular architecture of OpenSim</a> opens up, and <a id="vx_i" title="the augmented reality experiments from Georgia Tech with Second Life" href="http://arsecondlife.gvu.gatech.edu/" target="_blank">the augmented reality experiments from Georgia Tech with Second Life</a> (screenshot below) got praise from Robert for trying to do something new. (Georgia Tech has also put out a <a id="kfzj" title="virtual pet app for the iphone" href="http://uk.youtube.com/watch?v=_0bitKDKdg0" target="_blank">virtual pet app for the iPhone</a>.)</p>
<p><img class="alignnone size-full wp-image-2567" title="picture-4" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/picture-4.png" alt="picture-4" width="321" height="245" /></p>
<p>But while Robert clearly has zero patience for virtual world technology, which he sees as stuck in the mid-nineties, he notes:</p>
<p><em><strong>the innovative and wonderful stuff about SL isn't SL, it is what people are doing and creating on their own with terrible tools *IN* SL</strong></em> [Second Life].</p>
<p>The immersive mobile augmented reality platform Robert is building, he hopes, will generate this kind of user creativity but with 21st century tools.</p>
<h3>So is it “OMG” finally for the Augmented Reality we have dreamed about?</h3>
<p>According to Robert:</p>
<p><em><strong>It really boils down to a markerless solution and a good application.</strong></em></p>
<p>In the interview below we cover a number of topics including business models for Augmented Reality, e.g., how business models based on micro-transactions and virtual goods will translate to Augmented Reality.</p>
<p>Many of the challenges virtual worlds face in becoming mainstream are similar to the challenges AR must overcome. Robert discusses these, including the interface/GUI that is a critical element for AR, solving the riddle of one world or many, patent wars in Virtual Worlds and Augmented Reality, the role of Augmented Reality in the future of sustainable computing, and what interoperability is about.</p>
<h3>The Back Story for AR/VR…</h3>
<p>In case you want to get up to speed on the background reading for Augmented Reality, this is Robert's required reading list, and Denno Coil is an absolute <strong>must</strong> see (feel free to add to this list in the comments, please).</p>
<p>“If you want to see the things that have inspired our vision of what we want to build, check out:</p>
<p>* Dream Park by Larry Niven and Steven Barnes<br />
* Rainbows End by Vernor Vinge<br />
* Spook Country by William Gibson<br />
* Halting State by Charles Stross<br />
* The Diamond Age by Neal Stephenson<br />
* Donnerjack by Roger Zelazny and Jane Lindskold<br />
* Otherland by Tad Williams<br />
* Neuromancer by William Gibson<br />
* Idoru by William Gibson<br />
* Cryptonomicon by Neal Stephenson</p>
<p>and watch the whole anime of Denno Coil (subbed NOT dubbed!)”</p>
<p><img class="alignnone size-full wp-image-2568" title="dennoucoil" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/dennoucoil.jpg" alt="dennoucoil" width="450" height="256" /></p>
<p>Screenshot from Denno Coil from<a id="yic5" title="Concrete Badger" href="http://www.concretebadger.net/blog/2007/12/17/dennou-coil-full-series-2007-in-12-day-4/" target="_blank"> Concrete Badger</a>.</p>
<h3>Interview With Robert Rice</h3>
<p><strong>Tish Shute:</strong> I am glad to hear that you are working on this [an immersive mobile augmented reality platform]!</p>
<p><strong>Robert Rice:</strong> We switched gears from MMO stuff about a year ago and we are finally getting some traction. It is very hard doing anything in this economy right now, but we found an opportunity to take AR to a new level beyond what you see on YouTube. AR is still too “cute” and novelty. We don't want to play around.</p>
<p><strong>Tish Shute:</strong> I like Wikitude 'cos it even manages to do something useful!</p>
<p><strong>Robert </strong><strong> Rice</strong><strong>:</strong> Yeah, useful = traction. Now that we are getting near a prototype we are starting to get a lot of interest even though we are still technically way under the radar.</p>
<p><strong>Tish Shute:</strong> r u funded?</p>
<p><strong>Robert </strong><strong> Rice</strong><strong>:</strong> privately funded, some revenues from an early license, and ongoing discussions with several institutional investors. So, we have some funding, but nothing spectacular just yet.</p>
<p><strong>Tish Shute:</strong> are you just developing an AR platform?</p>
<p><strong> Robert Rice:</strong> hrm, sort of, but not just that. By platform I mean tools, SDK, and infrastructure, plus some applications. The idea is to build something that facilitates everyone else making cool things and useful applications for different industries/sectors.</p>
<p><strong>Tish Shute:</strong> Yes, that is the cool thing to do, but isn't that hard to fund!</p>
<p>(Robert grins) Well, that depends on the business model. We've got that figured out. I'd be absolutely happy if everyone and their brother were making applications on our stuff; that gives us an edge on market penetration/saturation. There are plenty of examples that prove the model. If you give people free and easy to use tools, they will run with it. ARToolKit, for example, has tons of people making nifty things and posting videos on YouTube, which has pushed them to the forefront as THE AR middleware to use right now. Or heck, look at YouTube's free service, and they dominate video sharing. Sure there will be a lot of “noise”, but there will also be a lot of “signal” that will rise to the top; facilitating and enabling is creating value in its own right.</p>
<p><strong>Tish Shute:</strong> But how do you expect to monetize?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> There are a good half a dozen ways to monetize AR or an AR platform.</p>
<p><strong>Tish Shute:</strong> What are your top 3?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> hrm, microtransactions, localized mobile advertising, and enterprise solutions (visualization)</p>
<p><strong>Tish Shute:</strong> Do you think the consumer market will give the lead?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> I'm not sure. We are getting people from academia, intelligence, defense, border security, and some corporate types knocking on our door already, and pretty aggressively. It may be that those sectors push AR before consumer entertainment really kicks off.</p>
<p>But going back to a discussion we had earlier &#8211; yes, working with “no markers” is a big deal.</p>
<p><strong>Tish Shute:</strong> Can you talk about what you are doing there or is it still under wraps?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> I can say that between some university tech transfer and some of our own proprietary stuff, we are using some fairly common visual tracking technology. If you are really plugged into the AR scene, you will know there are probably half a dozen visual tracking methods out there. We just looked for the best one, licensed it for commercial use, and then started working our magic. This is a very small piece of the overall effort, but worth noting.</p>
<p>The downside of working with university tech is that it is usually research-based, incomplete, and not wrapped up in a nice commercial package. On the upside, it can be a good start to build on.</p>
<p><strong>Tish Shute:</strong> As you know, I am very interested in “technology that matters” &#8211; in particular, tech that can help us accomplish the urgent goal of sustainable living.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> oh, I'm pretty keen on sustainable living as well…after I sell off a few companies and have money of my own, I'm going to get into arcologies<br />
…<br />
Robert grins</p>
<p>The interesting thing with the visual stuff, combined with our other tech, is that we can make things multiuser, persistent, dynamic, and mobile.<br />
The markers (fiducials) are really, really limiting outside of basic applications. You can't really plaster everyone and everything with a marker. And they are, by nature, static (even if they are animated or whatever).</p>
<p>Also… our stuff works indoors and outdoors, even without a GPS connection.<br />
…<br />
Robert grins</p>
<p><strong>Tish Shute:</strong> Now that does sound interesting!</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Yeah, with visual, you don't need a compass or accelerometers either. Less hardware : )</p>
<p>You start with wifi triangulation or a GPS coord to get a “brute” location, and then you use the visual stuff for down-to-the-meter accuracy, and that, by nature, gives you your orientation and positioning.</p>
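<p>For readers who want the gist in code: the coarse-then-fine positioning flow Robert describes can be sketched in a few lines of Python. This is purely illustrative &#8211; the function names, the choice of sources, and the 1-meter accuracy figure for the visual pass are my assumptions, not Neogence's actual pipeline:</p>

```python
from math import cos, radians

def coarse_location(gps_fix, wifi_estimate):
    """Pick the best available rough position; each fix is (lat, lon, accuracy_m)."""
    candidates = [fix for fix in (gps_fix, wifi_estimate) if fix is not None]
    if not candidates:
        raise ValueError("no coarse position source available")
    return min(candidates, key=lambda fix: fix[2])  # smallest error radius wins

def refine(coarse, visual_offset_m):
    """Apply a visual-tracking correction (east_m, north_m) to the rough fix."""
    lat, lon, _ = coarse
    d_east, d_north = visual_offset_m
    lat += d_north / 111_320.0                       # meters per degree of latitude
    lon += d_east / (111_320.0 * cos(radians(lat)))  # longitude shrinks with latitude
    return (lat, lon, 1.0)                           # visual pass: ~1 m accuracy

# GPS (25 m error) beats wifi (60 m) here; visual tracking then refines it.
pos = refine(coarse_location((35.790, -78.640, 25.0), (35.791, -78.641, 60.0)),
             (3.0, -2.0))
```

<p>The point of the two-stage design is that the cheap coarse fix only has to be good enough to seed the visual tracker, which then supplies both position and orientation.</p>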
<p><strong>Tish Shute: </strong>Wow this is beginning to sound very interesting!</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Once you have that, it doesn't matter where you go; it continues to track and continually refines areas you have been before. We've spent the last year figuring all this out. There are so many problems and obstacles ahead for anyone trying to do what we are, but we have already discovered solutions.</p>
<p>Oh, visual tracking = gesture-based interfaces too. That's going to take some work, but it's doable. The real pain in the ass there isn't the actual tracking, it is the interface design.</p>
<p>That's something that almost every AR company, venture, and research program is missing out on entirely. They are so focused on making cute things with markers. They are missing the larger problems of AR spam, interface, iconography, GUI, metaphor, interoperability, privacy, identity.</p>
<p><strong>Tish Shute:</strong> So how are you dealing with all that!!</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> We took the backwards approach of trying to think where we want things to be in ten years (and we read all the cool books…Vinge, Stephenson, Gibson, etc.) and then we spent time trying to think of what the potential problems are…like AR spam. It's bad enough when a giant penis flies by in Second Life; we don't want that to happen in a global wireless AR platform.</p>
<p><strong>Tish Shute: </strong>Do you have a prototype yet?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> hrm, 6 months away from youtubing something. The problem has been slow funding, which equals slow development. We also don't want to show our cards too soon…too many potential competitors out there.</p>
<p>â¦<br />
Robert grins</p>
<p><strong>Tish Shute:</strong> When you say microtransactions, what is the business potential there?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>hrm, last year I think $1.5B was spent on virtual items. That's games and virtual worlds. That should hit $5B in a couple of years. That's basically people buying and selling things like WoW gold or items in SL or whatever. Microtransactions is basically the same thing, but in AR space.</p>
<p>Why couldn't a 3D artist make a wicked animated 3D dragon and then sell it to someone else? With AR, you could sit it on your shoulder. With a good scripting engine, you could train it to do stuff. That's what I want to enable.</p>
<p>tools + sdk + platform = enabling people to make and create. Add in a commerce level (microtransactions) and voila.</p>
<p><strong>Tish Shute:</strong> At the moment all of these virtual goods are very platform specific, is that a problem for you?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Not at all. This is at a higher level. You have to switch mental models when you talk about what AR could or should be. For example, let's contrast the web and virtual worlds. For every virtual world you go to, you have to download a whole new client. Imagine if that model were applied to the web…you would need a brand new browser for every website you went to. That is just so…wrong.</p>
<p>It's the same thing for AR…people are thinking about it with the same mental and business models and development philosophies as virtual worlds or the web. There are some things and aspects that work fine, but not everything.</p>
<p>Virtual worlds are, by nature, necessarily different and walled gardens. The idea of 100% open and interoperable virtual worlds is a red herring…it sounds good, but in practice it is a really dumb idea.</p>
<p><strong>Tish Shute: </strong>I was wondering if you had a way to leverage all the 3D content already created, 'cos that would jump start things in AR, wouldn't it?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Oh yeah, that's easy. They all use the same polygons. Any virtual item in any game or virtual world was likely created with 3D Studio or Maya or something similar, and would be easy to convert and use.</p>
<p><strong>Tish Shute:</strong> So people could bring their WoW weapons into your system?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Not legally, but sure. It's just a 3D model with a texture. It doesn't matter if you use Corel Draw or Photoshop or Paint Shop Pro…or one screwdriver or another. Part of my team's advantage is that we are all experienced in MMORPG and virtual world design and development. We know the tools, the tech, and what works and what doesn't.</p>
<p><strong>Tish Shute:</strong> But some of the 3D content created in the social worlds is what has most value to people.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Right, and that can be exported out easily.</p>
<p><strong>Tish Shute: </strong>But back to “real” life applications. Is your platform really markerless?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Yes. Marker = printed icon or glyph, also known as a fiducial.</p>
<p><strong>Tish Shute:</strong> But u must have some marker?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> hrm, more accurately, you need a point of reference.</p>
<p>Visual tracking has been around for more than a decade. Lots of work for robots and other sectors.</p>
<p><strong>Tish Shute:</strong> But isn't the specificity of reference in terms of RL applications a vital key, for example, for a database of things?</p>
<p>(Robert grins) That is a different problem…tracking, registration, mapping, positioning, etc. That question has to do with mapping, which is related to visual tracking but not the same thing. We have a rather unique approach to some of this that I can't discuss (patent pending).</p>
<p><strong>Tish Shute: </strong>But for example, to create an augmented natural history of food &#8211; say I want to point at the slab of meat on my plate and know where that cow came from, what feed lot it was in, how it was treated, etc.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>That is not possible without ubiquitous nanotechnology. Shall I explain?</p>
<p><strong>Tish Shute:</strong> Yes please!</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Ok, let's step back a minute and turn that burger back into a cow… The first problem (of this particular situation) is differentiating one cow from another. Since most cows look alike, you can either attempt to discriminate visually (cow patterns) or use a much simpler option, like giving each cow an RFID chip in its bell or hoof.</p>
<p>Now, most people would try to figure out how to jam all sorts of info into the RFID chip, which sounds like a good idea but isn't. The trick would be to simply use the RFID to store a unique identifier, which is then linked to a database elsewhere.</p>
<p>That database should continually be updated with whatever relevant information you need, so as you get close with your AR laptop, wearable displays, or embedded brain chip, you get the identifier broadcast, then you get the info downloaded to you, and it “sticks” to the cow with the generic visual tracking (object following; even a simple bounding box is sufficient for a slow moving cow).</p>
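<p>The pattern Robert describes &#8211; the tag stores only an identifier, and everything else is resolved against a continually updated database &#8211; can be sketched like this. All names, fields, and records here are hypothetical, invented for illustration:</p>

```python
# The tag carries nothing but an ID; mutable facts live server-side,
# so the data you see is never stale. (Illustrative sketch only.)
COW_DB = {
    "COW-0042": {
        "farm": "Example Farm, Utah",   # hypothetical record
        "feed": "grass",
        "last_checkup": "2009-01-10",
    }
}

def read_tag(rfid_payload: str) -> str:
    """The RFID tag broadcasts only a unique identifier."""
    return rfid_payload.strip()

def lookup(cow_id: str) -> dict:
    """Everything beyond the ID is resolved against the database."""
    return COW_DB.get(cow_id, {"error": "unknown animal"})

info = lookup(read_tag("COW-0042"))
```

<p>The same split applies to the UPC or glyph on the butcher's package later in the story: the mark on the package identifies, the database describes.</p>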
<p>So, up to that point, you can get tons of information about that specific cow, or that cow population (remember, AR is not just about overlaying data…it is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.). Tie in data visualisation and some farmer tools and all sorts of other things happen. Now, let's move the timeline ahead a bit.</p>
<p>The butcher gets the cow and does his handiwork…because we know all the info about the cow, all of the meat can be properly labeled and marked, ideally with a UPC code or a unique glyph (somewhat problematic depending on how many unique glyphs you can create). So, while you are in the grocery store, you can access the relevant shopping data…age of cow, state of origin, type of feed, how many spots, how much body fat, which butcher, whatever &#8211; not because of what is inside the package, but because of the package itself.</p>
<p>Getting back to your hamburger, the problem is that it is a burger…there is nothing to distinguish that burger from another one at the table…unless you stuck an RFID chip in it or splattered it with ink and a unique glyph, or maybe used a special one of a kind plate.</p>
<p>However, a properly designed AR system could say “hey! that's a hamburger! and I know I am at Fat Daddy's Burger Joint in Raleigh, North Carolina on Glenwood Avenue, and I know that they cook their burgers this particular way, and their meat supplier is those guys over there, and they usually get their cow meat from a farm out in Utah.”</p>
<p>With ubiquitous nanomites or whatever, then it's not that far out to consider edible nanos that are in the meat and that broadcast info, so a slab of meat can tell you about itself and broadcast that to the general public.</p>
<p><strong>Tish Shute:</strong> What useful scenarios can we create without the nanomites?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> If it wasn't a burger or a consumable organic, the scenario changes.</p>
<p><strong>Tish Shute: </strong>What is the time scale on nanomites?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> ehhhhhhh, 20 years minimum if we are lucky. They sound good on paper, but there is a whole book's worth of problems and reasons why they are so far off…as consumer grade, all over the place, type of stuff.</p>
<p><strong>Tish Shute:</strong> Did you see the Nokia Home Control center?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Yes, I saw the Nokia stuff.</p>
<p>Sensors &#8211; like security systems, temperature control, etc. &#8211; all become “sources of data” that an AR system can visualize. So yes, that's easily doable. You could do that in a short period of time with some half decent engineers.</p>
<p>The trick of what Nokia is doing is aggregating sensor data from a building/home/facility, mashing it together, and sending the mobile device alerts and data visualization. Conceptually it's rather simple, but no one has done it right or well yet.</p>
<p>It wouldnâ€™t surprise me if Nokia pulled it off.</p>
<p><strong>Tish Shute:</strong> Yes, and if they do, and someone does an AR interface to it, that would be an inflection point for AR?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> In a roundabout way, yes. You could get data directly from your house, or get it through your mobile device and in either case, use the AR for visualization and control.</p>
<p>The interface/GUI is a critical element for AR. That is one of the areas where it, as an industry, risks doing a bad job and turning into just a fad or another novelty like VR. Virtual worlds have been struggling with that for a while, but MMORPGs have had the effect of extending their life cycle.</p>
<p><strong>Tish Shute: </strong>Yes VWs have not solved the interface problem.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>The interface is one of their problems, yes. Most virtual worlds are stuck in 1996/98.</p>
<p><strong>Tish Shute:</strong> If AR is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc., it seems that it is the ideal interface for home control?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Well for home control, you must know:</p>
<p>1) Who am I? Am I authorized to know this information? Am I a guest?</p>
<p>2) Where am I? Is this my house? Or someone else's?</p>
<p>3) What am I doing? Do I want to make all the doors lock? Turn on or off lights? Open the garage door? Trigger the security alarm?</p>
<p>So the same questions apply.</p>
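<p>Robert's three questions map naturally onto a context object plus an authorization check. A minimal sketch in Python, with the policy, user names, and command set all invented for illustration:</p>

```python
from dataclasses import dataclass

@dataclass
class Context:
    user_id: str    # 1) who am I?
    location: str   # 2) where am I?
    intent: str     # 3) what am I doing?

# Hypothetical policy for one house.
HOME = {"owner": "alice", "allowed_guests": {"bob"}}
GUEST_INTENTS = {"lights_on", "lights_off"}  # guests get a reduced command set

def authorize(ctx: Context) -> bool:
    """Answer the three questions before executing a home-control command."""
    if ctx.location != "home":                 # must actually be at the house
        return False
    if ctx.user_id == HOME["owner"]:
        return True                            # the owner may do anything
    return ctx.user_id in HOME["allowed_guests"] and ctx.intent in GUEST_INTENTS
```

<p>The design point: identity, place, and intent are evaluated together, which is exactly what distinguishes an AR interface from a plain remote control.</p>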
<p>I'd say that all virtual worlds are stuck in the mid-90s. They are at least a decade behind the game worlds…in technology, design, implementation, architecture, etc. etc. In my opinion, things like Second Life are shameful in how they are presented as state of the art, innovative, ground breaking, new, wonderful, and world changing.</p>
<p>But that's another topic of conversation : )</p>
<p><strong>Tish Shute: </strong> Well, for me the contribution of VWs is the presence-enabled real time interaction with the application (as a 3D info machine) and the context with other people.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Oh, there is no doubt that they are greatly useful and have a phenomenal amount of potential.</p>
<p>They *could* be all those things I just said that SL isn't…the problem is that they are either just existing, or they are meandering around without any real focus or direction. They aren't evolving.</p>
<p>Even MMORPGs are losing their way and beginning to stagnate terribly.</p>
<p><strong>Tish Shute:</strong> yes I agree</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>But, AR has the potential to change a lot of things.</p>
<p>I'm sure you have seen <a id="n_22" title="the yellowbook commercials" href="http://www.youtube.com/watch?v=zdPFBTQpk-U" target="_blank">the yellowbook commercials</a>? The technologies you are seeing here are doable in, hrm, a year or less maybe. The tricky part is the interactivity and AI…that is, the content. Everything else isn't a problem. The avatar there could be photorealistic or stylized like a WoW character.</p>
<p>You could do that to some degree with markers for registration, but dynamically changing the content linked to those markers is a little weird.</p>
<p>(By the way, for the record, I like markers just fine; I just don't think they are useful for real-world mobile applications.)</p>
<p>I also think that the guys who want to dust the planet with miniature RFID chips are on crack and are going about it the wrong way.</p>
<p><strong>Tish Shute: </strong>A high level of interactivity is hard though. Isn't it? Even in VWs it is very limited.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> It depends on whether you can track what the user is doing and interpret that properly. Interactive is also a very loose term.</p>
<p>Clicking a button and making a light blink could be considered interactive.</p>
<p><strong>Tish Shute: </strong>In VWs a high level of interactivity would be to wield a virtual hammer and have a real nail go in! Is physics part of the problem?</p>
<p><strong>Robert Rice:</strong> Physics isn't difficult; there's plenty of middleware out there for it. The problem with that isn't so much the physics as it is the scale and purpose.</p>
<p><strong>Tish Shute:</strong> well for robotics?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> That gets into a conversation about meshes, textures, volumetric collision detection, and stuff.</p>
<p><strong>Tish Shute:</strong> virtual robotics?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> You mean teleremote/telepresence of real robots?</p>
<p><strong>Tish Shute: </strong>yes!</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Ah, for that you need some tactile feedback and some other stuff &#8211; doable, but insanely difficult. That's why you don't see a whole lot of remote controlled surgery robots all over the place.</p>
<p>They do exist…</p>
<p><strong>Tish Shute: </strong> Will AR contribute to sustainable living by freeing us from some of our energy hogging devices?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>AR will ultimately encourage energy saving and recycling. Where did I leave a light on? Where is the nearest trash can? What is the UV index outside today?</p>
<p>Yes, computers are energy hogs, but as we start seeing larger SSD drives, more efficient CPUs (even if the number of cores increases in multiples), and so on, the power will go down.</p>
<p>Also, think about this…wearable displays potentially use less energy than LCD monitors on your desk.</p>
<p><strong>Tish Shute: </strong>Yes, I should pick the brains of my Intel chums on energy saving!</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Getting rid of the monitor and switching to solid state drives will save an assload of power. Yes, I said assload.</p>
<p>Tell your Intel chums to quit screwing around with single core mobile CPUs. We need multiple cores that are smaller, faster, and use less power.</p>
<p><strong>Tish Shute: </strong>Is AR the sustainable future of VWs and MMOGs?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>The fun stuff will happen when they are both integrated in some fashion.</p>
<p><strong>Tish Shute:</strong> So perhaps this is why the Georgia Tech guys are trying to combine AR and SL (<a id="boum" title="see video here" href="http://uk.youtube.com/watch?v=O2i-W9ncV_0&amp;feature=related" target="_blank">see video here</a>).</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> That first video was pretty damn cool. It just pains me that they are using SL for it. And omg, all those markers on the table.</p>
<p>Although, I couldn&#8217;t care less about seeing my SL avatar on my coffee table. I would rather see an avatar representing ME in the real world, moving around in a virtual world that is a &#8220;to scale&#8221; replica of the real world. That is MUCH more interesting and innovative.</p>
<p>But even if I don&#8217;t like where they are going, or that they are using SL, the important thing is that they are doing something and forging ahead. I have a massive amount of respect for anyone, private, government, or academic, that is doing that.</p>
<p>And yes, the door (or window, or looking glass) has to work both ways for maximum potential; at least, that&#8217;s what I&#8217;d like to see. They don&#8217;t *have* to, but it would be rather cool.</p>
<p>And going back to sustainability, AR has the potential to make monitors generally obsolete, laptops too. That&#8217;s a lot of power-hungry devices with all sorts of metals and batteries inside.</p>
<p>But, even if the tech was absolutely crazy awesome right this minute, it would take a little while for consumer adoption.</p>
<p><strong>Tish Shute:</strong> But AR unleashes the mobile device?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Yes, AR is going to be built on powerful mobile devices for the near future, eventually embedded comps in clothing and whatnot. But that is a ways off.</p>
<p>Entertainment is going to be the first huge driver.</p>
<p><strong>Tish Shute:</strong> So people will get used to having a pet virtual dragon on their shoulder first?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Yes, virtual dragon is way cool, easy tech for games, and can eventually be leveraged into a smart agent which becomes a practical application&#8230;agent-based contextual search, etc. Yes, entertainment will also drive people to get used to the tech.</p>
<p><strong>Tish Shute: </strong>Oh thanks for turning me on to <a id="kzbv" title="gamesalfresco" href="http://gamesalfresco.com/" target="_blank">gamesalfresco</a>!</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>I&#8217;ve noticed that the good stuff usually gets linked to there. They don&#8217;t list my blog, but that&#8217;s what I get for staying under the radar and not posting often. But anyway, gamesalfresco is the first place I send people that need a crash course in AR. Great site, great owner.</p>
<p><strong>Tish Shute:</strong> So are you in agreement with Thomas Wrobel&#8217;s positioning of <em><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a></strong></em> and <em><strong><a href="http://gamesalfresco.com/2008/07/20/want-your-own-augmented-reality-geisha/" target="_self">AR Geisha doll</a></strong></em> as being significant milestones for AR?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Yes, these are among the first attempts to get away from the novelty of simply rendering a 3D object based on a marker and making it interesting.</p>
<p class="MsoNormal">Remember, one of the biggest risks that AR has is being branded as &#8220;novelty,&#8221; which means &#8220;cool for five minutes but ultimately a waste of time.&#8221; I think we have a ways to go before something is truly useful, but as 2009 progresses we should start seeing some effort here. I&#8217;d guess 2010 before something really useful comes out&#8230;at least something practical.</p>
<p>Now, having said that, I should say that I expect entertainment and games to take the lead (as usual), although there are a few companies really trying to leverage AR and video/graphics compositing for marketing (brochures) and location based methods (kiosks, large screen projections, etc.)</p>
<p><strong>Tish Shute:</strong> Many people would say Snow Crash (metaverse) is now and Halting State (AR) is ten years from now. But you are seeing a development timeline for some popular AR apps in the next 18 months?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Anyone that says Snow Crash is -now- is living in a box. Virtual Worlds, Virtual Reality, and immersive tech in general stopped innovating in the mid 90s. I&#8217;m continually flabbergasted at the number of people that think that things like Second Life are state-of-the-art or innovative. You might as well try to market a Walkman as cutting edge, even though we have iPods out there.</p>
<p>I&#8217;d like to see someone grab an engine like Offset, Crytek, Hero, or Unreal 3, and smack on a fat mmo server infrastructure (EVE or BigWorld)&#8230;toss in the right tools, and you would see a revolution and renaissance occur at the same time in the virtual world space. All the puzzle pieces are there, just no one is putting them together the right way.</p>
<p><strong>Tish Shute:</strong> Why doesn&#8217;t anyone do that?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>It&#8217;s not cheap, people will only fund a copy of something that exists already, people fear change and innovation, etc. The list goes on. The right money goes to the wrong people all the time.</p>
<p>Alternatively stated, there is a lot of &#8220;right idea, wrong implementation.&#8221;</p>
<p>MMORPGs carried the torch and have made huge strides on the technology front, but have devolved in design. More often than not the gameplay emphasizes the single player experience and does nothing to take advantage of the potential of the massively connected internet.</p>
<p>Unless both industries have some serious upheaval or radical new approaches, they will quickly be eclipsed by AR, which will eventually evolve into something hybrid&#8230;AR/VR depending on your level of access and hardware.</p>
<p>But yes, I&#8217;d say that the next 18 months are going to be very interesting with a lot of money being thrown around, new ventures, and plenty of content/applications. I expect most of this will be centered on single-user AR experienced through a mobile device with a screen (iPhone, Android, etc.). I expect that there will be a significant boost after Vuzix releases some of their wearable *transparent* displays, putting Microvision back into the &#8220;has potential but is too quiet&#8221; position.</p>
<p><strong>Tish Shute:</strong> AR conjures an image in many people&#8217;s minds of dreadful headgear!</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Yes, it is either transparent wearable displays (in eyeglass form factor) or nothing. HMDs with miniature LCD or OLED displays are good for streaming video, but for the mobile ubiquitous AR we all dream about, it has to be something that looks and feels like a pair of Oakleys.</p>
<p>I should also mention that several different types and modes of AR are going to find themselves being defined and refined over the next two years as we continue to blaze new trails, establish a lexicon (we keep borrowing terms from games, VR, virtual worlds, mmorpgs), and really work out the how as well as the why.</p>
<p>Even though the idea of AR has been around for a long time, the technology is just beginning to emerge, and very few people are even looking far enough ahead to figure out the problems and solutions that the tech creates. Really, who is thinking about how to deal with AR spam right now?</p>
<p><strong>Tish Shute: </strong>Do you see any successful networked AR applications emerging in the next 18 months?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Yes and no.</p>
<p>When I talk about AR, I try to expand the definition a little bit. Usually, when you talk to someone about augmented reality, the first thing that comes to mind is overlaying 3D graphics on a video stream. I think though, that it should more properly be any media that is specific to your location and the context of what you are doing (or want to do)&#8230;augmenting or enhancing your specific reality.</p>
<p>In this sense, anything that at least knows who you are (your ID, mobile phone number, etc.), where you are (GPS coordinates or a specific place like a cafe), and gives you relevant data, information, or media = augmented reality. Sure, you can make things more interactive or immersive, but that is the minimum.</p>
<p>So, in this case, yes, I think there will be networked applications in the next 18 months&#8230;mostly things that are enhanced by friends lists (you are here, your friend is over there). These will be *application specific*. My team at Neogence is already going beyond this, building a platform and infrastructure for other applications to be developed on&#8230;all networked through the same backbone. Now, in this context (the science fiction AR that we all dream about), no I do not see anyone else trying to leap a generation or two ahead of the industry to build a massively multiuser shared AR space. Expect to see things like multi-user AR games, virtual pets, kiosk marketing, magic book, &#8220;gee whiz&#8221; presentations (tradeshow booths, entertainment parks, etc.), and so forth.</p>
<p>The big thing I&#8217;m worried about is AR becoming the next Silicon Valley trend&#8230;once they realize the potential, an enormous amount of capital will flow to a bunch of startups with half-baked ideas, weak business models, ten-year-old tech, and a lot of overhyped marketing. That is the very thing that will kill this technology as something that has true power and potential to literally change the way we interact with each other, our surroundings, information, and media.</p>
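<p>Rice&#8217;s minimal definition of AR &#8211; know who you are, know where you are, and serve relevant data &#8211; can be sketched in a few lines. This is an illustrative sketch only: the place names, coordinates, and the <code>nearby</code> helper are invented for the example, not part of Neogence&#8217;s platform or any real AR API.</p>

```python
import math
from dataclasses import dataclass

@dataclass
class POI:
    """A point of interest that could be surfaced as AR content."""
    name: str
    lat: float
    lon: float

def nearby(user_lat, user_lon, pois, radius_m=100.0):
    """Return the points of interest within radius_m metres of the user."""
    def dist_m(lat1, lon1, lat2, lon2):
        # Equirectangular approximation: plenty accurate at street scale.
        x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        y = math.radians(lat2 - lat1)
        return math.hypot(x, y) * 6_371_000  # mean Earth radius in metres

    return [p for p in pois if dist_m(user_lat, user_lon, p.lat, p.lon) <= radius_m]

pois = [POI("trash can", 40.7411, -73.9897), POI("cafe", 40.7500, -73.9900)]
print([p.name for p in nearby(40.7410, -73.9895, pois)])  # the trash can, roughly 20 m away
```

<p>Everything past this filter &#8211; persistence, multiuser state, rendering &#8211; is the hard backend work Rice describes, but the &#8220;where is the nearest trash can?&#8221; query itself is just a proximity lookup like this one.</p>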
<p><strong>Tish Shute: </strong>Do you think AR has value for a project like Pachube that helps us connect data from lots of different environments and sensor/actuator data?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> I think that AR has value as an interface to this data (essentially data visualization based on information streaming from a sensor or source that is interpreted in some dynamic graphical manner that has meaning). This is one of the &#8220;big areas&#8221; where ubiquitous augmented reality and wearable computing will really shine. I&#8217;ll definitely be keeping an eye on Pachube.</p>
<p><strong>Tish Shute:</strong> I can&#8217;t help it! I am really interested to hear more about the Vuzix glasses.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Yeah, everyone is getting hung up on the glasses as the end-all, be-all, and having markers everywhere too.</p>
<p>All the glasses are, is another display device. At the end of the day, it doesn&#8217;t matter if you are looking at an LCD monitor, an iPhone, a head mounted display, or a pair of wicked next generation transparent wearable displays that magically draw directly on your retinas.</p>
<p>The real tricky stuff is what happens on the backend&#8230;making it all persistent, massively multiuser, intelligent, interoperable, realistic, etc. etc.</p>
<p>I think that we are within 24 months of the magic wearables (these new ones by Vuzix are probably the real first generation attempt at doing it right). They won&#8217;t be perfect, but I expect they will be functional&#8230;and once we have functional, we can start doing the good stuff.</p>
<p><strong>Tish Shute:</strong> You mentioned your disappointment with VWs and MMORPGs earlier. Could you tell me more about that?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Yeah, there was an evolutionary divergence between virtual worlds and mmorpgs a while back. One stagnated almost completely, and the other leapt ahead in one sense and devolved horribly in the other sense. Neither is where the state of the art should be. That is a whole other conversation, and probably a second book.</p>
<p><strong>Tish Shute:</strong> So making AR persistent, massively multiuser, intelligent, interoperable, realistic, etc. &#8211; that is where your efforts are going?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Yes. I fully expect that the hardware is almost ready for it. You can cobble together some amazing things in the lab right now, and I think commercial viability is imminent. The real value (as far as I&#8217;m concerned) is in making it mobile, wireless, persistent, and massively multiuser. You could argue that augmented reality will take over where virtual reality failed and become internet 3, internet one being the internet, internet two being the web&#8230;</p>
<p>MMORPGs are nothing more than single player games in a multiuser environment these days. I&#8217;m more than a bit bitter about it. All the right money went to the wrong people, and the best games we have are barely shadows of what we could have had by now.</p>
<p><strong>Tish Shute:</strong> Are there any open source AR platform dev projects?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Open source? Hrm, I&#8217;m sure there are multiple ones out there.</p>
<p>If not entirely open source, there are plenty of things to experiment with that are generally free if you aren&#8217;t trying to sell something; DART and ARToolKit come to mind as very accessible applications.</p>
<p>Marker-based AR is very important right now&#8230;it is easy, low tech, understandable, highly customizable, and most importantly, accessible to the average joe. Ultimately though, we need a method of pure tracking&#8230;no markers glued to everything on the planet, no &#8220;billions of RFIDs&#8221; embedded in every square inch of every object on the planet, etc.</p>
<p><strong>Tish Shute:</strong> What do you mean by interoperability in AR? And what do you think about the development of standards?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong> Ooh, good question.</p>
<p>Ok, so the internet is basically computers communicating with computers, and the web is mostly pages linking to other pages (I&#8217;m greatly oversimplifying here). Hold this thought for a minute.</p>
<p>Switch over to MMORPGs. If you want to play in one (or a virtual world), you need to download a client that is specific to that world. One client does not work with another world. There are plenty of efforts to change this, but they are all barking up the wrong tree. The specific uniqueness of each world defeats the need and purpose of true interoperability, unless you completely reinvent the whole thing with a common backbone, features, functionality, etc. The very nature of virtual worlds and mmorpgs rebels against this. You absolutely do not want an avatar from second life running around in world of warcraft (for reasons that should be obvious).</p>
<p>On the other hand, with the web, you can use just about any client (browser) to access nearly any website (some requiring plugins or whatever).</p>
<p>The thing with augmented reality is, how do we go about making this? I&#8217;ve seen a few people thinking about this from the wrong perspective. There was a question at the last TechCrunch to the Sekai Camera guys (a conceptual AR application for the iPhone) where someone on the panel wanted to know how website owners would convert their content for augmented reality. BZZZZZT! That is a fundamental misunderstanding of what AR is, or could be, and it falls into the same trap I see a lot of people fall into&#8230;and that is looking at AR through the web 2.0 lens or the virtual world lens. It is absolutely fundamentally different at the core&#8230;sure there are similarities: it has social networking/media applications and properties, and it has 3D graphics, but it stops there.</p>
<p>Ubiquitous augmented reality will be dramatically different depending on which standards, approaches, and philosophies get the most traction first. Will you walk down the street with your AR glasses and have a pop up every 30 feet asking you if you want to access the AR content on another server? Will you then have to register, subscribe, or whatever?</p>
<p>Or will all AR content be mediated by one sole master control server deep in the bowels of Google? What about some other option? Will you need different sets of glasses to access different features and content from multiple sources?</p>
<p>At the end of the day, it should not matter what brand of glasses you are wearing, you should never have to deal with AR server popups to join/subscribe, and so forth.</p>
<p>Interoperability, in the context of what I was saying earlier, is the sense of how to build the infrastructure so all of this is seamless to the end user, while still maintaining the features/functionality necessary for all of what augmented reality promises us&#8230;I don&#8217;t want to see everything in AR space, I want to be able to tune in or filter out some things, and I want to customize the snot out of what I see (perhaps changing metaphors or &#8220;holoscapes&#8221;), and so on. It all has to work together and simplify the end-user experience or it won&#8217;t get anywhere.</p>
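<p>The &#8220;tune in or filter out&#8221; idea can be sketched as a per-user channel filter over an AR content feed. An illustrative sketch under invented names &#8211; the <code>channel</code> field and <code>visible</code> function here are hypothetical, not any shipping AR API or Neogence&#8217;s design.</p>

```python
from dataclasses import dataclass

@dataclass
class ARItem:
    channel: str  # e.g. "ads", "transit", "friends"
    text: str

def visible(feed, subscribed, blocked):
    """Keep items on channels the user tuned in to; drop anything blocked."""
    return [item for item in feed
            if item.channel in subscribed and item.channel not in blocked]

feed = [ARItem("ads", "50% off!"),
        ARItem("transit", "Bus in 3 min"),
        ARItem("friends", "Alice is nearby")]

# A user subscribed to transit and friends, with ads blocked,
# never sees the billboard overlay.
print([i.text for i in visible(feed, {"transit", "friends"}, {"ads"})])
```

<p>The point of the sketch is only that filtering is a per-user, client-side decision; the hard interoperability problem Rice describes is making every AR source feed into one such stream in the first place.</p>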
<p><strong>Tish Shute: </strong>So what caused the stagnation of new development and devolution of MMOGs in your opinion?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Yes, look at all the hope and hype for the mmorpgs released in the last 12 months. Really, what is different or better? Now, what is worse?</p>
<p>I bet any decent mmorpg gamer could give you a list of 2 or 3 things for the first question and 20-30 things for the second.</p>
<p>And VWs seem to be stuck in a feedback loop.</p>
<p><strong>Tish Shute: </strong>feedback loop?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Imagine nailing one of your feet to the ground and then trying to run &#8217;round and &#8217;round and &#8217;round.</p>
<p><strong>Tish Shute:</strong> Why do you think this happened to VWs?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Men in suits and flashy watches.</p>
<p>Actually, hang on&#8230;</p>
<p>I saw a video clip the other day from a conference about using various virtual and game technologies for simulations and other real-world applications. Several people were talking about &#8220;avatar technology&#8221; and how theirs was better than their competitors&#8217; and whatnot.</p>
<p>Now, can you tell me what &#8220;avatar technology&#8221; is? Avatar technology is a red herring. Avatar technology is the same thing as calling a toaster a new &#8220;fire technology.&#8221;</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>The problem is that a lot of people that don&#8217;t have a clue about what they are doing are selling the tech to other people that have no clue what they are buying, but they feel like they should for some unknown reason.</p>
<p>That is happening all over the government, academic, and industrial sectors now, with a few companies selling virtual worlds (again, mid-90s tech) as the ultimate solution to all problems.</p>
<p>Anyway, getting back to your question&#8230;</p>
<p>Once virtual reality started getting some buzz, some people got greedy and jumped into the avatar/virtual world thing and tried making it commercial too soon. Half of the 3D chat worlds were being jammed into platforms for virtual shopping malls.</p>
<p>Most of the money funding tech R&amp;D started funneling towards VRML, and doing 3D in web pages, etc.</p>
<p><strong>Tish Shute: </strong>Yes, horrible idea trying to make web pages 3D, IMHO.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong> The money people got involved too soon, and then the greedy people jumped in and tried patenting everything possible. Take a look at the worlds.com patent for 3D worlds.</p>
<p>They filed it back in 2000 or so and it was awarded in &#8217;07 (it shouldn&#8217;t have been, in my opinion); now they are suing everyone they can.</p>
<p><strong>Tish Shute: </strong>Will there be patent wars in AR?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Yes, the AR patent wars will be legendary once people start waking up to the real potential here.</p>
<p>The only solution is for everyone to band together and pre-emptively patent or make public domain every possible patentable concept, technology, or implementation for AR. Otherwise, you haven&#8217;t seen anything yet.</p>
<p><strong>Tish Shute:</strong> Is the AR community organized enough to do that yet?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> That depends on how my company fares in the next six months.</p>
<p><strong>Tish Shute:</strong> Will you patent or make your tech public domain?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> I plan on patenting the snot out of everything we can possibly think of, and then giving away our content creation tools and SDK stuff for free. The whole goal of what we are trying to build is to empower the end user and facilitate the creation of a wonderful world of augmented reality.</p>
<p>There are some things we will make public domain for sure, on top of that.</p>
<p><strong>Tish Shute:</strong> So back to my question on networked real-time experience. Will we have networked real-time AR experiences in the next 18 months?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> It is possible, yes. Other than what we are doing, I am not aware of anyone else taking the same approach we are, but the potential for an &#8220;under the radar venture&#8221; (much like my own company) is definitely there.</p>
<p><strong>Tish Shute: </strong>Will you use cloud computing?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>I think that&#8217;s overrated and probably another attempt at the whole &#8220;thin client&#8221; model that some companies have been pushing for the last 20 years.</p>
<p>It sounds good on paper, but ultimately takes power and control away from the end user.</p>
<p><strong>Tish Shute:</strong> Cloud computing?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Yes. You know, we aren&#8217;t playing around. We are totally building &#8220;THE AR&#8221; that everyone keeps dreaming about. None of this cute stuff you see on YouTube. Actually, if you want to see the things that have inspired our vision of what we want to build, check out:</p>
<p>* Dream Park by Larry Niven and Steven Barnes<br />
* Rainbows End by Vernor Vinge<br />
* Spook Country by William Gibson<br />
* Halting State by Charles Stross<br />
* The Diamond Age by Neal Stephenson<br />
* Donnerjack by Roger Zelazny and Jane Lindskold<br />
* Otherland by Tad Williams<br />
* Neuromancer by William Gibson<br />
* Idoru by William Gibson<br />
* Cryptonomicon by Neal Stephenson</p>
<p>and watch the whole anime of Denno Coil (subbed NOT dubbed!).</p>
<p><strong>Tish Shute:</strong> So scaling the real-time experience won&#8217;t be a problem in your project, hehe.</p>
<p>Cos no sharding allowed in AR, right?</p>
<p>And if you have lots of API calls?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Haha, sharding is one of the dumbest things to happen to the VW/MMO industry.</p>
<p>It is a solution to a technical problem that was relevant 15 years ago.</p>
<p><strong>Tish Shute:</strong> So why did it stick? (I know, men in suits.)</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> It stuck because &#8220;that&#8217;s what the other guys did&#8221; and the mmo designers are too lazy to reconcile gameplay for PvP and RP gamers.</p>
<p>However, there is a curious problem between dealing with &#8220;one world&#8221; and &#8220;anyone can start their own custom AR server.&#8221;</p>
<p><strong>Tish Shute: </strong>Now that is a very interesting problem, the one world versus own AR server.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> It took me a few weeks of not sleeping to figure that one out. It gets back to the interoperability issue.</p>
<p><strong>Tish Shute:</strong> What did you come up with?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> A solution. That&#8217;s all I can say for now on that.</p>
<p><strong>Tish Shute</strong>: eeextra seeekrit!</p>
<p>Well, I will definitely have to bug you on that.</p>
<p>The problem has produced some creativity in OpenSim, with people coming up with hybrids of P2P and one-world.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> As far as virtual worlds are concerned, they need to look at the problem from a different perspective. They are trying to make all virtual worlds interoperable instead of creating a new model for interoperable worlds that new ones will be created to adhere to.</p>
<p><strong>Tish Shute: </strong>Well, some people are. I would say most OpenSim developers see their modular approach doing this. And you choose to interoperate based on what modules you have activated, and then social agreements&#8230;</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Hrm, that&#8217;s a start, but that only works on a functional and social level &#8211; it doesn&#8217;t account for content (story, mythos, game rules), unique data (my +3 sword), or the concepts of commerce, inherent value, and intellectual property.</p>
<p>Enabling my WoW avatar to run around in SL and vice versa creates more problems than it solves.</p>
<p>It&#8217;s like two alien races working hard to make sure that their two spaceships can dock, but no one is paying any attention to the fact that race A breathes nitrogen and race B breathes sulfur.</p>
<p>It&#8217;s technically possible, but they are missing the boat on the content side of the problem.</p>
<p><strong>Tish Shute:</strong> Yes, but don&#8217;t you think that when a modular open source tech for virtual worlds becomes pervasive, those interested in a similar genre will increasingly use the modules in ways that allow their content to interoperate if they want it to?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Everyone has to use the same backend tech, and the front-end clients need to adhere to the same standards. But I have to admit, I haven&#8217;t been paying much attention to the VW space in the last 9 months or so.</p>
<p>Oh, I have to run now. But download and install <a id="vsnt" title="cooliris" href="http://www.cooliris.com/" target="_blank">cooliris</a>. I promise you will be blown away and will start using it to search for images and videos.</p>
<p>It&#8217;s frigging awesome.</p>
<p><strong>Tish Shute:</strong> Will do! Thanks so much, great talking to you. I can&#8217;t wait for your launch.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/01/17/is-it-%e2%80%9comg-finally%e2%80%9d-for-augmented-reality-interview-with-robert-rice/feed/</wfw:commentRss>
		<slash:comments>27</slash:comments>
		</item>
		<item>
		<title>Hacking the World in 2009: Google Street View, &#8220;Smart Stuff,&#8221; and Wikiculture.</title>
		<link>http://www.ugotrade.com/2008/12/29/hacking-the-world-in-2009-google-street-view-smart-stuff-and-wikiculture/</link>
		<comments>http://www.ugotrade.com/2008/12/29/hacking-the-world-in-2009-google-street-view-smart-stuff-and-wikiculture/#comments</comments>
		<pubDate>Mon, 29 Dec 2008 19:20:11 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[3D internet]]></category>
		<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Architecture Working Group]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[CurrentCost]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[HomeCamp]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[Linden Lab]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MQTT and RSMB]]></category>
		<category><![CDATA[Open Grid]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open protocols for virtual worlds]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Open Source Virtual Worlds]]></category>
		<category><![CDATA[open standards for virtual worlds]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Virtual HomeCamp]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[World 2.0]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=2463</guid>
		<description><![CDATA[Google Street View Hacking This Google Street View Hack (via @timoreilly) will get my nomination for a Hacking the World Award this year, if there is such an award. A parade (the screenshot opening this post), a marathon, a mad-scientists laboratory, a sword fight, and more (see The Infonaut Blog) were staged all along the route [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/sampsoniawaypost.jpg"><img class="alignnone size-full wp-image-2475" title="sampsoniawaypost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/sampsoniawaypost.jpg" alt="" width="450" height="274" /></a></p>
<h3>Google Street View Hacking</h3>
<p><a href="http://www.wikio.com/video/576734" target="_blank">This Google Street View Hack</a> (via<a href="http://twitter.com/timoreilly" target="_blank"> @timoreilly</a>) will get my nomination for a Hacking the World Award this year, if there is such an award.</p>
<p><a href="http://maps.google.com/maps?cbp=1,262.96388206761037,,0,16.58444579096093&amp;cbll=40.456878,-80.01196&amp;layer=c&amp;ie=UTF8&amp;ll=40.458499,-80.009319&amp;spn=0.00569,0.012918&amp;z=17&amp;panoid=zHdES6mj-vBrH2nF-K9ROQ" target="_blank">A parade</a> (the screenshot opening this post), <a href="http://maps.google.com/maps?cbp=1,260.87215088682916,,0,8.64102186979147&amp;cbll=40.457046,-80.011085&amp;layer=c&amp;ie=UTF8&amp;ll=40.458671,-80.00845&amp;spn=0.00569,0.012918&amp;z=17&amp;panoid=81ALq0NpV6uyLEF5S5ENhw" target="_blank">a marathon</a>,Â <a href="http://maps.google.com/maps?cbp=1,160.10914016686365,,0,33.949139944215034&amp;cbll=40.456949,-80.011593&amp;layer=c&amp;ie=UTF8&amp;ll=40.458573,-80.008954&amp;spn=0.00569,0.012918&amp;z=17&amp;panoid=C4I-QLkZJoT1SHXslK5f7Q" target="_blank">a mad-scientists laboratory</a>, <a href="http://maps.google.com/maps?cbp=1,9.995045624107206,,0,10.698194796922357&amp;cbll=40.457636,-80.00767&amp;layer=c&amp;ie=UTF8&amp;ll=40.459103,-80.006486&amp;spn=0.00569,0.012918&amp;z=17&amp;panoid=W_ox0QPcWyPqWGNPiK91Nw" target="_blank">a sword fight</a>, and more (see <a href="http://www.infonaut.ca/blog/?p=290" target="_blank">The Infonaut Blog</a>) were staged all along the route of the Google Street View truck by artists Robin Hewlett and Ben Kinsley working in conjunction with the local community and Google Street View<em><strong>. </strong></em></p>
<p>The Google Street View Hack suggests a myriad of possibilities for anyone with their eye on the prize for a great world hack for 2009. In my mind&#8217;s eye, I imagine the Google Street View truck&#8217;s trek across the planet triggering local environmental street action carnivals wherever it goes.</p>
<p>Local energy conservationists,<a href="http://www.nytimes.com/2008/12/27/world/europe/27house.html?_r=1&amp;pagewanted=all" target="_blank"> &#8220;passive house&#8221; architects</a>, and retrofitters could turn the arrival of Google Street View into an occasion to create projects for a sustainable future &#8211; a traveling StreetCamp (see <a href="http://www.ugotrade.com/2008/12/15/smart-planetinterview-with-andy-stanford-clark/" target="_blank">my post on HomeCamp &#8217;08 here</a>). As Google Street View intends, surely, to go everywhere, this would be a global hack for sustainable living that crossed the bounds of the physical and the virtual. And the vast public record of Google Street View would become a generative engine and global resource for sustainable living.</p>
<h3>Working together on the noble aim of sustainable living</h3>
<p>This is my (and many other people&#8217;s) big theme for 2009.</p>
<p>A Hacking the World award should also go to <a href="http://www.pachube.com/">Pachube</a> &#8211; &#8220;patching the planet&#8221; &#8211; for demonstrating that instrumenting the world is no longer merely a sci-fi fantasy. By facilitating &#8220;interaction between remote environments, both physical and virtual,&#8221; Pachube demonstrates (see <a href="http://community.pachube.com/?q=node/1" target="_blank">diagram here</a>) how we have only just begun to dip our toes into the many new opportunities we have to work together to save energy, rethink our culture of consumption, and reboot our failing economy under a new sustainable operating system.</p>
<p>Unlike entertainment and games, where we have a glut of information, energy awareness suffers from a dearth of it. We really have very little idea about what we are consuming and the waste we are producing. So more Hacking the World Awards should go to projects like <a href="http://www.amee.com/" target="_blank">AMEE</a> &#8211; creating the world&#8217;s energy meter &#8211; and <a href="http://www.wattzon.com/" target="_blank">Wattzon</a> &#8211; your personal energy meter &#8211; for giving us new ways to understand and work with energy data.</p>
<p>Many people and organizations, given the information, will change their behaviours. But the cultural changes necessary for sustainable living are deep and old habits die hard (see <a href="http://www.nytimes.com/2008/12/27/opinion/27sat1.html" target="_blank">this disturbing report</a> on the recent return to SUV buying in November as soon as gas prices fell!).</p>
<h3>A Small Community of Volunteers Can Bring Change on a Global Scale</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/homecampthethrongpost.jpg"><img class="alignnone size-full wp-image-2535" title="homecampthethrongpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/homecampthethrongpost.jpg" alt="" width="450" height="153" /></a></p>
<p>Picture above by <a href="http://benjaminellis.co.uk/" target="_blank">Benjamin Ellis</a>, &#8220;HomeCamp &#8211; The Throng,&#8221; from his <a href="http://www.flickr.com/photos/tags/homecamp08/" target="_blank">Flickr</a><a href="http://www.flickr.com/search/?q=homecamp&amp;w=29034542%40N00" target="_blank"> stream.</a></p>
<p>One of my favorite &#8220;instrumenting the world&#8221; projects to date, and another top contender for a Hacking the World Award, is <span class="entry-content"><a id="h4a0" title="HomeCamp '08" href="http://homecamp.pbwiki.com/homecamp08" target="_blank">HomeCamp &#8217;08</a></span> (see my <a href="http://www.ugotrade.com/2008/12/15/smart-planetinterview-with-andy-stanford-clark/" target="_blank">previous post</a>). HomeCamp brings together a community of creators and enthusiasts of &#8220;smart stuff,&#8221; creating <a href="http://meta.wikimedia.org/wiki/Wikiculture" target="_blank">a wikiculture</a> for the noble cause of sustainable living.</p>
<p>The key to whether &#8220;instrumenting the world&#8221; empowers people and changes our lives for the better will be the capacity our systems of instrumentation have for what Jonathan Zittrain, in <em><a href="http://futureoftheinternet.org/" target="_blank">The Future of the Internet: And How to Stop It</a></em>, defines as generativity: &#8220;the system&#8217;s capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences&#8221; (Zittrain, 2008).</p>
<p>Generativity is the &#8220;secret sauce&#8221; that makes the difference between, for example, <a href="http://www.wikipedia.org/" target="_blank">Wikipedia</a> and its all but forgotten predecessor &#8211; the &#8220;written by experts&#8221; <a href="http://en.wikipedia.org/wiki/Nupedia" target="_blank">Nupedia</a>.</p>
<p>Jonathan Zittrain writes:</p>
<p><em><strong>Wikipedia stands for more than the ability of people to craft their own knowledge and culture. It stands for the idea that people of diverse backgrounds can work together on a common project with, whatever its other weaknesses, a noble aim </strong><strong>&#8211; bringing such knowledge to the world. (p.147)</strong></em></p>
<p>At <a href="http://en.oreilly.com/web2008/public/content/home" target="_blank">Web 2.0 Summit</a>, Jonathan Hochman (known as <a href="http://en.wikipedia.org/wiki/User:Jehochman">Jehochman</a> on Wikipedia) shared with me his insider perspective as a Wikipedia administrator. The <a href="http://www.ugotrade.com/2008/12/26/wikipedia-houdini-google-street-view-instrumenting-sustainable-living#link_1">full interview</a> with Jonathan is later in this post.</p>
<p>Jonathan comments on the role of wikiculture in sustainable living:</p>
<p><em><strong>&#8220;Sustainable Living requires everything to become more efficient. Incentives need to line up with conservation priorities. This requires a radical change to the way we govern ourselves. Command economies, whether commanded by politicians or capital, lead to huge inefficiencies.&#8221;</strong></em></p>
<p>And surely, if we have learned anything in 2008, we have learned that very bad things happen when the complex systems of modern life are left in the hands of a few people motivated solely by the urge to make profit.</p>
<h3>Hacking Design and Planning Processes for Real Estate and Transportation with Virtual Worlds</h3>
<p><object width="400" height="302" data="http://vimeo.com/moogaloop.swf?clip_id=2326434&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" type="application/x-shockwave-flash"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=2326434&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /></object></p>
<p>This great machinima by Azwaldo Vilotta shows the progress so far on the <a href="http://studiowikitecture.wordpress.com/2008/12/12/now-is-an-ideal-time-to-join-wikitecture-40/" target="_blank">Wikitecture 4.0 project</a>, &#8216;Re-Inventing the Virtual Classroom&#8217; for the University of Alabama.</p>
<p>Though still a niche market, virtual worlds are growing at a steady pace. As I mentioned in my previous post, energy-hungry avatars themselves will be a target for optimization in 2009. But as my personal power usage breakdown from <a href="http://www.wattzon.com/" target="_blank">Wattzon</a> shows, cutting down the amount of flying I do in 2009 would be far more effective in reducing my carbon footprint than deciding not to log into virtual worlds!</p>
<p>Note: see Read Write Web&#8217;s recent post, &#8220;<a href="http://www.readwriteweb.com/archives/enterprise_virtual_worlds.php" target="_blank">Report Enterprise Virtual Worlds More Effective Than Web Conferencing</a>.&#8221; Also check out <a href="http://www.projectchainsaw.com/" target="_blank">Web.Alive</a> and <a href="http://immersivespaces.com/" target="_blank">Immersive WorkSpaces</a>, and Dusan Writer&#8217;s post, &#8220;<a href="http://dusanwriter.com/index.php/2008/12/20/thinkbalm-the-immersive-internet-and-collaborative-culture/" target="_blank">ThinkBalm, The Immersive Internet and Collaborative Culture</a>.&#8221;</p>
<p>My friend Melanie Swan points out in her Top Ten Computing Trends for 2009 that virtual worlds not only have the power of the three Cs (communication, collaboration and commerce) but are fast expanding into <a href="http://www.3pointd.com/20070406/rapid-architectural-prototyping-in-second-life/">rapid prototyping</a>, <a href="http://your2ndplace.com/node/926">simulation</a> and <a href="http://sldataviz.pbwiki.com/">data visualization</a>.</p>
<p>My Hacking the World, 2008, Awards for Virtual World innovation would go to three potentially world changing projects for sustainable living:</p>
<p>1) <a href="http://studiowikitecture.wordpress.com/" target="_blank">Studio Wikitecture</a> (see <a href="http://studiowikitecture.wordpress.com/" target="_blank">&#8220;Reinventing the Virtual Classroom&#8221;</a> for The University of Alabama).</p>
<p>2) Oliver Goh&#8217;s work on &#8220;<a href="http://www.shaspa.com/cms/website.php" target="_blank">The Path to Sustainable Real Estate.&#8221;</a></p>
<p>3) Encitra, a company recently co-founded by <a href="http://www.ics.uci.edu/informatics/research/research_highlight_view.php?id=52" target="_blank">Crista Lopes</a> and <a href="http://www.podcar.org/uppsalaconference/christerlindstrom.htm" target="_blank">Christer Lindstrom</a>, focused on improving urban planning processes, starting with transportation, using virtual worlds (<a href="http://www.ugotrade.com/2008/11/25/web-meets-world-participatory-culture-and-sustainable-living/" target="_blank">see my previous post here for more</a>).</p>
<p>The latter two projects are being developed in <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a> &#8211; the open source project that should also get a Hacking The World Award for creating an open, modular architecture for virtual worlds that is unleashing all these new possibilities for integrating physical and virtual worlds.</p>
<p>The 2008 code contributions to OpenSim of special note re world hacking are Crista Lopes&#8217; <a href="http://opensimulator.org/wiki/Hypergrid">OpenSim Hypergrid</a> &#8211; see Justin CC&#8217;s blog for full details in <a href="http://justincc.wordpress.com/2008/12/19/what-is-the-hypergrid/" target="_blank">&#8220;What is the hypergrid?&#8221;</a> &#8211; and David Levine&#8217;s work at IBM, in collaboration with Linden Lab (see the <a href="http://wiki.secondlife.com/wiki/Architecture_Working_Group" target="_blank">Architecture Working Group</a>), on interoperability (see <a href="http://www.ugotrade.com/2008/07/" target="_blank">my earlier post here</a>).</p>
<p>Both these projects expand the frontiers of interoperability for virtual worlds, although they &#8220;slice the problem from different ends,&#8221; as David Levine put it. The emphasis in the LL/IBM approach is on security, so assets are not moving yet. In Crista&#8217;s solution you can have assets, but the security issues are not addressed yet. This work is vital to expanding the usefulness of virtual worlds, and both projects should get Hacking the World Awards IMHO.</p>
<p>I asked <a href="http://archsl.wordpress.com/" target="_blank">Jon Brouchoud </a>(full interview upcoming) what he thought were Studio Wikitecture&#8217;s most important successes to date:</p>
<p><strong><em>&#8220;I think the greatest success has been in proving, on some level, that everyone has important knowledge that can inform and improve the design of a building, not just architects. If we can continue building on that success, I hope we can eventually start to hack the traditional design process, and find ways to harness the wealth of knowledge held by the general public, instead of ignoring or avoiding it, as is most often the case.&#8221;</em></strong></p>
<h3>Harnessing the &#8220;Smart Stuff&#8221; to the Noble Cause of Sustainable Living</h3>
<p>Robert Scoble&#8217;s <a href="http://scobleizer.com/2008/12/27/the-interview-of-the-year-tim-oreilly/" target="_blank">The Interview of the Year: Tim O&#8217;Reilly</a> is not to be missed. Tim O&#8217;Reilly discusses the key trends for 2009 that are bubbling up at O&#8217;Reilly Media. And, yes, Tim O&#8217;Reilly, as the guru of Hacking the World, gets the &#8220;Distinguished Thinker &#8211; Hacking The World Award of 2008!&#8221;</p>
<p>Tim O&#8217;Reilly&#8217;s trend list includes:</p>
<p>1) Big data &#8211; vast peer-produced databases in the cloud, accessible by mobile devices</p>
<p>2) &#8220;Smart stuff&#8221; &#8211; sensors, robotics, and hacking on stuff for fun and not for profit</p>
<p>3) Green Tech</p>
<p>4) Advances in Biological/Life Sciences.</p>
<p>And, in Robert Scoble&#8217;s interview, there is a nice titbit of history re his attendance of early <a href="http://en.wikipedia.org/wiki/Foo_Camp" target="_blank">Foo Camps</a>. Foo Camp is the wiki of O&#8217;Reilly conferences and a lineage holder to my favorite Hacking the World event of 2008, <span class="entry-content"><a id="h4a0" title="HomeCamp '08" href="http://homecamp.pbwiki.com/homecamp08" target="_blank">HomeCamp &#8217;08</a></span>.</p>
<p>But what will be the &#8220;secret sauce&#8221; for these big ideas &#8211; the generative engines that harness these vast peer-produced databases, and all the creative &#8220;smart stuff&#8221; hackers across the globe are creating, to the noble cause of sustainable living? What will motivate the mass adoption of Green Tech and sustainable living?</p>
<p>What can Wikipedia teach us about how generative systems and bottom up approaches can change the world?</p>
<p>Jimmy Wales (interview coming soon!) writes in his recent <a href="http://wikimediafoundation.org/wiki/Donate/Letter/en?utm_source=2008_jimmy_letter_r&amp;utm_medium=sitenotice&amp;utm_campaign=fundraiser2008#appeal" target="_blank">personal appeal</a> for support for Wikipedia:</p>
<p><em><strong>At its core, Wikipedia is driven by a global community of more than 150,000 volunteers &#8211; all dedicated to sharing knowledge freely. Over almost eight years, these volunteers have contributed more than 11 million articles in 265 languages. More than 275 million people come to our website every month to access information, free of charge and free of advertising.</strong></em></p>
<p>To answer questions on how to create a successful wikiculture for sustainable living, an insider&#8217;s view of Wikipedia may be a good place to start.</p>
<h3>Interview With Jonathan Hochman on Wikipedia</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/gammapostjon.jpg"><img class="alignnone size-full wp-image-2477" title="gammapostjon" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/gammapostjon.jpg" alt="" width="223" height="158" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/jonathanwikikpost.jpg"><img class="alignnone size-full wp-image-2473" title="jonathanwikikpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/jonathanwikikpost.jpg" alt="" width="224" height="158" /></a></p>
<p>The picture on the left is from the Wikipedia article <a href="http://en.wikipedia.org/wiki/Gamma-ray_burst" target="_blank">Gamma-ray Burst</a>, which Jonathan Hochman is currently working on. It is a drawing of a massive <a title="Star" href="http://en.wikipedia.org/wiki/Star">star</a> collapsing to form a <a title="Black hole" href="http://en.wikipedia.org/wiki/Black_hole">black hole</a>. Energy released as jets along the axis of rotation forms a gamma-ray burst. <em>Credit: Nicolle Rager Fuller/NSF</em></p>
<p>The picture on the right, of Jonathan at Web 2.0 Summit, was taken by me. Jonathan was part of the <em><a href="http://en.oreilly.com/web2008/public/schedule/detail/6952" target="_blank">Defending Web 2.0 from Virtual Blight</a></em> panel.</p>
<p><em><strong>Known as <a href="http://en.wikipedia.org/wiki/User:Jehochman">Jehochman</a> on Wikipedia, he serves as an administrator and as a leader in addressing online harassment, disruption and sock puppetry. He is also the founder of <a href="http://www.hochmanconsultants.com/">Hochman Consultants</a>, an Internet marketing consultancy, and the director of <a href="http://www.semne.org/">Search Engine Marketing New England</a>, a regional conference series.</strong></em></p>
<p><strong>Tish:</strong> Second Life and Wikipedia are the two great experiments in collaborative co-creation &#8211; what do they have to teach us about the future of the internet?</p>
<p><strong>Jonathan:</strong> Yes, Wikipedia and Second Life are key social spaces. Some people have been seeing Second Life as the beginning of Web 3.0 &#8211; a wrap-around environment where you can almost experience another life. Wikipedia is sort of another example of this.</p>
<p>All the problems that exist in the real world are mirrored right into that little universe. For example, the Armenians and the Turks are at each other&#8217;s throats, the Japanese and the Koreans are going at it, the Palestinians and the Israelis, and the &#8220;Troubles&#8221; &#8230; all the conflicts are imported into Wikipedia. People are fighting over the content of these articles. They want to have it their way because these articles rank first in Google and have a big impact on public opinion.</p>
<p>There was a huge fight on the waterboarding article a while back. Some guys from Little Green Footballs &#8211; a very conservative, reactionary type of media &#8211; were trying to change the article to say that waterboarding might not be torture &#8211; change it to say it is probably not so bad. Crazy stuff. They were trying to water it down. And it is very clear, from every source out there, that waterboarding is torture. We did a study and there are 115 sources that say waterboarding is torture. You simulate drowning &#8211; you simulate killing someone &#8211; that is a violation of the Geneva Convention and everything else. People were fighting, fighting, fighting!</p>
<p>One of the things I did was to try and clear people out who were being disruptive. We actually had to go to arbitration over that article. It is like the Supreme Court of Wikipedia. There is a panel of 15 arbitrators. They hear the case. There is evidence, arguments and decisions. It is really like a simulated lawsuit &#8211; you get all the experience of a simulated lawsuit with the real threat that you could be banned. If they don&#8217;t like what you are doing they can actually ban you or restrict you from topics.</p>
<p>So it is really fascinating how this social space, Wikipedia, becomes a very real platform &#8211; though it is a virtual world &#8211; for real world disputes. Most disputes are over the definition of things. If you have a lawsuit, most disputes are about how things are defined. And Wikipedia has become the de facto definition of things in the real world. People want to know what &#8220;The Troubles&#8221; are. If you go to Wikipedia you find out The Troubles are a dispute over Northern Ireland. What the article says has a profound impact on public opinion.</p>
<p><strong>Tish:</strong> So who is on the court of Wikipedia?</p>
<p><strong>Jonathan:</strong> They are volunteers. These people work two or three hours a day to run this court. There are all kinds of projects. There is a WikiProject Spam, which has people who can write computer programs to statistically analyze wiki projects &#8211; not only Wikipedia. All of them are looking at the links, reporting them, and banning those people who are abusing or gaming the system.</p>
<p><strong>Tish:</strong> You were on the Stopping Virtual Blight Panel at Web 2.0 Summit &#8211; what are the most important things to think about on this topic?</p>
<p><strong>Jonathan:</strong> Yes, we were talking about how to defend the web against virtual blight. The thing I find interesting about Wikipedia is that it is the eighth largest web site, and possibly the second largest web site comprised of user generated content, after YouTube. The problems that exist in Wikipedia are larger and more detailed than on any other site. Whatever problem someone has on their social media site or their Web 2.0 site, these problems already exist in Wikipedia, and the solutions are there and they are transparent. You can actually see the history of what&#8217;s been done.</p>
<p>If there is, for example, a problem on Digg &#8211; some problem with sock puppetry or vote stacking &#8211; it happens, it goes away. You don&#8217;t get full disclosure. With Wikipedia you can actually go in and look at a dispute and watch it unfold. You can watch the arbitration cases that are filed, the arguments, the decisions, the logic, the rationale. You can see the successes and the failures and the different things people have tried to control blight. For example, we tried to resolve this dispute one way but it was a disaster, so we have tried something else and that worked.</p>
<p>Wikipedia is a large laboratory for social media &#8211; Wikipedia and the large universe of Wiki and WikiMedia projects around it that individuals and enterprises put together, like Commons. Wikimedia Commons is a repository of publicly licensed images that anyone can take and reuse. They have sound and they have video, and all of this stuff is being stitched together now.</p>
<p>So if you go to the article on Obama you can probably now hear his acceptance speech, because that is public domain &#8211; it&#8217;s been stitched into the article. If you go to the article on Richard Nixon &#8211; his resignation speech &#8211; you may even hear his conversation with the astronauts when they landed on the moon. So this becomes a giant repository of all our culture and knowledge. When I design a website, a lot of times I go to Commons to find images I use for free. I don&#8217;t want to pay for an image I can get for free.</p>
<p><strong>Tish: </strong>And the Commons images get contextualized in Wikipedia too.</p>
<p><strong>Jonathan:</strong> Some of these articles are fascinatingly detailed. If you want a quick summary of Dr. Strangelove, the article is fantastic. It is enjoyable, a pleasure to read. I was reading about S.A. Andree&#8217;s North Pole balloon expedition of 1897. Some guys from Sweden decided to fly a balloon over the North Pole. They managed to get aloft, then they flew over the icepack for 24 hours, then they crashed.</p>
<p>They unloaded their stuff and hiked back across the ice toward the island they had launched from. They ended up being on the ice pack for three months before they finally holed up in an ice cave and starved to death. They weren&#8217;t found until thirty years later! A camera was found with these guys, with the frozen pictures taken thirty years earlier. The film was developed, and those pictures are now on Wikipedia. It is just a fascinating thing!</p>
<p><strong>Tish: </strong> Do you see real time collaboration beginning to play more of a role in Wikipedia &#8211; whether virtual worlds or just voice/IM &#8212; how could real time collaboration change the wikipedia editing process?</p>
<p><strong>Jonathan:</strong> The Presidential candidate articles were being edited very rapidly yesterday. There are certain real time problems. Some of the more interesting problems are when you get two administrators who &#8220;get into it.&#8221; One administrator says, I am blocking this user, and the other one says, I am unblocking him, and the other one, &#8220;NO, I am blocking him!&#8221; And so on&#8230; And everyone says, &#8220;Stop fighting. You are not allowed to do that!&#8221; And they both get their powers stripped. People do get very heated over the silliest things. Wikipedia does have some mailing lists attached and there are some IRC channels. So there are some real time elements.</p>
<p><strong>Tish: </strong>What is the role of avatars in Wikipedia?<br />
<br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Jonathan:</strong> In Wikipedia you have a user page and many users are anonymous. They create an avatar and they personalize it and show themselves in ways they want to show themselves through an avatar. In many ways it is a lot like Second Life.</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;">Some users have created second accounts &#8211; or a humorous second account. Bishzilla is a Swedish lady who is in tremendous command of the English language and has a razor-sharp wit. She has created this secondary account that writes almost in a baby language. Her avatar is a dinosaur that is not very bright and goes around frying people. Bizarre what people do! People may be editing a topic that is an interest of theirs &#8211; e.g. Pokemon &#8211; that they don&#8217;t want associated with their professional avatar. Or people may be editing a topic about hot political issues. There have actually been some death threats issued to people over stuff they have been putting into the encyclopedia.
</span><strong><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /></strong><span style="background-color: #ffffff;"><strong>Tish: </strong>So avatars are important in Wikipedia.</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Jonathan:</strong> Absolutely, because people may be going in and editing articles that they may not want their friends and family to know they are editing. One editor may say to another, &#8220;Stop putting stuff in or I will come and kill you!&#8221; Well then we have to ban them. We have to call the police.</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Tish:</strong> Can you build reputations on multiple avatars?</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Jonathan: </strong>You are allowed to use multiple avatars as long as they don&#8217;t cross paths. You can&#8217;t have two avatars editing in the same area because you would be giving yourself double weight commenting on a discussion.
</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Tish:</strong> How do you know when this is happening?</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Jonathan:</strong> You can watch the style of a user&#8217;s editing. You have to watch behavior. And if you have enough evidence through behavior that suggests accounts are controlled by one person, you can go and request a technical check.</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;">There are some users, called Checkusers, who are able to access information from the server logs and check the technical characteristics of these accounts to see if they are using the same IP address.</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Tish:</strong> So if you want to understand avatar interaction on the web it helps to understand Wikipedia. </span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Jonathan:</strong> Yes, it is a fantastic way to understand how avatars work in some aspects, and also how to deal with community dynamics. We have some very strong-willed people &#8211; people in their 40s, 50s, and 60s &#8211; who are very successful in business. They have plenty of money and spare time and they are doing this as a hobby. And some of these people can really butt heads. You can have a problem when you have an editor who has been writing fantastic articles but also happens to be rude and chew other people out and tell them to f**k off if they are not behaving.
What do you do?</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Tish:</strong> Sounds a bit like Second Life!</span><br style="background-color: #ffffff;" /><strong><br style="background-color: #ffffff;" /></strong><span style="background-color: #ffffff;"><strong>Jonathan:</strong> The person is a great contributor to the community but they are telling noobies to f**k off, so you can&#8217;t allow that.</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;">What do you do? Vested contributors are a major problem for some of these sites. They are vested in the community but they start misbehaving. You can&#8217;t block them, because if you block them there is a huge uproar from all their friends and it causes a cataclysm. It requires very careful diplomacy to deal with some of these situations. </span><br style="background-color: #ffffff;" /><strong><br style="background-color: #ffffff;" /></strong><span style="background-color: #ffffff;"><strong>Tish:</strong> How many Wikipedia volunteers are there now?</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Jonathan:</strong> Think of a Venn diagram &#8211; a big circle. The total number of contributors is about one million different people. But there are probably about 5,000 active editors that are consistently and regularly contributing. And within that kernel there are fifteen hundred people that have administrator access, and probably only eight hundred of them are active. People have a natural life span with the community. People come and typically stay for 6 months to 3 years. Usually after that they become bored, disillusioned or get into a conflict with someone. There is a natural tendency for people to stay for a while and move on.
Some people stay longer, a few, but the majority will move on at some point. So it is a lot of fresh faces moving in.</span><br style="background-color: #ffffff;" /><strong><br style="background-color: #ffffff;" /></strong><span style="background-color: #ffffff;"><strong>Tish:</strong> What lessons of trust does Wikipedia have to teach us about new projects like AMEE, which aims to aggregate the world&#8217;s energy data?</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Jonathan:</strong> Well, you have to know who is releasing the data. Who is creating the data? The beauty of Wikipedia is that you have an edit history, so you can see exactly who has done what. So you can judge whether this person is trustworthy or not. That&#8217;s a huge problem on the web today. We don&#8217;t have enough identification information. When you see a web page you don&#8217;t necessarily know when that page was created and by whom, or how many revisions it has had. Sometimes you can glean information by checking it. If you see typos and errors you may decide that the page probably didn&#8217;t receive as much attention as it should have, and probably it is not that good.</span> <br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;">Typos are an interesting thing. People always try to figure out how Google ranks web pages.
</span><a id="uy3s" style="background-color: #ffffff;" title="Matt Cutts" href="http://www.mattcutts.com/">Matt Cutts</a><span style="background-color: #ffffff;"> was here from Google today. And he was talking about spam. But Matt also did a <a id="e4lo" title="blog post" href="http://www.mattcutts.com/blog/2006-pubcon-in-vegas-getting-there-and-back/">blog post</a> about how he was in an airport once, and how he has a policy: when you are reading a document, as soon as you come to the first error, just stop &#8211; because if the author hasn&#8217;t taken the care to make everything correct, you don&#8217;t need to read it. So he was in the airport, there was a sign, he came to a typo and stopped reading it. Somehow he got in trouble for not reading the sign and not having the information. But it is interesting to wonder whether Google is looking for typos, misspellings, and broken links, and using those as a signal of quality to rank pages.</span><br style="background-color: #ffffff;" /></p>
<p><strong>Tish:</strong> Aaaagh typos might bring down your page rank!!! That certainly is a scary thought for a blogger like me who likes to write impossibly long posts that are hard to check&#8230;&#8230;&#8230;</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2008/12/29/hacking-the-world-in-2009-google-street-view-smart-stuff-and-wikiculture/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>&#8220;OpenSource, Interoperable Virtual Worlds&#8221; at VW 2008, LA</title>
		<link>http://www.ugotrade.com/2008/08/27/opensource-interoperable-virtual-worlds-at-vw-2008-la/</link>
		<comments>http://www.ugotrade.com/2008/08/27/opensource-interoperable-virtual-worlds-at-vw-2008-la/#comments</comments>
		<pubDate>Thu, 28 Aug 2008 00:10:05 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[free software]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[Linden Lab]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Second Life]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[virtual economy]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web 3D]]></category>
		<category><![CDATA[Web3.D]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Fashion Research Institute]]></category>
		<category><![CDATA[IBM and Linden Lab protocols for Virtual Worlds]]></category>
		<category><![CDATA[IBM in virtual worlds]]></category>
		<category><![CDATA[interoperable virtual worlds]]></category>
		<category><![CDATA[Open Grid]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=1614</guid>
		<description><![CDATA[OpenSim is designed for interoperability innovation. Adam Frisby explains: By allowing easy customization and extension, we can test and refine interoperability protocols very quickly and efficiently. The interconnect with Second Life (TM) was developed by David Levine, IBM, in only a number of days (David Levine on the right, Adam Frisby, left). Please join us [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/08/adamanddavidlevinepost.jpg"><img class="alignnone size-full wp-image-1652" title="adamanddavidlevinepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/08/adamanddavidlevinepost.jpg" alt="" width="450" height="299" /></a></p>
<p><a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a> is designed for interoperability innovation. Adam Frisby explains:</p>
<blockquote><p>By allowing easy customization and extension, we can test and refine interoperability protocols very quickly and efficiently. The interconnect with Second Life (TM) was developed by David Levine, IBM, in only a number of days (David Levine on the right, Adam Frisby, left).</p></blockquote>
<p>Please join us on Thursday, September 4th, 4pm to 5pm at the <a href="http://www.virtualworldsexpo.com/index.html" target="_blank">Virtual Worlds Conference and Expo, LA</a> for our panel, &#8220;<strong>Open-Source, Interoperable Virtual Worlds</strong>,&#8221; which will be part of the <a href="http://www.virtualworldsexpo.com/schedule/future.html" target="_blank">Future of Virtual Worlds</a> track.</p>
<blockquote><p><strong><br />
</strong><em>Support for standardisation in Virtual World technologies has been growing steadily in recent times. Join the developers of OpenSim and industry commentators as they discuss where open-source virtual worlds are heading and the progress made towards standard protocols for interoperability.</em></p>
<p><br />
- <a href="http://www.virtualworldsexpo.com/speakers/adamfrisby.html">Adam Frisby, Director, DeepThink (and OpenSim Developer)</a><br />
- <a href="http://www.virtualworldsexpo.com/speakers/tishshute.html">Tish Shute, Writer/Virtual World Evangelist, Ugotrade.com</a><br />
- <a href="http://www.virtualworldsexpo.com/speakers/micbowman.html">Mic Bowman, Principal Engineer, Intel</a><br />
- <a href="http://www.virtualworldsexpo.com/speakers/justinclarkcasey.html">Justin Clark-Casey, OpenSim Developer, IBM</a></p>
<p>Also, special panel guest Mike Mazur, OpenSim Developer, <a href="http://3di.jp/" target="_blank">3Di</a>.   We hope David Levine, IBM, will be back from Europe and join us too!</p></blockquote>
<p>Adam Frisby, <a href="http://www.deepthinklabs.com/" target="_blank">Deep Think,</a> is one of the founders and leading developers of OpenSim.  Adam has been behind so many important steps forward in the open metaverse that I cannot list them all here. But a notable recent project of Adam&#8217;s is a new &#8220;Lively style&#8221; viewer for OpenSim and Second Life (TM), <a href="http://www.adamfrisby.com/blog/2008/07/introducing-xenki-source-now-availible/" target="_blank">Xenki</a> (see <a href="http://www.adamfrisby.com/blog/" target="_blank">Adam&#8217;s blog </a>for more).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/08/osarchitecturepost2.jpg"><img class="alignnone size-full wp-image-1628" title="osarchitecturepost2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/08/osarchitecturepost2.jpg" alt="" width="450" height="296" /></a></p>
<h3>Mic Bowman, Intel</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/08/micbowmanpost.jpg"><img class="alignnone size-full wp-image-1656" title="micbowmanpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/08/micbowmanpost.jpg" alt="" width="225" height="290" /></a></p>
<p>Mic Bowman presented at the <a href="http://www.intel.com/idf/?cid=cim:ggl|idf_home|k4EF5|s" target="_blank">Intel Developer&#8217;s Forum</a> earlier this month. He outlined a road map for how virtual worlds will move into the fabric of everyday computing, with open source software playing a key role (the slide of OpenSim architecture above is from this presentation). Virtual worlds are an important part of Intel&#8217;s strategy for developing connected visual computing experiences. For more about Intel&#8217;s CVC initiative, see this press coverage of IDF: <a href="http://www.theinquirer.net/gb/inquirer/news/2008/08/19/intel-reveals-plans-connected">The Inquirer</a>, <a href="http://www.bit-tech.net/news/2008/08/19/intel-intros-connected-visual-computing-initiative/1" target="_blank">bit-tech.net,</a> <a href="http://www.hexus.net/content/item.php?item=15047" target="_blank">hexus,</a> <a href="http://www.trustedreviews.com/cpu-memory/news/2008/08/19/A-Bridge-Too-Far-/p1" target="_blank">Trusted Reviews</a>.</p>
<blockquote><p>Basically&#8230; my message for the panel is this: To achieve a thriving, growing, broadly adopted CVC ecosystem, we believe the industry must come to some agreement on common building block technologies. Open source technologies represent a critical element in the discovery and development of these technologies, and foster innovative usages that drive adoption.</p></blockquote>
<h3>Justin Clark-Casey, Fashion Research Institute, Inc.</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/08/justinccpost.jpg"><img class="alignnone size-full wp-image-1624" title="justinccpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/08/justinccpost.jpg" alt="" width="450" height="282" /></a></p>
<p>Another interesting member of our panel and important OpenSim developer is <a href="http://justincc.wordpress.com" target="_blank">Justin Clark-Casey</a>. Justin, formerly of IBM, is now with the <a href="http://www.fashionresearchinstitute.com/" target="_blank">Fashion Research Institute, Inc.</a> as a full-time OpenSim developer/architect. Fashion Research Institute is considered by many to be one of the most advanced business cases on OpenSim (more about Shenlei Winkler, CEO of FRI, in my next post, &#8220;Meet the Rising Stars of the Open Metaverse at VW 2008&#8221;).</p>
<p>Recently Justin has been working on something called a region archive in OpenSim.</p>
<blockquote><p>Basically, this is a way of saving a sim to a single file (currently a tar.gz) and reloading it into another OpenSimulator. This file contains all the necessary data (prim xml and assets such as textures and scripts) necessary to restore the entire region.</p>
<p>It&#8217;s currently experimental so still has some bugs. But, it can be used via the load-oar/save-oar commands on the OpenSim region console.</p></blockquote>
<p>The ability to save and load archives is something that people developing whole applications using OpenSim will be very interested in, and will contribute to the business value of OpenSim.</p>
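<p>For readers who want a feel for what such an archive involves, here is a rough sketch in Python. This is not OpenSim&#8217;s actual implementation (which is written in C#), and the file layout and function names below are invented for illustration: a region is saved as a single tar.gz containing the prim XML plus asset blobs, and loading simply unpacks it again.</p>

```python
# Illustrative sketch of a region archive as a single .tar.gz holding the
# prim XML plus assets (textures, scripts). This mirrors the idea behind
# OpenSim's save-oar/load-oar console commands, but the layout and names
# here are invented; OpenSim itself is written in C#.
import io
import tarfile

def save_region_archive(path, prim_xml, assets):
    """Bundle a region's prim XML and its asset blobs into one tar.gz."""
    with tarfile.open(path, "w:gz") as tar:
        data = prim_xml.encode("utf-8")
        info = tarfile.TarInfo("region/prims.xml")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
        for name, blob in assets.items():  # textures, scripts, etc.
            info = tarfile.TarInfo(f"assets/{name}")
            info.size = len(blob)
            tar.addfile(info, io.BytesIO(blob))

def load_region_archive(path):
    """Unpack the archive into (prim_xml, assets) to restore the region."""
    prim_xml, assets = None, {}
    with tarfile.open(path, "r:gz") as tar:
        for member in tar.getmembers():
            blob = tar.extractfile(member).read()
            if member.name == "region/prims.xml":
                prim_xml = blob.decode("utf-8")
            else:
                assets[member.name.split("/", 1)[1]] = blob
    return prim_xml, assets
```

<p>A save on one simulator followed by a load on another hands back the same prim XML and assets, which is all a restore needs.</p>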
<h3>OpenSim and Linden Lab are at the Center of Interoperability Innovation</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/08/gridnaut-visionaries.jpg"><img class="alignnone size-full wp-image-1630" title="gridnaut-visionaries" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/08/gridnaut-visionaries.jpg" alt="" width="450" height="338" /></a></p>
<p>Since the launch of <a href="http://wiki.secondlife.com/wiki/Open_Grid_Public_Beta/Open_Grid_Beta_Viewers" target="_blank">Open Grid (beta)</a>, the interoperability initiative from OpenSim and Linden Lab (for more <a href="http://www.ugotrade.com/2008/07/31/the-open-grid-beta-the-first-step-to-interoperable-virtual-worlds/" target="_blank">see here</a>), cross-world interoperability has begun with avatar hopping. Now the thorny issues of trust management, economy, and IP that make up the major part of asset interoperability are on the table.</p>
<p>The picture above of Gridnaut Lawson English (Saijanai Kuhn in SL) is from Lynn Cullens (Bjorlyn Loon in SL), Director of Communications for <a href="http://metanomics.net/" target="_blank">Metanomics</a>.<br />
Open Grid has expanded to 31 regions in less than a month, and there is now OGP support in OpenSim trunk. <a href="http://www.vivaty.com/" target="_blank">Vivaty</a> has expressed interest in joining interoperability efforts, so we may see some of the new browser-based worlds become part of this initiative soon.</p>
<p><a href="http://blog.secondlife.com/author/hamiltonlinden/" target="_blank">Hamilton Linden</a>, who is leading the Open Platform Product Group (OPPG) as Director of Engineering for <a href="http://lindenlab.com/" target="_blank">Linden Lab</a>, and <a href="http://wiki.secondlife.com/wiki/User:Tess_Linden" target="_blank">Tess Linden</a>, Technical Director, leading design and implementation for the OPPG, will be holding office hours at the Linden Lab booth at VW2008, LA to discuss Open Grid. We hope that both Hamilton and Tess will be special guests at the panel too!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2008/08/27/opensource-interoperable-virtual-worlds-at-vw-2008-la/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Tribal One Integrates OpenSim and Facebook</title>
		<link>http://www.ugotrade.com/2008/08/12/tribal-one-integrates-opensim-and-facebook/</link>
		<comments>http://www.ugotrade.com/2008/08/12/tribal-one-integrates-opensim-and-facebook/#comments</comments>
		<pubDate>Tue, 12 Aug 2008 18:38:31 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[3D internet]]></category>
		<category><![CDATA[avatar 2.0]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[virtual economy]]></category>
		<category><![CDATA[virtual goods]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web 3D]]></category>
		<category><![CDATA[Web3.D]]></category>
		<category><![CDATA[World 2.0]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=1608</guid>
		<description><![CDATA[The video above (see here on YouTube) of an OpenSim integration with Facebook was posted today by Stefan Andersson of Tribal Media also see his blog here. This is the third in a series of videos that introduces Tribal&#8217;s new concept for 3D/web integration. The picture above shows the in the left pane fetched pictures [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.youtube.com/watch?v=qkiilgjs0Rg" target="_blank"><img class="alignnone size-full wp-image-1613" title="tribalonepostutube2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/08/tribalonepostutube2.jpg" alt="" width="450" height="361" /></a></p>
<p>The video above (see <a href="http://www.youtube.com/watch?v=qkiilgjs0Rg" target="_blank">here on YouTube</a>) of an OpenSim integration with Facebook was posted today by Stefan Andersson of <a href="http://tribalmedia.se/">Tribal Media</a> (also see <a href="http://lbsa71.net/2008/08/11/tribal-one-entering/" target="_blank">his blog</a>). This is the third in a series of videos introducing Tribal&#8217;s new concept for 3D/web integration.</p>
<p>The picture above shows, in the left pane, pictures fetched from Stefan&#8217;s Facebook photos. As Stefan explains, a hybrid web app talks to the region to change the picture accordingly and pull the photos into frames on the wall (for a more detailed technical explanation <a href="http://lbsa71.net/2008/08/12/tribal-one-picture-frame-web-app/" target="_blank">see here</a>).</p>
<p>These videos (see <a href="http://www.youtube.com/watch?v=qkiilgjs0Rg" target="_blank">the first here</a>) from Stefan and his partner at Tribal Media, Darren Guard, demonstrate as a proof of concept, Stefan explained, how a &#8220;3D web architecture&#8221; could look and feel.</p>
<p>This concept has quite far-reaching implications for many of our current notions of how inventory, virtual economy, and content will work in virtual worlds. Stefan explained:</p>
<blockquote><p>Sufficient to say is that Tribal One was an experiment, showing some concepts &#8211; much like<a href="http://www.realxtend.org/" target="_blank"> realXtend</a> is introducing some innovative new concepts. For example: Who creates your &#8216;inventory&#8217; and what are you supposed to do with that? In our concept, the &#8216;inventory&#8217; is a set of services that I can use to interact with the world, i.e, I have a &#8220;pictureFrame&#8221; in my inventory that, when I drag it onto a wall, the act of dragging it onto the wall instructs the region with the wall to use that url to fetch the definition of that pictureFrame, as well as showing that pictureFrames application web page to me. And, when I interact with that web page, that web page interacts with the region to change the picture in the frame. Contrast that with the notion of an &#8216;inventory&#8217; that contains a set list of stored objects that I can &#8216;rez&#8217;.</p></blockquote>
<p>The next video, Stefan noted, will demo a &#8216;friends&#8217; tab where you can choose to &#8216;join&#8217; or &#8216;visit&#8217; them. &#8220;The point there being not having to run, fly or teleport &#8211; just click. Also, the clip shows &#8216;snapping&#8217;: the picture frames &#8216;snap&#8217; to compatible surfaces &#8211; something you&#8217;d like in Second Life.&#8221;</p>
<h3>Interview with Stefan Andersson</h3>
<p><strong>Tish Shute:</strong> What are the goals of Tribal One?</p>
<p><strong>Stefan Andersson:</strong> One of the central goals of the Tribal Server platform is cost-efficiently adding 3D to existing communities or intranets. We did Tribal One to:</p>
<p>1) show how easy it is to build what we call a &#8220;Community Provider&#8221; which basically is a connector between an OpenSim-based server and any given community or intranet &#8211; the &#8216;provider&#8217; is just a small piece of code that provides authentication and profile data for the 3D region.</p>
<p>2) show how a hybrid web/2d interface could look, and address those very questions that we discussed would be interesting to ask Avi Bar-Zeev (<a href="http://www.ugotrade.com/2008/08/08/will-the-future-of-virtual-worlds-be-in-the-browser-interview-with-avi-bar-zeev/" target="_blank">see full interview here</a>).</p>
<p>3) show our concept of 3D web applications, where a user works with an application through a seamless mix of 2D and 3D interactions, all resulting in HTTP methods on a simple web application in the background.</p>
<p>We wanted to show how third parties could create 3D-aware applications without coding anything but web services, for example in PHP. This has two great benefits: a) you can let web coders (of which there are plenty) do the work of 3D coders (of which there are few), and b) you can utilize existing web service contracts and security solutions when passing over trust boundaries, which would let a 3D host securely interact with public web services instead of letting a hosted region have direct database access.</p>
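<p>As a rough illustration of point 1, the &#8220;Community Provider&#8221; could be as small as a web service with two endpoints: one for authentication and one for profile data. The sketch below is hypothetical (Python rather than PHP, and the endpoints, field names, and user table are all invented), but it shows how thin the connector between an existing community and a 3D region can be.</p>

```python
# Hypothetical "Community Provider": a small connector that gives an
# OpenSim-based region authentication and profile data for users of an
# existing community site. Endpoints, fields, and the in-memory user
# table below are invented for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Stand-in for the existing community's user database.
COMMUNITY_USERS = {"stefan": {"password": "secret", "display_name": "Stefan A."}}

class CommunityProvider(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urlparse(self.path)
        params = parse_qs(parsed.query)
        user = COMMUNITY_USERS.get(params.get("user", [""])[0])
        if parsed.path == "/auth":
            # The region asks: may this community member enter?
            ok = user is not None and params.get("password", [""])[0] == user["password"]
            self._reply({"authenticated": ok})
        elif parsed.path == "/profile" and user:
            # The region asks for profile data to present the avatar.
            self._reply({"display_name": user["display_name"]})
        else:
            self.send_response(404)
            self.end_headers()

    def _reply(self, payload):
        body = json.dumps(payload).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass

# To run: HTTPServer(("localhost", 8001), CommunityProvider).serve_forever()
```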
<p><strong>Tish</strong>: Is it correct to say that while your experiment provides a nice solution to identity management between 2D and 3D spaces, it does not approach the issues that<a href="http://wiki.secondlife.com/wiki/Open_Grid_Public_Beta" target="_blank"> OGP</a> and the <a href="http://www.ugotrade.com/2008/08/12/tribal-one-integrates-opensim-and-facebook/">AWG (Architecture Working Group)</a> are dealing with around trust management, where the real meat on the table is now inventory, permissions, and economy?</p>
<p><strong>Stefan:</strong> That would be correct; in our model, the inventory actually consists of URL references to web applications. For example, the picture frame that is visible in the clip I&#8217;m uploading is in the inventory as a URL to a web application that creates the XML definition for the 3D object.</p>
<p>We are aiming for something a bit more &#8216;active&#8217; than a static collection of statically stored assets.</p>
<p>We wanted to proof-of-concept an approach where you had an application generating the inventory and the objects in the inventory, as opposed to having a static database listing of statically stored items.</p>
<p>So, the inventory is generated on-the-fly for the user, depending on what services he or she has subscribed to, and how those services are configured &#8211; not just by how stuff has been given/taken.</p>
<p>So, if you had a Phat wardrobe node in your inventory, for example, Phat could add items to that node depending on what you did on their web site.</p>
<p><strong>Tish:</strong> And that would be assuming they offered virtual clothing right?</p>
<p><strong>Stefan:</strong> Yes, exactly.</p>
<p>Okay, take some virtual clothing company &#8211; the point is that the inventory, in this case, could be modified &#8216;from the outside&#8217;. Of course, in an SL-centric world view, that&#8217;s outrageous, but if you thought of this from a service-centric view, it would make sense.</p>
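<p>The service-generated inventory Stefan describes can be sketched in a few lines. This is a hypothetical illustration (the &#8220;Phat&#8221; service, field names, and URLs are invented): the inventory is rebuilt per user from the services they subscribe to, so a vendor can add items &#8220;from the outside&#8221;.</p>

```python
# Sketch of a "generated inventory": instead of a static database listing
# of stored objects, the inventory is assembled on the fly from the
# services a user subscribes to. All names here are invented.

def build_inventory(user, services):
    """Ask each subscribed service what it currently offers this user."""
    inventory = []
    for service in services:
        if user in service["subscribers"]:
            for item in service["items_for"](user):
                # Each item is just a URL; the region would fetch the real
                # object definition from the service when it is rezzed.
                inventory.append({"node": service["name"], **item})
    return inventory

# A hypothetical wardrobe vendor that adds items based on what the
# user did on its web site:
phat_wardrobe = {
    "name": "Phat wardrobe",
    "subscribers": {"tish"},
    "items_for": lambda user: [
        {"label": "red jacket",
         "definition_url": f"http://phat.example.com/items/red-jacket?user={user}"},
    ],
}

inventory = build_inventory("tish", [phat_wardrobe])
```

<p>Nothing here is &#8220;given&#8221; or &#8220;taken&#8221;: changing what the service returns changes what shows up in the user&#8217;s inventory on the next rebuild, which is exactly the outside modification Stefan means.</p>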
<p><strong>Tish:</strong> But protocols could be worked out to extend this beyond Tribal to other OpenSim worlds and even Second Life assuming those other worlds wanted to participate?</p>
<p><strong>Stefan</strong>: Yes, definitely &#8211; we did this to show how a 3D web architecture could look and feel.</p>
<p><strong>Tish:</strong> Server-centric is like Tribal, where you are basically running your sim on your own PC?</p>
<p><strong>Stefan:</strong> Yes &#8211; we actually based all this on very simple web protocols; the &#8216;web viewport&#8217; concept is just an extension of the existing OpenSim object exchange xml &#8216;standard&#8217;.</p>
<p><strong>Tish:</strong> And while this idea would have no place in the old SL grid, would it be more feasible to integrate it into the new <a href="http://wiki.secondlife.com/wiki/Open_Grid_Public_Beta" target="_blank">Linden Lab Open Grid</a> with the agent domains?</p>
<p>Could agent domains say choose to opt in and opt out of these kind of web services?</p>
<p><strong>Stefan:</strong> To be honest, I&#8217;m waiting for the <a href="http://wiki.secondlife.com/wiki/Architecture_Working_Group" target="_blank">AWG (Architecture Working Group)</a> to solve the &#8216;inventory&#8217; issue before I would venture using the term &#8216;agent domain&#8217;.</p>
<p><strong>Tish:</strong> So when there is inventory sharing that is when the agent domain comes of age in your view?</p>
<p><strong>Stefan:</strong> Well the question is: Who creates your &#8216;inventory&#8217; and what are you supposed to do with that?</p>
<p>In our concept, the &#8216;inventory&#8217; is a set of services that I can use to interact with the world.</p>
<p>I.e., I have a &#8220;pictureFrame&#8221; in my inventory that, when I drag it onto a wall, the act of dragging it onto the wall instructs the region with the wall to use that URL to fetch the definition of that pictureFrame, as well as showing that pictureFrame&#8217;s application web page to me.</p>
<p>And, when I interact with that web page, that web page interacts with the region to change the picture in the frame.</p>
<p>Contrast that with the notion of an &#8216;inventory&#8217; that contains a set list of stored objects that I can &#8216;rez&#8217;.</p>
<p><strong>Tish</strong>: And at the moment in AWG discussions ideas are limited to Second Life models of inventory?</p>
<p><strong>Stefan</strong>: That is my understanding</p>
<p>Again, Tribal One is not a ready-made product &#8211; it&#8217;s a vision of how future 3D/Web applications could function.</p>
<p>We just need somebody to pay us to finish it! *laughs*</p>
<p><strong>Tish:</strong> Is there any reason why, assuming the AWG figures out how to manage the conventional notion of inventory, this concept couldn&#8217;t be layered into extended protocols and options?</p>
<p><strong>Stefan</strong>:  I would assume the AWG solution would be able to fit our concepts &#8211; since we&#8217;ve already done this with the existing SL protocol.</p>
<p><strong>Tish</strong>: Do you see this new notion of inventory offering new opportunities for content creators to be rewarded for their work?</p>
<p><strong>Stefan</strong>: Well, with this prototype we were more looking at how the web works; we were more concerned with generating and distributing content from a service point of view.</p>
<p>This concept is more oriented towards users utilizing a business service, like a web site.</p>
<p>In our scenario, content is created by server-side services written in, for example, PHP.</p>
<p>Imagine, if you will, if the SL model were based on scripts that were able to create objects, instead of objects having scripts.</p>
<p><strong>Tish:</strong> Won&#8217;t this involve the development of lots of new tools for content creation?</p>
<p><strong>Stefan:</strong> Well, yes and no; it&#8217;s like with the web you need people to do the designing, and then people to do the scripting, but you also need people to do the server side coding.</p>
<p>We are not addressing the current SL paradigms of content creation; we are addressing what we felt was missing: a platform for organisations to make dynamically created content, like the difference between a static HTML page and a PHP page. Both can have client-side scripting on them, but the PHP page can pull data out of databases &#8211; any kind of databases, existing databases.</p>
<p>Take the example of the &#8220;where I&#8217;ve been&#8221; application in Facebook: You add that app, then you can fill in where you&#8217;ve been, and you get a nice map of where you&#8217;ve been, and you can compare that to that of your friends.</p>
<p>Now, that&#8217;s a database creating those images out of the data that you and your friends entered.</p>
<p>What we did was an API, so that the author of that Facebook app could add a set of PHP pages, which would let users share an inventory item that was a URL to a web app returning the definition for a map of where I&#8217;ve been &#8211; so I could have that same map on the wall in my 3D living room.</p>
<p><strong>Tish</strong>: So this implies a completely different model of content creation and economy to the one Second Life uses?</p>
<p><strong>Stefan:</strong> Well, in these kinds of business scenarios, the content in itself would probably have little value, but the service that generated the customized content would have great value.</p>
<p><strong>Tish:</strong> So this will raise the bar for people making a living from content creation? right?</p>
<p><strong>Stefan:</strong> Most definitively. But that is how it should be.</p>
<p><strong>Tish:</strong> I&#8217;m not sure everyone will agree with that!</p>
<p><strong>Stefan: </strong>Oh? That&#8217;s the downside of progress &#8211; the &#8216;advanced&#8217; part of &#8216;advancement&#8217;, if you will. That&#8217;s why it has value.</p>
<p><strong>Tish</strong>: Well, one of the things that made SL popular was the fact that a non-professional developer &#8211; just someone with talent &#8211; could make a few Lindens creating content.</p>
<p>But, don&#8217;t  you think the two models of content production will work together?</p>
<p><strong>Stefan: </strong>They still will be able to. Most definitively in SL.</p>
<p>Again, it&#8217;s like with the web; in the beginning, people could make money out of just being able to notepad some html and ftp it to a server.</p>
<p>Today we have .NET server side database programmers doing the server bits, and educated professional graphical designers doing the ui and design bits.</p>
<p><strong>Tish:</strong> Won&#8217;t the web services model have to rest in some way on a template understanding of content?</p>
<p><strong>Stefan:</strong> Of course it has. You&#8217;re very right about that.</p>
<p>But then again, it&#8217;s not an &#8216;either-or&#8217; &#8211; it&#8217;s a &#8216;both&#8217;.</p>
<p>And with that, a whole new spectrum of utility.</p>
<p><strong>Tish:</strong> Perhaps, the two models will find interesting ways to interact in the end? They already do to some degree?</p>
<p><strong>Stefan:</strong> Yes, you know these malls with 10 packs of skins, where the only difference is the combination of skin and makeup?</p>
<p>Or hair variations?</p>
<p>A good designer teams up with a good programmer and creates one good app that just delivers that exact combination.</p>
<p>There is good design and bad design, good design will always be coveted.</p>
<p><strong>Tish: </strong>So do you have any summarizing remarks?</p>
<p><strong>Stefan:</strong> Social games and technical innovation aside, what is needed to bring forth the 3D web revolution is good examples of business and user value &#8211; applications reaching beyond social networking bubbles and into the intranets and databases of organizations &#8211; and I believe Tribal Media has shown that we are very well suited to help bring those applications to life.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2008/08/12/tribal-one-integrates-opensim-and-facebook/feed/</wfw:commentRss>
		<slash:comments>9</slash:comments>
		</item>
		<item>
		<title>The Open Grid (Beta): The First Step to Interoperable Virtual Worlds</title>
		<link>http://www.ugotrade.com/2008/07/31/the-open-grid-beta-the-first-step-to-interoperable-virtual-worlds/</link>
		<comments>http://www.ugotrade.com/2008/07/31/the-open-grid-beta-the-first-step-to-interoperable-virtual-worlds/#comments</comments>
		<pubDate>Thu, 31 Jul 2008 16:19:05 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[digital public space]]></category>
		<category><![CDATA[free software]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[manufacturing 2.0]]></category>
		<category><![CDATA[Metarati]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[realXtend]]></category>
		<category><![CDATA[Second Life]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[virtual economy]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web 3D]]></category>
		<category><![CDATA[Web3.D]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Architectural Working Group]]></category>
		<category><![CDATA[IBM's interoperability patch for virtual worlds]]></category>
		<category><![CDATA[interoperability of virtual world assets]]></category>
		<category><![CDATA[Linden Lab New Architecture]]></category>
		<category><![CDATA[Lively style viewer for OpenSim]]></category>
		<category><![CDATA[Lively style viewer for Second Life]]></category>
		<category><![CDATA[managing assets and identity on an interoperable Open G]]></category>
		<category><![CDATA[Open Grid]]></category>
		<category><![CDATA[Open Grid Public Beta]]></category>
		<category><![CDATA[OpenSim and Second Life in your browser]]></category>
		<category><![CDATA[PyOGP]]></category>
		<category><![CDATA[python viewer for Second Life and OpenSim]]></category>
		<category><![CDATA[teleporting between OpenSim and Second Life]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=1587</guid>
		<description><![CDATA[Open Grid Public Beta opened today (see Second Life blog) marking the beginning of a new era of interoperable virtual worlds and a new architecture for Second Life TM. The magic of &#8220;running code and consensus&#8221; is here and, at least between OpenSim and Second Life TM, avatars are jumping back and forth. Hamilton Linden, [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/07/ugotradeogpsim.jpg"><img class="alignnone size-full wp-image-1593" title="ugotradeogpsim" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/07/ugotradeogpsim.jpg" alt="" width="450" height="267" /></a></p>
<p><a href="http://wiki.secondlife.com/wiki/Open_Grid_Public_Beta" target="_blank">Open Grid Public Beta</a> opened today (see <a href="http://blog.secondlife.com/2008/07/31/open-grid-public-beta-begins-today/" target="_blank">Second Life blog</a>), marking the beginning of a new era of interoperable virtual worlds and a new architecture for <a href="http://www.secondlife.com" target="_blank">Second Life TM</a>.  The magic of &#8220;running code and consensus&#8221; is here and, at least between <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a> and Second Life TM, avatars are jumping back and forth. <a href="http://blog.secondlife.com/author/hamiltonlinden/" target="_blank">Hamilton Linden</a>, who is leading the Open Platform Product Group (OPPG) as Director of Engineering for <a href="http://lindenlab.com/" target="_blank">Linden Lab</a>, said:</p>
<blockquote><p>The Public Open Grid Beta is an important step towards opening up the Second Life Grid to become interoperable with other virtual worlds.  Having successfully demonstrated interoperability with IBM, we&#8217;re excited to begin interoperability testing with the entire OpenSim community.</p></blockquote>
<p>In the picture opening this post, <a href="http://gwala.net/blog/" target="_blank">Adam Frisby</a> (avatar Adam Zaius), one of the founders of OpenSim, David Levine of IBM (avatar <a href="http://zhaewry.wordpress.com/" target="_blank">Zha Ewry</a>), who wrote the interoperability code, and myself are about to teleport from the Ugotrade OGP (Open Grid Protocol)-enabled OpenSim to the Linden Lab Open Grid. The teleport-to-an-external-region option is in a pull-down menu that brings up the box you see on the left.  If you join the Beta and want to visit, my region URL is http://ugotrade.net:9000</p>
<p>These teleports are about moving identity only, at the moment; no digital assets come along, so we are all Ruths.</p>
<p>You must join the Gridnauts group in Second Life TM if you want to participate. The download and instructions for the OGP (Open Grid Protocol) Open Grid Viewer will be on <a href="http://wiki.secondlife.com/wiki/Open_Grid_Public_Beta" target="_blank">the Wiki</a>.  And, to get a zipped binary package to set up an OGP-enabled OpenSim, you can go to the <a href="http://forge.opensimulator.org/gf/project/ogp/frs/?action=FrsReleaseBrowse&amp;frs_package_id=5" target="_blank">OpenSim forge site</a>. Because Mono and .NET use the same bytecode format, the same package will work just fine on both Windows/.NET and Linux/Mono. Mike Ortman of <a href="http://www.deepthink.com.au/" target="_blank">DeepThink</a> has generously created the zip package, which he will keep updated.</p>
<p>In the screenshot below, Adam Zaius, Zha Ewry and Tara5 Oh are preparing to teleport back from Open Grid to the Ugotrade OGP OpenSim.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/07/opegridadamzhaandme.jpg"><img class="alignnone size-full wp-image-1594" title="opegridadamzhaandme" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/07/opegridadamzhaandme.jpg" alt="" width="450" height="268" /></a></p>
<h3>Linden Lab&#8217;s New Architecture</h3>
<p>Along with interoperability, the Open Grid Beta marks the debut of Linden Lab&#8217;s new architecture, which has been incubated in the <a href="http://wiki.secondlife.com/wiki/Architecture_Working_Group" target="_blank">Architecture Working Group</a> (AWG) spearheaded by <a href="http://wiki.secondlife.com/wiki/User:Zero_Linden" target="_blank">Zero Linden</a>. As Zero Linden explained:</p>
<blockquote><p>A key component of virtual worlds that sets them apart from web sites is that you interact with them with your chosen identity.  Separating out the Agent Domain enables your identity to be held and hosted by an organization of your choice, and enables your identity to be truly independent of the many organizations that will eventually host regions. The web can&#8217;t do this &#8211; your identity on a web site is tied up with that web site. You have an account at each web site. In virtual worlds, independent persistent identity is key to the experience &#8211; and Agent Domains are just the technical mechanism that enables them in an open virtual world.</p></blockquote>
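<p>Zero&#8217;s description can be sketched in a few lines of plain Python. This is purely an illustration of the agent domain idea, not the actual Open Grid Protocol; every class and method name here is hypothetical.</p>

```python
# Illustrative sketch of the agent domain idea: identity lives with an
# agent domain of your choosing, while regions are hosted by independent
# organizations. Names are hypothetical, not the real Open Grid Protocol.

class RegionDomain:
    """A region host; it never owns the visiting identity."""
    def __init__(self, url):
        self.url = url
        self.present = {}  # agent name -> home agent domain

    def admit(self, agent_name, home):
        self.present[agent_name] = home

class AgentDomain:
    """Holds a persistent identity, independent of any region host."""
    def __init__(self, name):
        self.name = name
        self.agents = {}  # agent name -> URL of current region (or None)

    def login(self, agent_name):
        self.agents[agent_name] = None

    def place_avatar(self, agent_name, region):
        """Negotiate presence in a region on the agent's behalf."""
        if agent_name not in self.agents:
            raise KeyError("unknown agent")
        region.admit(agent_name, home=self.name)
        self.agents[agent_name] = region.url

# One identity, two independently hosted regions:
ad = AgentDomain("ugotrade-agents")
ad.login("Tara5 Oh")
opensim = RegionDomain("ugotrade.net:9000")
agni = RegionDomain("agni.lindenlab.com")
ad.place_avatar("Tara5 Oh", opensim)  # teleport to the OpenSim region
ad.place_avatar("Tara5 Oh", agni)     # hop over to the Linden grid
```

<p>The same identity hops between both regions, and each region only ever learns which agent domain vouches for it; that separation is exactly what the web&#8217;s per-site accounts cannot offer.</p>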
<p>The interop protocols developed in the AWG and used in the <a href="http://www-03.ibm.com/press/us/en/pressrelease/24589.wss" target="_blank">interoperability patch</a> written by David Levine of IBM (Zha Ewry in Second Life) not only play an important part in enabling virtual world interoperability; they will be a key component of the new Linden Lab architecture and eventually part of the main production grid, Agni, the grid we call Second Life. Zero explained:</p>
<blockquote><p>The plan is that, once this is shown to work, this code base will eventually be rolled into Agni, probably even before Agni is opened up to outside grids. TPing and login will be done on Agni using these interop protocols as the standard method. Of course, there are legacy viewers to support &#8211; so the existing stuff isn&#8217;t going away for some time.  And we&#8217;ll proceed very cautiously onto Agni, with &#8220;kill switches&#8221; that allow us to revert all viewers, even new ones, back to the old pathways.</p></blockquote>
<p>Zero Linden and Zha Ewry will be speaking on &#8220;OpenSim and the Future&#8221; &#8211; the progress they have made, and the implications of their work at <a href="http://www.metanomics.net/Event080408" target="_blank">Metanomics</a>, Noon PST on Monday, August 4th.  <a href="http://dusanwriter.com/" target="_blank">Dusan Writer</a> will also be announcing the follow-up to his much-lauded competition to create a better Second Life client viewer at the start of the show.</p>
<p>The picture below shows how the Open Grid client, in addition to offering the teleport option after login, allows you to select an external region even before you log in.</p>
<p><a href="http://wiki.secondlife.com/wiki/Open_Grid_Public_Beta" target="_blank"><img class="alignnone size-full wp-image-1588" title="open-gridpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/07/open-gridpost.jpg" alt="" width="450" height="265" /></a></p>
<h3>The Open Grid &#8211; a community of developers, &#8220;playing with shiny things&#8221;</h3>
<p>There is a strong team of Lindens working with Hamilton in the Open Platform Product Group. <a href="http://wiki.secondlife.com/wiki/User:Tess_Linden" target="_blank">Tess Linden,</a> Technical Director, leads design and implementation for the OPPG, and Layla Linden has been getting the agent domain ready. <a href="http://blog.secondlife.com/author/periapselinden/" target="_blank">Periapse Linden</a> is project manager for OPPG.  <a href="http://www.whump.com/moreLikeThis/" target="_blank">Whump Linden</a> is managing the <a title="Open Grid Public Beta" href="http://wiki.secondlife.com/wiki/Open_Grid_Public_Beta">Open Grid Public Beta</a>, which is organized through the Second Life TM Gridnauts group. Whump is also, I think, a very interesting contributor to the evolution of the Open Grid: he has an enormous amount of web experience, has been a blogger since 1998, and came to Linden Lab from Apple&#8217;s MobileMe group.</p>
<p><a href="http://wiki.secondlife.com/wiki/User:Enus_Linden" target="_blank">Enus Linden</a> and <a href="http://wiki.secondlife.com/wiki/User:Infinity_Linden" target="_blank">Infinity Linden</a> are working on testing tools known as the <a href="http://wiki.secondlife.com/wiki/Pyogp">PyOGP</a> test harness. These testing tools are a very interesting project in themselves. <a href="http://mrtopf.de/blog/" target="_blank">Tao Takashi</a>, who was the prime mover in the PyOGP project before it became part of the Open Grid Beta, explained to me:</p>
<blockquote><p>My vision was always to create something like libsecondlife but for plain Python instead of .NET. The old protocol was just too undocumented to really get something like this done quickly, so when OGP was getting born I thought of trying again, but with a better protocol. And by coincidence Linden Lab needed a test harness for testing all those components out there, so PyOGP was born, as the library can now serve as the backend for the tests. But in the long run of course more is possible. It can also become a full implementation of client and server, web service interface and more. I am working on an agent domain implementation for pyogp right now and I have some ideas for a text-based or maybe even 2d gfx client.</p></blockquote>
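<p>To give a flavor of the kind of building block such a test harness needs: OGP messages are LLSD documents carried over HTTP, and serializing one takes only a few lines of Python. The field name below is made up for illustration; it is not the actual OGP message schema, and this is not the real pyogp API.</p>

```python
# Serialize a flat dict of strings as an LLSD XML document, the wire
# format OGP messages travel in. The field name below is illustrative
# only, not the actual OGP schema.
import xml.etree.ElementTree as ET

def llsd_map(fields):
    llsd = ET.Element("llsd")
    body = ET.SubElement(llsd, "map")
    for key, value in fields.items():
        ET.SubElement(body, "key").text = key
        ET.SubElement(body, "string").text = value
    return ET.tostring(llsd, encoding="unicode")

payload = llsd_map({"region_url": "http://ugotrade.net:9000"})
```

<p>A harness can then POST such a payload to a capability URL and assert on the LLSD that comes back, which is exactly the &#8220;library as backend for the tests&#8221; role Tao describes.</p>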
<p>Something worth noting about the interoperability effort between Linden Lab and OpenSim, the Architecture Working Group, and the PyOGP initiative is the large number of experienced and talented developers putting extraordinary amounts of time and effort into these projects.</p>
<p>The meetings are packed. I had my first God-mode teleport into a full sim in Second Life TM from Zero Linden today, so I could get into the AWG meeting to ask some questions for this post. Yes, God-mode is truly the finest way to travel!  I hope to devote a series of posts to the pioneering developers who are creating the future of open source virtual worlds.  Their dedication and brilliance are quite extraordinary.</p>
<p>But for starters, a tip of the hat to the indefatigable and omnipresent Saijanai Kuhn (Lawson English in RL) &#8211; &#8220;a 20+ year script kiddie programmer who always wanted to get into game programming.&#8221; Saijanai says: &#8220;This is my chance to do something kool on a significant scale, so I&#8217;m excited about the whole AWG OGP thing.&#8221;</p>
<p>And, if you want a little example of how quickly some of this developing brilliance produces results in this community, check out this prototype for <strong><a href="http://gwala.net/blog/2008/07/introducing-xenki-source-now-availible/" target="_blank">a &#8220;Lively&#8482;&#8221;-style viewer for OpenSim+SL</a></strong> that Adam Frisby (OpenSim/<a href="http://www.deepthink.com.au/">Deep Think</a>) whipped up in a few hours! There is currently <a href="http://jira.secondlife.com/browse/MISC-881" target="_blank">a petition to release llmath/llvolume.cpp under a more liberal license</a>, which Adam pointed out to me is &#8220;somewhat required to do accurate rendering in alternate clients.&#8221;</p>
<p>I spent this weekend jumping around Second Life and OpenSims with Whump Linden and Zha Ewry. The picture below shows Zha, Whump, and me arriving on the LL Open Grid from Zha&#8217;s laptop sim. There is a bug, Zha told me, that makes us arrive at (0,0,0) on the sim.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/07/whumpzhatara5post2.jpg"><img class="alignnone size-full wp-image-1592" title="whumpzhatara5post2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/07/whumpzhatara5post2.jpg" alt="" width="450" height="260" /></a></p>
<h3>Managing Assets and Identity in an Interoperable Open Grid</h3>
<p><strong>Linden Lab is NOT throwing the baby (the Second Life economy) out with the bath water (the old Second Life architecture).</strong> Linden Lab has made this very clear many times, but Zero reiterated it for me when I asked:</p>
<blockquote><p>Absolutely &#8212; after all, I love babies &#8212; we positively need to build an architecture that supports the economy of SL &#8212; while at the same time allowing the virtual world to be open to a wider variety of experiences.</p></blockquote>
<p>And, if you have already watched the <a href="http://www.ugotrade.com/2008/07/27/metaverse-meetup-opensim-and-virtual-worlds-interoperability/" target="_blank">video of the NYC Metaverse Meetup</a>, you will know that interoperability of assets and managing identity in open virtual worlds is what&#8217;s on everyone&#8217;s mind.  But as David Levine (Zha Ewry in SL) pointed out several times: &#8220;These teleports are just about moving identity for the moment; they do not bring a single digital asset with them.&#8221;</p>
<p>There was a long but very interesting discussion at the Meetup about some of the issues of managing and federating identity and moving assets between multiple virtual worlds.  And Adam Frisby and David Levine outlined some of the technical and social steps to full interoperability in that discussion.</p>
<p>David Levine has also asserted several times that a big priority for him is looking at how the interoperability of assets can be implemented without detriment to &#8220;creators,&#8221; whom he describes as &#8220;the secret sauce&#8221; that makes Second Life a compelling place and the ingredient that makes a virtual world either work or not work. Interoperability, regardless of how particular virtual worlds decide to handle it, will force virtual worlds to rethink the way they do or don&#8217;t help their content creators and users relate outside the little puddle of their own particular terms of service. But, David pointed out, if we want to do something that spans more than one or two applications, this discussion, which is social as much as technical, has to happen in a broader community.</p>
<p>For now, the goals of the OGP Beta are narrow.  As Whump pointed out:</p>
<blockquote><p>The matter of inventory is not in scope for this part of the beta. Figuring out inventory is a combination of technical and community work. Some of this will be figuring out a common vocabulary for talking about these issues. We want to figure out the basics of protocols for teleport, find the bugs, and refine these issues. We want to have running code and test suites, because that will bring interested parties.</p></blockquote>
<p>But, while the beta has begun with a simple version of OpenSim trunk, the next step will be to work on interop with projects like <a href="http://www.realxtend.org/" target="_blank">realXtend</a> and <a href="http://www.tribalnet.se/" target="_blank">Tribal Net</a>. Both of these initiatives are bringing a lot of innovation to OpenSim, both see interoperability as a key project, and both are looking forward to joining the Beta soon.</p>
<h3>Roadmap for Open Grid</h3>
<p>I asked Zero Linden what the roadmap for the next few months would be:</p>
<p><em><strong>Zero Linden:</strong> Well, now that we&#8217;ve demonstrated some technical work and are going into a public beta, August is going to find much of the LL side hunkered down and fleshing out much architectural detail. For some areas, especially inventory and identity, we&#8217;ll be putting together some concrete frameworks so those more complex discussions can make progress in the Fall.  So the next step is to pave the way for clear progress on them.  They are big issues and deserve the time and background work to make them be successful discussions and eventually successful designs.</em></p>
<p><em><strong>Tara5 Oh: </strong>So when you say concrete framework, you mean code and architecture?</em></p>
<p><em><strong>Zero Linden:</strong> I mean more of a specific set of issues, use cases and design options to have a discussion about.  We&#8217;ve been talking about identity and inventory in largely general terms for almost a year. I think we as a whole have a common sense of what we are talking about.  Now we need some specific points to answer, and a guide for the design.  Then, the code will follow.</em></p>
<p><em><strong>Tara5 Oh</strong>: So when you say use cases, do you have a wish list yet?</em></p>
<p><em><strong>Zero Linden</strong>: Well, I have my personal pet use cases &#8212; who doesn&#8217;t &#8212; but what we will be developing in August is a more rational set. So, in short, nothing yet.  I&#8217;m trying to stay purposely &#8220;zen mind&#8221; about it &#8212; since it can be such an explosive topic.</em></p>
<p>In the picture below Whump Linden gazes out at the open horizon.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/07/whumppost1.jpg"><img class="alignnone size-full wp-image-1596" title="whumppost1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/07/whumppost1.jpg" alt="" width="450" height="360" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2008/07/31/the-open-grid-beta-the-first-step-to-interoperable-virtual-worlds/feed/</wfw:commentRss>
		<slash:comments>15</slash:comments>
		</item>
		<item>
		<title>Metaverse Meetup: &#8220;OpenSim and Virtual Worlds Interoperability&#8221;</title>
		<link>http://www.ugotrade.com/2008/07/27/metaverse-meetup-opensim-and-virtual-worlds-interoperability/</link>
		<comments>http://www.ugotrade.com/2008/07/27/metaverse-meetup-opensim-and-virtual-worlds-interoperability/#comments</comments>
		<pubDate>Mon, 28 Jul 2008 01:53:20 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[3D internet]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[free software]]></category>
		<category><![CDATA[GPL]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[Linden Lab]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[privacy in virtual worlds]]></category>
		<category><![CDATA[realXtend]]></category>
		<category><![CDATA[Second Life]]></category>
		<category><![CDATA[Second Life and the Art World]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web 3D]]></category>
		<category><![CDATA[Web3.D]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[IBM and Interoperable Virtual Worlds]]></category>
		<category><![CDATA[Metaverse Meetup]]></category>
		<category><![CDATA[OGP]]></category>
		<category><![CDATA[OpenSim and Second Life Interoperability]]></category>
		<category><![CDATA[OpenSource Virtual Worlds]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=1586</guid>
		<description><![CDATA[Here is the video of our last Metaverse Meetup: OpenSim &#38; Virtual Worlds Interoperability 7.23.08 (from Vimeo). The video of this landmark event was produced thanks to the awesome Annie Ok, Artist, Creative Director, Curator, Video Director, Metaverse Evangelist/Consultant, Co-Organizer of Metaverse Meetup. While Annie&#8217;s first love is art, she has been involved in an [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="302" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.vimeo.com/moogaloop.swf?clip_id=1417228&amp;server=www.vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /><embed type="application/x-shockwave-flash" width="400" height="302" src="http://www.vimeo.com/moogaloop.swf?clip_id=1417228&amp;server=www.vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" allowscriptaccess="always" allowfullscreen="true"></embed></object><br />
<a href="http://www.vimeo.com/1417228?pg=embed&amp;sec=1417228"></a></p>
<p>Here is the video of our last <a href="http://gamedev.meetup.com/153/" target="_blank">Metaverse Meetup: OpenSim &amp; Virtual Worlds Interoperability 7.23.08</a> (from <a href="http://www.vimeo.com/1417228" target="_blank">Vimeo</a>).  The video of this landmark event was produced thanks to the awesome <a href="http://annieok.com/" target="_blank">Annie Ok</a>: Artist, Creative Director, Curator, Video Director, Metaverse Evangelist/Consultant, and Co-Organizer of <a href="http://gamedev.meetup.com/153/" target="_blank">Metaverse Meetup</a>.</p>
<p>While Annie&#8217;s first love is art, she has been involved in an extraordinary number of projects (see her <a href="http://www.annieok.com/Bio/Bio" target="_blank">bio here</a>). Notably, Annie, with <a href="http://www.jeffcrouse.info/" target="_blank">Jeff Crouse</a> and <a href="http://pan-o-matic.com/" target="_blank">Stephanie Rothenberg</a>, made the documentary for and helped with the amazing <a href="http://www.annieok.com/OtherProjects/InvisibleThreads" target="_blank">Invisible Threads project</a>, which shows how excellent <a href="http://www.annieok.com/OtherProjects/InvisibleThreads" target="_blank">Second Life</a> is for such innovative mixed reality installations. The documentary <a href="http://annieok.com/tangent/?p=641" target="_blank">premiered</a> at <a href="http://www.mediartchina.org/events/newyorkmoma" target="_blank">Synthetic Times</a>. Annie also created the <a href="http://www.dipity.com/user/xantherus/timeline/Virtual_Worlds" target="_blank">interactive, collaborative Timeline of Virtual Worlds</a> that the whole community can help with.</p>
<p>Photos of the meetup are now posted <a href="http://flickr.com/photos/annieok/sets/72157606366191338/" target="_blank">here on Flickr</a> and some <a href="http://www.facebook.com/profile.php?id=694956192#/photo_search.php?oid=19358967556&amp;view=all" target="_blank">nice portraits here</a> on Facebook.</p>
<p>With the video, Annie sent out a great write-up about the meetup.</p>
<p>Annie noted:</p>
<p>&#8220;<a href="http://gwala.net/blog/" target="_blank">Adam Frisby</a> and <a href="http://zhaewry.wordpress.com/" target="_blank">David Levine</a> gave us incredible insight into OpenSim and shared compelling details that really expanded on what has previously been known about its amazing potential and revolutionary role in the future of the metaverse.&#8221;</p>
<p>And she was very kind about my really minor supporting role!</p>
<p>&#8220;Tish Shute was great as the guest moderator, asking key questions and adding salient commentary.&#8221;</p>
<p>And I really agree with Annie&#8217;s synopsis about what is at the heart of our metaverse meetups!</p>
<p>&#8220;It was so nice to see all the familiar regulars as well as meet the new ones. In true Metaverse Meetup style, we migrated en masse to a local bar where we continued the conversation about all things metaversal and had fun hanging out with fellow avatars until the late hours.&#8221;</p>
<p>Thanks to everyone and especially <a href="http://www.globalkids.org/" target="_blank">Global Kids</a> for making the meetup possible.</p>
<p>Please be sure to check out the list of Metaverse Meetup links on the <a href="http://gamedev.meetup.com/153/about/" target="_blank">new About page</a>. There are now Metaverse Meetup groups on LinkedIn, Flickr, and FriendFeed, as well as a list of Metaverse Meetup chapters in other cities for those of you who are not based in NYC.</p>
<p>Looking forward to seeing you at the next meetup!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2008/07/27/metaverse-meetup-opensim-and-virtual-worlds-interoperability/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
