<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; Android</title>
	<atom:link href="http://www.ugotrade.com/tag/android/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Tonchidot: Taking Augmented Reality Beyond Lab Science with Fearless Creativity and Business Savvy</title>
		<link>http://www.ugotrade.com/2009/09/17/tonchidot-taking-augmented-reality-beyond-lab-science-with-fearless-creativity-and-business-savvy/</link>
		<comments>http://www.ugotrade.com/2009/09/17/tonchidot-taking-augmented-reality-beyond-lab-science-with-fearless-creativity-and-business-savvy/#comments</comments>
		<pubDate>Thu, 17 Sep 2009 21:56:49 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Android]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[air tagging]]></category>
		<category><![CDATA[air tags]]></category>
		<category><![CDATA[Android developers in Japan]]></category>
		<category><![CDATA[Android phone by NTT DoCoMo]]></category>
		<category><![CDATA[anime]]></category>
		<category><![CDATA[AR Commons]]></category>
		<category><![CDATA[AR Consortium]]></category>
		<category><![CDATA[augmented reality apps on symbian phones]]></category>
		<category><![CDATA[augmented reality as a new public infrastructure]]></category>
		<category><![CDATA[augmented reality eyewear]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Carl Malamud]]></category>
		<category><![CDATA[Denno Coil]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Google Wave Protocol]]></category>
		<category><![CDATA[Gov 2.0 Summit]]></category>
		<category><![CDATA[Japanese augmented reality eyewear]]></category>
		<category><![CDATA[japanese iphone use]]></category>
		<category><![CDATA[japanese mobile culture]]></category>
		<category><![CDATA[Japanese mobile market]]></category>
		<category><![CDATA[Ken Inoue]]></category>
		<category><![CDATA[manga]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[Mitsuo Ito]]></category>
		<category><![CDATA[Sekaicamera]]></category>
		<category><![CDATA[Takahito Iguchi]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4410</guid>
		<description><![CDATA[Sekai Camera has a slick new demo video out that is already causing a stir in the Japanese press (see Beyond the Beyond). This video shows a ton of stuff going on! (A friend who lives in Tokyo pointed out to me that, in Japan, people are used to working with &#8220;busier&#8221; mobile UIs.) Takahito [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.youtube.com/watch?v=ORRZgEx0_Lc&amp;feature=player_embedded" target="_blank"><img class="alignnone size-medium wp-image-4411" title="Screen shot 2009-09-17 at 12.57.03 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-17-at-12.57.03-PM-300x243.png" alt="Screen shot 2009-09-17 at 12.57.03 PM" width="300" height="243" /></a></p>
<p><a href="http://support.sekaicamera.com/" target="_blank">Sekai Camera</a> has a <a href="http://www.youtube.com/watch?v=ORRZgEx0_Lc&amp;feature=player_embedded" target="_blank">slick new demo video</a> out that is already causing a stir in the Japanese press (<a id="w1av" title="see Beyond the Beyond" href="http://www.wired.com/beyond_the_beyond/2009/09/augmented-reality-sekaicamera-demo/">see Beyond the Beyond</a>). This video shows a ton of stuff going on! (A friend who lives in Tokyo pointed out to me that, in Japan, people are used to working with &#8220;busier&#8221; mobile UIs.)</p>
<p>Takahito Iguchi, founder of <a href="http://translate.google.com/translate?hl=en&amp;sl=ja&amp;u=http://www.tonchidot.com/&amp;ei=TJ6ySvupL4LVlAfEnPjvDg&amp;sa=X&amp;oi=translate&amp;resnum=1&amp;ct=result&amp;prev=/search%3Fq%3DTonchidot%26hl%3Den%26client%3Dfirefox-a%26rls%3Dorg.mozilla:en-US:official%26hs%3DPW3" target="_blank">Tonchidot</a>, the company that created Sekai Camera, is ultra cool. Coming to augmented reality from the worlds of anime and manga culture, he is already a successful entrepreneur with excellent sartorial taste (as <a href="http://www.wired.com/beyond_the_beyond/2009/09/augmented-reality-sekaicamera-demo/" target="_blank">Bruce Sterling notes</a>). Before turning his attention to AR, Iguchi-san was founder of Digitao, where he pioneered a blogging + social networking service, &#8220;chibikki (Little Diary).&#8221; Iguchi-san also spent time at JUST Systems and Scitron &amp; Art, where he developed innovative multimedia platforms and web services.</p>
<p>But Takahito Iguchi doesn&#8217;t give interviews in English. So recently, as part of my series of interviews with members of the <a href="http://www.arconsortium.org/" target="_blank">AR Consortium</a>, I found myself talking to the brilliant <a href="http://www.tonchidot.com/corporate-profile.html" target="_blank">CFO of Tonchidot</a>, <a href="http://www.linkedin.com/ppl/webprofile?action=vmi&amp;id=499984&amp;pvs=pp&amp;authToken=r8TF&amp;authType=name&amp;trk=ppro_viewmore&amp;lnk=vw_pprofile" target="_blank">Ken Inoue</a>. Inoue-san&#8217;s specialties include the Japanese mobile market, start-up finance, alliances, new business development, and international expansion.</p>
<p>And while, perhaps, I would have liked to learn more about how cool Japanese sub-cultures are informing the future of AR, with every business analyst under the sun opining on the future of this young industry, it is good to hear directly from an augmented reality CFO who is actually shaping business development on the ground. And Tonchidot is one of AR&#8217;s most interesting start-ups.</p>
<p>With Tonchidot, I think we are beginning to taste a magic brew as augmented reality, long nurtured only in lab scientist cultures, meets business savvy and fearless creativity.</p>
<p>Bruce Sterling <a href="http://www.wired.com/beyond_the_beyond/2009/09/augmented-reality-sekaicamera-demo/" target="_blank">posted the video below</a>, noting:</p>
<p><strong>&#8220;Tonchidot tearin&#8217; it up at the department store.  Check out that exceedingly weird and/or clever AR-iPhone <em>pistol grip device</em> that kicks in around 2:20.&#8221;</strong></p>
<p><strong><a href="http://www.youtube.com/watch?v=FiVFVdl3EA4&amp;feature=player_embedded#t=115" target="_blank"><img class="alignnone size-medium wp-image-4414" title="Screen shot 2009-09-17 at 2.34.42 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-17-at-2.34.42-PM-300x181.png" alt="Screen shot 2009-09-17 at 2.34.42 PM" width="300" height="181" /></a></strong></p>
<h3>The AR Commons</h3>
<p>In the interview below, Ken Inoue also describes an important organization that Tonchidot has helped create &#8211; the <a href="http://translate.google.com/translate?client=tmpg&amp;hl=en&amp;u=http%3A%2F%2Fwww.arcommons.org%2F&amp;langpair=ja|en" target="_blank">AR Commons</a>.</p>
<p><strong>Ken Inoue:</strong> <strong>We feel that public data, such as landmarks, government facilities, and public transport should be shared. We see an AR world where people can readily and easily access information by just seeing &#8211; quick, easy, and efficient. And because of this ease and intuitiveness, children, the elderly and handicapped will surely benefit. AR could help create a safer society. Warnings, alerts, and safety information could save lives and avoid disasters. These are what we, and <a href="http://translate.google.com/translate?client=tmpg&amp;hl=en&amp;u=http%3A%2F%2Fwww.arcommons.org%2F&amp;langpair=ja|en" target="_blank">AR Commons</a>, would like to tackle in the not so distant future.</strong></p>
<p>An AR Commons is something we should all be thinking about. <strong>&#8220;Augmented reality could be a new public infrastructure,&#8221;</strong> as <a href="http://twitter.com/timoreilly" target="_blank">Tim O&#8217;Reilly noted on Twitter</a>. I will discuss this more in my upcoming post on the recent <a href="http://www.gov2summit.com/" target="_blank">Gov 2.0 Summit</a>, which was an extraordinary event &#8211; an historic manifestation of the current wave of transformation in the nature of government that <a href="http://en.wikipedia.org/wiki/Carl_Malamud" target="_blank">Carl Malamud</a> described in his address, &#8220;By The People,&#8221; available as <a href="http://public.resource.org/people/" target="_blank">video, audio and text here</a>. Carl Malamud received a standing ovation at the Summit.</p>
<p>Malamud pointed out:</p>
<p><strong>&#8220;We are now witnessing a third wave of change &#8211; an Internet wave &#8211; where the underpinnings and machinery of government are used not only by bureaucrats and civil servants, but by the people.&#8221;</strong></p>
<h3>Talking with Ken Inoue, CFO, Tonchidot</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/tonchidot.png"><img class="alignnone size-thumbnail wp-image-4416" title="tonchidot" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/tonchidot-150x150.png" alt="tonchidot" width="150" height="150" /></a></p>
<p><strong>Tish Shute:</strong> There has been some skepticism lately that augmented reality experiences will live up to the recent hype (<a id="fk:l" title="see this post for example" href="http://www.genebecker.com/2009/09/thinking-about-design-strategies-for-magic-lens-ar/">see this post for example</a>). But Tonchidot has a reputation for creativity; as you pointed out, &#8220;we are not &#8216;AR lab scientists&#8217; &#8211; we are from the worlds of multimedia, visual arts, publishing, lovers of manga and anime and Japanese sub-culture&#8230;&#8221; What is Tonchidot&#8217;s approach to designing AR experiences that can deliver wonder, curiosity and discovery &#8211; the emotions of AR &#8211; despite the limitations of GPS+compass implementations of mobile AR?</p>
<p><strong>Ken Inoue:</strong> <strong>We have been facing skepticism ever since we started! It doesn&#8217;t really bother us; it never has. As for the opposite, the recent hype, well, we will have to live with that too. We are aware of the hype cycle, the obstacles that lie ahead. We are not rejoicing, and we will be prepared. By the way, I didn&#8217;t think the said article was skeptical at all &#8211; in fact, I took it as great advice.</strong></p>
<p><strong>Tish Shute:</strong> Most augmented reality experiences at the moment are about one person experiencing multiple streams of content; we haven&#8217;t seen any multiuser realtime interaction in augmented reality yet &#8211; for example, people teaming up to accomplish some goal. What do you think will be the most exciting aspects of shared augmented reality experiences? And we have yet to see a mobile augmented reality game reach a mass audience. Pong was a landmark game for the PC. It really excited people because there was a &#8220;Wow! My physical action is changing what is seen!&#8221; What would be an equivalent Wow! experience for augmented reality?</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-17-at-3.38.57-PM.png"><img class="alignnone size-medium wp-image-4417" title="Screen shot 2009-09-17 at 3.38.57 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-17-at-3.38.57-PM-300x148.png" alt="Screen shot 2009-09-17 at 3.38.57 PM" width="300" height="148" /></a></p>
<p><strong>Ken Inoue: We wish we had an answer to that! <img src="http://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /> We are talking to many game developers, and everyone has different ideas&#8230; we want to test them all! We are striving to be a social application, and we are thinking hard. But oftentimes, users find new uses and come up with really unexpected but ingenious ideas&#8230; that&#8217;s the nature of social experiences, I guess.</strong></p>
<p><strong>Tish Shute:</strong> It is a year since you demoed at TC50. What have been the most exciting developments in augmented reality this year, and what have been the biggest disappointments?</p>
<p><span style="background-color: #ffffff;"><strong>Ken Inoue: We&#8217;re definitely excited about what other start-ups in the field are doing across the ocean.Â  We get a lot of stimulation, and we see it as something close to a great sporting rivalry, but only, we aren&#8217;t that great yet&#8230;.Â  Our disappointment was that we weren&#8217;t able to release our app this summer&#8230;. </strong></span></p>
<p><strong>Tish Shute:</strong> I know you can&#8217;t give too many details about your upcoming iPhone launch because you are in &#8220;stealth mode&#8221; and because of Apple&#8217;s NDA. But I will start with a general question: &#8220;Do you think that Apple is going down the right path with what they are or aren&#8217;t making available to developers?&#8221;</p>
<p><strong>Ken Inoue:</strong> <strong>Looking back at Apple&#8217;s short history with the iPhone and App Store, they&#8217;ve slowly but steadily headed down the path of more openness. And what with the FCC making an inquiry to Apple about the rejected Google Voice application, they&#8217;re forced to be more friendly and open to developers, whether they want to or not&#8230;</strong></p>
<p><strong>Tish Shute:</strong> How is Tonchidot going to differentiate itself in an exploding field of new augmented reality companies?</p>
<p><strong>Ken Inoue:</strong> <strong>Well, I feel that the market for augmented reality is still at such a nascent stage that the priority for many of us is cooperation, rather than cut-throat competition. That&#8217;s the rationale for the <a href="http://www.arconsortium.org/" target="_blank">AR Consortium</a> that was founded by Robert Rice and others very recently, and something that we completely subscribe to. In Japan, Tonchidot is the central proponent of <a href="http://translate.google.com/translate?client=tmpg&amp;hl=en&amp;u=http%3A%2F%2Fwww.arcommons.org%2F&amp;langpair=ja|en" target="_blank">AR Commons</a>, an organization which has already started building a social database for AR.</strong></p>
<p><strong>Tish Shute:</strong> What do you think of the augmented reality applications released recently?</p>
<p><strong>Ken Inoue: There are now so many cool AR apps out there &#8211; we&#8217;d like to think that our presentation at TC50 back in September 2008 stimulated fellow developers just a little bit. Many AR applications and services seem to capture the benefits of AR in some way or another very well. I think maybe the difference between our service and what some others are doing is that we are initially focused on UGC (user generated content) &#8211; not on business applications and tools. However, it&#8217;s just a matter of prioritization, I think &#8211; it seems we all share the same dream!</strong></p>
<p><strong>Tish Shute:</strong> Yes, I like the way you have taken the concepts &#8220;world camera&#8221; and &#8220;air tagging&#8221; and focused on the social aspects &#8211; social tagging. Wikitude now has a way for users to create tags &#8211; <a href="http://www.wikitude.org/add-content" target="_blank">wikitude.me</a> &#8211; which is a big step forward too, I think.</p>
<p><strong>Ken Inoue:</strong> <strong>Yes, indeed! It seems they have done a great job. Their success, and the success of everyone else, helps us too, since it generates media attention, and also ideas for how it can be applied to the real world and real businesses.</strong></p>
<p><strong>Tish Shute:</strong> There is a growing number of AR browser-like experiences &#8211; Wikitude, Layar, and Sekai Camera &#8211; but they are not true browser experiences (in the sense that we experience web browsers), as they don&#8217;t share AR data across browsers. How can we move towards a situation of sharing augmented reality data? What are the obstacles to sharing AR data across browsers now? I guess these obstacles are mainly business obstacles, not technical obstacles. But what do you think?</p>
<p><strong>Ken Inoue:</strong> <strong>Because AR is in many ways location dependent, geographic coverage will always be a challenge for anyone. This means that collaboration makes sense. I don&#8217;t think there are many technical obstacles, and some things can already be shared through open APIs. The issue of sharing AR data cannot be solved by any one company &#8211; we believe we must make collaborative efforts.</strong></p>
<p>As I mentioned, we helped create an organization called <a href="http://www.arcommons.org/" target="_blank">AR Commons</a>, which has already started building a social database for AR in Japan. However, sharing ALL data on this platform will be a challenge, since so many interests will need to be aligned. Not all info is shared on the internet, and some prefer closed and secure environments.</p>
<p><strong>Tish Shute:</strong> What is your vision for AR Commons in the next 12 months?</p>
<p><strong>Ken Inoue:</strong> <strong>We feel that public data, such as landmarks, government facilities, and public transport should be shared. We see an AR world where people can readily and easily access information by just seeing &#8211; quick, easy, and efficient. And because of this ease and intuitiveness, children, the elderly and handicapped will surely benefit. AR could help create a safer society. Warnings, alerts, and safety information could save lives and avoid disasters. These are what we, and AR Commons, would like to tackle in the not so distant future.</strong></p>
<p><strong>Tish Shute:</strong> What is the business model for Sekai Camera? Do you have to subscribe to create, and otherwise just view?</p>
<p><strong>Ken Inoue:</strong> <strong>All users can create AirTags &#8211; we want to allow all users to start AirTagging and add value to our service. We wanted everybody to make tags, and we didn&#8217;t want to put a hurdle on it.</strong></p>
<p><strong>So, users can create text, voice, and image/photo tags and can add comments on the tags &#8211; much like blogging and Twitter. We will also mash up with many other social services, which will strengthen the &#8220;social&#8221; aspect of our app.</strong></p>
<p><strong>Tish Shute:</strong> Are you aiming for something close to the real-time experience of Twitter? And what will attract users over other social location-based apps, like Brightkite, that use two-dimensional maps?</p>
<p><strong>Ken Inoue:</strong> <strong>Our service is very close to real-time already &#8211; only, because of the location specific aspect, it will be different. It will definitely be something new. Maps will also be integrated.</strong></p>
<p><strong>Tish Shute:</strong> And Sekai Camera will work anywhere in the world?</p>
<p><strong>Ken Inoue: We have named and designed it to be global! <img src="http://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /> </strong></p>
<p><strong>However, it&#8217;s definitely easier for any company to focus on its home market first. Being a Japanese company, we are initially concentrating on the Japanese market. It&#8217;s still the second largest economy in the world, one of the leaders in the mobile internet market, full of geeks and early adopters of new technologies. And what&#8217;s more, we already have great buzz here, and it&#8217;s easier to talk and collaborate with local partners. For any company building AR apps, geography and platform may be the difficult decisions to make, since first-mover advantage may become quite significant&#8230; We are lucky to have such a large and hungry home market.</strong></p>
<p><strong>Tish Shute:</strong> Yes, you have Denno Coil too &#8211; one of my big inspirations!</p>
<p><strong>Ken Inoue: Oh, you know about it! How surprising!</strong></p>
<p><strong>Tish Shute:</strong> You mentioned your CEO <a href="http://ascii.jp/elem/000/000/409/409940/" target="_blank">did a talk session with the creator of Denno Coil recently</a>.</p>
<p><strong>Ken Inoue: Yes, &#8220;Denno Coil&#8221; shows us what the future could be, and is very inspiring. We actually didn&#8217;t know about Denno Coil until afterwards, although it was broadcast on national TV.</strong></p>
<p><strong>There is a picture in the web article of our talk session &#8211; on the right, you see our CEO, Takahito Iguchi; on the left, Mitsuo Iso, creator of Denno Coil. Iso-san knew about Sekai Camera and, in fact, gave us lots of hints and advice on how to make it better. He is a real technology lover &#8211; a Mac lover and an iPhone lover.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-17-at-4.55.06-PM.png"><img class="alignnone size-medium wp-image-4423" title="Screen shot 2009-09-17 at 4.55.06 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-17-at-4.55.06-PM-300x201.png" alt="Screen shot 2009-09-17 at 4.55.06 PM" width="300" height="201" /></a><br />
</strong></p>
<p><strong>Tish Shute:</strong> Iguchi-san is a very inspiring and charismatic thinker, and I would love to know about some of his imaginings for augmented realities. What are his AR imaginings for the next step after air tagging? What does Iguchi-san see Tonchidot doing in 2010, and then beyond that? And what are some augmented realities he would like to see, even beyond the limitations of current technologies?</p>
<p><strong>Ken Inoue: We believe the possibilities are infinite! There are so many things we can and would like to do, but such limited resources&#8230; So here again, what we and other fellow AR pioneers will be doing will depend on how we prioritize. We would like to keep our plans secret for now. <img src="http://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /></strong></p>
<p><strong>Tish Shute:</strong> Does Iguchi-san see Tonchidot doing more with image recognition and the tight alignment of graphics with physical objects in the near future?</p>
<p><strong>Ken Inoue: Yes, definitely! We are already in talks with potential partners. There are some great technologies here in Japan, which were just waiting for us!</strong></p>
<p><strong>Tish Shute:</strong> And when will we get the kind of eyewear that would really change everything? (I noticed <a href="http://www.masunaga1905.jp/brand/teleglass/" target="_blank">one Japanese company that is producing eyewear</a> &#8211; what is their potential? Are there other eyewear initiatives in Japan?) What does Tonchidot think will be key to pushing this kind of hardware development for AR forward?</p>
<p><strong>Ken Inoue: Yes indeed, the world of Denno Coil is not too far away&#8230; There are actually many projects going on in Japan, and we are definitely interested in hardware development. We are not short of world-class hardware developers here in Japan, and we have been approached by quite a few.</strong></p>
<p><strong>Tish Shute:</strong> I know you got some criticism for showing a concept video at last year&#8217;s TechCrunch50, which people felt didn&#8217;t show the technology you had actually developed. Do you have all the functionality shown in your video working now?</p>
<p><strong>Ken Inoue: Hmm&#8230; We did get criticism, and so, it seems, did TechCrunch &#8211; but we got far more praise and support! I guess we really felt we needed to get the idea out there. As Robert said in your interview, it&#8217;s hard to make people understand the full potential of AR, and unless you show something like that in video form, it&#8217;s difficult to make people understand. We showed TC50 our working prototype on the iPhone, and made it clear that the video was a vision of the future. Because of the language barrier, we used simple phrases like &#8220;Look up, not down&#8221; and &#8220;AirTag&#8221;. TC50 let us make the presentation, for which we are very, very thankful.</strong></p>
<p><strong>Tish Shute:</strong> Oh, I love the term Air Tagging. It is a brilliant term! Robert Rice noted it has the ring of terms like Xerox and Kleenex &#8211; i.e. a brand that becomes the &#8220;thing&#8221; and no longer a brand &#8211; congrats! Sekai (World) Camera is a really nice concept too!</p>
<p><strong>Ken Inoue: Thanks!</strong></p>
<p><strong>Tish Shute:</strong> Recently @rhymo of SPRXMobile tweeted that Samsung NL was calling #augmentedreality the Optical Internet. The resulting Twitter discussion gave a pretty resounding thumbs down to the term Optical Internet, with no&#8217;s from @bruces and my friend Gene Becker.</p>
<p><em><strong>RT @genebecker: No @Rhymo, Optical Internet misses the point that #AR will be multimodal, multisensory, social, contextual</strong></em></p>
<p>I tweeted that I thought Tonchidot may be able to improve on the term augmented reality, considering your great track record with wordsmithing. Has the Tonchidot team got any ideas for a better term?</p>
<p><strong>Ken Inoue: Good question &#8211; the term &#8220;AR&#8221; is too techy/difficult&#8230; we agree. But we haven&#8217;t thought of a good alternative term yet&#8230;</strong></p>
<p><strong>Tish Shute:</strong> Who came up with the term &#8220;air tags&#8221;?</p>
<p><strong>Ken Inoue: Our CEO, Takahito Iguchi, did. He has a talent for creating names, phrases&#8230; and the future, we hope. <img src="http://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /><br />
Our members are not &#8220;AR lab scientists&#8221; &#8211; we are from the worlds of multimedia, visual arts, publishing, lovers of manga and anime and Japanese sub-culture&#8230;</strong></p>
<p><strong>Tish Shute:</strong> You mentioned Tonchidot has been very involved in the Android development community in Japan. Can you tell me more about this, and which areas has Tonchidot been most interested in? What do the Tonchidot developers think have been the most exciting new developments with Android?</p>
<p><strong>Ken Inoue: Yes, core members of our tech team are key members of the Android movement in Japan, and we are influenced greatly by what&#8217;s happening there. Their openness is very, very attractive indeed! It was a tough decision whether to choose Android or iPhone as our first application platform. There are pros and cons.</strong></p>
<p><strong>The Android dev community is unofficial, of course, but we have been invited to speak and do demos very often &#8211; <a href="http://www.mobilecrunch.com/2009/03/20/sekai-camera-mobile-social-tagging-is-coming-to-android-phones-too/" target="_blank">one of our demos is in the media</a> &#8211; shooting games on Android. It was quite a while back, and our app is now far ahead.</strong></p>
<p><strong>Tish Shute:</strong> But Sekai Camera will be released on the iPhone?</p>
<p><strong>Ken Inoue: YES, if all goes well &#8211; as many have pointed out, iPhone is not PERFECT &#8211; no device is, at least currently.</strong></p>
<p><strong>Tish Shute:</strong> Yes, and how is iPhone uptake in Japan? The big plus in the US is the big user base.</p>
<p><strong>Ken Inoue: Yes, that&#8217;s the big difference. In Japan, Softbank, the #3 carrier, is marketing it &#8211; for now. They don&#8217;t release numbers, but I think there are 1M handsets already sold. Still very small compared to other markets. BTW, in Japan, roughly 35M handsets were sold last year, dropping from 50M in previous years.</strong></p>
<p><strong>Tish Shute:</strong> Yes, it seems at the moment application developers are forced to choose between the US market and the rest of the world! So what is the status of Android in the Japanese mobile market? The iPhone share is pretty tiny.</p>
<p><strong>Ken Inoue: We just had a release of the first Android phone by NTT DoCoMo a couple of months ago, so still very very early.</strong></p>
<p><strong>Tish Shute:</strong> So the Android phone market is even smaller than the iPhone&#8217;s?</p>
<p><strong>Ken Inoue: Yes, and hence our decision to release on the iPhone.<br />
We haven&#8217;t provided our app for Android yet &#8211; just demos. It&#8217;s too small a market, at least for now.</strong><br />
<br style="background-color: #ffffff;" /> <span style="background-color: #ffffff;"><strong>Tish Shute:</strong> Robert put out an interesting question: </span><em style="background-color: #ffffff;">&#8220;Are we letting the short term glitz of Apple and the iPhone fad pull us in the wrong direction? Shouldn&#8217;t we be focusing on Symbian devices that have the lion&#8217;s share of the market? Or should we be looking more at either other OSs (WinMobile, Android), or not at all, and trying to create a new platform that is more MID and less smart phone, with a hardware partner?&#8221;</em><br style="background-color: #ffff00;" /><br />
<strong>Ken Inoue: Good point. We certainly don&#8217;t wish to be Apple-dependent, or dependent on anyone. As much as we like Apple and iPhone, we will surely create apps for other platforms. We always get questions/requests to create Symbian apps, and we would like to do that &#8211; but in order of prioritization &#8211; we&#8217;re a small start-up.</strong></p>
<p><strong>Tish Shute:</strong> There are obstacles to creating AR apps on Symbian devices, aren&#8217;t there?</p>
<p><strong>Ken Inoue: The AR experience we can provide on iPhone and Android cannot be replicated on conventional phones. However, we haven&#8217;t examined the possibilities on Symbian in detail yet, so we can&#8217;t say much.</strong></p>
<p><strong>Tish Shute: </strong>iPhone adoption in the US has really put augmented reality on the map.</p>
<p><strong>Ken Inoue: It certainly has!<br />
</strong></p>
<p><strong>In Japan, it is rumored that the iPhone will soon be marketed by multiple carriers, in addition to Softbank. That will be a boost for us. Apple is moving gradually to a multi-carrier strategy, I believe. With content getting richer, Apple will be required to partner with carriers with strong infrastructure.</strong></p>
<p><strong>Tish Shute:</strong> Recently I have been exploring the strengths of the <a href="http://www.waveprotocol.org/" target="_blank">Google Wave protocol</a> for some aspects of mobile augmented reality.<br />
<span style="background-color: #ffffff;"> </span></p>
<p><span style="background-color: #ffffff;">And this is, perhaps, a question for the tech team? Do the Tonchidot developers think Google Wave would be an interesting jumping-off point for some augmented reality standards?</span><br style="background-color: #ffffff;" /><br />
<strong>Ken Inoue: Our tech members haven&#8217;t been able to examine this in detail yet &#8211; but we are definitely excited!</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-17-at-5.10.09-PM.png"><img class="alignnone size-medium wp-image-4425" title="Screen shot 2009-09-17 at 5.10.09 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-17-at-5.10.09-PM-300x199.png" alt="Screen shot 2009-09-17 at 5.10.09 PM" width="300" height="199" /></a><br />
</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/09/17/tonchidot-taking-augmented-reality-beyond-lab-science-with-fearless-creativity-and-business-savvy/feed/</wfw:commentRss>
		<slash:comments>8</slash:comments>
		</item>
		<item>
		<title>Games, Goggles, and Going Hollywood&#8230;How AR is Changing the Entertainment Landscape: Talking with Brian Selzer, Ogmento</title>
		<link>http://www.ugotrade.com/2009/08/30/games-goggles-and-going-hollywood-how-ar-is-changing-the-entertainment-landscape-talking-with-brian-selzer-ogmento/</link>
		<comments>http://www.ugotrade.com/2009/08/30/games-goggles-and-going-hollywood-how-ar-is-changing-the-entertainment-landscape-talking-with-brian-selzer-ogmento/#comments</comments>
		<pubDate>Mon, 31 Aug 2009 03:38:38 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[nanotechnology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[alternate reality RPG]]></category>
		<category><![CDATA[ambient intelligence]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[AR Network]]></category>
		<category><![CDATA[AR spam]]></category>
		<category><![CDATA[ARBalloon]]></category>
		<category><![CDATA[ARN]]></category>
		<category><![CDATA[augmented reality baseball cards]]></category>
		<category><![CDATA[augmented reality development]]></category>
		<category><![CDATA[augmented reality eyewear]]></category>
		<category><![CDATA[augmented reality hotspots]]></category>
		<category><![CDATA[augmented reality industry]]></category>
		<category><![CDATA[augmented reality network]]></category>
		<category><![CDATA[augmented reality on the iphone]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[augmented reality toys]]></category>
		<category><![CDATA[Blockade]]></category>
		<category><![CDATA[Brad Foxhoven]]></category>
		<category><![CDATA[Brian Selzer]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Cyberpunk]]></category>
		<category><![CDATA[Evolutionary Reality]]></category>
		<category><![CDATA[EyeToy]]></category>
		<category><![CDATA[eyewear for AR]]></category>
		<category><![CDATA[Games Alfresco]]></category>
		<category><![CDATA[Green Tech AR]]></category>
		<category><![CDATA[jim purbrick]]></category>
		<category><![CDATA[Kensuke Tanabe]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Layar Developer Conference]]></category>
		<category><![CDATA[location based RPGs]]></category>
		<category><![CDATA[Lumus]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[markerless mobile augmented reality]]></category>
		<category><![CDATA[markerless natural feature tracking]]></category>
		<category><![CDATA[Masunaga]]></category>
		<category><![CDATA[Metroid]]></category>
		<category><![CDATA[Metroid Prime]]></category>
		<category><![CDATA[Mirrorshades]]></category>
		<category><![CDATA[multiperson mobile AR experiences]]></category>
		<category><![CDATA[Nano Air Vehicles]]></category>
		<category><![CDATA[near field object recognition]]></category>
		<category><![CDATA[new augmented reality trade jargon]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[Pentagon's Robot Hummingbirds]]></category>
		<category><![CDATA[Project Natale]]></category>
		<category><![CDATA[Put a Spell]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[Sekai camera]]></category>
		<category><![CDATA[social gaming platforms]]></category>
		<category><![CDATA[sticky light]]></category>
		<category><![CDATA[The Dawn of the Augmented Reality Industry]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[Topps AR baseball cards]]></category>
		<category><![CDATA[Total Immersion]]></category>
		<category><![CDATA[Vuzix]]></category>
		<category><![CDATA[Wikitude]]></category>
		<category><![CDATA[Yoshio Sakamoto]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4334</guid>
		<description><![CDATA[Picture on the left Mirrorshades, picture on the right a Metroid Hud. &#8220;Augmented Reality is like a Philip K Dick novel torn off its paperback rack and blasted out of iPhones,&#8221; Bruce Sterling in Beyond the Beyond &#8220;a techno visionary dream come true &#8211; those are rare, really rare, you have to be patient, it&#8217;s [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/mirrorshadespost3.jpg"><img class="alignnone size-full wp-image-4349" title="mirrorshadespost3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/mirrorshadespost3.jpg" alt="mirrorshadespost3" width="124" height="204" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/metroid_hud1post2.jpg"><img class="alignnone size-medium wp-image-4350" title="metroid_hud1post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/metroid_hud1post2-300x204.jpg" alt="metroid_hud1post" width="300" height="204" /></a></p>
<p><em>Picture on the left <a href="http://www.amazon.com/Mirrorshades-Cyberpunk-Anthology-Greg-Bear/dp/0441533825" target="_blank">Mirrorshades</a>, picture on the right a <a href="http://en.wikipedia.org/wiki/Metroid" target="_blank">Metroid Hud</a>.</em></p>
<p><strong>&#8220;Augmented Reality is like a Philip K Dick novel torn off its paperback rack and blasted out of iPhones,&#8221; <a href="http://www.wired.com/beyond_the_beyond/2009/08/the-key-take-aways-for-investors-interested-in-the-augmented-reality-field/" target="_blank">Bruce Sterling in Beyond the Beyond</a></strong></p>
<p><strong>&#8220;a techno visionary dream come true &#8211; those are rare, really rare, you have to be patient, it&#8217;s super cyberpunk&#8221;&#8230; Bruce Sterling, <a href="http://vimeo.com/6189763" target="_blank">&#8220;At the Dawn of the Augmented Reality Industry.&#8221; </a></strong></p>
<p>The Dawn of the Augmented Reality Industry continues to brighten, and now we have two augmented reality companies, <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a> and <a href="http://ogmento.com/" target="_blank">Ogmento</a>, firmly established in Hollywood &#8211; the dream mother of so many of our augmented realities.<a href="http://ogmento.com/" target="_blank"></a></p>
<p><a href="http://ogmento.com/" target="_blank">Ogmento</a> is the more recent of these two pioneering augmented reality companies to set up shop in LA. <a href="http://www.t-immersion.com/" target="_blank">Total Immersion&#8217;s</a> CEO Bruno Uzzan moved to LA from France two years ago, although he still has a fifty-person R&amp;D team in France. Total Immersion began 10 years ago in the quiet, lonely hours before the dawn of an AR industry. But <a href="http://gamesalfresco.com/2009/07/23/mattel-launches-augmented-toys-at-comic-con/" target="_blank">Total Immersion&#8217;s AR toys for Mattel,</a> and augmented reality for <a href="http://www.youtube.com/watch?v=I7jm-AsY0lU" target="_blank">Topps baseball cards</a>, fired CNet writer Daniel Terdiman up enough to say, &#8220;I have seen the future of toys, and it is augmented reality&#8221; (<a href="http://news.cnet.com/8301-13772_3-10317117-52.html" target="_blank">see full post here on CNet</a>).</p>
<p>Recently, I talked with <a href="http://www.ugotrade.com/2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/" target="_blank">Ori Inbar, one of the founders of Ogmento</a> and the premier augmented reality blog <a href="http://gamesalfresco.com/" target="_blank">Games Alfresco</a>, about his new venture in Hollywood. Bruce Sterling, <a href="http://twitter.com/bruces" target="_blank">@bruces</a>, had some fun with my invention of <a href="http://www.wired.com/beyond_the_beyond/2009/08/augmented-reality-ogmento/" target="_blank">brand new augmented reality trade jargon here</a>! Ori pointed out that Ogmento brings two important new facets to the rapidly growing augmented reality field: firstly, they are bringing leadership from veterans of the entertainment industry into augmented reality development. <a id="squu" title="Brad Foxhoven" href="http://www.blockade.com.nyud.net:8080/about/about-blockade" target="_blank">Brad Foxhoven</a> and <a id="odvk" title="Brian Selzer" href="http://brianselzer.com/">Brian Selzer</a> from <a id="xow_" title="Blockade" href="http://www.blockade.com/" target="_blank">Blockade</a> have partnered with Ori on Ogmento. And, in another important step forward for a young industry, Ogmento announced they will be acting as publishers for a fast-growing cohort of augmented reality application developers, helping AR development teams bring their concepts to market.</p>
<p>So I was very happy also to have the opportunity to talk with Brian Selzer. As Bruce Sterling pointed out in his seminal <a href="http://eurekadejavu.blogspot.com/2009/08/augmented-realitys-sermon-on-flatlands.html" target="_blank">sermon from the flatlands</a> at the <a href="http://layar.com/" target="_blank">Layar</a> Developer Conference, AR is kind of a &#8220;Hollywood scene.&#8221; We have seen the web early adopter/developer/blogger community embrace augmented reality browser experiences in recent weeks in an awesome wave of enthusiasm. Are Hollywood creatives equally smitten? For the answers, see the full interview with Brian Selzer below.</p>
<p>Brian Selzer (<a href="http://brianselzer.com/" target="_blank">www.brianselzer.com</a> and <a href="http://twitter.com/brianse7en" target="_blank">twitter &#8211; brianse7en</a> ) has an extensive involvement with emerging platforms:</p>
<p><strong>&#8220;from launching dot com entertainment sites in the late 90&#8217;s to creating early versions of social gaming platforms, or bringing big brands like Spider-Man and X-Men into the mobile space for the first time. Last year I was focused on bringing video game characters and worlds into the online space as UGC [user generated content] projects (<a href="http://www.mashade.com/" target="_blank">mashade.com</a>, <a href="http://www.instafilms.com/" target="_blank">instafilms.com</a>).&#8221;</strong></p>
<p>I began my own career in Hollywood doing motion control photography and creating software that bridged the language of robotics and servo motors with the visions of film directors. Eventually our little company, NPlus1, moved on to 3D vision systems and image recognition stuff. So yes, I have been really, really patient waiting for this particular techno visionary dream. And, while I have been waiting for augmented reality to manifest, I have grown to love the internet. But now, how awesome &#8211; <a href="../../2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">it is OMG finally for mobile AR!</a></p>
<p>Augmented reality is busting out all over &#8211; through our laptops, our phones, on the streets, toys, baseball cards, art installations, <a href="http://www.youtube.com/watch?v=9noMfsg486Y" target="_blank">sticky light calligraphy</a> and more.</p>
<p>Many of my questions to Brian were directed at how and when we will see augmented realities with near field object recognition, image recognition and tracking and, of course, the elusive eyewear. As Bruce Sterling points out, we are just at the very, very beginning &#8211; the dawn of an industry. I created the photomontage below on the right to complement <em> <a href="http://www.tonchidot.com/">Tonchidot&#8217;s</a> </em>illustration suggesting the evolutionary inevitability of holding our phones up (below on the left). The Evolutionary Reality of AR will not end there. It is just a step to eyewear, hummingbirds or <a href="http://gizmodo.com/5306679/pentagons-robot-hummingbird-christened-nano-air-vehicle" target="_blank">Nano Air Vehicles</a>, and more&#8230;&#8230;.</p>
<h3>The Evolutionary Reality of AR</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-96.png"><img class="alignnone size-medium wp-image-4359" title="Picture 96" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-96-300x97.png" alt="Picture 96" width="300" height="97" /></a></p>
<p><em>Cartoon on the left by <a href="http://www.tonchidot.com/">Tonchidot</a>; on the right, a collage of a stock photo and the <a href="http://gizmodo.com/5306679/pentagons-robot-hummingbird-christened-nano-air-vehicle" target="_blank">Pentagon&#8217;s Robot Hummingbirds &#8211; &#8220;Nano Air Vehicles.&#8221;</a></em></p>
<p>With the iPhone, we finally have an affordable mediating device with the horsepower, mindshare and business model to bring AR mainstream. The much anticipated Apple 3.1 Beta SDK to be released in September will not, I am sure, open up the Video API at the levels that augmented realities with near field object recognition and tracking require (I would love to be proved wrong though). But the magic wand to deliver even <span id="b9-2" title="Click to view full content">tightly registered AR graphics/media (which require a lot of CPU and GPU)</span> to a wide audience is in our hands, so full access may not be far off. And others, of course, can/will/might knock the iPhone off its current pedestal. AR made its mobile phone debut on Android, after all.</p>
<p>Like everyone else who loves AR, I wish that Apple would open up faster (and I wish Android would manifest on some rocking hardware). But we will see enough of the iPhone Video API open for the next generation of mobile augmented reality games and applications to emerge in the coming months.</p>
<p>One of these will be Ogmento&#8217;s. Although Ogmento is in stealth mode, they have released <a href="http://www.youtube.com/watch?v=EB45O7-6Xrg&amp;eurl=http%3A%2F%2Fogmento.com%2F&amp;feature=player_embedded" target="_blank">a teaser for their first game, &#8220;Put A Spell,&#8221;</a> developed by ARBalloon &#8211; screenshot below. Ori did reveal to me in <a href="../../2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/" target="_blank">th<span style="color: #551a8b;">is interview</span></a> that they are doing image recognition and using the Imagination AR engine.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-95.png"><img class="alignnone size-medium wp-image-4356" title="Picture 95" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-95-300x177.png" alt="Picture 95" width="300" height="177" /></a></p>
<p>As Brian notes, Hollywood has had the AR bug for a long time. AR has been everywhere in science fiction movies and video games. Nintendo&#8217;s SPD3 head Kensuke Tanabe, &#8220;effectively the man in charge of overseeing all the <em>Metroid</em> franchise underneath original co-creator Yoshio Sakamoto,&#8221; explains the story of <em>Metroid</em> to Brandon Boyer of <a href="http://www.offworld.com/2009/08/retro-effect-a-day-in-the-stud.html" target="_blank">Offworld here</a> (an image of a Metroid HUD opens this post, on the right):</p>
<p><strong>&#8220;the idea of the different visors you use in the <em>Prime</em> games to interact with the world: the scan visor, for instance, set the game apart from other first person shooters in that the player was using it to proactively collect information from the world, rather than having the story come to them passively, in the form of cut-scenes or narration. &#8220;<em>Prime</em> could have adventure elements with the introduction of this visor,&#8221; says Tanabe, &#8220;That&#8217;s how we came up with the genre &#8212; first person adventure, instead of shooter.&#8221;</strong></p>
<p>But as Brian points out:</p>
<p><strong>&#8220;the light bulb has been lit and Hollywood is seeing that the software and hardware are here today to deliver these types of AR experiences in real life (to a lesser extent of course, but the path is getting clear).&#8221;</strong></p>
<h3>Talking with Brian Selzer</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/me.jpg"><img class="alignnone size-full wp-image-4363" title="me" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/me.jpg" alt="me" width="188" height="227" /></a></p>
<p><strong>Tish Shute: </strong>Bruce Sterling&#8217;s sermon at the Layar Developer Conference, <a href="http://www.wired.com/beyond_the_beyond/2009/08/at-the-dawn-of-the-augmented-reality-industry/" target="_blank">&#8220;At the Dawn of the Augmented Reality Industry,&#8221;</a> was absolutely awesome. He spread the future feast/orgy of augmented reality before us &#8211; and described many of the dishes we will be tasting, both delectable and diabolical. One of the many things he points out is that AR is kind of a &#8220;Hollywood scene.&#8221; And, as Ogmento is one of only two augmented reality companies in Hollywood, I am interested to hear how it looks from your neck of the woods. We have seen the web early adopter/developer/blogger community embrace augmented reality browsers in recent weeks in an awesome wave of enthusiasm &#8211; are Hollywood creatives catching the buzz?</p>
<p><strong>Brian Selzer: It was a thrill to hear Bruce Sterling mention Ogmento. I devoured all of his Cyberpunk books back in the 80&#8217;s, along with writers like Gibson, Rucker, Shirley&#8230; To me, sci-fi writers are the visionaries who define and influence our technological paths into the future. They make science and tech sexy enough to want to manifest those experiences in the real world. Clearly Bruce sees the AR industry as being sexy. I love that he called it &#8220;a techno-visionary dream come true&#8230; and super-cyberpunk.&#8221; And yes, kind of a Hollywood scene.</strong></p>
<p><strong>Hollywood creatives caught the AR bug before they knew what AR was. Look at science fiction movies and video games to see AR everywhere. Terminator, The Matrix, Minority Report, Iron Man&#8230; the list goes on. Look at any video game with an integrated heads-up display. It&#8217;s clear Hollywood loves AR. It&#8217;s only been in the past few months though that the light bulb has been lit and Hollywood is seeing that the software and hardware are here today to deliver these types of AR experiences in real life (to a lesser extent of course, but the path is getting clear). So yes, the buzz is here and it&#8217;s strong. With that, we all have to be prepared for the good, the bad and the ugly as AR goes mainstream.</strong></p>
<p><strong>It certainly goes to show how young this industry is when Ogmento and Total Immersion are currently the only AR companies based in Los Angeles. It&#8217;s very exciting to be the only company right now demonstrating a natural feature tracking (markerless) iPhone experience in Hollywood. We are in talks to bring some very big brands and properties to the mobile AR space. The goal is to deliver experiences that create added engagement and value for the consumer.</strong></p>
<p><strong>Tish Shute:</strong> Also in his landmark sermon, Bruce Sterling noted that augmented reality has been around for 17 years, and now at last we are seeing the dawning of an augmented reality industry. What inspired you to take up the challenge of launching an augmented reality company in Hollywood? Oh, and congrats that Bruce Sterling name-checked Ogmento in his list of companies that prove that this really is the dawn of an industry!</p>
<p><strong>Brian Selzer: I&#8217;ve always been involved in emerging platforms&#8230; from launching dot com entertainment sites in the late 90&#8217;s to creating early versions of social gaming platforms, or bringing big brands like Spider-Man and X-Men into the mobile space for the first time. Last year I was focused on bringing video game characters and worlds into the online space as UGC projects (mashade.com, instafilms.com). Working with all these great CG game assets, I continued to think about what&#8217;s next, and that&#8217;s when I started to follow AR very closely and started engaging with those who were pioneering in the space.</strong></p>
<p><strong>I remember swapping instant messages with <a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> (<a href="http://twitter.com/robertrice" target="_blank">@robertrice</a>) right after the 2008 Super Bowl. We were not chatting about the football game, but rather about some of the commercials that aired during the event as a sign that AR was making its way into the mainstream. A lot of people became aware of AR for the first time when the <a href="http://ge.ecomagination.com/smartgrid/" target="_blank">GE SmartGrid commercial</a> aired. There were all these YouTube videos popping up of people blowing on holographic wind turbines.</strong></p>
<p><strong>The commercial that really got me excited though was the <a href="http://www.youtube.com/watch?v=Kwke0LNardc" target="_blank">Coke Avatar commercial</a>. In that commercial, people in the city were sporadically being portrayed as their digital personas, avatars, gaming characters, etc. For me that spot did a great job showing how many of us already have these &#8216;alter egos&#8217; that live in cyberspace, and how the line between these worlds can sometimes be blurred. I remember watching that commercial and thinking that is exactly the type of experience I&#8217;d like to create with mobile AR. I want to overlap the virtual world into our every-day reality. Why can&#8217;t I bring my World of Warcraft or Second Life persona with me into the real world?</strong></p>
<p><strong>I am big on the notion of &#8220;Games and Goals.&#8221; I believe that games have the power to motivate people in a very powerful way. By challenging ourselves while playing a game we can climb mountains. Augmented Reality is the perfect platform to bring gaming into the real world. By mixing the virtual world with the physical world, this added layer of perception provides a very powerful experience for something like a role-playing game.</strong></p>
<p><strong>One of my earlier social-gaming projects was a website called Superdudes. This was a &#8220;Be Your Own Superhero&#8221; concept that celebrated and motivated kids to create superhero avatars/personas online, and we gave members all sorts of games, challenges, and rewards, some of which carried into the real world. The site recognized members for teamwork, creativity, volunteer work and things like that. So the Superdudes were often involved in charity events and benefits to help children. Everybody called each other by their superhero names, and the line between fantasy and reality was being blurred. This project really got me thinking about what happens when you take positive role-playing like this and mix it into the real world. I started to work on a plan for location-based activist missions for points and rewards, but never got to complete that. So I have some unfinished business here.</strong></p>
<p><strong>I think it would be fantastic to be able to show up to some type of fun event with friends, and everybody could see each other&#8217;s alter-ego personas standing before them. When you can turn the world into a playground, and use the power of gaming to make a positive impact on the planet&#8230; well, I don&#8217;t think there is anything better than that. These are the types of projects that drive me, and I think AR is the best platform to support these types of social gaming experiences.</strong></p>
<p><strong>Tish:</strong> Does Ogmento have any RPGs under development? I noticed in the Google Wave on RPGs someone has been working on doing something with the Dungeons&amp;Dragons API. I am interested in exploring the web of protocols underlying Wave as a transport mechanism for multi-person, mobile AR experiences (not requiring downloads) on an open, global, outdoor AR network. If not Wave, what do you see as the potential infrastructure and protocols we could harness for an open augmented reality network?</p>
<p><strong>Brian: Ogmento has a deep background in video games and we interact regularly with most of the major game publishers. As a company we are not so much developing our own RPGs right now, but rather exploring what mobile AR extensions make sense for existing brands. There are many limitations to location-based gaming, but a global AR network is exactly along the lines we are thinking. Lots of discussions are taking place on protocols, platforms, APIs, and there are numerous ways to approach this. We need to be able to use what&#8217;s available now and continue to refine and customize for AR&#8217;s specific needs and issues as we progress. </strong></p>
<p><strong>In general though, Ogmento is focused on what types of experiences can be had today and over the next couple of years. I still think we are several years out from a truly open augmented reality network. We are certainly looking at launching our own &#8220;Ogmented Network&#8221; which would support some fun treasure hunt type experiences, or add an entertainment layer on top of traditional outdoor marketing campaigns.</strong></p>
<p><strong>Tish:</strong> I don&#8217;t know whether you have read Thomas Wrobel&#8217;s ideas for an open augmented reality network that I just <a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">published here on UgoTrade</a>. The principles he talks about are very important for augmented reality to become a major part of our lives. Considering the difficulty open networks can pose for emerging business models, how can we fund the development of an open framework for augmented reality?</p>
<p>&#8220;<em>a future AR Network, I mean one as universal and as standard as the internet. One where people can connect from any number of devices, and without additional downloads, experience the majority of the content.<br />
Where people can just point their phone, webcam, or pair of AR glasses anywhere where a virtual object should be, and they will see it. The user experience is seamless, AR comes to them without them needing to &#8220;prepare&#8221; their device for it.&#8221;</em></p>
<p><strong>Brian: I think funding for these types of projects will definitely come from Venture Capital groups in the near future. It&#8217;s early in AR, but the VCs are watching and deciding which horses to bet on. Until that time, it&#8217;s about service work, and developing AR experiences for others with what is possible today. That work will help fund internal development of original AR products, and platform development.</strong></p>
<p><strong>Tish:</strong> How did you get started with Ogmento?</p>
<p><strong>Brian: My first conversation with Ori was actually about my interest in Location Based RPG concepts. We had a long conversation about the possibilities with AR, and it was clear that we shared similar interests, but were coming from different, complementary backgrounds. The idea of collaboration was exciting, so we just kept talking until the timing felt right. Now, with Ogmento we bring a unique blend of AR development experience with deep backgrounds in AR technology, animation, video games, entertainment, social media, etc. I think this is a powerful mix that will allow us to do some great things.</strong></p>
<p><strong>It&#8217;s still so early, and things are just getting started in AR. There are only so many webcam magic tricks you can enjoy before you are ready for something else. The location-based apps have the most potential in my opinion, which is why we are really focused on mobile AR. We have some board-game type projects, which do not instantly scream location-based gaming, but if you look at something like the ARhrrrr board game, you can see how much more compelling it can be when the game invites the player to be actively moving around during the experience.</strong></p>
<p><strong>Tish:</strong> I am interested in your perspective on how we can create the kind of AR experiences that really embody what has always been so exciting about AR &#8211; the tight alignment of graphics and media with real world objects and ultimately a rich immersive 3D experience. So I am going to hit you with a bunch of those &#8220;Is this really eyewear or vaporware?&#8221; questions. The real deal eyewear changes everything!</p>
<p>While eyewear is a big challenge technically and aesthetically, I am pretty sure that there are several outfits out there that can pull off the optics and projection. Will the entertainment industry get excited enough to put a major push into delivering the eyewear in short order, instead of the 5 to 10 year project that some people still think it is? Is the business development challenge perhaps bigger than the technical obstacles? What is your view on this?</p>
<p>And, perhaps, the eyewear is a clear example of a need for partnerships. For example, we have seen efforts from companies like <a href="http://www.vuzix.com/home/index.html" target="_blank">Vuzix</a> and <a href="http://www.lumus-optical.com/" target="_blank">Lumus</a>, and recently a Japanese Company, <a href="http://www.masunaga1905.jp/brand/teleglass/">Masunaga</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-97.png"><img class="alignnone size-medium wp-image-4386" title="Picture 97" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-97-300x80.png" alt="Picture 97" width="300" height="80" /></a></p>
<p>I have no reports from people who have tried the Masunaga eyewear yet. But current eyewear offerings available at a reasonable price point, limited by a small field of view and tethered, are not workable solutions for augmented reality experiences. But the problems are not insurmountable. What will facilitate the real deal? It seems that it is critical to start creating hardware relationships now. The industry is costly and slow moving and, as Robert Rice put it to me in a recent conversation, &#8220;once the software cat is out of the bag, it&#8217;s going to go wild and if the hardware isn&#8217;t there, it&#8217;s going to stutter.&#8221;</p>
<p>As Ori notes, some of the hardware companies like Intel and others don&#8217;t seem to be paying enough attention to AR. Ori points out they don&#8217;t see the demand yet. But in order to create an awesome AR experience and demand from a mass audience, don&#8217;t we need to work in conjunction with hardware designers?</p>
<p><strong>Brian: It&#8217;s fun to think about who will eventually deliver a great hardware solution for AR glasses. It will happen. It would be cool to see somebody like an Oakley or Nike partnered up with a company like Vuzix to deliver something people actually might wear in public. Perhaps a hardware manufacturer like Apple or Nokia will bring us something like the iSight or the NGaze down the line. I&#8217;d love to see a set of glasses designed by Ideo. Microsoft and Sony are already playing with technologies like Project Natal and the EyeToy, so I think it&#8217;s only a matter of time before they deliver an eyewear solution. I would even look to the toy companies to eventually make an investment here.</strong></p>
<p><strong>Gamers will be the early adopters, and in a few years we may start to see people running around in the park wearing glasses with headsets, but it will be acceptable because it&#8217;s clear they are using them for a game. It&#8217;s going to take a very sexy and stylish piece of hardware for everyday people to be willing to wear AR glasses in public while going about their everyday business. It&#8217;s like the recent cover of Wired magazine where Brad Pitt is wearing a mobile headset in his ear, and the editors point out that even he can&#8217;t pull that look off, so why do you think you can? When AR glasses come in designer frames, and you can&#8217;t tell them from non-AR glasses, to me that&#8217;s when things get really interesting from a mass-adoption perspective. Compare how many people were carrying around a mobile phone in the 80s to now. I think it will be the same thing with glasses.</strong></p>
<p><strong>I was in an AR pitch meeting the other week at a very significant media company, and brought up the point that today&#8217;s handheld Smartphones will eventually evolve into tomorrow&#8217;s Smartglasses. My comment was quickly shrugged off as a sort of sci-fi notion that was irrelevant to the business at hand. Probably true, but I think it is important to understand where digital media and entertainment is going, so you can adapt quickly, and evolve into those spaces more naturally. The more we see people walking around with their Smartphones in front of their face (like a camera), the sooner it will be that we make the jump to eyeglasses as a key hardware device for AR experiences.</strong></p>
<p><strong>At Ogmento, we definitely are working on AR experiences with the hardware and software available today. We will get some product out this year, and 2010 will be a banner year for markerless mobile AR in general. I think the entire AR community is looking forward to bringing this technology to the mainstream in the form of games, marketing campaigns, virtual docent apps, and much more. It might not be the full experience we are all dreaming about for some time, but we can see the path and the true potential, and it&#8217;s pretty spectacular.</strong></p>
<p><strong>You mention the tight alignment of graphics and media with real world objects. That is really our focus. A lot of well-deserved attention is going to the browser overlay &#8220;post-it&#8221; approach right now, which uses compass and GPS. We are focused on markerless natural feature tracking, so once you identify something that is AR enhanced in your environment, you can interact with that integrated experience. On an iPhone that can be as simple as using your touch screen to interact. When you are wearing glasses, it becomes more about visual tracking. There are lots of smart people thinking through these issues, many of whom you have interviewed. It is my hope that there are exciting collaborative efforts to be had in the coming months to get us all there together and faster.</strong></p>
<p><strong>Tish:</strong> Bruce touched on some of the hard problems that have to be solved for augmented reality &#8211; and he noted for instance security needs to be tackled in the early stages. Robert made a nice list, <em>&#8220;privacy, media persistence, spam, creating UI conventions, security, tagging and annotation standards, contextual search, intelligent agents, seamless integration and access of external sensors or data sources, telecom fragmentation, privilege and trust systems, and a variety of others.&#8221;</em> Will Ogmento be leading the way in solving some of these hard problems?</p>
<p>And won&#8217;t trying to solve these hard problems for networked AR in walled-garden scenarios, one company at a time, lead to a lot of wasted energy reinventing the wheel?</p>
<p><strong>Brian: These are all important issues, and again there are a lot of smart people thinking about solutions to these problems on a daily basis. Ogmento is interested in partnering with developers and supporting their efforts as a publisher of mobile AR experiences. While we intend to roll up our sleeves in these areas, we are currently more focused on taking AR mainstream with the hardware and software available today. As the industry evolves, so will Ogmento. As the opportunities evolve, our ability to make a greater impact tackling these issues will be realized.</strong></p>
<p><strong>Tish: </strong>Another area of development that could really kick AR into high gear might be creating augmented reality hotspots where we can deliver the kind of location accuracy/instrumentation necessary to create interesting AR experiences (partnership with Starbucks, perhaps?!). Augmented reality hotspots could deliver the kind of high quality AR experience that isn&#8217;t possible ubiquitously at the moment, and may be a real way to get people exploring the potential of AR now, rather than later?</p>
<p><strong>Brian: Agreed. I see a great opportunity here with this approach.</strong></p>
<p><strong>Tish:</strong> Although there are many obstacles to Green AR &#8211; the energy-hogging servers at the backend, for starters! Last week I had a conversation with Gavin Starks of <a href="http://www.amee.com/?page_id=289" target="_blank">AMEE</a>, <a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> and <a href="http://jimpurbrick.com/" target="_blank">Jim Purbrick</a> about how to work with AMEE and the technology available and encourage Green Tech AR development (<a href="http://blog.pachube.com/2009/06/pachube-augmented-reality-demo-with.html" target="_blank">see an early exploration of green tech AR from Pachube here</a>).</p>
<p>We came up with the idea of holding a competition, perhaps centered around a targeted instrumented space. But I would really love to hear your thoughts on the topic of Green Tech AR (the energy-hogging servers at the back end being the first cloud on the horizon!) Cool GreenTech AR imaginings, social gaming ideas, RPGs, not necessarily even tied to the immediately practical, would be like rain in a drought!</p>
<p><strong>Brian: I go back to &#8220;Games and Goals&#8221;&#8230; If you make environmental and other activist efforts fun and rewarding, more are likely to be motivated and participate. Can you imagine having a personal &#8220;carbon footprint stat&#8221; floating over yourself at all times? Or over your home or factory? How would that change your behavior? We all love stats. Look at how the Nike+ campaign has used technology and gaming to motivate people to run. I think there is a lot that can be done to make being green fun. It starts with the individual, and spreads from there. Keep me posted on that one!</strong></p>
<p><strong>Tish:</strong> I would also like to explore further the <a href="http://www.readwriteweb.com/archives/augmented_reality_human_interface_for_ambient_intelligence.php" target="_blank">RRW suggestion that ambient intelligence is both the Holy Grail of AR and possibly snake oil</a>:</p>
<p><em>&#8220;The holy grail of the mobile AR industry is to find a way to deliver the right information to a user before the user needs it, and without the user having to search for it. This holy grail is likely in a ditch somewhere beside a well-traveled road in the district of the semantic Web, ambient intelligence and the Internet of things. Be wary of any hyped-up invitation to invest in a company that claims to have gotten the opportunity right. What we&#8217;ve seen in the commercial industry to date is a rather complex version of a keyboard, mouse, and monitor.&#8221;</em></p>
<p>So Holy Grail, Snake Oil, or a ditch somewhere&#8230;?</p>
<p><strong>Brian: I instantly think of Minority Report, where Tom Cruise&#8217;s character is being bombarded with holographic ads personalized with his name and to his current situation. In the future, Spam is a nightmare, especially when it knows who you are. I think the key thing here is delivering &#8220;the right information&#8221;, and we still don&#8217;t have that down. I do see a day where we can truly customize what comes to us, how we want it, when we want it. My future vision of ambient intelligence is the ability to &#8220;turn everything off&#8221; if I want to&#8230; block out the stimuli and replace it with images of nature, or natural surroundings, etc. Where I live in Los Angeles, we have those digital billboards everywhere, so it&#8217;s like advertising overload wherever you look (hints of Blade Runner). I personally don&#8217;t mind them, but I know there is great debate on there being simply too many billboards everywhere. So AR would only add to the noise of life by adding yet another digital overlay of information, right?</strong></p>
<p><strong>Perhaps the holy grail is to use technology to filter things out. AR might become a solution to leading a simpler life, or a perfectly customized life if you want that. Ultimately the control needs to be with the individual. I guess I am talking about something like TiVo taken to the extreme.</strong></p>
<p><strong>Tish:</strong> And then that other biggy &#8211; augmented reality search! I am asking this next question of <a href="http://www.wikitude.org/" target="_blank">Wikitude</a> and <a href="http://sekaicamera.com/" target="_blank">Sekai Camera</a> too and now I must also ask <a href="http://www.acrossair.com/" target="_blank">Acrossair</a> and several others I guess! Obviously a huge area of opportunity in this broader landscape that uses location-awareness, barcode scanners, image recognition and augmented reality is to harness the collective intelligence &#8211; a whole new field of search. There is the beginning of a discussion on this <a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">in the comments here</a>.</p>
<p>What will it take, in your view, to become a leader in augmented reality search?</p>
<p><strong>Brian: I&#8217;m more of a content guy, so I tend to focus on things like UI, quality of creative, etc. From that perspective, I am looking forward to evolving beyond the &#8220;post-it&#8221; text overlay user-experience we see now in AR search. I was impressed with the TAT Augmented ID concept and hope we start seeing more smart design solutions like that emerging in the space. There are some great new design approaches coming out of the location-aware space that should be applied to AR search. I&#8217;ve been studying the heads-up display designs being used in video games, and re-watching movies like Iron Man for ideas. This is another example where Hollywood has painted a polished picture of what AR can and should look like, and the masses have already accepted these design approaches. So from that perspective, the leaders in search will be delivering sexy, smart and simple solutions. It&#8217;s all about the S&#8217;s.</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/08/30/games-goggles-and-going-hollywood-how-ar-is-changing-the-entertainment-landscape-talking-with-brian-selzer-ogmento/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>Mobile Augmented Reality and Mirror Worlds: Talking with Blair MacIntyre</title>
		<link>http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/</link>
		<comments>http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/#comments</comments>
		<pubDate>Fri, 12 Jun 2009 05:07:01 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[3D mirror world]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Android and augmented reality]]></category>
		<category><![CDATA[ARhrrrr]]></category>
		<category><![CDATA[Art of Defense]]></category>
		<category><![CDATA[augmented reality on the gphone]]></category>
		<category><![CDATA[augmented reality on the iphone]]></category>
		<category><![CDATA[augmented reality shooter games]]></category>
		<category><![CDATA[Aware Home Research]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bragfish]]></category>
		<category><![CDATA[Dark Star]]></category>
		<category><![CDATA[geolocation]]></category>
		<category><![CDATA[geotagging]]></category>
		<category><![CDATA[google earth]]></category>
		<category><![CDATA[handheld AR games]]></category>
		<category><![CDATA[handheld augmented reality]]></category>
		<category><![CDATA[Immersive augmented reality]]></category>
		<category><![CDATA[Information Landscapes]]></category>
		<category><![CDATA[instrumented homes]]></category>
		<category><![CDATA[instrumented world]]></category>
		<category><![CDATA[iphone 3Gs]]></category>
		<category><![CDATA[iphone games]]></category>
		<category><![CDATA[ISMAR]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[location aware applications]]></category>
		<category><![CDATA[minimally immersive augmented reality]]></category>
		<category><![CDATA[MMO of the real world]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[MS Virtual Earth]]></category>
		<category><![CDATA[NVidia Tegra devkits]]></category>
		<category><![CDATA[Open Sim]]></category>
		<category><![CDATA[OpenSim and Augmented Reality]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[outdoor tracking and markerless AR]]></category>
		<category><![CDATA[parallel mirror worlds]]></category>
		<category><![CDATA[persistent immersive mirror worlds]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[Sun's Wonderland]]></category>
		<category><![CDATA[Texas Instrument's OMAP3 devkits]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[Unity3D]]></category>
		<category><![CDATA[Unity3D and Augmented Reality]]></category>
		<category><![CDATA[virtual pets]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3691</guid>
		<description><![CDATA[Blair MacIntyre is one of the original pioneers of augmented reality and an extraordinary amount of creative work is coming out of his Augmented Environments Laboratory at Georgia Tech &#8211; see YouTube videos here. The screenshot below is from ARhrrrr, a very impressive augmented reality shooter game created at Georgia Tech Augmented Environments Lab and [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf.jpg"></a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf2.jpg"><img class="alignnone size-full wp-image-3732" title="arf2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf2.jpg" alt="arf2" width="259" height="239" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/droppedimage1.jpg"><img class="alignnone size-full wp-image-3725" title="droppedimage1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/droppedimage1.jpg" alt="droppedimage1" width="271" height="240" /></a></p>
<p><a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> is one of the original pioneers of augmented reality and an extraordinary amount of creative work is coming out of his <a href="http://www.cc.gatech.edu/ael/" target="_blank">Augmented Environments Laboratory</a> at Georgia Tech &#8211; see <a href="http://www.youtube.com/user/AELatGT" target="_blank">YouTube videos here</a>. The screenshot below is from <strong>ARhrrrr</strong>, a very impressive augmented reality shooter game created at the Georgia Tech <span class="description">Augmented Environments Lab </span>and <span class="description">Savannah College of Art and Design </span>(SCAD-Atlanta), and produced on the <strong>NVidia Tegra devkits</strong> &#8211; <a href="http://www.youtube.com/watch?v=cNu4CluFOcw" target="_blank">watch the demo here</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-63.png"><img class="alignnone size-medium wp-image-3799" title="picture-63" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-63-300x169.png" alt="picture-63" width="300" height="169" /></a></p>
<p>Blair has spent much of his career working on immersive augmented reality and more recently the integration of augmented reality with mirror worlds. Blair explains:</p>
<p><strong>&#8220;I am interested in the intersection of mobile devices &#8211; whether they are head mounts or handhelds &#8211; and parallel mirror worlds&#8230;I think that parallel mirror worlds are a direct manifestation of the intersection of the virtual world we now live in (the web) and geotagging. As more and more information is tied to place, and as more of our searching becomes place-based, we will want to do those searches about places we are not at. A 3D mirror world may provide one interface to that data. Want to plan your trip to London? Go there virtually and look around, see what is there (both physically and virtually), teleport between areas you want to learn about, and so on. More interestingly, talk to people who are there now, and retrieve your location-based notes when you are on your trip.&#8221;</strong></p>
<p>But, at a time when many augmented reality developers are focusing on AR apps for smart phones, including Blair (the picture on left opening this post is Blair&#8217;s augmented reality <a href="http://www.youtube.com/watch?v=_0bitKDKdg0&amp;feature=channel_page" target="_blank">iphone app ARf)</a>, I was interested in finding out from Blair what the state of play was for the real deal Rainbow&#8217;s End style AR, as well as the potential he sees in smart phones to mediate meaningful AR experiences.</p>
<p>There is an enormous amount of innovation in mapping our world; see my post, <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">&#8220;Location Becomes Oxygen at Where 2.0 and WhereCamp,</a>&#8221; and <a href="http://gamesalfresco.com/2009/05/26/where-2-0-the-world-is-mapped-now-use-it-to-augmented-our-reality/" target="_blank">Ori Inbar&#8217;s Where 2.0 conference roundup.</a> But as Ori notes, to move augmented reality forward:</p>
<p><strong>My point is not a shocker: all we need is to tap into this information and bring it, in context, into peopleâ€™s field of view.</strong></p>
<p>And this is what Blair MacIntyre&#8217;s work is all about.</p>
<h3>Talking With Blair MacIntyre</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-62.png"><img class="alignnone size-medium wp-image-3728" title="picture-62" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-62-300x257.png" alt="picture-62" width="300" height="257" /></a></p>
<p><strong>Tish Shute:</strong> There do seem to be broader implications to augmented reality today than when this term was first coined. I am interested to have your perspective on how augmented reality may go beyond some of our early definitions?</p>
<p><strong>Blair MacIntyre: I still think the original definition of the term is useful: media (typically graphics) tightly registered (aligned) with the physical world, in real time. Many people talk about many things that relate virtual worlds to places, spaces, objects and people. There is room for many of them, and they don&#8217;t all have to &#8220;be&#8221; augmented reality. I like using Milgram&#8217;s definition of Mixed Reality as everything from the physical world (at one end) to the virtual world at the other; it&#8217;s a spectrum, and augmented reality just sits at one point.</strong></p>
<p><strong>The reason I like the old definition is I believe there is something special about graphics that are tightly, rigidly aligned with the physical world. When things appear to stick to the world, and to an obviously identifiable location, people can start leveraging their natural perceptual, physical and social abilities and interact with the mixed world as they do the physical world. We&#8217;ve found this with the two studies we&#8217;ve done of tabletop AR games (<a href="http://www.augmentedenvironments.org/lab/research/handheld-ar/artofdefense/" target="_blank">Art of Defense</a> and <a href="http://www.youtube.com/watch?v=w3iBrj_zfTM&amp;feature=channel_page" target="_blank">Bragfish</a>); one key to those games is that the graphics were tightly aligned with identifiable landmarks in the physical world (the gameboard).</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/aod-sandbox-video-15.png"><img class="alignnone size-medium wp-image-3729" title="aod-sandbox-video-15" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/aod-sandbox-video-15-300x225.png" alt="aod-sandbox-video-15" width="300" height="225" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/imgp0782-2.jpg"><img class="alignnone size-medium wp-image-3782" title="imgp0782-2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/imgp0782-2-300x225.jpg" alt="imgp0782-2" width="300" height="225" /></a></p>
<p><em><a href="http://www.augmentedenvironments.org/lab/research/handheld-ar/artofdefense/" target="_blank">Art of Defense</a> (pic on left) <a href="http://www.youtube.com/watch?v=w3iBrj_zfTM&amp;feature=channel_page" target="_blank">Bragfish</a> (pic on right)<br />
</em></p>
<p><strong>Tish:</strong> I know that you are involved with <a id="b-c6" title="ISMAR 2009" href="http://www.ismar09.org/" target="_blank">ISMAR 2009</a> which is the key US augmented reality conference. What do you think will be the hot themes, applications, innovations at this year&#8217;s conference? Do you think this will be the year that AR really breaks out of eye candy into truly useful and sustained experiences?</p>
<p><strong>Blair: Unfortunately, I won&#8217;t be involved this year. I was supposed to be helping run the technical program, as well as the art/media program, but sickness in my family prevented me from having the time, so I am not helping this year.</strong></p>
<p><strong>First, I would not agree with the implication of the last question &#8212; I don&#8217;t think AR has just been eye candy up to now. I do agree that the &#8220;high profile&#8221; uses of it have largely been that, which is mostly because of the limits of the technology. I don&#8217;t think we&#8217;ll see huge changes in that regard by ISMAR this year. However, we will hopefully see a mixing of communities that hasn&#8217;t happened at ISMAR before, and I do believe that this year (independent of ISMAR) we will see more and more AR apps. Whether they go beyond eye candy is still a question. I&#8217;m hoping that some folks (including myself and other ISMAR folks!) will help push AR in new directions. But I also expect many folks new to ISMAR and AR to play a big role, because it is this new blood, especially those folks with real problems to solve, new art and game ideas, and a fresh perspective, that will open new doors.</strong></p>
<p><strong>Tish:</strong> You have been working on integrating augmented reality with virtual worlds. You mentioned that the way you use <a href="https://lg3d-wonderland.dev.java.net/" target="_blank">Sun&#8217;s Wonderland</a> is really about pulling the virtual world into the real world, i.e., Wonderland &#8220;is just a place to put data.&#8221; How is your use of the persistent virtual space different from what we have become accustomed to call virtual worlds?</p>
<p><strong>Blair: The approach we are taking in our project at Georgia Tech is to use the virtual world as the central hub of the information space, and allow the virtual world to be the element that enables distributed workers to collaborate more smoothly. This is work we are doing with Sun and Steelcase (and the NSF), and is an outgrowth of a project (the InSpace project) that&#8217;s been going on for a few years.</strong></p>
<p><strong>What we are trying to do is use mixed reality and ubicomp techniques to pull as much of the physical activity as possible into the virtual world, and then reflect that activity back out to the different participants as best suits their situation. So, folks in highly instrumented team rooms will collaborate in one way, and their activity will be reflected in the virtual world; remote participants (e.g., those at home, or in a cafe or hotel) may control their virtual presence in different ways, but the presence of all participants will be reflected back out to the other sides in analogous ways. We may see ghosts of participants at the interactive displays, or hear their voices in 3D space around us; everyone will hopefully be able to manipulate content on all displays and tell who is making those changes.</strong></p>
<p><strong>A secondary benefit, I hope, is that by putting the data in the virtual world and making that the place that gives you more powerful and flexible access to the data (e.g., by leveraging space and giving access to history), distributed teams will begin to have the virtual space become a place they go to work, bump into each other and have those casual contacts co-located workers take for granted.</strong></p>
<h3><strong>Creating the Information Landscape of the Future</strong></h3>
<p><strong>Tish: </strong>At the end of <a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">my interview with Ori Inbar</a> he said, in order to have a ubiquitous experience <em>&#8220;you&#8217;ll need to 3d map the world. Google earth like apps are going to help but it is not going to be sufficient. So let&#8217;s leverage people. Google became successful in part by making people work with them. Each time you create a link from your blog to my blog their search engines learn from it. So let&#8217;s find ways to make people create information that can be used for AR.&#8221;</em> What ways do you think people can create information that can be used for AR?</p>
<p><strong>Blair: I think the big part of that is the creation of models and environments, the necessary &#8220;baseline&#8221; for specifying experiences. Google and Microsoft are clearly working toward this; recent videos from Microsoft show them starting to move the photosynth work toward Virtual Earth. Similarly, I came across a page where people are finally starting to mine geotagged Flickr images to create models [see my post, <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">&#8220;Location Becomes Oxygen,&#8221;</a> and <a href="http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/" target="_blank">here</a> for more on the <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a> project from Flickr]. It&#8217;s that kind of thing that will be useful first; using the data we all create to enable modeling and (eventually) vision-based tracking in the real world.</strong></p>
<p><strong>After that, it&#8217;s a matter of time till more of what we &#8220;create&#8221; (e.g., Tweets and blog posts and so on) is geo-referenced; these will become the information landscape of the future, the kinds of things people think about when they read &#8220;Rainbow&#8217;s End&#8221;. The big problem will be filtering, searching and sorting. And, of course, safety and security.</strong></p>
<p><strong>Tish: </strong>You are working with <a href="http://unity3d.com/" target="_blank">Unity3D</a> to research the integration of mobile location based AR with persistent mirror world like spaces. What has attracted you to Unity? What is the difference between this and your Wonderland project? I know you mentioned you will be using head-mounted displays as part of this Unity project. What are your goals for this project?</p>
<p><strong>Blair:</strong> <strong>We started to use <a href="http://unity3d.com/" target="_blank">Unity3D</a> because it gave us what we wanted in a game engine. Most importantly, it&#8217;s very open and let us trivially expose AR technologies into the editor. Similarly, it can target the iPhone, so we can begin to work with it on that platform, too. The biggest problem with creating compelling experiences is content; and a show-stopper for creating content is not being able to get it into your engine. Unity has a nice content workflow.</strong></p>
<p><strong>Unity3D is a front end engine, for creating the game; Wonderland is both a front end and a backend. We are actually looking into using the Wonderland backend with Unity as well. Wonderland also has growing support for doing &#8220;real work&#8221; in a virtual world, which is key to our other projects.</strong></p>
<p><strong>Eventually, we&#8217;ll be using HMDs. The goal for the Unity3D project, initially, was to explore what you can do with an AR/VR mirror-world; this is a project we are working on with Alcatel-Lucent, and demo&#8217;d at CTIA this year. It&#8217;s continuing to grow, though, and now includes a number of our projects, including some work on mobile social AR and soon, some performance and experience design projects in the area of AR ARGs. It&#8217;s really quite interesting to imagine what you can do when you have an &#8220;MMO of the real world&#8221; (which we now have for part of campus) that supports both VR-style desktop access simultaneously with mobile AR access.</strong></p>
<p><strong>Tish: </strong>Have you taken another look at <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a> as a possible backend for augmented reality? Recently I talked to David Levine of IBM, and he is thinking about some possibilities to optimize OpenSim to dynamically load a large number of objects at once (i.e., how fast OpenSim can bulk load into an existing sim) and make it better suited to augmented reality/mirror world type projects.</p>
<p><strong>Blair: I haven&#8217;t looked at OpenSim recently. We will probably look at it this summer.</strong></p>
<p><strong>Tish:</strong> Why did you select Unity as a good client for augmented reality?</p>
<p><strong>Blair: Unity is a 3D game authoring environment, so at some level it is no different from using Ogre, if all the associated stuff were just as well done. It has integrated physics, scripting, debugging, etc. &#8211; you can write code in JavaScript or C# or whatever. It has a good content pipeline as well, and supports a range of platforms.</strong></p>
<p><strong>It has simple networking built in, so multiple Unity engines can talk to each other, but it is not a virtual world platform out of the box &#8211; there is no back end &#8230;</strong></p>
<p><strong>Tish: </strong>Someone described Unity to me as a great client waiting for a great backend? So what are you going to use as a back end?</p>
<p><strong>Blair: There is no real processing except in the client right now. We will eventually have to create a back end. We are thinking of using Darkstar, because someone on the Sun Wonderland community forums has already built a set of scripts connecting Unity to Darkstar.</strong></p>
<p><strong>But for us, we are not proposing right now to build a real product. This is research to demonstrate what you could do if you actually had the back end.</strong></p>
<p><strong>Tish:</strong> What are the most important aspects of the backend from your POV?</p>
<p><strong>Blair: We want to simulate a variety of the interesting aspects of the back end. So I very much care about notions of privacy and security and how these sorts of AR/VR Mirror Worlds would work in practice. But I care about those things as they impact user experience, not really about how we would actually implement them.</strong></p>
<p><strong>Tish:</strong> So looking at some of the big problems from the perspective of user experience? Are we going to go through the same growing pains that the web and VWs have seen? For example, will we have to type in passwords to get into everyone&#8217;s little worlds&#8230;</p>
<p><strong>Blair: Well you know the SciFi background to this, you&#8217;ve mentioned it in other posts on your blog. Because when you look at the Rainbow&#8217;s End model where you have security certificates flying around, that is in effect what cookies and so on are now. You can authenticate yourself once and then have those certificates hang around. So you can easily imagine how it could be done. But the big question is how does that change user experience. There are all kinds of things that start coming into play &#8211; like what happens if nearby people see different things &#8211; it goes on and on!</strong></p>
<p><strong>Tish:</strong> Sounds like this is very valuable research. It seems to me that there will be a lot of investment soon in putting the pieces together to do location based markerless AR, and it would be nice if we knew more about it from the user experience POV.</p>
<p>Isn&#8217;t it vital for a productive intersection between mobile AR and persistent mirror world spaces for us to have markerless AR? Aren&#8217;t we right at the beginning of people really saying yeah, markerless AR is doable now? But it seems to me not many people are researching or working on fully immersive AR and its integration with mirror worlds?</p>
<p><strong>Blair: I think some of the AR community is thinking about this. There are probably people who are doing stuff in some other non-technical communities. It wouldn&#8217;t surprise me to find out that there are people in the digital performance or Ars Electronica world who are thinking a little bit about these sorts of things &#8211; although not necessarily at the level of actually trying to build it, because they probably can&#8217;t right now, but experimenting with the precursors. My colleagues in digital media like to point out that this is often the purpose of digital art, to point out new directions and push the boundaries.</strong></p>
<p><strong>Obviously Science Fiction has explored the possibilities because that is what Rainbow&#8217;s End and the Matrix were all about.</strong></p>
<p><strong>Tish:</strong> and <a href="http://en.wikipedia.org/wiki/Denn%C5%8D_Coil" target="_blank">Denno Coil</a>&#8230;</p>
<p><strong>Blair: There has been some research &#8211; people like my adviser Steve Feiner up at Columbia, Mark Billinghurst in New Zealand, myself, and people at Graz University in Austria. But partly it has been so hard to do mobile AR up to now &#8211; so many people mock head worn displays and can&#8217;t get past current technology &#8211; you have had to be willing to ignore the bulky back packs and cables and batteries and so on. That is changing, which is good.</strong></p>
<p><strong>My current response to the anti-head-mounted display people is: if 5 years ago you told me that fabulously dressed people who care about their looks and wear stylish clothes would have big things hanging from their ears that blink bright blue light, so they could talk on the phone, many of us would have said you were crazy, because it would be ugly and so on. But there is an intersection of demonstrable need and benefit &#8211; Bluetooth headsets are really useful &#8211; and the sort of early gestalt feeling that grew up around them &#8211; that people who use them are so important that they always have to be in touch &#8211; so people accept them.</strong></p>
<p><strong>It will likely be a similar thing with head mounted displays. And I don&#8217;t know if it will be that people wear them so that they can read their mail while driving, god forbid. But it will be something. And when we get the 2nd generation of the wrap glasses that look more like sun glasses and are not bulky and so on, we will have the potential for them catching on, because you will look at them and you will think that the person is wearing them because they are doing x&#8230;</strong></p>
<p><strong>X might be surfing a virtual world or reading their email or keeping in touch, or being aware. It will happen. But they have to get unbulky enough, and there has to be more than one important application, not just watching TV.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/karmablair-fix.jpg"><img class="alignnone size-medium wp-image-3787" title="karmablair-fix" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/karmablair-fix-300x227.jpg" alt="karmablair-fix" width="300" height="227" /></a><br />
</strong></p>
<p><em>Picture above shows an outside view of the KARMA AR system, the knowledge based maintenance system Blair built in his first year of grad school (<strong>&#8220;first AR system Steve Feiner, Doree Seligmann, and I worked on&#8221;</strong>). Blair noted, &#8220;<strong>The Communications of the ACM paper on it (from 1993) is a pretty widely cited AR paper.&#8221;</strong></em></p>
<p><strong>Tish:</strong> I think the need for full on transparent, immersive, wraparound, Gucci stylish eyewear with a decent field of view is the elephant in the room in terms of realizing the full potential of augmented reality. There are a few new players in the field &#8211; <a href="http://www.sbglabs.com/" target="_blank">Digilens</a>, <a href="http://www.vuzix.com/home/index.html" target="_blank">Vuzix</a>, others? What is the progress in this area, and what do you hope for in terms of near term solutions?</p>
<p><strong>Blair: I agree with that sentiment. I think that, in the near term, there is a lot we can do with handhelds, as we&#8217;ve been doing in the lab. However, because it&#8217;s awkward and tiring to hold up a device, even a small one, for any length of time, handhelds will only be good for &#8220;focused&#8221; uses of AR &#8211; such as the table-top games we&#8217;ve been doing, or the constellation viewing app that I heard came out recently for the Android G1. I don&#8217;t even see something like Wikitude as that compelling (beyond the &#8220;gee whiz&#8221; factor) for a handheld form factor. Many proposed AR apps only really become compelling when users have constant awareness of them, and that requires a see-through head-worn display.</strong></p>
<p><strong>I&#8217;ve seen the mockups of the Vuzix ones; they seem pretty interesting, and are getting to where early adopters could use them (they will be cheap enough, and will hopefully be good enough). Microvision&#8217;s virtual retinal display is also promising; the contact lens displays will be the most interesting, if anyone can ever make them work. I don&#8217;t know of anything else out there.</strong></p>
<h3><strong>&#8220;it&#8217;s not really a killer app you care about, it is the killer existence that all of the technology and small applications taken together facilitate&#8221;</strong></h3>
<p><strong>Tish:</strong> While location based services are accepted now and people are understanding that it is something that opens up a new relationship to everything, we still haven&#8217;t found the experience that will get everyone holding up their mobile devices?</p>
<p><strong>Blair: Well that is actually the killer problem. Gregory Abowd is one of my colleagues who does ubiquitous computing research here at Tech. Way back when we started the Aware Home project (<a href="http://www.awarehome.gatech.edu/">Aware Home Research Institute at Georgia Tech</a>), when I first got here about ten years ago, there was always this question of what is the killer app. So Gregory commented in a meeting once that it&#8217;s not really a killer app you care about, it is the killer existence that all of the technology and small applications taken together facilitate. It is not that any one of these AR demos we see right now, whether it is seeing your photos in the world or whatever, is important. It&#8217;s that when taken together, there is enough of a benefit that you would use the whole environment.</strong></p>
<p><strong>In the original context we were talking about an instrumented home, but it is the same thing here with AR.</strong></p>
<p><strong>The problem with the mobile phone as an AR device is that problem of awareness. If I have a head mount on and I walk down the street and there is a bunch of probably-not-useful-but-potentially-useful information floating by me, that&#8217;s a good thing, because I may see something that is useful or makes me think of something else. But if I have to hold up my phone to see if something might be interesting nearby, I will never hold up my phone, because at any given time there is a high probability that there won&#8217;t be anything particularly important there. You might imagine you can get around this by using alerts or something like that, but then you overload whatever alert channel you use. For example, I forward maybe 5 or 6 people&#8217;s updates from Facebook to my phone &#8211; it started with my wife, a few friends, my brother &#8211; and the net result of that is I never get SMSs anymore, because when my phone buzzes, usually I ignore it because it is probably just somebody&#8217;s random Facebook update. So if we start overloading channels like that with &#8220;oh there might be something useful here in the real world, if you pick up the phone and look through it you will see it &#8230; and I will buzz you,&#8221; people just start ignoring the buzzes.</strong></p>
<p><strong>So it is a very hard problem if you think about the kinds of applications that people always imagine with global AR &#8212; names over people&#8217;s heads and other random information floating in the world &#8212; until you have a head mount and all that information is around you all the time. That is when those sorts of applications will actually happen.</strong></p>
<p><strong>Tish:</strong> <a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> notes: <strong>&#8220;AR is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.&#8221;</strong> (see my interview with Robert, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it &#8216;OMG Finally&#8217; for Augmented Reality?&#8221;</a>). And I think the iPhone experience has laid the foundation for the increasing desire to experience the network wherever we are &#8211; and not be stuck behind a PC. We cannot perhaps do all we want to do yet. But even in the range of things we can do now, we are not even sure exactly what it is we want to do where yet, are we?</p>
<h3><strong>&#8220;imagine your iphone Facebook client supports AR and that all data on Facebook might be georeferenced &#8211; pictures, status updates etc&#8230;&#8230;.&#8221;</strong></h3>
<p><strong>Blair: Yes, that is a huge problem. I have been lucky to be able to teach two fun classes this year that let the students and me start to explore some of the potential that handheld AR might bring. Last fall I taught a handheld AR game design class &#8212; coordinated with a class at the Savannah College of Art and Design&#8217;s Atlanta campus &#8212; and we had the students build a sequence of prototype handheld AR games, which was a lot of fun. This spring I taught a mixed reality/augmented reality design class with Jay Bolter (a professor in the School of Literature, Communication, and Culture here at GT). Jay and I have been teaching this class off and on for about 9 years; this semester we decided to say to the students &#8220;imagine your iPhone Facebook client supports AR and that all data on Facebook might be georeferenced &#8211; pictures, status updates etc&#8230;&#8221; and have them do projects aimed at such an environment.</strong></p>
<p><strong>Tish: </strong>Not many of our favorite social media today have much sense of location, do they? But Flickr is utilizing the geo-referenced pictures to create vernacular maps&#8230; The Shape of Alpha.</p>
<p><strong>Blair: Yes, that is because lots of cameras put geolocation data into the EXIF data, so they can extract it&#8230;</strong></p>
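<p><em>[For readers curious what &#8220;extracting it&#8221; looks like: cameras store a GPS fix in EXIF as degree/minute/second rationals plus hemisphere letters, which a site like Flickr converts to decimal coordinates. A minimal editorial sketch, with invented sample values:]</em></p>

```python
# Editorial sketch of EXIF GPS decoding; the sample fix below is invented.
from fractions import Fraction

def dms_to_decimal(dms, ref):
    """Convert EXIF-style ((num, den), ...) degree/minute/second rationals
    plus a hemisphere reference ('N'/'S'/'E'/'W') to signed decimal degrees."""
    degrees, minutes, seconds = (float(Fraction(*pair)) for pair in dms)
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    return -decimal if ref in ("S", "W") else decimal

# Example: a fix near the Georgia Tech campus, stored as EXIF rationals.
lat = dms_to_decimal(((33, 1), (46, 1), (337, 10)), "N")
lon = dms_to_decimal(((84, 1), (23, 1), (474, 10)), "W")
print(round(lat, 4), round(lon, 4))
```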
<p><strong>Some mobile Twitter clients, like the one I use on my iPhone, will let you add your location. But in general Facebook and other sites don&#8217;t have any notion of location. But if you look at all the things people do in Facebook, such as sending gifts and other games, it&#8217;s easy to imagine what these might look like with geo-reference data. So, the high level project for the class is the groups have to design experiences people might have using mobile AR Facebook. We told them to assume Facebook as it stands now, but add geolocation and AR to the client. The class boiled down to &#8220;What would you imagine people doing?&#8221; So it has been kind of fun.</strong></p>
<p><strong>And we are using Unity for the class too &#8211; the same infrastructure I am working on in my research linking mobile AR to persistent immersive mirror world type spaces &#8211; and we are having the students mock up what a mobile AR Facebook experience would be like.</strong></p>
<p><strong>Tish: </strong>Can you describe some of the ideas your class came up with that you think have potential? I know Ori mentioned that from the games class he liked <a href="http://www.youtube.com/watch?v=Rqcp8hngdBw&amp;feature=channel_page" target="_blank">Candy Wars.</a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/candywars-6.png"><img class="alignnone size-medium wp-image-3693" title="candywars-6" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/candywars-6-300x225.png" alt="candywars-6" width="300" height="225" /></a></p>
<p><em>Candy Wars</em></p>
<p><strong>Blair: In the end, they had a nice range of projects in the Spring class. One created tag clouds out of status messages over spaces, others looked at analogies to virtual pets and gift giving out in the world, one looked at leveraging geolocation to help with crowd-sourced cultural translation, and three groups did straight-up social games.</strong></p>
<p><strong>[See <a href="http://www.youtube.com/user/AELatGT" target="_blank">all of the projects from the handheld AR games class on YouTube here</a>]</strong></p>
<h3><strong>iPhone, Android, NVidia Tegra devkits, or Texas Instruments&#8217; OMAP3 devkits?</strong></h3>
<p><strong>Tish:</strong> Is anyone in the class working on Android?</p>
<p><strong>Blair: Nobody is using Android, because no-one in the class has the phones. We have AT&amp;T microcell infrastructure on campus. Some AT&amp;T people joke that we are better off than them, because we have a head office on campus so we can build in-network applications which people even at AT&amp;T research can&#8217;t do. But because we have this infrastructure on campus, and a great relationship with AT&amp;T and the other sponsors, we have the ability to provision our own phones without having to pay for long-term contracts, which is vital for research and teaching.</strong></p>
<p><strong>Tish:</strong> So does this lock you into the iphone?</p>
<p><strong>Blair: Well the G1 is of course not AT&amp;T but it is GSM so we could probably buy them unlocked and put them on our AT&amp;T network. But the students I work with are much more interested in the iphone right now.</strong></p>
<p><strong>Tish:</strong> Is that because the iphone has the market?</p>
<p><strong>Blair: For me the reason I am not interested in the G1 is because you can&#8217;t do AR on it &#8211; there is <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> and a few other apps, but it is all hideously slow. Worse, because the Java code isn&#8217;t compiled like it would be on the desktop, you can&#8217;t do computer vision with it, so you can&#8217;t do anything particularly interesting on the current commercial G1s. We could probably take the NVidia Tegra devkits or Texas Instruments&#8217; OMAP3 devkits (both are chipsets for next gen phones &#8212; high end graphics, fast processing), and install Android on those, and we may actually do that yet. But it seems like a lot of work right now, for not much benefit.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/pastedgraphic.jpg"><img class="alignnone size-medium wp-image-3730" title="pastedgraphic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/pastedgraphic-300x166.jpg" alt="pastedgraphic" width="300" height="166" /></a><br />
</strong></p>
<p><em>Augmented Reality shooter game <strong>ARrrrr</strong> from Georgia Tech and SCAD Atlanta on the <strong>NVidia Tegra devkits</strong> &#8211; <a href="http://www.youtube.com/watch?v=cNu4CluFOcw" target="_blank">watch the demo on YouTube here</a>.</em></p>
<p><strong>Tish: </strong>Everyone seems very excited about iPhone OS 3.0 and the addition of a compass. A compass is pretty essential for AR, right?</p>
<p><strong>Blair: It is necessary if you can&#8217;t do other forms of outdoor tracking, but the problem is that the compass on the G1 isn&#8217;t very good, relatively speaking, and the iPhone one probably won&#8217;t be much better. It does not have very high accuracy, nor is it very fast (compared to, say, the high end 3D orientation sensors we use, from Intersense and MotionNode). As far as I can tell, it doesn&#8217;t even give full 3D orientation. I don&#8217;t have a G1 (although I have pre-ordered an iPhone 3GS), but people have told me it only has absolute 2D orientation, so you can only line things up if you are careful. You can&#8217;t look around arbitrarily&#8230;</strong></p>
<p><strong>Tish: </strong>You can&#8217;t sweep your phone?</p>
<p><strong>Blair: You can look left and right, but if it doesn&#8217;t have full 3D orientation, you can&#8217;t go up and down. You can&#8217;t tilt it in weird directions. It is not fast in the way that you would want for looking around quickly. So it is a nice demo. And it is good for what the Android people use it for, which is to let you browse Google Street View by looking around, which is actually really useful.</strong></p>
<p><strong>I think there are lots of really useful things you can do with such a compass.</strong></p>
<p><strong>And, it is clear that a compass is a necessary feature if we want to do AR. It&#8217;s just not sufficient.</strong></p>
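<p><em>[A toy illustration of the &#8220;2D only&#8221; limitation Blair describes: from just the horizontal components of the magnetic field you can recover one yaw angle, but nothing about pitch or roll. This is an editorial sketch; the field names are invented conventions, not any phone API.]</em></p>

```python
# Toy 2D compass: two horizontal field readings yield a single heading angle.
import math

def heading_degrees(mag_x, mag_y):
    """Heading clockwise from magnetic north, given horizontal magnetic-field
    readings (x pointing north, y pointing east in this toy convention)."""
    return math.degrees(math.atan2(mag_y, mag_x)) % 360.0

print(heading_degrees(1.0, 0.0))  # facing north -> 0.0
print(heading_degrees(0.0, 1.0))  # facing east
# Tilt (pitch/roll) never appears: 2D readings cannot give full 3D orientation.
```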
<h3><strong>Outdoor Tracking and Markerless AR</strong></h3>
<p><strong>Tish:</strong> Isn&#8217;t it essential for markerless AR? I guess not &#8211; I just saw this post about <a href="http://artimes.rouli.net/2009/04/srengine-in-english.html" target="_blank">SREngine on Augmented Times</a>!</p>
<p>This wasn&#8217;t up when we spoke so perhaps you have some comments about what it brings to the table?</p>
<p><strong>Blair: Maybe. The folks at Nokia are working on outdoor tracking; they demoed some stuff at ISMAR last year on the N95 handsets that is all image based. We are trying to do some work with them; one of my students is working on it. And probably Microsoft is going to do more on this as well; they had a video up showing that they are also working on vision based techniques. If you give the phone the equivalent of those panoramic Google Street View images (assuming they are up-to-date) and you are standing at the right place, you don&#8217;t really need a compass; you can figure out which way you are looking by looking at the camera video. Ulrich Neumann (USC) did some work on tracking from panoramas years ago; I don&#8217;t know what ever became of it.</strong></p>
<p><strong>Regarding SREngine, that project appears to be a pretty simple first step, but is probably just a demo at this point, and limitations like &#8220;only works on static scenes&#8221; and &#8220;doesn&#8217;t work for simple scenes&#8221; mean it&#8217;s probably extracting some simple features out of the image and then matching those to some database. The trick would be getting this to work on a large scale, where the world changes a lot. It&#8217;s not obvious how to get there.</strong></p>
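<p><em>[Blair&#8217;s guess, extract simple features and match them to a database, can be sketched in a few lines. This is an editorial toy, not SREngine&#8217;s actual method; real systems use descriptors like SIFT rather than the made-up 2D vectors and scene names here.]</em></p>

```python
# Toy scene recognition: match query feature vectors against a scene database
# by summed nearest-descriptor distance. All descriptors below are invented.

def match_scene(query_features, database):
    """Return the database scene whose feature set is closest to the query."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best_scene, best_score = None, float("inf")
    for scene, features in database.items():
        # Charge each query descriptor the distance to its nearest
        # descriptor in this scene, and keep the lowest-scoring scene.
        score = sum(min(dist(q, f) for f in features) for q in query_features)
        if score < best_score:
            best_scene, best_score = scene, score
    return best_scene

db = {
    "storefront": [(0.9, 0.1), (0.8, 0.2)],
    "fountain":   [(0.1, 0.9), (0.2, 0.7)],
}
print(match_scene([(0.85, 0.15)], db))  # -> storefront
```

The large-scale problem Blair points at shows up immediately: a static database like `db` goes stale as soon as the real scene changes.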
<p><strong>Tish:</strong> So forget RFID for AR&#8230;</p>
<p><strong>Blair: RFID is not really useful.</strong></p>
<p><strong>Tish:</strong> not at all?</p>
<p><strong>Blair: RFID is useful for telling you what things are near you. The problem is it doesn&#8217;t give you any directional information &#8211; it just tells you you&#8217;re in range of the tag. So you can use it to tell you when you are near a certain product, for example. So it is useful in terms of telling you what thing you are near, and then you can load up a vision system or something else that will recognize that thing.</strong></p>
<p><strong>In that way, it could be useful as a good starting point.</strong></p>
<p><strong>Similarly for computer vision, the compass and the GPS are very useful for giving you an initial guess at what you may be looking at, which can then speed up the rest of the process. But computer vision by itself will not be a complete solution, because if I have my panoramic Google Street View (or whatever image database I use for tracking) and you are standing between me and the building, I am not going to see what I expect to see, I am going to see you.</strong></p>
<p><strong>So I think it is all going to be part of one big package &#8211; you are going to see accelerometers, digital compasses, and GPS combined with computer vision and other sensors, and then maybe we are going to start getting the things that we have always dreamed about. I like to show <a href="http://mi.eng.cam.ac.uk/~gr281/outdoortracking.html" target="_blank">this video</a> from the U. of Cambridge (work done by Gerhard Reitmayr and Tom Drummond) of an outdoor tracking demo, because it gives a sense of what will be possible. Techniques like this will be an ingredient in the future of things. It becomes especially interesting when you have these highly detailed mirror worlds. It is sort of one of those chicken and egg problems: if I have a highly detailed model of the world, then techniques like theirs can be used to track. But that mirror world needs to be accurate or you can&#8217;t use it for tracking, and why would you create the mirror world if you couldn&#8217;t track?</strong></p>
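<p><em>[The &#8220;one big package&#8221; idea can be illustrated with the simplest fusion scheme, a complementary filter: trust a fast but drifting gyro over short spans and an absolute but noisy compass over long spans. An editorial sketch with invented readings, not code from any of the projects discussed:]</em></p>

```python
# Toy complementary filter fusing a gyro rate with absolute compass headings.

def fuse_heading(prev_heading, gyro_rate, compass_heading, dt, alpha=0.95):
    """Blend an integrated gyro estimate with an absolute compass reading.
    alpha near 1.0 means 'mostly gyro'; the compass slowly corrects drift."""
    gyro_estimate = (prev_heading + gyro_rate * dt) % 360.0
    # A real filter must also handle the 0/360 wraparound when blending.
    return (alpha * gyro_estimate + (1 - alpha) * compass_heading) % 360.0

heading = 0.0
# Five 0.2 s steps of a 10 deg/s right turn, nudged by noisy compass fixes.
for compass in (12.0, 22.0, 31.0, 42.0, 52.0):
    heading = fuse_heading(heading, 10.0, compass, dt=0.2)
print(round(heading, 1))
```

In the full package Blair describes, a vision-based tracker would play the compass role here, supplying the slow absolute corrections.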
<p><strong>Tish:</strong> I noticed in your comment on <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;my interview with Robert Rice&#8221;</a> that you said you thought it was important not to collapse AR into ubicomp &#8211; &#8220;forgetting what originally inspired us about AR&#8221; is, I think if I remember correctly, the suggestion you made. But aren&#8217;t ubiquitous computing and AR basically coextensive?</p>
<p>The <a href="http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/" target="_blank">vision of ubicomp Mike Kuniavsky describes</a> &#8211; &#8220;sharing data through open APIs and the promise of embedded information processing and networking distributed through the environment&#8221; &#8211; demonstrates how much can be done with very little processing power. In its most immersive form augmented reality requires a lot of processing power. I think we have all become very conscious about trying to minimize levels of consumption. Can you explain why you think people shouldn&#8217;t see AR as the Hummer (energy squandering indulgence) of Ubiquitous Computing?</p>
<p><strong>Blair: I think there will be a hierarchy of interfaces. You are going to have the rich Rainbow&#8217;s End like experience &#8211; you are totally submerged in a mixed environment &#8211; if you have a head mount on (it&#8217;s not going to be Rainbow&#8217;s End for a while), but if you don&#8217;t have the headmount on, that information might be available to you other ways, whether it is a 3D overlay using your handheld or just a 2D mashup with Google Maps. But there will be some circumstances and people who will want to get the compelling experience you can only get with the headmount.</strong></p>
<p><strong>Tish:</strong> Are you doing any research on how all these hierarchies of experiences will fit together &#8211; what aspects of this are you looking at?</p>
<p><strong>Blair: The thing that really needs to happen is you need to have this backend architecture that allows you to collect your data from different sources and aggregate it, much like the web. Right now Google Earth and Microsoft&#8217;s Virtual Earth are much like the old pre-web hypertext systems that were all centralized. And what we really need is to have the web equivalent, where Georgia Tech can publish their building models and IBM can publish their building models and their campus models, and your client can aggregate them, as opposed to Microsoft or IBM putting their building models into Google Earth and then somehow you get them out with Google&#8217;s Google Earth browser. That&#8217;s just not going to fly.</strong></p>
<p>Tish: So what does it take, then, to get us to this backend architecture? Because I&#8217;m in total agreement.</p>
<p><strong>Blair: The nice thing about augmented reality versus virtual reality is that you don&#8217;t need everything modeled. You can do interesting AR apps like <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> with absolutely no world model.</strong></p>
<p><strong>Tish:</strong> So that means we can start with what we have &#8211; utilize cloud services without a full-blown backend architecture?</p>
<p><strong>Blair: It may very well be that Google Earth and MS Virtual Earth act as a portal, because people go and build models and link them with KML, and they can see them in Google Earth but they can also download the KMLs through some other channel. So it may be that those things end up being something that feeds some of this along. Then people start seeing a benefit to having these highly accurate models, so then you start integrating the Microsoft Photosynth stuff and leveraging photographs to generate models.</strong></p>
<p><strong>It&#8217;s just that keeping up with it and building it in real time is the challenge. A lot of folks think it will be tourist applications, where there are models of Times Square and Central Park and Notre Dame and the big square around that area in Paris and along the river and so on, or models of Italian and Greek historic sites &#8211; the virtual Rome. As those things start happening and people start building onto the edges, and when Microsoft Photosynth and similar technologies become more pervasive, you can start building models of the world in a semi-automated way from photographs and more structured, intentional drive-bys and so on. So I think it&#8217;ll just sort of happen &#8211; as long as there&#8217;s a way to have the equivalent of Mosaic for AR, the early web browser that allows you to aggregate all these things. It&#8217;s not going to be a Wikitude. It&#8217;s not going to be this thing that lets you get a certain kind of data from a specific source; rather, it&#8217;s the browser that allows you to link through into these data sources.</strong></p>
<p><strong>So it&#8217;s that end that interests me. It&#8217;s questions like &#8220;what is the user experience?&#8221; &#8211; how do we create an interface that allows us to layer all these different kinds of information together such that I can use it for all my things? I imagine that I open up my future iPhone and I look through it. The background of the iPhone, my screen, is just the camera, and it&#8217;s always AR.</strong></p>
<p><strong>I want the camera on my phone to always be on, so it&#8217;s not just that when I hold it a certain way it switches to camera mode, but literally it&#8217;s always in video mode so whenever there&#8217;s an AR thing it&#8217;s just there in the background.</strong></p>
<p><strong>When we can do that, I can have little alerts, so when I have my phone open I can look around and see them independent of the buttons and things that I&#8217;m tapping and pushing to use the phone. That&#8217;ll be a really different kind of experience.</strong></p>
<p><strong>Of course, it is not known yet whether the next gen iPhone will have an open video API. And of course, the current camera is pretty low quality, so why would they give it an open API until they put in a better camera? I am not expecting anything one way or the other until the 3GS comes out and people start using it.</strong></p>
<p><strong>But there are many things about the iPhone 3.0 OS that are hugely important &#8211; like the discovery API that allows people to play games with other people nearby &#8211; that don&#8217;t have much to do with AR.</strong></p>
<p><strong>Tish:</strong> You have an iPhone AR virtual pet application, ARf.</p>
<p><a href="http://www.macrumors.com/2009/04/08/video-in-and-magnetometers-could-introduce-interesting-iphone-app-possibilites/" target="_blank">Macrumors wrote it up</a> and suggested that the next gen iPhone will have a compass and an open video API. What are your plans for ARf?</p>
<p><strong>Blair: ARf is just a demo right now. I know what we&#8217;d like to do with it, but it would require tons of work; imagine what it would take to do a multiplayer, social version of Nintendogs? It&#8217;s not clear what we&#8217;d really learn by doing that, but there are lots of other game ideas we have that we want to explore.</strong></p>
<p><strong>Tish:</strong> I think it was on Twitter where Tim O&#8217;Reilly said, &#8220;saying everything must have a RFID tag is like saying we can&#8217;t recognize each other unless we wear name tags. Look at what&#8217;s happening with speech recognition, image recognition et al. and tell me you really think we need embedded metadata.&#8221; What would you say to that?</p>
<p><strong>Blair: I think that whatever extra data is there will be used. So if we put machine-readable labels on some objects, then they&#8217;ll be used if they make the identification and tracking problem easier. But it&#8217;s pretty clear that people are already working on tracking and so on.</strong></p>
<p><strong>A lot of these mobile AR apps are clearly putting ideas in people&#8217;s minds about things that won&#8217;t really be doable in the near future &#8211; like being able to look down the aisle of the store and have it recognize all of the products. Given the distances and complexity of the scene, the number of pixels devoted to each of those objects, and so on, you just can&#8217;t recognize things in that context. But if I&#8217;m standing in front of a small set of objects, or looking at one thing, or I&#8217;m standing in front of a building &#8211; or if I&#8217;m in the store, imagine an enhanced location API that can tell me within a few feet where I am, and then combine that with some use of the discovery API that allows the store to tell your device you&#8217;re in the toothpaste section. Now you only have to look for different brands of toothpaste. So now you can recognize the big letters &#8220;Crest&#8221; or whatever. It&#8217;s all about constraining the problem.</strong></p>
<p><strong>That&#8217;s why I like that particular piece of Drummond&#8217;s work, the tracking web site I mentioned above. The general tracking problem of looking around and recognizing objects and tracking is still impossible. But if I know roughly what direction I&#8217;m looking in and I have a good estimate of my position, and I have models of what I should be seeing when I look in that direction, then it becomes a tractable problem. And so it&#8217;s not that a compass and a GPS are 100% necessary. But if you have them it certainly makes things possible that you wouldn&#8217;t otherwise be able to do.</strong></p>
<p><strong>Imagine, for example, if there&#8217;s a new version of GPS. I just noticed that some of the new satellites going up have this new L5 channel. There are the L1 &amp; L2 signals that the military and civilian receivers use, and they added this civilian L5 signal, which should make GPS more accurate. I haven&#8217;t found anything online that says how much more accurate.</strong></p>
<p><strong>But someday, hopefully, all GPS will get to be the quality of survey-grade GPS. Right now, if you get an RTK GPS from one of these companies that make survey-grade GPS systems, they give you position estimates in the range of two centimeters, updated 10 to 20 times a second. When you have that kind of positional accuracy, combined with the kind of orientational accuracy you get from the orientation sensors we use in the lab from Intersense and MotionNode, everything is easier because you&#8217;ve pretty much got absolute position. You put that into a phone, and now when I look up it&#8217;s still not perfectly aligned, because there will still be errors (especially in orientation, since the compasses are affected by metal and other magnetic noise). But it does mean that if you and I are standing 5 feet apart from each other and look at each other, I can pretty much put a little smiley face above your head. Whereas now, with GPS, if I look at you and we&#8217;re 5 feet apart, our GPSes might think we&#8217;re on opposite sides of each other, because they&#8217;re only accurate to two to five meters.</strong></p>
<p><strong>And that&#8217;s depending on the time of day and weather!</strong></p>
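<p>Blair&#8217;s numbers are easy to sanity-check with a little trigonometry (a hypothetical sketch, not code from his lab): if each GPS fix can be off by up to <em>e</em> meters, the relative position of two people can be off by up to 2<em>e</em> in any direction, so the worst-case bearing error at separation <em>d</em> is asin(2<em>e</em>/<em>d</em>) &#8211; or anything at all once 2<em>e</em> exceeds <em>d</em>:</p>

```java
// Back-of-the-envelope sketch (hypothetical, not from the interview):
// how far off can the "smiley face above your head" land, given GPS error?
public class GpsBearingError {

    /**
     * Worst-case bearing error, in degrees, when annotating something
     * separationM meters away, if each GPS fix may be off by up to errM
     * meters. The relative-position error can reach 2*errM in any
     * direction; the worst case is the error vector tangent to the
     * line of sight, giving asin(2*errM / separationM).
     */
    public static double worstCaseBearingErrorDeg(double separationM, double errM) {
        double lateral = 2.0 * errM;        // combined worst-case offset
        if (lateral >= separationM) {
            return 180.0;                   // the label can land anywhere
        }
        return Math.toDegrees(Math.asin(lateral / separationM));
    }

    public static void main(String[] args) {
        // Two people standing ~5 feet (1.5 m) apart:
        double consumer = worstCaseBearingErrorDeg(1.5, 3.0);   // 2-5 m class GPS
        double rtk      = worstCaseBearingErrorDeg(1.5, 0.02);  // 2 cm RTK GPS
        System.out.printf("consumer GPS: up to %.0f degrees off%n", consumer);
        System.out.printf("RTK GPS:      up to %.2f degrees off%n", rtk);
    }
}
```

<p>With 2&#8211;5 meter consumer GPS and people 5 feet apart, the error swamps the separation, so the fixes really can land on opposite sides of each other; with 2 cm RTK the overlay stays within a couple of degrees.</p>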
<p><strong>Putting RFID tags everywhere is easy; the problem is the readers &#8211; they currently require lots of power and they have a limited range. Sprinkling RFID tags everywhere is fine, but you have to be able to activate those tags and read back the signal. In certain contexts it works.</strong></p>
<p><strong>Tish:</strong> And one final question! What do you think can be done about standards for AR? Is there a meaningful discussion going on yet? Thomas Wrobel left this comment on my blog recently, and I was wondering what your position was on some of the ideas he raises.</p>
<p>Wrobel wrote, <em>&#8220;The AR has to come to the users; they can&#8217;t keep needing to download unique bits of software for every bit of content! We need an AR browsing standard that lets users log into and out of channels (like IRC) and toggle them as layers on their visual view (like Photoshop). Channels need to be public or private, hosted online (making them shared spaces) or offline (private spaces). They need to be able to be both open (chat channel) or closed (city map channel) as needed. Created by anyone anywhere. Really, IRC itself provides a great starting point. Most data doesn&#8217;t need to be persistent, after all. I look forward to seeing the world through new eyes. I only hope I will be toggling layers rather than alt+tabbing and only seeing one &#8220;reality addition&#8221; at a time.&#8221;<br />
</em></p>
<p><strong>Blair: I agree with him, in principle. But I&#8217;m not sure there&#8217;s a point yet. It can&#8217;t hurt to try, of course, from a research perspective, and I&#8217;m interested in the experience such an infrastructure would enable (as we&#8217;ve talked about already).</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>&#8220;Do Well By Doing Good:&#8221; Talking Experience and Design in a Mobile World with Nathan Freitas and David Oliver</title>
		<link>http://www.ugotrade.com/2009/04/04/do-well-by-doing-good-talking-experience-and-design-in-a-mobile-world-with-nathan-freitas-and-david-oliver/</link>
		<comments>http://www.ugotrade.com/2009/04/04/do-well-by-doing-good-talking-experience-and-design-in-a-mobile-world-with-nathan-freitas-and-david-oliver/#comments</comments>
		<pubDate>Sat, 04 Apr 2009 06:05:18 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Metarati]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Phones in Africa]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[albany's king geek]]></category>
		<category><![CDATA[andrew hoppin]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[android APIs]]></category>
		<category><![CDATA[android market place]]></category>
		<category><![CDATA[android on HTC]]></category>
		<category><![CDATA[Bre Pettis]]></category>
		<category><![CDATA[Coovents]]></category>
		<category><![CDATA[crowd sourced]]></category>
		<category><![CDATA[david oliver]]></category>
		<category><![CDATA[geo report android]]></category>
		<category><![CDATA[geotagging]]></category>
		<category><![CDATA[government 2.0]]></category>
		<category><![CDATA[greporter]]></category>
		<category><![CDATA[information age volunteerism]]></category>
		<category><![CDATA[inkscape]]></category>
		<category><![CDATA[julian Bleeker]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[MeetMoi]]></category>
		<category><![CDATA[Mobile design]]></category>
		<category><![CDATA[mobile user experience design]]></category>
		<category><![CDATA[mobile voter]]></category>
		<category><![CDATA[nathan freitas]]></category>
		<category><![CDATA[NYC Resistor]]></category>
		<category><![CDATA[oliver coady]]></category>
		<category><![CDATA[Oliver+Coady]]></category>
		<category><![CDATA[open intents]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Peek]]></category>
		<category><![CDATA[tech president]]></category>
		<category><![CDATA[the extraordinaries]]></category>
		<category><![CDATA[Thingiverse]]></category>
		<category><![CDATA[viaplace]]></category>
		<category><![CDATA[Volunteerism in the information age]]></category>
		<category><![CDATA[widget based commerce]]></category>
		<category><![CDATA[xtify]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3356</guid>
		<description><![CDATA[Nathan Freitas holding a Peek with Oliver+Coady partner David Oliver talking to fans at New York Tech Meetup &#8211; Mobile Meets Social Volunteerism and participation in public life seem to come naturally to Nathan Freitas. Nathan is one of the leading innovators/developers in NYC in mobile strategy/design (for more on his Android development read on). [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/nathafreitaswithpeek.jpg"><img class="alignnone size-medium wp-image-3357" title="nathafreitaswithpeek" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/nathafreitaswithpeek-300x199.jpg" alt="nathafreitaswithpeek" width="300" height="199" /></a></p>
<p><em>Nathan Freitas holding a <a href="http://www.getpeek.com/indexb.html" target="_blank">Peek</a> with <a href="http://olivercoady.com/" target="_blank">Oliver+Coady</a> partner David Oliver talking to fans at <a href="http://www.meetup.com/ny-tech/calendar/9466657/" target="_blank">New York Tech Meetup &#8211; Mobile Meets Social</a><br />
</em><br />
Volunteerism and participation in public life seem to come naturally to <a id="chzc" title="Nathan Freitas" href="http://openideals.com/" target="_blank">Nathan Freitas</a>. Nathan is one of the leading innovators/developers in NYC in mobile strategy/design (for more on his Android development, read on). And he is much in demand as a speaker who shows others how to realize their mobile experience and design dreams (for upcoming speaking engagements, see Nathan&#8217;s blog). Nathan has also spent much of the last ten years working on new ways for causes and non-profits to benefit from technology.</p>
<p>Most recently <a id="plcq" title="Nathan has started working part time for the NY Senate under, &quot;Albany's King Geek,&quot;" href="http://www.observer.com/2009/media/albany%E2%80%99s-king-geek" target="_blank">Nathan has started working part time for the NY Senate under, &#8220;Albany&#8217;s King Geek,&#8221;</a> the new CIO Andrew Hoppin:</p>
<p><strong>&#8220;The CIO team is organizing training sessions for senators and their staff on social networking platforms and how to pay attention to online feedback. Last week, they hired mobile specialist <span class="il">Nathan</span> <span class="il">Freitas</span> to create new phone applications that will allow citizens to get government news on the go.&#8221; </strong></p>
<p>Also, Nathan is currently a supporting engineer on <a href="http://www.theextraordinaries.org/" target="_blank">The Extraordinaries</a>, a smartphone application that explores territory &#8220;beyond the flattening tendency of online relationships&#8221; (see <a id="i6qw" title="this list from Andy Oram" href="http://www.praxagora.com/andyo/professional/government_participation_question.html" target="_blank">this list from Andy Oram</a> of the Questions on Government participation). <a href="http://www.theextraordinaries.org/" target="_blank">The Extraordinaries</a> is Ben Rigby and Jacob Colker&#8217;s prize-winning project &#8211; &#8220;a smartphone application that delivers volunteer opportunities on-demand.&#8221;</p>
<p>Ben&#8217;s post, <a title="Information Age Volunteerism - Open Sourced! Crowdsourced!" href="http://techpresident.com/blog-entry/information-age-volunteerism-open-sourced-crowdsourced" target="_blank">Information Age Volunteerism &#8211; Open Sourced! Crowdsourced!</a>, and the extensive comments give a detailed analysis and critique of this brilliant and creative new approach to volunteerism in the information age.</p>
<p>Nathan, in my view, is a great example of how to &#8220;do well by doing good.&#8221; And, I am particularly excited by the work Nathan and his partner in <a id="nwp6" title="Oliver+Coady" href="http://olivercoady.com/">Oliver+Coady,</a> David Oliver, are doing on Android, e.g., Nathan&#8217;s new <a id="jjed" title="gReporter - opensource, geotagging, media capture report client" href="http://openideals.com/greporter/" target="_blank">gReporter &#8211; opensource, geotagging, media capture report client</a> (you can <a id="ycbi" title="download the source here" href="http://github.com/natdefreitas/georeport-android/tree/master">download the source here</a>).</p>
<p>I first met Nathan when I interviewed him about <a id="kx4_" title="Cruxy" href="http://openideals.com/2009/03/11/cruxy/">Cruxy</a> in 2007 (see my post, <a href="http://www.ugotrade.com/2007/05/24/the-mixed-reality-metarati-at-destroy-tv-merging-art-commerce-politics-and-play/" target="_blank">The Mixed Reality Metarati and &#8220;Destroy TV:&#8221; Merging Art, Technology, Politics and Play</a>). Nathan recently announced that <a id="v9nm" title="&quot;the fat lady has just uploaded her last song,&quot;" href="http://openideals.com/2009/03/11/cruxy/">&#8220;the fat lady has just uploaded her last song.&#8221;</a> Cruxy was an innovative distributed music venture Nathan started with Jon Oakes; as Nathan explains, it &#8220;never really broke through in the way we hoped.&#8221; Nevertheless, Cruxy seems to have been a fertile garden for ideas that are coming of age in Oliver+Coady&#8217;s current mobile experience endeavors. The world, Nathan notes, &#8220;including Apple and iTunes, has shifted to embrace some of the ideals we have always had &#8211; open formats, more ways to distribute and promote online, more avenues for niche content to be discovered and heard.&#8221; Cruxy&#8217;s technology platform, built by the incomparable Will Meyer:<br />
<strong><br />
&#8220;was a great success in my mind, being one of the first to fully embrace Amazon&#8217;s cloud and provide a widget-based commerce system that actually worked!&#8221;</strong></p>
<p>Nathan has a new company, Oliver+Coady. But Nathan told me that he feels he is over his &#8220;startup phase.&#8221;</p>
<p><strong>Nathan Freitas:</strong> I am just tired of the term &#8220;startup.&#8221; I&#8217;m more interested in being defined as a person than as a member of a corporation. Also, I am more interested in the idea of cooperatives, and have been working on this (<a id="un1g" title="see here for more on the New York Creative Cooperative" href="http://scratch.openideals.com/index.php/New_York_Creative_Cooperative" target="_blank">see here for more on the New York Creative Cooperative</a>).</p>
<p><strong>Tish Shute:</strong> You do a high percentage of non profit work. Are you still managing to keep the home fires burning in the economic downturn?</p>
<p><strong>Nathan Freitas:</strong> There is definitely profit to be made in non-profits, because even if you only get paid half of what you get for corporate work, it is worth it in terms of fulfillment, ego, respect, and general contribution back to the planet. However, I&#8217;ve also been investing time &amp; energy w/o pay into thinking about how causes can benefit from technology for over ten years. So it&#8217;s not just something you decide to do one day and suddenly are successful at.</p>
<p><strong>Tish Shute:</strong> What are some of the highlights of your non-profit work recently?<br />
<strong><br />
Nathan</strong>: Well, <a id="nywz" title="The Extraordinaries" href="http://www.theextraordinaries.org/about.html" target="_blank">The Extraordinaries</a> project is definitely a highlight. It is focused on a whole new approach to volunteering, and winning first prize in the non-profit tech category at the <a href="http://wemedia.com/miami09/" target="_blank">WeMedia Conference</a> was a great validation of the work. I am just a supporting engineer on the effort, which was founded by my good friend Ben Rigby (a longtime non-profit tech guy as well) and Jacob Colker.</p>
<p>Ben wrote this excellent book on mobile tech and organizing, <a id="lrfb" title="Mobilizing Generation 2.0" href="http://www.amazon.com/Mobilizing-Generation-2-0-Practical-Technologies/dp/0470227443" target="_blank">Mobilizing Generation 2.0</a>. He&#8217;s done a ton of mobile work with youth voters via his non-profit, <a id="u5yr" title="Mobile Voter" href="http://mobilevoter.org/about.html" target="_blank">Mobile Voter</a>.</p>
<p>The Extraordinaries is really taking all of our joint experience and putting it into a whole new system that is meant to go beyond generic email blasts that just ask you to &#8220;send a fax&#8221; or &#8220;send a link.&#8221; It gives people specific tasks they can accomplish on their phone, or in their local area using their phone.</p>
<p><strong>Tish: </strong>Did you do Twitter Vote Report with Ben too?</p>
<p><strong>Nathan:</strong> Oh, no, <a id="rkbs" title="Twitter Vote Report" href="http://twittervotereport.com/" target="_blank">Twitter Vote Report</a> was with a different group of folks&#8230;mostly east coast-based, organized by the <a id="z91u" title="TechPresident.com blog" href="http://techpresident.com/" target="_blank">TechPresident.com blog</a>. But Ben and I worked on SMS efforts for the 2004 election. We sent 40,000 messages out to SEIU labor members and MoveOn members&#8230; really the first time SMS was used in a wide-scale manner to help get out the vote on election day.</p>
<p><strong>Tish:</strong> Do you have a new mobilization project planned?</p>
<p><strong>Nathan:</strong> It&#8217;s all about The Extraordinaries right now. We&#8217;ve got a big launch coming in June, and are working actively to add more causes that can benefit from volunteers, and organizations that have volunteers but don&#8217;t know what to do with them.</p>
<p><strong>Tish:</strong> I was just looking at <a id="mg55" title="your post on Peek" href="http://openideals.com/?s=peek&amp;x=0&amp;y=0" target="_blank">your post on Peek</a> too.</p>
<p><strong>Nathan:</strong> Yeah&#8230; fortunately that is a completely &#8220;for profit&#8221; gig. But I like the company a lot, and think their spirit of providing access to email at a very low cost plays well with the non-profit world.</p>
<p><strong>Tish:</strong> So it isn&#8217;t just iphone apps that are paying the bills?</p>
<p><strong>Nathan:</strong> Nope. iPhone is just an aspect. Everyone is so obsessed with it and how to strike it rich quick, but in the greater scheme of things, there is a huge ecosystem of mobility out there for you to find a niche in, if you are looking.</p>
<p><strong>Tish:</strong> Are you able to monetize your work on Android yet?</p>
<p><strong>Nathan:</strong> Here and there&#8230; releasing some for-pay apps soon, and also including &#8220;free&#8221; Android ports in some high-profile iPhone apps we hope to have out soon. Some successful iPhone app developers are looking for people to port their apps to Android, as well.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/georeporter.jpg"><img class="alignnone size-medium wp-image-3358" title="georeporter" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/georeporter-145x300.jpg" alt="georeporter" width="145" height="300" /></a></p>
<p><a id="jjed" title="gReporter - opensource, geotagging, media capture report client" href="http://openideals.com/greporter/" target="_blank">gReporter &#8211; opensource, geotagging, media capture report client</a></p>
<p><strong>Tish: </strong>So what are your hopes for Android development in general and your gReporter app in particular?</p>
<p><strong>Nathan:</strong> I think Android represents right now what Linux on desktops did in &#8217;99 or &#8217;00, though as we all know, cycles of technology seem to speed up. There is huge interest in it at the academic level, and there is also a genuine interest in its use by non-profit/development agencies working around the globe.</p>
<p>You have to jump through hoops to get an unlocked, open iPhone w/o a contract. Android provides an alternative to this that acts more like a true platform, and not just a consumer product.</p>
<p><strong>Tish:</strong> At the moment the Android Market is only for free apps, right?</p>
<p><strong>Nathan:</strong> No, it now supports paid apps. I just bought one today for $2.99.</p>
<p><strong>Tish:</strong> What did you buy?</p>
<p><strong>Nathan:</strong> An app that allows me to turn my G1 phone into a WiFi hotspot, sharing my 3G connection with anyone who connects.</p>
<p><strong>Tish:</strong> So what are the most important aspects of Android in your view?</p>
<p><strong>Nathan:</strong> There are two sites that help demonstrate what is really going on with Android that makes it significant:</p>
<p>1) <a id="jr_o" title="Open Intents" href="http://www.openintents.org/en/intentstable" target="_blank">Open Intents</a> &#8211; this is the ecosystem of developers, all creating services and apps that interoperate, share data, and generally build a very rich Microsoft-style platform &#8211; except all of these are open source and built by lots of small developers, not one big corporation.</p>
<p>2) <a id="zdqw" title="Android on HTC" href="http://www.androidonhtc.com/" target="_blank">Android on HTC</a> &#8211; this is the home for all the efforts to port Android to pre-existing HTC/XDA mobile phone hardware. You can see the status of ports here: http://wiki.xda-developers.com/index.php?pagename=Android_devices. Imagine taking an old Windows Mobile HTC phone and popping in an SD card that reformats it into a brand new Android phone! For much of Asia, India, and Africa, there is huge interest in this.</p>
<p><strong>Tish:</strong> Nice! You mentioned earlier that you are thinking of doing an SDK for the Android sensor APIs?</p>
<p><strong>Nathan: </strong>That would be part of the geo report app&#8230; expanding it to capture all sensing data and report that when you submit your text, photo or audio report. Right now it just detects your lat and lon, but there&#8217;s no reason it couldn&#8217;t also check your compass, altitude, and whatever other data the device might offer.</p>
<p><strong>Tish</strong>: So what will your geo report do now?</p>
<p><strong>Nathan:</strong> It allows you to submit a text, photo or audio report &#8211; tagged with geo coordinates, a timestamp, and basic user info (name, email, home location, etc.) &#8211; to whatever server it is configured to use. It is the latest release of the code used for the TwitterVoteReport and InaugurationReport efforts.</p>
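<p>The actual wire format lives in the gReporter source; purely as an illustration, the kind of report Nathan describes &#8211; content plus geo coordinates, a timestamp, and basic user info &#8211; might be assembled like this (field names here are hypothetical, not gReporter&#8217;s):</p>

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: field names are hypothetical, not gReporter's
// actual wire format (see the github repository for the real client code).
public class GeoReportSketch {

    /** Assemble a report: content plus geo coordinates, timestamp, user info. */
    public static Map<String, String> buildReport(String text, double lat, double lon,
                                                  long timestampMs, String name, String email) {
        Map<String, String> report = new LinkedHashMap<String, String>();
        report.put("text", text);                  // or a photo/audio attachment reference
        report.put("lat", String.valueOf(lat));
        report.put("lon", String.valueOf(lon));
        report.put("timestamp", String.valueOf(timestampMs));
        report.put("name", name);
        report.put("email", email);
        return report;  // would be POSTed to whatever server the client is configured to use
    }
}
```

<p>Expanding the app to capture more sensing data, as Nathan suggests, would just mean adding more fields (compass heading, altitude, and so on) to the same report.</p>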
<p>There is also just a lot to learn from, or reuse in, the code itself, which is available at: http://github.com/natdefreitas/georeport-android</p>
<p>Lots of little lessons learned, packaged up into a functioning application.</p>
<p><strong>Tish:</strong> How many sensor APIs does android have?</p>
<p><strong>Nathan</strong>: http://developer.android.com/reference/android/hardware/SensorManager.html</p>
<p>int SENSOR_ACCELEROMETER &#8211; a constant describing an accelerometer.<br />
int SENSOR_ALL &#8211; a constant that includes all sensors.<br />
int SENSOR_DELAY_FASTEST &#8211; get sensor data as fast as possible.<br />
int SENSOR_DELAY_GAME &#8211; rate suitable for games.<br />
int SENSOR_DELAY_NORMAL &#8211; rate (default) suitable for screen orientation changes.<br />
int SENSOR_DELAY_UI &#8211; rate suitable for the user interface.<br />
int SENSOR_LIGHT &#8211; a constant describing an ambient light sensor; only the first value is defined for this sensor, and it contains the ambient light measure in lux.<br />
int SENSOR_MAGNETIC_FIELD &#8211; a constant describing a magnetic sensor; see SensorListener for more details.<br />
int SENSOR_MAX &#8211; largest sensor ID.<br />
int SENSOR_MIN &#8211; smallest sensor ID.<br />
int SENSOR_ORIENTATION &#8211; a constant describing an orientation sensor.<br />
int SENSOR_ORIENTATION_RAW &#8211; a constant describing an orientation sensor.<br />
int SENSOR_PROXIMITY &#8211; a constant describing a proximity sensor; only the first value is defined for this sensor, and it contains the distance between the sensor and the object in meters (m).<br />
int SENSOR_STATUS_ACCURACY_HIGH &#8211; this sensor is reporting data with maximum accuracy.<br />
int SENSOR_STATUS_ACCURACY_LOW &#8211; this sensor is reporting data with low accuracy; calibration with the environment is needed.<br />
int SENSOR_STATUS_ACCURACY_MEDIUM &#8211; this sensor is reporting data with an average level of accuracy; calibration with the environment may improve the readings.<br />
int SENSOR_STATUS_UNRELIABLE &#8211; the values returned by this sensor cannot be trusted; calibration is needed or the environment doesn&#8217;t allow readings.<br />
int SENSOR_TEMPERATURE &#8211; a constant describing a temperature sensor; only the first value is defined for this sensor, and it contains the ambient temperature in degrees centigrade.<br />
int SENSOR_TRICORDER &#8211; a constant describing a Tricorder; when this sensor is available and enabled, the device can be used as a fully functional Tricorder.<br />
float STANDARD_GRAVITY<br />
with a few easter eggs as well:<br />
GRAVITY_DEATH_STAR_I<br />
SENSOR_TRICORDER<br />
 <img src="http://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_wink.gif" alt=";)" class="wp-smiley" /> </p>
<p><strong>Nathan</strong>: They are all in the API; however, there isn&#8217;t hardware to support all of them yet&#8230; for instance, TEMPERATURE is not yet supported, nor is LIGHT.<br />
<strong><br />
Tish:</strong> And errr&#8230; what is GRAVITY_DEATH_STAR_I?</p>
<p><strong>Nathan: </strong>It is a value representing the fictional gravity on the Death Star from Star Wars &#8211; geek humour.<br />
<strong><br />
Tish: </strong>That makes me think of <a id="t8:v" title="this great essay by Julian Bleeker, Design Fiction: A Short Essay on Design Science, Fact and Fiction" href="http://www.nearfuturelaboratory.com/2009/03/17/design-fiction-a-short-essay-on-design-science-fact-and-fiction/" target="_blank">this great essay by Julian Bleeker, Design Fiction: A Short Essay on Design Science, Fact and Fiction</a>:</p>
<p><strong>&#8220;When you trace the knots that link science, fact and fiction you see the fascinating crosstalk between and amongst ideas and their materialization. In the tracing you see the simultaneous knowledge-making activities, speculating and pondering and realizing that things are made only by force of the imagination. In the midst of the tangle, one begins to see that fact and fiction are productively indistinguishable.&#8221;</strong><em><br />
</em><br />
The picture below shows Nathan playing his dream ukulele &#8211; designed using the free, open-source <a href="http://www.inkscape.org/">Inkscape</a> vector drawing tool (see his <a href="http://www.thingiverse.com/thing:299">open-source Ukulele plans here</a>).<br />
See <a id="dqj2" title="Nathan's blog for the whole story" href="http://openideals.com/2009/03/27/open-source-ukulele-proto-uno-lazzzzored-ftw/" target="_blank">Nathan&#8217;s blog for the whole story</a> of how the Flying V Rockin&#8217; Ukulele design he posted to <a href="http://thingiverse.com/">Thingiverse</a> a few weeks ago, after being inspired by <a href="http://twitter.com/bre">Bre Pettis&#8217;</a> talk at ROFLThing, materialized at <a href="http://nycresistor.com/">NYC Resistor</a>, an &#8220;amazing workshop laboratory in Brooklyn where they let anyone come over and hang out, to learn how to make, build and fabricate pretty much anything. They also have a <a href="http://www.nycresistor.com/laser/">laser</a> (aka &#8220;LAAAZZZOOOR&#8221;) which you can think of as an automagic thing cutter-outer!&#8221;</p>
<p>So this&#8230;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/lazoorukele.jpg"><img class="alignnone size-medium wp-image-3359" title="lazoorukele" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/lazoorukele-300x164.jpg" alt="lazoorukele" width="300" height="164" /></a></p>
<p>became this &#8230;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/nathanfreitasplayingukele.jpg"><img class="alignnone size-full wp-image-3360" title="nathanfreitasplayingukele" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/nathanfreitasplayingukele.jpg" alt="nathanfreitasplayingukele" width="240" height="180" /></a></p>
<p>Nathan and David presented <a id="oofs" title="Coovents" href="http://www.coovents.com/" target="_blank">Coovents</a> at NYTM &#8211; Mobile Meets Social. They had a large group of questioners surrounding them (see picture below). I talked to David after the presentation.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/new-yorktechmeetup.jpg"><img class="alignnone size-medium wp-image-3361" title="new-yorktechmeetup" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/new-yorktechmeetup-300x199.jpg" alt="new-yorktechmeetup" width="300" height="199" /></a></p>
<p>David Oliver was a software architect, user experience designer and product manager in the areas of mobile/wireless and electronic payment at IBM for over a decade. Most recently, he led the effort to productize a mobile client for IBM&#8217;s Lotus Connections enterprise social networking suite. As a software architect, David was often technical lead for IBM&#8217;s business partner relationships with mobile device manufacturers. Prior to IBM, David was co-founder of the Internet&#8217;s first &#8220;micropayments&#8221; company, Clickshare.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/david-oliver.jpg"><img class="alignnone size-medium wp-image-3362" title="david-oliver" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/david-oliver-227x300.jpg" alt="david-oliver" width="227" height="300" /></a></p>
<h3>Talking with David Oliver</h3>
<p><strong>Tish Shute: </strong>How are smartphones causing us to rethink what networked online relationships are all about?</p>
<p><strong>David Oliver: </strong>You know, these [mobile] devices are&#8230; for a long time we tried to pitch that we&#8217;re going to treat them like they&#8217;re PCs, or they&#8217;re just like anything else. But they&#8217;re really not. It may be the same coding style, but the way you think about using them is entirely different. And the way you think about your program: if you use HTML, Java and that kind of stuff, yes, it&#8217;s the same code type, but the way you think about it is entirely different. And to me these little devices make what you said [<em><strong>relationships</strong></em> <em><strong>inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.</strong></em>] a lot more possible than a PC, because with a PC you almost have to sit in front of it, and it controls you. But the device is so little, and there&#8217;s almost no user interface by comparison. You&#8217;ve got to be very smart about how you build something so that it&#8217;s almost invisible. And of course that&#8217;s the beauty of the iPhone, Apple will tell you. The idea of ubiquitous computing &#8211; ubiquitous what? Am I really computing? I don&#8217;t feel like I&#8217;m computing. I feel like I&#8217;m interacting or something.</p>
<p>I think Twitter is very cool. The real way it&#8217;s cool is that there&#8217;s no required client. You can access Twitter any way you want. You can imagine other ways to use it; TweetDeck happens to be a nice one for now. What I like about Twitter is, if you give it a tiny bit of thought, the Twitter network is complete white noise, just like the internet itself. If you put a probe on the internet it&#8217;s all white noise, all unordered packets. It makes no sense. So it&#8217;s cool that Twitter is at the level of little bitty conversations, but collectively it&#8217;s all white noise. Totally meaningless white noise. There are some neat things going on, but I think we&#8217;ve barely seen the first of what you can do with Twitter.</p>
<p>The way I see it, it&#8217;s like instant messaging where you don&#8217;t message a person, you message the network, and there are listeners. Normally, in the old world of IM like AOL IM, I would say &#8220;Tish, let&#8217;s talk&#8221; and I kind of grab you. Then it&#8217;s a narrow pipe, you to me. You can add a few people in and make a little group, and that makes a bit of a closed network. But with Twitter you just talk into the air: if I were standing over there and you had a Twitter client here, we could have the same interview, because I would be watching you &#8211; oh, I see Tish&#8217;s question. I&#8217;d be over there talking and you&#8217;d be picking me up over here. It&#8217;s like you&#8217;re talking into white noise, like at this bar. You choose to hear me; this guy is not choosing to hear me right now.</p>
<p><strong>Tish Shute:</strong> So what does Android bring to the party?</p>
<p><strong>David Oliver:</strong> They have the notion that you have a telephone platform that&#8217;s open, and that everybody can use. And it&#8217;s got a variety of sensor data &#8211; not just location but also accelerometer and compass and more. So in theory you can almost broadcast that data. It&#8217;s connected to a network. There are easy, open APIs to get at that data. But the question is who are you going to broadcast it to, or who are you sending it to? What are they going to do with it? How are you going to control it, and make sure people don&#8217;t misuse it? As you heard with the services tonight, there&#8217;s a central kind of service necessary to filter and rebroadcast that stuff back out to places that need it, or can use it, or you want to have use it. I think the mobile device is only one piece of this. Nat and I always talk about how we do mobile applications, but a portion of it is on the server, coordinating with the people or the group or the central resource that brings all this data together.</p>
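<p>For readers curious what David&#8217;s &#8220;easy, open APIs to get at that data&#8221; look like in practice, here is a minimal sketch. The <code>SensorManager</code>, <code>Sensor</code>, and <code>SensorEventListener</code> names in the comments are the standard Android framework classes; the <code>AccelSmoother</code> class and its <code>lowPass</code> helper are our own illustrative names, showing the kind of smoothing an app might apply to raw accelerometer readings before deciding what, if anything, to broadcast.</p>

```java
// A minimal sketch (assumption: Android-style 3-axis accelerometer input).
// On Android you would obtain readings via the framework sensor API:
//
//   SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
//   Sensor accel = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
//   sm.registerListener(listener, accel, SensorManager.SENSOR_DELAY_NORMAL);
//
// and onSensorChanged(SensorEvent e) would feed e.values into a filter
// like the one below.
public class AccelSmoother {

    // Exponential low-pass filter: smooths noisy sensor readings.
    // alpha in (0,1]; higher alpha tracks the raw signal more closely.
    public static float[] lowPass(float[] raw, float[] smoothed, float alpha) {
        float[] out = new float[raw.length];
        for (int i = 0; i < raw.length; i++) {
            out[i] = smoothed[i] + alpha * (raw[i] - smoothed[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        float[] smoothed = {0f, 0f, 9.8f};          // previous estimate (device at rest)
        float[] raw = {2f, 0f, 9.8f};               // new, noisier reading
        float[] out = lowPass(raw, smoothed, 0.5f); // move halfway toward the new value
        System.out.println(out[0] + " " + out[1] + " " + out[2]); // prints: 1.0 0.0 9.8
    }
}
```

<p>The point David raises &#8211; who receives the data, and who filters it &#8211; lives entirely outside this snippet: the API makes the readings trivially available, and everything after that is policy.</p>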
<p><strong>Tish Shute: </strong>There seem to be a lot of new location-based services &#8211; platforms to aggregate location-based data &#8211; being developed (e.g. <a id="lm5o" title="xtify" href="http://www.xtify.com/" target="_blank">xtify</a> and <a id="algg" title="viaplace" href="http://www.viaplace.com/" target="_blank">viaplace</a>). What do you think about the direction this development is going in?</p>
<p><strong>David Oliver:</strong> It&#8217;s not conventional wisdom, but it&#8217;s one of these things where when a crowd of people does something &#8211; and that means people themselves are the service providers &#8211; when they all get together, the net effect is greater than the individual effect would be. Pooling together makes more sense than doing it individually. It&#8217;s a little bit like an advanced version of having to have a password for every single site and managing your passwords. Location is the same way. If you had to give every single website that you enjoyed your location data, or tell them how to get it, what a huge pain. So they&#8217;re offering a way to do that in a more general sense. There are humongous privacy issues, though. Just like passwords: would you really trust a place that held all your passwords centrally?</p>
<p>Even with the most basic level of calling &#8211; now that you can call from anywhere &#8211; largely people are getting into a mode where their mobile phone is them. It&#8217;s always with them. That&#8217;s how you reach me. Forget the home phone and the work phone; it&#8217;s just a mobile phone. You have an address attached to you, an address I can reach you at that&#8217;s location independent. So there&#8217;s some beauty in that, and it&#8217;s very freeing. It makes your location unimportant: you can call me anywhere, text me anywhere, message me anywhere. You can be anonymous. My son told me something recently. &#8220;I love going to New York City because I can just walk around and nobody knows me. I&#8217;m completely anonymous. That&#8217;s the coolest thing,&#8221; he says. At one level that is a good thing, and a lot of good things can happen that way. But this new thing is sort of the flip side, where everybody knows your location. And we haven&#8217;t figured out if that&#8217;s a good thing yet. But we&#8217;re in the throes of that whole changeover happening. And we&#8217;ll see. There&#8217;ll be some misuse. I&#8217;m not an advertising guy, so the fact that everything&#8217;s got to be ad supported makes it potentially very creepy and very dangerous. So we&#8217;ll see how that evolves.</p>
<p>Is there any model where you can go &#8220;Oh, this is just like &#8216;S&#8217;&#8221;? I don&#8217;t see where that&#8217;s possible. It&#8217;s a new world, where you&#8217;re exposed all the time, potentially. And how do you figure out, either as an individual or a larger group, society or whatever, when that works and when that doesn&#8217;t? And you know there are going to be some missteps, probably. But the tangibility creates some of these interesting opportunities; there are just some amazing things that could happen, really, really good things. But we&#8217;re not going to get there in one step.</p>
<p>One of the things that was really a killer for privacy, and in some ways a killer for the internet, was the dot-com bust. Prior to the bust, there were web sites you&#8217;d given your name and email, and they said &#8220;we promise to preserve this privacy.&#8221; But as soon as those companies went bankrupt, their email list was gold. It was value. And a bankruptcy judge, in a court in Delaware, created a legal basis to sell that data. Those things that were formerly private were no longer private &#8211; &#8220;no no no, that&#8217;s got value. I&#8217;m going to sell it so the shareholders get their money.&#8221; So all those lists of user names that web sites had promised were private became public information. That was one of the biggest blows to privacy in the history of the internet. That&#8217;s going to happen again and again. Like if <a href="http://www.meetmoi.com/welcome" target="_blank">MeetMoi</a> goes out of business, the likelihood is all your shit&#8217;s going to get sold. I&#8217;m sorry, it&#8217;s all going to be sold. It&#8217;s all a big joke. And that&#8217;s why central services are horrid, and I don&#8217;t like anything about a central service.</p>
<p>There are some pragmatic things about the way routing on networks actually works, and the fact that the internet has gotten very centralized itself. The core idea of the early internet was essentially a survivable telecommunications network &#8211; remember, it was the Defense Department that did the original internet. So the original idea was survivability: the Russians could bomb the daylights out of the territorial United States and we would still have a survivable network. That was the idea. And therefore all the nodes were dispersed, did not count on each other, and could reroute. Well, now one company &#8211; UUNET or whatever they are &#8211; owns the whole thing. And you can look up all their locations in some internet database. Eighteen well-placed bombs and the whole internet goes down. That&#8217;s what happens over time.</p>
<p>Well, the whole cloud thing is also kind of a myth. It&#8217;s a very neat-sounding term, and some aspects of it are different and new. Nate and I do a lot of cloud computing; it&#8217;s all on Amazon.</p>
<p>But we&#8217;ve always had that. That&#8217;s called time sharing. Strictly speaking it&#8217;s a thin contract accompanied by a much, much easier application programming interface. That&#8217;s what cloud computing is. It&#8217;s a very skinny contract; a timeshare was a huge contract. Literally it&#8217;s legal ease and a little bit of API ease. It&#8217;s just timesharing. But at Amazon, and the other ones too, you&#8217;re not responsible for your node going down. If it goes down, they push it somewhere else automatically. Your disk goes down? You&#8217;re not responsible for backing up your disk; it&#8217;s already on 14 copies on 8 continents. They do that. So it&#8217;s a higher level of service. Nate and I have this thing called Slicehost, and we&#8217;ll probably build some services on it, and if they get popular, it&#8217;s like a vending machine: you just drop in a dime, they give you another slice. No contract at all. It is growth and learning about old ideas. Like this whole idea of software as a service: the company called ADP, Automatic Data Processing, basically does payroll for everybody. It&#8217;s software as a service. It&#8217;s been going on since 1952 or something. It&#8217;s more like a reconception using modern tools. Virtual worlds are a different thing &#8211; that&#8217;s a whole different beast.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/04/04/do-well-by-doing-good-talking-experience-and-design-in-a-mobile-world-with-nathan-freitas-and-david-oliver/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Towards a Newer Urbanism: Talking Cities, Networks, and Publics with Adam Greenfield</title>
		<link>http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/</link>
		<comments>http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/#comments</comments>
		<pubDate>Sat, 28 Feb 2009 04:28:06 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[crossing digital divides]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[free software]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Adam Greenfield]]></category>
		<category><![CDATA[aggregating the world's energy data]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Anne Galloway's forgetting machine]]></category>
		<category><![CDATA[antisocial networking]]></category>
		<category><![CDATA[antisocial networking systems]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[cities and networks]]></category>
		<category><![CDATA[connecting environments]]></category>
		<category><![CDATA[context aware]]></category>
		<category><![CDATA[context aware applications]]></category>
		<category><![CDATA[context aware mediators]]></category>
		<category><![CDATA[data visualization]]></category>
		<category><![CDATA[deliberative democracy]]></category>
		<category><![CDATA[Eben Moglen on privacy]]></category>
		<category><![CDATA[EEML]]></category>
		<category><![CDATA[Erving Goffman]]></category>
		<category><![CDATA[everyware]]></category>
		<category><![CDATA[flexible identity]]></category>
		<category><![CDATA[information processing]]></category>
		<category><![CDATA[interaction design]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[locative is a mood]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[mobile computing]]></category>
		<category><![CDATA[mobile phones and sensors]]></category>
		<category><![CDATA[mobility]]></category>
		<category><![CDATA[next generation internet]]></category>
		<category><![CDATA[Nurri Kim]]></category>
		<category><![CDATA[onto]]></category>
		<category><![CDATA[ontome]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[privacy in networked environments]]></category>
		<category><![CDATA[RFID]]></category>
		<category><![CDATA[self-describing networked objects]]></category>
		<category><![CDATA[smart homes]]></category>
		<category><![CDATA[smart products]]></category>
		<category><![CDATA[social networking systems]]></category>
		<category><![CDATA[sousveillance]]></category>
		<category><![CDATA[speedbird]]></category>
		<category><![CDATA[spime wrangle]]></category>
		<category><![CDATA[spime wrangling]]></category>
		<category><![CDATA[spimes]]></category>
		<category><![CDATA[spimy]]></category>
		<category><![CDATA[sustainable cities]]></category>
		<category><![CDATA[the big now]]></category>
		<category><![CDATA[the city is here for you to use]]></category>
		<category><![CDATA[the future of the internet]]></category>
		<category><![CDATA[the long here]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubicomp technologies]]></category>
		<category><![CDATA[ubiquitous systems]]></category>
		<category><![CDATA[unbook]]></category>
		<category><![CDATA[uncanny valleys]]></category>
		<category><![CDATA[urban informatics]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[web of things]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=2969</guid>
		<description><![CDATA[Adam Greenfield&#8217;s new book, The City Is Here For You To Use, is coming soon (photo above by Pepe Makkonen is from Adam Greenfield&#8217;s Flickr stream). Adam told me: &#8220;I&#8217;m aiming at a free v1.0 PDF release on 05 June 2009, with the book shipping as quickly thereafter as humanly possible. There will be a [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldpost.jpg"><img class="alignnone size-full wp-image-2970" title="adamgreenfieldpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldpost.jpg" alt="adamgreenfieldpost" width="333" height="500" /></a></p>
<p>Adam Greenfield&#8217;s new book, <em><strong><a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank">The City Is Here For You To Use</a></strong></em>, is coming soon (photo above by Pepe Makkonen is from <a id="souo" title="Adam Greenfield's Flickr stream" href="http://www.flickr.com/photos/studies_and_observations/">Adam Greenfield&#8217;s Flickr stream</a>). Adam told me:</p>
<p style="text-align: left;"><strong>&#8220;I&#8217;m aiming at a free v1.0 PDF release on 05 June 2009, with the book shipping as quickly thereafter as humanly possible. There will be a version zero or public alpha in about six weeks.&#8221;</strong></p>
<p>I am not good at waiting for books I really want to read to arrive. But, on the upside, it brings out my already pretty highly developed investigative instinct. So when Adam very generously agreed to do an interview, impatience turned into delight at tasting what is to come. And Adam is encouraging this kind of engaged anticipation. He writes (<a id="v80w" title="see post" href="http://speedbird.wordpress.com/2009/02/19/of-books-and-unbooks/">see post</a>) that <em>The City Is Here For You To Use</em> is shaping up:</p>
<p><strong>&#8220;as something of an <a id="oj:9" title="unbook" href="http://theunbook.com/2009/02/18/what-is-an-unbook/">unbook</a><em> avant la lettre</em>. It&#8217;s why we&#8217;ve [<a href="http://www.nurri.com/">Nurri Kim</a> and Adam Greenfield] always insisted on keeping you in the loop as to the book&#8217;s <a href="http://speedbird.wordpress.com/2009/01/22/bookproject-update-005-year-two/">fitful progress</a>, it&#8217;s why I take every opportunity to <a href="http://speedbird.wordpress.com/2009/02/14/the-city-is-here-table-of-contents/">test its ideas here</a>, it&#8217;s why I make explicit the fact that your response to those ideas is crucial to their evolution and expression. And it&#8217;s why, even though the process is inevitably going to result in a static, physical document as one of its manifestations &#8211; and hopefully a very nice one indeed &#8211; we&#8217;ve committed to offering a free and freely-downloadable Creative Commons-licensed PDF of every numbered version of <em>The City</em>, from zero onward.</strong></p>
<p><strong>You buy the book if you want the object. The ideas are free.&#8221;</strong></p>
<p>I found the opportunity to ask Adam questions about some of his subtle renderings of technology, culture, and being in urban environments challenging and very illuminating &#8211; although I definitely get the feeling I am asleep at the wheel on some of the critical areas he is thinking and writing on.</p>
<p>Knowing the depth and range of Adam&#8217;s thought in his seminal book, <em><a id="you9" title="Everyware" href="http://www.studies-observations.com/everyware/">Everyware</a></em>, and his blog, <a id="r22r" title="Speedbird" href="http://speedbird.wordpress.com/">Speedbird</a>, before I began the conversation I asked Adam to point me to some of his posts that reflect key ideas he is working on at the moment (Adam has recently posted <a href="http://speedbird.wordpress.com/2009/02/14/the-city-is-here-table-of-contents/" target="_blank"><em>The City Is Here</em>: Table of contents</a>). Adam directed me to these three posts:</p>
<p style="text-align: left;"><a href="http://speedbird.wordpress.com/2007/12/09/antisocial-networking/" target="_blank">Antisocial networking</a></p>
<p style="text-align: left;"><a href="http://speedbird.wordpress.com/2008/08/25/more-songs-about-context-and-mood/" target="_blank">More songs about context and mood</a></p>
<p><a href="http://speedbird.wordpress.com/2007/01/29/messenger-space-messenger-body-messenger-mesh/" target="_blank">Messenger, space, messenger body, messenger mesh</a></p>
<p>I may ramble and diverge, as is my nature, but these posts inspired many of the questions I ask.</p>
<p>Adam is currently head of design direction for service and user-interface design at Nokia and living in Helsinki, so I did not have the opportunity to do the interview in person. But I have glimpsed Adam&#8217;s world through his Flickr stream, and some of those images have found their way into this post. I suggest you browse Adam&#8217;s photography for yourself; I cannot do justice to the thousands of nuanced perceptions of cities, networks and publics you will find there. In the meantime, here are three glyphs of Adam Greenfield that I liked a lot.</p>
<p><strong><em><a id="r315" title="&quot;My favorite shoes&quot;" href="http://www.flickr.com/photos/studies_and_observations/2074835498/">&#8220;My favorite shoes,&#8221;</a> <a id="cg3n" title="&quot;My favorite chair&quot;" href="http://www.flickr.com/photos/studies_and_observations/2074042711/">&#8220;My favorite chair&#8221;</a></em></strong> <em>and</em> photo by Adam Greenfield, <em><strong><a id="vjz1" title="&quot;Favoriteplace&quot;" href="http://www.flickr.com/photos/studies_and_observations/1849426174/">&#8220;Favoriteplace&#8221;</a></strong></em></p>
<p><strong><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteshoespost.jpg"><img class="alignnone size-full wp-image-2984" title="favoriteshoespost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteshoespost.jpg" alt="favoriteshoespost" width="225" height="225" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoritechair1.gif"><img class="alignnone size-medium wp-image-2975" title="favoritechair1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoritechair1-300x225.gif" alt="favoritechair1" width="300" height="225" /></a></em></strong></p>
<p>
<a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteplace2.jpg"><img class="alignnone size-medium wp-image-2992" title="favoriteplace2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteplace2-300x225.jpg" alt="favoriteplace2" width="300" height="225" /></a></p>
<h3>A Conversation (in gdoc) with Adam Greenfield</h3>
<p><strong>Tish Shute:</strong> Could you explain a little about the evolution of your thoughts on urban environments, ubicomp and interaction design? What shifts in your thinking have taken place over the last few years re the dawning of the age of ubiquitous computing? It is a couple of years now since <a href="http://www.studies-observations.com/everyware/" target="_blank"><em>Everyware</em></a>; what aspects of the uptake of <em>Everyware</em> have most surprised, disappointed or inspired you? Which of the many theses you discuss in <em>Everyware</em> have become the most crucial for <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank"><em>The City Is Here For You To Use</em></a>?</p>
<p><strong>Adam Greenfield: You know, there&#8217;s a little passage in the liner notes to the second Throbbing Gristle album that I always think of when I&#8217;m asked questions along these lines. As part of their stance, they&#8217;d adopted the dry tone of a corporate annual report, and the preamble began by saying, &#8220;Since our last report to you, many things have changed. Indeed, it would be foolish to assume that it could be otherwise.&#8221; And I think that&#8217;s just exactly right: the world keeps moving, and the positions we&#8217;d staked ourselves to not so long ago may no longer be correct, or even relevant, to the one we find ourselves inhabiting now.<br />
</strong><br />
<strong>So, first, I think it&#8217;s important to cop to all the places in <em>Everyware</em> where I just outright got things wrong. There&#8217;s a passage in Thesis 50, for example, where I unaccountably mock the idea that &#8220;the mobile phone&#8230;will do splendidly as a mediating artifact for the delivery of [ubiquitous] services.&#8221; OK, this was admittedly written in a pre-iPhone world &#8211; and was correct <em>for</em> that world &#8211; but you can really see my parochialism showing here. It took the iPhone to make the proposition as blazingly self-evident to me in North America as it had been for quite some time to folks in Europe and Asia.</strong></p>
<p><strong>Having said that, though, I think I&#8217;m justified in taking a little pride in what the book got right. The broader trends the book set out to discuss &#8211; the colonization of everyday life by information processing &#8211; well, take a good look around you. And so one of the points of departure for the new book is taking everything posited in <em>Everyware</em> as a given: the urban environment, and most everything in it as well, has been provisioned with the kind of abilities you mention. So what now?</strong></p>
<p><strong>How do you go about designing informatic systems so they don&#8217;t undermine the wonderful things about cities? How do you design cities so they can incorporate networked informatics to greatest advantage? How, especially, do you accomplish these things when the disciplinary communities involved barely speak the same language? And how do you keep everyone&#8217;s eyes on the prize, which is the ordinary human being asked to make sense of these new propositions? These are the questions <em>The City Is Here For You To Use</em> sets out to address.</strong></p>
<p><a href="../wp-content/uploads/2009/02/adamgreenfieldthelonghere.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldthelonghere.jpg"><img class="alignnone size-full wp-image-2993" title="adamgreenfieldthelonghere" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldthelonghere.jpg" alt="adamgreenfieldthelonghere" width="500" height="321" /></a></p>
<p><em>Adam talking about the <a href="http://www.flickr.com/photos/studies_and_observations/3181518615/" target="_blank">&#8220;Le Long Ici&#8221;</a> in Paris (also see Adam&#8217;s post, <a href="http://speedbird.wordpress.com/2008/05/04/the-long-here-and-the-big-now/" target="_blank">&#8220;The long here and the big now&#8221;</a>)</em></p>
<p><strong>TS:</strong> You mention that the hardest part of producing <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank"><em>The City Is Here For You To Use</em></a> wasn&#8217;t <em><strong>&#8220;keeping on top of all the emergent manifestations of urban informatics, or even developing a satisfying spinal argument about their significance&#8221;</strong></em> but getting the voice right. It seems that now is the perfect time for a book that would really speak to a wide audience. But it also seems that the city that is here for you to use is manifesting quite differently in different parts of the world? You seem to be somewhat of a nomad &#8211; Japan to NYC to Helsinki. Can putting together different views of urban informatics give us more depth perception on the emergence of ubiquitous computing?</p>
<p><strong>AG: There's no question in my mind that the long-term experience of everyday life in Tokyo, New York, and now Helsinki has been an invaluable asset to me, as I imagine it would be to anybody interested in thinking or writing about the networked city. It's given me a certain amount of parallax, you know? And that, in turn, throws a really interesting light onto how the selfsame technology can appear in substantially different guises in different social contexts.</strong></p>
<p><strong>But explaining those things &#8211; those complicated, delicate negotiations &#8211; getting them right, doing them justice, doing so in a way that doesn't dumb anything down, and still remaining accessible? It's a challenge, let me tell you. You want to remain approachable and humane, but you also want to explain things like different jurisprudential takes on property, or how advocates of RESTful architectures think that REST is the reason why Internet adoption spread as rapidly as it did. If you want to enjoy even one chance in a hundred of getting your message across, you've got to start with an understanding that those subjects are MEGO territory for most people &#8211; whether they hail from Shibuya, Shoreditch or San Pedro.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon.jpg"><img class="alignnone size-full wp-image-2996" title="everywareicon" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon.jpg" alt="everywareicon" width="136" height="135" /></a></p>
<p><em><strong><a href="http://www.flickr.com/photos/studies_and_observations/89045331/" target="_blank">Everyware icons: Information processing dissolving into behavior</a></strong> (Icons inspired by <a href="http://www.elasticspace.com/" target="_blank">Timo Arnall</a>; design by Adam Greenfield and <a href="http://www.nurri.com/">Nurri Kim</a>). [Adam notes on his Flickr page that he tweaked <a href="http://www.flickr.com/search/?w=14112399%40N00&amp;q=everyware+icons&amp;m=text" target="_blank">these icons</a> as section headers for <a href="http://www.studies-observations.com/everyware/" target="_blank"><em>Everyware</em></a>]</em></p>
<p><strong>TS:</strong> Could you explain more about what you term "onto" and "ontome" and how this differs from spimes and spime wrangling?<strong><br />
</strong><strong><br />
AG: You know, I never did get to develop that idea as much as I would have liked. In my mind, at least, "ontome" referred to the totality &#8211; the global environment of addressable, queryable, scriptable objects. (An "onto," then, would be any given such object.) I guess I was looking for words that would do two things: allow us to distinguish between the instantiation and the class, and leave us with a better word than "spime."</strong></p>
<p><strong>TS: </strong>When you say a better word than spime, is this because&#8230;<br />
<strong><br />
AG: Euphony, primarily. : . )</strong></p>
<p><strong>TS:</strong> When I first used the Android app <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> on Broadway, NYC &#8211; a street I have traveled thousands and thousands of times &#8211; and it offered up new information about itself, it was definitely an "OMG this is big!" moment for me. Like the first time I clicked on a screen and Amazon sent out a book in the early nineties (something so ordinary now it seems impossible that it was exciting, but I remember it was to me!). But if I understand <a href="http://speedbird.wordpress.com/2008/08/19/worth-a-thousand-words-etc/" target="_blank">your post here</a> correctly, isn't Android with compass the first easy-to-use context-aware mediator for wrangling onto, ontome and spimes?<strong><br />
</strong><br />
<strong>AG: Wikitude sure looks pretty impressive, and maybe even useful. But I would never, ever call it "context-aware."<br />
</strong><br />
<strong>To my mind, at least two more things would need to happen before we could comfortably think of it as a "context-aware spime wrangler." First, the buildings and other public objects around you would actually have to be spimy &#8211; they'd have to report something of their past and current state to the network. And then, some application running on your phone would somehow have to cross-reference that state information with some fact about your current state of being, and deliver you relevant information.</strong></p>
<p><strong>So, let's take your Wikitude example. You're walking down Broadway and you pass an unfamiliar building, and for whatever reason you want to know more about it. Your phone pings the building's dynamic self-description, and it replies to the effect that Andy Warhol had his Factory there between 1973 and 1984. If Wikitude chooses to share this particular piece of information with you, and not some other potentially germane factoid from the building's history, on the strength of the fact that "The Velvet Underground and Nico" was in your last.fm playlist? That would constitute some small measure of context-awareness.</strong></p>
<p><strong>But you see how hard we had to try just to come up with an example, how forced it is, how</strong><em><strong> so-what. </strong></em><strong>And I have to say that &#8211; short of some infinitely supple system that really could model your innermost desires ahead of real time, and present appropriate responses to them &#8211; most so-called "context-aware" applications and services are like this. They're either trivial, or wildly overambitious.</strong></p>
<p><strong>Maybe we don't need for things to be context-aware for them to be useful, anyway. Certainly a great many objects in the world are starting to report their own status, and many more will do so in the fullness of time. And for the most part, all you'll need to avail yourself of them is a Web browser running on a device that knows where it is in the world. An iPhone or an Android device will work splendidly &#8211; I called the iPhone "the first real everyware device" the day it came out and I was able to play with it for the first time &#8211; and in that way, the answer to your question is "yes." Not to be longwinded or anything. ; . )</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/objectwithimperceptibleproperties.jpg"><img class="alignnone size-medium wp-image-3000" title="objectwithimperceptibleproperties" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/objectwithimperceptibleproperties-300x212.jpg" alt="objectwithimperceptibleproperties" width="300" height="212" /></a></p>
<p><em><a href="http://www.flickr.com/photos/studies_and_observations/206984090/#DiscussPhoto" target="_blank">This Object has imperceptible properties. </a> [Adam notes on his Flickr page: &#8220;This is a custom RFID-enabled transit pass that <a href="http://www.elasticspace.com/" target="_blank">Timo Arnall </a>had made up for me here in Seoul. I&#8217;ve (clumsily) tagged it with the icon that Nurri and I developed to represent just such emergent situations as this in the everyware milieu &#8211; that there&#8217;s no way for anyone to understand that this object has puissance beyond the obvious simply by examining it.&#8221;]</em></p>
<p><strong>TS: </strong>It seems that we are just at the beginning of understanding how to create networks of spimes (e.g. <a href="http://www.pachube.com/" target="_blank">Pachube</a>). Gavin Starks of <a id="ya:2" title="AMEE" href="http://www.amee.com/">AMEE</a> ("the world's energy meter") once suggested to me that AMEE could be described as a facilitator of networked spimes (everything will have an energy identity). I think you may be familiar with AMEE because you keynoted next to Gavin at <a href="http://2007.xtech.org/public/schedule/grid/2007-05-16" target="_blank">Xtech 2007</a>.</p>
<p>I would be interested to hear your thoughts on AMEE.</p>
<p>When <a href="http://speedbird.wordpress.com/2008/08/19/worth-a-thousand-words-etc/" target="_blank">you discussed onto and ontome in this post</a>, you noted:</p>
<blockquote><p><em><strong>"The greater part of the places and things we find in the world will be provided with the ability to speak and account for themselves. That they'll constitute a coherent environment, an <a href="http://www.graphpaper.com/2006/03-23_a-spime-is-a-species">ontome</a> of <a href="http://flickr.com/photos/studies_and_observations/89092744/">self-describing networked objects</a>, and that we'll find having some means of handling <a href="http://web.archive.org/web/20050117141647/www.v-2.org/greenfieldspime.pdf">the information flowing off of them</a> very useful indeed."</strong></em></p></blockquote>
<p>Is the idea of "energy identity" that AMEE proposes an ontome? <em><br />
<strong><br />
</strong></em><strong>AG: See below for a précis of my feelings regarding environmental/sustainability initiatives, AMEE included. Uh&#8230; is AMEE an ontome? No. There's just one ontome, and it's coextensive with what folks now call the Internet of Things. It sounds like individual AMEE sensors would be "ontos."</strong></p>
<p><strong>But I think the difficulty we're having is a pretty good indicator that the terminology is more trouble than it's worth. Sometimes a coinage, as satisfying as it may be lexically, just doesn't work for people. These days I'm trying to get out of the neologism trade.</strong></p>
<p><strong>TS: </strong>I know <a href="http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">when Usman Haque talks about Pachube</a> he talks about spimes and spime wrangling. I asked Usman for his thoughts on spimes and onto/ontome and he gave me some comments.</p>
<p><strong>Usman Haque:</strong> I think I had somehow missed the conversation about onto and ontome but backtracked through blog posts to piece it together (unfortunately some posts at v-2 and Studies &amp; Observations no longer exist!). There are a couple of things that have made me uncomfortable about the word 'spime': (a) the fact that it might be too easy to confuse with an "object". A 'spime' should also encompass relationships between things, and not just the "thingness" itself. (b) the sound of it (as Adam noted above). But then I am reminded of that horrible gooey interface used to plug into people in <a href="http://www.imdb.com/title/tt0120907/">eXistenZ</a> &#8211; it somehow seems appropriate that it should be a horrible gooey word, and not something that can disappear politely&#8230; So I like onto/ontome because it speaks to my first concern about 'spime'; but my second concern, it turns out, is not the problem I thought it was, and so onto/ontome might be&#8230; ahem&#8230; too euphonic! On the question of this thing people are calling the "Internet of Things", I've tried in lectures to reframe it as the "Ecosystem of Environments". Further, Vlad Trifa makes a delicious point that just as 'web' is different from 'internet', so too should we consider the "Web of Things" rather than the "Internet of Things", something I agree with.</p>
<p><strong>TS: </strong>It seems like this point about the difference between "the web of things" and the "internet of things" is pretty important?<br />
<strong><br />
AG: The parallel distinction between Web and Internet sure is! They're two completely different things, right? And HTTP is far from the only protocol that runs over the Internet. Now, as to what Vlad means by extending this particular distinction to the domain of networked objects, I don't yet know, I haven't had time to check it out. But sure, in principle I'd totally be willing to go along with the idea that there's a meaningful distinction between two environments named that way.</strong></p>
<p><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon3.jpg"><img class="alignnone size-full wp-image-3010" title="everywareicon3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon3.jpg" alt="everywareicon3" width="142" height="139" /></a><br />
</em></p>
<p><em><a href="http://www.flickr.com/photos/studies_and_observations/89045326/in/photostream/" target="_blank">No information is collected here; network dead zone</a></em></p>
<p><strong>TS: </strong>I was just going over <a id="yo_s" title="Greenfield's principles of ubiquitous computing" href="http://www.we-make-money-not-art.com/archives/2006/10/adam-greenfield.php">Greenfield's principles of ubiquitous computing</a>. I am not sure that I see any current manifestations of ubicomp that hold to these principles yet?</p>
<p><strong>AG: Oh, sure there are. Look at the work Tom Coates has done on <a href="http://fireeagle.yahoo.net/" target="_blank">Yahoo!'s Fire Eagle</a>; look at <a href="http://www.dopplr.com/" target="_blank">Dopplr</a>. And look at some of the steps other, less compassionate developers (e.g. Facebook) have been forced to take by their own users.</strong></p>
<p><strong>Look, those principles are just codifications of common sense and basic neighborly virtues, expressed in language appropriate to the domain of application. The best, smartest and most ethical developers have never needed guidelines to do the right thing. But especially inside companies and other complex organizations, people who want to implement compassion in their design of a technical system may occasionally find it useful to have some color of authority to invoke in their struggles. That's all those five principles are there for, and I'm well satisfied that people have been able to use them that way.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smarthome.jpg"><img class="alignnone size-medium wp-image-3005" title="smarthome" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smarthome-300x225.jpg" alt="smarthome" width="300" height="225" /></a><a href="http://www.flickr.com/photos/studies_and_observations/501331002/" target="_blank"><br />
</a></p>
<p><em><a href="http://www.flickr.com/photos/studies_and_observations/501331002/" target="_blank">Boffiâ€™s take on the smart home</a>- photo by Adam Greenfield</em></p>
<p><strong>TS:</strong> In your post, <a id="klme" title="More Songs About Context And Mood" href="http://speedbird.wordpress.com/2008/08/25/more-songs-about-context-and-mood/">More Songs About Context And Mood,</a> you suggest a direction for interaction design that you point out is not far from Yvonne Rogers' ideas in "Moving on from Weiser" about a switch in the goal of ubicomp from Weiser's vision of calm living ("computers appearing when needed and disappearing when not") to engaged living &#8211; ubicomp technologies not designed to do things for people but to help people engage more actively in the things that they do (ensembles, ecologies of resources).</p>
<p>You also suggest interaction designers should be:</p>
<blockquote><p><strong><em>&#8220;parsimonious about the interaction design challenges our organizations do take on, with an eye toward reducing the complications of context (and the attendant opportunities for default, misunderstanding, misfire, time-wasting, and humiliation) to some manageable minimum.&#8221;</em></strong></p></blockquote>
<p>As you have pointed out, "we don't do 'smart' very well yet." But paradoxically smart grids, smart homes, smart products etc. etc. are ubiquitously coming to market right now.</p>
<p>Yvonne Rogers suggests interaction designers should be:</p>
<blockquote><p><em>moving from a mindset that wants to make the environment smart and proactive to one that enables people, themselves, to be smarter and proactive in their everyday and working practices</em><em> </em></p></blockquote>
<p>What areas might interaction designers most productively direct their attention towards?<br />
<strong></strong></p>
<p><strong>AG: You note that things called "smart homes" and "smart products" are coming onto the market, and that sure would seem to be the case. But as to whether or not these things are genuinely smart, we don't have anything more to go on than the marketing department's word. I think you can already see that I tend to take language very seriously, and I really don't like uses like the "smart" here, or the "aware" in "context-aware." They overpromise, and they cannot help but set us up for failure and disappointment.</strong></p>
<p><strong>You know what I'd really like to see interaction design wrestle with? I would love to see a rigorous, no-holds-barred examination of the complexities of the self and its performance in everyday life, and how these condition our use of public space (and personal media in public space). I would love to see the development of ostensibly "social" platforms informed by some kind of reckoning with issues like vulnerability, dishonesty, the fact of power dynamics. In other words, before we deign to go about "helping" people, wouldn't it be lovely if we understood what they perceived themselves as needing help with, and why?</strong></p>
<p><strong>I'd also pay good money to see talented interaction designers turn their efforts toward tools for the support of deliberative democracy, for the navigation of complex multivariate decision spaces, and for conflict resolution.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/locativeasamood.jpg"><img class="alignnone size-full wp-image-3071" title="locativeasamood" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/locativeasamood.jpg" alt="locativeasamood" width="500" height="375" /></a><a href="http://flickr.com/photos/studies_and_observations/2521894341/" target="_blank"><br />
</a></strong></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/2521894341/" target="_blank">Locative is a mood</a> &#8211; photo by Adam Greenfield</em><strong><br />
</strong></p>
<p><strong>TS:</strong> I know you said this would take too long to explain, but I couldn't help noticing that you seem to be, perhaps, skeptical about the role everyware can play in sustainable living. And yet it seems, at the moment, in the hacker and business communities at least, the role of everyware in reducing carbon footprint/energy management etc. is the great green hope?</p>
<p>Will everyware enable or hinder fundamental changes at the level of culture and identity necessary to support the urgent global need &#8211; "to consume less and redefine prosperity"?<strong><br />
</strong><br />
<strong>AG: I'm not skeptical about the potential of ubiquitous systems to meter energy use, and maybe even incentivize some reduction in that use &#8211; not at all. I'm simply not convinced that anything we do will make any difference.</strong></p>
<p><strong>Look, I think we really, seriously screwed the pooch on this. We have fouled the nest so thoroughly and in so many ways that I would be absolutely shocked if humanity comes out the other end of this century with any level of organization above that of clans and villages. It's not just carbon emissions and global warming, it's depleted soil fertility, it's synthetic estrogens bioaccumulating in the aquatic food chain, it's our inability to stop using antibiotics in a way that gives rise to multi-drug-resistance in microbes.</strong></p>
<p><strong>Any one of these threats in isolation would pose a challenge to our ability to collectively identify and respond to it, as it's clear anthropogenic global warming already does. Put all of these things together, assess the total threat they pose in the light of our societies' willingness and/or capacity to reckon with them, and I think any moderately knowledgeable and intellectually honest person has to conclude that it's more or less "game over, man" &#8211; that sometime in the next sixty years or so a convergence of Extremely Bad Circumstances is going to put an effective end to our ability to conduct highly ordered and highly energy-intensive civilization on this planet, for something on the order of thousands of years to come.</strong></p>
<p><strong>So (sorry <em>again</em>, Bruce) I just don't buy the idea that we're going to consume our way to Ecotopia. Nor is any symbolic act of abjection on my part going to postpone the inevitable by so much as a second, nor would such a sacrifice do anything meaningful to improve anybody else's outcomes. I'd rather live comfortably &#8211; hopefully not obscenely so &#8211; in the years we have remaining to us, use my skills as they are most valuable to people, and cherish each moment for what it uniquely offers.</strong></p>
<p><strong>Maybe some people would find that prospect morbid, or nihilistic, but I find it kind of inspiring. It becomes even more crucial that we not waste the little time we do have on broken systems, broken ways of doing things. The primary task for the designers of urban informatics under such circumstances is to design systems that underwrite autonomy, that allow people to make the best and wisest and most resonant use of whatever time they have left on the planet. And who knows? That effort may bear fruit in ways we have no way of anticipating at the moment. As it says in the Qur'an, gorgeously: "At the end of the world, plant a tree."</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/biowall2.jpg"><img class="alignnone size-full wp-image-3008" title="biowall2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/biowall2.jpg" alt="biowall2" width="375" height="500" /></a><br />
</strong></p>
<p><em><a href="http://www.flickr.com/search/?q=biowall&amp;w=14112399%40N00" target="_blank">Biowall!</a> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS: </strong>In <a href="http://speedbird.wordpress.com/2007/12/09/antisocial-networking/" target="_blank">your post "Antisocial Networking,"</a> you make some telling comments on the sorry state of social networking systems.</p>
<div style="margin-left: 40px;"><strong><em>"All</em> <em>social-networking systems, as currently designed, demonstrably create social awkwardnesses that did not, and could not, exist before. All social-networking systems constrain, by design and intention, any expression of the full band of human relationship types to a very few crude options &#8211; and those static! A wiser response to them would be to recognize that, in the words of the old movie, 'the only way to win is not to play.'"</em></strong></div>
<p>But you do also state:</p>
<div style="margin-left: 40px;"><strong><em>"But it's past time for me to acknowledge that while the discourse of social networking may at first blush seem marginal to my core concerns, it's far more central to those concerns than I might wish."</em></strong></div>
<p>Which of your concerns is social networking more central to than you might wish and why?</p>
<p><strong>AG: Well, you know I'm interested in social interaction, interpersonal behavior, and in how these things play out in networked environments. There's virtually no way for me to avoid dealing with Facebook, as wretched as I think it is.</strong></p>
<p><strong>Facebook is pretty hegemonic, in that its reach and influence extend further than the universe of people who use it. I bump up against it constantly, in a few different ways. People send me links I can't access, because I'm not on Facebook. People spend time and energy trying to convince me that I'm really missing out, because I'm not on Facebook. The last few months, there have even been a few people who feel justified in expressing some kind of exasperation, who are really pissed off&#8230; because they can't find me on Facebook. It's become the sovereign interface to any kind of life in public, and as a result a great many people don't question its modes, tropes and metaphors.</strong></p>
<p><strong>So when it comes time to build some kind of situated interpersonal mediation framework, some kind of intervention in the fabric of the city, those are the tropes they reach for: accounts, profiles, friend counts, friendings and unfriendings, nudges and pokes. And as a member of a team tasked with the design of such systems, as a potential user of them, and certainly as someone exposed to the social rhetoric flowing downstream from their use, you bet these tropes become central to my concerns.</strong></p>
<p><strong>But what if we admitted that Facebook and the whole paradigm it's built on are broken? What would things look like if we started from a more sensitive understanding of the interaction between self and others? Say, the understanding Erving Goffman was offering us as far back as the late 1950s? Then you'd understand the need for provisions like a "backstage," a place to swap out one mask for another, the ability to present oneself differently to different communities and networks. That's what I'm interested in exploring.</strong></p>
<p><strong>TS: </strong>Social networking systems in their current form are crude and express a very narrow bandwidth of human relationship. But already people are connecting everyware's networked social acts to existing social networking systems. At the ITP winter show there was <a id="eo:2" title="kickbee" href="http://gizmodo.com/5109297/kickbee-now-the-world-can-know-what-your-fetus-is-up-to">kickbee</a> &#8211; networked fetal communication (and <a id="kwj6" title="tweetmobile" href="http://tweetmobile.com/">tweetmobile</a>, which used Twitter as an actuator for an ambient display) &#8211; and green everyware (energy monitoring) is showing up in a number of forms on existing social networks. But rather than just hooking up everyware to these existing flawed social network systems, does everyware require a reimagining of networked social interactions and social networking systems?<strong><br />
</strong><br />
<strong>AG: That's a great question, and I think the answer is clearly "yes." It's one thing to confine the consequences of that brokenness to the Web, and entirely another to let it bleed out into the world.</strong></p>
<p><strong>Does that mean any such reimagining is <em>going</em> to happen, that people will somehow refrain from plugging real-world outputs into these terribly flawed frameworks? Not a chance in hell. It's too late to put a fence on that particular cliff. But maybe there's still time to park an ambulance in the valley below.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/earthssurface.jpg"><img class="alignnone size-full wp-image-3074" title="earthssurface" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/earthssurface.jpg" alt="earthssurface" width="375" height="500" /></a></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/2970558731/" target="_blank">&#8220;A graphic representation of a portion of the Earth&#8217;s surface, as seen from above&#8221;</a> &#8211; photo by Adam Greenfield<br />
</em></p>
<p><strong>TS: </strong>I saw you tweet that you met Usman Haque from <a href="http://www.pachube.com/" target="_blank">Pachube</a> recently. What do you find most interesting about Pachube and <a href="http://www.eeml.org/" target="_blank">EEML</a>? Will you design a project for Pachube to push the conversation further? Did Usman ask you to take a role in the future of Pachube? How does Pachube enable the vision of <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank"><em>The City Is Here For You To Use</em></a>? I could go on forever with questions, so please do tell!</p>
<p><strong>AG: OK, I should probably reiterate that my fundamental interest is in people, and in what they choose to make and do with technology, not the technology itself. For the last few years, I've particularly been trying to understand how people interact with each other and with the urban environments around them when those environments have been provisioned with the ability to gather, process and take action on data. And this is how I come about my interest in what Usman is up to with Pachube, because those "gather," "process" and "take action upon" functions are generally accomplished by different systems, designed by different groups of people, at different times and to different ends. What Pachube aims to do is make the difficult and not-particularly-glamorous work of connecting these pieces a whole lot easier.</strong></p>
<p><strong>Think of it as a step toward enabling the ontome, this so-called Internet of Things we've been talking about, the same way basic protocols like HTTP and HTML enabled the wildfire spread of the Internet we're familiar with. What Pachube offers is a way &#8211; a relatively straightforward and self-explanatory way &#8211; to plug any given compatible input into a similarly compatible output. So if you've got an air-quality sensor or a soil-pH sensor or a personal biometric monitor, you can plug it into Pachube, and someone else can grab the data those things generate and use it to drive a visualization, or the state of a physical system like a window, or whatever else they can imagine. It's as close as anyone's yet come to providing a plug-and-play backbone for the creation of responsive environments.</strong></p>
<p><strong>And I think it's absolutely brilliant that it's designed to work with Arduino and Processing, two lightweight, open-source frameworks that hobbyists and researchers (and even one or two more serious developers) around the world are already using to build things. (Arduino's a kit of parts for doing basic physical computing &#8211; using data to drive lights, motors, and other actuators that have effect out here in the world &#8211; while Processing is a very accessible language to do dynamic and interactive graphics for screen-based media). Given both its openness and modularity, and its willingness to build on top of the very popular frameworks that already exist, I'm very excited to see what people make of and with Pachube.</strong></p>
<p><strong>I have to be honest and admit that personally, I couldn't really care less about the environmental angle, for reasons that I went into at embarrassing length above. What I'm engaged by in Usman's work is the idea that Pachube is helping to create an open platform for people to share data more readily. And while, no, he hasn't explicitly asked me to take any particular stake in things, I'm always happy to lend a hand in whatever way would be most useful. I think it's a project worth supporting.</strong></p>
<p><strong>As to how Pachube enables some of the ideas in</strong><em><strong> The City Is Here</strong></em><strong>, the answer has to do with the book's call for every "public object" &#8211; every lamppost, bus shelter, commercial façade, and so forth &#8211; to support an open API. Something's got to string all those objects together, present them to people as resources to be taken up and used, and Usman's offered us a critical first step in that direction.</strong><em><strong><br />
</strong></em><br />
<strong>TS:</strong> Usman suggested it might be interesting to ask you about &#8220;the tension between &#8216;could&#8217; and &#8216;should.&#8217;&#8221;</p>
<p><strong>Usman Haque: </strong>There are a whole bunch of things that we &#8220;can&#8221; do, technologically speaking; how do we decide what we &#8220;should&#8221; do, as we find ourselves in an age where we can build almost anything we can imagine &#8211; particularly with reference to the technology/privacy/security triumvirate? For example, leaving aside that the majority of the world is *not* in the technology &#8216;paradise&#8217; that we&#8217;re in here in the West, only a small fraction of people are currently producing the technology that the rest of us use. One aim is to get people more engaged in the productive process, but, in a sense, that will also mean the whole wide ecosystem of technology will be even bigger, both &#8220;good&#8221; stuff and &#8220;bad&#8221; (that qualification resting firmly on how it&#8217;s used), as opposed to now, when we can focus on quite specific things that government &amp; industry are doing and say &#8220;that shouldn&#8217;t be happening&#8230;&#8221;. Part of this relates to something <span class="nfakPe">Adam </span>said in the comments on his blog (see <a href="http://speedbird.wordpress.com/2007/12/02/urban-computing-pamphlet-is-go/" target="_blank">here</a>).</p>
<p><strong>AG: I think the first part of answering that question has to involve figuring out who &#8220;we&#8221; are in any given situation. A &#8220;we&#8221; composed of seven Helsinki-based Linux developers would most likely arrive at very different answers than the United States Air Force Materiel Command or Samsung&#8217;s board of directors, right? So clearly, a first challenge is getting to some kind of pragmatically useful alignment between those local and occasionally even painfully parochial perspectives with what&#8217;s best for the Big We. And this challenge is only going to become more vexing as the ability to imagine, design, build and deploy informatic componentry gets more and more widely distributed. In this respect the spread of simple, modular, low-barrier-to-entry tools only makes things worse!</strong></p>
<p><strong>The primary issue that I can see here is that the inherent clock speed of technical development is so very much faster than that of any meaningful deliberative process &#8220;we&#8221; might bring to bear on it. A concomitant concern is that the sources of technical innovation and production are now so widely distributed that you can be reasonably certain that somebody, somewhere will implement any given technically feasible idea, no matter how offensive, poorly thought-out, socially disruptive or frankly stupid. A public toilet you have to SMS to unlock and use? A &#8220;Friend Finder&#8221; visualization with high locational precision and no privacy features whatsoever? A first-person rape-simulation &#8220;game&#8221;? A clunky brown iPod knockoff? Somebody thought each one of these things was worth the time, expense and effort to actually go about making it. They exist.</strong></p>
<p><strong>But I&#8217;m pretty old-fashioned in some ways, in that I think the good old Habermasian idea of the public sphere still has some life left in it. And I think it should be self-evident by now that there&#8217;s no necessary contradiction between even the newest (cough) &#8220;social media&#8221; and the formation of such a sphere. So you&#8217;ve provided a forum, and in it I get to express my belief that these things are stupid and pointless and probably should not have been built. And if somebody gets all het up about that, they can argue right back at me in comments. And eventually one or another of these positions begins to tell, in terms of regulation, legislation, and other tools of the juridical order, in terms of protest campaigns or organized boycotts or litigation&#8230;in terms of nonexistent sales!</strong></p>
<p><strong>There&#8217;s nothing new in any of this, of course, though indubitably some of the dynamics are amplified or accelerated by e-mail, Twitter and YouTube. My main contention is that informatic technology now has such deeply pervasive implications, and for things like presentation of self that previous waves of technical development barely touched, that &#8220;we&#8221; as societies need to be very much more conscious of the consequences before committing to any one course of action.</strong></p>
<p><strong>I should also point out that I do not, at all, believe that we&#8217;re &#8220;in an age where we can build almost anything we can imagine,&#8221; though I might buy &#8220;&#8230;<em>two or three of</em> almost anything we can imagine.&#8221; On the contrary, as I implied above, I think the global constraints on our ability to operate freely are already becoming quite evident, and will continue to grow teeth over the next few decades.</strong></p>
<p><strong><br />
TS: </strong>Also, Usman added&#8230;</p>
<p><strong>Usman Haque:</strong> &#8230;where Adam said: <em>in this regard, I very much *do* have a problem with &#8220;just showing up&#8221;</em> &#8211; something I feel as well. But I always wonder: what happens when one appears to be mandating participation&#8230;?</p>
<p><strong>AG: Look, I happen to have a strong &#8211; maybe some would say obnoxious or hyperactive or overdeveloped &#8211; sense of personal responsibility and accountability. I think one is basically committed to some measure of responsibility for the commonweal simply by surviving to the age of majority. The choice of how, particularly, to discharge that responsibility can only be yours and yours alone, but it can&#8217;t be ducked or gotten around without severe and entirely predictable consequences. So to Usman I&#8217;d respectfully suggest that I&#8217;m not the one mandating participation. Life is.</strong></p>
<p><strong>TS:</strong> It seems we have grown accustomed to striking a Faustian bargain on the internet today &#8211; in order to share and distribute parts of our identity, we are expected to give up key information to one site to store and disperse our data. I took part in<a href="http://www.ugotrade.com/2007/12/21/a-conversation-with-eben-moglen-on-second-life/" target="_blank"> a discussion with David Levine, IBM and Eben Moglen on privacy</a> last year. And Eben Moglen gave a succinct description of the elements of privacy and how they have been treated in the American Constitution that is, I think, relevant to unpacking some of the challenges of ubiquitous computing. Here are some extracts from that conversation where Eben notes:</p>
<blockquote><p><em>there are three elements that are mixed up in privacy and we tend not to notice which one we are talking about at any given moment.</em></p>
<p><em>There is secrecy &#8211; that is, the data should not be readable by or understandable by anybody except me or people I designate. There is anonymity, which is the data can be seen by anybody but about whom it should be knowable only by me or people that I designate. And there is autonomy, which isn&#8217;t about either secrecy or anonymity but which is about my right to live under circumstances which reinforce my sense that I am in control of my own fate. And this form of privacy is actually the one we talk about in the constitutional structure when we talk about the right to get an abortion or use birth control.</em></p></blockquote>
<p>â€œAnonymityâ€ is a condition that is a deep structuring characteristic of the internet as you, Lessig and others have commented on.Â  And frequently we are promised (questionably) â€œsecrecyâ€ or anonymity as privacy protection by services handling our data on the internet.Â  But Eben (one of the USâ€™s great constitutional lawyers) points out that â€œautonomyâ€ is a key form of privacy in theÂ  US constitutional structure that is often compromised in situations where our digital selves may constrain our non-digital selves.</p>
<blockquote><p><em>The real issue here is about the forcing of choices on us&#8230;digital aspects of identity can quickly acquire an inflexibility that constrains our non-digital selves.</em></p>
<p><em>I see again and again the ways in which people now find themselves unable to make certain life choices easily because their digital self has acquired an inflexibility that constrains their non-digital self.</em></p></blockquote>
<p>As we go beyond the end-to-end internet and lose the structuring characteristic that has privileged anonymity, how do you see these three elements of privacy &#8211; anonymity, secrecy and, most importantly, autonomy &#8211; being worked out in a networked world beyond the end-to-end internet?</p>
<p>Are there any new structuring characteristics that could privilege autonomy? (which Eben indicates is linked to having a flexible identity).</p>
<p><strong>AG: If we accept for the moment a definition of autonomy as a feeling of being master of one&#8217;s own fate, then absolutely yes. One thing I talk about a good deal is using ambient situational awareness to lower decision costs &#8211; that is, to lower the information costs associated with arriving at a choice presented to you, and at the same time mitigate the opportunity costs of having committed yourself to a course of action. When given some kind of real-time overview of all of the options available to you in a given time, place and context &#8211; and especially if that comes wrapped up in some kind of visualization that makes anomaly detection and edge-case analysis instantaneous gestalts, to be grasped in a single glance &#8211; your personal autonomy is tremendously enhanced. <em>Tremendously</em> enhanced.</strong></p>
<p><strong>But as to how this local autonomy could be deployed in Moglen&#8217;s more general terms, I don&#8217;t know, and I&#8217;m not sure anyone does. Because he&#8217;s absolutely right: Bernard Stiegler reminds us that the network constitutes a <em>global mnemotechnics</em>, a persistent memory store for planet Earth, and yet we&#8217;ve structured our systems of jurisprudence and our life practices and even our psyches around the idea that information about us eventually expires and leaves the world. Its failure to do so in the context of Facebook and Flickr and Twitter is clearly one of the ways in which the elaboration of our digital selves constrains our real-world behavior. Let just one picture of you grabbing a cardboard cutout&#8217;s breast or taking a bong hit leak onto the network, and see how the career options available to you shift in response.</strong></p>
<p><strong>This is what&#8217;s behind Anne Galloway&#8217;s calls for a &#8220;forgetting machine.&#8221; An everyware that did that &#8211; that massively spoofed our traces in the world, that threw up enormous clouds of winnow and chaff to give us plausible deniability about our whereabouts and so on &#8211; might give us a fighting chance.</strong><br />
<strong><br />
TS: </strong>The concept of autonomy is signaled clearly in the title you have chosen for your next book, <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank"><em>The City Is Here For You To Use</em>,</a> and is a theme of all your writing! While you talk about many of the possible constraints to presentation of self and potential threats to a flexible identity that ubicomp poses, your next book signals optimism. What are your key grounds for optimism?</p>
<p><strong>AG: It&#8217;s not optimism so much as hope. Whether it&#8217;s well-founded or not is not for me to decide. I guess I just trust people to make reasonably good choices, when they&#8217;re both aware of the stakes and have been presented with sound, accurate decision-support material.</strong></p>
<p><strong>Putting a fine point on it: I believe that most people don&#8217;t actually want to be dicks. We may have differing conceptions of the good, our choices may impinge on one another&#8217;s autonomy. But I think most of us, if confronted with the humanity of the Other and offered the ability to do so, would want to find some arrangement that lets everyone find some satisfaction in the world. And in its ability to assist us in signalling our needs and desires, in its potential to mediate the mutual fulfillment of same, in its promise to reduce the fear people face when confronted with the immediate necessity to make a decision on radically imperfect information, a properly-designed networked informatics could underwrite the most transformative expansions of people&#8217;s ability to determine the circumstances of their own lives.</strong></p>
<p><strong>Now that&#8217;s epochal. If that isn&#8217;t cause for hope, then I don&#8217;t know what is.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/obamannook1.jpg"><img class="alignnone size-full wp-image-3076" title="obamannook1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/obamannook1.jpg" alt="obamannook1" width="375" height="500" /></a></strong></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/3246420459/" target="_blank">Newson Obamanook</a> &#8211; photo by Adam Greenfield, &#8220;The fact that it was one of the happiest days of my adult life may have colored my appreciation of this space. A bit, anyway.&#8221;</em></p>
<p><strong>TS:</strong> In your writing you seem to imply that we will not find answers to our new relationship with Everyware by transposing the internet onto things for convenience&#8217;s sake, but rather, like the bike messengers, we must explore the rich and complex terrain of the city that is ours to use in a give-and-take relationship. Through our own exertions we find how &#8220;anything reasonably smooth and approximately horizontal can become a thoroughfare,&#8221; rather than be served up the city as something for us to consume.</p>
<p>You seem to be suggesting our city becomes ours to use because of the way we use it in our personal journeys &#8211; like the way &#8220;the messenger subconsciously maps the contours of an economic geography &#8211; known sources and sinks of courier assignments, or &#8216;tags&#8217; &#8211; and a threat landscape, this latter comprised of blind corners, cable-car and metro tracks, and traffic lanes.&#8221;</p>
<p>But bike messengers are the lone rangers of our big cities. Others surf the city in tribes that ride the roiling tides of highly networked information together. How are the &#8220;natural&#8221; gestures of these tribes, e.g. day traders yoked to the tracings of a hive mind, part of the city that is here for us to use? I thought the comment <a href="http://twitter.com/ginsudo" target="_blank">@ginsudo</a> made shortly after joining Twitter and setting up TweetDeck particularly poignant:</p>
<blockquote><p><em><span class="status-body"><span class="entry-content">&#8220;watching Tweetdeck is like watching stock market of your personality ebb and flow. needs analytics to maximize inherent self-involvement.&#8221;</span></span></em></p></blockquote>
<p>But for many of us our work has more in common with the day trader than the bike messenger, and we are pretty hooked on the ever-growing possibilities for &#8220;contact&#8221; and identity sharing/construction that social media has produced (with all the &#8220;Here Comes Everybody&#8221; (C. Shirky) benefits and risks). Early theorizing of a &#8220;calm,&#8221; &#8220;invisible&#8221; ubicomp seems out of sync with the excitable, active, engaged, contact-driven &#8220;users&#8221; that are <span class="status-body"><span class="entry-content">watching the stock market of their personality (or personal brand) ebb and flow.</span></span></p>
<p>How will these excitable/exciting processes of contact and identity sharing that have captured a pretty large segment of the popular imagination (not confined to the West &#8211; services like <a id="f9mb" title="Gupshup" href="http://www.smsgupshup.com/">Gupshup</a> do much of the same curating, linking and distributing of identity in SMS that web-based social media does) be &#8211; or not be &#8211; part of <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank"> The City Is Here For You To Use</a>?<strong><br />
</strong><br />
<strong>AG: Let&#8217;s remember that ubicomp itself, as a discipline, has largely moved on from the Weiserian discourse of &#8220;calm technology&#8221;; Yvonne Rogers, for example, now speaks of &#8220;proactive systems for proactive people.&#8221; You can look at this as a necessary accommodation with the reality principle, which it is, or as kind of a shame &#8211; which it also happens to be, at least in my opinion. Either way, though, I don&#8217;t think anybody can credibly argue any longer that just because informatic systems pervade our lives, designers will be compelled to craft encalming interfaces to them. That notion of Mark Weiser&#8217;s was never particularly convincing, and as far as I&#8217;m concerned it&#8217;s been thoroughly refuted by the unfolding actuality of post-PC informatics.</strong></p>
<p><strong>All the available evidence, on the contrary, supports the idea that we will have to actively fight for moments of calm and reflection, as individuals and as collectivities. And not only that, as it happens, but for spaces in which we&#8217;re able to engage with the Other on neutral turf, as it were, since the logic of &#8220;social media&#8221; seems to be producing</strong><em><strong> Big Sort</strong></em><strong>-like effects and echo chambers. We already &#8220;maximize inherent self-involvement,&#8221; analytics or no, and the result is that the tools allowing us to become involved with anything but the self, or selves that strongly resemble it, are atrophying.</strong></p>
<p><strong>So when people complain about K-Mart and Starbucks and American Eagle Outfitters coming to Manhattan, and how it means the suburbanization of the city, I have to laugh. Because the real</strong> <strong>suburbanization is the smoothening-out of our social interaction until it only encompasses the congenial. A gated community where everyone looks and acts the same? <em>That&#8217;s</em> the suburbs, wherever and however it instantiates, and I don&#8217;t care how precious and edgy your tastes may be. Richard Sennett argued that what makes urbanity is precisely the quality of necessary, daily, cheek-by-jowl confrontation with a panoply of the different, and as far as I can tell he&#8217;s spot on.</strong></p>
<p><strong>We have to devise platforms that accommodate and yet buffer that confrontation. We have to create the safe(r) spaces that allow us to negotiate that difference. The alternative to doing so is creating a world of ten million autistic, utterly atomic and mutually incomprehensible tribelets, each reinforced in the illusion of its own impeccable correctness: duller than dull, except at the flashpoints between. And those become murderous. Nope. Unacceptable outcome.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/uncannyvalleys.jpg"><img class="alignnone size-full wp-image-3075" title="uncannyvalleys" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/uncannyvalleys.jpg" alt="uncannyvalleys" width="500" height="369" /></a></strong><br />
<em><a href="http://flickr.com/photos/studies_and_observations/3119708407/" target="_blank">Uncanny Valleys </a>- Adam comments, &#8220;Our apartment in NYC as rendered in Google Earth, with realtime traffic, weather, daylight and shadow as well as geodetic, street grid and service overlays. Camera view is South; that&#8217;s First Avenue just left of center-screen.&#8221;</em></p>
<p><strong><br />
TS:</strong> Smartphones are now drawing everyware data into the system, and the net is reaching into who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.</p>
<p><a id="u:ys" title="Nathan Freitas" href="http://openideals.com/">Nathan Freitas</a> says Android &#8220;seems to be the platform most likely to socialize the idea that sensor data could be a piece of every application.&#8221; (Android offers APIs for a wide range of sensor data.)</p>
<p>Which platform, in your view &#8211; Android or another &#8211; is most likely to socialize the idea that sensor data could be a piece of every application?</p>
<p><strong>AG: An open platform. A platform with lots of hooks and ways to plug things into it, a strong developer community, a shallow learning curve and/or an easy-to-use, high-level development environment.</strong></p>
<p><strong>I don&#8217;t have a dog in this race, mind you. I couldn&#8217;t care less who gets there first.</strong></p>
<p><strong>TS: </strong>New location-based services, e.g. <a id="kvue" title="Xtify" href="http://xtify.com/featured">Xtify</a> and <a id="fajp" title="ViaPlace" href="http://www.viaplace.com/">ViaPlace</a>, are offering us ways to share location data across lots of different applications (e.g. Xtify and a dating application like <a id="yixz" title="MeetMoi" href="http://www.meetmoi.com/welcome">MeetMoi</a>). In return for services that allow us to share information, we must give key information up to one site to store and disperse (although there are many differences in approach to our data, from the Twitter stance &#8211; &#8220;show but don&#8217;t own&#8221; &#8211; to Facebook&#8217;s &#8211; &#8220;in order to show we must have rights to it&#8221;). But the basic model of Twitter &#8211; to provide a white-noise platform for people to build services on top of &#8211; seems to be being transposed to location-based services. Obvious questions arise, like what happens to our data in a startup like MeetMoi if they go belly up? Apparently in the dot-com bust, data was the first thing to go on the auction block in bankruptcy cases.</p>
<p>Also, I suppose it is hardly surprising (if disappointing to me) that some of the early location-based services are trying to get mindshare by picking up on the glue celebrities give to mass culture. At the last New York Tech Meetup, <a href="http://m.twitter.com/omgicu" target="_blank">OMGICU</a> demoed a rather terrifying new pre-launch location-based &#8220;participatory celebrity gossip application&#8221; which seems to combine all the worst features of social media with celebrity stalking, plus a narrative to change the notion of celebrity itself by &#8220;turning D-listers into A-listers.&#8221;</p>
<p>Hopefully location-based applications will not get stuck on &#8220;stalker, stalker, stalker&#8221; apps like OMGICU.</p>
<p>David Oliver, <a id="qgz3" title="Oliver Coady" href="http://olivercoady.com/">Oliver Coady</a>, gave me a good question: &#8220;How does timeliness and location-independence change our ideas of social media?&#8221;</p>
<p>And how can we design new architectures that can reinforce the sense that I am in control of my own fate?</p>
<p><strong>AG: But we&#8217;ve already come so far in terms of turning D-listers into A-listers! On a daily basis, I&#8217;m exposed to almost as many cues insisting I attend to nonentities and dullards like Robert Scoble as those insisting I attend to nonentities like Madonna or Thomas Friedman. It&#8217;s gotten ridiculous.</strong></p>
<p><strong>Now, how do timeliness and location change our ideas of social media? They make them dangerous!</strong></p>
<p><strong>Look, even a proud Z-lister like myself &#8211; I&#8217;m a public person only in the most debased and degraded meaning of that word &#8211; I&#8217;ve had experiences that shook me up, like having someone approach me while I was quietly hanging out in the back of St. Mark&#8217;s Books, and wanting to strike up a conversation based on some talk they&#8217;d seen me give a year or so previously. Now part of learning to deal with this kind of thing is shrugging it off, being grateful and flattered that someone thinks you&#8217;re interesting enough to single out for that kind of attention, or chalking this up to Sennett&#8217;s observation about the constitution of urbanity. Or doing all three at once.</strong></p>
<p><strong>But let&#8217;s remember that at the end of the day, a &#8220;social network&#8221; is nothing but a group of arbitrarily distributed human beings joined by a communications channel, and those people have eyes and ears. The degree to which they recognize some shared interest gives them significance filters. If social capital accrues to those in the network who are able to claim some connection with a &#8220;celebrity,&#8221; no matter how fleeting, then such connections are going to be mobilized, made explicit. And now say the network has been provided with the tools allowing it to plot the appearances of those putative celebrities in space and time, and what do you get? You get a circumstance in which it is very, very difficult to maintain any membrane between the private self and the world, for anyone who&#8217;s even remotely a public figure, whether they particularly want to be a public figure or not. You get network effects that amplify those locational traces, and further undermine any possibility of anonymity, even anonymity-by-suspension-of-interrogative-awareness (which is a clumsy way of referring to that blas&#233; matter-of-factness around famous people that most big-city folks eventually develop).</strong></p>
<p><strong>Am I letting myself off the hook? Not in the slightest. I passed Terence Stamp on the street not so long ago, and you bet I Twittered it. My only excuse was that I Twittered it to a closed loop of no more than a few dozen people. But then, who knows what those few dozen people will turn around and do with that fact, on the open networks to which they in turn belong?</strong><strong> And that, too, is my responsibility.</strong></p>
<p><strong>I&#8217;m not sure there&#8217;s anything to be done about any of this but cultivate our own urbanity, learn to say &#8220;so what&#8221; when we happen to find ourselves next to Philip Seymour Hoffman in the line at Whole Foods.</strong><strong><br />
</strong></p>
<p><strong>TS: </strong>Zittrain, in <a href="http://futureoftheinternet.org/" target="_blank">The Future of the Internet: And How To Stop It</a>, foregrounds &#8220;generativity&#8221; and generative devices (as opposed to appliances) as the most fortuitous starting point for &#8220;tools to bring about social systems to match the power of the technical one.&#8221;</p>
<p>Are appliances a threat to the city that is here for you to use? How can generativity ensure <em><a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank">The City Is Here For You To Use</a></em> as Zittrain argues it has ensured, even if imperfectly, that the internet has been here for us to use?<strong><br />
</strong><br />
<strong>AG: You know, I haven&#8217;t read the book, I&#8217;ve only heard him give the talk, so it&#8217;s certainly possible there&#8217;s a subtlety to the argument that I&#8217;m missing. But I&#8217;m not sure Jonathan isn&#8217;t simply wrong about this notion of generativity. Not that the concern is misplaced, but that he&#8217;s insufficiently trustful in human agency. Is a car &#8220;generative,&#8221; by his definition? Certainly not. And yet look at all the cultural production that goes on around &#8220;the car,&#8221; look at all the assemblages people make with cars, from Beach Boys songs to <a href="http://en.wikipedia.org/wiki/Ghost-riding">ghost riding the whip</a>, from J.G. Ballard novels and <em>Herbie the Love Bug</em> to <em>Tokyo Drift.</em></strong></p>
<p><strong>Or probably more to his point: look at the Japanese mobile-phone market &#8211; seemingly one of the most locked-down and unpropitious circumstances imaginable for the production of culture, in technical terms and Zittrain&#8217;s both. And yet fully 50% of the bestselling books in Japan last year were written on mobile phones. Not <em>read</em>, which would already be impressive enough (if &#8220;impressive&#8221; is indeed the word): </strong><em><strong><a href="http://www.nytimes.com/2008/01/20/world/asia/20japan.html">written</a>. </strong></em><strong>What does that imply for his argument?</strong></p>
<p><strong>So, yes, I think there are grounds for concern, and we should see to it that we don&#8217;t allow technologies and frameworks to appear that unduly limit the scope of human creativity. Code is still law. But I also think people are quite amply able to reach into what would appear to be the least propitious technologies and tell their own stories with same.<br />
</strong></p>
<p><strong><br />
TS: </strong>One aspect of Everyware that seems in need of some visionary yoga is how we will relate to pixels anywhere.</p>
<p>In <em><a href="http://www.lulu.com/content/1554599">Urban Computing and its Discontents</a></em> you mention how our technological trajectories often make it seem as if we get fixated on particular scenes in movies, e.g., <em>Minority Report</em>. You point out that so many ambient informatics projects seem simply &#8220;to expand the reach of signage and advertising in dense urban spaces&#8230;as if we&#8217;ve become transfixed by the scene from <em>Minority Report</em> where heterosexual cop John Anderton is on the run from his colleagues.&#8221;</p>
<p>Ideas from the <em>Minority Report</em> continue to hold sway in designs as we saw in the recent MIT demo of <a href="http://ambient.media.mit.edu/projects.php?action=details&amp;id=68" target="_blank">SixthSense</a> at TED.</p>
<p>But visions of augmented reality were pretty high-profile in this year&#8217;s Super Bowl commercials (including a highly anthropomorphic imagining of ubicomp that was a kind of WoW mashup with a Pixar movie).</p>
<p>What recent movies/commercials have produced scenes most likely to be new fixation fodder for ubicomp, and why?</p>
<p><strong>AG: I don&#8217;t think I&#8217;m qualified to answer that, actually. We don&#8217;t have a TV, so I don&#8217;t see much in the way of commercials, and most of the films I wind up seeing are the kind that play at Anthology Film Archives. What I can say is that science fiction is currently suffering in toto from an inability or disinclination to posit future scenarios that are any weirder or more visionary than those emerging from other sectors of the culture. And that would be fine, except sf has traditionally been the place where we wrestled with the imaginary.</strong></p>
<p><strong>We need that set of tools, badly. If for no other reason than something I glean from personal experience: essentially my entire professional career has simply been the leveraging of ideas and concepts I originally wrestled with in the encounter with William Gibson and Bruce Sterling when I was 16. Today&#8217;s visionary sf means tomorrow&#8217;s halfway-competent generalist.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/nurrikim.jpg"><img class="alignnone size-full wp-image-3030" title="nurrikim" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/nurrikim.jpg" alt="nurrikim" width="375" height="500" /> </a></strong></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/531862201/" target="_blank">Nurri Kim in the waiting zone</a> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS: </strong>My AR friend <a href="http://curiousraven.squarespace.com/about-me/">Robert Rice</a>, who is <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">working on a markerless AR platform,</a> notes that data visualization is one of the critical elements of AR in terms of &#8220;make or break.&#8221; Robert says, &#8220;even with the ultimate in ubiquitous data from everything, without good data vis it will all be useless.&#8221;</p>
<p>Also something Cory Doctorow said to me last year has really stuck in my mind. When I asked him what happens when Cyberspace everts, he talked about a reverse surveillance society:</p>
<div style="margin-left: 40px;"><em>“Surveillance is all about when people in authority know a lot about you. Instrumentation is when you know a lot about the world,”</em></div>
<blockquote><p><em>Cory: Well this is like Spook Country, the new Gibson novel – what happens when cyberspace everts – hmmm? I’m not sure I have anything very pithy to say on that EXCEPT… </em><br />
<em> Apart from all the traditional kind of overlay-reality stuff, if there is one thing I am actually interested in seeing migrate from a virtual world to the real world, it’s instrumentation. </em><br />
<em> I think a lot of what is characteristic of very successful internet-based businesses is that they are extremely finely instrumented, so Amazon, for example, knows in aggregate on a second-by-second basis how their site is being used by people, and they can twiddle the dials in real time. </em></p>
<p><em> As users of the world we have very little access to that kind of instrumentation. We don’t even know how the tube is running. The tube knows how the tube is running and we kind of don’t. I would be really interested in seeing that. You’ve seen <a href="http://joi.ito.com/">Joi Ito’s</a> WoW interface, right? Have you seen it… </em></p></blockquote>
<p>Joi Ito’s WoW interface seems a long way from the calm, invisible computing imagined by the early ubicomp visionaries?</p>
<p><strong>AG: Well, he’s got a particular kind of neural wiring. And there’s not a thing that’s wrong with that, except that I’d never, ever want to assert that what’s appropriate for Joi Ito necessarily is or should be understood to be appropriate for anybody else. The point of calling for open systems and frameworks is to allow us maximum scope of diversity in the ways we choose to interface with the world’s richness and complexity.</strong><em><strong><br />
</strong></em> <strong><br />
TS: </strong>What new imaginings/possibilities do you see when pixels anywhere are linked to everyware?<strong><br />
</strong><br />
<strong>AG: Product placement. Commercial insertions and injections, mostly.</strong></p>
<p><strong>Beyond that: one of the places where Mark Weiser logic breaks down is in thinking that the platforms we use now disappear from the world just because ubiquitous computing’s arrived. We’ve still got radio, for example &#8211; OK, now it’s satellite radio and streaming Internet feeds, but the interaction metaphor isn’t any different. By the same token, we’re still going to be using reasonably conventional-looking laptops and desktop keyboard/display combos for a while yet. The form factor is pretty well optimized for the delivery of a certain class of services, it’s a convenient and well-assimilated interaction vocabulary, none of that’s going away just yet. And the same goes for billboards and “TV” screens.</strong></p>
<p><strong>But all of those things become entirely different propositions in everyware world: more open, more modular, ever more conceived of as network resources with particular input and output affordances. We already see some signs of this with Microsoft’s recent “Social Desktop” prototype &#8211; which, mind you, is a very bad idea as it currently stands, especially as implemented on something with the kind of security record that Windows enjoys &#8211; and we’ll be seeing many more.</strong></p>
<p><strong>If every display in the world has an IP address and a self-descriptor indicating what kind of protocols it’s capable of handling, then you begin to get into some really interesting and thorny territory. The first things to go away, off the top of my head, are screens for a certain class of mobile device &#8211; why power a screen off your battery when you can push the data to a nearby display that’s much bigger, much brighter, much more social? &#8211; and conventional projectors.</strong></p>
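<p>A toy sketch of how such a self-descriptor might work, assuming an invented capability format and selection rule (none of this is a real protocol):</p>

```python
from dataclasses import dataclass

# Hypothetical sketch of a display "self-descriptor": each networked
# display advertises its address, size, and the content types it accepts.
@dataclass
class DisplayDescriptor:
    address: str            # network address of the display
    diagonal_inches: float  # physical size, as a proxy for "bigger, brighter"
    protocols: list         # content types the display claims to handle

def pick_display(displays, needed_protocol):
    """Choose the largest nearby display that speaks the needed protocol."""
    candidates = [d for d in displays if needed_protocol in d.protocols]
    if not candidates:
        return None  # fall back to the device's own screen
    return max(candidates, key=lambda d: d.diagonal_inches)

nearby = [
    DisplayDescriptor("10.0.0.12", 55.0, ["video/h264", "image/png"]),
    DisplayDescriptor("10.0.0.47", 8.0, ["image/png"]),
]
best = pick_display(nearby, "image/png")
print(best.address)  # the 55-inch screen wins: 10.0.0.12
```

<p>The interesting design question is exactly the one raised above: once any device can claim any nearby screen, who arbitrates which request "drives" the display?</p>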
<p><strong>Then we get into some very interesting issues around large, public interactive displays &#8211; who &#8220;drives&#8221; the display, and so forth. But here again, we&#8217;ll have to fight to keep these things sane. It&#8217;s past time for a public debate around these issues, because they&#8217;re unquestionably going to condition the everyday experience of walking down the street in most of our cities. And that&#8217;s difficult to do when times are hard and people have more pressing concerns on their mind.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/citywarecrash.jpg"><img class="alignnone size-full wp-image-3045" title="citywarecrash" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/citywarecrash.jpg" alt="citywarecrash" width="500" height="375" /></a><br />
</strong></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/2786991056/" target="_blank">Citywarecrash</a> &#8211; photo by Adam Greenfield, &#8220;An occupational hazard for urban screens.&#8221;</em></p>
<p><strong>TS: </strong>I know in <em>Everyware</em> you mentioned that architects play an important visionary role in imagining ubicomp, and I know you work closely with your wife, artist <a href="http://www.nurri.com/">Nurri Kim</a>. Robert Rice asked me the following question &#8211; which I will in turn ask you: &#8220;In terms of augmented reality, do you think virtual worlds and virtual reality experts/leaders are good pioneers for thought and guidance on AR? Or should we look for new leaders, and where are new leaders emerging? Is the tech similar enough for the old crowd to be useful, or is it different enough to be a disadvantage coming from the old models?<strong>&#8221;<br />
</strong><br />
<strong>AG: I should make it clear that I have absolutely no interest in virtual worlds or virtual reality. The so-called virtual worlds I’ve experienced seem sad and really rather tatty &#8211; eversions of the most predictable adolescent fantasies of unlimited power, reinscriptions of all the usual politics &#8211; and completely lacking in just about everything that makes life resonant, meaningful and awe-inspiring. And anyway, to paraphrase J.G. Ballard, ordinary, everyday life is now far more vividly and fantastically weird than anything you’ll see in Second Life. I mean, Garry Kasparov was heckled by a radio-control dildocopter, Joe the Plumber’s off to Gaza as a war correspondent, a sea of dust-covered BMWs waits in the long-term parking lot at Dubai International for owners who are never, ever coming back.</strong></p>
<p><strong>Look to virtual worlds for insight into the hard work of negotiating the actual, with its physics, its entropy, its suffering, with all its constraints? Oh my goodness gracious, no.<br />
And look to leaders? Never.</strong><strong> Leaders are for followers, and who wants to be that? I don’t mean you can’t take inspiration and insight from the work of others &#8211; not at all &#8211; but use your own imagination, take some personal risk, do your own damn work.</strong></p>
<p><strong>Now, having said that. This opposition of virtual and physical worlds strikes me as increasingly a false one, as it does many people. The hard-and-fast distinction between “the real world” and virtual environments makes less and less sense, as righteously satisfying as making it can sometimes seem. There may be attributes of this physical environment that are impossible to see or make use of without access to the networked overlay, and those attributes may in time come to constitute the primary wellsprings of a given place’s meaning. And if you’re offering me some insight that I think could be of utility in resolving the challenge of making this overlay accessible to all, equally, I’ll gladly accept it, no matter what domain or disciplinary background you claim</strong><strong> as your own. </strong></p>
<p><strong>Am I aware of any such insight coming out of virtual worlds? No. As Bryan Boyer notes, “If you want to start talking about some serious cross-disciplinary pollination then you better take both sides of that disciplinary divide seriously. When your </strong><em><strong>ubi- </strong></em><strong>runs into my building with its boring HVAC, mundane load paths, typical finished floors, plain old foundations, etc., the transformative powers of </strong><em><strong>comp </strong></em><strong>are bracketed pretty seriously by the realities of the physical world.”</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/thecloudgate.jpg"><img class="alignnone size-full wp-image-3064" title="thecloudgate" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/thecloudgate.jpg" alt="thecloudgate" width="500" height="375" /></a><br />
<a href="http://flickr.com/photos/studies_and_observations/1904838102/" target="_blank"><em>The Cloud Gate has landed</em></a><em> &#8211; photo by Adam Greenfield, &#8220;Tell me this doesn&#8217;t look *just* like the descriptions of &#8220;stasis fields&#8221; in 70s SF. In fact, the picture looks practically CGId to me.&#8221;</em></p>
<p><strong>TS:</strong> Some people thought the whole world would have been plastered with RFID by now. But before that has happened, markerless AR seems to be in our sights.</p>
<p>If I understand it correctly, marker-based versus markerless AR have quite different implications for how the cyberspace of ubicomp evolves? I asked Robert Rice (he is developing a markerless AR platform) to explain some of the differences. He said:</p>
<div style="margin-left: 40px;"><em>Markers are discrete physical objects; at worst, they are passive images that are linked to some sort of static data in a database somewhere (like a 3D object). If you destroy them, that’s it. With markerless stuff, everything is persistent, dynamic, already linked in cyberspace. Marker-based stuff requires a secondary infrastructure of hardware for telecommunications</em></div>
<p><em><br />
</em>Robert also pointed out to me that markerless AR may prove even more problematic for privacy:</p>
<div style="margin-left: 40px;"><em>Markers are easy to see, so you know where they are. RFIDs can’t really be seen, but they can be detected. With markerless AR, there is nothing obvious to the naked eye; you don’t know if someone has active AR going on or not, so you could be tracked and not know it. Not much more than today, with CCTVs all over the place, so it is the same [a surveillance issue] as marker-based, but more subtle or inobvious.</em></div>
<p>Do you have any thoughts about the different roles that markerless versus marker technologies will play in AR and ubicomp?</p>
<p><strong>AG: I need to admit that I’ve never until this moment heard the phrase “markerless AR,” although I’d think it’s more or less self-explanatory to anyone who’s been following this stuff. Let me make the distinction explicit, shall I, for anyone who hasn’t been? And you or Robert can correct me if I’ve gotten it wrong.</strong></p>
<p><strong>Augmented reality means that I have some mediating artifact that provides me with a visual overlay on the world</strong><strong>. This could be a phone, it could be a windshield, it could be a pair of glasses or contact lenses, doesn’t matter. And you’re going to use that overlay to superimpose some order of information about the world and the objects in it onto the things that enter my field of vision &#8211; onto what I see. So far, so good: that’s AR 101.</strong></p>
<p><strong>Now where does that information come from?</strong></p>
<p><strong>What you’re calling marker-based AR implies that there’s some reasonably strong relationship between the information superimposed over a given object, and the object itself. That object is an onto, a spime; it’s been provided with a passive RFID tag or an active transmitter. And it’s radiating information about itself that I’m grabbing, perhaps cross-referencing against other sources of information, and superimposing over the field of vision. Fine and dandy.</strong></p>
<p><strong>But there’s another way of achieving the same end, right? Instead of looking at a suit jacket on a rack and having its onboard tag tell you directly that it’s a Helmut Lang, style number such-and-such from men’s Spring/Summer collection 2011, Size 42 Regular in Color Gunmetal, produced at Joint Venture Factory #4 in Cholon City, Vietnam, and packed for shipment on September 3, 2010, you’re going to run some kind of pattern-matching query on it. And without the necessity of that object being tagged physically in any way, you’re going to have access to information about it. But this set of information isn’t, necessarily, what the object itself, or its creators or merchandisers, want you to know about it; it could be derived from online discussion fora or review sites, or blog posts, or whatever. All there needs to be is a lookup table, essentially, that tells you where to find information about any object in the field of vision whose identity can be established.</strong></p>
<p><strong>Do I have that right? And if I do, then as I understand it, the distinction is primarily a pragmatic one: it’s just easier to get to an augmented world, by far, if we don’t actually have to go to all the trouble of tagging everything in the world with its own dedicated RF transponder. Easier, and cheaper, and quicker, and more environmentally sound besides, because the relevant traffic is in bits not atoms.</strong></p>
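<p>The lookup-table idea above can be sketched in a few lines of code; the identities and information sources below are invented for illustration:</p>

```python
# Hypothetical sketch of the "lookup table" for markerless AR: once
# pattern matching yields an identity, the table says where third-party
# information about that identity lives. All names here are made up.
LOOKUP = {
    "caixa_forum_madrid": ["architecture-reviews", "visitor-blogs"],
    "helmut_lang_jacket": ["retail-catalogue", "discussion-fora"],
}

def annotate(recognized_identity):
    """Return information sources for an identity found in the camera frame."""
    sources = LOOKUP.get(recognized_identity)
    if sources is None:
        return "no overlay available"
    return ", ".join(sources)

print(annotate("caixa_forum_madrid"))  # architecture-reviews, visitor-blogs
print(annotate("unknown_object"))      # no overlay available
```

<p>Note that nothing in this table comes from the object itself &#8211; which is exactly the point being made: the overlay can be built from whatever third-party sources the table points at.</p>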
<p><strong>Unless I’ve missed something, you don’t, then, get the distinction between classes of objects and instances of same. Sometimes, when there’s a 1:1 correlation between the two, that’s not going to matter: I’m walking down the street in Madrid, and my glasses or whatever can easily recognize that this building is the Caixa Forum. There’s only one of it, and I can get a positive ID via pattern recognition. But for some edge cases &#8211; twins and lookalikes, mostly &#8211; the same thing is generally true of people.</strong></p>
<p><strong>But other times it will matter. Is <em>this specific watch</em> a real, $10,000 Panerai or a $50 Kowloon fakery? How does <em>this</em> black 1998 Honda Civic over here differ from this other one in terms of its use and maintenance history? Does <em>this</em> O-ring gasket need to be replaced? I don’t see how you extract data from specific instances of things without the necessary sensor instrumentation, transmitter, etc., being coextensive with the object in question or very closely colocated with it over time &#8211; in the terminology you’re using, a “marker.”</strong></p>
<p><strong>So using these terms, I’d say that “markerless” AR comes first, is relatively easy to deploy, and generates not-insignificant value. But &#8211; again, unless I’m missing something &#8211; there are some things that it won’t ever be able to do, and for those things you need some provision for self-identification and self-location.</strong></p>
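<p>The class-versus-instance gap can be made concrete with a toy sketch (all record formats and identifiers here are invented): pattern recognition can only resolve a class-level record, while a physical tag id resolves a specific instance with its own history:</p>

```python
# Hypothetical sketch: markerless recognition identifies the *class* of an
# object; only a marker (e.g. an RFID tag id) identifies a specific *instance*.
CLASS_INFO = {
    "honda_civic_1998_black": {"body": "sedan", "era": "sixth generation"},
}
INSTANCE_INFO = {  # keyed by tag id, i.e. a "marker"
    "tag:4f2a": {"class": "honda_civic_1998_black", "odometer_km": 183000},
}

def markerless_lookup(recognized_class):
    # No marker: every black 1998 Civic returns the same class record.
    return CLASS_INFO.get(recognized_class)

def marker_lookup(tag_id):
    # With a marker: this specific car's maintenance history is reachable.
    return INSTANCE_INFO.get(tag_id)
```

<p>Calling <code>markerless_lookup</code> on two visually identical cars returns the same record; only <code>marker_lookup</code>, keyed on a tag travelling with one particular car, can tell them apart.</p>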
<p><strong>Ultimately I think it&#8217;s a distinction without a difference, from the user&#8217;s point of view. People will care much more about the source of whatever information shows up on their overlay than the precise technical means used to get it there.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smileuroncctv.jpg"><img class="alignnone size-full wp-image-3042" title="smileuroncctv" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smileuroncctv.jpg" alt="smileuroncctv" width="394" height="500" /></a><br />
</strong></p>
<p><a href="http://flickr.com/photos/studies_and_observations/3274544108/" target="_blank"><em>The surrender to cynicism</em></a><em> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS:</strong> Much early thinking around ubicomp seems to have come from visionary architects and engineers, but recently I was at the <a href="http://www.toccon.com/toc2009" target="_blank">O&#8217;Reilly Tools of Change for Publishing Conference</a> (publishing in the Digital Age) and I met several book futurists. It struck me how ubicomp, seen from the perspective of the book, raises some interesting questions about how particular material cultures will shape and be shaped by ubicomp differently.</p>
<p>I noted, Google seemed well down the path to the holy grail of &#8220;converting images to original-intent XML.&#8221; And <a id="ricl" title="Peter Brantley" href="http://radar.oreilly.com/peter/">Peter Brantley</a> talked about machine-parsed books.</p>
<p>At TOC there were many suggestions about how books might manifest as everyware. (Although it did not seem that many people felt books, with their special relationship to time and history, would soon vanish as one of the great metaphors of calm and solitary enjoyment in our culture.) Books as everyware will, it seems, include, amongst other things:</p>
<p>books that read books</p>
<p>books that read context</p>
<p>context that reads books</p>
<p>books that read me</p>
<p>books linked to mobility &#8211; timeliness and location independence</p>
<p>books that are not books</p>
<p>books becoming babble</p>
<p>books bubbling up from the babble</p>
<p>There is an Institute for the Future of the Book. Will all former material cultures require their own institutes of the future to guide them into everyware? Do you think the book&#8217;s transition into everyware is especially significant, and why?</p>
<p><strong>AG: But all objects have a relationship to time and history, no?</strong></p>
<p><strong>TS: </strong>Yes! What I meant to convey, really, was the idea many people expressed at TOC that books have had a privileged relationship to knowledge in our culture, one that is valuable and related to some aspects of their current form, and that books as everyware &#8211; e.g. machine-parsed books and more socially generated forms &#8211; would not replace that entirely.<br />
<em><strong><br />
</strong></em><strong>AG: Gotcha. Well, I certainly agree that books constitute an interesting category unto themselves &#8211; I’ve held onto my physical books, and in fact still spend a fortune buying new ones, where I stopped buying music on discs a long, long time ago. But I don’t think this state of affairs can or should obtain forever.</strong></p>
<p><strong>Lately there’s been a good amount of thought around the notion of </strong><strong>&#8220;<a href="http://theunbook.com/about/">unbooks</a>,&#8221; which I regard as</strong><strong> a container for long-form ideas appropriate to an internetworked age. By building on some of the tropes of software development, mostly having to do with version control, open-endedness and an explicit role for the “user” community, unbooks can usefully harness the dynamic and responsive nature of discourse on the Web. At the same time, you preserve the things books are really good at: coherence, authorial voice and intent.</strong></p>
<p><strong>The important part is in acknowledging two points which have usually been understood as contradictory, but which are actually nothing of the sort: firstly, that the expression of ideas in written form has something to learn from the practices that have evolved around the collaborative creation of dynamic, digital documents over the half-century-long history of software; and secondly, that certain ideas require elaboration in the reasonably strongly-bounded form we know as a “book,” and cannot meaningfully be shared otherwise. A third point, concomitant to the second, is that despite recent technical advances, screen-based media still cannot, and may not ever fully be able to, deliver the extratextual cues and phenomenological traces that support, inform and extend the meaning of written documents.</strong></p>
<p><strong>The unbook lets you have your cake and eat it too. So, for example, when we publish <em>The City Is Here</em>, one of its manifestations will be a static, physical document &#8211; and hopefully, if we do our jobs well, a very nice one indeed. But even before that, you’ll be able to download a Creative Commons-licensed PDF of every numbered version of the manuscript, from zero onward. Bottom line: you buy the book if, and only if, you want the object. The ideas are free.</strong><br />
<strong><br />
TS: </strong><em><a id="ed35" title="David Brin" href="http://www.davidbrin.com/tschp1.html"> David Brin</a> sees two futures: 1) the government watches everybody, and 2) everybody watches everybody (the latter he calls &#8220;sousveillance&#8221;). My friend <a id="suag" title="Ben Goertzel" href="http://www.goertzel.org/">Ben Goertzel</a> says “hooking AI up to a massive datastore fed by ubicomp is the first step toward sousveillance?” What do you think the role of AI in ubicomp will be? Is it worth thinking about what the first important “AI meets AR” app is?</em></p>
<p><strong>AG: I don’t believe that artificial intelligence as the term is generally understood &#8211; which is to say, a self-aware, general-purpose intelligence of human capacity or greater &#8211; is likely to appear within my lifetime, or for a comfortably long time thereafter.</strong></p>
<p><strong>Having said that, your friend Ben seems to be making the titanic (and enormously difficult to justify) assumption that a self-aware artificial intelligence would share any perspectives, goals, priorities or values whatsoever with the human species, let alone with that fraction of the human species that could use a little help in countering watchfulness from above. “Hooking [an] AI up to a massive datastore fed by ubicomp” sounds to me more like the first step toward enslavement… if not outright digestion.</strong></p>
<p><em><strong>Sousveillance </strong></em><strong>&#8211; the term is Steve Mann’s, originally &#8211; doesn’t imply “everybody watching everybody” to me, anyway, so much as a consciously political act of turning infrastructures of observation and control back on those specific institutions most used to employing same toward their own prerogatives. Think Rodney King, think Oscar Grant.</strong><em><strong><a href="http://www.davidbrin.com/tschp1.html"><br />
</a></strong></em><br />
<strong>TS: </strong>I have one last question from Usman Haque.</p>
<p><strong>Usman Haque:</strong> insofar as a lot of what adam describes as desirable could be said to constitute pretty radical socio-political change (or perhaps… “adjustment”) i would be really interested to know how his current work @ nokia is or isn’t able to gel with the themes of his writing. in some senses there’s quite an undercurrent strongly challenging corporate practices, in other senses it could be seen as gentle nudges. how does adam see it? and how about the nokia behemoth? does he have success nudging nokia towards the kind of world he would like to see (i imagine the answer is ‘yes’ otherwise he wouldn’t be doing it…) but i’d love to know more about the limits/challenges.</p>
<p><strong>AG: I am told that Henry Kissinger, on his first trip to China in 1971, asked Zhou Enlai whether he thought the French Revolution had or had not advanced the cause of human freedom.<br />
Zhou thought for a moment, pursed his lips, and replied, “It is too soon to tell.”</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/feed/</wfw:commentRss>
		<slash:comments>19</slash:comments>
		</item>
		<item>
		<title>Is it &#8220;OMG Finally&#8221; for Augmented Reality?: Interview with Robert Rice</title>
		<link>http://www.ugotrade.com/2009/01/17/is-it-%e2%80%9comg-finally%e2%80%9d-for-augmented-reality-interview-with-robert-rice/</link>
		<comments>http://www.ugotrade.com/2009/01/17/is-it-%e2%80%9comg-finally%e2%80%9d-for-augmented-reality-interview-with-robert-rice/#comments</comments>
		<pubDate>Sun, 18 Jan 2009 01:03:32 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[3D internet]]></category>
		<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[nanotechnology]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[Second Life]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[virtual economy]]></category>
		<category><![CDATA[virtual goods]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web 3D]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Web3.D]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[AR Geisha Doll]]></category>
		<category><![CDATA[compass in the android]]></category>
		<category><![CDATA[Denno Coil]]></category>
		<category><![CDATA[EEML]]></category>
		<category><![CDATA[hybrid augmented/virtual reality]]></category>
		<category><![CDATA[immersive mobile augmented reality]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[massively multiuser augmented reality]]></category>
		<category><![CDATA[minimally immersive augmented reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[Neogence]]></category>
		<category><![CDATA[next generation transparent wearable displays]]></category>
		<category><![CDATA[NYC Tech Meetup]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[socializing sensor data]]></category>
		<category><![CDATA[Unreal 3]]></category>
		<category><![CDATA[Web Alive]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=2620</guid>
		<description><![CDATA[Neogence is in stealth mode with an immersive mobile augmented reality platform &#8211; “tools, sdk, and infrastructure plus some applications.” They are probably six months away from YouTubing anything according to CEO, Robert Rice. But Robert rustled up this pic for me &#8211; a Google street view of Neogence R&#38;D labs: “the patio on the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><img class="alignnone size-full wp-image-2557" title="neogencesekrithqpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/neogencesekrithqpost.jpg" alt="neogencesekrithqpost" width="450" height="412" /></p>
<p><a id="zd89" title="Neogence" href="http://www.neogence.com/sekrets.html" target="_blank">Neogence</a> is in stealth mode with an immersive mobile augmented reality platform &#8211; “tools, sdk, and infrastructure plus some applications.” They are probably six months away from YouTubing anything according to CEO, <a id="rzgp" title="Robert Rice" href="http://curiousraven.squarespace.com/about-me/" target="_blank">Robert Rice</a>. But Robert rustled up this pic for me &#8211; a Google street view of Neogence R&amp;D labs: “the patio on the lower left is where I do a lot of pacing and smoking my pipe, and the porch and office upstairs is where a lot of meetings have been held.”</p>
<p><a id="rzgp" title="Robert Rice" href="http://curiousraven.squarespace.com/about-me/" target="_blank">Robert Rice</a> (<a id="x_:i" title="@RobertRice" href="http://twitter.com/RobertRice" target="_blank">@RobertRice</a> ), CEO of <a id="zd89" title="Neogence" href="http://www.neogence.com/sekrets.html" target="_blank">Neogence</a>, recently tweeted:</p>
<p><em><strong>I’m changing my name to Robert Mobile Ubiquitous Geospatial Augmented Rice. I’m betting on radical changes in the next 18 months.</strong></em></p>
<p>Although Robert’s new AR platform is still under wraps, I think you will get a good idea of what direction he is going in from this interview (full text at the end of this post). Robert is the author of “<a id="c:rr" title="MMO Evolution" href="http://books.google.com/books?id=dkZ-6C5utz8C&amp;dq=MMO+Evolution&amp;printsec=frontcover&amp;source=bn&amp;hl=en&amp;sa=X&amp;oi=book_result&amp;resnum=4&amp;ct=result" target="_blank">MMO Evolution</a>” and is a key developer and thought leader in persistent immersive environments, simulations, virtual worlds and massively multiplayer games, as well as large-scale communities and social networking.</p>
<h3>It is OMG finally, at least, for minimally immersive but truly useful AR.</h3>
<p>Since the launch of Android a new generation of useful augmented reality applications like <strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a></strong> are emerging.</p>
<p>After the last <a href="http://www.meetup.com/ny-tech/calendar/9466657/" target="_blank">NYC Tech Meetup</a>, my friend <a title="Nat Mobile Meets Social DeFreitas" href="http://openideals.com/" target="_blank">Nathan Freitas</a> (<a title="@NatDefreitas" href="http://twitter.com/natdefreitas" target="_blank">@NatDefreitas</a>), or rather Nathan Mobile Meets Social Freitas, demoed for me a cool graffiti app he has developed on Android. You leave a marker for your graffiti so other people can find, view, and add their own &#8211; a nice primal experience, like pissing on the lamp post to let your pack know where you&#8217;ve been. The graffiti app also taps into a long history of NYC street culture around tagging and graffiti art. For more cool mobile projects Nathan is working on &#8211; <a href="http://blog.twittervotereport.com/" target="_blank">Vote Report</a> and data collection for mass events, a guide to pubs and nightlife in New York City, and more &#8211; see his blog, &#8220;<a href="http://openideals.com/" target="_blank">Nathan&#8217;s OpenIdeals</a>.&#8221; With camera, GPS, compass, and accelerometer, and APIs on Android for temperature and light meters (no hardware yet), Nathan says Android:</p>
<p><em><strong>&#8220;seems to be the platform most likely to socialize the idea that sensor data could be a piece of every application.&#8221;</strong></em></p>
<p>As Nathan is fond of saying:</p>
<p><strong><em>The compass is a killer app enabler!</em></strong></p>
<p>Also see <a id="ixwx" title="OpenIntents" href="http://code.google.com/hosting/search?q=label:sensors" target="_blank">OpenIntents</a> for some interesting Android sensor projects.</p>
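<p>As an aside on why the compass is such an enabler: once an application can read the magnetometer, turning a raw field reading into a heading is a few lines of math. The sketch below is a simplified, hypothetical illustration for a device held flat; a real Android app would use SensorManager.getRotationMatrix() and getOrientation() rather than this hand-rolled approximation.</p>

```python
import math

def compass_heading(mag_x, mag_y):
    """Heading in degrees clockwise from north, for a device held flat.

    Illustrative convention (an assumption, not the Android API):
    the magnetometer's x axis reads maximum when pointing north,
    and its y axis when pointing east.
    """
    return math.degrees(math.atan2(mag_y, mag_x)) % 360

# Field entirely along the "east" axis gives a heading of 90 degrees:
print(compass_heading(0.0, 1.0))
```

<p>With a heading plus GPS, an application knows not just where you are but what you are looking at &#8211; which is exactly what overlay apps like Wikitude need.</p>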
<p><img class="alignnone size-full wp-image-2558" title="wikitudepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/wikitudepost.jpg" alt="wikitudepost" width="450" height="356" /></p>
<p><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a></strong> was one of <em><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Thomas Wrobel</a>’s</strong></em> two top AR milestones for 2008 (see <a id="vwuu" title="Gamesalfreso" href="http://gamesalfresco.com/" target="_blank">Gamesalfresco</a>):</p>
<p><em><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> I think. Seems the first released, useful, AR software.</strong></em></p>
<p><a href="http://gamesalfresco.com/2008/07/20/want-your-own-augmented-reality-geisha/" target="_self">AR Geisha doll</a> is also a remarkable breakout for AR &#8211; but useful, nah.</p>
<p>I asked Robert if he also saw <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> and the <a href="http://gamesalfresco.com/2008/07/20/want-your-own-augmented-reality-geisha/" target="_self">AR Geisha doll</a> as significant breakthroughs:</p>
<p><em><strong>Yes, these are among the first attempts to get away from the novelty of simply rendering a 3D object based on a marker and making it interesting.</strong></em></p>
<p><em><strong>Remember, one of the biggest risks that AR has, is being branded as “novelty”, which means “cool for five minutes but ultimately a waste of time.” I think we have a ways to go before something is truly useful, but as 2009 progresses we should start seeing some effort here. I’d guess 2010 before something really useful comes out… at least something practical.</strong></em></p>
<p><em><strong>Now, having said that, I should say that I expect entertainment and games to take the lead (as usual), although there are a few companies really trying to leverage AR and video/graphics compositing for marketing (brochures) and location based methods (kiosks, large screen projections, etc.)</strong></em></p>
<h3>So when is it “OMG finally!” for massively multiuser augmented reality?</h3>
<p><img class="alignnone size-full wp-image-2559" title="ar-guipost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/ar-guipost.jpg" alt="ar-guipost" width="450" height="360" /></p>
<p>The picture above is from <a id="kzm2" title="benjapo's portfolio" href="http://www.istockphoto.com/file_closeup/technology/computers/3919295-futuristic-computer-panel.php?id=3919295" target="_blank">benjapo’s portfolio</a> on istockphoto &#8211; also see the <a id="cqhi" title="istock video here" href="http://www.istockphoto.com/file_closeup/technology/computers/3919295-futuristic-computer-panel.php?id=3919295" target="_blank">istock video here</a>.</p>
<p><a id="ylpn" title="Alex Soojung-Kim Pang considers" href="http://www.endofcyberspace.com/2006/11/royal_college_o.html" target="_blank">Alex Soojung-Kim Pang</a> (who weighed in recently on the <a id="vr8o" title="twitter-baby" href="http://www.endofcyberspace.com/2008/12/twitter-baby.html" target="_blank">twitter-baby</a> debates &#8211; see my <a href="http://tishshute.com/twitter-baby-debates" target="_blank">KickBee Posterous</a> blog) challenges design assumptions for augmented reality that take as a given the userâ€™s desire for numerous private enhancements to their reality.</p>
<p>Alex points out that less will probably be more, so that enhancements do not impinge on shared experience. See his write-up of a talk he gave at the Royal College of Art, <a id="bxx1" title="&quot;and the end of my own private Shibuya.&quot;" href="http://www.endofcyberspace.com/2006/11/royal_college_o.html" target="_blank">“and the end of my own private Shibuya.”</a> Photo below by <em>Stéfan, “<a href="http://www.flickr.com/photos/st3f4n/130889444/in/pool-84787688@N00">Karaoke in Shibuya</a>”</em></p>
<p><em><strong>Part of the pleasure of these streetscapes is precisely that they’re collectively experienced, rather than individual visions: for even a brief period, we share with other postmodern, globe-hopping flaneurs and expatriates and temporary natives the light of the ABC-Mart sign and storefront.</strong></em></p>
<p><em><strong><img class="alignnone size-full wp-image-2560" title="karaokepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/karaokepost.jpg" alt="karaokepost" width="450" height="338" /><br />
</strong></em></p>
<p>It is the collective experience of enhanced, augmented, virtual, or real experiences that interests me too. This is one of the reasons I find <strong><em><a href="http://www.pachube.com/" target="_new">Pachube</a></em></strong> and the <a href="http://www.eeml.org/" target="_blank">EEML project</a> of Haque Design and Research so interesting.</p>
<p><strong><em>Extended Environments Markup Language (EEML), a protocol for sharing sensor data between remote responsive environments, both physical and virtual. It can be used to facilitate </em><em>direct connections between any two environments; it can also be used to facilitate many-to-many connections as implemented by the web service <a href="http://www.pachube.com/" target="_new">Pachube</a>, which enables people to tag and share real time sensor data from objects, devices and spaces around the world.</em></strong></p>
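<p>To make the quote concrete: an EEML document is just XML describing an environment and its datastreams. The sketch below builds a minimal EEML-style feed for one temperature sensor. The element names follow the examples published at eeml.org, but the real format also declares an XML namespace, version attribute, and location metadata, all omitted here &#8211; treat this as an illustration rather than a spec-complete document.</p>

```python
import xml.etree.ElementTree as ET

def make_eeml(title, datastreams):
    """Build a minimal EEML-style document.

    datastreams: list of (tag, value, unit) tuples, one per sensor.
    Simplified sketch: real EEML adds a namespace, version, and
    location metadata.
    """
    eeml = ET.Element("eeml")
    env = ET.SubElement(eeml, "environment")
    ET.SubElement(env, "title").text = title
    for i, (tag, value, unit) in enumerate(datastreams):
        data = ET.SubElement(env, "data", id=str(i))
        ET.SubElement(data, "tag").text = tag
        ET.SubElement(data, "value").text = str(value)
        ET.SubElement(data, "unit").text = unit
    return ET.tostring(eeml, encoding="unicode")

doc = make_eeml("studio", [("temperature", 23.5, "Celsius")])
```

<p>A service like Pachube then aggregates many such feeds, which is what turns point-to-point sensor sharing into many-to-many connections between environments.</p>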
<h3>“Distinctions between virtual and real are as quaint and outmoded as distinctions between mind and body” (Usman Haque)</h3>
<p><img class="alignnone size-full wp-image-2603" title="chair1post1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/chair1post1.jpg" alt="chair1post1" width="150" height="150" /><img class="alignnone size-full wp-image-2602" title="remotechair-slpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/remotechair-slpost.jpg" alt="remotechair-slpost" width="150" height="150" /><img class="alignnone size-full wp-image-2604" title="chair2post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/chair2post.jpg" alt="chair2post" width="150" height="150" /></p>
<p>Usman Haque (founder of <a href="http://www.haque.co.uk/pachube.php" target="_blank">Pachube</a> and <a href="http://www.haque.co.uk/" target="_blank">Haque Design and Research</a>) points out this is an underlying premise of his work &#8211; and augmented reality (full interview coming up soon!).</p>
<p>The pictures above show the Haque Design project, <a href="http://www.haque.co.uk/remote.php" target="_blank">Remote</a>:</p>
<p><em><strong>‘Remote’ connects together two spaces, one in Boston the other in Second Life, and treats them as a single contiguous environment, bound together by the internet so that things that occur in one space affect things that happen in the other and vice versa &#8211; remotely controlling each other.</strong></em></p>
<p>There was a discussion on Twitter recently about how terms like Second Life, Exit Reality, and Virtual Worlds are misleading and outmoded. As Robert pointed out, we need:</p>
<p><em><strong>one word please… that sums up virtual and/or augmented reality, interactive, immersive, virtual worlds, mmorpgs, simulations, etc… also, I really don’t like the term “augmented reality” or “mixed reality”. Neither is all that great. And NO “matrix” or “metaverse” please</strong></em></p>
<p>Robert argues strongly that there is a stultification in virtual world technology &#8211; much of what we call virtual world technology was already, basically, where it is now in the mid ’90s. And MMOGs have devolved into gameplay design “that emphasizes the single player experience and does nothing to take advantage of the potential of the massively connected internet.”</p>
<p>Robert suggested I take a cruise through a new virtual space &#8211; <a href="http://www.cooliris.com/">CoolIris</a> &#8211; to find some good pictures for this post (note the partnership between <a href="http://blog.cooliris.com/2009/01/14/cooliris-and-seesmic-streamline-video-blogging/" target="_blank">CoolIris and Seesmic to streamline video blogging</a>). I added the Cooliris plugin to Firefox, typed “Augmented Reality” into search, and soon I was cruising a highway of images and links. The Road Map image grabbed my attention (see below). It shows the continua that <a href="http://www.metaverseroadmap.org/" target="_blank">the Metaverse RoadMap</a> authors thought are likely to influence the ways in which the Metaverse unfolds. It is “a map of the spectrum of technologies and applications ranging from augmentation to simulation; and the spectrum ranging from intimate (identity-focused) to external (world-focused).”</p>
<p><img class="alignnone size-full wp-image-2561" title="metaverseroadmap" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/metaverseroadmap.jpg" alt="metaverseroadmap" width="452" height="427" /></p>
<p>Quite to my surprise, when I clicked out of <a href="http://www.cooliris.com/">CoolIris</a> to the source for the image, I found it had been drawn from a post I wrote in May 2007, <em><strong><a id="jv.r" title="Hybridized Digital/Physical Worlds: Where Pop and Corporate Cultures Mingle." href="../../2007/05/22/hybridized-digitalphysical-worlds-where-pop-and-corporate-cultures-mingle/" target="_blank">Hybridized Digital/Physical Worlds: Where Pop and Corporate Cultures Mingle.</a> </strong></em>My post talks about a number of hybridization experiments that were bringing together lifelogging, sensors everywhere, simulation, virtual worlds, and augmentation.</p>
<p>The striking difference from 2007 to now is that we have definitely moved on from mere experimentation. And the poles of the continua <em><strong>intimate/external, augmentation/simulation</strong></em> as expressed in the Metaverse Roadmap are now becoming entwined (note the picture above seems to be slightly different from the one used in the road map as <a id="vdcf" title="posted here" href="http://www.metaverseroadmap.org/overview/" target="_blank">published here</a> &#8211; perhaps I had an early version?)</p>
<h3>&#8220;Augmented Reality is not just about overlaying data…&#8221; (Robert Rice)</h3>
<p><img class="alignnone size-full wp-image-2562" title="totalimmersion" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/totalimmersion.jpg" alt="totalimmersion" width="450" height="332" /></p>
<p>The screenshot above is from <a id="c7vm" title="TotalImmersions video" href="http://www.t-immersion.com/en,video-gallery,36.html#">Total Immersion’s video</a> demoing augmented reality with 3D cell phones. <em>Also see a <a id="tvca" title="video of their immersive games" href="http://www.t-immersion.com/en,video-gallery,36.html#" target="_blank">video of their immersive games</a>, and FutureScope kiosks <a id="eje0" title="here" href="http://www.t-immersion.com/en,video-gallery,36.html#" target="_blank">here</a> and <a id="h-:s" title="here" href="http://www.t-immersion.com/en,video-gallery,36.html#" target="_blank">here</a>.</em></p>
<p><a id="vwuu" title="Gamesalfreso" href="http://gamesalfresco.com/">Gamesalfresco</a> noted that Will Wright delivered the best <a href="http://www.pocketgamer.co.uk/r/Various/Spore+Origins/news.asp?c=8725" target="_blank">augmented reality quote</a> of the year. Describing AR as the way of the future for games, Will Wright said:</p>
<p><em><strong>“Games could increase our awareness of our immediate environment, rather than distract us from it.”</strong></em></p>
<p>Robert points out in this interview that the term Augmented Reality itself has become associated with a very limited understanding of what “enhancing your specific reality” is really about. Robert notes:</p>
<p><em><strong>it is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.</strong></em></p>
<p><em><strong>When I talk about AR, I try to expand the definition a little bit. Usually, when you talk to someone about augmented reality, the first thing that comes to mind is overlaying 3D graphics on a video stream. I think though, that it should more properly be any media that is specific to your location and the context of what you are doing (or want to do)… augmenting or enhancing your specific reality.</strong></em></p>
<p><strong><em>In this sense, anything that at least knows who you are (your ID, mobile phone #, etc.), where you are (GPS coord or a specific place like a cafe), and gives you relevant data, information, or media = augmented reality. Sure, you can make things more interactive or immersive, but that is the minimum.</em></strong></p>
<p><strong><em>So, in this case, yes, I think there will be networked applications in the next 18 months… mostly things that are enhanced by friends lists (you are here, your friend is over there). These will be *application specific*. My team at Neogence is already going beyond this, building a platform and infrastructure for other applications to be developed on… all networked through the same backbone. Now, in this context (the science fiction AR that we all dream about), no I do not see anyone else trying to leap a generation or two ahead of the industry to build a massively multiuser shared AR space. Expect to see things like multi-user AR games, virtual pets, kiosk marketing, magic book, “gee whiz” presentations (tradeshow booths, entertainment parks, etc.), and so forth.</em></strong></p>
<h3>Goggles Are Not The Secret Sauce…</h3>
<p><strong><em><img class="alignnone size-full wp-image-2563" title="ar-catpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/ar-catpost.jpg" alt="ar-catpost" width="137" height="150" /><img class="alignnone size-full wp-image-2564" title="goggles-avatarpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/goggles-avatarpost.jpg" alt="goggles-avatarpost" width="150" height="150" /><br />
</em></strong></p>
<p>AR Cat left and Robert Rice right</p>
<p>What has come to be associated with the term Augmented Reality in the popular imagination &#8211; 3D graphics projected over markers, forever waiting for the advent of “wicked next generation transparent wearable displays” &#8211; treats such displays as nirvana for augmented reality. And while they may be (and they could be with us in less than twenty-four months), goggles are not the “secret sauce” of AR, as Robert points out.</p>
<p><em><strong>All the glasses are, is another display device. At the end of the day, it doesn’t matter if you are looking at an LCD monitor, an iPhone, a head mounted display, or a pair of wicked next generation transparent wearable displays that magically draw directly on your retinas.</strong></em></p>
<p><em><strong>The real tricky stuff is what happens on the backend… making it all persistent, massively multiuser, intelligent, interoperable, realistic, etc. etc.</strong></em></p>
<p><em><strong><img class="alignnone size-full wp-image-2585" title="vuzix" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/vuzix.jpg" alt="vuzix" width="450" height="318" /><br />
</strong></em></p>
<p>There has been quite <a href="http://www.realwire.com/release_detail.asp?ReleaseID=10934" target="_blank">a buzz going around</a> about the new <a href="http://www.vuzix.com/iwear/products_wrap920av.html" target="_blank">Vuzix eyewear</a>, and Robert recently talked with Vuzix and checked out the Wrap 920AV eyewear:</p>
<p><em><strong>Vuzix is not alone in pursuing the ultimate in hardware, at least as far as wearable displays. However, I think they are much farther than the rest of the pack in vision, roadmap, and execution. They have put together a team that has a sense of urgency and ambition that will blow the industry away. After talking to them, I got the feeling that they really know what they are doing and there is a lot of mind blowing stuff in their pipeline. I’m sure they are one of the few companies that really gets it and has a clear vision of the future. Definitely my first choice to work with.</strong></em></p>
<h3>Hybrid Augmented/Virtual Reality</h3>
<p><img class="alignnone size-full wp-image-2566" title="qa_2post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/qa_2post.jpg" alt="qa_2post" width="450" height="347" /></p>
<p><a id="va0_" title="Cory Ondrejka posted" href="http://ondrejka.blogspot.com/2009/01/anybots-telepresence-robot.html" target="_blank">Cory Ondrejka posted</a> this picture of the Anybots telepresence robot and “congrats to <a href="http://www.tlb.org/">Trevor Blackwell</a> and the rest of the <a href="http://anybots.com/">Anybots</a> team on the launch of <a href="http://anybots.com/abouttherobots.html">QA at CES</a>.” Cory (one of the founders and former CTO of Second Life) also made some predictions for Virtual Worlds, some optimistic and some less so, including “the increasing need to be able to diversify the Second Life product offering to begin truly rebuilding the code base.”</p>
<p>Robert is unabashedly irritated with the state of play in Virtual Worlds and MMOGs:</p>
<p><em><strong>Unless both industries (Virtual Worlds and MMOGs) have some serious upheaval or radical new approaches, they will quickly be eclipsed by AR, which will eventually evolve into something hybrid… AR/VR depending on your level of access and hardware.</strong></em></p>
<p><em><strong>I’d like to see someone grab an engine like Offset, Crytek, HERO, or Unreal 3, and smack on a fat MMO server infrastructure (Eve or Bigworld)… toss in the right tools, and you would see a revolution and renaissance occur at the same time in the virtual world space. All the puzzle pieces are there, just no one is putting them together the right way.</strong></em></p>
<p>I did just find out that Nortel’s <a id="qkxv" title="WebAlive is powered by the Unreal 3 engine" href="http://www2.nortel.com/go/news_detail.jsp?cat_id=-8055&amp;oid=100251105&amp;locale=en-US" target="_blank">WebAlive is powered by the Unreal 3 engine</a>. You <a id="xqbw" title="can try WebAlive" href="http://www.lenovo.com/elounge" target="_blank">can try WebAlive</a> out here.</p>
<p>Robert points out how rare it has become to see people really push virtual worlds technology and MMOGs in entirely new directions. Although, of course, there are exceptions. I managed to engage some interest from Robert in the possibilities the <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">opensource modular architecture of OpenSim</a> opens up, and <a id="vx_i" title="the augmented reality experiments from Georgia Tech with Second Life" href="http://arsecondlife.gvu.gatech.edu/" target="_blank">the augmented reality experiments from Georgia Tech with Second Life</a> (screenshot below) got praise from Robert for trying to do something new. (Georgia Tech have also put out a <a id="kfzj" title="virtual pet app for the iphone" href="http://uk.youtube.com/watch?v=_0bitKDKdg0" target="_blank">virtual pet app for the iPhone</a>.)</p>
<p><img class="alignnone size-full wp-image-2567" title="picture-4" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/picture-4.png" alt="picture-4" width="321" height="245" /></p>
<p>But while Robert clearly has zero patience for virtual world technology, which he sees as stuck in the mid-nineties, he notes:</p>
<p><em><strong>the innovative and wonderful stuff about SL isn’t SL, it is what people are doing and creating on their own with terrible tools *IN* SL</strong></em> [Second Life].</p>
<p>The immersive mobile augmented reality platform Robert is building, he hopes, will generate this kind of user creativity but with 21st century tools.</p>
<h3>So is it “OMG” finally for the Augmented Reality we have dreamed about?</h3>
<p>According to Robert:</p>
<p><em><strong>It really boils down to a markerless solution and a good application.</strong></em></p>
<p>In the interview below we cover a number of topics including business models for Augmented Reality, e.g., how business models based on micro-transactions and virtual goods will translate to Augmented Reality.</p>
<p>Many of the challenges to becoming mainstream faced by virtual worlds are similar to the challenges AR must overcome. Robert discusses these, including the interface/GUI that is a critical element for AR, solving the riddle of one world or many, patent wars in Virtual Worlds and Augmented Reality, the role of Augmented Reality in the future of sustainable computing, and what interoperability is about.</p>
<h3>The Back Story for AR/VR…</h3>
<p>In case you want to get up to speed on the required background reading for Augmented Reality, this is Robert’s required reading list, and Denno Coil is an absolute <strong>must</strong> see (feel free to add to this list in the comments, please).</p>
<p>“If you want to see the things that have inspired our vision of what we want to build, check out:</p>
<p>* Dream Park by Larry Niven and Steven Barnes<br />
* Rainbows End by Vernor Vinge<br />
* Spook Country by William Gibson<br />
* Halting State by Charles Stross<br />
* The Diamond Age by Neal Stephenson<br />
* Donnerjack by Roger Zelazny and Jane Lindskold<br />
* Otherland by Tad Williams<br />
* Neuromancer by William Gibson<br />
* Idoru by William Gibson<br />
* Cryptonomicon by Neal Stephenson</p>
<p>and watch the whole anime of Denno Coil (subbed NOT dubbed!)”</p>
<p><img class="alignnone size-full wp-image-2568" title="dennoucoil" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/dennoucoil.jpg" alt="dennoucoil" width="450" height="256" /></p>
<p>Screenshot from Denno Coil from<a id="yic5" title="Concrete Badger" href="http://www.concretebadger.net/blog/2007/12/17/dennou-coil-full-series-2007-in-12-day-4/" target="_blank"> Concrete Badger</a>.</p>
<h3>Interview With Robert Rice</h3>
<p><strong>Tish Shute:</strong> I am glad to hear that you are working on this [an immersive mobile augmented reality platform]!</p>
<p><strong>Robert Rice:</strong> We switched gears from MMO stuff about a year ago and we are finally getting some traction. It is very hard doing anything in this economy right now, but we found an opportunity to take AR to a new level beyond what you see on youtube. AR is still too “cute” and novelty. We don’t want to play around.</p>
<p><strong>Tish Shute:</strong> I like Wikitude ’cos it even manages to do something useful!</p>
<p><strong>Robert Rice:</strong> Yeah, useful = traction. Now that we are getting near a prototype, we are starting to get a lot of interest even though we are still technically way under the radar.</p>
<p><strong>Tish Shute:</strong> r u funded?</p>
<p><strong>Robert Rice:</strong> Privately funded, some revenues from an early license, and ongoing discussions with several institutional investors. So, we have some funding, but nothing spectacular just yet.</p>
<p><strong>Tish Shute:</strong> are you just developing an AR platform?</p>
<p><strong>Robert Rice:</strong> hrm, sort of, but not just that. By platform I mean tools, SDK, and infrastructure plus some applications. The idea is to build something that facilitates everyone else making cool things and useful applications for different industries/sectors.</p>
<p><strong>Tish Shute:</strong> Yes, that is the cool thing to do, but isn’t that hard to fund!</p>
<p>(Robert grins) Well, that depends on the business model. We’ve got that figured out. I’d be absolutely happy if everyone and their brother were making applications on our stuff; that gives us an edge on market penetration/saturation. There are plenty of examples that prove the model. If you give people free and easy-to-use tools, they will run with it. ARToolKit, for example, has tons of people making nifty things and posting videos on YouTube, which has pushed it to the forefront as THE AR middleware to use right now. Or heck, look at YouTube’s free service, and they dominate video sharing. Sure there will be a lot of “noise”, but there will also be a lot of “signal” that will rise to the top; facilitating and enabling is creating value in its own right.</p>
<p><strong>Tish Shute:</strong> But how do you expect to monetize?</p>
<p><strong>Robert Rice:</strong> There are a good half a dozen ways to monetize AR or an AR platform.</p>
<p><strong>Tish Shute:</strong> What are your top 3?</p>
<p><strong>Robert Rice:</strong> hrm, microtransactions, localized mobile advertising, and enterprise solutions (visualization)</p>
<p><strong>Tish Shute:</strong> Do you think the consumer market will give the lead?</p>
<p><strong>Robert Rice:</strong> I’m not sure. We are getting people from academia, intelligence, defense, border security, and some corporate types knocking on our door already, and pretty aggressively. It may be that those sectors push AR before consumer entertainment really kicks off.</p>
<p>But going back to a discussion we had earlier &#8211; yes, working with “no markers” is a big deal.</p>
<p><strong>Tish Shute:</strong> Can you talk about what you are doing there or is it still under wraps?</p>
<p><strong>Robert Rice:</strong> I can say that between some university tech transfer and some of our own proprietary stuff, we are using some fairly common visual tracking technology. If you are really plugged into the AR scene, you will know there are probably half a dozen visual tracking methods out there. We just looked for the best one, licensed it for commercial use, and then started working our magic. This is a very small piece of the overall effort, but worth noting.</p>
<p>The downside of working with university tech is that it is usually based on research, incomplete, and not wrapped up in a nice commercial package; on the upside, it can be a good start to build on.</p>
<p><strong>Tish Shute:</strong> As you know I am very interested in “technology that matters”, in particular tech that can help us accomplish the urgent goal of sustainable living.</p>
<p><strong>Robert Rice:</strong> oh, I’m pretty keen on sustainable living as well… after I sell off a few companies and have money of my own, I’m going to get into arcologies.<br />
…<br />
Robert grins</p>
<p>The interesting thing with the visual stuff combined with our other tech, is that we can make things multiuser, persistent, dynamic, and mobile.<br />
The markers (fiducials) are really, really limiting outside of basic applications. You can’t really plaster everyone and everything with a marker. And they are, by nature, static (even if they are animated or whatever).</p>
<p>Also… our stuff works indoors and outdoors even without a GPS connection.<br />
…<br />
Robert grins</p>
<p><strong>Tish Shute:</strong> Now that does sound interesting!</p>
<p><strong>Robert Rice:</strong> Yeah, with visual, you don’t need a compass or accelerometers either. Less hardware : )</p>
<p>You start with wifi triangulation or a GPS coordinate to get a “brute” location, and then you use the visual stuff for down-to-the-meter accuracy, and that, by nature, gives you your orientation and positioning.</p>
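<p>Robert doesn’t spell out the math, but the two-stage idea is straightforward to sketch: take a coarse latitude/longitude fix from wifi or GPS, then apply the meter-scale offset that visual tracking reports in the local east/north frame. The function below is a hypothetical illustration using a flat-earth approximation (fine over a few hundred meters), not anything from Neogence’s actual platform.</p>

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def refine_fix(lat, lon, east_m, north_m):
    """Shift a coarse (lat, lon) fix, given in degrees, by a local
    offset in meters, e.g. a correction reported by visual tracking.
    Flat-earth approximation: good enough at city-block scale."""
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon

# A coarse wifi fix, nudged 12 m east and 5 m north by visual tracking:
lat, lon = refine_fix(40.7580, -73.9855, 12.0, 5.0)
```

<p>The payoff of the visual stage is that the same camera pose that refines your position also yields your orientation, which is why it gives you “orientation and positioning” at once.</p>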
<p><strong>Tish Shute: </strong>Wow this is beginning to sound very interesting!</p>
<p><strong>Robert Rice:</strong> Once you have that, it doesn’t matter where you go, it continues to track and continually refines areas you have been before. We’ve spent the last year figuring all this out. There are so many problems and obstacles that are going to be developing in the future for anyone trying to do what we are, but we have already discovered solutions.</p>
<p>oh, visual tracking = gesture-based interfaces too. That’s going to take some work, but it’s doable. The real pain in the ass there isn’t the actual tracking, it is in the interface design.</p>
<p>That’s something that almost every AR company, venture, and research program is missing out on entirely. They are so focused on making cute things with markers. They are missing the larger problems of AR spam, interface, iconography, GUI, metaphor, interoperability, privacy, identity.</p>
<p><strong>Tish Shute:</strong> So how are you dealing with all that!!</p>
<p><strong>Robert Rice:</strong> We took the backwards approach of trying to think where we want things to be in ten years (and we read all the cool books… Vinge, Stephenson, Gibson, etc.) and then we spent time trying to think of what the potential problems are… like AR spam. It’s bad enough when a giant penis flies by in Second Life; we don’t want that to happen in a global wireless AR platform.</p>
<p><strong>Tish Shute: </strong>Do you have a prototype yet?</p>
<p><strong>Robert Rice:</strong> hrm, 6 months away from youtubing something. Problem has been slow funding, which equals slow development. We also don’t want to show our cards too soon… too many potential competitors out there.</p>
<p>â¦<br />
Robert grins</p>
<p><strong>Tish Shute:</strong> when you say microtransactions what is the business potential there?</p>
<p><strong>Robert Rice:</strong> hrm, last year I think $1.5B was spent on virtual items. That’s games and virtual worlds. That should hit $5B in a couple of years. That’s basically people buying and selling things like WoW gold or items in SL or whatever. Microtransactions is basically the same thing, but in AR space.</p>
<p>Why couldn’t a 3D artist make a wicked animated 3D dragon, and then sell it to someone else? With AR, you could sit it on your shoulder. With a good scripting engine, you could train it to do stuff. That’s what I want to enable.</p>
<p>tools + sdk + platform = enabling people to make and create. Add in a commerce level (microtransactions) and voilà.</p>
<p><strong>Tish Shute:</strong> At the moment all of these virtual goods are very platform specific, is that a problem for you?</p>
<p><strong>Robert Rice:</strong> Not at all. This is at a higher level. You have to switch mental models when you talk about what AR could or should be. For example, let’s contrast the web and virtual worlds. For every virtual world you go to, you have to download a whole new client. Imagine if that model was applied to the web… you would need a brand new browser for every website you went to. That is just so… wrong.</p>
<p>It’s the same thing for AR… people are thinking about it with the same mental and business models and development philosophies as virtual worlds or the web. There are some things and aspects that work fine, but not everything.</p>
<p>Virtual worlds are, by nature, necessarily different and walled gardens. The idea of 100% open and interoperable virtual worlds is a red herring… it sounds good, but in practice it is a really dumb idea.</p>
<p><strong>Tish Shute: </strong>I was wondering if you had a way to leverage all the 3D content already created, ’cos that would jump start things in AR, wouldn’t it?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Oh yeah, thatâ€™s easy. They all use the same polygons. Any virtual item in any game or virtual world is likely created with 3D studio or maya or something similar would be easy to convert and use.</p>
<p><strong>Tish Shute:</strong> So people could bring their WoW weapons into your system?</p>
<p><strong>Robert Rice:</strong> Not legally, but sure. It's just a 3D model with a texture. It doesn't matter if you use Corel Draw or Photoshop or Paint Shop Pro…or one screwdriver or another. Part of my team's advantage is that we are all experienced in MMORPG and virtual world design and development. We know the tools, the tech, and what works and what doesn't.</p>
<p><strong>Tish Shute:</strong> But some of the 3D content created in the social worlds is what has most value to people.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Right, and that can be exported out easily.</p>
<p><strong>Tish Shute:</strong> But back to “real” life applications. Is your platform really markerless?</p>
<p><strong>Robert Rice:</strong> Yes. Marker = printed icon or glyph, also known as a fiducial.</p>
<p><strong>Tish Shute:</strong> But you must have some marker?</p>
<p><strong>Robert Rice:</strong> Hrm, more accurately, you need a point of reference.</p>
<p>Visual tracking has been around for more than a decade. Lots of work for robots and other sectors.</p>
<p><strong>Tish Shute:</strong> But isn't the specificity of reference in terms of RL applications a vital key, for example, for a database of things?</p>
<p><strong>Robert Rice:</strong> *grin* That is a different problem…tracking, registration, mapping, positioning, etc. That question has to do with mapping, which is related to visual tracking but not the same thing. We have a rather unique approach to some of this that I can't discuss (patent pending).</p>
<p><strong>Tish Shute:</strong> But for example, to create an augmented natural history of food &#8211; say I want to point at the slab of meat on my plate and know where that cow came from, what feed lot, how it was treated, etc.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>That is not possible without ubiquitous nanotechnology. Shall I explain?</p>
<p><strong>Tish Shute:</strong> Yes please!</p>
<p><strong>Robert Rice:</strong> Ok, let's step back a minute and turn that burger back into a cow… The first problem (of this particular situation) is differentiating one cow from another. Since most cows look alike, you can either attempt to discriminate visually (cow patterns) or use a much simpler option, like giving each cow an RFID chip in its bell or hoof.</p>
<p>Now, most people would try to figure out how to jam all sorts of info into the RFID chip, which sounds like a good idea but isn't. The trick would be to simply use the RFID to store a unique identifier, which is then linked to a database elsewhere.</p>
<p>That database should continually be updated with whatever relevant information you need, so as you get close with your AR laptop, wearable displays, or embedded brain chip, you get the identifier broadcast, then you get the info downloaded to you, and it “sticks” to the cow with the generic visual tracking (object following; even a simple bounding box is sufficient for a slow-moving cow).</p>
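<p>A minimal sketch of the “identifier on the tag, data in the database” pattern described above. The record contents and function names here are invented for illustration; a real deployment would query a continually updated networked service rather than an in-memory dict.</p>

```python
# The RFID tag carries only a unique identifier; everything else lives in
# a database that can be updated long after the tag is written.
COW_DB = {
    "cow-0042": {
        "breed": "Angus",
        "farm": "a farm out in Utah",
        "feed": "grass",
        "last_updated": "2009-01-15",
    }
}

def resolve_tag(tag_id):
    """Turn a broadcast RFID identifier into the current record for overlay."""
    record = COW_DB.get(tag_id)
    if record is None:
        return {"error": "unknown tag"}
    return record

# The AR client receives "cow-0042" over the air, looks it up, and
# "sticks" the returned record to the tracked object on screen.
overlay = resolve_tag("cow-0042")
```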
<p>So, up to that point, you can get tons of information about that specific cow, or that cow population (remember, AR is not just about overlaying data…it is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.). Tie in data visualisation and some farmer tools and all sorts of other things happen. Now, let's move the timeline ahead a bit.</p>
<p>The butcher gets the cow and does his handiwork… Because we know all the info about the cow, all of the meat can be properly labeled and marked, ideally with a UPC code or a unique glyph (somewhat problematic depending on how many unique glyphs you can create). So, while you are in the grocery store, you can access the relevant shopping data…age of cow, state of origin, type of feed, how many spots, how much body fat, which butcher, whatever: not because of what is inside the package, but because of the package itself.</p>
<p>Getting back to your hamburger, the problem is that it is a burger…there is nothing to distinguish that burger from another one at the table…unless you stuck an RFID chip in it or splattered it with ink and a unique glyph, or maybe used a special one-of-a-kind plate.</p>
<p>However, a properly designed AR system could say “hey! that's a hamburger! and I know I am at Fat Daddy's Burger Joint in Raleigh, North Carolina on Glenwood Avenue, and I know that they cook their burgers this particular way, and their meat supplier is those guys over there, and they usually get their cow meat from a farm out in Utah.”</p>
<p>With ubiquitous nanomites or whatever, it's not that far out to consider edible nanos that are in the meat, so a slab of meat can tell you about itself and broadcast that to the general public.</p>
<p><strong>Tish Shute:</strong> What useful scenarios can we create without the nanomites?</p>
<p><strong>Robert Rice:</strong> If it wasn't a burger or a consumable organic, the scenario changes.</p>
<p><strong>Tish Shute: </strong>What is the time scale on nanomites?</p>
<p><strong>Robert Rice:</strong> Ehhhhhhh, 20 years minimum if we are lucky. They sound good on paper, but there is a whole book's worth of problems and reasons why they are so far off…as consumer-grade, all-over-the-place type of stuff.</p>
<p><strong>Tish Shute:</strong> Did you see the Nokia Home Control center?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Yes, I saw the Nokia stuff.</p>
<p>AR for sensors, like security systems, temperature control, etc.: they all become “sources of data” that an AR system can visualize. So yes, that's easily doable. You could do that in a short period of time with some half-decent engineers.</p>
<p>The trick of what Nokia is doing is aggregating sensor data from a building/home/facility, mashing it together, and sending the mobile device alerts and data visualization. Conceptually it's rather simple, but no one has done it right or well yet.</p>
<p>It wouldnâ€™t surprise me if Nokia pulled it off.</p>
<p><strong>Tish Shute:</strong> Yes, and if they do, and someone does an AR interface to it, that would be an inflection point for AR?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> In a roundabout way, yes. You could get data directly from your house, or get it through your mobile device and in either case, use the AR for visualization and control.</p>
<p>The interface/GUI is a critical element for AR. That is one of the areas where it, as an industry, risks doing a bad job and turning into just a fad or another novelty like VR. Virtual worlds have been struggling with that for a while, but MMORPGs have had the effect of extending their life cycle.</p>
<p><strong>Tish Shute: </strong>Yes VWs have not solved the interface problem.</p>
<p><strong>Robert Rice:</strong> The interface is one of their problems, yes. Most virtual worlds are stuck in 1996/98.</p>
<p><strong>Tish Shute:</strong> If AR is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc., it seems that it is the ideal interface for home control?</p>
<p><strong>Robert Rice:</strong> Well, for home control, you must know:</p>
<p>1) Who am I? Am I authorized to know this information? Am I a guest?</p>
<p>2) Where am I? Is this my house? Or someone else's?</p>
<p>3) What am I doing? Do I want to make all the doors lock? Turn on or off lights? Open the garage door? Trigger the security alarm?</p>
<p>So the same questions apply.</p>
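<p>A toy version of those three checks, with invented names: an AR home-control command only goes through when identity, location, and intent all pass. A real system would obviously need actual authentication and device integration rather than bare strings.</p>

```python
# Who is allowed to control which house (assumption: a simple allowlist).
AUTHORIZED = {("tish", "tish-house")}

# What counts as a recognized home-control intent.
HOME_COMMANDS = {"lock_doors", "lights_on", "lights_off", "open_garage"}

def allow_command(user, house, command):
    """Gate the action on all three context questions: who, where, what."""
    if (user, house) not in AUTHORIZED:   # Who am I? Where am I?
        return False
    return command in HOME_COMMANDS       # What am I doing?
```

The same context triple (identity, location, activity) that defines AR in general doubles here as the access-control policy.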
<p>I'd say that all virtual worlds are stuck in the mid-90s. They are at least a decade behind the game worlds…in technology, design, implementation, architecture, etc. In my opinion, things like Second Life are shameful in how they are presented as state of the art, innovative, ground-breaking, new, wonderful, and world-changing.</p>
<p>But that's another topic of conversation : )</p>
<p><strong>Tish Shute:</strong> Well, for me the contribution of VWs is the presence-enabled real-time interaction with the application (as a 3D info machine) and context with other people.</p>
<p><strong>Robert Rice:</strong> Oh, there is no doubt that they are greatly useful and have a phenomenal amount of potential.</p>
<p>They *could* be all those things I just said that SL isn't…the problem is that they are either just existing, or they are meandering around without any real focus or direction. They aren't evolving.</p>
<p>Even MMORPGs are losing their way and beginning to stagnate terribly.</p>
<p><strong>Tish Shute:</strong> Yes, I agree.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>But, AR has the potential to change a lot of things.</p>
<p>I'm sure you have seen <a id="n_22" title="the yellowbook commercials" href="http://www.youtube.com/watch?v=zdPFBTQpk-U" target="_blank">the yellowbook commercials</a>? The technologies you are seeing there are doable in, hrm, a year or less maybe. The tricky part is the interactivity and AI…that is, the content. Everything else isn't a problem. The avatar there could be photorealistic or stylized like a WoW character.</p>
<p>You could do that to some degree with markers for registration, but dynamically changing the content linked to those markers is a little weird.</p>
<p>(By the way, for the record, I like markers just fine; I just don't think they are useful for real-world mobile applications.)</p>
<p>I also think that the guys who want to dust the planet with miniature RFID chips are on crack and are going about it the wrong way.</p>
<p><strong>Tish Shute:</strong> A high level of interactivity is hard though. Isn't it? Even in VWs it is very limited.</p>
<p><strong>Robert Rice:</strong> It depends on whether you can track what the user is doing and interpret that properly. Interactive is also a very loose term.</p>
<p>Clicking a button and making a light blink could be considered interactive.</p>
<p><strong>Tish Shute:</strong> In VWs a high level of interactivity would be to wield a virtual hammer and have a real nail go in! Is physics part of the problem?</p>
<p><strong>Robert Rice:</strong> Physics aren't difficult; there is plenty of middleware out there for it. The problem with that isn't so much the physics as much as it is the scale and purpose.</p>
<p><strong>Tish Shute:</strong> Well, for robotics?</p>
<p><strong>Robert Rice:</strong> That gets into a conversation about meshes, textures, volumetric collision detection, and stuff.</p>
<p><strong>Tish Shute:</strong> virtual robotics?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> You mean teleremote/telepresence of real robots?</p>
<p><strong>Tish Shute: </strong>yes!</p>
<p><strong>Robert Rice:</strong> Ah, for that, you need some tactile feedback and some other stuff &#8211; doable, but insanely difficult. That's why you don't see a whole lot of remote-controlled surgery robots all over the place.</p>
<p>They do exist…</p>
<p><strong>Tish Shute:</strong> Will AR contribute to sustainable living by freeing us from some of our energy-hogging devices?</p>
<p><strong>Robert Rice:</strong> AR will ultimately encourage energy saving and recycling. Where did I leave a light on? Where is the nearest trash can? What is the UV index outside today?</p>
<p>Yes, computers are energy hogs, but as we start seeing larger SSD drives, more efficient CPUs (even if the number of cores increases in multiples), and so on, the power will go down.</p>
<p>Also, think about this…wearable displays potentially use less energy than the LCD monitors on your desk.</p>
<p><strong>Tish Shute:</strong> Yes, I should pick the brains of my Intel chums on energy saving!</p>
<p><strong>Robert Rice:</strong> Getting rid of the monitor and switching to solid-state drives will save an assload of power. Yes, I said assload.</p>
<p>Tell your Intel chums to quit screwing around with single-core mobile CPUs. We need multiple cores that are smaller, faster, and use less power.</p>
<p><strong>Tish Shute:</strong> Is AR the sustainable future of VWs and MMOGs?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>The fun stuff will happen when they are both integrated in some fashion.</p>
<p><strong>Tish Shute:</strong> So perhaps this is why the Georgia guys are thinking in trying to combine AR and SL (<a id="boum" title="see video here" href="http://uk.youtube.com/watch?v=O2i-W9ncV_0&amp;feature=related" target="_blank">see video here</a>).</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> That first video was pretty damn cool. It just pains me that they are using SL for it. And omg, all those markers on the table.</p>
<p>Although, I could care less about seeing my SL avatar on my coffee table. I would rather see an avatar representing ME in the real world, moving around in a virtual world that is a “to scale” replica of the real world. That is MUCH more interesting and innovative.</p>
<p>But even if I donâ€™t like where they are going, or that they are using SL, the important thing is that they are doing something and forging ahead. I have a massive amount of respect for anyone, private, government, or academic, that is doing that.</p>
<p>And yes, the door (or window, or looking glass) has to work both ways for maximum potential; at least, that's what I'd like to see. They don't *have* to, but it would be rather cool.</p>
<p>And going back to sustainability, AR has the potential to make monitors generally obsolete, laptops too. That's a lot of power-hungry devices with all sorts of metals and batteries inside.</p>
<p>But, even if the tech was absolutely crazy awesome right this minute, it would take a little while for consumer adoption.</p>
<p><strong>Tish Shute:</strong> But AR unleashes the mobile device?</p>
<p><strong>Robert Rice:</strong> Yes, AR is going to be built on powerful mobile devices for the near future, eventually embedded computers in clothing and whatnot. But that is a ways off.</p>
<p>Entertainment is going to be the first huge driver.</p>
<p><strong>Tish Shute:</strong> So people will get used to having a pet virtual dragon on their shoulder first?</p>
<p><strong>Robert Rice:</strong> Yes, a virtual dragon is way cool, easy tech for games, and can eventually be leveraged into a smart agent, which becomes a practical application…agent-based contextual search, etc. Yes, entertainment will also drive people to get used to the tech.</p>
<p><strong>Tish Shute:</strong> Oh, thanks for turning me on to <a id="kzbv" title="gamesalfresco" href="http://gamesalfresco.com/" target="_blank">gamesalfresco</a>!</p>
<p><strong>Robert Rice:</strong> I've noticed that the good stuff usually gets linked to there. They don't list my blog, but that's what I get for staying under the radar and not posting often. But anyway, gamesalfresco is the first place I send people who need a crash course in AR. Great site, great owner.</p>
<p><strong>Tish Shute:</strong> So are you in agreement with Thomas Wrobel's positioning of <em><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a></strong></em> and the <em><strong><a href="http://gamesalfresco.com/2008/07/20/want-your-own-augmented-reality-geisha/" target="_self">AR Geisha doll</a></strong></em> as significant milestones for AR?</p>
<p><strong>Robert Rice:</strong> Yes, these are among the first attempts to get away from the novelty of simply rendering a 3D object based on a marker, and to make it interesting.</p>
<p>Remember, one of the biggest risks that AR has is being branded as “novelty”, which means “cool for five minutes but ultimately a waste of time.” I think we have a ways to go before something is truly useful, but as 2009 progresses we should start seeing some effort here. I'd guess 2010 before something really useful comes out…at least something practical.</p>
<p>Now, having said that, I should say that I expect entertainment and games to take the lead (as usual), although there are a few companies really trying to leverage AR and video/graphics compositing for marketing (brochures) and location based methods (kiosks, large screen projections, etc.)</p>
<p><strong>Tish Shute:</strong> Many people would say SnowCrash (metaverse) is now and Halting State (AR) is ten years from now. But you are seeing a development timeline for some popular AR apps in the next 18 months?</p>
<p><strong>Robert Rice:</strong> Anyone who says SnowCrash is -now- is living in a box. Virtual worlds, virtual reality, and immersive tech in general stopped innovating in the mid-90s. I'm continually flabbergasted at the number of people who think that things like Second Life are state-of-the-art or innovative. You might as well try to market a Walkman as cutting edge, even though we have iPods out there.</p>
<p>I'd like to see someone grab an engine like Offset, Crytek, Hero, or Unreal 3, and smack on a fat MMO server infrastructure (EVE or BigWorld)…toss in the right tools, and you would see a revolution and a renaissance occur at the same time in the virtual world space. All the puzzle pieces are there; no one is just putting them together the right way.</p>
<p><strong>Tish Shute:</strong> Why doesn't anyone do that?</p>
<p><strong>Robert Rice:</strong> It's not cheap, people will only fund a copy of something that exists already, people fear change and innovation, etc. The list goes on. The right money goes to the wrong people all the time.</p>
<p>Alternatively stated, there is a lot of “right idea, wrong implementation.”</p>
<p>MMORPGs carried the torch and have made huge strides on the technology front, but have devolved in design. More often than not the gameplay emphasizes the single player experience and does nothing to take advantage of the potential of the massively connected internet.</p>
<p>Unless both industries have some serious upheaval or radical new approaches, they will quickly be eclipsed by AR, which will eventually evolve into something hybrid…AR/VR depending on your level of access and hardware.</p>
<p>But yes, I'd say that the next 18 months are going to be very interesting, with a lot of money being thrown around, new ventures, and plenty of content/applications. I expect most of this will be centered on single-user AR experienced through a mobile device with a screen (iPhone, Android, etc.). I expect that there will be a significant boost after Vuzix releases some of their wearable *transparent* displays, putting Microvision back into the “has potential but is too quiet” position.</p>
<p><strong>Tish Shute:</strong> AR conjures an image in many people's minds of dreadful headgear!</p>
<p><strong>Robert Rice:</strong> Yes, it is either transparent wearable displays (in an eyeglass form factor) or nothing. HMDs with miniature LCD or OLED displays are good for streaming video, but for the mobile ubiquitous AR we all dream about, it has to be something that looks and feels like a pair of Oakleys.</p>
<p>I should also mention that several different types and modes of AR are going to find themselves being defined and refined over the next two years as we continue to blaze new trails, establish a lexicon (we keep borrowing terms from games, VR, virtual worlds, mmorpgs), and really work out the how as well as the why.</p>
<p>Even though the idea of AR has been around for a long time, the technology is just beginning to emerge, and very few people are even looking far enough ahead to figure out the problems and solutions that the tech creates. Really, who is thinking about how to deal with AR spam right now?</p>
<p><strong>Tish Shute: </strong>Do you see any successful networked AR applications emerging in the next 18 months?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Yes and no.</p>
<p>When I talk about AR, I try to expand the definition a little bit. Usually, when you talk to someone about augmented reality, the first thing that comes to mind is overlaying 3D graphics on a video stream. I think, though, that it should more properly be any media that is specific to your location and the context of what you are doing (or want to do)…augmenting or enhancing your specific reality.</p>
<p>In this sense, anything that at least knows who you are (your ID, mobile phone #, etc.), where you are (GPS coord or a specific place like a cafe), and gives you relevant data, information, or media = augmented reality. Sure, you can make things more interactive or immersive, but that is the minimum.</p>
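<p>That minimum bar, identity plus location yielding relevant media, fits in a few lines. The tiny content index and all names below are invented for illustration only.</p>

```python
# Assumption: a toy index mapping a place to location-relevant media.
CONTENT = {
    "cafe-downtown": ["today's menu", "friend nearby: Bob"],
    "museum-lobby": ["exhibit guide"],
}

def augment(user_id, place):
    """Return media relevant to who you are and where you are.

    By the broad definition above, even this already counts as AR:
    interactivity and 3D overlay are enhancements on top of it.
    """
    return ["for %s: %s" % (user_id, item) for item in CONTENT.get(place, [])]
```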
<p>So, in this case, yes, I think there will be networked applications in the next 18 months…mostly things that are enhanced by friends lists (you are here, your friend is over there). These will be *application specific*. My team at Neogence is already going beyond this, building a platform and infrastructure for other applications to be developed on…all networked through the same backbone. Now, in this context (the science fiction AR that we all dream about), no, I do not see anyone else trying to leap a generation or two ahead of the industry to build a massively multiuser shared AR space. Expect to see things like multi-user AR games, virtual pets, kiosk marketing, magic books, “gee whiz” presentations (tradeshow booths, entertainment parks, etc.), and so forth.</p>
<p>The big thing I'm worried about is AR becoming the next Silicon Valley trend…once they realize the potential, an enormous amount of capital will flow to a bunch of startups with half-baked ideas, weak business models, ten-year-old tech, and a lot of overhyped marketing. That is the very thing that will kill this technology as something that has true power and potential to literally change the way we interact with each other, our surroundings, information, and media.</p>
<p><strong>Tish Shute:</strong> Do you think AR has value for a project like Pachube that helps us connect data from lots of different environments and sensor/actuator data?</p>
<p><strong>Robert Rice:</strong> I think that AR has value as an interface to this data (essentially data visualization based on information streaming from a sensor or source that is interpreted in some dynamic graphical manner that has meaning). This is one of the “big areas” where ubiquitous augmented reality and wearable computing will really shine. I'll definitely be keeping an eye on Pachube.</p>
<p><strong>Tish Shute:</strong> I can't help it! I am really interested to hear more about the Vuzix glasses.</p>
<p><strong>Robert Rice:</strong> Yeah, everyone is getting hung up on the glasses as the end-all be-all, and on having markers everywhere too.</p>
<p>All the glasses are is another display device. At the end of the day, it doesn't matter if you are looking at an LCD monitor, an iPhone, a head-mounted display, or a pair of wicked next-generation transparent wearable displays that magically draw directly on your retinas.</p>
<p>The real tricky stuff is what happens on the backend…making it all persistent, massively multiuser, intelligent, interoperable, realistic, etc. etc.</p>
<p>I think that we are within 24 months of the magic wearables (these new ones by Vuzix are probably the real first-generation attempt at doing it right). They won't be perfect, but I expect they will be functional…and once we have functional, we can start doing the good stuff.</p>
<p><strong>Tish Shute:</strong> You mentioned your disappointment with VWs and MMORPGs earlier. Could you tell me more about that?</p>
<p><strong>Robert Rice:</strong> Yeah, there was an evolutionary divergence between virtual worlds and MMORPGs a while back. One stagnated almost completely, and the other leapt ahead in one sense and devolved horribly in the other. Neither is where the state of the art should be. That is a whole other conversation, and probably a second book.</p>
<p><strong>Tish Shute:</strong> So making AR persistent, massively multiuser, intelligent, interoperable, realistic, etc. etc.: that is where your efforts are going?</p>
<p><strong>Robert Rice:</strong> Yes. I fully expect that the hardware is almost ready for it. You can cobble together some amazing things in the lab right now, and I think commercial viability is imminent. The real value (as far as I'm concerned) is in making it mobile, wireless, persistent, and massively multiuser. You could argue that augmented reality will take over where virtual reality failed and become internet 3, internet one being the internet, internet two being the web…</p>
<p>MMORPGs are nothing more than single-player games in a multiuser environment these days. I'm more than a bit bitter about it. All the right money went to the wrong people, and the best games we have are barely shadows of what we could have had by now.</p>
<p><strong>Tish Shute:</strong> Are there any open source AR platform dev projects?</p>
<p><strong>Robert Rice:</strong> Open source? Hrm, I'm sure there are multiple ones out there.</p>
<p>If not entirely open source, there are plenty of things to experiment with that are generally free if you aren't trying to sell something; DART and ARToolKit come to mind as very accessible applications.</p>
<p>Marker-based AR is very important right now…it is easy, low-tech, understandable, highly customizable, and most importantly, accessible to the average joe. Ultimately though, we need a method of pure tracking…no markers glued to everything on the planet, no “billions of RFIDs” embedded in every square inch of every object on the planet, etc.</p>
<p><strong>Tish Shute:</strong> What do you mean by interoperability in AR? And what do you think about the development of standards?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong> Ooh, good question.</p>
<p>Ok, so the internet is basically computers communicating with computers, and the web is mostly pages linking to other pages (I'm greatly oversimplifying here). Hold that thought for a minute.</p>
<p>Switch over to MMORPGs. If you want to play in one (or a virtual world), you need to download a client that is specific to that world. One client does not work with another world. There are plenty of efforts to change this, but they are all barking up the wrong tree. The specific uniqueness of each world defeats the need and purpose of true interoperability, unless you completely reinvent the whole thing with a common backbone, features, functionality, etc. The very nature of virtual worlds and MMORPGs rebels against this. You absolutely do not want an avatar from Second Life running around in World of Warcraft (for reasons that should be obvious).</p>
<p>On the other hand, with the web, you can use just about any client (browser) to access nearly any website (some requiring plugins or whatever).</p>
<p>The thing with augmented reality is, how do we go about making this? I've seen a few people thinking about this from the wrong perspective. There was a question at the last TechCrunch to the Sekai Camera guys (a conceptual AR application for the iPhone) where someone on the panel wanted to know how website owners would convert their content for augmented reality. BZZZZZT! That is a fundamental misunderstanding of what AR is, or could be, and it falls into the same trap I see a lot of people falling into…and that is looking at AR through the web 2.0 lens or the virtual world lens. It is absolutely fundamentally different at the core…sure there are similarities: it has social networking/media applications and properties, and it has 3D graphics, but it stops there.</p>
<p>Ubiquitous augmented reality will be dramatically different depending on which standards, approaches, and philosophies get the most traction first. Will you walk down the street with your AR glasses and have a pop up every 30 feet asking you if you want to access the AR content on another server? Will you then have to register, subscribe, or whatever?</p>
<p>Or will all AR content be mediated by one sole master control server deep in the bowels of google? What about some other option? Will you need different sets of glasses to access different features and content from multiple sources?</p>
<p>At the end of the day, it should not matter what brand of glasses you are wearing, you should never have to deal with AR server popups to join/subscribe, and so forth.</p>
<p>Interoperability, in the context of what I was saying earlier, is the sense of how to build the infrastructure so all of this is seamless to the end user, while still maintaining the features/functionality necessary for all of what augmented reality promises us…I don't want to see everything in AR space, I want to be able to tune in or filter out some things, and I want to customize the snot out of what I see (perhaps changing metaphors or “holoscapes”), and so on. It all has to work together and simplify the end-user experience or it won't get anywhere.</p>
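<p>The “tune in or filter out” idea amounts to client-side filtering of a shared content stream. A sketch with invented names: content arrives tagged with a channel, and the user's subscriptions decide what actually gets rendered, with no popups and no per-server signups.</p>

```python
def visible(items, subscribed):
    """Keep only content whose channel the user has tuned in."""
    return [i for i in items if i["channel"] in subscribed]

# A hypothetical mixed stream arriving from many AR content sources.
stream = [
    {"channel": "ads", "text": "Buy now!"},
    {"channel": "friends", "text": "Bob is nearby"},
]

# Subscribing only to "friends" suppresses the ad without any
# server-side negotiation: the filter lives entirely on the client.
shown = visible(stream, {"friends"})
```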
<p><strong>Tish Shute:</strong> So what caused the stagnation of new development and the devolution of MMOGs, in your opinion?</p>
<p><strong>Robert Rice:</strong> Yes, look at all the hope and hype for the MMORPGs released in the last 12 months. Really, what is different or better? Now, what is worse?</p>
<p>I bet any decent mmorpg gamer could give you a list of 2 or 3 things for the first question and 20-30 things for the second.</p>
<p>And, VWs seem to be stuck in a feedback loop</p>
<p><strong>Tish Shute: </strong>feedback loop?</p>
<p><strong>Robert Rice:</strong> Imagine nailing one of your feet to the ground and then trying to run 'round and 'round and 'round.</p>
<p><strong>Tish Shute:</strong> Why do you think this happened to VWs?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Men in suits and flashy watches.</p>
<p>actually, hang onâ€¦..</p>
<p>I saw a video clip the other day from a conference about using various virtual and game technologies for simulations and other real-world applications. Several people were talking about “avatar technology” and how theirs was better than their competition's and whatnot.</p>
<p>Now, can you tell me what “avatar technology” is? Avatar technology is a red herring. Avatar technology is the same thing as calling a toaster a new “fire technology.”</p>
<p><strong>Robert Rice:</strong> The problem is that a lot of people who don't have a clue about what they are doing are selling the tech to other people who have no clue what they are buying, but feel like they should for some unknown reason.</p>
<p>That is happening all over the government, academic, and industrial sectors now, with a few companies selling virtual worlds (again, mid-90s tech) as the ultimate solution to all problems.</p>
<p>Anyway, getting back to your question</p>
<p>Once virtual reality started getting some buzz, some people got greedy and jumped into the avatar/virtual world thing and tried making it commercial too soon half of the 3D chat worlds were being jammed into platforms for virtual shopping malls.</p>
<p>Most of the money funding tech R&amp;D started funneling towards VRML, and doing 3D in web pages, etc.</p>
<p><strong>Tish Shute: </strong>Yes, horrible idea trying to make web pages 3D, IMHO.</p>
<p><strong>Robert Rice: </strong>The money people got involved too soon, and then the greedy people jumped in and tried patenting everything possible. Take a look at the worlds.com patent for 3D worlds.</p>
<p>They filed it back in 2000 or so and it was awarded in ’07 (it shouldn’t have been, in my opinion). Now they are suing everyone they can.</p>
<p><strong>Tish Shute: </strong>Will there be patent wars in AR?</p>
<p><strong>Robert Rice:</strong> Yes, the AR patent wars will be legendary once people start waking up to the real potential here.</p>
<p>The only solution is for everyone to band together and pre-emptively patent or make public domain every possible patentable concept, technology, or implementation for AR. Otherwise, you haven’t seen anything yet.</p>
<p><strong>Tish Shute:</strong> Is the AR community organized enough to do that yet?</p>
<p><strong>Robert Rice:</strong> That depends on how my company fares in the next six months.</p>
<p><strong>Tish Shute:</strong> Will you patent or make your tech public domain?</p>
<p><strong>Robert Rice:</strong> I plan on patenting the snot out of everything we can possibly think of, and then giving away our content creation tools and SDK stuff for free. The whole goal of what we are trying to build is to empower the end user and facilitate the creation of a wonderful world of augmented reality.</p>
<p>On top of that, there are some things we will make public domain, for sure.</p>
<p><strong>Tish Shute:</strong> So back to my question on networked real-time experience. Will we have networked real-time AR experiences in the next 18 months?</p>
<p><strong>Robert Rice:</strong> It is possible, yes. Other than what we are doing, I am not aware of anyone else taking the same approach we are, but the potential for an “under the radar venture” (much like my own company) is definitely there.</p>
<p><strong>Tish Shute: </strong>Will you use cloud computing?</p>
<p><strong>Robert Rice: </strong>I think that’s overrated and probably another attempt at the whole “thin client” model that some companies have been pushing for the last 20 years.</p>
<p>It sounds good on paper, but ultimately takes power and control away from the end user.</p>
<p><strong>Tish Shute:</strong> cloud computing?</p>
<p><strong>Robert Rice: </strong>Yes. You know, we aren’t playing around. We are totally building “THE AR” that everyone keeps dreaming about. None of this cute stuff you see on YouTube. Actually, if you want to see the things that have inspired our vision of what we want to build, check out:</p>
<p>* Dream Park by Larry Niven and Steven Barnes<br />
* Rainbows End by Vernor Vinge<br />
* Spook Country by William Gibson<br />
* Halting State by Charles Stross<br />
* The Diamond Age by Neal Stephenson<br />
* Donnerjack by Roger Zelazny and Jane Lindskold<br />
* Otherland by Tad Williams<br />
* Neuromancer by William Gibson<br />
* Idoru by William Gibson<br />
* Cryptonomicon by Neal Stephenson</p>
<p>and watch the whole anime of Denno Coil (subbed NOT dubbed!).</p>
<p><strong>Tish Shute:</strong> So scaling the real-time experience won’t be a problem in your project, hehe.</p>
<p>’Cos no sharding allowed in AR, right?</p>
<p>And if you have lots of API calls?</p>
<p><strong>Robert Rice:</strong> Haha, sharding is one of the dumbest things to happen to the VW/MMO industry.</p>
<p>It is a solution to a technical problem that was relevant 15 years ago.</p>
<p><strong>Tish Shute:</strong> So why did it stick? (I know, men in suits.)</p>
<p><strong>Robert Rice:</strong> It stuck because “that’s what the other guys did,” and the MMO designers are too lazy to reconcile gameplay for PvP and RP gamers.</p>
<p>However, there is a curious problem between dealing with “one world” and “anyone can start their own custom AR server.”</p>
<p><strong>Tish Shute: </strong>Now that is a very interesting problem, the one world vs. own AR server.</p>
<p><strong>Robert Rice:</strong> It took me a few weeks of not sleeping to figure that one out. It gets back to the interoperability issue.</p>
<p><strong>Tish Shute:</strong> What did you come up with?</p>
<p><strong>Robert Rice:</strong> A solution. That’s all I can say for now on that.</p>
<p><strong>Tish Shute</strong>: eeextra seeekrit!</p>
<p>Well I will definitely have to bug you on that.</p>
<p>The problem has produced some creativity in OpenSim, with people coming up with hybrids of P2P and one-world architectures.</p>
<p><strong>Robert Rice:</strong> As far as virtual worlds are concerned, they need to look at the problem from a different perspective. They are trying to make all existing virtual worlds interoperable instead of creating a new model for interoperable worlds that new ones will adhere to.</p>
<p><strong>Tish Shute: </strong>Well, some people are. I would say most OpenSim developers see their modular approach doing this. And you choose to interoperate based on what modules you have activated, and then social agreements…</p>
<p><strong>Robert Rice:</strong> Hrm, that’s a start, but that only works on a functional and social level &#8211; it doesn’t account for content (story, mythos, game rules), unique data (my +3 sword), or the concepts of commerce, inherent value, and intellectual property.</p>
<p>Enabling my WoW avatar to run around in SL and vice versa creates more problems than it solves.</p>
<p>It’s like two alien races working hard to make sure that their two spaceships can dock, but no one is paying any attention to the fact that race A breathes nitrogen and race B breathes sulphur.</p>
<p>It’s technically possible, but they are missing the boat on the content side of the problem.</p>
<p><strong>Tish Shute:</strong> Yes, but don’t you think that when a modular open-source tech for virtual worlds becomes pervasive, those interested in a similar genre will increasingly use the modules in ways that allow their content to interoperate, if they want it to?</p>
<p><strong>Robert Rice: </strong>Everyone has to use the same backend tech, and the front-end clients need to adhere to the same standards. But I have to admit, I haven’t been paying much attention to the VW space in the last 9 months or so.</p>
<p>Oh, I have to run now. But download and install <a id="vsnt" title="cooliris" href="http://www.cooliris.com/" target="_blank">cooliris</a>. I promise you will be blown away and will start using it to search for images and videos.</p>
<p>It’s frigging awesome.</p>
<p><strong>Tish Shute:</strong> Will do! Thanks so much, great talking to you. I can’t wait for your launch.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/01/17/is-it-%e2%80%9comg-finally%e2%80%9d-for-augmented-reality-interview-with-robert-rice/feed/</wfw:commentRss>
		<slash:comments>27</slash:comments>
		</item>
	</channel>
</rss>
