<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; google goggles</title>
	<atom:link href="http://www.ugotrade.com/tag/google-goggles/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Urban Augmented Realities and Social Augmentations that Matter: Talking with Bruce Sterling, Part 2</title>
		<link>http://www.ugotrade.com/2010/09/17/urban-augmented-realities-and-social-augmentations-that-matter-interview-with-bruce-sterling-part-2/</link>
		<comments>http://www.ugotrade.com/2010/09/17/urban-augmented-realities-and-social-augmentations-that-matter-interview-with-bruce-sterling-part-2/#comments</comments>
		<pubDate>Fri, 17 Sep 2010 21:43:35 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[3D point clouds]]></category>
		<category><![CDATA[an ARG for World Peace]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARWave Android client]]></category>
		<category><![CDATA[ARWave at Software Freedom Day]]></category>
		<category><![CDATA[augmented foraging]]></category>
		<category><![CDATA[augmented reality checkins]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[Bertine van Hovell]]></category>
		<category><![CDATA[Biological Globalisation]]></category>
		<category><![CDATA[Boskoi]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Crisis Filter]]></category>
		<category><![CDATA[cryptoforests]]></category>
		<category><![CDATA[Davide Carnovale]]></category>
		<category><![CDATA[deterritorialization]]></category>
		<category><![CDATA[difference between augmented reality and ubiquitous computing]]></category>
		<category><![CDATA[emergency response]]></category>
		<category><![CDATA[Favela Chic]]></category>
		<category><![CDATA[fightthegooglejugend]]></category>
		<category><![CDATA[Four Square]]></category>
		<category><![CDATA[Gamepocalypse]]></category>
		<category><![CDATA[gardens gone wild]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[gowalla]]></category>
		<category><![CDATA[homophilies]]></category>
		<category><![CDATA[hyperlocal experiences]]></category>
		<category><![CDATA[interview with Bruce Sterling]]></category>
		<category><![CDATA[JCPT the open Android 3D engine]]></category>
		<category><![CDATA[Jesse James Garrett]]></category>
		<category><![CDATA[Jesse Schell]]></category>
		<category><![CDATA[Joshua Kauffman]]></category>
		<category><![CDATA[Ken Eklund]]></category>
		<category><![CDATA[Kooaba]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Lightning Laboratories]]></category>
		<category><![CDATA[location based social networking]]></category>
		<category><![CDATA[Maarten Lens-FitzGerald]]></category>
		<category><![CDATA[machine intelligence]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[Mark Evin]]></category>
		<category><![CDATA[Markus Strickler]]></category>
		<category><![CDATA[NextHope]]></category>
		<category><![CDATA[NextHope AMD]]></category>
		<category><![CDATA[Occipital]]></category>
		<category><![CDATA[open distributed augmented reality]]></category>
		<category><![CDATA[open distributed platform for AR]]></category>
		<category><![CDATA[physical world platform]]></category>
		<category><![CDATA[proximity-based social networking]]></category>
		<category><![CDATA[psychogeography]]></category>
		<category><![CDATA[real-time information brokerages]]></category>
		<category><![CDATA[realtime information brokerages]]></category>
		<category><![CDATA[Shaping Things]]></category>
		<category><![CDATA[ShapingThings]]></category>
		<category><![CDATA[Sixth Sense for Autism]]></category>
		<category><![CDATA[SMSSlingshot]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[Social Augmented Experiences that Matter]]></category>
		<category><![CDATA[social mapping]]></category>
		<category><![CDATA[Software Freedom Day]]></category>
		<category><![CDATA[Swift]]></category>
		<category><![CDATA[territorialization]]></category>
		<category><![CDATA[The Cryptoforests of Utrecht]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[Ubistudio]]></category>
		<category><![CDATA[urban augmented realities]]></category>
		<category><![CDATA[Urban Edibles Amsterdam]]></category>
		<category><![CDATA[urban fallows]]></category>
		<category><![CDATA[urban forests]]></category>
		<category><![CDATA[urban informatic mapping]]></category>
		<category><![CDATA[urban informatics]]></category>
		<category><![CDATA[Ushahidi]]></category>
		<category><![CDATA[vision assisted augmented reality]]></category>
		<category><![CDATA[vision based augmented reality]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[Wave in a Box]]></category>
		<category><![CDATA[WaveinaBox]]></category>
		<category><![CDATA[Westraven Psychogeography]]></category>
		<category><![CDATA[Will Wright at Augmented Reality Event]]></category>
		<category><![CDATA[YDreams]]></category>
		<category><![CDATA[Zorop]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5627</guid>
		<description><![CDATA[Social Augmented Experiences leveraging geoawareness and human and machine intelligence to create real-time information brokerages, combined with an augmented reality view, can create new opportunities to reimagine our relationships with each other and our environment. This Summer, I have been on a blogging hiatus, which has meant I haven&#8217;t been sharing as frequently [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><strong><strong><span> </span></strong></strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/augmentedforaging1.jpg"><img class="alignnone size-medium wp-image-5651" title="augmentedforaging" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/augmentedforaging1-200x300.jpg" alt="augmentedforaging" width="200" height="300" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/westraven81.JPG"><img class="alignnone size-medium wp-image-5652" title="westraven8" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/westraven81-225x300.jpg" alt="westraven8" width="225" height="300" /></a></p>
<p>Social Augmented Experiences leveraging geoawareness and human and machine intelligence to create real-time information brokerages, combined with an augmented reality view, can create new opportunities to reimagine our relationships with each other and our environment.</p>
<p>This Summer, I have been on a blogging hiatus, which has meant I haven&#8217;t been sharing as frequently. Unfortunately, the second halves of two conversations I had earlier this year, both of which have much influenced my thinking on social augmented reality, have languished in private mode: part 2 of my talk with Bruce Sterling (see <a title="Permanent Link to Interview with Bruce Sterling, Part I: At the 9am of the Augmented Reality Industry, are2010" rel="bookmark" href="../../2010/06/16/interview-with-bruce-sterling-part-i-at-the-9am-of-the-augmented-reality-industry-are2010/">Interview with Bruce Sterling, Part I: At the 9am of the Augmented Reality Industry, are2010</a>), and part 2 of my conversation with Anselm Hook (see <a title="Permanent Link to Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook" rel="bookmark" href="../../2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/">Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook, Part 1</a>). Time to get caught up on some blogging! The lightly edited transcript of Part 2 of <a href="#tag1">my conversation with Bruce Sterling is posted in full below</a>.</p>
<p>Bruce Sterling has been blogging all the key developments in augmented reality (amongst other topics of interest!) on <a href="http://www.wired.com/beyond_the_beyond/" target="_blank">his Wired Blog</a>, and <a href="http://www.wired.com/beyond_the_beyond/2010/08/augmented-reality-augmented-foraging/" target="_blank">he brought my attention</a> to <a href="http://libarynth.org/augmented_foraging">Boskoi</a>, the <a title="http://www.ushahidi.com/" rel="nofollow" href="http://www.ushahidi.com/">Ushahidi</a>-based <a href="http://lib.fo.am/augmented_foraging" target="_blank">augmented foraging</a> app for Android phones, pictured in use above &#8211; for more pics see <a href="http://fightthegooglejugend.com/index.html" target="_blank">fightthegooglejugend</a>.</p>
<h3><strong><strong>Augmented Reality and Real Time Information Brokerages</strong></strong></h3>
<p><span><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/Screen-shot-2010-08-28-at-12.53.54-AM.png"><img class="alignnone size-medium wp-image-5630" title="Screen shot 2010-08-28 at 12.53.54 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/Screen-shot-2010-08-28-at-12.53.54-AM-300x176.png" alt="Screen shot 2010-08-28 at 12.53.54 AM" width="300" height="176" /></a><br />
</span></p>
<p><em><span>Pictured above is the path the &#8220;nomads&#8221; took through the Westraven cryptoforest with Pieter Bol, co-author of the book <a href="http://www.biologicalglobalisation.com/">Biological Globalisation</a>, and Theun Karelse of <a href="http://urbanedibles.blogspot.com/">Urban Edibles Amsterdam</a>, &#8220;who presented his &#8216;augmented foraging&#8217; app <a href="http://libarynth.org/augmented_foraging">Boskoi</a>.&#8221; For more see <a href="http://fightthegooglejugend.com/cryptoforests.html" target="_blank">The Cryptoforests of Utrecht</a> and <a href="http://fightthegooglejugend.com/westraven.html" target="_blank">Westraven Psychogeography, 6 June 2010</a>. Note: Cryptoforests: 1) Urban forests hidden from view; 2) Urban fallows that might or might not be considered forests; 3) Gardens gone wild.</span></em></p>
<p>My interest in the Ushahidi family of ideas was already fired up by a conversation with <a href="http://www.hook.org/" target="_blank">Anselm Hook</a> early this year. We discussed a number of <a href="http://vimeo.com/ushahidi">Ushahidi</a>-related projects &#8211; <a href="http://swift.ushahidi.com/" target="_blank">Swift</a>, Crisis Filter, and Anselm&#8217;s project <a href="http://hook.org/" target="_blank">Angel</a> &#8211; augmented reality, and my own keen interest in an open, real-time, distributed platform for augmented reality: <a href="http://www.arwave.org/" target="_blank">ARWave</a>.</p>
<p>The Ushahidi platform and the related project Swift have pioneered the real-time brokerage of information, with people acting in curatorial or matchmaking roles coevolving with machine-assisted matching to connect wants to haves. Ushahidi uses multiple gateways, including SMS and Twitter. But the Ushahidi family of ideas is extremely interesting when combined with augmented reality, and suggests many new possibilities for social augmented experiences &#8211; as Anselm pointed out, for human-to-human communications, human-to-civilization communications, and human-to-environment communications (e.g., perhaps, how machine intelligence can help bridge the difference in time scale that Kate Hartman explores in her <a href="http://vimeo.com/10352604">Research for Glacier-Human Communication Techniques</a>).</p>
<p>Ushahidi, which means &#8220;testimony&#8221; in Swahili, is a website that was initially developed to map reports of violence in Kenya after the post-election fallout at the beginning of 2008. It is now an open platform with a wide range of applications and a growing developer community. See <a href="http://vimeo.com/7838030">What is the Ushahidi Platform?</a> from <a href="http://vimeo.com/ushahidi">Ushahidi</a> on <a href="http://vimeo.com/">Vimeo</a>.</p>
<p><a href="http://swift.ushahidi.com/" target="_blank">Swift</a>, a project that emerged from the Ushahidi dev community, is a human-sensor/real-time brokerage for dealing with emergencies, enabling the filtering and verification of real-time data from channels such as Twitter, SMS, email, and RSS feeds.</p>
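The Swift-style pipeline just described &#8211; pull raw reports in from many channels, filter them for relevance, and only treat a report as verified once independent sources corroborate it &#8211; can be sketched roughly as follows. This is an illustrative sketch only: the <code>Report</code> fields, the keyword filter, and the <code>verify</code> threshold are assumptions invented for the example, not Swift's actual API.

```python
# Hypothetical sketch of a "human sensor" / real-time brokerage pattern:
# reports stream in from several channels, a crude relevance filter stands
# in for human/machine curation, and a location is marked verified once
# enough independent sources corroborate it.
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Report:
    channel: str   # e.g. "twitter", "sms", "email", "rss"
    source: str    # who sent the report
    location: str  # coarse place name
    text: str


KEYWORDS = ("fire", "flood", "injured", "trapped")


def relevant(report: Report) -> bool:
    """Keyword filter standing in for curatorial triage."""
    return any(k in report.text.lower() for k in KEYWORDS)


def verify(reports, min_sources=2):
    """Group relevant reports by location; a location counts as verified
    once at least min_sources independent sources corroborate it."""
    by_location = defaultdict(set)
    for r in filter(relevant, reports):
        by_location[r.location].add(r.source)
    return {loc: len(srcs) >= min_sources for loc, srcs in by_location.items()}
```

In a real deployment the filter would be a learned classifier or a human curator rather than a keyword list, but the shape of the brokerage &#8211; many noisy inputs, a relevance gate, and corroboration before action &#8211; stays the same.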
<p><a href="http://libarynth.org/augmented_foraging">Boskoi</a> &#8211; <a href="http://lib.fo.am/augmented_foraging" target="_blank">augmented foraging </a><span>is the first app,Â  I have seen, to begin linking Ushahidi with augmented reality  &#8211; although I don&#8217;t think there is a full augmented view for Boskoi developed yet?</span></p>
<h3><strong>&#8220;The whole point of AR is to see things from a different point of view&#8230;&#8221;</strong></h3>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/ARWaveCurrentStatus3post.png"><img class="alignnone size-medium wp-image-5705" title="ARWaveCurrentStatus3post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/ARWaveCurrentStatus3post-300x212.png" alt="ARWaveCurrentStatus3post" width="300" height="212" /></a><br />
</strong></p>
<p><em>Click to enlarge poster from upcoming ARWave demo at Software Freedom Day &#8211; for more see below</em></p>
<p>I am often asked what augmented reality brings to the table with respect to location-based social networking, which is on the verge of going mainstream in smartphone apps like <a href="http://foursquare.com/">Four Square</a>. The first part of my answer is usually to explain what is unique to augmented reality.</p>
<p>As Bo Begole notes, the full vision of AR requires machine perception technologies to detect the identity and physical configuration of objects relative to each other, in order to accurately project information alongside or overlaid on a physical object (see this post on the PARC Blog by Bo Begole on the <a href="http://bit.ly/9Rsh79">difference between AR and ubiquitous computing</a> &#8211; thank you, <a href="http://gamesalfresco.com/2010/09/12/weekly-linkfest-62/" target="_blank">Rouli, for bringing my attention to this</a>).</p>
<p>But it is only in recent months that we have begun to see the kind of tools that make this possible become freely available to developers &#8211; see <a href="http://www.ugotrade.com/2010/08/05/vision-based-augmented-reality-ar-in-smart-phones-qualcomms-ar-sdk-interview-with-jay-wright/" target="_blank">my interview with Jay Wright of Qualcomm here</a>. Also see this post on <a href="http://phototour.cs.washington.edu/bundler/" target="_blank">Bundler: Structure from Motion for Unordered Image Collections</a>, an open-source system that allows the creation of 3D point clouds from unordered image collections, e.g. internet image collections. We now have many tools available to move mobile augmented reality beyond the recent crop of apps relying on GPS and compass alone for positioning, into a new era of vision-assisted AR apps that will increasingly bring the full vision of AR into our daily lives.</p>
<p>Further, the integration of visual search applications like <a href="http://www.google.com/mobile/goggles/#text">Google Goggles</a> and <a href="http://www.kooaba.com/">Kooaba</a>, which can detect the identity of particular objects, will add another vital tool to machine perception technologies, enabling AR &#8220;checkins&#8221; on potentially anything in the physical world around us &#8211; and more fuel for the <a href="http://gamepocalypsenow.blogspot.com/">Gamepocalypse</a> (e.g. it would be easy to turn every trash can in the city into a basketball hoop, as we discussed at the <a href="http://www.meetup.com/ARNY-Augmented-Reality-New-York/" target="_blank">ARNY</a> meetup last month). And soon, the Pandora&#8217;s Box of facial recognition (Google Goggles has the capability, though it is not yet released to the public) will open up.</p>
<p>Jesse Schell described the importance of AR in a nutshell <a href="http://augmentedrealityevent.com/2010/08/25/are2010-keynote-by-jesse-schell-augmented-reality-will-define-the-21st-century/" target="_blank">in his keynote for are2010</a>:</p>
<p><strong>&#8220;The whole point of AR is to see things from a different point of view&#8230; How can there be a more powerful art form than one that actually changes what you see?&#8221;</strong></p>
<p>But how AR matures as a social experience will be the key to Jesse&#8217;s suggestion that:</p>
<p><strong>&#8220;Augmented Reality will be one of the things that fundamentally define the 21st century&#8221;</strong></p>
<p>There are many interesting forms of AR that are not reliant on a tight registration between media and physical objects &#8211; several are put forward by Bruce in the convo below. And it is likely we will see AR eyewear as an occasionally useful accessory to a smartphone long before we have the sexy, affordable augmented reality eyewear that we wear throughout the day. <a href="http://www.yankodesign.com/2010/08/31/speech-to-text-glasses/" target="_blank">These speech-to-text glasses</a> would be a very useful and viable accessory to a smartphone right now for the hearing impaired.</p>
<p>For the moment, as Bruce notes, some of the most interesting and useful augmented experiences to date have not been in the cell phone space:</p>
<p><strong>&#8220;There are other aspects of AR besides the cell phone space. There&#8217;s Total Immersion&#8217;s big display screens. There&#8217;s the web-based fiduciary stuff. And there&#8217;s projection mapping. And then there&#8217;s experience design just for people who need their reality augmented for whatever personal or social reason.&#8221;</strong></p>
<p>One of my favorite social AR experiences is this <a href="http://www.youtube.com/watch?v=oLnKSKaY1Yw&amp;feature=player_embedded" target="_blank">SMS Slingshot</a>.</p>
<p>But I have been excited for a long while about the intersection of mobile social augmented reality, real-time communications, and ubiquitous computing &#8211; see <a title="Permanent Link to Total Immersion and the &#8220;Transfigured City:&#8221; Shared Augmented Realities, the &#8220;Web Squared Era,&#8221; and Google Wave" rel="bookmark" href="../../2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/">Total Immersion and the &#8220;Transfigured City:&#8221; Shared Augmented Realities, the &#8220;Web Squared Era,&#8221; and Google Wave</a>. And I have described in many places why I think real-time, open, distributed communications for AR are so important to developing social augmented experiences &#8211; see <a href="http://www.slideshare.net/TishShute/ar-wave-a-proof-of-concept-federation-game-dynamics-semantic-search-mobile-social-communications" target="_blank">the slides for my talk at Augmented Reality Event here</a>, <a href="../../2010/04/02/ar-wave-at-where-2-0-exploring-social-augmented-experiences/" target="_blank">here</a> and <a href="http://www.mobilemonday.nl/talks/tish-shute-the-next-wave-of-ar/" target="_blank">here</a> for starters.</p>
<h3><strong>ARWave at Software Freedom Day 2010, September 18th 2010</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/Screen-shot-2010-09-17-at-12.12.02-PM.png"><img class="alignnone size-medium wp-image-5683" title="Screen shot 2010-09-17 at 12.12.02 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/Screen-shot-2010-09-17-at-12.12.02-PM-300x38.png" alt="Screen shot 2010-09-17 at 12.12.02 PM" width="300" height="38" /></a></p>
<p>Thomas Wrobel and Bertine van Hovell will demo the first ARWave Android client <a href="http://www.sfd2010.nl/" target="_blank">at Software Freedom Day this weekend</a>!</p>
<p>A number of people (including Bruce) have asked me: what will be the future of ARWave now that Google Wave is no longer a stand-alone application? The recently announced release of <a href="http://googlewavedev.blogspot.com/2010/09/wave-open-source-next-steps-wave-in-box.html" target="_blank">Wave in a Box</a> (see <a href="http://arstechnica.com/web/news/2010/09/google-sticks-wave-source-in-a-box-sticks-a-bow-on-top.ars" target="_blank">here</a> and <a href="http://www.readwriteweb.com/archives/google_announces_wave_in_a_box.php" target="_blank">here</a>) is very exciting for the ARWave team.</p>
<p>The ARWave Android client is the first open AR client built on an open, real-time, distributed platform &#8211; based on a server that anyone can download and set up, currently the &#8220;FedOne&#8221; server, though Wave in a Box, hopefully, will be even easier to deploy. Wave in a Box seems perfect for ARWave&#8217;s needs &#8211; for more, <a href="https://groups.google.com/group/wave-protocol/browse_thread/thread/70067fc740b4c8d3" target="_blank">see the WiaB Google Group here</a>. For more information on the ARWave client, click to enlarge the poster below, see the <a href="http://arwave.org/pages/Videos.php" target="_blank">ARWave concept video here</a>, and to learn how to get involved see <a href="http://arwave.org/new_index.php" target="_blank">arwave.org</a>. Props to <a href="http://www.lostagain.nl/#" target="_blank">Thomas Wrobel and Bertine van Hovell</a> (posters below from the demo for Software Freedom Day), Mark Evin, <a href="http://twitter.com/need2revolt" target="_blank">Davide Carnovale</a>, and <a href="http://twitter.com/kusako" target="_blank">Markus Strickler</a>, for all their hard and brilliant work on ARWave. Thanks also to <a href="http://www.jpct.net/" target="_blank">jPCT, the open Android 3D engine</a>, which has saved a lot of work!</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/ARWaveCurrentStatus1post.png"><img class="alignnone size-medium wp-image-5687" title="ARWaveCurrentStatus1post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/ARWaveCurrentStatus1post-212x300.png" alt="ARWaveCurrentStatus1post" width="212" height="300" /></a></p>
<p><em>click to enlarge slide</em></p>
<h3><strong>Social Augmented Experiences that Matter</strong></h3>
<p>My ideas on the future of social augmented experience have been deeply informed by the conversations I had with Bruce Sterling and Anselm Hook this year.</p>
<p>Bruce Sterling notes in the conversation below that location-based social apps like Four Square are interesting because they are not <strong>&#8220;urban geography like Google&#8217;s satellite stare from above,&#8221;</strong> but rather <strong>&#8220;groups of citizens are doing portraits of their own region.&#8221;</strong> Augmented reality, with its oft-lauded power to make the invisible visible, is of course the ideal tool for taking &#8220;citizen portraits&#8221; to the next level. Cory Doctorow described to me three years ago (<a href="http://www.ugotrade.com/2007/10/31/cory-doctorow-a-reverse-surveillance-society/" target="_blank">see here</a>) an &#8220;inverse surveillance society,&#8221; enabled by an augmented view &#8211; <strong>&#8220;where all the data from the positional and temporal characteristics of all the objects that we own were in aggregate visible and available so that we can mix and match them, remix them, understand them, and have more agency in the world.&#8221;</strong></p>
<p>It is very cool to go back and reread <a href="http://www.ugotrade.com/2007/10/31/cory-doctorow-a-reverse-surveillance-society/" target="_blank">this conversation</a> now that it is becoming possible to build the kinds of apps Cory described, and Bruce Sterling envisioned in <strong><a href="http://mitpress.mit.edu/catalog/item/default.asp?tid=10603&amp;ttype=2" target="_blank">Shaping Things</a></strong> (see Amazon.org page 111).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/shapingthings.jpg"><img class="alignnone size-thumbnail wp-image-5689" title="shapingthings" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/shapingthings-150x150.jpg" alt="shapingthings" width="150" height="150" /></a></p>
<p><em>click to enlarge</em></p>
<p>My conversation with Bruce earlier this summer (see below) took place on the heels of <a href="http://augmentedrealityevent.com/">are2010 &#8211; Augmented Reality Event</a>. <a href="http://augmentedrealityevent.com/2010/06/06/are-2010-keynote-by-bruce-sterling-build-a-big-pie/" target="_blank">See the video of Bruce&#8217;s keynote, &#8220;Build a Big Pie,&#8221; here</a>, and the <a href="http://augmentedrealityevent.com/2010/08/25/are2010-keynote-by-jesse-schell-augmented-reality-will-define-the-21st-century/" target="_blank">final keynote, &#8220;Seeing,&#8221; by Jesse Schell (see video here)</a>, in which Jesse riffed on AR and the man with the X-ray eyes. Both these awesome talks are still fresh in my mind. Bruce noted how we should pay attention to augmentations for people and situations that could really use some augmentation&#8230; and not get too fixated on the coming of AR goggles. He elaborated on this in our conversation (again, full transcript below):</p>
<p><strong>&#8220;Well, it&#8217;s a matter of deciding whose reality it is that you&#8217;re trying to augment. I&#8217;m not trying to be a bleeding heart about it, but obviously there are people in our society right now with reality that could really use some augmentation. They are mostly disadvantaged people. They are vision impaired, or maybe they have autism. They might be senile and just can&#8217;t remember where they put their shoes. These are people who could really use some help, right?&#8221;</strong></p>
<p><strong>&#8220;So, start with people who really need sensory or cognitive help. Before you turn our geeks into Superman, why don&#8217;t you try turning some people who are harmed into more functional individuals? Then you&#8217;ll be able to learn how to do that. Then maybe you can ramp it up to these Nietzschean heights of the superb Man With the X-ray Eyes. Whatever.&#8221;</strong></p>
<p>What will make AR interesting and useful long before and long after we see the full vision of AR eyewear manifest is its social aspects. Bruce points out:</p>
<p><strong>&#8220;My argument would be that if you want people to be more sensitive toward certain, say, issues and problems, it&#8217;s better to find the people who are already sensitive to those issues and problems, and give them a bigger stake in your augmentation system.&#8221;</strong></p>
<p><strong>&#8220;Say that I am really worried about public health. Well, if I have a lot of nurses that are using my system, people who are aware of my issues, then I could be walking around and I&#8217;ll see a lot more tags saying, &#8216;This is where he got food poisoning!&#8217; &#8216;In this shooting gallery, many people have caught AIDS!&#8217; Or, you know, &#8216;Tuberculosis has been spotted over here in this building.&#8217;&#8221;</strong></p>
<p><strong>&#8220;At that point, I could simply share their knowledge and get some social intelligence. As opposed to trying to amp the basements of my little hacker-mind and drag stuff up that&#8217;s escaped my conscious attention.&#8221;</strong></p>
<p>Finding new ways to broker information &#8211; bringing together needs with haves, and different participants, empowered and disempowered &#8211; is, as Anselm discussed with me, one way to change our view of human-to-human, human-to-environment, and human-to-civilization communication (particularly in light of the &#8220;sobering account of how open data is used against the poor in Bangalore&#8221; that <a href="http://twitter.com/timoreilly/status/23179898934" target="_blank">@timoreilly noted</a> recently in <a href="http://gurstein.wordpress.com/2010/09/02/open-data-empowering-the-empowered-or-effective-data-use-for-everyone/" target="_blank">Open Data: Empowering the Empowered?</a>).</p>
<p>The key idea in a crisis filter, Anselm noted, was to break up the participants into different kinds, to connect wants with haves:</p>
<p><strong>&#8220;There are people who are in a situation. We call them citizens. And then there are reporters, people who report situations back to Twitter. And then there are curators, people that canvass Twitter looking for important Tweets. And then there are first responders, people who take the curated collection of responses and then act on them.&#8221;</strong></p>
<p>This kind of brokerage between people acting in a curatorial role or matchmaking role with each other can be extended into, and coevolve with, machine-assisted matching, as Anselm explains.</p>
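As a rough illustration of this wants-to-haves brokerage &#8211; a hypothetical sketch, not the actual Angel or Ushahidi code; the <code>match</code> function and the tuple format are invented for the example &#8211; machine-assisted matching can start as simply as pairing each broadcast need with a registered offer:

```python
# Illustrative matchmaking sketch: citizens broadcast "wants", helpers
# register "haves", and a matchmaker (human curator or machine) pairs
# each want with an available have for the same resource.
def match(wants, haves):
    """wants/haves are lists of (person, resource) tuples.
    Returns (person_in_need, helper, resource) triples; each helper
    is consumed once so one offer serves one need."""
    available = {}
    for person, resource in haves:
        available.setdefault(resource, []).append(person)
    matches = []
    for person, resource in wants:
        helpers = available.get(resource, [])
        if helpers:
            matches.append((person, helpers.pop(0), resource))
    return matches
```

Human curators refine exactly the step this sketch leaves crude: deciding which reports are real wants, which offers are trustworthy, and which pairing matters most &#8211; the coevolution of human and machine matching described above.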
<p>It is also a vital part of creating social augmented experiences that matter.</p>
<p>One of Anselm Hook&#8217;s projects, called <a href="http://hook.org/" target="_blank">Angel</a>, is the most radical expression of connecting wants with haves, in that the idea is that &#8220;you have a situation, you broadcast that situation, and help magically appears. You don&#8217;t even sign up for a service. You just get help&#8230;&#8221;</p>
<p>As Anselm explains, this is the same idea of a brokerage for dealing with emergencies, but applied to the long tail of crisis response. As he describes it:</p>
<p><strong>&#8220;I am interested in personal crisis. &#8216;I lost my cat. Help. I can&#8217;t find where my kid is. I am out of gas. I have a flat tire. My house is on fire. My aunt is trapped in the bedroom.&#8217; The kind of personal crisis that is just as important, but is not enough to get a national movement to help you&#8230;&#8221;</strong></p>
<p>I will publish this conversation with Anselm in full in an upcoming post.</p>
<h3>Zorop &#8211; an ARG for World Peace</h3>
<p>If you want to be part of a really exciting experiment to reimagine our relationships with each other, and can be in San Jose this weekend, I highly recommend exploring <a href="http://zorop.org" target="_blank">this &#8220;rabbit hole&#8221;</a>.</p>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="640" height="385" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/czUpYfme0kg?fs=1&amp;hl=en_US" /><param name="allowfullscreen" value="true" /><embed type="application/x-shockwave-flash" width="640" height="385" src="http://www.youtube.com/v/czUpYfme0kg?fs=1&amp;hl=en_US" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p>Thank you <a href="http://www.lightninglaboratories.com/tcw/about-2/" target="_blank">Gene Becker</a>, <a href="http://www.lightninglaboratories.com/" target="_blank">Lightning Laboratories</a> and <a href="http://ubistudio.org/" target="_blank">Ubistudio</a> for sending me this invite:</p>
<p><strong>&#8220;Ken  Eklund (<a href="http://twitter.com/writerguygames" target="_blank">@writerguygames</a>) is developing a wonderful game for the 01SJ  Biennial called ZOROP, aimed at creating World Peace(!). Some of you  might know Ken from his work on the amazing ARGs EVOKE and World Without  Oil. Anyway Ken, along with his collaborator Annette Mees, are  furiously working to get ZOROP ready to go for the Sept 17th premiere at  01SJ.</strong></p>
<p><strong>Are you intrigued? I thought so, and here are your next steps down the rabbit hole:</strong></p>
<p><strong>&gt; Check out </strong> <strong><a href="http://zorop.org/" target="_blank">http://zorop.org</a> to learn about the game</strong></p>
<p><strong>&gt; Follow @ZoropPrime to watch it unfold: </strong> <strong><a href="http://twitter.com/zoropprime" target="_blank">http://twitter.com/zoropprime</a></strong></p>
<p><strong>&gt; &#8216;Like&#8217; ZOROP on FB for a different view: </strong> <strong><a href="http://www.facebook.com/pages/Zorop/141140772593618" target="_blank">http://www.facebook.com/pages/Zorop/141140772593618</a></strong></p>
<p><strong>&gt; Become one with the game; consider volunteering as a Zoropathian: </strong> <strong><a href="mailto:curious@zorop.org">curious@zorop.org</a></strong></p>
<p><strong>&gt; Head down to San Jose on the 17th, play the game, and ride the ZOROP Mexican Party Bus. Seriously.&#8221;</strong></p>
<h3><strong>Interview with Bruce Sterling</strong><a name="tag1"></a></h3>
<p><a href="http://www.flickr.com/photos/brucesterling/4671866157/in/photostream/" target="_blank"><img class="alignnone size-medium wp-image-5676" title="Screen shot 2010-09-16 at 7.59.56 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/Screen-shot-2010-09-16-at-7.59.56-PM-300x180.png" alt="Screen shot 2010-09-16 at 7.59.56 PM" width="300" height="180" /></a></p>
<p><em>Click on image above to see video clip from <a href="http://www.flickr.com/photos/brucesterling/4673885122/" target="_blank">brucesflickr</a></em></p>
<p>[Note: the <a href="http://www.ugotrade.com/2010/06/16/interview-with-bruce-sterling-part-i-at-the-9am-of-the-augmented-reality-industry-are2010/" target="_blank">first part of this interview is here</a>. I took a break in anticipation of Part 2 just as I started experimenting with an interview technique suggested by <a href="http://www.linkedin.com/in/joshuakauffman" target="_blank">Joshua Kauffman</a> &#8211; an advisor and entrepreneur working on design in the public sphere &#8211; the All Souls College one-word question interview. Although apparently <a href="http://www.nytimes.com/2010/05/28/world/europe/28oxford.html" target="_blank">they recently scrapped it</a>, and I am not very good at sticking to a single word!]</p>
<p><strong>Tish Shute:</strong> We were talking about these proximity-based social networks like Foursquare and Gowalla, and how they may influence the emergence of social augmented experiences.</p>
<p>So Joshua&#8217;s suggestion for the first word was &#8220;territorialization&#8221; &#8211; e.g., the observation that new mobile social experiences like Foursquare, rather than breaking down territorialization &#8211; which would be a good thing &#8211; actually tend to support it&#8230;</p>
<p><strong>Bruce Sterling: Yeah, they&#8217;re re-intensifying it in a very odd, electronic fashion.</strong></p>
<p><strong>Tish Shute:</strong> Yes.</p>
<p><strong>Bruce Sterling: It&#8217;s not true of projection mapping or the webcam fiduciary display stuff. But with the handheld stuff, and especially the urban informatic stuff, it really can&#8217;t help but take on a local flavor. <a href="http://www.layar.com/" target="_blank">Layar</a> is like &#8220;Augmented Dutch Reality.&#8221;</strong></p>
<p><strong>And <a href="http://www.tonchidot.com/" target="_blank">TonchiDot</a> is &#8220;Augmented Japanese Reality.&#8221; It&#8217;s hard to imagine a Layar interface going gangbusters in Tokyo. Whereas the TonchiDot interface, which is so clearly influenced by Anime and cartoon graphics&#8230; Maybe it could find some niche of hipsters in Amsterdam hash bars&#8230;</strong></p>
<p><strong>Stuff that&#8217;s socially generated by people on the ground, as with Foursquare and Gowalla, is bound to take on a regional influence. Right? It&#8217;s like the New York hipsters who were early adopters of Foursquare. They&#8217;re not mapping New York! They&#8217;re mapping Hipster New York.</strong></p>
<p><strong>It&#8217;s all about Williamsburg and places where 24-year-olds go to drink&#8230; They found a demographic niche there. These guys are building the service for them. They&#8217;re people who are willing to work for Foursquare for free, because they want to wear the little king hat.</strong></p>
<p><strong>Tish Shute:</strong> I got the far far away badge &#8216;cos I live on the Upper West Side!</p>
<p><strong>Bruce Sterling: But that&#8217;s not urban geography, right? I mean, that&#8217;s not like Google&#8217;s satellite stare from above.  That&#8217;s a group of citizens doing a portrait of their own region.  You&#8217;re going to see interesting things happen because, of course, people who use Foursquare elsewhere are going to check into New York, and they&#8217;re going to look at the &#8220;New York Foursquare.&#8221;   They&#8217;re going to be aliens who interact with Foursquare people in New York and annotate what they&#8217;re seeing.</strong></p>
<p><strong>Tish Shute:</strong> Oh! Yes. Good point.</p>
<p><strong>Bruce Sterling: That Foursquare community has a certain &#233;migr&#233; soul. It&#8217;s different from the normal &#233;migr&#233; soul of simple tourists in New York. So your friend there is right about the territorialization.</strong></p>
<p><strong>Tish Shute:</strong> Yes, Joshua Kauffman is a smart guy!  Yes I am interested to see what interesting kinds of deterritorializations proximity based social networks and the hyperlocal view of augmented reality might bring, not just the new territorializations.</p>
<p><strong>Bruce Sterling: It&#8217;s not the intense kind of territorialization, like gangs putting down graffiti markers and beating people up.  It&#8217;s an inherent regional character that comes with using peer production to build your database.</strong></p>
<p><strong>Tish Shute:</strong> We were discussing whether AR could break down the walls between people &#8211; people who share the same physical space but actually inhabit different territories, even if they are sitting at the table next to you.</p>
<p><strong>Bruce Sterling: You know, I just wrote an article for my Italian magazine column. I think I mentioned this to you &#8211; a report about ARE 2010.   I titled it, &#8220;Chicks Dig Augmented Reality.&#8221;</strong></p>
<p><strong>Tish Shute:</strong> [laughs]</p>
<p><strong>Bruce Sterling:   There is a very heavy social element to AR, and a phone based element. So the question is: Why would a woman wear a fiducial marker? Like our <a href="http://www.metaio.com/" target="_blank">Metaio</a> speaker at ARE2010 who had a fiducial marker on her lapel pin.</strong></p>
<p><strong>Tish Shute:</strong> Right. Lisa!</p>
<p><strong>Bruce Sterling: Why would a woman go out in public with her Facebook profile on her body?</strong></p>
<p><strong>Tish Shute: </strong>Well I can think of some reasons&#8230;</p>
<p><strong>Bruce Sterling: So that men will approach her, of course.</strong></p>
<p><strong>Tish Shute:</strong> Yes the core of all successful social networks is always a form of dating app.</p>
<p><strong>Bruce Sterling: You do it as a social icebreaker. It&#8217;s like: I&#8217;m a woman, I&#8217;m sitting here alone, and you can sort of glide by and, you know, take a snap of me. Then you retreat and have a beer with your friends and you work up the courage, and then you come and say, &#8220;So! Susan! I understand you like bicycling! And, boy, me too!&#8221; Right?</strong></p>
<p><strong>Tish Shute:</strong> There are all kinds of social barriers between people in cities that AR might be helpful in breaking down.  An extreme example is the dilemma you actually quite often face as a New Yorker as you walk around a city.  There are people asleep on the pavement and you don&#8217;t know if they&#8217;re dead or alive.</p>
<p><strong>Bruce Sterling: Right.</strong></p>
<p><strong>Tish Shute:</strong> And you sort of like have this awful ethical dilemma of like, &#8220;Am I walking by someone I should be shaking by the shoulder, right, to wake them up so they don&#8217;t die, right?&#8221;</p>
<p><strong>Bruce Sterling: Yes.</strong></p>
<p><strong>Tish Shute: </strong>You said in your keynote that we should pay attention to augmentations for people and situations that could really use some augmentation&#8230;</p>
<p><strong>Bruce Sterling: Right. There actually is such an app in Britain right now.  I posted about it:  two Augmented Reality schemes for rubbish and hobos.</strong></p>
<p><strong>Tish Shute:</strong> Right. Yes I saw that!</p>
<p><strong>Bruce Sterling:  &#8220;Any sufficiently advanced technology is indistinguishable from garbage and hobos.&#8221;  You don&#8217;t need to personally find out whether this hobo is worth your help.  What you need is a good way to report the hobo to a hobo check-up service.   They come in, and they look on their own database or supply a database to you, or a facial recognition unit, whatever.  The service says: &#8220;Oh, well.  That&#8217;s Fred. He&#8217;s a paranoid schizophrenic. He always sleeps in that alley. Let him be.&#8221;</strong></p>
<p><strong>The same goes for the rubbish &#8212; although I don&#8217;t want to compare rubbish to hobos.   In fact, people do go out with their AR kits and take pictures of abandoned garbage bags and broken glass.  They upload them with geolocated tags for the local garbage guys.  Guys who are sitting around doing pretty much nothing because they don&#8217;t know where the rubbish is.</strong></p>
<p><strong>And they will come out and get the rubbish! I mean, they just deputize guys to go out and follow these alerts. Right?</strong></p>
<p><strong>But nobody predicted &#8212; least of all me &#8212; that you were going to have a high-tech Augmented Reality system that consisted of removing rubbish and derelicts. Right?   But rubbish and derelicts  always go profoundly under-reported. It&#8217;s just hard to get people&#8217;s attention.</strong></p>
<p><strong>But it&#8217;s very easy to set up a system so that, if you get  ten reports on the same piece of rubbish, that&#8217;s going to work its way to the top of the stack.   That&#8217;s why I was trying to get AR people away from the romance of  the hottest app for the shiniest machine.  More toward a design stance that&#8217;s more user-centric.</strong></p>
<p><strong>Where are the actual problems about stuff that we perceive?  Stuff we can&#8217;t do anything about?   Or people whose mechanisms of perceptions are harmed. They could be doing good work, being more participative, if they didn&#8217;t, basically, walk around without their glasses on.</strong></p>
<p><strong>Tish Shute:</strong> Well this leads well into the second word Joshua suggested as an interesting springboard &#8211; sensitivity.</p>
<p>On the one hand we can do these things for people who maybe need the augmentation because they have difficulty with one or another sense &#8211; e.g., their eyes are not functioning, or their ears are not functioning. But on the other hand, we can&#8217;t cross the social bridge to communicate with people who are temporarily disempowered in relation to the rest of society, e.g. hobos and people who sleep on the streets of New York City. And even though Augmented Reality could potentially be helpful, it can be even more disempowering to the already disempowered.</p>
<p><strong>Bruce Sterling: Right.</strong></p>
<p><strong>Tish Shute:</strong> But re &#8220;sensitivity&#8221; &#8211; does augmentation increase or decrease our sensitivity?  This is a problem that Will Wright brought up [<a href="http://augmentedrealityevent.com/2010/06/14/are-2010-keynote-by-will-wright-brilliant-inspiration-for-the-augmented-reality-community/" target="_blank">see video of Will Wright&#8217;s keynote at are2010</a>], e.g, the problem of parking HUDs getting in the way of your intuitive parallel parking skills.  The Lexus that takes driving control from you when you look back, &#8216;cos it knows that you&#8217;re looking at the road, and it starts to brake. Right?</p>
<p><strong>Bruce Sterling: Right.</strong></p>
<p><strong>Tish Shute:</strong> The problem with technology is that it can make us less sensitive, right? Augmentations sometimes get in our way?</p>
<p><strong>Bruce Sterling:  I suppose that&#8217;s true. But I&#8217;ve heard that said about practically every medium.  Especially television.</strong></p>
<p><strong>Everybody wants to blame machinery for their lack of morality.   It&#8217;s hard to top something like the Kitty Genovese killing in New York. This sort of legendary New York horror story from the 1960s. A woman is stabbed to death in public, no one does anything.</strong></p>
<p><strong>Tish Shute:</strong> Right.</p>
<p><strong>Bruce Sterling: I don&#8217;t think that our media is making us any less humane or more callous.</strong></p>
<p><strong>Tish Shute: </strong>All right. Oh no! I see what you&#8217;re saying. Perhaps I misrepresented what Will was suggesting by putting it that way.  The question is perhaps more how do we get the sensitivity into the technology.  Human bodies are fantastically sensitive and sensory.</p>
<p><strong>Bruce Sterling: Right.</strong></p>
<p><strong>Tish Shute: </strong>And we have these sensitivities. For instance, how could augmentations of reality be like a blush? You definitely want an interaction that&#8217;s not just this data being pushed at you. But what is the data that counts, right? Will often shows a slide of an iceberg, where the tip of the iceberg is the conscious mind.</p>
<p><strong>Bruce Sterling: Oh, I see.  Yeah.</strong></p>
<p><strong>Tish Shute: </strong> And underneath it is all the preconscious stuff that really counts, right?  Any thoughts on that?</p>
<p><strong>Bruce Sterling:  I did take interest in that.  Will has obviously been spending a lot of time studying cognition.</strong></p>
<p><strong>Tish Shute:</strong> Yes.</p>
<p><strong>Bruce Sterling: I&#8217;m not convinced that AR has got a lot to do with that. There is certainly a trend there. There are a lot of people who want to do body hacks and brain hacks. I can imagine AR being used for that purpose, but it seems like a niche application. What is the point of our accessing even more stuff that&#8217;s outside of our consciousness?</strong></p>
<p><strong>Tish Shute:</strong> One of the things he is talking about is game dynamics, isn&#8217;t it? The role of the imagination in play. For example, he shows the high dynamic range photos that make the world magical &#8211; something you want to engage with playfully. This, he points out, increases a sense of agency because you are encouraged to engage and to play with the world.</p>
<p><strong>Bruce Sterling: Well, I&#8217;m a literary guy. Italo Calvino did a lot of writing about this. He talked about the classics of literature. Why do we read the classics? Calvino said we do not read, but reread the classics. And the reason we do that is that, at first, we read a classic book and we think, &#8220;Boy, this book is really good.&#8221; Then, five years later, we read it again and we think, &#8220;Boy, this is a really good book, and it&#8217;s got so much more in it than I thought it had when I was 18.&#8221; Then we read it again at 28, and it&#8217;s like, &#8220;OK, now I really seem to understand this book, and it means something to me now that I didn&#8217;t know when I was 18 and 25.&#8221;</strong></p>
<p><strong>What you are doing through that access is learning something about yourself. So what Will is arguing is that what I really need is a better augmentation. So that I can go in there and sop up the book all at once. I can grab every cultural nuance in it, instead of the stuff that&#8217;s sliding past me because I&#8217;m 18 and kind of young and hasty. Maybe I could have certain words and phrases helpfully underlined, that are like, &#8220;OK, well, this part is problematic for you.&#8221; In some sense, that&#8217;s not allowing me to be 18.</strong></p>
<p><strong>I&#8217;m never going to have the experience of my own maturation against this text, because I&#8217;ve devoured it all in one gulp.</strong></p>
<p><strong>My argument would be that if you want people to be more sensitive toward certain, say, issues and problems, it&#8217;s better to find the people who are already sensitive to those issues and problems, and give them a bigger stake in your augmentation system.</strong></p>
<p><strong>Tish Shute:</strong> Yes the social augmented experiences are going to be the most valuable.</p>
<p><strong>Bruce Sterling: Say that I am really worried about public health. Well, if I have a lot of nurses that are using my system, people who are aware of my issues, then I could be walking around and I&#8217;ll see a lot more tags saying, &#8220;This is where he got food poisoning!&#8221; &#8220;In this shooting gallery, many people have caught AIDS!&#8221; Or, you know, &#8220;Tuberculosis has been spotted over here in this building.&#8221;</strong></p>
<p><strong>At that point, I could simply share their knowledge and get some social intelligence. As opposed to trying to amp the basements of my little hacker-mind and drag stuff up that&#8217;s escaped my conscious attention.</strong></p>
<p><strong>Tish Shute:</strong> Interesting &#8211; that seems to bring us to another recurring theme in AR: people tend to pigeonhole it as &#8220;merely&#8221; a visual interface. But actually, it&#8217;s the intersection, isn&#8217;t it, of social intelligence and augmentation.</p>
<p><strong>Bruce Sterling: Well, it depends entirely on how you design the system. If I&#8217;ve got a military augmented reality, I would expect that to be mostly about urban fighting. It&#8217;s going to be about kicking in a door and shooting terrorists. If I pull that helmet off my head and put that on the head of an emergency worker or a cop, I&#8217;m going to get a militarized cop or a militarized emergency worker.</strong></p>
<p><strong>Tish Shute:</strong> Well, the histories of the two great mass media of the twentieth century &#8211; TV and the atomic bomb &#8211; were intertwined, and I suppose the evolution of ubiquitous media, augmented reality and urban warfare is already intertwined too. So how can we encourage augmented realities to move beyond the military roots common to much technology and into more peaceful urban realities?</p>
<p><strong>Bruce Sterling: Well, it&#8217;s a matter of deciding whose reality it is that you&#8217;re trying to augment. I&#8217;m not trying to be a bleeding heart about it, but obviously there are people in our society right now with reality that could really use some augmentation. They are mostly disadvantaged people. They are vision impaired, or maybe they have autism. They might be senile and just can&#8217;t remember where they put their shoes. These are people who could really use some help, right?</strong></p>
<p><strong>So, start with people who really need sensory or cognitive help. Before you turn our geeks into Superman, why don&#8217;t you try turning some people who are harmed into more functional individuals? Then you&#8217;ll be able to learn how to do that. Then maybe you can ramp it up to these Nietzschean heights of the superb Man With the X-ray Eyes. Whatever.</strong></p>
<p><strong>Tish Shute:</strong> Did you notice that a couple of companies, like <a href="http://www.tagwhat.com/" target="_blank">TagWhat</a>, actually have apps geared towards people with disabilities? I haven&#8217;t had a chance to check them out.</p>
<p><strong>Bruce Sterling: I&#8217;m sorry, I wasn&#8217;t looking at their tags.</strong></p>
<p><strong>Tish Shute:</strong> I was discussing this with Joshua, who mentioned <a href="http://www.eyewriter.org/" target="_blank">Zachary Lieberman&#8217;s Eye Writer</a>, which is for people with locked-in syndrome. Do you know that?</p>
<p><strong>Bruce Sterling: Sure. And people appreciate that because the poor guy, he&#8217;s laid up with Lou Gehrig&#8217;s Disease. Now they&#8217;ve given him a way out. AR is like a spark of new hope that gives his life meaning. What&#8217;s wrong with that?</strong></p>
<p><strong>Tish Shute:</strong> Yeah. And <a href="http://www.youtube.com/watch?v=IJ8VMLECToQ" target="_blank">Tim Byrne using Sixth Sense</a> for Autism is interesting.</p>
<p><strong>Bruce Sterling: Let&#8217;s consider it the other way. Let&#8217;s say this graffiti writer there, instead of him being sick and weak, let&#8217;s say he&#8217;s an athlete. So I want to make him into a super-human graffiti writer. I want him to run around graffiti-tagging the entire town before dawn. Is that a good idea? Do we need that? Super human, super taggers? What if he&#8217;s going to spray up stencils of Nietzsche? I kinda wonder whether the game is worth the candle.</strong></p>
<p><strong>Tish Shute: </strong>Yes, I suppose it is not a great social scenario to be always augmenting the lives of the elites! Hmm, the third single-word interview question is &#8220;homophily,&#8221; and earlier you were saying that we&#8217;ve kinda got to accept this is very much part of AR &#8211; it&#8217;s how it works, because hyperlocal experiences get created by local communities &#8211; which up to now have tended to be homophilies.</p>
<p><strong>Bruce Sterling: Well, I think that&#8217;s easily handled with some design thinking. You&#8217;ve got to do some user observation and show some sympathy with the user, and to be aware that you&#8217;re designing for the user and you&#8217;re not designing for yourself.</strong></p>
<p><strong>In a field as young as this, it&#8217;s mostly geeks building cool stuff for geeks. In a lot of ways, it&#8217;s a &#8220;can you top this&#8221; contest. That&#8217;s OK, but it&#8217;s not good design to be your own client all the time. It&#8217;s like writing novels to amuse yourself, or sitting on the porch singing the blues on your own guitar with only yourself to hear.</strong></p>
<p><strong>Tish Shute:</strong> What will it take for AR to mature out of this &#8220;geeks building cool stuff for geeks&#8221; phase, do you think?</p>
<p><strong>Bruce Sterling: It&#8217;s necessary to master some of the tools first. I think of the way the web has developed over the years. When the World Wide Web first appeared, it was just for physicists, and was all line commands and quite unstable and difficult. Then you got usability studies, and things like Ajax and so forth. It&#8217;s a very painstaking thing.</strong></p>
<p><strong>We&#8217;re not best at building interfaces for the best computer scientists. Web 2.0 was built from things like watching people cry while they were trying to fill out insurance forms. &#8220;Well, why are you so upset?&#8221;</strong></p>
<p><strong>&#8220;Well, I got to the end of the webpage, and then it said I took too long, and it cut me off and now I have to start all over!&#8221; <a href="http://blog.jjg.net/" target="_blank">Jesse James Garrett</a>, right? Benefactor of mankind.</strong></p>
<p><strong>If you&#8217;re experienced, you think: &#8220;Why don&#8217;t I build a little module here, and kind of move the form over here, then I&#8217;ll periodically update it with some asynchronous JavaScript and XML.&#8221; And people are like, &#8220;Gee, how odd.&#8221; But that really works for real people. It comes from studying what people want to do. Whereas the current AR approach to a problem like the insurance form would be like, &#8220;I will give you the ability to record the entire insurance form, and it will flash before your eyes!&#8221; OK great, that&#8217;s a cool hack, but I don&#8217;t really need X-Ray Eyes to fill out my insurance form. What I need is a more user-friendly interface.</strong></p>
<p><strong>Tish Shute:</strong> Well, it seems like we are moving into the terrain of Joshua&#8217;s fifth word, &#8220;ventilation.&#8221; If I understand it rightly, it is at least partially the antidote to territorialization: the idea that a place needs air, so we come out of the hermetically sealed boxes in which we relate to a place &#8211; and what kind of augmentation would bring more oxygen to that space?</p>
<p>There was an interesting moment at the Auggies when <a href="http://twitter.com/dutchcowboy" target="_blank">Maarten Lens-FitzGerald</a> presented the guerrilla shopping Layar, and Mark Billinghurst and Jesse Schell, who spoke first, didn&#8217;t seem too impressed. They didn&#8217;t want to walk to shopping &#8211; that was what web shopping did, it saved us from walking to shop&#8230; But I felt you picked up on something which might have some bearing on &#8220;ventilation,&#8221; in that this AR shopping Layar was kind of squatting Prada &#8211; a favela chic AR shopping thing?</p>
<p><strong>Bruce Sterling: I wasn&#8217;t sure if I was interpreting what Maarten had in mind by that. But I think Maarten sees his structure accurately as an experience thing rather than a mapping thing. I think he&#8217;s proudest of things like the Berlin Wall app on Layar, as opposed to Layars that help you go get a hamburger. It&#8217;s like&#8230; so when Layar inserts parasitic augmented shopping over other people&#8217;s real shopping? That was rather a subversive thing.</strong></p>
<p><strong>I think the key there is that his client is called &#8220;Hostage T-shirts,&#8221; right? I mean it&#8217;s actually kind of a transgressive little hippy T-shirt store that Layar can dump anywhere in the world. Layered right over, say, Versace and Prada. I don&#8217;t know what becomes of that effort. And I&#8217;m not sure about the term &#8220;ventilation,&#8221; because that&#8217;s a term of art I haven&#8217;t heard much.</strong></p>
<p><strong>Tish Shute:</strong> Maybe it&#8217;s like in a cafe.  Ventilation would mean we were able to communicate with all these different categories of people that we normally would be unable to connect to, even though we might be sitting only a few feet apart.</p>
<p><strong>Bruce Sterling: So it means ventilation in the bottles of our homophilies.</strong></p>
<p><strong>That&#8217;s not a personal problem for me. I commonly live in foreign cities and, you know, spend a helluva lot of time talking to strangers at conferences. So I don&#8217;t think I&#8217;d have that particular tight little social island problem.</strong></p>
<p><strong>Tish Shute:</strong> Of the three judges at the Auggies, you seemed most enthusiastic about the Layar entry.</p>
<p><strong>Bruce Sterling: It may be they&#8217;re not as familiar with the business models of locative AR as I am, and as Maarten is. It was kind of a subtle in-joke he was making about Layar&#8217;s own business model there.</strong></p>
<p><strong>Tish Shute: </strong>How do you explain that?</p>
<p><strong>Bruce Sterling: Well, you know, Layar&#8217;s in the business of selling software to make mapping and urban structures into ecommerce.</strong></p>
<p><strong>The ideal way to do that obviously would be to move the richest customers into the most expensive shops in the most rapid way possible. Or at least distribute them in the directions they want to go, a la Google. Whereas this app that Maarten was talking about puts big barnacles in the way that are selling punk t-shirts.</strong></p>
<p><strong>Tish Shute:</strong> Right! Right!</p>
<p><strong>Bruce Sterling: The Dutch are a bit subtle in their humor. I rather imagine there&#8217;s a lot of discussion in Layar&#8217;s inner circle about exactly what they want developers to do with their platform. They&#8217;re going to have considerable political difficulty deciding who can have a Layar key and how you discipline people when they start doing weird stuff. &#8220;The Oakland Medical Marijuana layar.&#8221;</strong></p>
<p><strong>Tish Shute:</strong> Well, finding nudists is one of the top layars at the moment.</p>
<p><strong>Bruce Sterling: You know, obviously so. And finding narcotics in Amsterdam, or a prostitution layer. I warned them nine months ago this was bound to happen. I&#8217;m sure they&#8217;re aware of it. I don&#8217;t think Layar wants Google&#8217;s style of cool, technocratic detachment.</strong></p>
<p><strong>Tish Shute:</strong> But that&#8217;s pretty difficult to do in current augmented reality, because we don&#8217;t have all the mathematical voodoo for full-on AR search yet, do we?</p>
<p><strong>Bruce Sterling: Well, you can hire it out. Somebody&#8217;s going to do it, if they get interested enough. There&#8217;s Nokia-Yahoo!. Nokia and Yahoo! just did a big corporate deal&#8230; involving Nokia&#8217;s mapping system and Yahoo&#8217;s localization. So the Nokia-Yahoo! mash-up is called Nooo! Or could be called Yahno. Yakia! Unfortunately ridiculous names.</strong></p>
<p><strong>Tish Shute:</strong> It&#8217;s interesting, because you mentioned the spiders&#8217; mating problem at Google. They&#8217;ve obviously got all the pieces to make this level of AR right now. But they actually haven&#8217;t done it yet.</p>
<p><strong>Bruce Sterling: There must be at least some discussion in Google, but the same goes for Microsoft. I&#8217;m frankly baffled by Microsoft, because it&#8217;s just full of insanely brilliant people. What the hell are they doing in there? Name one serious innovation that&#8217;s come out of their labs in five years. They make Integral Research look dynamic. It&#8217;s really kind of sad.</strong></p>
<p><strong>Tish Shute:</strong> It&#8217;s a very curious situation with AR, though, because AR, more than any new technology, relies on these big hoards of data, particularly for the mapping, right? And only the big four have the data &#8211; although we are beginning to see upstarts: Earthmine, SimpleGeo&#8230; Did you get a chance to meet Di-Ann Eisnor from <a href="http://www.waze.com/homepage/" target="_blank">Waze &#8211; real-time maps and traffic information based on the wisdom of the crowd</a>? Waze is a very interesting project that is a potential giant killer.</p>
<p><strong>Bruce Sterling: No, I didn&#8217;t talk to them. I&#8217;ve seen people speculate that Earthmine and Apple are going to make an alliance. I guess if you&#8217;re thinking that urban informatic mapping is a super big thing for AR, that must be true. But I&#8217;m not convinced that&#8217;s necessarily the case. People have pointed out that you can just use Google Maps, and you don&#8217;t have to walk around with a little visor. There are other aspects of AR besides the cell phone space. There&#8217;s Total Immersion&#8217;s big display screens. There&#8217;s the web-based fiduciary stuff. And there&#8217;s projection mapping. And then there&#8217;s experience design just for people who need their reality augmented for whatever personal or social reason. [dog barking]</strong></p>
<p><strong>Tish Shute:</strong> Right. Oh, Iâ€™m in the middleâ€¦ My sonâ€™s come. What a good hair cut!</p>
<p><strong>Bruce Sterling: Hi, there.</strong></p>
<p><strong>Tishâ€™s Son</strong>: Hi.</p>
<p><strong>Bruce Sterling: How's it going, sir? Good to see you&#8230;</strong></p>
<p><strong>Tishâ€™s Son:</strong> Good.</p>
<p><strong>Tish:</strong> [laughs]</p>
<p><strong>Bruce Sterling: Yeah. Nice looking shirt. I like the back of it.</strong></p>
<p><strong>Tish Shute:</strong> That's from the American Shaolin Temple. [laughs]</p>
<p><strong>Bruce Sterling: All right. Awesome. Kung Fu geek shirt.</strong></p>
<p><strong>Tish Shute:</strong> Yup, he is a bit of a Kung Fu geek. He and his dad did an iPhone app for Yu-Gi-Oh scoring.</p>
<p><strong>Bruce Sterling: Awesome. Plenty of Pokémon-style combat in Yu-Gi-Oh.</strong></p>
<p><strong>Tish Shute:</strong> Yeah. Well, it's interesting because you've talked about this aspect. The Pokémon aspect of AR hasn't kicked in yet. But it's obviously a match made in heaven to some degree, isn't it?</p>
<p><strong>Bruce Sterling: One would think so, yeah. The whole little-kid gaming thing. What does that have to do with Google or Bing? You don't need a massive database for stuff like that.</strong></p>
<p><strong>Tish Shute: </strong>Yeah, you're right. But good tracking, mapping and registration requires a lot of mapping data&#8230;</p>
<p><strong>Bruce Sterling: Well, our current tracking, mapping and registration requires that. Maybe there's some other way to hack it that we don't know about yet.</strong></p>
<p><strong>Tish Shute: </strong>That's a very interesting point. We always have to stretch the way we think about mapping&#8230; perhaps it's a real-time understanding of the location you're in&#8230; perhaps the map is being negotiated through several social processes?</p>
<p><strong>Bruce Sterling: There are maps, and then there are maps. There's a kind of artillery map where you need to know the precise location of target spaces. And then there's the kind of social map where I'm really looking for the IN-N-OUT Burger where my sister went last Tuesday. That's a different system.</strong></p>
<p><strong>Tish Shute:</strong> And I think with AR, at the moment, we're certainly getting the most out of the social maps. And the other part [machine perception technologies that detect the identity and physical configuration of objects relative to each other, to accurately project information alongside or overlaid on a physical object] is still kind of the big dream, isn't it?</p>
<p><strong>Bruce Sterling: They say that men never ask for directions and women never read maps. Clearly, the genders have different ways of navigating the world. Who's to say what manner of augmenting our experiences is hottest? I'm not convinced that today's rather rigid geolocativity is really what our society wants from that particular service. Maybe what we want is something more folksy. Some useful nudge in the right direction, as opposed to grids with 200 meters here and instructions to turn such-and-such.</strong></p>
<p><strong>Besides, there are other hacks we haven't considered. We're very dependent on GPS, but just suppose all those satellites are blown out of the sky in a solar storm. Would we really want to give up mapping? Wouldn't we just come up with some other nifty hack? Radio beacons, let's just say. Atomic clock timers in towns. Or maybe just little QR codes on lampposts that give you the exact location of that lamppost: just click the thing and have it calculate where you are.</strong></p>
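<p><em>Sterling's lamppost idea is easy to sketch. A minimal decoder, assuming the QR code carries an RFC 5870 "geo:" URI holding the lamppost's coordinates (the function name and sample payloads below are illustrative, not from the interview):</em></p>

```python
# Sketch of the lamppost hack: a QR code on a lamppost carries the
# lamppost's exact coordinates, so a phone can fix its position with
# no satellite signal. We assume the code holds an RFC 5870 "geo:"
# URI, e.g. "geo:40.7580,-73.9855".

def decode_lamppost(payload: str) -> tuple[float, float]:
    """Parse a geo: URI payload scanned from a lamppost QR code."""
    if not payload.startswith("geo:"):
        raise ValueError("not a geo: URI")
    coords = payload[4:].split(";")[0]  # drop optional params like ;u=10
    lat, lon = (float(v) for v in coords.split(",")[:2])
    return lat, lon

lat, lon = decode_lamppost("geo:40.7580,-73.9855")
print(lat, lon)  # the scanner is standing at this lamppost
```

<p><em>Scanning the code fixes the phone's position at that lamppost; no GPS constellation required.</em></p>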
<p><strong>Tish Shute:</strong> Yes, the <a href="http://thenexthope.org/" target="_blank">NextHope</a> <a href="http://thenexthope.org/2010/07/hackable-badge-accessory-kits-available/" target="_blank">OpenAMD project</a> had a clever way of triangulating location indoors.</p>
<p><strong>Bruce Sterling: Well, GPS is there and people all want to use it. It's got a good API, so of course you want to. And the guys who are good at doing it are real geolocative freaks. But the mere fact that we can do it this way, and that you can make it pay, doesn't mean that it's the ultimate way to provide that service to people. It's like saying that Egyptian hieroglyphics must be the greatest way to write, because we've got a lot of them and they're hard to learn. What if somebody comes along with an alphabet? It's going to be a little embarrassing.</strong></p>
<p><strong>Tish Shute:</strong> Yeah, that's a very good point. Now, this is a more simple, ordinary question about the event. <a href="http://www.ydreams.com/#/en/homepage/" target="_blank">YDreams</a> went off the map in the Auggie voting, and walked away with The Auggies. No one doubted that that was the most&#8230;</p>
<p><strong>Bruce Sterling: I don't know. I thought those <a href="http://occipital.com/blog/" target="_blank">Occipital</a> guys with the panoramic painting&#8230; That was hairy. I would have been tempted to give them the prize myself, actually.</strong></p>
<p><strong>Tish Shute:</strong> And what did you like best about that? Because I agree. I love <strong><a href="http://occipital.com/blog/" target="_blank">Occipital</a></strong>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/Screen-shot-2010-09-17-at-6.20.58-PM.png"><img class="alignnone size-medium wp-image-5704" title="Screen shot 2010-09-17 at 6.20.58 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/Screen-shot-2010-09-17-at-6.20.58-PM-300x41.png" alt="Screen shot 2010-09-17 at 6.20.58 PM" width="300" height="41" /></a></p>
<p><em>click to enlarge</em></p>
<p><strong>Bruce Sterling: I thought it was a more technically difficult stunt than the hand registration thing.  Using a hand as a 3-D cursor is hot, but  not like painting a panorama in 3-D in real time.  That was an impressive technical feat.</strong></p>
<p><strong>Tish Shute: </strong>And they hinted at the 2.1.1 AR, more AR version of that. What do you see coming out of that as possibilities?</p>
<p><strong>Bruce Sterling: Well, I'd heard of <a href="http://www.ydreams.com/#/en/homepage/" target="_blank">YDreams</a>, so I wasn't stunned. But I'd never heard of those guys. I wonder what the heck else they've got in the attic.</strong></p>
<p><strong>Tish Shute:</strong> very cool stuff&#8230;</p>
<p><strong>Bruce Sterling: Well, more power to them. But clearly YDreams was the popular favorite. And who couldn't like it? It was just so AR.</strong></p>
<p><strong>Tish Shute</strong>: It was so AR and so gorgeous.</p>
<p><strong>Bruce Sterling: It was pretty, actually. Except for their ugly menu button and poor font choice.</strong></p>
<p><strong>Tish Shute:</strong> Oh, yes. You didn't like that, did you? [laughs] But with the Occipital panorama, what do you see as the next stage of that?</p>
<p><strong>Bruce Sterling: Well, obviously quicker and faster. Quicker and faster and more accurate in a network. Let's just say I'm in New York and you're in New York and I'm calling you for help. And you say, where are you? I just whirl around like this and I mail it to you on a Google Wave. And you whirl around like that, and then we compare the two panoramas and do an instant triangulation. And you say: I'm over here on this red dot of your screen.</strong></p>
<p><strong>Tish Shute: </strong>Yeah, exactly.</p>
<p><strong>Bruce Sterling: We're navigating with panoramas by having two connected panoramas and considering the difference.</strong></p>
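<p><em>The geometric core of the two-panorama trick is just intersecting two lines of sight. A minimal flat-plane sketch, assuming each party already knows their own position and can read a compass bearing to the shared "red dot" out of their panorama (positions, bearings and the function name below are made up for illustration):</em></p>

```python
import math

def intersect_bearings(p1, b1, p2, b2):
    """Locate the point seen from known positions p1 and p2 at compass
    bearings b1 and b2 (degrees clockwise from +y, i.e. "north").
    Flat-plane sketch; returns (x, y), or None if the rays are parallel."""
    # Direction vectors: bearing 0 points along +y, bearing 90 along +x.
    d1 = (math.sin(math.radians(b1)), math.cos(math.radians(b1)))
    d2 = (math.sin(math.radians(b2)), math.cos(math.radians(b2)))
    # Solve p1 + t*d1 = p2 + s*d2 for t via a 2x2 determinant.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel lines of sight, no fix
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two friends 100 m apart both sight the same red dot:
print(intersect_bearings((0, 0), 45.0, (100, 0), 315.0))
```

<p><em>Real panorama matching would recover those bearings from image features rather than a compass, but the triangulation step is the same.</em></p>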
<p><strong>Tish Shute: </strong> Yeah, very interesting&#8230;</p>
<p><strong>Bruce Sterling: Not shabby, right?</strong></p>
<p><strong>Tish Shute:</strong> Not shabby at all.</p>
<p><strong>Bruce Sterling: If you could do it in real time.</strong></p>
<p><strong>Tish Shute:</strong> Then the other thing I missed, because I was going to meet Will, was the Launch Pad competition. Did you catch that?</p>
<p><strong>Bruce Sterling: I didn't see it either. I thought of another app, though.</strong></p>
<p><strong>Tish Shute:</strong> Oh!</p>
<p><strong>Bruce Sterling: You've got a panorama maker in your home office, and it just scans the office 24 hours, 365, and tags anything that moves, right? OK, where's the clipboard? At 8:15 it was over here. Now it's vanished. Now another object is viewed over here. So, logically, ping, you hit it with a sticky light and there it is, right?</strong></p>
<p><strong>Tish Shute:</strong> Oh, that's cool. Also, knowing what has changed in any environment would be a big enabler for a lot of AR visions.</p>
<p><strong>Bruce Sterling: I'm sure there are many other things you could do with panoramas.</strong></p>
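<p><em>The office-scanner idea reduces to diffing successive scans. A minimal sketch, assuming the scanner emits a dictionary of tagged objects and their positions (object names, coordinates and the function name are illustrative):</em></p>

```python
# Sketch of the always-on office scanner: keep a snapshot of tagged
# objects and their positions, and diff each new scan against it to
# answer "where's the clipboard?".

def diff_scans(before: dict, after: dict) -> dict:
    """Compare two {object: (x, y)} scans; report moved/vanished/appeared."""
    return {
        "moved": {o: (before[o], after[o])
                  for o in before.keys() & after.keys() if before[o] != after[o]},
        "vanished": sorted(before.keys() - after.keys()),
        "appeared": sorted(after.keys() - before.keys()),
    }

scan_0815 = {"clipboard": (1.2, 0.4), "mug": (0.3, 0.9)}
scan_0930 = {"mug": (0.3, 0.9), "clipboard": (2.0, 1.1), "phone": (0.5, 0.2)}
report = diff_scans(scan_0815, scan_0930)
print(report["moved"]["clipboard"])  # ((1.2, 0.4), (2.0, 1.1))
```

<p><em>The "sticky light" step would then highlight whatever entry lands in the moved or appeared buckets.</em></p>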
<p><strong>Tish Shute:</strong> My jet lag is beginning to kick in big time &#8211; so many ideas to pursue from are2010 &#8211; those panoramas are very exciting though.</p>
<p><strong>Bruce Sterling: Oh, well, it's all right. We can augment reality! I've got three heads and six hands!</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/09/17/urban-augmented-realities-and-social-augmentations-that-matter-interview-with-bruce-sterling-part-2/feed/</wfw:commentRss>
		<slash:comments>8</slash:comments>
		</item>
		<item>
		<title>Vision Based Augmented Reality (AR) in Smart Phones &#8211; Qualcomm&#8217;s AR SDK: Interview with Jay Wright</title>
		<link>http://www.ugotrade.com/2010/08/05/vision-based-augmented-reality-ar-in-smart-phones-qualcomms-ar-sdk-interview-with-jay-wright/</link>
		<comments>http://www.ugotrade.com/2010/08/05/vision-based-augmented-reality-ar-in-smart-phones-qualcomms-ar-sdk-interview-with-jay-wright/#comments</comments>
		<pubDate>Thu, 05 Aug 2010 22:56:11 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Android]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR HMDs]]></category>
		<category><![CDATA[AR standards]]></category>
		<category><![CDATA[AR version of Rock'em Sock'em]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality standards]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Chokkan Nabi]]></category>
		<category><![CDATA[Christian Doppler Handheld AR LAB in Graz]]></category>
		<category><![CDATA[Davide Carnovale]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[going beyond compass/gps based AR]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[InsideAR]]></category>
		<category><![CDATA[Junaio]]></category>
		<category><![CDATA[Junaio glue]]></category>
		<category><![CDATA[Karma Augmented Reality Mobile Architecture]]></category>
		<category><![CDATA[Kooaba]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Maarten Lens-FitzGerald]]></category>
		<category><![CDATA[markerless tracking]]></category>
		<category><![CDATA[Markus Strickler]]></category>
		<category><![CDATA[Metaio]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[open Android JPCT 3D engine]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Patrick O'Shaughnessey]]></category>
		<category><![CDATA[point and find]]></category>
		<category><![CDATA[Qualcomm]]></category>
		<category><![CDATA[Qualcomm AR Competition]]></category>
		<category><![CDATA[Qualcomm Augmented Reality Competition]]></category>
		<category><![CDATA[Qualcomm Augmented Reality Developer Challenge]]></category>
		<category><![CDATA[Qualcomm Augmented reality SDK]]></category>
		<category><![CDATA[Qualcomm Developer Challenge]]></category>
		<category><![CDATA[Simulation3D]]></category>
		<category><![CDATA[Snapdragon]]></category>
		<category><![CDATA[Thomas Alt]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Total Immersion]]></category>
		<category><![CDATA[Unifeye Mobile SDK]]></category>
		<category><![CDATA[Unifeye SDK]]></category>
		<category><![CDATA[Unity for AR]]></category>
		<category><![CDATA[Unity for augmented reality]]></category>
		<category><![CDATA[Unity3D]]></category>
		<category><![CDATA[Upliq 2010]]></category>
		<category><![CDATA[vision based AR]]></category>
		<category><![CDATA[vision based augmented reality]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[Yohan Baillot]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5593</guid>
<description><![CDATA[Recently, Qualcomm announced an SDK for vision based augmented reality &#8211; currently in private beta and open to the public this fall. The Qualcomm augmented reality (AR) bonanza will launch with a $200,000 developer challenge and an SDK that will put vision based augmented reality into the hands of developers without licensing fees. This is [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.qualcomm.com/videos/explore?search=mattel&amp;sort=&amp;channel=All" target="_blank"><img class="alignnone size-medium wp-image-5616" title="Screen shot 2010-08-05 at 6.07.36 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/Screen-shot-2010-08-05-at-6.07.36-PM-300x212.png" alt="Screen shot 2010-08-05 at 6.07.36 PM" width="300" height="212" /></a></p>
<p>Recently, <a href="http://www.qualcomm.com/" target="_blank">Qualcomm</a> announced <a href="http://qdevnet.com/ar" target="_blank">an SDK for vision based augmented reality</a> &#8211; currently in <a href="http://qdevnet.com/dev/augmented-reality/private-beta-program" target="_blank">private beta</a> and open to the public this fall. The Qualcomm augmented reality (AR) bonanza will launch with a <a href="http://qdevnet.com/dev/augmented-reality/developer-challenge" target="_blank">$200,000 developer challenge</a> and an SDK that will put vision based augmented reality into the hands of developers without licensing fees.</p>
<p>This is a big step forward for augmented reality and a very important move by an industry giant to support the rapidly evolving AR industry. Innovation at all levels of the AR stack, particularly at the hardware level (CPU/GPU optimization), is vital for the full vision of augmented reality &#8211; media tightly registered to physical space &#8211; to take center stage. Vision based AR takes mobile AR beyond compass/GPS based AR post-its, which are only loosely connected to the world (but the staple of most current AR apps), towards the holy grail of AR &#8211; markerless tracking with the whole world as the platform.</p>
<p>Click on the image above or <a href="http://www.qualcomm.com/videos/explore?search=mattel&amp;sort=&amp;channel=All" target="_blank">see here</a> for a video demo of an AR version of the Rock&#8217;em Sock&#8217;em Robots game. <a href="http://www.mattel.com/">Mattel</a>, one of the first companies working with the SDK, demoed AR Rock&#8217;em Sock&#8217;em at the <a href="http://uplinq.com/">Uplinq 2010</a> conference (see <a href="http://www.readwriteweb.com/archives/qualcomm_launching_mobile_sdk_for_vision-based_ar_on_android_this_fall.php" target="_blank">Chris Cameron's ReadWriteWeb write-up</a> on <a href="http://uplinq.com/">Uplinq 2010</a>).</p>
<p>The Qualcomm AR stack, which reaches from the metal to developer APIs, will give Android developers an important edge in AR development.   And, when vision based AR starts getting integrated with visual search capabilities, and combined with cool tools like <a href="http://unity3d.com/" target="_blank">Unity</a>, we will start to see the augmented world get really interesting.</p>
<p>Visual search is already an area of AR getting a lot of attention, with <a href="http://www.google.com/mobile/goggles/#text" target="_blank">Google Goggles</a>, <a href="http://europe.nokia.com/services-and-apps/nokia-point-and-find" target="_blank">Point and Find</a>, <a href="http://www.cnet.com.au/augmented-reality-taking-off-on-japanese-smartphones-339304998.htm" target="_blank">Japan&#8217;s NTT DoCoMo set to launch &#8220;chokkan nabi,&#8221;</a> or &#8220;intuitive navigation,&#8221; in September, and the <a href="http://www.layarnews.com/2010/07/kooaba-meets-layar.html" target="_blank">recent partnership between Layar and Kooaba</a>. <a href="http://www.metaio.com/" target="_blank">Metaio's</a> mobile augmented reality platform <a href="http://www.metaio.com/products/junaio/" target="_blank">Junaio</a> is already integrated with <a href="http://www.kooaba.com/" target="_blank">Kooaba's</a> computer vision capabilities.</p>
<p>And, of course, I am particularly excited about including open distributed real time communications for AR in this stack, which is why I asked a group of developers who have been contributing to the <a href="http://arwave.org/" target="_blank">ARWave</a> project if they had questions for Jay Wright of Qualcomm. Thank you <a href="http://www.linkedin.com/in/yohanbaillot" target="_blank">Yohan Baillot</a>, <a href="http://lightninglaboratories.com/" target="_blank">Gene Becker</a>, <a href="http://www.hook.org/" target="_blank">Anselm Hook</a>, <a href="http://patchedreality.com/about/" target="_blank">Patrick O&#8217;Shaughnessey</a>, <a href="http://www.lostagain.nl/" target="_blank">Thomas Wrobel</a>, <a href="http://twitter.com/kusako" target="_blank">Markus Strickler</a>, and <a href="http://twitter.com/need2revolt" target="_blank">Davide Carnovale</a> for your input. [Note: see my upcoming post about the future of <a href="http://arwave.org/">ARWave</a> and real time distributed communications for AR, following <a href="http://googleblog.blogspot.com/2010/08/update-on-google-wave.html" target="_blank">this Google announcement</a>.]</p>
<p><a href="http://www.linkedin.com/in/jaywright" target="_blank">Jay Wright</a> &#8220;is responsible for developing and driving Qualcomm's augmented reality commercialization strategy.&#8221; He &#8220;handles partnerships with leading innovators in industry and academia and leads Qualcomm's efforts in enabling augmented reality within the mobile ecosystem.&#8221; In the interview below, Jay very generously answers our questions in detail.</p>
<p>A key contributor of questions for this interview is Yohan Baillot.  Yohan is working on a full vision of AR &#8211; integrating computer vision, visual search, open distributed real time communications and AR eyewear.  Yohan Baillot is founder of <a href="http://www.simulation3d.biz/" target="_blank">Simulation3D</a>, a consulting and system integration company specializing in interactive visualization systems and eyewear-based AR systems.  (I hope to bring you an interview with Yohan soon!).</p>
<p>Qualcomm was the title sponsor for <a href="http://augmentedrealityevent.com/" target="_blank">are2010, Augmented Reality Event</a>, and  played a vital role in making this event an historic gathering of the talent and creative minds at the heart of the emerging AR industry.  Watch out for the videos of the are2010 sessions to be posted at the end of August.  My are2010 co-chair, <a href="http://ogmento.com/team" target="_blank">Ori Inbar</a>, is preparing them to go online while kicking his newly funded start up, <a href="http://ogmento.com/" target="_blank">Ogmento</a>, into high gear! Ogmento is also one of the start ups pioneering vision based AR.</p>
<p><a href="http://www.metaio.com/" target="_blank">Metaio</a> (with <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a>, one of the first augmented reality companies) has played a key role in bringing a vision component to smart phone augmented reality apps with their <a href="http://www.metaio.com/products/" target="_blank">Unifeye mobile SDK</a>. Junaio, Metaio's own mobile augmented reality platform, has gone beyond location based AR with &#8220;junaio glue&#8221; &#8211; &#8220;the camera's eye is now able to identify objects and &#8216;glue&#8217; object specific real-time, dynamic, social and 3D information onto the object itself&#8221; (see my upcoming interview with Metaio founder, Thomas Alt). Also, recently, Layar &#8211; who continue to innovate at a breathtaking pace &#8211; announced a partnership with the computer vision company Kooaba.</p>
<p>Both Maarten Lens-FitzGerald of Layar and Thomas Alt of Metaio, when I spoke to them recently, saw the Qualcomm SDK as a very positive development for AR, and they look forward to exploring its capabilities and integrating it where appropriate with their AR tools. See more about <a href="http://site.layar.com/company/blog/layar-will-visit-the-us/" target="_blank">Layar's upcoming visit to the US here &#8211; August 10th NYC, and August 12th SF</a>. Also save the date, Sept 27th, Munich, for <a href="http://www.metaio.com/index.php?id=1103" target="_blank">InsideAR</a>, Metaio's upcoming conference.</p>
<p>It is clear that vision based AR will be driving the next wave of AR apps. And, as Maarten and Thomas both pointed out, it will be interesting to see which use cases capture the imagination of users the most. Having more tools freely available to AR developers will certainly be a boost to creativity. And Qualcomm's SDK is going to give Android developers, in particular, a big opportunity to take the lead.</p>
<h3>Interview with Jay Wright, Director, Business Development, Qualcomm</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/JayWright.jpg"><img class="alignnone size-medium wp-image-5598" title="JayWright" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/JayWright-300x255.jpg" alt="JayWright" width="300" height="255" /></a></p>
<p><strong>Tish Shute:</strong> Before I start with questions on the new Qualcomm vision based augmented reality SDK, I want to briefly look ahead to what many people feel is vital for the full realization of augmented reality &#8211; head mounted displays, or more specifically, comfortable, sexy AR eyewear. Is Qualcomm going to be involved in the development of augmented eyewear and wearable displays?</p>
<p><strong>Jay Wright: I think there's some core technology that needs to come together so we can have what we think needs to be a see-through head mounted display with a decent field of view. And that looks like something that is quite possibly further out than a three to five year horizon.</strong></p>
<p><strong>Tish Shute:</strong> Gene Becker asked some interesting general questions about the Qualcomm AR initiatives. He said, &#8220;I'm unclear exactly what Qualcomm's goal is.&#8221; It would be interesting to hear the Qualcomm view from you, from the top down.</p>
<p><strong>Jay Wright:</strong> <strong> Our largest revenue stream comes from sales of chipsets.    And we see augmented reality as a technology that drives demand for increasing amounts of processing power.  So we want to create demand for chips, higher-end chips, and augmented reality does that.  Specifically vision based augmented reality because it is so computationally intensive.</strong></p>
<p><strong>Tish Shute:</strong> Yes. And I think that is why people are very excited by the Qualcomm SDK. It is the first free toolkit for developers to build vision apps with, isn't it? There's been nothing freely available before this, has there? But Qualcomm is also paying attention to the complete AR stack to support vision based AR development, from the chips to game/app development tools like Unity.</p>
<p><strong>Jay Wright: That's really the goal. We're not here to be in the augmented reality applications business. Qualcomm's role in the ecosystem has been to serve as an enabler. And that's what we want to do with augmented reality: provide the enabling technology that allows the entire ecosystem to flourish.</strong></p>
<h3>&#8220;Augmented Reality has a number of attributes that make it a great fit for Qualcomm&#8217;s core competencies&#8221;</h3>
<p><strong>Augmented Reality has a number of attributes that make it a great fit for Qualcomm's core competencies. It's very computationally intensive, algorithmically complex, requires tight integration of hardware and software, and benefits from tight integration of multiple hardware components. And that's the kind of problem we like here, where we can apply our core competence of really optimizing complex systems for performance, while at the same time minimizing power consumption.</strong></p>
<p><strong>And as you know, Tish, mobile AR is really extremely power sensitive. We sometimes talk about it as a battery's worst nightmare. It's roughly equivalent to playing a 3D game and recording a video all at the same time.</strong></p>
<p><strong>Whenever there is something that takes a lot of power, that's a definite opportunity for us to optimize it.</strong></p>
<p><strong>Tish Shute:</strong> Right. One of the core businesses is chips, right, but for Qualcomm there's basically a lot of profit in licensing. When I talked to the developer community about the Qualcomm SDK, developers' first question was, &#8220;What's the licensing? What's this going to cost us in the long run to develop on this SDK?&#8221; And they all had different takes on what your approach to licensing might or might not be. Could you clarify the approach to licensing, as I think this is a core concern for developers?</p>
<p><strong>Jay Wright: Anytime you see something for free, you kind of say, &#8220;Hey, what's the hook?&#8221; So yes, it's definitely a logical question. Our intent is not to generate licensing revenue from application developers using the SDK. So the SDK will be made available free of charge for development, and it will also be free of charge for developers to deploy applications.</strong></p>
<p><strong>Tish Shute:</strong> Now, this is another question. You include not just image recognition capabilities but also Unity in the package you are offering developers. Unity products usually involve a license, though they do have some free products too, I think. How does this work? And how do you separate your part from their part, or don't you?</p>
<p><strong>Jay Wright: That's a good question. What we're trying to do with the platform is incorporate it into tools that people already know how to use. So we're actually going to have the SDK support two different tool chains. One of them is the Android SDK and NDK. And then the other one is Unity.</strong></p>
<p><strong>We're working with Unity to create an extension to the Unity environment that will be available as part of the Unity installer when you install Unity from the Unity website. Developers will still be paying whatever license fees are associated with Unity's products on their existing pricing schedule.</strong></p>
<p><strong>Tish Shute:</strong> One of Thomas Wrobel's questions is whether developers can just use the image recognition without Unity. Your answer is yes, you can work with the computer vision component of the SDK separate from Unity?</p>
<p><strong>Jay Wright:  Yes, you can.</strong></p>
<p><strong>Tish Shute:</strong> Good, because we would like to build a completely open Android client for ARWave, and not tie it to Unity unless people choose to. He's using the <a href="http://www.jpct.net/" target="_blank">open Android JPCT 3D engine</a>, which he's adapting for AR. So he could actually use the part of the SDK that does image recognition and association with that, right?</p>
<p><strong>Jay Wright:  Thatâ€™s correct.  You are not required to use Unity.  Unity is just one option for building the application.</strong></p>
<p><strong>Tish Shute:</strong> Great! That's very good. I'm sure many developers are going to jump at the chance to use Unity, but it's nice to be flexible, because it's so early for AR that people have different ideas and new use cases coming up all the time. I think it's excellent you've divided that.</p>
<p>Another of Thomas's questions was, &#8220;Can developers use their own positioning data sharing solution?&#8221; He's really talking about AR blips.</p>
<p><strong>Jay Wright: With data sharing solutions, I am assuming that by data he means augmentation data or graphics?</strong></p>
<p><strong>Tish Shute:</strong> Yes, and I'll ask him to elaborate. But, at the moment, everyone is using different ideas for POI, aren't they?</p>
<h3>&#8220;The goal with our platform is to make it just as easy for a developer to create 3D content for the real world as it is for a game world or a virtual world.&#8221;</h3>
<p><strong>Jay Wright: Yes. So let me answer it this way, Tish. The goal with our platform is to make it just as easy for a developer to create 3D content for the real world as it is for a game world or a virtual world. So all we're really trying to do is provide the computer vision piece that makes the real world look like a bunch of geometric surfaces, and potentially some metadata that is associated with them, so you know what you are looking at.</strong></p>
<p><strong>So that means, from a developer's perspective, you are still doing all of the 3D content, all of the animations, all of the game logic, all of the rendering. You are still doing all of that yourself. So if you think about doing an AR game, you are doing everything you used to do, except you are not creating a virtual terrain. You are just going to map it onto the real world.</strong></p>
<p><strong>So if you want to do a browser that is doing POIs, your POI data, or augmentation, or metadata, or whatever it is, can be in your application, it can be in the cloud, it can be wherever you want to put it. We're not putting any constraints on what that content is or where it's stored.</strong></p>
<p><strong>Tish Shute:</strong> Right, and that's what I hoped for. And I think that does answer the question. People are interested to know how far Qualcomm is going with this. For instance, Gene Becker asked: &#8220;Do they see a business at a certain level in the AR stack?&#8221; As you said, AR development basically feeds into the core business of chip development, right? But does Qualcomm also see some new business models developing?</p>
<p><strong>Jay Wright: I think it's foreseeable that Qualcomm could identify other business opportunities down the line. But we're certainly not there today. Today, our motivation for the investment in AR is to create technology that is going to advance the chipset business.</strong></p>
<p><strong>Tish Shute:</strong> When the news about Qualcomm's support of a game development studio at Georgia Tech came out at the same time as the SDK, I wondered about the scope of Qualcomm's interest [for more on using Unity for AR development see the <a href="http://www.qualcomm.com/partials/service/video/14230?primary=0x319cb5&amp;secondary=0xffffff&amp;simple_endScreen=true&amp;disable_embed=false&amp;disable_send=false&amp;send_mailto=http://www.uplinq.com/&amp;disable_embedViewMore=true&amp;simple_infoPanel=true" target="_blank">Vision-Based Augmented Reality Technical Super Session video</a> from <a href="http://uplinq.com/">Uplinq 2010</a>]. For example, I am interested to know how the Qualcomm initiative in developing an AR stack connects to the effort to introduce an AR browser based on web standards, i.e., the <a href="https://research.cc.gatech.edu/polaris/content/home" target="_blank">KHARMA KML/HTML Augmented Reality Mobile Architecture from Blair MacIntyre and the Georgia Tech team</a> (image below)? Are you supporting the open standards based browser development too?</p>
<p><strong>Jay Wright:   Blair is going to continue to work on the browser effort.  And it's our expectation that he will use our SDK and technologies for vision pieces of the browser effort where appropriate.  So they are certainly not mutually exclusive.  I would just think about our technology as one element of what may be used in that browser, as I expect it would be an element of what any other app developer would put in their application, whether it be browser, or game, or whatever.</strong></p>
<p><strong>Tish Shute:</strong> Yes.  Now, this is an interesting question, which is sort of connected&#8230;I'm trying to keep some form of narrative for this!  It follows from the question about Blair's web-standards-based browser.  A few people have asked me why we haven't heard more from Qualcomm in all these various standards discussions that are starting to come up.  I mean, is it just too early, or are you too busy, or what?</p>
<p><strong>Jay Wright:  No, let me explain.  The type of standards that have come up so far have been around how HTML should be extended for geo-browser-type applications.  And while that's interesting, I think the standards efforts that Qualcomm would be more likely to be associated with in the near term are those related to APIs that are hardware accelerated.</strong></p>
<p><strong>So one of the things that we are in the process of doing right now, Tish &#8211; because as you know, Qualcomm is a company that adheres to standards and strives to produce a leading implementation of those standards on our hardware and software &#8211; is determining what API set within the existing SDK should be standardized.</strong></p>
<p><strong>Tish Shute:</strong> Right.</p>
<p>Now, my next question is, "Who are the other players at this level of the AR stack in the standards conversation? Who else is working at that level?"  Obviously, the AR Lab in Graz was, but now they are with Qualcomm, right?</p>
<p><strong>Jay Wright:   They are still independent.  Qualcomm is the exclusive industrial partner of the Christian Doppler Handheld AR Lab in Graz.</strong></p>
<p><strong>Tish Shute:</strong> Does this compete with, say, the work that other AR startups are doing?</p>
<p><strong>Jay Wright:  Our intent is not to compete with companies that have done augmented reality technology.  Our intent is to enable the entire ecosystem.  So we would like to work with both Metaio and Total Immersion to find ways that they can benefit from our technology.  That would be the hope &#8211; that our technology can kind of lift and float all boats in the ecosystem.</strong></p>
<p><strong>Tish Shute: </strong>There are not many implementations of vision based AR right now?  I mean, obviously Microsoft is doing stuff because they have <a href="http://www.robots.ox.ac.uk/~gk/" target="_blank">Georg Klein</a> now, right, and there is Google Goggles, Total Immersion, Metaio, and it will be interesting to see where Layar's partnership with Kooaba will lead.</p>
<p><strong>Jay Wright:  Yes.  I think there are relatively few commercial implementations of vision based AR stacks.</strong></p>
<p><strong>Tish Shute:</strong> One of Patrick O&#8217;Shaughnessey&#8217;s questions is about what features, very specifically, are going to be in the vision component.  Patrick O&#8217;Shaughnessey, of <a href="http://patchedreality.com/" target="_blank">Patched Reality</a>, working with <a title="Circ.us" href="http://circ.us/" target="_blank">Circ.us</a>, <a title="Edelman" href="http://edelman.com/" target="_blank">Edelman</a>, and <a title="metaio" href="http://metaio.com/" target="_blank">Metaio</a>, used the Unifeye SDK to do <a href="http://mashable.com/2010/07/09/ben-and-jerrys-iphone-app/" target="_blank">a vision based AR app for Ben and Jerry&#8217;s</a> that&#8217;s been getting all the attention lately.  He was a speaker at are2010.</p>
<p>He very specifically wants to know what features will be included in the computer vision component.  He says, "I'm most interested in understanding what features are going to be in the vision component.  Is it marker based?"  Well, I know it's more than marker based.  I saw some of it in <a href="http://www.readwriteweb.com/archives/qualcomm_launching_mobile_sdk_for_vision-based_ar_on_android_this_fall.php" target="_blank">Chris Cameron's ReadWriteWeb write-up</a> on <a href="http://uplinq.com/">Uplinq 2010</a>.  "Is it NFT? PTAM? Other?  Also, are you integrating any backend services?"  That is an interesting question!</p>
<p><strong>Jay Wright:  So let's get to the features on the client side, the vision based features.  There's support for what AR aficionados would know as natural feature targets, or image based targets.  And we use those to represent, obviously, 2D planar surfaces.</strong></p>
<p><strong>The other thing that we are trying to do to set expectations, Tish, about where these can be used is to let people know that they work best in what we're calling near-field environments.  So the idea isn't that you use the system to create a large-scale AR system that can recognize buildings indoors and outdoors.  It's the idea that I can create 3D experiences that take place on surfaces that are in my immediate field of view, whether that be on the table in front of me, or on the floor, or on the wall, or on the shelf.</strong></p>
<p><strong>Also, when you talk about near-field experiences, there are some other constraints that are implied.  Like, if it's in front of me, my immediate field of view is probably going to be pretty well lit.  And lighting, of course, is an important requirement.</strong></p>
<p><strong>So we'll support these natural feature targets, or image targets.  And we also have support for sort of a hybrid marker image type.  It's something called a frame marker, which has kind of a black border with some dots on it.</strong></p>
<p><strong><a href="http://www.qualcomm.com/partials/service/video/14230?primary=0x319cb5&amp;secondary=0xffffff&amp;simple_endScreen=true&amp;disable_embed=false&amp;disable_send=false&amp;send_mailto=http://www.uplinq.com/&amp;disable_embedViewMore=true&amp;simple_infoPanel=true" target="_blank"><img class="alignnone size-medium wp-image-5610" title="Screen shot 2010-08-05 at 5.13.50 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/Screen-shot-2010-08-05-at-5.13.50-PM-300x166.png" alt="Screen shot 2010-08-05 at 5.13.50 PM" width="300" height="166" /></a><br />
</strong></p>
<p>Click on the image above or <a href="http://www.qualcomm.com/partials/service/video/14230?primary=0x319cb5&amp;secondary=0xffffff&amp;simple_endScreen=true&amp;disable_embed=false&amp;disable_send=false&amp;send_mailto=http://www.uplinq.com/&amp;disable_embedViewMore=true&amp;simple_infoPanel=true" target="_blank">here to view Vision-Based Augmented Reality Technical Super Session video</a> from <a href="http://uplinq.com/">Uplinq 2010</a></p>
<p><strong>Jay Wright:  So there's this additional type.  And the reason for this additional hybrid marker type is it has a lower computational requirement than a natural feature target.  So the idea is these things can be used as game pieces or elements of play where I want to have a large number of them detected and tracked simultaneously.</strong></p>
<p><strong>So you can have, for example, one big natural feature target that serves as a game board or game surface, and you can use these other things as smaller game pieces.  And when you put them out, different types of content can appear on them and do different things.</strong></p>
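<p>The game-board-and-pieces idea above can be sketched in a few lines of Python.  This is purely an illustrative sketch of the pattern Jay describes, not Qualcomm's actual SDK API; it assumes each frame marker decodes to a numeric ID, and keeps an application-side registry mapping IDs to game content so that only registered, currently visible pieces get rendered:</p>

```python
# Illustrative sketch (NOT the Qualcomm SDK API): frame markers carry a
# numeric ID, so an app can map IDs to game-piece content and update only
# the pieces detected in the current camera frame.

class GameBoard:
    def __init__(self):
        self.registry = {}   # marker ID -> content name
        self.visible = {}    # marker ID -> last known pose

    def register(self, marker_id, content):
        """Associate a frame-marker ID with a game piece."""
        self.registry[marker_id] = content

    def on_frame(self, detections):
        """detections: list of (marker_id, pose) found this camera frame."""
        self.visible = {mid: pose for mid, pose in detections
                        if mid in self.registry}

    def pieces(self):
        """Content to render, one entry per tracked piece."""
        return [(self.registry[mid], pose)
                for mid, pose in self.visible.items()]

board = GameBoard()
board.register(7, "knight")
board.register(12, "dragon")
board.on_frame([(7, (0.1, 0.2, 0.0)), (99, (0.0, 0.0, 0.0))])  # 99 unregistered
print(board.pieces())  # [('knight', (0.1, 0.2, 0.0))]
```

<p>The point of the registry is exactly what Jay describes: the vision layer only reports marker IDs and poses, and the application decides what content appears on each piece.</p>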
<p><strong>Tish Shute:</strong> Yes, that's nice!  And the other thing I noticed was the virtual buttons.  How well developed is that?</p>
<p><strong>Jay Wright:  The idea behind virtual buttons is, in addition to supporting augmentation, we want to support interaction.  And we think there are going to be different types of user interaction with augmented reality content.  It may be hand tracking and finger tracking, but another compelling form we've identified so far is the ability for me to touch particular surfaces and have an event fire within the application.</strong></p>
<p><strong>So virtual buttons are rectangular areas on image targets that a developer can define, and they serve as buttons.  So you can create a target that is a game board, for example, and define certain regions.  And when the user covers that region with his hand, like pushing a button, your application can detect that event and take some action.</strong></p>
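<p>One plausible way to picture how such a button could work &#8211; and this is my own toy model, not the SDK's actual interface &#8211; is as occlusion detection: the button is a rectangle on the image target, and it "fires" when most of the tracked features inside that rectangle disappear from view, e.g. because the user's hand covers them:</p>

```python
# Toy model of a virtual button (NOT the actual SDK interface): a rectangle
# on the image target fires its callback once when the features inside it
# become occluded, e.g. by the user's hand.

def make_button(name, rect, on_press, threshold=0.5):
    """rect = (x0, y0, x1, y1) in target coordinates."""
    return {"name": name, "rect": rect, "on_press": on_press,
            "threshold": threshold, "pressed": False}

def inside(pt, rect):
    x, y = pt
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

def update_button(button, target_features, visible_features):
    """Fire on_press once when most features in the rect disappear."""
    expected = [f for f in target_features if inside(f, button["rect"])]
    if not expected:
        return
    seen = sum(1 for f in expected if f in visible_features)
    occluded = seen / len(expected) < button["threshold"]
    if occluded and not button["pressed"]:
        button["on_press"](button["name"])   # edge-triggered event
    button["pressed"] = occluded

events = []
btn = make_button("drum", (0.0, 0.0, 1.0, 1.0), events.append)
features = [(0.2, 0.2), (0.5, 0.5), (0.8, 0.8)]
update_button(btn, features, visible_features=features)      # nothing hidden
update_button(btn, features, visible_features=[(0.2, 0.2)])  # hand covers it
print(events)  # ['drum']
```

<p>The edge-triggering matters: the callback fires once when the region becomes covered, rather than on every frame the hand stays there &#8211; the "pushing a button" behaviour Jay describes.</p>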
<p><strong>Tish Shute:</strong> Nice!  And what is the documentation on these capabilities that is offered by Qualcomm&#8230;For example, Yohan Baillot, who is interested in integrating eyewear-based AR systems with smartphones, asked: How deep does this go?  Will there be full documentation on <a href="http://www.qualcomm.com/products_services/chipsets/snapdragon.html" target="_blank">Snapdragon</a> for people who want to work at that level?  Is there a chip SDK?</p>
<p><strong>Jay Wright:   Qualcomm's model is to work with providers of the operating systems and deliver the functionality of the chip through the operating system.  So many operating system APIs will take advantage of functionality that's in the chip.  But there is no separate chip SDK per se.</strong></p>
<p><strong>Tish Shute:</strong> I suppose that does come up a little bit with one of Anselm Hook's questions, because there is some overlap with Google Goggles here, isn't there, in terms of what you're doing, right?  Are you going to work closely with Google Goggles?</p>
<p><strong>Jay Wright: Google Goggles is performing what we've described as 'visual search'.  So the idea is you take a picture, send it to the cloud to identify it, and the results come back.  I think if we see Google Goggles go in a direction where there's an AR experience, that would be a good area for us to collaborate with Google.</strong></p>
<p><strong>Tish Shute:</strong> <a href="http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/" target="_blank">Anselm Hook</a> is very interested in having some kind of open standard around this physical tagging of the world, right &#8211; the physical world as a platform.  But I suppose that's down the road.  Is there a plan to start talking about open standards here &#8211; visual search with image recognition?  That's a very powerful combination.  (See my interview with Anselm Hook, linked above.)</p>
<p><strong>Jay Wright:    I think it is.  And we're very interested to hear from developers and others that have ideas about how they would want to integrate with the functionality that we have to best enable those kinds of combined experiences.</strong></p>
<p><strong>Tish Shute:</strong> Well, I know Anselm has a lot of very important ideas on that.</p>
<p><strong>Jay Wright: I'd be very interested in hearing those because we want to do everything we can to enable the maximum number of applications and best user experience for anything that people want to do.</strong></p>
<p><strong>Tish Shute:</strong> Let's go back to some specific questions about the platform, right?  For example, Yohan Baillot asked, "Is arbitrary image/tag recognition supported?  Is the tag/image specifiable by the user?  Is face recognition supported?"  Not yet, face recognition, right?</p>
<p><strong>Jay Wright:    Not yet.</strong></p>
<p><strong>Tish Shute:</strong> What are the plans with that?</p>
<p><strong>Jay Wright:    I think we've identified it as an interesting area and something that there's some interest in, but have not made a decision on a particular technology direction.</strong></p>
<p><strong>Tish Shute:</strong> You've answered some of these, but 3D model based vision tracking.  Yohan's question was, "Is 3D model based vision tracking supported (that is, recover the pose of the camera using a known 3D model and a 2D camera view of this model)?"</p>
<p><strong>Jay Wright:    That's something we're looking at very closely, but again, we don't have a plan or a future date for it.</strong></p>
<p><strong>Tish Shute:</strong> And you said that natural landmark tracking is not supported, right?</p>
<p><strong>Jay Wright:    I don't know if I know what that means, Tish.  But we don't have any APIs that provide compass or GPS functionality other than what already exists in the operating system.  So if you want to take advantage of the compass or other sensors, you can absolutely do that, but the SDK does not currently provide anything different or anything more than what already exists in the OS.</strong></p>
<p><strong>Tish Shute:</strong> This is an interesting question: "Is Snapdragon offloading some processing to the GPU, if any?"</p>
<p><strong>Jay Wright:    Certainly rendering functionality that utilizes OpenGL is being offloaded to the GPU.  We're currently in the process of determining multiple methods for offloading functionality between both symmetric and heterogeneous cores on Snapdragon, which would include the GPU, the apps processor, and DSPs.</strong></p>
<p><strong>Tish Shute: </strong> No one has truly solved optimizing the GPU/CPU for mobile AR yet, have they?</p>
<p><strong>Jay Wright:    That really gets to the heart of the optimization here.  Which pieces ought to be operating on which cores, and when, and why?  And that's something that we're looking at very closely.</strong></p>
<p><strong>Tish Shute: </strong> Right.  The only AR that is truly 3D media tightly registered to the physical world has been done for military and medical applications (and that has often been with a locked-off camera!).  But to take mobile AR to the next level, I think many developers would like access to the CPU/GPU &#8211; for example, a developer interested in the future of eyewear, like Yohan.</p>
<p><strong>Jay Wright:     We're very interested in hearing what kinds of tools developers would like to see.</strong></p>
<p><strong>Tish Shute:</strong> What is the best forum for discussing feature specifics?</p>
<p><strong>Jay Wright:    To provide feature requests to us?</strong></p>
<p><strong>Tish Shute:</strong> Yes. And discuss them.</p>
<p><strong>Jay Wright:    If people go to <a href="http://qdevnet.com/ar" target="_blank">qdevnet.com/ar</a>, there's an application up there for the private beta program.  So if people do have ideas about features or other things they would like to see, they're welcome to submit [their requests and ideas] there.</strong></p>
<p><strong>Tish Shute:</strong> I also have some questions about the specifics of the competition.  Some people are a little confused about some things.  Yohan asked, "What is the expected form of the project?  Lab demonstration?  Specific capability?  Complete end-to-end system?"</p>
<p><strong>Jay Wright:  The only requirement is that they submit an Android application that we can then get running on a device.  So if it has a backend component or backend server that it works against, great.  If it does, it does.  If it doesn't, it doesn't.  But that's really it.  There's no limit to the application category.  It can be a game, it can be a museum tour, it can be a children's learning game or learning experience.  It can really be anything.  The idea is we want to find experiences for which AR delivers some unique value.  We'll be announcing more specifics about the competition in the near future.</strong></p>
<p><strong>Tish Shute:</strong> Right, because some people weren't sure, with Unity being separate, whether it was biased towards games.  And it's not really, is it?</p>
<p><strong>Jay Wright:  Unity is a bias toward just rapid development for 3D, I think.  Itâ€™s most commonly associated with games, but there are also a lot of Unity customers that use it for medical simulations and other types of applications that arenâ€™t really games at all.</strong></p>
<p><strong>Tish Shute:</strong> Yes.  It's very flexible, I know.  You did bring up the backend services again.  Are you thinking of offering any of that?</p>
<p><strong>Jay Wright:  There is a backend tool that we offer.  And the backend tool is what you use to generate your targets.  So if you want to create or use a particular image for a target in your application, you upload it to our target management application, and then it will evaluate that target and tell you how well it will work.  So as you know, certain images are more likely to be recognizable than others.  And so there are metrics in that application that will give you some feedback.</strong></p>
<p><strong>And then you can download your target resource from the website that you can then incorporate into your application project.</strong></p>
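<p>Why do some images rate better as targets than others?  A rough intuition &#8211; and this is my own toy heuristic, not Qualcomm's evaluation metric &#8211; is that targets rich in high-contrast detail give the tracker more features to lock onto, which can be approximated by counting pixels with strong local gradients:</p>

```python
# Toy heuristic (NOT Qualcomm's metric): score an image's "feature richness"
# as the fraction of pixels with a strong local intensity gradient.

def trackability(image):
    """image: 2D list of grayscale values 0-255; returns 0.0-1.0."""
    h, w = len(image), len(image[0])
    strong = 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(image[y][x + 1] - image[y][x])  # horizontal gradient
            gy = abs(image[y + 1][x] - image[y][x])  # vertical gradient
            if gx + gy > 40:                         # arbitrary contrast cutoff
                strong += 1
    return strong / ((h - 1) * (w - 1))

flat = [[128] * 8 for _ in range(8)]                 # featureless grey card
checker = [[0 if (x + y) % 2 else 255 for x in range(8)] for y in range(8)]
print(trackability(flat) < trackability(checker))  # True
```

<p>A blank card scores 0.0 while a high-contrast pattern scores near 1.0; real target-rating tools use far more sophisticated feature detectors, but the feedback they give developers follows the same intuition.</p>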
<p><strong>Tish Shute:</strong> So this &#8211; you know, all of this information and documentation &#8211; is available at the moment only to people who are in the private beta, right?</p>
<p><strong>Jay Wright:  That's correct.</strong></p>
<p><strong>Tish Shute: </strong>So that's an incentive.  Now, the other thing that people seem confused about, just to encourage people to submit to the private beta: in one part you say 25 developers.  And some people have thought that meant it was limited to 25 individuals.  And some people have like maybe four people on their team, so they were going, "Well, are we going to be accepted because we have four developers, or do we count as one because we are all working on the same project?"</p>
<p><strong>Jay Wright:   It's just 25 companies.</strong></p>
<p><strong>Tish Shute: </strong> OK.  I think we've gone through the questions.  Just to clarify, and maybe give some incentive for people to apply to the private beta&#8230;the big advantage of getting in the private beta, aside from getting a month's start on the competition, is that you get a chance to input, right?</p>
<p><strong>Jay Wright:  Yes.  A chance to provide feedback, get early access to the technology.  And then we are also providing a free HTC phone.</strong></p>
<p><strong>Tish Shute:</strong> Oh, yes.  I forgot the phone.  Yes, right.  In the requirements, though, you basically seem to be asking for sort of a full app&#8230;some people get reticent about delivering their full application plan, right?</p>
<p><strong>Jay Wright:  Yes.  I understand that.  People should just reveal what they are comfortable talking about.  Just so you understand the constraint on this end: this is early technology, and we're trying to understand exactly what the support requirement is going to be.  And we have limited support resources at this time, so we want to make sure that we can focus the resources that we have on folks that are really going to use the technology and have a sound plan to actually build something.  So that's really the motivation behind limiting the size of the private beta.</strong></p>
<p><strong>Tish Shute:</strong> OK.  Yes, it's good to reiterate that.  We're down to the last question that I have, and then I'll ask you if there is anything that I missed.  You say you are partnering with Mattel.  Who are the developers?  Because I mean, Mattel isn't an augmented reality development team.</p>
<p><strong>Jay Wright:  Mattel used a subcontractor, <a href="http://www.aura.net.au/">Aura Interactive</a>.</strong></p>
<p><strong>Tish Shute: </strong> Nice.  But that's your only partner that I saw, right?  Why Mattel?</p>
<p><strong>Jay Wright:  Well, to launch a new technology, companies will often find showcase partners to demonstrate compelling uses of it.  And we thought Mattel and the Rock'em Sock'em&#8482; toy was a great example of combining augmented reality with an existing toy.</strong></p>
<p><strong>Tish Shute:</strong> And I think people agree with you on Rock'em Sock'em (see <a href="http://www.readwriteweb.com/archives/qualcomm_launching_mobile_sdk_for_vision-based_ar_on_android_this_fall.php" target="_blank">Chris Cameron&#8217;s RWW post</a>).</p>
<p><strong>Jay Wright:  And there are other showcase partners and applications that we will continue to work on to kind of spur the ecosystem and show what is possible.</strong></p>
<p><strong>Tish Shute: </strong>OK.  Now, is there anything I've left out?  What's the core of this narrative that we need to get across, and have I left out any key piece?</p>
<p><strong>Jay Wright:  I think you've done an excellent job of covering all the bases, Tish.</strong></p>
<p><strong>Tish Shute: </strong> [laughs]</p>
<p><strong>Jay Wright:  I think the important overriding message to get across is that we really see ourselves in an enablement role here, and that we are trying to provide&#8230;we'd like to provide fundamental technology that helps all developers build content for the real world.</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/08/05/vision-based-augmented-reality-ar-in-smart-phones-qualcomms-ar-sdk-interview-with-jay-wright/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Interview with Bruce Sterling, Part I: At the 9am of the Augmented Reality Industry, are2010</title>
		<link>http://www.ugotrade.com/2010/06/16/interview-with-bruce-sterling-part-i-at-the-9am-of-the-augmented-reality-industry-are2010/</link>
		<comments>http://www.ugotrade.com/2010/06/16/interview-with-bruce-sterling-part-i-at-the-9am-of-the-augmented-reality-industry-are2010/#comments</comments>
		<pubDate>Wed, 16 Jun 2010 21:58:28 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[3D mapping and Augmented Reality]]></category>
		<category><![CDATA[3d smartphone animated avatars]]></category>
		<category><![CDATA[Alan Turing-style AI]]></category>
		<category><![CDATA[Andrea Carignano]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR and Farmville]]></category>
		<category><![CDATA[AR as an interface for devices]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR goggles]]></category>
		<category><![CDATA[AR HMDs]]></category>
		<category><![CDATA[AR technology]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[AR Wave at are2010]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[ARWave at are2010]]></category>
		<category><![CDATA[Auggie Award]]></category>
		<category><![CDATA[Augmented Reality Consortium]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality game development]]></category>
		<category><![CDATA[augmented reality gamers]]></category>
		<category><![CDATA[augmented reality games]]></category>
		<category><![CDATA[Augmented reality shoes]]></category>
		<category><![CDATA[Blaise Aguera y Arcas]]></category>
		<category><![CDATA[Brad Foxhaven]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Bruno Uzzan]]></category>
		<category><![CDATA[Chris Cameron]]></category>
		<category><![CDATA[Cloud Mirror]]></category>
		<category><![CDATA[distributed AR]]></category>
		<category><![CDATA[e23 Games]]></category>
		<category><![CDATA[Eric Gradman]]></category>
		<category><![CDATA[federation and AR]]></category>
		<category><![CDATA[fiducial markers]]></category>
		<category><![CDATA[gamer guys at are2010]]></category>
		<category><![CDATA[glocal]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[Google Goggles on the iphone]]></category>
		<category><![CDATA[H.E.AI.D]]></category>
		<category><![CDATA[Helen Papagiannis]]></category>
		<category><![CDATA[Iguchi Takahito]]></category>
		<category><![CDATA[Ivan Franco]]></category>
		<category><![CDATA[Jay Wright]]></category>
		<category><![CDATA[Jesse Schell]]></category>
		<category><![CDATA[Jesse Schell at are2010]]></category>
		<category><![CDATA[Jesse Schell's keynote at are2010]]></category>
		<category><![CDATA[Joe Dunn]]></category>
		<category><![CDATA[Joshua Kauffman]]></category>
		<category><![CDATA[Kent Demaine]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[linked data and AR]]></category>
		<category><![CDATA[Mark Billinghurst]]></category>
		<category><![CDATA[Mark Billinghurst at are2010]]></category>
		<category><![CDATA[Marvin Minsky-style hard AI]]></category>
		<category><![CDATA[Microsoft and AR]]></category>
		<category><![CDATA[mini-global micro-startups]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[oooii]]></category>
		<category><![CDATA[Open AR]]></category>
		<category><![CDATA[Open AR Stack]]></category>
		<category><![CDATA[Open AR Standards]]></category>
		<category><![CDATA[OpenAR]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Parrot AR Drone]]></category>
		<category><![CDATA[Patched Reality]]></category>
		<category><![CDATA[Patrick O'Shaughnessey]]></category>
		<category><![CDATA[Qualcomm]]></category>
		<category><![CDATA[Qualcomm at are2010]]></category>
		<category><![CDATA[réalité augmentée]]></category>
		<category><![CDATA[realtà aumentata]]></category>
		<category><![CDATA[Roger Corman]]></category>
		<category><![CDATA[Rudy Rucker]]></category>
		<category><![CDATA[Sekai camera]]></category>
		<category><![CDATA[Sekai No Camera]]></category>
		<category><![CDATA[semantic search and AR]]></category>
		<category><![CDATA[Sigal Arad Inbar]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[standards for AR]]></category>
		<category><![CDATA[Stupid Fun Club]]></category>
		<category><![CDATA[Talking with Bruce Sterling at are2010]]></category>
		<category><![CDATA[territorialization]]></category>
		<category><![CDATA[The Future of AR eyewear]]></category>
		<category><![CDATA[The Hollywood AR Scene]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[Total Immersion]]></category>
		<category><![CDATA[virtual reality]]></category>
		<category><![CDATA[VR]]></category>
		<category><![CDATA[Will Wright at are2010]]></category>
		<category><![CDATA[X: The Man with the X-Ray Eyes]]></category>
		<category><![CDATA[YDreams]]></category>
		<category><![CDATA[Zenitum]]></category>
		<category><![CDATA[Zenitum at are2010]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5524</guid>
		<description><![CDATA[Shortly after Augmented Reality Event &#8211; are2010, I talked with Bruce Sterling on Skype and in gdocs about his experience there.  I am posting the conversation in two parts to make it a more blog friendly length! The picture above is the Auggie Award for the best AR demo (above) designed by Sigal Arad Inbar.  [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/auggie.jpg"><img class="alignnone size-medium wp-image-5525" title="auggie" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/auggie-300x217.jpg" alt="auggie" width="300" height="217" /></a></p>
<p>Shortly after <a href="http://augmentedrealityevent.com/" target="_blank">Augmented Reality Event &#8211; are2010</a>, I talked with Bruce Sterling on Skype and in gdocs about his experience there.  I am posting the conversation in two parts to make it a more blog friendly length!</p>
<p>The picture above is the <a href="http://gallery.me.com/pookatak#100153" target="_blank">Auggie Award</a> for the best AR demo, designed by <a href=" http://www.pookatak.com" target="_blank">Sigal Arad Inbar</a>.  It was won by <a href="http://www.ydreams.com/#/en/homepage/" target="_blank">YDreams</a>! See <a title="Permanent Link to Ivan Franco recounts the team's ARE 2010 experience, and winning the event's first-ever Auggie Award" rel="bookmark" href="http://www.ydreams.com/blog/2010/06/05/ivan-franco-recounts-the-team%e2%80%99s-are-2010-experience-and-winning-the-event%e2%80%99s-first-ever-auggies-award/">Ivan Franco recounts the team's ARE 2010 experience, and winning the event's first-ever Auggie Award</a> for more.  The video below was shot at the <a href="http://www.ydreams.com/" target="_blank">YDreams</a> booth by Bruce Sterling.</p>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="300" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="flashvars" value="intl_lang=en-us&amp;photo_secret=40ef3f4bc9&amp;photo_id=4671874785&amp;flickr_show_info_box=true" /><param name="bgcolor" value="#000000" /><param name="allowFullScreen" value="true" /><param name="src" value="http://www.flickr.com/apps/video/stewart.swf?v=71377" /><param name="allowfullscreen" value="true" /><embed type="application/x-shockwave-flash" width="400" height="300" src="http://www.flickr.com/apps/video/stewart.swf?v=71377" allowfullscreen="true" bgcolor="#000000" flashvars="intl_lang=en-us&amp;photo_secret=40ef3f4bc9&amp;photo_id=4671874785&amp;flickr_show_info_box=true"></embed></object><br />
<em>&#8220;The Hotness&#8221; &#8211; <a href="http://www.flickr.com/photos/brucesterling/4671874785/in/photostream/" target="_blank">YDreams rocking it at ARE2010 from brucesflickr</a></em></p>
<p>Rudy Rucker, who was hanging out with  Bruce Sterling, captured the are2010 buzz and some great  images in his post, <a title="Permanent Link to Augmented Reality,  Painting,  Twitter" rel="bookmark" href="http://www.rudyrucker.com/blog/2010/06/06/augmented-reality-painting-twitter/">Augmented   Reality, Painting, Twitter.</a> As Rudy put it:</p>
<p><strong>&#8220;AR is  hoping to be a next big thing, a cozier and more commerce-driven  cousin  of the old VR, or virtual reality.&#8221;</strong></p>
<p>Bruce Sterling&#8217;s opening keynote is up, <a href="http://augmentedrealityevent.com/2010/06/06/are-2010-keynote-by-bruce-sterling-build-a-big-pie/" target="_blank">ARE 2010 Keynote by Bruce Sterling: Bake a Big Pie!</a>, and also the <a title="ARE 2010 Keynote by Will Wright: Brilliant Inspiration for the Augmented Reality Community" href="http://augmentedrealityevent.com/2010/06/14/are-2010-keynote-by-will-wright-brilliant-inspiration-for-the-augmented-reality-community/">ARE 2010 Keynote by Will Wright: Brilliant Inspiration for the Augmented Reality Community</a>, with more videos from are2010 on the way.  One must-read post on are2010 is Chris Cameron&#8217;s post, <a href="http://www.readwriteweb.com/archives/augmented_realitys_next_steps_sitting_down_with_titans_of_ar.php" target="_blank">Augmented Reality&#8217;s Next Steps: Sitting Down with the Titans of AR</a>.</p>
<p><strong><br />
</strong></p>
<h3>Talking with Bruce Sterling, Part 1</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/bruceandauggiepost.jpg"><img class="alignnone size-medium wp-image-5528" title="bruceandauggiepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/bruceandauggiepost-300x199.jpg" alt="bruceandauggiepost" width="300" height="199" /></a><br />
<em>The Auggie panel, <a href="http://www.wired.com/beyond_the_beyond/" target="_blank">Bruce Sterling</a>, <a href="http://gamepocalypsenow.blogspot.com/" target="_blank">Jesse Schell</a>, and Mark <a href="http://www.hitlabnz.org/wiki/Billinghurst,_M." target="_blank">Billinghurst</a> inspect the award.</em></p>
<p><strong>Tish Shute:</strong> In your keynote, at the 9 a.m. of the augmented reality industry, you asked some questions of the are2010 audience: &#8220;What&#8217;s the mission statement? You&#8217;re the world&#8217;s first pure-play experience designers, except that user experience, it&#8217;s mostly futuristic hot air. But run with that, right? What are your tactical steps? You should get dressed, have a coffee, have a to-do list.&#8221;</p>
<p>How much of that did you see going on in the  next two days?</p>
<p><strong>Bruce Sterling: </strong> <strong>Well, I wasn&#8217;t privy to any of the business discussions. I didn&#8217;t think it was an accident that <a href="http://www.wired.com/beyond_the_beyond/2010/06/augmented-reality-total-immersion-standards-proposal/" target="_blank">this standard AR-enabled tag thing came up from Bruno Uzzan, Total Immersion</a>. That seemed to me to be a useful thing. I was always interested in the <a href="http://www.arconsortium.org/" target="_blank">Augmented Reality Consortium</a>. It struck me as remarkable that there was this group of people who clearly all knew one another and had some kind of game plan. I applaud them for that, because these are not the 1980s. [laughs] You know, it&#8217;s just a different world for young startup companies.</strong></p>
<p><strong>Tish Shute:</strong> I think you&#8217;re right. There seemed to be some VC conversations going on &#8211; we don&#8217;t know what went on in the meetings, but it was noticeable in the atmosphere of excitement, and remarked on by a few people. So I think that was definitely going on.</p>
<p>And, of course, I was so busy I never even got to see the expo properly!  You said you wanted to be surprised.</p>
<p>Did anyone surprise you in any of the talks, in any of the expo?</p>
<p><strong><br />
</strong></p>
<h3 style="text-align: left;"><em><strong>AR used as interfaces for  devices</strong></em></h3>
<p style="text-align: left;"><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/SeacO2are2010.jpg"><img class="alignnone size-medium wp-image-5530" title="SeacO2are2010" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/SeacO2are2010-300x225.jpg" alt="SeacO2are2010" width="300" height="225" /></a></p>
<p><a href="http://www.flickr.com/photos/brucesterling/4673885122/" target="_blank"><em>Italian augmented robot from SEAC02 from brucesflickr</em></a></p>
<p><strong>Bruce Sterling:</strong> <strong>I have to say I was a little bit surprised to see Andrea Carignano demoing a robot. I happen to know him because he&#8217;s here in Torino. He&#8217;s the guy that came out of Fiat and went into AR. I am not a particularly huge robot fan, but I think it&#8217;s of great interest that AR is used as interfaces for devices, as opposed to the Jesse Schell idea that AR is all about a &#8220;man with the X-ray eyes.&#8221;</strong></p>
<p><strong>My suspicion is that a lot of surprises will come out of mashups of AR.</strong></p>
<p><strong> </strong></p>
<p><strong>Tish Shute:</strong> I didn&#8217;t get to see Andrea&#8217;s robot. So what did it do?</p>
<p><strong>Bruce Sterling: It&#8217;s basically a sister device to that little helicopter that those Parrot AR Drone guys were doing. It&#8217;s a little autonomous robot and it runs around with a webcam on it. You can place video into the acquisition stream coming off the robot. You can play a game, and blow away imaginary monsters or whatever.</strong></p>
<p><strong>Tish Shute: </strong> It&#8217;s interesting &#8211; did you notice that Will Wright and Patrick O&#8217;Shaughnessey, <a href="http://patchedreality.com/" target="_blank">Patched Reality,</a> spent some time hacking the Parrot AR drone in the hallway? Did you come across them?</p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/willpatrickparrot2post1.jpg"><img class="alignnone size-medium wp-image-5531" title="willpatrickparrot2post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/willpatrickparrot2post1-300x199.jpg" alt="willpatrickparrot2post" width="300" height="199" /></a><br />
</strong></p>
<p><strong>Bruce Sterling:</strong> <strong>Rudy was there with them. You know, I didn&#8217;t want to watch Will Wright hack a robot.</strong></p>
<p>[laughter]</p>
<p><strong>Tish Shute: </strong> They seemed to be having fun even though as it turned out the power supply was dead.</p>
<p><strong>Bruce Sterling: I&#8217;m sure Will enjoyed that. As a game designer, you want to go out and get your hands dirty with a plastic gizmo.</strong></p>
<p>[laughter]</p>
<p><strong>My Swiss Army knife can&#8217;t get through airport security, so I really don&#8217;t want to strip anything down. But yeah, what else did I see that was of particular interest? I was pretty happy about the Korean guys because they are a difficult group to get close to.</strong></p>
<p><em><br />
</em></p>
<p><em><strong><br />
</strong></em></p>
<h3><em><strong>AR companies are like mini-global micro-startups. They&#8217;re <a href="http://www.wired.com/beyond_the_beyond/2010/06/augmented-reality-tonchidots-evolving-air-tags/" target="_blank">&#8220;glocal&#8221;.</a></strong></em></h3>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Zenitumare2010.jpg"><img class="alignnone size-medium wp-image-5532" title="Zenitumare2010" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Zenitumare2010-300x225.jpg" alt="Zenitumare2010" width="300" height="225" /></a></strong><em> </em></p>
<p><em>&#8220;Korean elegance at the Zenitum booth&#8221; &#8211; <a href="http://www.flickr.com/photos/brucesterling/4673249423/in/photostream/" target="_blank">from brucesflickr</a></em></p>
<p><strong>Tish Shute: </strong><a href="http://www.zenitum.com/" target="_blank">Zenitum</a>. What did you like from <a href="http://www.zenitum.com/" target="_blank">Zenitum</a>? They were one of our sponsors, along with Qualcomm.</p>
<p><strong>Bruce Sterling: I know that Seoul is like the number one center for augmented reality discussion. But it&#8217;s difficult to get behind the scenes as a journalist there and track what&#8217;s going on in Korea. I&#8217;m fine with Italian &#8220;realtà aumentata.&#8221; And I feel like I&#8217;ve got a handle on French &#8220;réalité augmentée.&#8221; The Germans were not hard to find, and the Dutch all speak English! But the Koreans, and whoever the hell it is in Kuala Lumpur&#8230; I have no idea what&#8217;s going on in Kuala Lumpur, and only the vaguest idea of what&#8217;s transpiring in Singapore! But I know that people there are taking a coherent interest.</strong></p>
<p><strong>So the Koreans show up, and they had some relatively predictable anime-style 3D avatar conversion stuff. But they had a really nice display space.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/zenitumare20102.jpg"><img class="alignnone size-medium wp-image-5533" title="zenitumare20102" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/zenitumare20102-300x225.jpg" alt="zenitumare20102" width="300" height="225" /></a><br />
</strong></p>
<p><em>&#8220;Anime figures become three-d smartphone animated avatars,&#8221; <a href="http://www.flickr.com/photos/brucesterling/4673872354/in/photostream/" target="_blank">from brucesflickr</a><br />
</em></p>
<p><strong>Tish Shute:</strong> Ah, so Zenitum created a hot spot at the exhibition?</p>
<p><strong>Bruce Sterling: Yeah. The Koreans had IKEA furniture and some nifty little woven baskets. They&#8217;d really classed up their presentation. Most Koreans in tech tend to be kind of muscular. The Koreans are not known for their refined presentations. On the contrary, they tend to undersell everybody else. But I don&#8217;t know, maybe they&#8217;ve been hanging out with Samsung and upgrading their design chops. </strong>[laughs]</p>
<p><strong>Tish Shute:</strong> Did you take some photos you could send me?</p>
<p><strong>Bruce Sterling: I took a few, but I don&#8217;t consider myself a photographer. They&#8217;re all up on my Flickr set. It was interesting to see so many people from so many different nations in such a collegial atmosphere.</strong></p>
<p><strong>Tish Shute:</strong> Yes &#8211; there were many different countries represented at are2010.</p>
<p><strong>Bruce Sterling: It&#8217;s the beginning&#8230; and so global at such a young stage.</strong></p>
<p><strong>Tish Shute:</strong> Yes. As you said, it was 9 AM, so everyone was actually super excited to be gathered from across the globe to start a new day together. As you mentioned, there was a very warm, affirmative vibe &#8211; everyone sharing a passion.</p>
<p><strong>Bruce Sterling: They have an online commonality. They seem to be aware of one another&#8217;s work through the Internet.</strong></p>
<p><strong>Clearly they had all heard about one another. That&#8217;s a departure from earlier models of tech startup, where you usually have like three hippies in a local garage. Now you&#8217;ve got German-American-Korean outfits like <a href="http://www.metaio.com/" target="_blank">Metaio</a>, and <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a> has a Russian affiliate. They&#8217;re inherently multinational, both inside the company and out.</strong></p>
<p><strong>Tish Shute:</strong> It was the multinational garage, wasn&#8217;t it?</p>
<p><strong>Bruce Sterling: Yeah. AR companies are like mini-global micro-startups. They&#8217;re <a href="http://www.wired.com/beyond_the_beyond/2010/06/augmented-reality-tonchidots-evolving-air-tags/" target="_blank">&#8220;glocal.&#8221; </a> There&#8217;s something quite new to me about that. I don&#8217;t find it shocking, because in Europe today it&#8217;s common to find startup teams who are multinational. But to see such intense globalism at such an early stage of an industry is really different.</strong></p>
<p><strong>Tish Shute: </strong> Yes, it made for a fun atmosphere. It was wonderful running into Iguchi Takahito, <a href="http://www.tonchidot.com/" target="_blank">Tonchidot</a>. You two have a great rapport despite the language barrier.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Iguchiandbrucepost.jpg"><img class="alignnone size-medium wp-image-5534" title="Iguchiandbrucepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Iguchiandbrucepost-300x199.jpg" alt="Iguchiandbrucepost" width="300" height="199" /></a></p>
<p><strong>Bruce Sterling: Yeah. That guy from Tonchidot, he&#8217;s very charismatic. He&#8217;s punchy. That&#8217;s reflected in the very strong graphic design from his company.</strong></p>
<p><strong>Tish Shute:</strong> Using minimal English to make the case for Sekai No Camera at the Auggies, Iguchi Takahito still got through to the audience.</p>
<p><strong>Bruce Sterling:Â  Well, his visuals were good.</strong></p>
<p><strong><br />
</strong></p>
<h3><em><strong>What AR means for artistic practice&#8230;</strong></em></h3>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/cloudd.jpg"><img class="alignnone size-medium wp-image-5535" title="cloudd" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/cloudd-300x232.jpg" alt="cloudd" width="300" height="232" /></a><br />
</strong><em>Picture of</em> <a href="http://www.monkeysandrobots.com/" target="_blank">Eric Gradman&#8217;s</a> <a href="http://www.monkeysandrobots.com/cloudmirror" target="_blank">Cloud Mirror</a>, <em>from James Alliban&#8217;s post</em><em> <a href="http://jamesalliban.wordpress.com/2010/06/10/are2010/" target="_blank">ARE2010 &#8211; Augmented Reality utopia in Silicon Valley</a> &#8211; </em><em>see it for more on the are2010 ARt Gala</em><br />
<strong> </strong></p>
<p><strong>Tish Shute:</strong> So before I move on to wider themes, I&#8217;m going to wrap up on some of the different aspects of the conference. I was chairing the technology track but you were more free-roaming. Was there anything in the hallway discussions and the presentation rooms that struck you?</p>
<p><strong>Bruce Sterling: Well, I did get collared by artists. They really wanted to talk to me. We got into some serious discussions on what AR means for artistic practice. How you can do this and reach that, how can one sharpen up one&#8217;s presentation? I mean, they really wanted some art criticism.</strong></p>
<p><strong>Tish Shute:</strong> Thatâ€™s very interesting.Â  Did you come up with anything that you hadnâ€™t been thinking about already through the conversations?</p>
<p><strong>Bruce Sterling: </strong> <strong>I&#8217;ve seen augmented reality installations before, and I certainly know many electronic artists. But I don&#8217;t know. People in the AR art space, they are looking for guidance and trying to find fellow spirits. In their own way, they have the same pioneer spirit as the business people.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/helenare2010post.jpg"><img class="alignnone size-medium wp-image-5541" title="helenare2010post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/helenare2010post-300x199.jpg" alt="helenare2010post" width="300" height="199" /></a><br />
</strong></p>
<p><em><a href="http://www.aliceglass.com/" target="_blank">Helen Papagiannis</a> shows Iguchi Takahito, Tonchidot, her AR Wonder Turner, an exquisite-corpse-inspired installation</em></p>
<p><strong>Tish Shute:</strong> Yeah, it&#8217;s interesting, because we wanted the art gala to be even bigger, but it turns out that putting up art in a conference space is fabulously expensive, because it all has to be installed and hung&#8230;</p>
<p><strong>Bruce Sterling: I&#8217;m keenly aware of that. At Share Festival in Turin we bring in six installations, and it&#8217;s very heavy work. It really takes a lot of logistics. It was like a Battle of the Bands. It&#8217;s like doing a rock concert.</strong></p>
<p><strong>Tish Shute:</strong> One of the installations I was really sad not to have there was <a href="http://heaid.com/blog/" target="_blank">Uber geeks&#8217; &#8220;Steve&#8221; H.E.AI.D installation</a> that Brady Forrest &amp; Co. took to Burning Man.</p>
<p>So I was very happy that we actually did get the number of artists we did.</p>
<p><strong>Bruce Sterling: Well, there aren&#8217;t a million AR artists in the world, so it&#8217;s hard to judge. I didn&#8217;t see many business people rushing up to have me critique their business plans.</strong></p>
<p><strong>Tish Shute: </strong>[laughs] They were all in the meeting rooms.</p>
<p><strong>Bruce Sterling: Maybe it&#8217;s for the best.</strong></p>
<p><strong><br />
</strong></p>
<h3><strong><em>VC and AR Startup Action</em></strong></h3>
<p><strong><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4671266724_7b7f1361d2.jpg"><img class="alignnone size-medium wp-image-5549" title="4671266724_7b7f1361d2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4671266724_7b7f1361d2-300x199.jpg" alt="4671266724_7b7f1361d2" width="300" height="199" /></a><br />
</em></strong></p>
<p><a href="http://www.flickr.com/photos/chcameron/4671266724/in/photostream/" target="_blank"><em>The Zenitum Booth, are2010, photo from Chris Cameron&#8217;s Flickr stream</em></a></p>
<p><strong>Tish Shute: </strong> Do you know why your talk started a few moments late? Because we had 50 people arrive from the Silicon Valley neighborhood, I guess!</p>
<p><strong>Bruce Sterling: Did they not preregister?</strong></p>
<p><strong>Tish Shute: </strong> No. They all stood in the line for same-day registration!</p>
<p><strong>Bruce Sterling: </strong> <strong>It&#8217;ll be interesting to see what transpires there, if there is a little wave of startup action. God knows they need some place to put their money, because the VC scene in the US is pretty much moribund.</strong></p>
<p><strong>Tish Shute:</strong> Ogmento is the first US AR games startup to get VC, I think. I think there was some VC action at are2010 for sure. And Qualcomm obviously seems to have commercialization plans for their AR technology, and to be scouting talent and ways to deliver new AR experiences.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/JayWrighte23games.jpg"><img class="alignnone size-medium wp-image-5542" title="JayWrighte23games" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/JayWrighte23games-300x199.jpg" alt="JayWrighte23games" width="300" height="199" /></a></p>
<p><span style="color: #1f497d;"><em>Jay Wright, Qualcomm presents Joe Dunn, e23 Games, winner of the are2010 StartUp Launch Pad with a check</em><br />
</span></p>
<p><strong>Bruce Sterling: Some people don&#8217;t need venture capital. I mean, Google Goggles isn&#8217;t going to be hurting for VC money, obviously [see Chris Cameron&#8217;s RWW post, <a href="http://www.readwriteweb.com/archives/google_goggles_coming_soon_to_iphone.php" target="_blank">Google Goggles Coming Soon to iPhone</a>]. AR may come up through other methods, like people allying themselves with Hollywood, or peeling off of advertising companies. There are a lot of outfits who might conceivably want in-house AR skills. Then when people set up a specialty AR shop, they peel off the list of clients. I don&#8217;t know. Those old days of Silicon Valley venture capital seem like a lost world.</strong></p>
<p><strong>Tish Shute:</strong> Yes. I, again, didn&#8217;t see anything really of the business tracks and production tracks. Did you get back and forth between the tracks?</p>
<p><strong>Bruce Sterling: I went to the Hollywood tracks. I mean, to the extent that I could.</strong></p>
<p><strong><br />
</strong></p>
<h3><strong><em>Is Hollywood stirring? Who&#8217;s going to have the first breakout AR property?</em></strong></h3>
<p><strong><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Screen-shot-2010-06-16-at-5.05.55-PM.png"><img class="alignnone size-medium wp-image-5562" title="Screen shot 2010-06-16 at 5.05.55 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Screen-shot-2010-06-16-at-5.05.55-PM-300x162.png" alt="Screen shot 2010-06-16 at 5.05.55 PM" width="300" height="162" /></a><br />
</em></strong></p>
<p><strong>Tish Shute: </strong> So what did you see from&#8230; Is Hollywood stirring? Is it waking up? I mean, I know <a href="http://www.imdb.com/name/nm0218033/" target="_blank">Kent Demaine,</a> <a href="http://www.ooo-ii.com/" target="_blank">Oooii</a>, and Brad Foxhoven, <a href="http://ogmento.com/" target="_blank">Ogmento</a>, spoke about the Hollywood AR scene.</p>
<p><strong>Bruce Sterling: There were guys there from LA who were sort of saying, look&#8230; they are aware of us, but they just want AR to promote their properties to some particular niche. They realize that AR is potentially a mass medium and that you could do some real AR entertainment. So they were batting around some ideas as to where that might happen. Like, could it come out of a console gaming scene? Who&#8217;s going to have the first breakout AR property? A popular hit AR property, as opposed to, like, a neat way to sell shoes, or whatever. Really, anybody&#8217;s guess is as good as theirs or mine. But at least they were actively guessing.</strong></p>
<p><strong>Tish Shute:</strong> I know the breaking-the-fourth-wall discussion has been going on for a while, and now the question is whether AR is going to take down the fourth wall and bring interactive storytelling into the mainstream. Did you hear any of that?</p>
<p><strong>Bruce Sterling: Well, I always shy away from discussions of that kind because I don&#8217;t think there&#8217;s any &#8220;final thing.&#8221; Practically everything that AR is involved in right now is a transitional technology. Also, because I am a storyteller, I get alarmed whenever people in technology start saying, &#8220;Oh well, it&#8217;s all about telling stories.&#8221; Because obviously it isn&#8217;t.</strong></p>
<p><strong>People can tell stories perfectly well orally, and absolutely nobody does that. AR is not at all about telling stories. It&#8217;s about a great many other things, such as user bases, niche audiences, media saturation, urban informatics, convergence culture, and the language of digital media. I could list these factors until the world looks level. It&#8217;s really becoming pretty chaotic. As I was saying in my speech, AR companies are media startups who almost never use the old-fashioned word &#8220;media.&#8221;</strong></p>
<p><strong>Tish Shute: </strong> Oh, that&#8217;s interesting. Yes. So why do you think it has happened that way?</p>
<p><strong>Bruce Sterling: Well, it&#8217;s because they are trying to do a different thing than media does. I mean, they are trying to &#8220;augment reality.&#8221; They don&#8217;t want you to know that you are using a medium. They don&#8217;t want you to realize that you&#8217;re watching computer animation overlaid on some video acquisition stream. That would defeat the whole point of AR. It&#8217;s entirely different from an analog medium like television, where you turn on the television and there&#8217;s a constant stream of station identification alerts. That&#8217;s like: &#8220;Don&#8217;t touch that dial! You&#8217;re on channel 13! Stay with us!&#8221; Then it&#8217;s like, &#8220;And now a few words from our friendly sponsors!&#8221; That medium was engineered to keep your eyeballs locked to a single stream that they&#8217;re feeding you.</strong></p>
<p><strong>In AR, it&#8217;s much more participative, more geolocative. I&#8217;m not particularly interested in station-identification branding from my AR provider. What I really want to see is the interactivity of the augments they&#8217;re bringing to me. It&#8217;s like Flickr, the photo-sharing site. You don&#8217;t have any TV-style splash page for Flickr. &#8220;Hi! We&#8217;re Flickr! Flickr, bringing your photos to you!&#8221; No, Flickr is all about &#8220;you, you, you,&#8221; your photos, your tags, your friends, your activity around you. It&#8217;s immediately trying to be very participative.</strong></p>
<p><strong>Tish Shute:</strong> Will Wright got to that point, didn&#8217;t he? He was trying to move us into an idea of blended reality &#8211; that the game is about the world, not about the dragons or the overlays per se.</p>
<p><strong>Bruce Sterling: Right. I think that&#8217;s true. But see, the world isn&#8217;t a medium. A medium is something like this interview, where I&#8217;m connecting to you and there&#8217;s a video Skype channel between us. Whereas AR is more about spatial 3-D, about three-dimensional impositions. Pieces of media: sound, vision, information visualization, tags, floating tags, air tags, icons, arrows, warning signs, warning sounds, tactility, whatever, being brought into the environment around us.</strong></p>
<p><strong>That&#8217;s why it&#8217;s properly called &#8220;augmented reality&#8221; instead of just augmented media. If you call your work &#8220;augmented media,&#8221; you&#8217;re really in trouble. Because if it&#8217;s all about augmenting somebody else&#8217;s media, why doesn&#8217;t that medium just buy you, and augment their own selves? If you think that way, instead of augmenting the world, you&#8217;ll just be a modest little plug-in for old-school media.</strong></p>
<p><strong><br />
</strong></p>
<h3><strong><em>The World as the Platform</em></strong></h3>
<p><strong><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4671271578_50ef3396f5.jpg"><img class="alignnone size-medium wp-image-5548" title="4671271578_50ef3396f5" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4671271578_50ef3396f5-300x199.jpg" alt="4671271578_50ef3396f5" width="300" height="199" /></a><br />
</em></strong></p>
<p><strong> </strong></p>
<p><em>Blaise Aguera y Arcas, Microsoft, Santa Clara, are2010, <a href="http://www.flickr.com/photos/chcameron/4671271578/in/photostream/" target="_blank">photo from Chris Cameron&#8217;s Flickr stream</a></em></p>
<p><strong>Tish Shute: </strong>Yes, which is why Blaise so generously gave the technical underpinning for augmenting reality in his tech talk &#8211; about the trellis and the grapes. He really explained how the world can become a platform for augmented reality.</p>
<p><strong>Bruce Sterling: I wish I could have seen that. I did not see Blaise&#8217;s speech.</strong></p>
<p><strong>Tish Shute:</strong> We&#8217;re going to put the videos up in better quality. People in the front row have <a href="http://gigantico.squarespace.com/336554365346/2010/6/6/mobile-ar-ooh-and-the-mirror-world.html">put it up on the web already</a>. He really went into some of the challenges of mapping for augmented reality.</p>
<p><strong>Bruce Sterling: His visual-mapping technique is important. Registration is super important for AR.</strong></p>
<p><strong>Tish Shute: </strong>I think it was a really generous talk actually because he went step by step on how we will do this.</p>
<p><strong>Bruce Sterling: I rather imagine that Microsoft has patented those steps.</strong></p>
<p><strong>Tish Shute:</strong> Oh, yes, I guess so!</p>
<p><strong>Bruce Sterling: I could be wrong. Maybe they&#8217;ll open-source it. You never know.</strong></p>
<p><strong>Tish Shute: </strong>You never know. Because the world as a platform isn&#8217;t something one company can own, or exploit on its own.</p>
<p><strong>Bruce Sterling: I expect there to be a thorny path, but sometimes I&#8217;m surprised. Sometimes people really do try to fertilize the tech field in the hope of getting a good corn crop before they start fighting.</strong></p>
<p><strong>Tish Shute: </strong>Well, I keep hearing that we may even see the unlikely marriage of Apple and Microsoft &#8211; maybe wishful thinking, but there are motivations beyond AR for this unlikely match, and certainly between them these titans have what it takes to realize the grand visions of AR? [laughs] But who knows&#8230;</p>
<p><strong>Bruce Sterling: Well, yeah, it depends on where the thing catches fire.</strong></p>
<p><strong>Tish Shute:</strong> Yes. You mean whether AR catches fire in the form of AR and mapping&#8230;</p>
<p><strong>Bruce Sterling: It&#8217;s hard to say, but I&#8217;m convinced now that there&#8217;s more going on than I once thought. I thought that Bruno Uzzan made a very good speech for his company when he talked about how he worked on AR for eleven years. Eleven years is no flash in the pan. He has his long list of clients and successful applications. I thought he was right in his impatience with the press for not catching on. It&#8217;s gone on for quite a while. The mere fact that you&#8217;re not aware of it doesn&#8217;t mean it doesn&#8217;t exist.</strong></p>
<p><strong><br />
</strong></p>
<h3><strong><em>The Elusive AR eyewear</em></strong></h3>
<p><strong><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Origoggles.jpg"><img class="alignnone size-medium wp-image-5550" title="Origoggles" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Origoggles-300x199.jpg" alt="Origoggles" width="300" height="199" /></a><br />
</em></strong></p>
<p><em>My <a href="http://augmentedrealityevent.com/" target="_blank">are2010</a> co-chair, Ori Inbar, CEO and co-founder of the hottest new AR game development start-up, Ogmento, donning his goggles to open <a href="http://augmentedrealityevent.com/" target="_blank">are2010</a> &#8211; <a href="http://www.flickr.com/photos/chcameron/4671264048/sizes/m/in/photostream/" target="_blank">picture from Chris Cameron&#8217;s Flickr stream</a></em></p>
<p><strong>Tish Shute:</strong> Yes. So, the other theme you brought up in your opening keynote &#8211; and I would be interested to know if anything you saw at are2010 changed your view &#8211; is the elusive AR eyewear. If we actually got AR goggles that worked, they would bring AR&#8217;s gothic sister, VR, back from the grave, right? [laughs]</p>
<p><strong>Bruce Sterling: Right.</strong></p>
<p><strong>Tish Shute: </strong> It took quite a lot of work, but we pulled together a six-company HMD panel, right?</p>
<p><strong>Bruce Sterling: Yeah. I was impressed to see so many of them there. And I was chagrined to see how prototype-like all their gadgets were. But that doesn&#8217;t surprise me, because if any of those head-mounts were remotely working, they would be hyped out the wazoo. Everybody&#8217;s been waiting for them and hoping for the best. They&#8217;re obviously not ready for prime time. [laughs] Maybe in certain limited applications. Like maybe a diving mask. [laughs]</strong></p>
<p><strong>Tish Shute: </strong>No, I think what was nice, though, is that they got inspired and they all got together on the last day. I saw them having a meeting about standards. They got inspired to actually work together.</p>
<p><strong>Bruce Sterling: Yeah, well, unless they&#8217;re going to invent mechanical eyeballs that those machines can fit onto, it&#8217;s going to be tough. OK, I&#8217;m a skeptic, but I&#8217;m prepared to be surprised. I&#8217;m also a skeptic about Artificial Intelligence, but as soon as they bring me an AI that can write a decent novel, I&#8217;m going to get it and review that book.</strong> [laughs]</p>
<p><strong>Tish Shute:</strong> It&#8217;s interesting. Re AI, I&#8217;m totally in agreement with you. In terms of the way computers turned out, it wasn&#8217;t AI per se that they turned out to be good for, not in the way everyone had dreamed of it; rather, it was the harvesting of human intelligence that turned out to be the big thing. But what is interesting is that despite all of that, AI &#8211; or machine learning, as it is now called &#8211; permeates our whole society, from the stock market to how many businesses make many of their decisions.</p>
<p><strong>Bruce Sterling: Well, there&#8217;s a lot of so-called collective intelligence. But Marvin Minsky-style hard AI, no way. Alan Turing-style AI, forget about that.</strong></p>
<p><strong>Tish Shute:</strong> Yeah. So, that&#8217;s an interesting comparison with the HMDs.</p>
<p><strong>Bruce Sterling: People stretch the definitions. It&#8217;s like, well, my car engine is Artificial Intelligence. Yeah, so is your wall transistor. No, I don&#8217;t really think so.</strong></p>
<p><strong>And AR is a similarly big tent. I mean, Uzzan had to admit that he had denied that AR was AR unless it was using his favorite technology. And he felt embarrassed to be rubbing shoulders with people who put AR into cell phones. And I can understand his feeling there, because, gee whiz, that&#8217;s certainly not what AR pioneers had in mind. But he had to admit he&#8217;d become more ecumenical about it. Obviously, they&#8217;re there and doing business like gangbusters. You can&#8217;t very well ignore success, right?</strong></p>
<p><strong>I had a similar feeling about the goggles. Obviously, the goggles would be great, should they work. But if they did work, I rather think virtual reality would come very strongly to the fore. You&#8217;d see people doing all kinds of elaborate immersive-style stuff. A truly immersive technology doesn&#8217;t need to &#8220;augment&#8221; much of anything.</strong></p>
<p><strong>Tish Shute:</strong> Yeah, you’re right.</p>
<h3><strong><em>Social Augmented Experiences</em></strong></h3>
<p><strong>Bruce Sterling: I think many of the most interesting AR aspects are not personal in the way goggles are. They’re not about guys walking around with personal tech. They’re about big, communal, social-media experiences, like stage shows, and urban informatics, things where large numbers of people can interact with the same augmented reality. The projection mapping, which I go on and on about. Augmented public spectacles.</strong></p>
<p><strong>Tish Shute: </strong>Yeah, projection&#8217;s our best example of a social augmented experience right now, because we don&#8217;t yet have an easy way to do networked social augmented experiences &#8211; but that is of course the thrust of my interest in <a href="http://arwave.org/" target="_blank">ARWave</a> [see the slides for my presentation, <a href="http://www.slideshare.net/TishShute/ar-wave-a-proof-of-concept-federation-game-dynamics-semantic-search-mobile-social-communications" target="_blank">AR Wave: Federation, Game Dynamics, Semantic Search, Mobile Social Communications</a>, here].</p>
<p><a href="http://www.slideshare.net/TishShute/ar-wave-a-proof-of-concept-federation-game-dynamics-semantic-search-mobile-social-communications" target="_blank"><img class="alignnone size-medium wp-image-5563" title="Screen shot 2010-06-16 at 5.12.05 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Screen-shot-2010-06-16-at-5.12.05-PM-300x225.png" alt="Screen shot 2010-06-16 at 5.12.05 PM" width="300" height="225" /></a></p>
<p><strong>Bruce Sterling: I think of Edison’s early days, when he wanted to sell movies to people for a nickel a clip. You had to bend over and put your eyes on this visor and turn this crank. That coin-op device was easy for Edison to monetize, as opposed to getting a bunch of people to sit in theater seats. But people laugh at movies when they’re together in the seats. Cinema is a more social, involving experience in a crowd situation.</strong></p>
<p><strong>Tish Shute: </strong>But it started with them, didn’t it, Hollywood &#8211; the movie biz? Basically Nickelodeons, right?</p>
<p><strong>Bruce Sterling: That’s right. They were Nickelodeons. They were a lot like the goggles because they isolated the user.</strong></p>
<p><strong>Tish Shute:</strong> Yeah, that’s a really important point: the goggles are not Nirvana, because there is this question of whether they actually detract from the social augmented experience and blended realities by drawing us into VR experiences.</p>
<p><strong>Bruce Sterling: I’m tempted to claim that they’re more a VR technology than an AR technology.</strong></p>
<p><strong>Tish Shute:</strong> That’s a very interesting point because…</p>
<p>[thunder]</p>
<p><strong>Tish Shute:</strong> Wow! What was that?</p>
<p><strong>Bruce Sterling: Thunderstorm.</strong></p>
<p><strong>Tish Shute:</strong> Oh, my God, how very Gothic! [laughs]</p>
<p><strong>Bruce Sterling:</strong> <strong>It can get pretty loud up here in the mountains.</strong></p>
<p><strong>Tish Shute:</strong> Oh, you live in the mountains, better still!</p>
<p><strong>Bruce Sterling: Torino is in the foothills. This is Piemonte. So the Apennines are over there. The Alps are over here. We do get some rather spectacularly unstable weather.</strong></p>
<p><strong>Tish Shute:</strong> It sounded like a bomb to my NYC ears. [laughs]</p>
<p><strong>Bruce Sterling: Yeah, it didn’t hit the building, but it was maybe half a kilometer away. I saw the flash.</strong></p>
<p><strong>T</strong><strong>ish Shute: </strong>Oh, you did? Â Â Well, I hope you donâ€™t lose your power midstream here. Â  Â I was really happy to hear of that connection between Rudy Rucker and LayarÂ  [Rudy was touched when Maarten Lens-FizgGerald from <a href="http://www.layar.com/" target="_blank">Layar</a> said that he met  the Layar  co-founder at a Rudy Rucker lecture].</p>
<p><strong>Bruce Sterling: That was very fun, yes.</strong></p>
<p><strong>Tish Shute: </strong>Wasnâ€™t that wonderful? What was that experience like going around the conference with Rudy?</p>
<p><strong>Bruce Sterling: Well, you know, Rudy’s very into graphics. He’s a mathematician, so he understands the underpinnings of this stuff. But he’s a skeptic. He thinks they’re kid toys. He’s not a gamer. He’s a good old-fashioned computer-science hacker. So he wanted to tell me all about his new eighth-order, fifth-dimensional fractals. He showed me a great many of them. They’re psychedelic. Rudy’s fractals are considerably trippier than most apps that help you find a barber or a train station. [laughs] Rudy really is a visionary. He’s into some very weird stuff.</strong></p>
<h3><strong><em>Gamer Guys at are2010</em></strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Brad-booth.jpg"><img class="alignnone size-medium wp-image-5552" title="Brad-booth" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Brad-booth-300x211.jpg" alt="Brad-booth" width="300" height="211" /></a></p>
<p><em>Brad Foxhoven, </em><span><em>Chief Marketing Officer, Co-Founder, <a href="http://ogmento.com/" target="_blank">Ogmento </a>at are2010</em><br />
</span></p>
<p><strong>Tish Shute:</strong> At are2010 there was a lot of discussion about how game dynamics and AR are going to intersect, right? Anything that you saw of interest there?</p>
<p><strong>Bruce Sterling: Well, obviously, there are gamer guys there. Ori&#8217;s a gamer. The gamer guys are getting some money. The big buzz right now in gaming is, of course, social gaming. Farmville has kicked everybody’s ass because it’s not even a game and yet it has more users than the entire gaming industry.</strong></p>
<p><strong>Tish Shute: </strong>I know, right! [laughs]</p>
<p><strong>Bruce Sterling: Obviously that’s kind of humiliating. For a long time, I&#8217;ve seen people trying to do giant multiuser games on cell phones. It’s difficult to do because the interface on cell phones is crap, right? People aren’t going to run around responding to SMSs.</strong></p>
<p><strong>I can imagine people running around with little Wii-style bats that have audio and visuals on them. It makes a very large native AR game seem more plausible.</strong></p>
<p><strong>Tish Shute:</strong> Yes, that would be cool!</p>
<p><strong>Bruce Sterling: Again, it&#8217;s not very gamelike to use those little fiducial markers.</strong></p>
<p><strong>Tish Shute:</strong> No.</p>
<p><strong>Bruce Sterling:</strong> <strong>Moving little cardboard chips around, like with card games&#8230; It would be pretty easy to set up a little AR chess game. Star Trek-style hologram chess pieces, and so forth. But it’s just cumbersome.</strong></p>
<p><strong>Tish Shute:</strong> And also, from what we’ve seen from things like Foursquare, proximity-based social gaming doesn&#8217;t have to offer very much [a crown badge, a mayorship] to get some mind share&#8230; the social is the primary game dynamic&#8230;</p>
<p><strong>Bruce Sterling: I’ve seen a lot of different philosophies of gaming over the years. Who’s to say that Second Life doesn’t have the best idea? They built a little scene and then slammed their gate shut behind them. But at least they’ve got a really nicely-paying little cult stuck in there. It’s different. And it’s manageable and it’s really theirs, theirs, theirs. They don’t have to call in outside experts to try and run the monster. They haven’t blown it up to the scale of Yahoo!, where they’ve lost control of the enterprise and gone into a tailspin of management overhead. Second Life has a very intense, almost cultish atmosphere among the player-slash-developers.</strong></p>
<p><strong>Tish Shute:</strong> One thing that helped them was the very thing they were always criticized for: the barrier to entry was so high. But once they got people, they never left, right?</p>
<p><strong>Bruce Sterling: That’s not a bug, that’s a feature.</strong></p>
<p><strong>Tish Shute:</strong> One of the best features!</p>
<p><strong>Bruce Sterling: Yeah, it’s like being in Mensa. Why don’t you lower your barriers to entry and get in some interesting stupid people?</strong></p>
<p><strong>Tish Shute: </strong>[laughs]</p>
<p><strong>Bruce Sterling: In Mensa, we’d rather sit here making puns about neutrinos and fourth-order quadratic equations. [laughs] OK, that’s a business model, if that’s what you want.</strong></p>
<h3><strong><em>The Man With the X-Ray Eyes!</em></strong></h3>
<p><strong><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4671271624_d63b9bff7a.jpg"><img class="alignnone size-medium wp-image-5553" title="4671271624_d63b9bff7a" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4671271624_d63b9bff7a-300x199.jpg" alt="4671271624_d63b9bff7a" width="300" height="199" /></a><br />
</em></strong></p>
<p><em>Jesse Schell during his keynote, &#8220;Seeing,&#8221; at are2010, <a href="http://www.flickr.com/photos/chcameron/4671271624/in/photostream/" target="_blank">picture from Chris Cameron&#8217;s Flickr stream</a></em></p>
<p><strong>Tish Shute: </strong>OK! Now to unpack the man-with-the-x-ray-eyes idea, Jesse Schell&#8217;s keynote theme. This is a root metaphor for AR &#8211; making the invisible visible, seeing through walls. To me, you kind of wrote the book on this, because all my ideas on what radical transparency might be come from you &#8211; your idea of Amazon.org is key to how I understand this.</p>
<p><strong>Bruce Sterling: Oh, really? That’s funny. I was touched that Jesse brought up that famous Corman film, because I was a judge at a fantasy film conference in Trieste earlier this year. And Roger Corman was there. He was the guest of honor. &#8220;X: The Man with the X-ray Eyes&#8221; was one of the films shown during the conference, and I saw it. I even had dinner with Roger Corman. I had never met him before, so that was quite amusing. The difficulty with a film of that kind is what we science fiction writers call a &#8220;House of Cards Ending.&#8221; In that story structure, you ramp the thing up until the protagonist sees God, and then he has to be destroyed by the falling pillars of the temple. That’s a classic science fiction structure: like Frankenstein. For the sake of the drama, Corman evades the issue of what’s really going on. For instance, let’s just suppose &#8220;the Man with the X-ray Eyes&#8221; is not in fact a psychopath. Let’s say he gets a grant from the Department of Health and Human Services, and he acts like a real scientist, not a stock B-movie &#8220;mad scientist.&#8221; So he has, like, backup guys, and some placebos, and a large group of people to test it on, trusted colleagues, and so forth. You wouldn’t get any of that movie&#8217;s wild activity out of that. What you would get is like a 5% improvement to people’s vision.</strong></p>
<p><strong>Then, in a year, there would be a 10% improvement in people’s vision. There would be a classic industrial story. A rising star, you know, a cash cow. Real tech isn&#8217;t done by a single guy as a divine curse. It&#8217;s created by classic tech startup culture. So a runaway technology really behaves in the way that personal computers do.</strong></p>
<p><strong>Tish Shute:</strong> The things that get me all Utopian and happy about this are the ideas like those you first outlined with the notion of Amazon.org.</p>
<p><strong>Bruce Sterling: It would be easy to do an entirely different kind of film than &#8220;Man with the X-ray Eyes.&#8221; Something much less B movie, much less pat. I mean, at the end of the film, he destroys his own hardware and blinds himself. Why? For what rational reason would he do that? Why doesn’t anybody else know the big secret of what he’s doing? Why aren’t there Koreans doing it? Why aren’t there Austrians doing it? Why aren’t there Italians doing it? Why? AR doesn’t behave like that. It’s not one lone guy with magic eye drops. It’s entire teams of people that have been working on stuff for 17 years. They all approach it in different ways.</strong></p>
<p><strong>Now, they are going to get scandals in AR. I can guarantee you that. They are going to get into hot water eventually. At least some people will surely come out and accuse them of being Roger Corman B-movie monsters. But unless they accidentally discover atomic fission or destroy the Gulf of Mexico with an oil spill [laughs], I don’t think they’re going to be particularly badly off! The trouble I imagine for AR people is very typical new media trouble. It&#8217;s like movies being accused of corrupting our morals, or comic books being accused of leading to violence, or Google being accused of making us stupid and warping our brains.</strong></p>
<p><strong>I’m not an alarmist in that sense, but at least I’m concerned about real threats. Roger Corman is a B-movie director who’s trying to sew up his lost plot ends by destroying his hero and his hardware. That’s not very plausible. It’s a nice science fiction movie device, but technology isn&#8217;t a movie.</strong></p>
<p><strong>Tish Shute:</strong> Yes. Well, the other thing that you always remind us of with AR is not to be saying it’s going to be this glorious moment when it’s no longer gimmicky, no longer pop culture. You always emphasize that&#8217;s actually part of what’s good about it.</p>
<p><strong>Bruce Sterling: </strong><strong>It’s not an accident that practically everybody in that audience knew about Roger Corman. Nobody looked surprised; not the Austrians, not the Koreans. They were all like: “Oh, yes! Roger Corman! Love him!”</strong></p>
<p><strong>Tish Shute:</strong> There were so many Rudy Rucker fans. Were you watching Twitter? People like Eric Gradman were succumbing to fanboyz moments&#8230;</p>
<p><strong>Bruce Sterling: “Yeah. Rudy Rucker, he’s the best.”</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4673263249_a73568ebca.jpg"><img class="alignnone size-medium wp-image-5556" title="4673263249_a73568ebca" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4673263249_a73568ebca-225x300.jpg" alt="4673263249_a73568ebca" width="225" height="300" /></a><br />
</strong></p>
<p><em>&#8220;Rudy Rucker gripping an Augmented Reality shoe&#8221; <a href="http://www.flickr.com/photos/brucesterling/4673263249/in/photostream/" target="_blank">from brucesflickr</a></em></p>
<p><strong>Tish Shute:</strong> [laughs] I noticed you inspired him to join Twitter&#8230;</p>
<p><strong>Bruce Sterling: Well, I’ve got 8,000 followers and, obviously, a lot of them are Rudy’s fans. Of course he’s going to be gang-rushed on Twitter. That’s not really any more surprising than two motorcycle stunt guys at the same attraction. And I’m a big fan of Rudy&#8217;s blog. He’s always got interesting things to say.</strong></p>
<p><strong>Tish Shute:</strong> Yes. AR does seem to bring out some of the coolest, smartest people! This morning I had breakfast with <a href=" http://www.linkedin.com/in/joshuakauffman" target="_blank">Joshua Kauffman</a> in Central Park. He is an advisor and entrepreneur working on design in the public sphere. I was feeling rather brain-dead and jet-lagged. I told Joshua I was wondering how to get the cotton wool out of my brain for this interview, and he suggested the All Souls College one-word-question interview! Have you ever heard of that? &#8211; although apparently <a href="http://www.nytimes.com/2010/05/28/world/europe/28oxford.html" target="_blank">they recently scrapped it</a>.</p>
<p><strong>Bruce Sterling: Well, I’ve heard of All Souls College there in Oxford. What was their interview question?</strong></p>
<p><strong>Tish Shute:</strong> They used to give you only one word. It’s not a question; basically, they throw out the word and you have to spin off from there.</p>
<p><strong>Bruce Sterling: You’re supposed to free-associate on a single word?</strong></p>
<p><strong>Tish Shute: </strong>I guess so. I hadn’t heard about it, but Joshua suggested it.</p>
<p><strong>Bruce Sterling: Well, it’s possible&#8230;</strong></p>
<p><strong>Tish Shute:</strong> Joshua came up with some good words&#8230;</p>
<p><strong>Bruce Sterling: Right.</strong></p>
<p><strong>Tish Shute:</strong> We were talking about these proximity-based social networks like Foursquare and Gowalla and how they may influence the emergence of social augmented experiences.</p>
<p>So Joshua&#8217;s suggestion for the first word was &#8220;territorialization&#8221; &#8211; e.g., the observation that these new mobile social experiences like Foursquare, rather than breaking down territorialization, which would be a good thing, actually tend to support it&#8230; but perhaps in new forms of territorialization?</p>
<p><strong>Bruce Sterling: Yeah, they’re re-intensifying it in a very odd, electronic fashion.</strong></p>
<p><strong>Tish Shute: </strong>Yes.</p>
<p><strong>Bruce Sterling: I have noticed that. It’s not true of stuff like projection mapping or the webcam fiducial display stuff. But with the handheld stuff, and especially the urban informatic stuff, it really can’t help but take on a local flavor. Layar is like &#8220;Augmented Dutch Reality.&#8221;</strong></p>
<p><strong>And TonchiDot really is &#8220;Augmented Japanese Reality.&#8221; It’s hard to imagine a Layar interface going gangbusters in Tokyo. Whereas the TonchiDot interface, which is very clearly influenced by anime and cartoon graphics&#8230; Maybe it could find some niche of hipsters in Amsterdam hash bars&#8230;</strong></p>
<h3><strong><em>&#8230;to be continued in Part 2</em></strong></h3>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/06/16/interview-with-bruce-sterling-part-i-at-the-9am-of-the-augmented-reality-industry-are2010/feed/</wfw:commentRss>
		<slash:comments>9</slash:comments>
		</item>
		<item>
		<title>Over 40 AR companies &amp; Qualcomm, Microsoft, Google, Intel, Nvidia, and Nokia coming to Augmented Reality Event: Are Nokia back in AR?</title>
		<link>http://www.ugotrade.com/2010/05/14/over-40-ar-companies-qualcomm-microsoft-google-intel-nvidia-and-nokia-coming-to-augmented-reality-event-are-nokia-back-in-the-ar/</link>
		<comments>http://www.ugotrade.com/2010/05/14/over-40-ar-companies-qualcomm-microsoft-google-intel-nvidia-and-nokia-coming-to-augmented-reality-event-are-nokia-back-in-the-ar/#comments</comments>
		<pubDate>Fri, 14 May 2010 20:30:34 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR glasses]]></category>
		<category><![CDATA[AR HMDs]]></category>
		<category><![CDATA[AR using Wave Federation Protocol]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[augmented reality conference]]></category>
		<category><![CDATA[Augmented Reality Event. ARE2010]]></category>
		<category><![CDATA[augmented reality eyewear]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[com.geo]]></category>
		<category><![CDATA[EarthMine]]></category>
		<category><![CDATA[Gate 5]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[iPhone SDK 4]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Maemo]]></category>
		<category><![CDATA[Meego]]></category>
		<category><![CDATA[Mirascape]]></category>
		<category><![CDATA[Mobilizy]]></category>
		<category><![CDATA[N8]]></category>
		<category><![CDATA[Nokia and Augmented Reality]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[Open AR Web]]></category>
		<category><![CDATA[open augmented reality]]></category>
		<category><![CDATA[open standards for augmented reality]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Patched Reality]]></category>
		<category><![CDATA[Polaris]]></category>
		<category><![CDATA[Simple Geo]]></category>
		<category><![CDATA[TagWhat]]></category>
		<category><![CDATA[Verizon Droid]]></category>
		<category><![CDATA[Where 2.0]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5450</guid>
		<description><![CDATA[I have been so busy working on Augmented Reality Event, Jun 2nd, 3rd, Santa Clara, CA, in recent weeks that I have barely had time to post! But it&#8217;s getting hard to contain my excitement about ARE2010. If you haven&#8217;t already seen the sneak preview of the schedule &#8211; see here. Augmented Reality Event will [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.youtube.com/watch?v=xrXHXin9Iio&amp;feature=player_embedded#!" target="_blank"><img class="alignnone size-medium wp-image-5460" title="Screen shot 2010-05-14 at 9.41.29 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/05/Screen-shot-2010-05-14-at-9.41.29-AM-300x183.png" alt="Screen shot 2010-05-14 at 9.41.29 AM" width="300" height="183" /></a></p>
<p>I have been so busy working on <a href="http://augmentedrealityevent.com/" target="_blank">Augmented Reality Event, Jun 2nd, 3rd, Santa Clara, CA,</a> in recent weeks that I have barely had time to post! But it&#8217;s getting hard to contain my excitement about ARE2010. If you haven&#8217;t already seen the sneak preview of the schedule &#8211; <a href="http://augmentedrealityevent.com/2010/04/10/sneak-preview-of-are-2010-schedule-packed-with-augmented-reality-goodness/" target="_blank">see here</a>.</p>
<p>Augmented Reality Event will be a unique opportunity to see how the complete vision of AR is emerging, one that will include visual recognition of real-life objects, sensors to enable interaction with physical objects, and the long-anticipated, comfortable, sexy AR eyewear. Six companies will be presenting and exhibiting AR glasses and HMDs at Augmented Reality Event.</p>
<p>Keynote luminaries will include: Bruce Sterling, &#8220;AR Prophet,&#8221; Will Wright (Spore, The Sims &amp; now AR and The Stupid Fun Club), Jesse Schell (who will give the AR follow-up to his now-viral DICE talk), and Blaise Aguera y Arcas (Microsoft, Bing), whose talk on augmented reality maps was one of the most popular talks at TED this year.</p>
<p>Over 40 AR companies from all over the world are represented, and there will be speakers from Qualcomm, Google, Microsoft, Nokia, Intel, and Nvidia. Over 80 augmented reality experts and entrepreneurs are presenting, including AR titans <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a> and <a href="http://www.metaio.com/" target="_blank">Metaio</a>. <a href="http://www.wikitude.org/team" target="_blank">Mobilizy</a>, <a href="http://www.layar.com/" target="_blank">Layar</a>, <a href="http://ogmento.com/" target="_blank">Ogmento</a>, <a href="http://www.tagwhat.com/" target="_blank">TagWhat</a>, <a href="http://www.tonchidot.com/" target="_blank">Tonchidot</a>, and <a href="http://www.zenitum.com/" target="_blank">Zenitum</a> (just to name a few of the stellar augmented reality start-ups coming &#8211; <a href="http://augmentedrealityevent.com/speakers/" target="_blank">for more, and a list of speakers, see here</a>) will demonstrate that AR is not just a vision, but actually drives significant businesses today.</p>
<p>Five augmented reality industry start-ups will be on the hot seat in a Launchpad competition with a $10,000 prize for the winner, thanks to the generosity of Qualcomm, a leading developer and innovator of advanced wireless technologies, and the featured sponsor of Augmented Reality Event. <a href="http://www.zenitum.com/" target="_blank">Zenitum</a>, a leading augmented reality company with a technology portfolio that includes both location and advanced vision-based sensory input, is the Gold sponsor for ARE2010, and Adobe is sponsoring the ARt Gala &amp; Reception. Many thanks to the sponsors for making ARE2010 possible.</p>
<p>Readers of this post can use my discount code TISH245 to <a href="https://register03.exgenex.com/GcmRegister/Index.Aspx?C=70000088&amp;M=50000500" target="_blank">register here for a $245</a> price for the whole two-day event.</p>
<p>But just to add some balance to my total enthusiasm for the rocket-like trajectory of augmented reality in recent months, this post will also look at an area that has been, to date, disappointing for AR developers. While Nokia will have a couple of speakers at ARE2010, and certainly they have a very impressive history in augmented reality research, they have not delivered the magic brew to attract the augmented reality developer community. Will this change with <a href="http://events.nokia.com/NokiaN8/" target="_blank">the release of the N8</a> (see the video opening this post)? I wanted to find out if AR developers see a light at the end of the tunnel with the N8 as a platform for AR. So I asked some AR developers and entrepreneurs, <strong>&#8220;How attractive is the N8 for the AR community?&#8221;</strong></p>
<p>Nokia&#8217;s apparent nonchalance about augmented reality is in sharp contrast to the bear hug shown <a href="http://www.youtube.com/watch?v=sVLJghTeKVU&amp;feature=player_embedded#!" target="_blank">in this Verizon Droid Augmented Reality Ad</a> featuring Layar! As <a href="http://www.wired.com/beyond_the_beyond/2010/05/augmented-reality-verizon-droid-ad/" target="_blank">Bruce Sterling notes</a>,</p>
<blockquote><p><strong>*Hollywood-up for Layar by cooking up some o&#8217; those big-time sci-fi production values.<br />
All magic, all the time, Mr. Roboto.</strong></p></blockquote>
<p><strong><a href="http://www.youtube.com/watch?v=sVLJghTeKVU&amp;feature=player_embedded#!" target="_blank"><img class="alignnone size-medium wp-image-5461" title="Screen shot 2010-05-14 at 10.21.50 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/05/Screen-shot-2010-05-14-at-10.21.50-AM-300x180.png" alt="Screen shot 2010-05-14 at 10.21.50 AM" width="300" height="180" /></a><br />
</strong></p>
<p>And it is clear that everyone has high hopes that <span><span><span> iPhone SDK 4 delivers fully for AR developers this time! For more on this listen to @chrisgrayson&#8217;s <a href="http://bit.ly/aJypn9" target="_blank">3rd video on Mobile AR here</a>. </span></span></span></p>
<p>But before I get back to my question about the AR community&#8217;s thoughts on the N8 as a platform for augmented reality, let me crow just a little more about the upcoming awesomeness of Augmented Reality Event. As the chair of the technology track, I have ARE2010 on my mind night and day at the moment!</p>
<p>The tools for the kind of augmented reality we have always dreamed of &#8211; the zero-click interface to a heads-up, hyperlocal view, bringing you computing anywhere, anytime, with anything &#8211; are evolving fast. Powerful algorithms for search and machine learning, combined with cooperating cloud data services, will soon be bringing apps that learn by context accumulation to your AR view (see <a href="http://siri.com/" target="_blank">Siri</a>). <a href="http://www.google.com/mobile/goggles/#text" target="_blank">Google Goggles</a>, who will have a speaker at ARE2010, have already announced their intent to open APIs, putting a vital tool of visual search in developers&#8217; hands.</p>
<p>A bunch of technologies are maturing these days and helping bring AR to consumers: mobile devices that pack all the ingredients for AR, location-based services, see-through video goggles, visual search, innovations in 3D mapping, Simple Geo&#8217;s innovative approach to aggregating geo data, the next generation of &#8220;Street View&#8221; such as Earthmine, and, very importantly, open standards and software for AR that will bring many new opportunities for creativity and monetization. These are just some of the aspects of this energetic technology convergence that will be represented at Augmented Reality Event.</p>
<p>In the area of open standards, I am especially excited about the debut of Polaris at Augmented Reality Event. Polaris is an AR browser, from Blair MacIntyre and the Georgia Tech team, based as much as possible on existing standards and tech. More on this and <a href="http://arwave.org/" target="_blank">ARWave</a> in another post! Also, I am excited to be <a href="http://www.com-geo.org/program_techtalks.htm" target="_blank">demoing ARWave at the Com.Geo 2010 conference, June 21&#8211;23, in Washington</a>. Again, more on this later.</p>
<p>But for now back to my question opening this post:</p>
<h3><strong>&#8220;Are Nokia back in AR?&#8221; and &#8220;How attractive is the N8 to the AR community?&#8221;</strong></h3>
<p>First I asked my co-chair on Augmented Reality Event, Ori Inbar of <a href="http://ogmento.com/" target="_blank">Ogmento</a> &#8211; one of the augmented reality start-ups whose star is on the rise. Ori (also known for his writing on <a href="http://gamesalfresco.com/" target="_blank">Games Alfresco</a>) is one of the key thought leaders in the emerging augmented reality industry.</p>
<p><strong>Ori  Inbar:</strong></p>
<blockquote><p><strong>&#8220;Well, the hardware spec is impressive &#8211; but that&#8217;s not what makes it a good AR device. What&#8217;s really important is reach, distribution, and ease of development &#8211; areas in which the N8 still has a lot to prove.</strong></p>
<p><strong>- Will the N8 become as prevalent as the iPhone or leading Android phones, or will the Nokia landscape remain fragmented with too many variants of phones to support?</strong></p>
<p><strong>- Will the Ovi store pick up steam and become a lucrative channel for selling apps?</strong></p>
<p><strong><strong>While we wait for these questions to be answered &#8211; we&#8217;ll keep developing games for Android and the iPhone </strong><img src="../wp-includes/images/smilies/icon_wink.gif" alt=";)" />&#8221;</strong></p></blockquote>
<p>So  there you go.</p>
<p>Ori is right that augmented reality developers like the hardware specs. Here are some comments from Patrick O&#8217;Shaughnessey of <a href="http://patchedreality.com/" target="_blank">Patched Reality</a>; Peter Meier, <a href="http://www.metaio.com/" target="_blank">Metaio</a>; whurley of <a href="http://chaoticmoon.com/" target="_blank">Chaotic Moon</a>; and Robert Rice, <a href="http://www.mirascape.com/" target="_blank">Mirascape</a>.</p>
<p><strong>Patrick O&#8217;Shaughnessey:</strong></p>
<blockquote><p><strong>&#8220;Looks like it has everything one would want to make both looking-glass and magic-mirror AR, and built-in face recognition software to boot! I wonder if they allow you to use both cameras at once, making AR video conferencing possible?&#8221;</strong></p>
<p><strong>&#8220;Dedicated graphics processor with OpenGL 2.0 enables 3D graphics&#8221; &#8211; nice!</strong></p>
<p><strong>&#8220;Looks like it also does Flash (Flash Lite 4.0, anyway). Not sure if Flash Lite provides access to the camera. Could be a nice way to do quick and dirty POCs on the device&#8221;</strong></p></blockquote>
<p><strong>Peter Meier: </strong></p>
<blockquote><p><strong>&#8220;Metaio thinks the N8 is a very promising device for AR. Especially in Europe we expect a lot of reach. Symbian is hard to develop for, but on the other hand it is much more efficient in terms of real-time performance than Android and less restrictive on APIs than the iPhone.&#8221;</strong></p></blockquote>
<p><strong>whurley:</strong></p>
<blockquote><p><strong>&#8220;It looks really great and perfect for AR in many regards.&#8221;</strong></p>
<div><strong>&#8220;The clarity of a 12 MP camera with Carl Zeiss optics will give the Nokia N8 some clear advantages in the AR world. The clarity of this camera will allow for more advanced applications of augmented reality such as image/object recognition.&#8221;</strong></div>
</blockquote>
<div><strong>Robert Rice:</strong></div>
<blockquote>
<div>
<blockquote><p><strong>&#8220;At first glance this is  definitely very impressive from a hardware specification. I am surprised  though, at the lack of buzz about the device. I have heard virtually  nothing about it in the usual sources I monitor, and I wonder about the  marketing. </strong></p>
<p><strong>Given the technical superiority and the difficulty Nokia has had penetrating at least the North American market, maybe something catchier like the GR-8 (Nokia Great) or M8 (Nokia Mate) might be a bit more user friendly and easier to market. I particularly like &#8220;Nokia M8&#8221;&#8230;take it with you wherever you go, can&#8217;t live without it, etc. </strong></p>
<p><strong> Anyway, as awesome as the specifications are, I would like to echo some  of the other comments here&#8230;tools, community, ease of development, etc.  are critical parts of the ecosystem that would make this ideal for AR  development, above and beyond the pure hardware specs. I think Nokia has  to really re-think some of its strategy to engage consumers, attract  the developers, and establish a new beachhead in North America.&#8221;</strong></p></blockquote>
</div>
</blockquote>
<h3><strong>Talking with Michael Halbherr at Where 2.0</strong></h3>
<p>I talked to Michael Halbherr briefly at <a href="http://en.oreilly.com/where2010" target="_blank">Where 2.0</a> before the release of details on the N8, so, of course, he was unable to be as explicit as he might have been about the N8. Below is a lightly edited transcript. At Where 2.0, Michael Halbherr gave a clear explanation of Nokia&#8217;s emphasis on a global strategy. And, as <a href="http://www.gearthblog.com/blog/archives/2010/03/where_20_-_day_2.html" target="_blank">this blow-by-blow account</a> on the Google Earth Blog noted, &#8220;They&#8217;re doing some cool stuff with their handheld maps. In particular, their Ovi Maps (turn-by-turn navigation) are now completely free and becoming very popular.&#8221;</p>
<p>But the question remains: <strong>Where is AR on the Nokia roadmap?</strong></p>
<p><strong><a href="http://www.flickr.com/photos/oreillyconf/4479897269/in/set-72157623619935511/" target="_blank"><img class="alignnone size-medium wp-image-5454" title="4479897269_97e027cb5f" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/04/4479897269_97e027cb5f-300x199.jpg" alt="4479897269_97e027cb5f" width="300" height="199" /></a></strong></p>
<p><em>Michael Halbherr (Nokia) on stage at the O&#8217;Reilly Where 2.0 Conference  2010 in San Jose, California. Photo by <a rel="nofollow" href="http://duncandavidson.com/">James  Duncan Davidson</a>. See <a href="http://www.flickr.com/photos/oreillyconf/4479897269/in/set-72157623619935511/" target="_blank">Where 2.0 Flickr set here</a>.</em></p>
<p><strong>Tish Shute:</strong> Can I ask you a few questions about Augmented Reality?</p>
<p><strong>Michael Halbherr:</strong> <strong>So for me augmented reality and mixed reality is essentially a way you search. So when you look at where we come from, as the company that acquired <a id="oipx" title="Gate 5" href="http://apb.directionsmag.com/archives/1880-Nokia-acquisition-of-Gate5-a-sign-of-things-to-come.html">Gate 5</a>, we believe the phone is essentially the gate that basically combines or connects your virtual and your real world in both ways. So we can have virtual overlays on top of reality, and you can also record and push it back.</strong></p>
<p><strong>But when you look at the way we do it, I think it is actually an extension of mapping, right? So you will have the map, which is an abstraction of reality, and then basically with a flick you can go to real reality. And I think to really understand what you see, you need to actually really know what you see, and an image doesn&#8217;t do that. But if you have compass, and GPS, and 3D models, which is what Navteq is capturing, we can actually do line-of-sight calculations and you can really say, &#8220;OK. He is looking at that building.&#8221; And you can click on the building because you have that understanding. And you can combine that. And that leads to a completely new way for how people interact with their environment.</strong></p>
<p><strong>So the here and now, and me here and now, and what next is a key innovation in our focus hereafter.</strong></p>
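<p>Halbherr&#8217;s line-of-sight point can be made concrete with a toy sketch: given a GPS position and a compass bearing, a 2D ray test against a building-footprint edge tells you whether the user is facing that wall. This is purely illustrative geometry of my own, not Nokia&#8217;s or Navteq&#8217;s pipeline; a real system would work on projected map coordinates and full 3D models.</p>

```python
import math

def ray_hits_segment(origin, bearing_deg, a, b):
    """Distance along the view ray to wall segment a-b, or None if missed.
    bearing_deg is degrees clockwise from north; points are (east, north)."""
    theta = math.radians(bearing_deg)
    dx, dy = math.sin(theta), math.cos(theta)   # unit view direction
    ex, ey = b[0] - a[0], b[1] - a[1]           # wall direction
    wx, wy = a[0] - origin[0], a[1] - origin[1] # origin -> wall start
    det = ex * dy - dx * ey
    if abs(det) < 1e-12:                        # ray parallel to wall
        return None
    t = (ex * wy - ey * wx) / det               # distance along ray
    u = (dx * wy - dy * wx) / det               # fraction along wall
    return t if t >= 0 and 0 <= u <= 1 else None

# A viewer at the origin facing due north "sees" a wall 5 units ahead:
# ray_hits_segment((0, 0), 0, (-1, 5), (1, 5)) -> 5.0
```

<p>Running this test against every footprint edge in the map data and taking the nearest hit is, in essence, how &#8220;he is looking at that building&#8221; becomes computable.</p>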
<p><strong><strong>Tish Shute:</strong> </strong>I suppose for AR enthusiasts, we were all disappointed when we saw that the N900 with Maemo, which was very exciting as an operating system, didn&#8217;t have a compass.</p>
<p><strong><strong>Michael Halbherr:</strong> Yeah, but you know, that is just a matter of timing. At the end of the day, <a title="Meego" href="http://meego.com/">Meego</a> is our, I think, high-end push. You will see this Meego device have what you need it to have. So I wouldn&#8217;t worry too much about that. We are building all the phones, and everything has everything.</strong></p>
<p><strong>I think it is not just Meego. It is the whole thing around it. It is Qt, it is the web runtime, it is the whole Meego system.</strong></p>
<p><strong><strong>Tish Shute: </strong></strong>Yes, we have used Qt for some development on the ARWave project; it is very good.</p>
<p><strong><strong>Michael Halbherr:</strong> Yeah, and the web runtime, it&#8217;s open&#8230; So Nokia is about open and emotional. Others are maybe open and more utilitarian, or closed, and we are more open and more emotional. And I think that is what we want to basically do with this platform.</strong></p>
<p><strong>And we are working very hard. And so whatever I say has been built almost a year ago. So there is a lot that we are building that we don&#8217;t talk about before we announce. And we try to do this as a global offer, so we can&#8217;t launch a little service in the US, which maybe is a problem for us. But we are basically building complete solutions for global markets.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yes. And when I heard you talk just now on Nokia&#8217;s global strategy, it all fits in. But from the point of view of someone who lives here in the US, it seems Nokia is disappointing us.</p>
<p>So is Nokia enabling augmented reality in any ways that no one else is? I mean, if you are interested in augmented reality, the backend and data are vital, e.g. the Navteq part. And Apple has excited us with the amazing performance of their hardware, and they have taken steps like controlling the manufacture of their chips; a powerful CPU and GPU are very important for augmented reality experiences too. Is there a need for a chipset geared to augmented reality specifically?</p>
<p><strong><strong>Michael Halbherr: </strong>I&#8217;m not sure you have to go that far. I think at the end of the day it is the data, right? The map data and the 3D data that is actually extremely important.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yeah, I  agree with you that the data is vitally important.</p>
<p><strong><strong>Michael Halbherr:</strong> Because at the end of the day, when you have a real abstract view, it is really important, because the camera or the view is only a viewer, right? And people don&#8217;t really know what they see. It is just they plaster on it, believing that the person sees what he sees. So I think real augmented reality or mixed reality is really basically having a real picture, but having, actually, the understanding of the 3D logic of the picture. And for that, you need to capture the data, which Navteq is doing.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>The original dream of augmented reality is 3D media and graphics tightly registered to the physical world. But actually, it is the data&#8230; being able to integrate the streams of data and sensor fusion techniques that will make this interesting and possible, right?</p>
<p><strong><strong>Michael Halbherr: </strong>Again, it is overlaying virtual elements on real elements. It is like you are taking your glasses and now you can see information attached. And of course, you need to make sure it is not overloaded, and that everybody works with it, and it is precise. So that is basically it. So I think that is why we are investing a lot in the precision of the maps and the 3D elements of the maps.</strong></p>
<p>At this point we had to leave our seats, as the set was being broken down for the next event at Where 2.0. <a href="http://www.youtube.com/view_play_list?p=7560B263F3C6B849" target="_blank">Check out all the videos for Where 2.0 here</a>; it was a watershed event for all things hyperlocal, geo, and augmented!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/05/14/over-40-ar-companies-qualcomm-microsoft-google-intel-nvidia-and-nokia-coming-to-augmented-reality-event-are-nokia-back-in-the-ar/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Visual Search, Augmented Reality, and Physical Hyperlinks for Playfulness, Not just Purchases: Talking with Paige Saez about ImageWiki</title>
		<link>http://www.ugotrade.com/2010/03/18/visual-search-augmented-reality-and-physical-hyperlinks-for-playfulness-not-just-purchases-talking-with-paige-saez-about-imagewiki/</link>
		<comments>http://www.ugotrade.com/2010/03/18/visual-search-augmented-reality-and-physical-hyperlinks-for-playfulness-not-just-purchases-talking-with-paige-saez-about-imagewiki/#comments</comments>
		<pubDate>Fri, 19 Mar 2010 03:25:17 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARNY]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[Augmented reality Magician]]></category>
		<category><![CDATA[Augmented Reality Meetup]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Chris Grayson]]></category>
		<category><![CDATA[distributed augmented reality]]></category>
		<category><![CDATA[Gamepocalypse]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[imagewiki]]></category>
		<category><![CDATA[Imagwik]]></category>
		<category><![CDATA[interaction design]]></category>
		<category><![CDATA[Jason Kolb]]></category>
		<category><![CDATA[Jesse Schell]]></category>
		<category><![CDATA[linked data]]></category>
		<category><![CDATA[linked data and augmented reality]]></category>
		<category><![CDATA[Makerlab]]></category>
		<category><![CDATA[Marco Tempest]]></category>
		<category><![CDATA[open augmented reality]]></category>
		<category><![CDATA[open Frameworks]]></category>
		<category><![CDATA[open Frameworks and augmented reality]]></category>
		<category><![CDATA[OpenCV]]></category>
		<category><![CDATA[OpenCV and augmented reality]]></category>
		<category><![CDATA[optical character recognition]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[paige saez]]></category>
		<category><![CDATA[physical hyperlinking]]></category>
		<category><![CDATA[physical world platform]]></category>
		<category><![CDATA[point and find]]></category>
		<category><![CDATA[RDF and Augmented Reality Search]]></category>
		<category><![CDATA[semantic web and augmented reality]]></category>
		<category><![CDATA[snaptell]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[social augmented reality]]></category>
		<category><![CDATA[social commons]]></category>
		<category><![CDATA[Social Commons for Augmented Reality]]></category>
		<category><![CDATA[SPARQL]]></category>
		<category><![CDATA[SPARQL and ARWAVE]]></category>
		<category><![CDATA[SPARQL and Wave]]></category>
		<category><![CDATA[SPARQL and XMPP]]></category>
		<category><![CDATA[Steven Feiner]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[ubiquity]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[Where2.0]]></category>
		<category><![CDATA[Will Wright]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5262</guid>
		<description><![CDATA[The video above, The Imawik commercial, is a collaboration between In The Can Productions and Paige Saez for Makerlab &#8220;The Imawik (ImageWiki) is a visual search tool for mobile devices. It allows for the ability to turn images into physical hyperlinks, conflating visual culture with a community-editable universal namespace for images.&#8221; Paige Saez is an [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="225" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=2818525&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /><embed type="application/x-shockwave-flash" width="400" height="225" src="http://vimeo.com/moogaloop.swf?clip_id=2818525&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p><em>The video above, <a href="http://www.vimeo.com/2818525" target="_blank">The Imawik commercial</a>, is a collaboration between <a href="http://www.inthecanllc.com/" target="_blank">In The Can Productions</a> and <a href="http://makerlab.com/who.html" target="_blank">Paige Saez</a> for <a href="makerlab.com/projects_show_imagewiki.html" target="_blank">Makerlab</a></em></p>
<p>&#8220;The Imawik (<a href="http://imagewiki.org/" target="_blank">ImageWiki</a>) is a visual search tool for mobile devices. It allows for the  ability to turn images into physical hyperlinks, conflating visual  culture with a community-editable universal namespace for images.&#8221;</p>
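<p>One way to picture the &#8220;physical hyperlink&#8221; idea is a fingerprint-to-URL table: fingerprint what the camera sees, then look the fingerprint up in a community-editable namespace. The toy average-hash and <code>links</code> table below are my own hypothetical illustration, not ImageWiki&#8217;s actual matching pipeline; real visual search tolerates near-matches (e.g. by Hamming distance between fingerprints) rather than requiring exact keys.</p>

```python
# Toy sketch of a physical hyperlink: image fingerprint -> URL lookup.
# Hypothetical illustration only; not ImageWiki's actual implementation.

def average_hash(pixels):
    """64-bit fingerprint of an 8x8 grayscale image (list of 64 ints 0-255):
    each bit records whether that pixel is brighter than the image mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

# A wiki-style table mapping fingerprints to pages that anyone could edit.
links = {}

def register(pixels, url):
    """Turn an image into a hyperlink by recording its fingerprint."""
    links[average_hash(pixels)] = url

def resolve(pixels):
    """Camera frame in, URL out; None if the image is unknown."""
    return links.get(average_hash(pixels))
```

<p>The interesting design questions Paige raises below &#8211; who edits the table, and who owns it &#8211; live entirely in that <code>links</code> dictionary.</p>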
<p>Paige Saez is an artist, designer, and researcher. In 2007 she founded <a href="makerlab.com/projects_show_imagewiki.html" target="_blank">Makerlab</a>, an arts and technology incubator focused on civic and environmental projects, with <a href="http://www.hook.org/" target="_blank">Anselm Hook</a>.</p>
<p>Paige and Anselm (see my interview with Anselm Hook here, <a title="Permanent Link to Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook" rel="bookmark" href="../../2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/">Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook</a>) have been asking a very important question:</p>
<p><strong>&#8220;Who Will Own Our Augmented Future?&#8221;</strong></p>
<p>But most importantly, they have been actually developing applications (again, <a href="http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/" target="_blank">see my interview with Anselm</a> for more background on this) to allow people to play with, hack, explore, and create with the physical world platform, and to imagine new possibilities for physical hyperlinking and augmented realities. This is pretty important stuff, and kudos to Paige and Anselm for beginning this work before the big players &#8211; <a href="http://www.google.com/mobile/goggles/#dc=gh0gg" target="_blank">Google Goggles</a>, <a href="http://pointandfind.nokia.com/" target="_blank">Point and Find</a>, and <a href="http://www.snaptell.com/" target="_blank">SnapTell</a> &#8211; came hurtling into the field of visual search and physical hyperlinking. <a href="http://techblips.dailyradar.com/video/translation-in-google-goggles-prototype/" target="_blank">See this demonstration of translation and optical character recognition</a> in Google Goggles. Also check out Jamey Graham&#8217;s (Ricoh Research) Ignite presentation at Tools of Change 2010, <a href="http://www.toccon.com/toc2010/public/schedule/detail/13370" target="_blank">Visual Search: Connecting Newspapers, Magazines and Books to Digital Information without Barcodes</a>; for more, see <a href="http://ricohinnovations.com/betalabs/visualsearch">ricohinnovations.com/betalabs/visualsearch</a>.</p>
<p>We are only just beginning to get a glimpse of how contested the social commons of the physical world platform is going to be &#8211; see the Yelp <a href="http://blogs.wsj.com/digits/2010/03/17/small-businesses-join-lawsuit-against-yelp/" target="_blank">controversy</a>.</p>
<p>As Paige points out:</p>
<p><strong>&#8220;The lens that you are actually looking through was as important as what you were looking at. And democratizing that lens became the most important thing that we could possibly do.&#8221;</strong></p>
<p>I am in total agreement. One reason I have so much enthusiasm for <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">ARWave</a> (note: if you are interested in following the developer conversations, there are several public Waves) is that I see this open framework playing an important role in the democratization of our augmented views, by creating an open, distributed, and universally accessible platform for augmented reality that will allow the creation of augmented reality content and games to be as simple as making an HTML page, or contributing to a wiki.</p>
<p>Federation, real-time collaboration, <a href="http://linkeddata.org/" target="_blank">linked data</a> &#8211; ARBlips that contain metadata usable for semantic searches &#8211; and modified wave servers that can properly listen to and respond to <a href="http://www.w3.org/TR/rdf-sparql-query/" target="_blank">SPARQL</a> HTTP requests (see Jason Kolb&#8217;s <a href="http://jasonkolb.com/" target="_blank">many interesting posts</a> on XMPP and Wave): these are just some of the reasons why ARWave could revolutionize augmented reality searches and more! (See <a href="http://www.mobilemonday.nl/talks/tish-shute-the-next-wave-of-ar/" target="_blank">my presentation at MoMo13</a> &#8211; video <a href="http://www.youtube.com/watch?v=Y7iqg8X24mU" target="_blank">here</a>.)</p>
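<p>To make the SPARQL-over-HTTP idea a little more concrete, here is a rough sketch of how an AR client might ask an endpoint for blips near its location, following the standard SPARQL protocol (query string in an HTTP GET). The endpoint URL and the bounding-box query shape are my own illustration using the W3C WGS84 geo vocabulary; ARWave had not fixed a schema at the time of writing.</p>

```python
# Sketch: querying AR content ("ARBlips") near a location via SPARQL over HTTP.
# The endpoint and query shape are hypothetical illustrations, not an ARWave spec.
from urllib import parse, request

def nearby_blips_query(lat, lon, radius_deg=0.01):
    """Build a SPARQL SELECT for blips whose geo tags fall in a bounding box."""
    return f"""
    PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
    SELECT ?blip ?lat ?lon WHERE {{
        ?blip geo:lat ?lat ; geo:long ?lon .
        FILTER (?lat > {lat - radius_deg} && ?lat < {lat + radius_deg} &&
                ?lon > {lon - radius_deg} && ?lon < {lon + radius_deg})
    }}"""

def sparql_http_request(endpoint, query):
    """Standard SPARQL-protocol GET request (built but not sent here)."""
    return request.Request(
        endpoint + "?" + parse.urlencode({"query": query}),
        headers={"Accept": "application/sparql-results+json"})
```

<p>A wave server that answered requests like this could hand any client a machine-readable list of nearby augmentations &#8211; which is exactly the kind of open, federated search the paragraph above is pointing at.</p>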
<p>For more on real-time social augmented experiences, see our panel, <a href="http://en.oreilly.com/where2010/public/schedule/detail/11046" target="_blank">The Next Wave of AR: Exploring Social Augmented Experiences</a>, at <a href="http://en.oreilly.com/where2010" target="_blank">Where2.0 2010</a>, and don&#8217;t miss the <a href="http://en.oreilly.com/where2010" target="_blank">Where2.0</a> conference, which has been the crucible for the emergence of location technologies.</p>
<p>Augmented realities, proximity-based social networks, mapping &amp; location-aware technologies, sensors everywhere, <a href="http://linkeddata.org/" target="_blank">linked data</a>, and human psychology are on a collision course in what <a href="http://www.schellgames.com/" target="_blank">Jesse Schell</a> calls the &#8220;Gamepocalypse.&#8221; See <a href="http://g4tv.com/videos/44277/dice-2010-design-outside-the-box-presentation/" target="_blank">Jesse Schell&#8217;s DICE 2010 talk here</a>, and check out his <a href="http://www.gamepocalypsenow.blogspot.com/" target="_blank">Gamepocalypse Now</a> blog. As Bruce Sterling notes in <a href="http://www.wired.com/beyond_the_beyond/2010/02/jesse-schell-future-of-games-from-dice-2010/" target="_blank">his post here</a>:</p>
<p><strong>*Another precious half hour out of your life. However: if you&#8217;re into interaction design, ubiquity, social networking, and trendspotting, in the gaming biz or out of it, you&#8217;re gonna wanna do yourself a favor and listen to this.</strong></p>
<p>And don&#8217;t forget to <a href="http://augmentedrealityevent.com/register/" target="_blank">register now</a> for <a href="http://augmentedrealityevent.com/" target="_blank">Augmented Reality Event (ARE2010), June 2&#8211;3, 2010, Santa Clara, CA</a>.</p>
<p><a href="http://www.wired.com/beyond_the_beyond/" target="_blank">Bruce Sterling</a>, <a href="http://www.stupidfunclub.com/" target="_blank">Will Wright</a>, and Jesse Schell <a href="http://augmentedrealityevent.com/speakers/" target="_blank">will be keynoting, and there is a totally awesome line up of AR innovators and industry leaders</a>, including Paige and Anselm!</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/bruce_sterling.jpg"><img class="alignnone size-thumbnail wp-image-5289" title="bruce_sterling" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/bruce_sterling-150x150.jpg" alt="bruce_sterling" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/will_wright.jpg"><img class="alignnone size-thumbnail wp-image-5290" title="will_wright" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/will_wright-150x150.jpg" alt="will_wright" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Jesseschellpost.jpg"><img class="alignnone size-thumbnail wp-image-5291" title="Jesseschellpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Jesseschellpost-150x150.jpg" alt="Jesseschellpost" width="150" height="150" /></a></p>
<h3>And:</h3>
<p>You are in luck!</p>
<p>Here is a discount code for the first 100 folks to register for the event (before the end of March). Go to the <a href="https://register03.exgenex.com/GcmRegister/Index.Aspx?C=70000088&amp;M=50000500" target="_blank">registration page</a>, type in code AR245, and &#8220;you&#8217;ll be asked to pay only $245 for 2 full days of AR goodness.&#8221;</p>
<p>&#8220;Watching AR prophet Bruce Sterling, gaming legend Will Wright, and visionary game designer Jesse Schell deliver keynotes for this price &#8211; is a magnificent steal. And on top, participating in more than 30 talks by AR industry leaders will turn these $245 into your best investment of the year,&#8221; as Ori put it so well on Games Alfresco!</p>
<p>If you want a preview of just how exciting it is to be involved in augmented reality right now, check out <a href="http://gamesalfresco.com/2010/03/17/magic-games-education-and-live-coding-at-the-augmented-reality-meetup-in-nyc/" target="_blank">Ori Inbar&#8217;s great round-up</a> of our latest monthly <a href="http://www.meetup.com/ARNY-Augmented-Reality-New-York/" target="_blank">Augmented Reality Meetup NY</a> (or, as Ori notes, we fondly like to call it, <a href="http://www.meetup.com/ARNY-Augmented-Reality-New-York/" target="_blank">ARNY</a>). There is lots of video up now (much thanks to <a href="http://www.chrisgrayson.com/" target="_blank">Chris Grayson</a>, who <a href="http://armeetup.org/001_arny/video/index.html" target="_blank">live-streamed it</a>). <a href="http://www.marcotempest.com/" target="_blank">Augmented Reality Magician Marco Tempest</a> is an absolute <strong>must</strong>-see. (Developers, note this is an awesome use of <a href="http://www.openframeworks.cc/" target="_blank">openFrameworks</a> and <a href="http://opencv.willowgarage.com/wiki/">OpenCV</a>.) The video of the show includes a rare explanation of how it all works &#8211; see <a href="http://www.youtube.com/watch?v=6TluCaxz7KM&amp;feature=player_embedded" target="_blank">here</a>.</p>
<h3>Talking with Paige Saez &#8211; &#8220;Software is candy now!&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/paige_headshot_sq135.jpg"><img class="alignnone size-full wp-image-5266" title="paige_headshot_sq135" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/paige_headshot_sq135.jpg" alt="paige_headshot_sq135" width="135" height="135" /></a><br />
</p>
<p><strong>Tish  Shute:</strong> What interests me about ImageWiki is that you have thought  about physical hyperlinking beyond the obvious of where to get your  next good hamburger and beer, right?</p>
<p><strong>Paige Saez:</strong> Right. It was interesting for me in just thinking about the two things. How do you design a tool to work in a way that people are getting value from it? And also, how do you make it work in a way where people can explore and hack it? I think the most interesting technologies, and this is probably something somebody else said sometime, are the ones that disappear, that we don&#8217;t see, instead we see <em>through</em>. They become just the intermediaries. They don&#8217;t interfere with what we are trying to do.</p>
<p>It&#8217;s a struggle whenever you are developing a new way for people to get information or make something happen, because you are playing with magic a little bit. And you have to make it vanish the way a good magic trick makes an experience a magical one. But at the same time you also need to reveal just enough that you let people in and they can see how to change it and make it their own. That is the interesting tension for this space right now: the idea of augmented reality begins to lead to the idea of a social commons for physical things. The Imagewiki project was a locus of just this tension. Tish, you and I have previously discussed how difficult it was to even get people to understand the two concepts independently.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_515dwxtjnds_b.png"><img class="alignnone size-full wp-image-5269" title="dhj5mk2g_515dwxtjnds_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_515dwxtjnds_b.png" alt="dhj5mk2g_515dwxtjnds_b" width="642" height="163" /></a></p>
<p><strong>Tish Shute:</strong> Right, until  recently most people hadn&#8217;t even heard the term augmented reality and I  am not sure that a particularly high percentage of people would  recognize it now despite the recent interest in smart phone apps.</p>
<p><strong>Paige Saez:</strong> It&#8217;s very  difficult to get people to understand the two concepts, and now you are  adding in the third level of participation as well. So I don&#8217;t think it  is impossible, but I do think it requires narrative. It is interesting  that you were talking about the stories you heard this morning from the  creatives at the event [Tish mentioned David Curcurito, Creative  Director, Esquire gave an excellent presentation at Sobel Media event  NYC] because it&#8217;s narrative and the attention to telling a story that  help you walk through all of the ways you can understand how completely  expansive this area is right now.</p>
<p>So I think we have to play with it, play with the space and the  tools. I think we need to have an idea of what we want people to use  the tool for, and we need to not only introduce them to the tool and the  technology, but also introduce them to the concepts as well. So I see  it as a three part process.</p>
<p>I&#8217;m really excited to be there with people,  helping them do that. I think we need to do this face to face. I don&#8217;t  think this can be only through a social network. The ImageWiki website  is like one quarter of the entire picture, you know? The website is the  resource center and the place where you can see people adding images,  but what value is it to you to see an added image? It is more valuable  for you to be interacting with the image or interacting with the object  in the real world.</p>
<p>Designing for the experience of using the ImageWiki got very complicated very fast. I was trying to figure out the main thrust of the design for the UI for the ImageWiki, and at a certain point I had to take a step back and say, &#8220;Okay, this has to be good enough for now, because we can lay it out and prototype as long as we want on the Web or mobile UI. What we need to be doing is going outside and actually aggregating and putting images into the database in order to see what exactly happens when we are adding.&#8221; It&#8217;s not just like you are taking a picture of something and adding it to Flickr. Using the tool is very context specific and the information is context specific, and you can&#8217;t necessarily make that all happen at the exact same time. I think these are really fascinating spaces to be struggling in and I&#8217;m so glad to be working in this space.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki_2.jpg"><img class="alignnone size-medium wp-image-5300" title="imagewiki_2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki_2-300x225.jpg" alt="imagewiki_2" width="300" height="225" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki1.jpg"><img class="alignnone size-medium  wp-image-5299" title="imagewiki" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki1-300x225.jpg" alt="imagewiki" width="300" height="225" /></a></p>
<p><em>Images by Chris Blow of <a href="http://unthinkingly.com/" target="_blank">unthinkingly.com</a></em></p>
<p><strong>Tish Shute:</strong> Could you explain why we need ImageWiki? I mean I think I have ideas on this, but perhaps you can explain to me from your point of view why we need an ImageWiki, as opposed to, say, extending the image space of Wikimedia or something added on to Flickr. I mean maybe something leveraging the geotagged photo sets and APIs we already have?</p>
<p><strong>Paige Saez:</strong> Yes, definitely. It&#8217;s a really good question, I mean it really is. Like, do you need an entirely new place to be holding images outside of the places that we are already holding images? That&#8217;s a huge question; enormous. Especially when you take a look at the problems around that. It&#8217;s exhausting for an end user. Who the heck wants to go and reload everything into <em>yet another place</em>, right?</p>
<p><strong>Tish Shute:</strong> Right.</p>
<p><strong>Paige Saez:</strong> Moreover, who is going to really bother? Another problem would be what happens to the existing datasets that people have already committed to? And then of course there is the problem of authority and explaining why&#8230; gaining interest and authority in a space when nobody even understands why that space should exist in the first place. And those are just three off-the-top-of-my-head problems with that idea.</p>
<p>And yet at the same time, I don&#8217;t actually know how else to go about thinking about the ImageWiki unless I think about it as its own thing. Then you start thinking about models of large independent image databases that exist already, examples of this from a product standpoint, references to consider. The Getty Foundation comes to mind. There are many other historical centers that have huge resources and images that are licensed out and used. So here we have a working example of people already doing this. But successfully? I don&#8217;t know. We do have a ton of intellectual property rights and copyright issues and ownership and use issues with images currently. As a working artist these issues for me were a major red flag to consider. Working on the social commons for augmented reality starts paralleling issues found in digital rights management and intellectual property.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_518gpgpr7gd_b.png"><img class="alignnone size-full wp-image-5274" title="dhj5mk2g_518gpgpr7gd_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_518gpgpr7gd_b.png" alt="dhj5mk2g_518gpgpr7gd_b" width="441" height="606" /></a></p>
<p><strong>Tish Shute:</strong> But one good thing about Wikimedia, why I focused on Wikimedia, is Flickr and Wikimedia already use a creative commons licensing, right?</p>
<p><strong>Paige Saez:</strong> Creative commons, you know they have their own resource center, too. But you know they haven&#8217;t been successful as great databases for images so far.</p>
<p><strong>Tish Shute:</strong> What would you like to see that they don&#8217;t have? Like say maybe start with Wikimedia, right?</p>
<p><strong>Paige Saez:</strong> There&#8217;s just still a lot of issues with how to encourage people to want to contribute. It&#8217;s hard to show the value to someone who doesn&#8217;t already understand the value for some reason. At least for me personally this is something I have run into frequently. I don&#8217;t know if it is necessarily what Wikimedia doesn&#8217;t have, I think it is a lack of understanding of what creative commons really means. And there is still a very strong sense of ownership and concern about creative property rights. Being paid to be creative is a tremendously difficult thing to do. People fear losing their livelihoods. They think this is possible. Is it? I dunno.</p>
<p>For example: look at me, I take a photograph of something, I can sell that. And there&#8217;s a question about whether or not, as an artist, I want to have my photographs in a pool of images that is open and accessible when I could be making money on it instead. Now that is just an example. Me personally, I can see the value. But that is a common concern. The gist of the question being, &#8216;what value does it bring to give something away versus holding on to it?&#8217; A hugely popular discussion right now.</p>
<p>This is the same crux of the problem we are dealing with when we talk about thinking about images in the social commons for the real world. It&#8217;s a conversation about ownership. It&#8217;s about, who does this belong to really? If I take a photograph of a Levi&#8217;s billboard, does that photograph belong to me or does it belong to Levi&#8217;s? We know the boundaries of that. But when the image becomes a living image, an image capable of transmutation; an image that provokes an action or hyperlinks to a product, experience, information&#8230;.where are the boundaries in that?</p>
<p><strong>Tish Shute: </strong>But how is ImageWiki handling that differently from Wikimedia, I suppose is my question.</p>
<p><strong>Paige Saez:</strong> We haven&#8217;t solved the problem.</p>
<p><strong>Tish Shute:</strong> Yes, I suppose it is not like we have fully solved the problem of a creative commons for images on the internet, let alone the issues of a social commons for the real world! So neither one has solved the problem, right?</p>
<p><strong>Paige Saez:</strong> Exactly. To be honest, it made my head spin. I realized we were building a web application and a mobile tool doing augmented reality, real time feedback on the world, and suddenly we weren&#8217;t. Suddenly we were dealing with DNS and talking about physical hyperlinks and ownership and property. And basically at that point you just have to sit and really start catching up on IP issues and figuring out how to deal with that space in a much more holistic way. It became so important that we had to take a step back and go</p>
<p>&#8220;Oh my god, I think we have really uncovered a real problem here.&#8221;</p>
<p>At the point when we were building out the tools we realized something was really going on with our project. Here we were thinking that this was just a beautiful experience of learning about the world around us. We really&#8230; Anselm and I both just really wanted this tool to exist. It was something that we both just really wanted to happen in the world, something that we felt really just thrilled to make. And we looked at it and used it and realized that instead of it just being a beautiful experience, it was a fundamental shift in how we understood everything. That it impacted our world in the same way the Internet impacted our world. It was a fundamental shift in understanding. A sea-change.</p>
<p>So I put down the prototype and went back to researching, read a ton of books on IP and went and presented to friends, family, schoolmates and co-workers trying to explain the project and then the larger conceptual framework that had emerged from the project. I began using the metaphor of thinking about Magritte&#8217;s &#8220;Ceci n&#8217;est pas une pipe.&#8221; Thinking about a pipe that isn&#8217;t actually a pipe.</p>
<p><strong>Tish Shute:</strong> Oh, yes!</p>
<p><strong>Paige Saez: </strong>..to try to help explain to people that the image that you see is actually not, you know, it&#8217;s not an image of a thing. It&#8217;s an image. And that image has a tone and that image has a voice, and that image was chosen. And there were decisions that were made through the interface of the camera, specific decisions that defined the view of what you were looking at. And that that wasn&#8217;t being acknowledged and that that was a fundamental part of what the ImageWiki was aiming to do. The lens that you are actually looking through was as important as what you were looking at. And democratizing that lens became the most important thing that we could possibly do.</p>
<p><strong>Tish Shute:</strong> So the emphasis for you on ImageWiki was in fact the lens, even though you found obstacles to creating the interface, right?</p>
<p><strong>Paige Saez:</strong> Yes. Definitely. That&#8217;s what I fell in love with first. I really wanted to be able to use my phone to learn about what kind of tree this was or to buy tickets for the band on the poster I just saw, or see a hidden secret. For me it was very much a story, a narrative experience that I just thought was magical. And that is how I fell in love with it, which is not where I ended up.  Where I ended up was realizing it was a fundamental shift in not only my own understanding of how to use the world around me, but in our understanding of looking at the world.</p>
<p><strong>Tish Shute: </strong>It would be pretty scary if an image DNS was basically in the hands of either one or very few people, right? I mean even ImageWiki would be stuck with this problem, that if you set up a bunch of servers, you are going to be holding a very, very large image database. I mean, whatever your motivation, right? I think at the minute that is why I am very into seeing everything through the lens of federation. I see that unless we have federation, these giant, central databases are inevitable, aren&#8217;t they?</p>
<p><strong>Paige Saez: </strong>Essentially, yes. I mean I wasn&#8217;t able to walk through it as quickly as that. It kind of just overwhelmed me. Looking back on it, it seems perfectly obvious. I was just like &#8220;Oh my god, what have we done? Like what is going on?&#8221; Particularly for me because so much of my life has been spent in art, it was really easy to immediately understand the connection between the view, the viewer, and what&#8217;s being viewed as all just different layers of ownership and understanding that it is a gaze. Right? We know that we are never able to look at something without passing judgment on it, but to see that become a part of the interface in a real-time fashion just blew my mind.</p>
<p><strong>Tish Shute: </strong>Yes.</p>
<p><strong>Paige Saez:</strong> I think you are right. Getty Images, Flickr images, no matter what you are always holding on to something and you have to be responsible for it. Right? So how do you deal with the responsibility but don&#8217;t take on too much ownership? Where is the boundary with that?</p>
<p><strong>Tish Shute: </strong>And for me, the simple answer to that is loosely connected small parts, distributed systems and federation. Because the only way to be able to utilize these things is to have them distributed so that no one holds all the cards. Right?</p>
<p><strong>Paige Saez: </strong>Definitely and I personally agree with you wholeheartedly. However, the idea of distributed power is a concept that most people just don&#8217;t know how to deal with.</p>
<p><strong>Tish Shute:</strong> And it&#8217;s easier said than done, because the root problems that you are talking about aren&#8217;t gotten rid of through federation. If someone holds all the good image databases, just because they have the potential to be federated doesn&#8217;t mean they will choose to open them up, on many levels.</p>
<p><strong>Paige Saez:</strong> And even then you have to think about, sort of, like the next level of it, which is we want it to be all open and accessible, but everything is owned by somebody. Like, what really is public anymore, in general?</p>
<p><strong>Tish Shute:</strong> And what is interesting though, regardless of what we speculate conceptually on this, we have already set off down the road. I mean we already have several large ones&#8230; they are all in beta I suppose: Google Goggles, Point and Find, right? But we have applications that are beginning to implement this. They are beginning to implement search on it, and it is geo-located even if it&#8217;s not in an augmented view, right? So it is proximity based.</p>
<p><strong>Paige Saez: </strong>Right, right. I mean maybe the solution is that if we follow that line of thinking then Flickr will be partnering with Google Goggles. And then my images would stay under my ownership through the authority of Flickr. And I would use Flickr as my place to add images and they would just be responsive via my devices via AR.</p>
<p><strong>Tish Shute:</strong> That&#8217;s very interesting.</p>
<p><strong>Paige Saez:</strong> Definitely I think so. It is also the shortest distance between things.</p>
<p><strong>Tish Shute:</strong> Yes, and as Anselm kept pointing out, basically it is going to happen in the simplest way possible, really, regardless of the implications of that. But OK, getting back to ImageWiki. As you say neither Wikimedia nor Flickr were really designed to take this role, right?</p>
<p><strong>Paige Saez:</strong> Right.</p>
<p><strong>Tish Shute:</strong> With ImageWiki, you&#8217;ve had these ideas and a concern with the social implications of physical hyperlinking in your mind since its inception. Are there any design ideas you&#8217;ve come up with, as opposed to, as you say, connecting Flickr to Point and Find, or who knows, Google Goggles? How is ImageWiki going to be different, do you think? Is that a hard question at this point?</p>
<p><strong>Paige Saez:</strong> It is, and it&#8217;s a great question, and it&#8217;s a question I really love to think about. I think we have to introduce the politics with the tools. It has to be acknowledged that it&#8217;s not just a place to hold information, that&#8217;s what I feel in my heart.</p>
<p>At the same time, is that too much for people to really grasp at one time? In my experience it really has been, so the design of the experience needs to allow for an understanding of the power of the tool and the level of authority that the tool offers, while not getting in the way of it; just using it.  Because ultimately, at the end of the day, nobody will use anything if it isn&#8217;t valuable to them. And so I could talk for miles and miles and miles about how important it is that corporations don&#8217;t own all of the rights to all of the visual things in my life, right? For the rest of my life I could talk about that. The idea that advertising is dominating all of our views of anything in the world around us is horrifying. It doesn&#8217;t matter unless I can show somebody why it matters to them or how it affects them. It&#8217;s just that that is a tremendously difficult thing to explain through a user interface.</p>
<p>And I actually think that it&#8217;s great that tools like Google Goggles and Nokia Point and Find are here to do a lot of the hard work of showing people how it works. Recently somebody explained to me their experience of using Google Goggles. They went through this process of describing how Google Goggles took a picture and then did this really complicated visual scanning thing over the image, and it took a full minute.</p>
<p>And I said, &#8220;Well of course they did it that way.&#8221; And they said, &#8220;Well what do you mean?&#8221; I said, &#8220;Well, what they are really doing there when they are doing all these fancy graphics is they are showing you how it works.&#8221; And even if it isn&#8217;t actually related at all to how it functionally works, algorithmically, that&#8217;s not the point. The point is that this gesture of the time taken to make it look like it&#8217;s scanning an image and going back and forth with pretty colors is giving people the time to process that as an experience. That&#8217;s a metaphor for what&#8217;s really happening. And these kinds of metaphors are crucial with user experience design. We have lots and lots of examples of them and how they work, and many of them aren&#8217;t necessary. Like you know, for example, the bar that shows you the time it&#8217;s taking for something to process. There is no relationship between that and reality. But it is really important.</p>
<p><strong>Tish Shute:</strong> Yes, those bars often have no relationship to the actual time&#8230;</p>
<p><strong>Paige Saez:</strong> And that&#8217;s the thing. Like the idea of time versus our perceived understanding of time. Right? The length of time it takes for your Firefox browser to open and load your last 30 tabs, versus the reality of what&#8217;s actually happening. When you are doing that sort of research you are actually accessing millions and millions of places and points of interest all over the world, so we need more of that. We need more of the process shown. Anselm and I worked with a film maker named Karl Lind from In the Can Productions here in Portland to try and make a video about the ImageWiki. We made this little video and I can try to show it to you or send it to you if you want.</p>
<p><strong>Tish Shute:</strong> One of the issues with this kind of visual search is that it is inherently dependent on large databases; regardless of how they are federated, they are going to be very large. Right? I mean someone is going to have something big and aggregated there. I suppose someone will figure out the challenges of federated search eventually, but that is quite a big challenge!</p>
<p>So I suppose I am still trying to understand what ImageWiki can offer that we can&#8217;t get with any other existing service? How will there be a social commons and even a social contract for the world as a platform for computing and physical hyperlinks?</p>
<p>Eben Moglen brought up something when I talked to him about virtual worlds. He said we need code angels to let us know what was going on in the virtual space &#8211; who was gathering data and how, for example.</p>
<p><strong>Paige Saez:</strong> Tell me more about that, I want to hear more about that.</p>
<p><strong>Tish Shute: </strong> Eben suggested this metaphor when I was asking him about privacy in virtual worlds. The fact that people just didn&#8217;t know, when they were pushing avatars around virtual worlds, what metrics were being gathered on their behavior. And he basically said that what we need is code angels when we enter these spaces, because having the rules of the game buried in a TOS was ridiculous.</p>
<p><strong>Paige Saez:</strong> That is a really interesting idea.</p>
<p><strong>Tish Shute: </strong> Maybe ImageWiki needs to be our code angel to navigate the augmented world. I mean that&#8217;s what I want to see it as. And when I hear you talk, what I hear is you talking in broad categories about what a code angel might be in the space of images and image links to the physical world. I mean that is what I hear from you.</p>
<p><strong>Paige Saez:</strong> Yeah. No, I definitely agree with that. It is interesting. In that sense, it is kind of a protection layer. Is that what you are thinking?</p>
<p><strong>Tish Shute: </strong>Yes, I suppose because we can&#8217;t be navigating a lot of complicated opt-ins and opt-outs just to get around our neighborhood safely, in terms of privacy (also see Eben Moglen&#8217;s definition of privacy here&#8230;). We will need a code angel that is sort of keeping up with you in real time!</p>
<p><strong>Paige Saez:</strong> Right, right. I wonder how that would work in regards to images, though. That is a really interesting thing to try and put on an image. I guess why I am having such a hard time being specific about it is I am <strong>just trying to work it out in my head, thinking of a specific use case, like what would be an example of that?</strong></p>
<p><strong>Tish Shute: </strong>Well I suppose the example, and this is a crude one, is when you point your Google Goggles at the book jacket, the code angel, this is very crude, would say &#8220;You are right now drawing images from the Amazon database &#8211; they are collecting such and such data from your search.&#8221;</p>
<p>And then of course the ability to have crowd sourced tagging and corrections..</p>
<p>There was a wonderful book that came out last year on how we can have commercial intelligence &#8211; Dan Goleman&#8217;s new book, &#8220;Ecological Intelligence: How Knowing the Hidden Impacts of What We Buy Can Change Everything&#8221;&#8230;</p>
<p>How various stakeholders, including their customers, will drive corporations to do the morally right thing, because they will lose the commercial support of customers who won&#8217;t support them unless they are greener, fairer, do the things we would like them to do, whatever that happens to be &#8211; physical hyperlinking and tagging, I guess, would be a big part of this.</p>
<p><strong>Paige Saez:</strong> Sort of a transparency issue. And that almost becomes a page rank algorithm in and of itself. I mean now we are really talking about search more than anything, and what tool becomes the dominant search tool. Anselm and I talked a lot about one platform&#8230; I mean eventually we will have a unified platform. It will&#8230; no matter what, for the Internet and for physical objects and visual objects in the real world. It will just be a matter of, literally, who can find the best and most valuable, most relevant information on a thing. Currently we just have it very proprietary.</p>
<p><strong>Tish Shute:</strong> Yes.</p>
<p><strong>Paige Saez: </strong>That definitely won&#8217;t last. It just can&#8217;t, because of the exact problem that you are raising. And we already know too much about resources and information as they pertain to products for us to ever go back to a time where we are not considering other ways of getting information about it anyway. Right?</p>
<p>Like I have the same concerns nowadays when I look at fruit. I look at a piece of fruit in the store. I would never just assume anymore that the person who put the sticker on that fruit is necessarily the ultimate authority. I would always assume at this point I could go online and find out more information about a company. Issues like eco-footprint or how much toxicity, or pesticides or whatnot are now totally accessible already.</p>
<p>So I am thinking, when you look at that piece of fruit and that sticker with Google, say, as you are describing, do we just go immediately to the company&#8217;s website, or is it even more specific? Do we know that the sticker on that piece of fruit is going to tell us specific information about that? Or are we just getting back the nutritional resources, or are we getting a listing of all of the different options out of a page rank algorithm that shows us, &#8220;Well this is the website for the fruit. Here is the nutritional information. Here are the last 15 comments on it.&#8221; It&#8217;s basically just a basic search.</p>
<p>Have you heard of Good Search?</p>
<p><strong>Tish Shute:</strong> You mean http://en.wikipedia.org/wiki/GoodSearch?</p>
<p><strong>Paige Saez:</strong> Right.</p>
<p><strong>Tish Shute: </strong>A code angel interface would have to give you options, wouldn&#8217;t it, on the possible views available?</p>
<p><strong>Paige Saez:</strong> Yes. You are then talking about filtering your view. Then it gets really interesting, of course. I don&#8217;t even know if we have a choice in that. I think we are really kind of hitting a wall with who owns the space and the platform. Is it just a basic search because we are already familiar with search? If you had an option to choose, say, &#8220;I want to look at this apple sticker and I only want to get&#8230; programmatically only looking at my friends&#8217; opinions of this company.&#8221;</p>
<p>Or I have a safety valve on it that only shows me certain information based on what the code angel knows about me, my preferences, my age, things like that. Then that gets really, really interesting, because we are trying to do all that work right now just with social media and the Internet. We are already overwhelmed with too much information. It is already past the point of comprehension. So to think that we would actually drill down even more specifics is very interesting.</p>
<p><strong>Tish Shute:</strong> That was a point Anselm made about the fact that once you are into this mobile, just in time, one view kind of situation, it is quite different than the Internet where you can bring up all these different screens and go to another website.</p>
<p><strong>Paige Saez: </strong>Well yes, mobile is a different level of engagement. Very contextual. Much less information. Much more about timeliness. I don&#8217;t want to look at an apple and get back a Google search. Oh my God no. That&#8217;s the last thing I want. I would love to be able to look at an apple and my phone already knows exactly what I want, information-wise, to get back from that apple. But I don&#8217;t know. It&#8217;s all contextual and personal. So I think the code angel concept you are talking about is really interesting, because you still need to think about who is the person that is adding or creating those filters &#8211; is it you, a filtered friend network, an algorithm? How much work is too much work? Where do we draw the line? How much of this are we willing to let the machine do for us?</p>
<p><strong>Tish Shute: </strong>Right.</p>
<p><strong>Paige Saez: </strong>And then of course once you have those filters in place, you need control over them. You will need to dial them up and dial them down, be able to choose and add new ones, so on and so forth. It becomes very modal at that point. For example, I want to change my view: to walk into a grocery store and instead of finding out information, I&#8217;d want to see where the hidden Easter egg puzzles were that my friends left last week because we&#8217;re playing a game.</p>
<p>I&#8217;m still really attracted to the creative opportunities with the ImageWiki. I&#8217;m really attracted to changing this experience from being a one-to-one relationship (from Corporation to Consumer) to an open-ended relationship (from Person to Person). If I look at a book jacket, sure I can find out where to buy the book, but that&#8217;s boring. Who cares? I&#8217;d like to find out a link to a story or an adventure or a movie or something unthought-of before.</p>
<p>How do we build that in? How do we encourage serendipity? Mystery? I think the ImageWiki is the space for building that in, actually. That would be the one place, right? That&#8217;s my really big fear, that this relationship just stays one-to-one. Click an image of a consumable object, get back the object&#8217;s retail value. How completely dull. We have to do better than this.</p>
<p>Additionally, what if I want to take a photograph of a book, an apple, or something and I donâ€™t want to pull back data. Instead, I want to pull back music, or I want to pull back a video, or I want to pull back a song, or lyrics, or a story, or another image. Itâ€™s just a hyperlink at the end of the day, you know? Thatâ€™s all weâ€™re really doing. Hyperlinks can pull back so many different things.</p>
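<p><em>The &#8220;it&#8217;s just a hyperlink&#8221; idea above can be sketched in a few lines: an image fingerprint (however the recognizer computes it) resolves to any kind of resource &#8211; a song, a story, a game &#8211; not only a retail page. This is a hypothetical illustration, not ImageWiki&#8217;s actual design; every name and URL in it is invented.</em></p>

```python
# Hypothetical sketch of a "physical hyperlink" registry.
# An image fingerprint maps to arbitrary resources, not just data;
# all names and URLs below are invented examples, not a real API.
from dataclasses import dataclass

@dataclass
class Hyperlink:
    kind: str  # e.g. "song", "video", "story", "retail"
    url: str

# Registry keyed by image fingerprint, however the matcher computes it.
REGISTRY = {
    "poster:band-x": [
        Hyperlink("song", "http://example.com/band-x/latest.mp3"),
        Hyperlink("story", "http://example.com/band-x/tour-diary"),
    ],
}

def resolve(fingerprint, kind=None):
    """Return everything linked to an image, optionally filtered by kind."""
    links = REGISTRY.get(fingerprint, [])
    return [link for link in links if kind is None or link.kind == kind]
```

<p><em>Looking at the poster and asking only for music would then return just the song link, while an unrecognized image resolves to nothing &#8211; the same lookup serves a purchase, a story, or a game depending on what the link points at.</em></p>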
<p><strong>Tish Shute:</strong> And that&#8217;s one of the reasons I&#8217;m into mobile social interaction utility building, because without that, if we don&#8217;t have that way to do that in mobile technology&#8230; that&#8217;s very available on the Internet, as we&#8217;ve seen, with Twitter. These applications are very easy to do on the Internet. They&#8217;re not easy to do natively in a mobile application&#8230;</p>
<p>Hey, I&#8217;m just promoting AR Wave again. I should shut up.</p>
<p><strong>Paige Saez:</strong> Oh, no. I think it&#8217;s a fascinating concept, I really do. I totally agree. As we&#8217;ve talked about it before, it&#8217;s amazing that marketing and advertising are helping push forward AR, and it&#8217;s great. It&#8217;s fantastic.</p>
<p>But it&#8217;s also the worst possible thing that could ever happen because it is such a singular way of looking at an overall ubiquitous computing experience. There are other ways.</p>
<p>The best experience I ever had was trying to explain physical hyperlinks to people. I had to walk them through it. Good interactive isn&#8217;t something you present or show, it&#8217;s something you do. Nothing beats just walking around and showing people with a device or a tool or something else.</p>
<p>I mean, God forbid it always stays in our computers and our phones. I really hope we don&#8217;t have to be stuck living our entire lives with these horrible interfaces. But for the time being, we will. Having an AR app show you a puzzle, or a mystery, or a game, or an adventure is a magnificent experience, totally overwhelming, and people get it right away. There&#8217;s no question; they totally understand.</p>
<p><strong>Tish Shute:</strong> Yes, I agree.</p>
<p><strong>Paige Saez:</strong> You walk them through the experience with a physical hyperlink and then you say, &#8220;Here, I could use this device and I could show you where to buy this thing, or I could use this device and we could start playing a game.&#8221; Then everybody gets it.</p>
<p><strong>Tish Shute:</strong> So then I have a question, because one of the things Anselm said to me when he wanted me to refer back to you is that he feels that the direction for ImageWiki should be perhaps to focus less on the technology and more on just the actual, I suppose, gathering of the images, how they&#8217;re going to be annotated, the metadata, right? But my question to him was, the problem if you do that is that without the platform, there&#8217;s no experience or motivation for people to do that. Right? Is there?</p>
<p><strong>Paige Saez: </strong>Yeah, I agree with you on that one. I&#8217;m curious what his&#8230; I think the reason why he wants to do that is he wants to be able to show people examples via the resources. Like to be able to show someone a library, essentially, which I think makes sense with some people. I definitely think that some audiences would really relate to that. For me, it doesn&#8217;t make sense because I&#8217;m just very experiential. I need to do it and I need to show other people how to do it and I need to grow that way. I think that at the end of the day, those are great ways to go about doing it. It&#8217;s just that it&#8217;s a huge thing to do in either direction.</p>
<p>What Anselm&#8217;s really thinking on, I believe, is more about exemplifying how we read and understand images culturally. Then you&#8217;re really getting into Visual Studies and Critical Theory, which is what I did for my Masters at PNCA. I worked on the ImageWiki while I was in grad school; it was something I was doing for fun. Independently of my studies, the project led to issues of democracy and objects and property, and I ended up right smack in the middle of what I was studying: the nature and cultural analysis of images. Questions like, &#8216;what exactly do we get out of images?&#8217; and how all these different things are happening in an image, and people get tons of totally different things out of an image depending on many factors.</p>
<p>The questions I began to ask myself got very philosophical. Questions like &#8220;Is this apple red? Is this apple red-orange? Is this a small apple? What&#8217;s my understanding of small versus your understanding of small?&#8221;</p>
<p>Because suppose you needed a text backup to the search: how would I be able to search for an apple? Because what if my understanding of apple is red and your understanding of apple is green. And so if I&#8217;m looking for a green apple, am I looking for the same green apple as you? It&#8217;s all semantics, sure. But at the same time, it gets bigger and bigger, and it&#8217;s fascinating.</p>
<p><strong>Tish Shute: </strong>Google Goggles seem to work best on book jackets, basically.</p>
<p><strong>Paige Saez: </strong> But book jackets are actually perfect for this problem, because book jackets are specifically designed art. So at the end of the day, we are still talking about creative works, artistic works, that have been designed as a communication tool. But that is not something that people can own. Creative works that are designed are a communication tool, with varying levels of skill to be sure, but still something anybody can do. What we need to do is use that language. We don&#8217;t need to be trying to reach as far as facial recognition. We need to develop our own logos, our own brand, our own&#8230;I mean not brand. Brand is a bad way of saying it. Another way of saying it would be: develop a visual language that we can use that is as effective and as well utilized as book jackets or movie posters.</p>
<p><strong>Tish Shute:</strong> What are some of the use cases for ImageWiki you would like to develop first?</p>
<p><strong>Paige Saez:</strong> My dream&#8230;I have like four or five use cases that I want to see happen. One of them is I walk down the street and there is a new poster for my favorite band. And I can just go up to the poster and I use my device, whatever it looks like, and I download the latest album. It&#8217;s transactional. I am able to just plug in my headset and walk down the street and the transaction is done. I saw something I wanted. It was beautiful. I was able to get it and I was able to move on in my life. And that is totally possible.</p>
<p>Another one would be I walk down the street and there is a piece of graffiti. And I am able to use my device to find out who the artist was that made it, to give them props, and to point my other friends to the fact that the piece is there and will most likely be there only for a short period of time &#8211; information retrieval and socialization.</p>
<p>Or, use my device to find an Easter egg, to find a narrative puzzle that ends up going on for weeks, and everybody is involved, and we are all playing this game together. Adventure-based, non-linear experiences. I want playfulness, not just purchases.</p>
<p><strong>Tish Shute: </strong> Did you think of piggybacking on the Flickr API for geo-tagged photos as a way to work with those databases or not?</p>
<p><strong>Paige Saez:</strong> Yeah, we definitely thought about that.</p>
<p><strong>Tish Shute: </strong> And why did you decide not to, for any reason or&#8230;?</p>
<p><strong>Paige Saez:</strong> Ultimately, we just&#8230;we were such a small group, we just had to tackle certain things at a certain time.</p>
<p><strong>Tish Shute:</strong> Right. And you were so prescient, you were working slightly before we had the mediating devices, weren&#8217;t you? You were just before the mobile devices really got adequate for this.</p>
<p><strong>Paige Saez:</strong> Yeah. We started on it&#8230;I believe it was January&#8230;No. December 2007. Basically, the iPhone had just launched maybe six months prior or something like that.</p>
<p><strong>Tish Shute:</strong> But not 3G and not 3GS, right?</p>
<p><strong>Paige Saez: </strong>Not 3GS. It was the first generation iPhone. We built the ImageWiki before the App Store existed.</p>
<p>We knew that the App Store was coming out.  And we knew that the App Store was going to be the biggest thing in the whole world. I remember getting into multiple fights with friends about how revolutionary the iPhone and the App Store were going to be and people thinking I was totally crazy; people just thinking I was absolutely nuts for being so excited about it.</p>
<p>It sucks that it is a closed proprietary system, but the App Store has done something for software that nothing has ever done in the whole world. Software is candy now. It&#8217;s candy. It is like when you are waiting at the grocery store checkout line and you are stuck behind somebody, and you have got all these little tchotchkes, candy bars, magazines, nail-clippers and things. That is the equivalent of software now. It&#8217;s become an impulse buy, which is amazing. Nobody would ever have thought&#8230;that is actually revolutionary. That&#8217;s huge.</p>
<p><strong>Tish Shute:</strong> <a href="http://www.cs.columbia.edu/~feiner/" target="_blank">Steven Feiner</a>, one of the founding fathers of augmented reality, said to me during a conversation at the ARNY meetup that one reason augmented reality, despite the hype, is manifesting very differently from how virtual reality burst onto the tech scene is that it is about affordable apps on affordable, readily available hardware.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/03/18/visual-search-augmented-reality-and-physical-hyperlinks-for-playfulness-not-just-purchases-talking-with-paige-saez-about-imagewiki/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>The Physical World Becomes a Software Construct: Talking with Brady Forrest about Where 2.0, 2010</title>
		<link>http://www.ugotrade.com/2010/02/10/the-physical-world-becomes-a-software-construct-talking-with-brady-forrest-about-where-2-0-2010/</link>
		<comments>http://www.ugotrade.com/2010/02/10/the-physical-world-becomes-a-software-construct-talking-with-brady-forrest-about-where-2-0-2010/#comments</comments>
		<pubDate>Wed, 10 Feb 2010 05:37:24 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Artificial Life]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Phones in Africa]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[ARCommons]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[Brady Forrest]]></category>
		<category><![CDATA[crisis management]]></category>
		<category><![CDATA[Crisis Mappers]]></category>
		<category><![CDATA[CrisisCamp]]></category>
		<category><![CDATA[CrisisMapping]]></category>
		<category><![CDATA[Food Genome]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Google Wave Federation Protocol]]></category>
		<category><![CDATA[H.E.AI.D]]></category>
		<category><![CDATA[human energized artificial intelligence]]></category>
		<category><![CDATA[hyperlocal search]]></category>
		<category><![CDATA[hyperlocal view]]></category>
		<category><![CDATA[image links]]></category>
		<category><![CDATA[iPad]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[local search]]></category>
		<category><![CDATA[location based analysis]]></category>
		<category><![CDATA[location based technologies]]></category>
		<category><![CDATA[Mike Liebhold]]></category>
		<category><![CDATA[Mixer Labs]]></category>
		<category><![CDATA[mobile social]]></category>
		<category><![CDATA[mobile social interaction]]></category>
		<category><![CDATA[Nathan Torkington]]></category>
		<category><![CDATA[O'Reilly Media]]></category>
		<category><![CDATA[Open CV]]></category>
		<category><![CDATA[Open Street Map]]></category>
		<category><![CDATA[OpenAR]]></category>
		<category><![CDATA[Ovi]]></category>
		<category><![CDATA[People Finder]]></category>
		<category><![CDATA[physical hyperlinks]]></category>
		<category><![CDATA[proximity-based social networking]]></category>
		<category><![CDATA[real-time social location aware applications]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[Steve the Robot H.E.AI.D]]></category>
		<category><![CDATA[Twitter and geolocation]]></category>
		<category><![CDATA[Uber Geek]]></category>
		<category><![CDATA[Ushahidi]]></category>
		<category><![CDATA[Vernor Vinge]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[Where2.0]]></category>
		<category><![CDATA[WhereCamp]]></category>
		<category><![CDATA[Yelp Monocle]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5137</guid>
		<description><![CDATA[&#8220;The internet eats everything it touches,&#8221; write Brady Forrest and Nathan Torkington, O&#8217;Reilly Media, Inc., in their must-read 2006 companion essay The State of Where 2.0 (PDF). Now in 2010 that statement is more true than ever. Last week, I talked to Brady about what we can look forward to at Where 2.0, 2010, [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://heaid.com/" target="_blank"><img class="alignnone size-medium wp-image-5138" title="Screen shot 2010-02-08 at 11.05.18 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/Screen-shot-2010-02-08-at-11.05.18-PM-300x202.png" alt="Screen shot 2010-02-08 at 11.05.18 PM" width="300" height="202" /></a></p>
<p>&#8220;The internet eats everything it touches,&#8221; write <a href="http://radar.oreilly.com/brady/" target="_blank">Brady Forrest</a> and <a href="http://nathan.torkington.com/" target="_blank">Nathan Torkington</a>, O&#8217;Reilly Media, Inc., in their must-read 2006 companion essay <a style="border-width: 0px; margin: 0px; padding: 0px; color: #a43000; text-decoration: none;" title="Opens link in a new browser window." href="http://assets.en.oreilly.com/1/event/4/state_of_where_20.pdf" target="_blank">The State of Where 2.0</a> (PDF). Now in 2010 that statement is more true than ever.</p>
<p>Last week, I talked to Brady about what we can look forward to at <a href="http://en.oreilly.com/where2010" target="_blank">Where 2.0, 2010</a>, and what he thinks will be the &#8220;internet eating&#8221; trends emerging this year. Brady is uniquely positioned to get a glimpse of things to come. His job for O&#8217;Reilly Media is tracking changes in technology and organizing large scale events, including Where 2.0, which he chairs, and Web 2.0 Expo in San Francisco and NYC, which he co-chairs. Brady also runs <a href="http://ignite.oreilly.com/" target="_blank">Ignite</a>, and previously worked at Microsoft on Live Search. And, when not doing his day job, he participates in such Uber Geek activities as <a id="swtp" title="Steve the Robot H.E.AI.D - A Human Energized Artificial Intelligence Device...with lasers and generative sound." href="http://heaid.com/?page_id=5">Steve the Robot H.E.AI.D &#8211; A Human Energized Artificial Intelligence Device&#8230;with lasers and generative sound</a> (click on pic above or see <a id="qvff" title="video here" href="http://vimeo.com/7153320">video here</a>). Look out for <a id="swtp" title="Steve the Robot H.E.AI.D - A Human Energized Artificial Intelligence Device...with lasers and generative sound." href="http://heaid.com/?page_id=5">Steve the Robot H.E.AI.D</a> at <a id="sfnk" title="Augmented Reality Event, June 2nd and 3rd, Santa Clara, CA" href="http://augmentedrealityevent.com/">Augmented Reality Event, June 2nd and 3rd, Santa Clara, CA</a>, and a presentation from Brady.</p>
<p>As <a href="http://en.wikipedia.org/wiki/Vernor_Vinge" target="_blank">Vernor Vinge</a> pointed out in his intro to <a href="http://www.ismar09.org/" target="_blank">ISMAR 2009</a>, the &#8220;possibilities are both scary and wondrous&#8221; as &#8220;the physical world becomes much more like a software construct.&#8221; Brady Forrest has taken a lead role, since 2004 &#8211; when &#8220;&#8216;local search&#8217; was interesting but not yet real&#8221; &#8211; in shaping this transformation.</p>
<p><a id="j70w" title="Where 2.0" href="http://en.oreilly.com/where2010">Where 2.0</a>, together with <a id="y46x" title="WhereCamp" href="https://wherecamp.pbworks.com/session/login?return_to_page=FrontPage">WhereCamp</a> (this year at Google), constitutes WhereWeek &#8211; a crucible for emerging trends in web mapping platforms and location based technologies. This year augmented reality, proximity-based social networking, local search, and the rapidly maturing field of Crisis Management are in the mix, along with the huge and long established GIS industry, which has moved rapidly into the Where 2.0 space.</p>
<p>But what business models will oxygenate the system is still a key question &#8211; one Brady discusses in the interview below. Certainly, the usefulness of location based analysis, mapping, new interfaces, and bringing this data to every application is clear.</p>
<p>Crisis management is center stage this year: <a href="http://en.oreilly.com/where2010/public/schedule/speaker/2345">Jeffrey Johnson</a> (Open Solutions Group), <a href="http://en.oreilly.com/where2010/public/schedule/speaker/67704">John Crowley</a> (Star-Tides), and <a href="http://en.oreilly.com/where2010/public/schedule/speaker/2118">Schuyler Erle</a> (Entropy Free LLC) will present on <a id="d4lf" title="Haiti: CrisisMapping the Earthquake" href="http://en.oreilly.com/where2010/public/schedule/detail/13201">Haiti: CrisisMapping the Earthquake</a>. And Chris Vein &amp; Tim O&#8217;Reilly will &#8220;discuss how cities and application developers will benefit from open data and what these programs will look like in the future&#8221; in the plenary <a id="pv3i" title="City Data" href="http://en.oreilly.com/where2010/public/schedule/detail/14124">City Data</a>.</p>
<p>Mobile social, proximity-based social networking, which may soon emerge as a challenger to web based social networks, and augmented reality are the sexy rockstars of Where 2.0&#8217;s 2010 showcase of potentially disruptive technologies. Augmented Reality has had a breakthrough year, and this is reflected in its strong showing on the Where 2.0 schedule. But, as Brady notes, AR awaits the killer app that will drive it to the next level. Of course, we hope to unveil that at <a href="http://augmentedrealityevent.com/" target="_blank">are2010</a>!</p>
<p>At Where 2.0, I am presenting on <a id="mknx" title="The Next Wave of AR: Exploring Social Augmented Experiences" href="http://en.oreilly.com/where2010/public/schedule/detail/11046">The Next Wave of AR: Exploring Social Augmented Experiences</a> panel. We will look at how social augmented experiences will be key to the next wave of mobile augmented reality. <a href="http://en.oreilly.com/where2010/public/schedule/speaker/2119" target="_blank">Mike Liebhold</a>, in a complementary presentation, looks at <a id="e0_a" title="Truly Open AR." href="http://en.oreilly.com/where2010/public/schedule/detail/11096">Truly Open AR.</a> If you have been reading Ugotrade, you already know I am an advocate for an open, distributed, real time communications framework for AR &#8211; see <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">ARWave</a>. Wave Federation Protocol is an open, fast, compact, federated communications protocol that is a dream come true for AR. And, I would hazard a guess that in 2010, real time communications plus location will become oxygen.</p>
<p>But also key to the next wave of AR, as I discussed with <a href="http://www.hook.org/" target="_blank">Anselm Hook</a> in this post on <a id="it3q" title="Visual Search, Augmented Reality and a Social Commons for the Physical World Platform" href="../../2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/">Visual Search, Augmented Reality and a Social Commons for the Physical World Platform</a>, will be a view constructed through complex &#8220;hybrid tracking and sensor fusion techniques&#8221; (Jarell Pair), cooperating cloud data services, powerful search and computer vision algorithms, and apps that learn by context accumulation.</p>
<p>And as Brady notes in the interview below, a key step forward would be<strong> &#8220;to take advantage of your location, but it doesn&#8217;t need to have been mapped before.&#8221;</strong></p>
<p>For some interesting news on the mapping front (<em>and a discount code for Where 2.0 for Radar readers</em>) see Brady&#8217;s post, <a href="http://radar.oreilly.com/brady/" target="_blank">Flickr Photos in Google Street View</a>. These kinds of human built maps have the potential to develop into &#8220;photo-based positioning systems&#8221; that could create new opportunities for augmented reality. Brady asks:</p>
<p><strong>&#8220;how often the Flickr photos get updated, where else these Flickr photos are going to show up in Google&#8217;s services (Google Goggles perhaps?) and will they show up in new search partner <a href="http://www.bing.com/maps/">Bing</a>? I am doubly curious if Facebook will ever let its photos be used in a similar way.&#8221;</strong></p>
<p><a id="ooyl" title="Lior Ron speaking" href="http://en.oreilly.com/where2010/public/schedule/speaker/4743">Lior Ron</a> of Google Goggles will be at Where 2.0 to tell us all about <a id="oy8v" title="Looking into Google Goggles" href="http://en.oreilly.com/where2010/public/schedule/detail/14123">Looking into Google Goggles</a>. And if you want to learn more about how our view of the physical world will be &#8220;rooted in powerful computing, pervasive connectivity, and the cloud,&#8221; don&#8217;t miss this one. I will be there. And I very much hope there is a Q and A with this session.</p>
<p>During our conversation (see the full conversation below) Brady gave me his short list for breakthroughs that he sees as having big significance in 2010:</p>
<p><strong>&#8220;Well, I think Google Goggles is one of the most exciting things to me. Having access to a visual search&#8230;having someone actually release a visual search engine in that way, to consumers, I think is huge. You know, you see stuff like that in the labs. But I don&#8217;t see it&#8230;it&#8217;s rare to see it out. </strong></p>
<p><strong>I think Android is huge. And the way Google is pushing hardware to show off the platform, the Nexus One being another example, and the fact that it&#8217;s breaking free from the carriers. Because I think when we get away from the carriers we are able to see more innovation; it&#8217;s what&#8217;s going to allow people or developers and companies to really innovate.</strong></p>
<p><strong>And I think Twitter adding geo-location to their APIs and buying <a href="http://www.crunchbase.com/company/mixer-labs" target="_blank">MixerLabs</a> is a huge move. I think Twitter may end up becoming the end-all be-all of location services. They are going to be updated constantly by people; they are going to have a really good grasp, real-time, of what is happening in any one place, at least based on the people. </strong></p>
<p><strong>And then with the addition of the MixerLabs data, they&#8217;re going to have more datasets at the ready, as well as any data that they start to collect from the clients themselves, like from TweetDeck.</strong></p>
<p><strong>So there are global clients that are updating Twitter. I think those are some of the most exciting things. And again, just to come back to Yelp, I think Yelp&#8217;s Monocle is also pretty significant, just because it&#8217;s an AR [augmented reality] app that&#8217;s being pushed into consumers&#8217; hands. </strong></p>
<p><strong>And we&#8217;ll see how useful they find it.&#8221;</strong></p>
<h3><strong>Talking With Brady Forrest</strong></h3>
<p><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/bradyandgenomepost.jpg"><img class="alignnone size-medium wp-image-5141" title="bradyandgenomepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/bradyandgenomepost-300x199.jpg" alt="bradyandgenomepost" width="300" height="199" /></a></strong></strong></p>
<p><em>Pic above from WhereCamp 2009, Brady Forrest, facing camera, checks out Mark Powell&#8217;s <a id="a-:n" title="Food Genome Project. Check it out here" href="http://www.foodgenome.com/home">Food Genome Project</a>. <a id="a-:n" title="Food Genome Project. Check it out here" href="http://www.foodgenome.com/home">Check it out here</a> &#8211; it just woke up!</em></p>
<p><strong>Tish Shute:</strong> So last year when you were <a id="q5wp" title="interviewed for WebMonkey" href="http://www.webmonkey.com/blog/New_Wave_of_Apps_Build__Where__Into_the_Web">interviewed by Michael Calore for WebMonkey</a> before Where 2.0 you said, &#8220;Location is no longer a differentiator; it&#8217;s going to become oxygen.&#8221; And after attending Where Week 2009, I agreed with you and <a id="k.gp" title="wrote about it here" href="../../2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/">wrote about it here</a>. But in what ways did this prediction exceed expectations, and in what ways were you disappointed, now as we get close to Where 2.0, 2010?</p>
<p><strong><strong>Brady Forrest:</strong> Well, it exceeded expectations in that there are now five different mobile OS&#8217;s where you can load on third party applications that access users&#8217; locations, which can then be shared out.</strong></p>
<p><strong>And so, what it is making possible is real-time social location aware applications. And this is something that hasn&#8217;t truly been possible in years past. Looking back three years ago when the iPhone launched, it was the first major phone, especially in the US, to be location aware. And a year later, the App Store launched, giving developers full access to location, which previously had been held onto very, very, incredibly tightly by the carriers.</strong></p>
<p><strong>And now, a year and a half later, you have Android, you have Palm Pre, you have Blackberry working on their SDK to make it better, but it still is there. You have Windows Mobile working on their SDK. And, you know, who knows? Maybe even BREW will get into the mix. </strong></p>
<p><strong>And AT&amp;T is opening up their own interactive store. And so, AT&amp;T and Verizon and all their smart phones may now be looking at BREW. </strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Right. It was very exciting <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">last year at Where 2.0,</a> where we had all these new toolsets announced and the iPhone becoming location aware. What were the best implementations of these new capabilities that became available in 2009, do you think? What, in your view, was the most creative, surprising and disruptive?</p>
<p><strong><strong>Brady Forrest:</strong> Well, I am a huge fan of <a href="http://www.youtube.com/watch?v=jHEcg6FyYUo" target="_blank">Yelp Monocle.</a> I think, you know, that is just a real life example of using Augmented Reality. You are on a street. You have got a bunch of restaurants. You have got a bunch of businesses. And just to be able to swing through and look for people&#8230;I mean and look for ratings and reviews. </strong></p>
<p><strong>They have just started to institute check in, so you will be able to know where your friends are and where your friends have gone. And that type of real-time, incredibly useful data is what will make augmented reality a standard part of the landscape. </strong></p>
<p><strong>I think it is that type of data, more so than, say, reference data, that will make people want to have all the possible sensors. So, what do you need for that? You need a camera. You need a compass for orientation. You need a GPS or, at least, a decent location service. And then you need a screen where you can actually see the data, and then you need an Internet connection. </strong></p>
<p><strong>So it is not like any phone can handle this. And so, you are going to need those killer apps to actually drive people to the type of phones that can support this. I don&#8217;t think AR is quite there yet. </strong></p>
<p><strong><strong>Tish Shute:</strong></strong> I agree, for true AR you need more than compass, camera, and GPS. There are some missing pieces for the real deal experience &#8211; and not just a pair of sexy AR specs. As you mention, hybrid tracking and sensor fusion techniques that can combine computer vision technology with compass and GPS are vital. We need the compass. We need the GPS. We definitely need the camera! But we need this combined with computer vision techniques to get the tracking, mapping and registration for true AR, or even to deliver a stable experience with the post-it/geonote AR that we see emerging with Layar, Wikitude, and others. At the moment we need to put together the tools for a true AR hyper-local experience.</p>
<p>And, of course, another aspect of this is the kind of physical hyper-links that applications like Google Goggles are building.</p>
<p>Do you have a speaker from Google Goggles at Where 2.0? I would be absolutely fascinated to hear more about their road map.</p>
<p><strong>Brady Forrest: I was loading Google Goggles onto the program yesterday.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Oh, you did? Oh, fantastic. And you have <a id="namh" title="Lior Ron speaking" href="http://en.oreilly.com/where2010/public/schedule/speaker/4743">Lior Ron speaking</a>!</p>
<p><strong><strong>Brady Forrest:</strong> It is actually possible it is not up on the website, but I talked to them and got them to agree to do a talk on it.</strong></p>
<p><strong><strong>Tish Shute: </strong></strong>I very much want to hear more about their road map. Google Goggles is a very, very significant step towards the physical internet and the integration of computer vision with sensor fusion techniques necessary for true AR.</p>
<p><strong><strong>Brady Forrest:</strong> I mean that combination with Computer Vision is going to be incredibly valuable. And then the other issue you have there is: is it on the client, or is it on the server? And right now, Google Goggles is definitely on the server, and that is not fast enough for real-time AR. So that is more like a 10 blue links IO interface. </strong></p>
<p><strong><strong>Tish Shute:</strong></strong> And also, they haven&#8217;t got an open API, have they?</p>
<p><strong><strong>Brady Forrest:</strong> No, not yet.</strong></p>
<p><strong>Tish Shute:</strong> Maybe they will announce that. Can you nudge them? For true AR, we need to move forward in several areas &#8211; of course, there are the mediating device issues, like access to the video buffers in the iPhone, and the development of cool AR eye wear would be nirvana!</p>
<p>But my recent obsession has been working on a real-time communications infrastructure for AR, because that is quite doable now, yet we don&#8217;t really have that real-time infrastructure, i.e. a real-time mobile social utility that is really up to the real time requirements of AR [see more about this <a href="http://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/" target="_blank">here</a> and on the <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">ARWave</a> wiki].</p>
<p>But we certainly don&#8217;t have the integration of computer vision and sensor techniques, and the access to the big image databases we need, let alone the clients we need to put it all together either!</p>
<p><strong><strong>Brady Forrest:</strong> Google has done work to help out the community with their support of <a href="http://opencv.willowgarage.com/wiki/" target="_blank">Open CV</a>. </strong></p>
<p><strong>It is based out of <a href="http://www.willowgarage.com/" target="_blank">Willow Garage</a>, but I believe that Google has done quite a bit of work on it.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Could you talk a bit more about Open CV?</p>
<p><strong><strong>Brady Forrest: </strong><a href="http://oreilly.com/catalog/9780596516130" target="_blank">O&#8217;Reilly has a 500-page book</a> on it. It came out of the DARPA contest where unmanned vehicles are raced. And that has since become, at least in my mind, the primary computer vision library that people work with. </strong></p>
<p><strong>I actually used it&#8230;or, one of the teammates did, on the project we did this summer. We got Open CV working pretty quickly to detect where people were, and then we would play music based on that. </strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/3185351345_67e3514d36_o.jpg"><img class="alignnone size-medium wp-image-5144" title="3185351345_67e3514d36_o" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/3185351345_67e3514d36_o-300x225.jpg" alt="3185351345_67e3514d36_o" width="300" height="225" /></a></strong></p>
<p><a href="http://www.flickr.com/photos/55361487@N00/3185351345/" target="_blank"><em>Uber Geek Meeting from ShellyShelly&#8217;s photostream</em></a><br />
<strong>Tish Shute:</strong> Is that your Burning Man project? Do you have a link for that, and some pictures, video?</p>
<p><strong>Brady Forrest:</strong> <strong>Yeah. <a id="riim" title="Heaid.com" href="http://heaid.com/">Heaid.com</a>. Human Enhanced Artificial Intelligence Dancing.<br />
</strong></p>
<p><strong>Tish Shute:</strong> Thank you! This year the augmented reality story has been fairly basic &#8211; relying on basic sensors: compass, GPS, accelerometers. But it has also been an exciting year because we hadn&#8217;t even had smart phones with camera, GPS, and compass before this.</p>
<p>But now, the big adventure is to hook all these sensor fusion techniques up with computer vision so that we can actually do reverse positioning, for example, from photos of what we are looking at, right?</p>
<p><strong>Brady Forrest:</strong> <strong>Yeah, and start to use it in a more ad-hoc manner so that as you are traveling around, yes, it will take advantage of your location, but it doesn&#8217;t need to have been mapped before.</strong></p>
<p><strong>Tish Shute:</strong> Right &#8211; moving from mapping to context awareness. Could you give a quick explanation of what you did in your Burning Man project and how that relates to this kind of ad-hoc, on-the-fly beginning to know what you are looking at without it having been mapped before? That is fascinating.</p>
<p><strong>Brady Forrest:</strong> <strong>Sure. So we mounted a camera about 30 feet off the ground. And as people would move underneath or dance, they would move from block to block. And we had created a kind of bitmap of the area underneath and set up different sound zones. So as people moved from zone to zone, it would play different music.</strong></p>
<p><strong>And we used Max/MSP to handle the computer vision, using its Open CV library, and to handle determining which of the audio to fire off. And then, also, we had a laser that would play at the same time.</strong></p>
<p><strong>And then we used Ableton Live, which is a very popular DJ software, to actually handle the music. So as someone moved from, say, square A to square B, it would fire off various MIDI signals and Ableton would interpret that. And each person who went in, up to&#8230; well, theoretically, up to 4-8 people. But because of how small the stage was and how the sounds are played, realistically, more like 4-6 people.</strong></p>
<p><strong>Each person had their own set of sounds.</strong></p>
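The zone-to-music mapping Brady describes can be sketched in a few lines of Python. This is purely a hypothetical reconstruction for illustration &#8211; the actual installation used Max/MSP with OpenCV and Ableton Live, and the frame size, grid dimensions, and MIDI note numbers below are all assumptions:

```python
import numpy as np

# Assumed camera resolution and zone grid size (the installation's
# actual values are not given in the interview).
FRAME_W, FRAME_H = 640, 480
GRID_COLS, GRID_ROWS = 4, 4

# Hypothetical mapping: one MIDI note per zone, row-major, starting at C4.
ZONE_NOTES = np.arange(60, 60 + GRID_COLS * GRID_ROWS).reshape(GRID_ROWS, GRID_COLS)

def zone_for(x, y):
    """Return (row, col) of the sound zone containing pixel (x, y)."""
    col = min(int(x * GRID_COLS / FRAME_W), GRID_COLS - 1)
    row = min(int(y * GRID_ROWS / FRAME_H), GRID_ROWS - 1)
    return row, col

def note_for(x, y):
    """MIDI note to fire when a dancer's blob centroid is at pixel (x, y)."""
    row, col = zone_for(x, y)
    return int(ZONE_NOTES[row, col])

# A dancer detected near the top-left corner fires a different note
# than one near the bottom-right.
print(note_for(10, 10))    # zone (0, 0) -> 60
print(note_for(630, 470))  # zone (3, 3) -> 75
```

In the real system the centroid would come from blob detection on the overhead camera feed, and the note would be sent as a MIDI message for Ableton Live to interpret.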
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/3921063406_db4fbee6af_b.jpg"><img class="alignnone size-medium wp-image-5145" title="3921063406_db4fbee6af_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/3921063406_db4fbee6af_b-300x168.jpg" alt="3921063406_db4fbee6af_b" width="300" height="168" /></a></p>
<p><em>Pic from <a href="http://www.flickr.com/photos/extramatic/"><strong>extramatic</strong></a>&#8217;s Flickr </em><a id="sgdt" title="stream here" href="http://www.flickr.com/photos/extramatic/3921063406/sizes/l/"><em>stream here</em></a></p>
<p><strong>Tish Shute: </strong> Wow! Awesome.</p>
<p><strong>Brady Forrest:</strong> <strong>We would be able to detect different people and assign them a sound, or a set of sounds &#8211; so, like, bass, drums, vocals. And then we would have clips that played well together that were 3-5 seconds in length.</strong></p>
<p><strong>Tish Shute:</strong> At what distance could you detect people?</p>
<p><strong>Brady Forrest: </strong> <strong>We had a 22-foot area underneath the camera. That was mostly based on what the lens could capture.</strong></p>
<p><strong>Tish Shute:</strong> OMG I love this! This is really the next step for augmented realities &#8211; not just attaching reference data to the world but exploring new shared &#8220;consensual realities&#8221; (see Anselm Hook&#8217;s interview part 2, upcoming).</p>
<p>I am very interested in something you talk about a lot in your &#8220;State of Where 2.0&#8221; essay: lifestyle coming first for a potentially disruptive technology, not commercial considerations. I still have to post the second half of my interview with Anselm Hook, but Anselm has some brilliant ideas in this area. He is working on a project called <a href="http://makerlab.org/news/21" target="_blank">Angel</a>, where part of the vision is for people to actually find what they need without explicitly having to ask for it.</p>
<p>And this brings me to something that is very noticeable about Where 2.0 this year, and very exciting: location-aware technology for crisis management has basically matured, hasn&#8217;t it? We are beginning to see really useful stuff in this area now.</p>
<p>What is different this year that has brought crisis management and location-aware technology together &#8211; a world in crisis?</p>
<p><strong>Brady Forrest: </strong> <strong>Well, I think the primary thing that has brought all these technologies together is Haiti. Without Haiti&#8230; A lot of times, future crises benefit from the current one, because people put in a lot of work. And so, there is new infrastructure being laid with things such as <a href="http://www.ushahidi.com/" target="_blank">Ushahidi</a>, which is an open source platform for tracking&#8230; well, originally for tracking election violence in Kenya, but now is being used to track people and their locations and food requests in Haiti.</strong></p>
<p><strong>Also, Haiti did not have solid, accessible, good maps at the time of the earthquake. And there have been two volunteer projects that have sprung up to help with that. One being headed by the <a href="http://www.harrywood.co.uk/blog/2010/01/21/haiti-earthquake-on-openstreetmap/" target="_blank">OpenStreetMap</a> community and many volunteers. And then the other, Google Map Maker. And in both cases the activity around Haiti on these programs went up exponentially&#8230; or, I don&#8217;t know about exponentially, but a lot. In the case of Map Maker, it was up 100 times, and Haiti was the most worked-on country for that week. And one of the most downloaded for that week.</strong></p>
<p><strong>Tish Shute:</strong> Yes, the work being done in <a href="http://crisiscommons.org/" target="_blank">CrisisCamps</a> around the country is very encouraging.</p>
<p><strong>Brady Forrest: And then also, you know, not just Ushahidi or OpenStreetMap, but also the <a href="http://haiticrisis.appspot.com/" target="_blank">People Finder</a>, which had an open API so that different organizations could share their data &#8211; thus learning from Katrina. All these different pieces of technology will be used in the future and hopefully be able to save more lives. I didn&#8217;t see&#8230; there are iPhone apps that were released. But I&#8217;m not aware of any Android apps. I&#8217;m not aware of any AR apps.</strong></p>
<p><strong>Tish Shute:</strong> We don&#8217;t have smartphones distributed widely enough for them to be appropriate in a lot of areas where crisis strikes, do we?</p>
<p><strong>Brady Forrest:</strong> <strong>Yeah, and there was criticism that they shouldn&#8217;t have been on iPhone. You know, that iPhones were a waste of time. Because they aren&#8217;t&#8230; a lot of on-the-ground agencies aren&#8217;t going to have iPhones. However, a lot of people who are going from the States will, and if the apps are there, then people will start to have them.</strong></p>
<p><strong>But relatively speaking, an iPhone is not that expensive.</strong></p>
<p><strong>Tish Shute:</strong> One thing I noticed &#8211; and I actually discussed this in the second half of the interview I did with Anselm, which I am getting ready to post &#8211; is that one aspect of the crisis filter was having people work as curators looking at messages coming out of Haiti. And while integrating the streams that would be useful is still probably a challenge, many curators will be on iPhones because they are based in the US.</p>
<p>We need to work across all platforms probably.<br />
<strong><br />
Brady Forrest:</strong> <strong>Yes.Â  Patrick Meier of Ushahidi, who runs <a href="http://www.crisismappers.net/forum/topics/task-force-haiti-earthquake" target="_blank">Crisis Mappers</a>, he ran a 24/7 emergency room.  It was out of the Fletcher School in Boston.</strong></p>
<p><strong>They had volunteers all over the States and Canada. They had volunteers in Vancouver who were translating Creole messages in under ten minutes.</strong></p>
<p><strong>Tish Shute:</strong> Yes, and another interesting point in terms of the reconstruction and rebuilding of Haiti is the whole idea of leapfrogging: as we&#8217;ve seen in other parts of the world, when you miss pieces of basic infrastructure there is always an opportunity to skip a whole stage and go on to the next one &#8211; like how virtual banking took off in Africa because of the absence of brick-and-mortar infrastructure.</p>
<p><strong>Brady Forrest:</strong> <strong>To skip to a topic that&#8217;s been in my head: I&#8217;m just so bummed that the iPad does not have a camera.</strong></p>
<p><strong>Tish Shute:</strong> &#8220;Bummed&#8221; is barely the word I would use. Particularly as we had just been planning our ground-breaking AR/next-generation ebook in the days leading up to the announcement!</p>
<p>I suppose there is the hope they&#8217;re going to put it in the next one. But I suppose the play for conventional content delivery is so big that everything else seems trivial in comparison &#8211; especially, it seems, jump-starting the emerging augmented reality industry!</p>
<p>So we might get thrown a camera and compass in the next round, but will we get access to the video buffers? AR enthusiasts may have to live on table scraps from Apple a bit longer, it seems.</p>
<p>But what blows my mind is why hasn&#8217;t the iTouch got a camera, been AR-enabled? AR gaming would get an enormous boost from that alone. My son loves even the simple-minded AR games available now on the iPhone, and he loves iPhone games &#8211; he has 110 games downloaded!</p>
<p><strong>Brady Forrest:</strong> <strong>Ridiculous. Yeah. I don&#8217;t know what they don&#8217;t like about cameras. And I plan on getting an iPad, but because of the limitations I plan on using it for base content and will probably get the bottom-of-the-line model. I can&#8217;t imagine&#8230; I don&#8217;t know.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>It is very interesting &#8211; who actually puts together the big enabling mediating device for AR is still an open question, isn&#8217;t it? I mean, that&#8217;s the truth; we have mediating devices of sorts, but we don&#8217;t have the magic brew yet, do we?</p>
<p><strong><strong>Brady Forrest:</strong> No. Not yet.</strong></p>
<p><strong><strong>Tish Shute:</strong></strong> Good enough in some ways, and certainly a start, but not quite the real deal. For me, Where 2.0 this year covers the groundwork for true AR: mobile proximity-based social networking, visual search, computer vision, and sensor fusion techniques. And because all these things have a chicken-and-egg relationship, laying the groundwork is basically as important as having the mediating device &#8211; otherwise you can&#8217;t do interesting things when we get the mediating device, right?</p>
<p>Is this the year we get the magic brew for AR, i.e., the business model, the killer app, and the mediating device?</p>
<p><strong><strong>Brady Forrest:</strong> This is not the year.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Then I should ask you: are you in the Goggles camp? That is, do you think AR needs eyewear to go mainstream?</p>
<p><strong><strong>Brady Forrest:</strong> I think this may be where we get&#8230; we start to see what is going to be the killer app that gets people to buy the hardware that will support AR. You see what I mean? And then from there the apps will come out and the hardware will advance in that direction.</strong></p>
<p><strong>I don&#8217;t think AR has made that leap yet. It hasn&#8217;t, to use almost a cliché, crossed the chasm yet, and it hasn&#8217;t proven that it will. Because I don&#8217;t know if&#8230; I think it&#8217;s difficult to tell right now. Is it going to be games? Is it going to be data layers? What is going to drive people to an AR device, especially one fully dedicated to it?</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>I think in terms of AR games taking off, a bit of help from the mediating device &#8211; e.g., access to the iPhone video buffers &#8211; would probably be enough to stoke AR games into being a hot commodity. But in terms of AR data layers going mainstream, we need some of the other players in the location space to put together the magic brew on the business model, don&#8217;t we?</p>
<p><strong><strong>Brady Forrest:</strong> That&#8217;s why I&#8217;m so curious, though&#8230; that&#8217;s why I gave Yelp their own talk. Those guys are gangbusters &#8211; they&#8217;re a consumer company, a very consumer-facing website. They&#8217;ve got amazing data stores. They do a lot of interesting stuff with their data. And I don&#8217;t think people always give them the geek credit they deserve.</strong></p>
<p><strong><strong>Tish Shute: </strong></strong>You began Where 2.0 back in 2004, when, as you point out, &#8220;&#8216;local search&#8217; was interesting but not yet real,&#8221; and you have always stressed something that&#8217;s proven to be absolutely true, which is lifestyle before commerce, right? And that if location-based services were going to be big, it was because they meant something in terms of our lifestyle, not just because they told us where to get another good burger. Right?</p>
<p>I think there&#8217;s been a lot of breakthrough in that area this year in terms of what location-based services and proximity-based social networks are to us now, how they&#8217;re changing our lifestyle. What do you see as the breakthroughs of 2009, and what are you hoping for in 2010?</p>
<p><strong><strong>Brady Forrest:</strong> Well, I think Google Goggles is one of the most exciting things to me. Having access to a visual search&#8230; having someone actually release a visual search engine in that way, to consumers, I think is huge. You know, you see stuff like that in the labs. But it&#8217;s rare to see it out.</strong></p>
<p><strong>I think Android is huge. And the way Google is pushing hardware to show off the platform &#8211; the Nexus One being another example &#8211; and the fact that it&#8217;s breaking free from the carriers. Because I think when we get away from the carriers we are able to see more innovation; it&#8217;s what&#8217;s going to allow developers and companies to really innovate.</strong></p>
<p><strong>And I think Twitter adding geo-location to their APIs and buying MixerLabs is a huge move. I think Twitter may end up becoming the end-all be-all of location services. They are going to be updated constantly by people; they are going to have a really good grasp, real-time, of what is happening in any one place, at least based on the people.</strong></p>
<p><strong>And then with the addition of the MixerLabs data, they&#8217;re going to have more datasets at the ready. As well as any data that they start to collect from the clients themselves, like from TweetDeck.</strong></p>
<p><strong>So there are global clients that are updating Twitter. I think those are some of the most exciting things. And again, just to come back to Yelp, I think Yelp&#8217;s Monocle is also pretty significant, just because it&#8217;s an AR app that&#8217;s being pushed into consumers&#8217; hands.</strong></p>
<p><strong>And we&#8217;ll see how useful they find it.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong><a href="http://en.oreilly.com/where2010/public/schedule/speaker/24907" target="_blank">Gary Gale, Yahoo! Inc.,</a> is going to talk on overcoming the business, social, and technological hurdles so we can reach the long-promised [Laughs] Hyperlocal Nirvana. I think you&#8217;ve outlined some of these obstacles in relation to AR, where the obstacles are in terms of the mediating device and bringing all the pieces together, including computer vision techniques, in order to have an AR view. That&#8217;s the AR side of it. But the layer below that &#8211; the layer where actual location-based apps are beginning to go mainstream now &#8211; are these presenting successful business models for location-based services?</p>
<p>So in short, in your view, what are the big hurdles to Hyperlocal Nirvana before we get to AR, even just for these location-based services?</p>
<p><strong><strong>Brady Forrest:</strong> Well, how do you make money?</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yeah, to put it bluntly. I like <a href="http://battellemedia.com/" target="_blank">John Battelle&#8217;s</a> way of putting it [laughs]: how do we oxygenate the system!</p>
<p><strong><strong>Brady Forrest:</strong> So are location-based services something that you can make money in over the long term? Nokia bought Navteq for $8 billion. And then two years later, they&#8217;re giving it away free as part of Ovi Maps.</strong></p>
<p><strong><strong>Tish Shute: </strong></strong>Right.</p>
<p><strong><strong>Brady Forrest: </strong>I&#8217;m assuming that that&#8217;s actually part of the plan. And that although their hand may have been forced by Google with their release of Turn-By-Turn&#8230; but it&#8217;s still got to be a hard nut to swallow that this huge investment in location ends up becoming a loss leader to sell more phones.</strong></p>
<p><strong>So, can you make money through subscriptions, through selling apps? And I think that is still being proven. The other one is, can you use advertising? And it&#8217;s kind of scary to see that Apple is restricting advertisers&#8217; use of location.</strong></p>
<p><strong>It came out yesterday or two days ago that advertisers cannot use location, or app developers cannot use location for ads. They can only use location to show something interesting or useful to their customers.</strong></p>
<p><strong>And there&#8217;s a lot of speculation that it&#8217;s because Apple wants to control the location-based ads that go on the iPhone.</strong></p>
<p><strong>Tish Shute</strong>: Yes. I heard a strange rumor. Actually it&#8217;s an un-strange rumor &#8211; a likely rumor, in fact &#8211; that Apple and MS are getting together to replace some of the Google aspects of the iPhone, like search and maps?</p>
<p><strong><strong>Brady Forrest:</strong> Yes&#8230; Microsoft employees get 10% off at the Apple store. There&#8217;s a longstanding relationship between those two companies.</strong></p>
<p><strong>And Android is definitely more of a competitive threat than Windows Mobile is. And it&#8217;s well known what the relationship between PCs and Macs is. So I don&#8217;t think&#8230; I don&#8217;t find that to be that surprising of a rumor. I do wonder if it would hurt the iPhone, but it doesn&#8217;t surprise me that they would consider it.</strong></p>
<p><strong><strong>Tish Shute:</strong></strong> I do know, certainly from the AR point of view, that Microsoft has recently hired some of the key researchers, including Georg Klein. And they are looking for more people in the image recognition area, so it seems MS is currently going to be making a bigger push not just with Photosynth, but with image ID.</p>
<p>So it could be a pretty powerful combo between the iPhone, and Microsoft &#8211; they have some of the key computer vision research that would be needed for full AR.</p>
<p><strong><strong>Brady Forrest</strong>: Oh, yeah. Microsoft has amazing research depth. They&#8217;ve got an amazing team.</strong></p>
<p><strong><strong>Tish Shute: </strong></strong>But it is a bit of a mystery to me why Microsoft haven&#8217;t done more with Photosynth. As I noted in my <a id="jyr:" title="previous post" href="../../2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/">previous post</a>, <a href="http://www.slashgear.com/nokia-image-space-adds-augmented-reality-for-s60-3067185/" target="_blank">Nokia&#8217;s ImageSpace</a> is beginning to do what many thought Microsoft would do with Photosynth two years ago. And &#8220;photo-based positioning systems&#8221; &#8211; 3D models of the environment that cover every possible angle, and then software that can work out in reverse, based on a picture, precisely where you are and where you&#8217;re facing &#8211; could be hugely important to AR. But that brings me to another mystery: why haven&#8217;t we seen more from Nokia in this space yet &#8211; the N900 doesn&#8217;t have a compass?</p>
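The working-out-in-reverse idea can be illustrated with a deliberately tiny retrieval sketch: match a query image descriptor against a database of geotagged reference descriptors and return the stored pose of the best match. Every value here (the descriptors, the poses, the nearest-neighbor matching) is an invented assumption for illustration &#8211; real photo-based positioning systems match local image features against a 3D model and solve for the full camera pose:

```python
import numpy as np

# Hypothetical database: one global descriptor per reference photo,
# each with a stored (latitude, longitude, heading in degrees).
ref_descriptors = np.array([
    [0.9, 0.1, 0.0],   # landmark photographed from the north
    [0.1, 0.8, 0.2],   # from the east
    [0.0, 0.2, 0.9],   # from the south
])
ref_poses = [
    (40.7128, -74.0060, 180.0),
    (40.7130, -74.0055, 270.0),
    (40.7126, -74.0060, 0.0),
]

def locate(query_descriptor):
    """Return the stored pose of the nearest reference descriptor."""
    dists = np.linalg.norm(ref_descriptors - query_descriptor, axis=1)
    return ref_poses[int(np.argmin(dists))]

# A query photo whose descriptor is closest to the first reference
# resolves to that reference's position and heading.
lat, lon, heading = locate(np.array([0.85, 0.15, 0.05]))
print(lat, lon, heading)
```

The retrieval step is the easy half; the hard half, which is what makes the N900's missing compass matter, is refining that coarse match into a precise position and facing direction.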
<p><strong><strong>Brady Forrest:</strong> Yeah, I don&#8217;t know why Nokia hasn&#8217;t made more of a space for themselves in these things. They did a lot of early work in these areas. I think they are trying to&#8230; my guess is that they&#8217;re trying to restructure themselves. They made some pretty big changes on the web &#8211; Ovi was made its own division. And they&#8217;ve been doing a lot of location-based acquisitions: Places, Gate5 several years ago, Gossler just in the past six months. And so I think that&#8217;s really been their focus&#8230; and the research team.</strong></p>
<p><strong>And as a large company, since they haven&#8217;t found a business model &#8211; which is what we&#8217;ve been discussing here &#8211; they are hesitant to launch it, or to&#8230; they don&#8217;t really know if this is a business that they need to launch, or if this is an app that they should just have out there for fun.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yeah. And that&#8217;s back to the oxygenation of the system and location. We really still have some work to do with the business models.</p>
<p>Final question! At the core of many of today&#8217;s business models is the idea of hoarding data &#8211; that&#8217;s an underpinning.</p>
<p>But ultimately, for open AR, we want a situation where we can really share data, so that we don&#8217;t have the data all locked inside one particular browser or app. The current crop of AR browsers aren&#8217;t really browsers in the sense that we understand a browser on the web today, because the data&#8217;s locked inside each service &#8211; Wikitude, Layar, Acrossair, etc.</p>
<p>I have become very interested in Federation as a model for solving this, so that we can begin to have an opportunity to build consensual relations around data &#8211; sometimes sharing, sometimes not. Federation is my big dream at the moment. And now we even have something to work with in the Wave Federation Protocol. But how do we get from here to there, where we really have a federated world of data for AR and location-based services? But you think people need to solve the question of business models first?<strong><br />
<strong><br />
Brady Forrest:</strong> I think people need&#8230; I think one potential is ads, so serving up content. And by ads, I also mean coupons, meals, the Foursquare&#8230; what it looks like Foursquare&#8217;s going to do; featured content, which is Layar&#8217;s.</strong></p>
<p><strong>So we need to see, is that the way we&#8217;re going to sell these? The other is to have the best viewer, which in some ways is a race in selling that, but that&#8217;s potentially a race to the bottom, price-wise.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Right. Do you think the Google Wave Federation Protocol has a chance of taking off and changing the game for real-time communications, federation, real-time&#8230;<strong><br />
<strong><br />
Brady Forrest:</strong> Quite possibly with the real-time. I think they need to work on the UI.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Oh dear, we can&#8217;t discuss the Wave UI right at the end of the interview &#8211; of course I believe it would do better in an AR view! I know you have to go now, but I have to say Google Wave not standardizing the client/server interface &#8211; so we could see some new UIs for Wave [we are working with PygoWave for ARWave because of this] &#8211; and the iPad&#8217;s lack of a camera were two huge disappointments in recent months.</p>
<p><strong><strong>Brady Forrest: </strong>Yeah. It [the Wave client] is very difficult to use.</strong></p>
<p><strong>Tish Shute: </strong>But the Wave Federation Protocol is an open, fast, compact protocol that is a dream come true for AR. Open, distributed, real-time communications is a very big enabler for AR. I would hazard a guess that in 2010 real-time communications plus location becomes oxygen.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/02/10/the-physical-world-becomes-a-software-construct-talking-with-brady-forrest-about-where-2-0-2010/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook</title>
		<link>http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/</link>
		<comments>http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/#comments</comments>
		<pubDate>Sun, 17 Jan 2010 17:05:01 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR Commons]]></category>
		<category><![CDATA[AR Consortium]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[ardevcamp]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARNY Meetup]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[ARWave Wiki]]></category>
		<category><![CDATA[augmented reality conference]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality goggles]]></category>
		<category><![CDATA[augmented reality social commons]]></category>
		<category><![CDATA[brightkite]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Davide Carnivale]]></category>
		<category><![CDATA[distributed AR]]></category>
		<category><![CDATA[distributed augmented reality]]></category>
		<category><![CDATA[federated search]]></category>
		<category><![CDATA[FourSquare]]></category>
		<category><![CDATA[Games Alfresco]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[gowalla]]></category>
		<category><![CDATA[graffitigeo]]></category>
		<category><![CDATA[hacking maps]]></category>
		<category><![CDATA[Head Map manifesto]]></category>
		<category><![CDATA[imageDNS]]></category>
		<category><![CDATA[imagemarks]]></category>
		<category><![CDATA[imagewiki]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[Map Kiberia]]></category>
		<category><![CDATA[Mikel Maron]]></category>
		<category><![CDATA[mobile internet]]></category>
		<category><![CDATA[mobile social]]></category>
		<category><![CDATA[mobile social interaction utility]]></category>
		<category><![CDATA[Muku]]></category>
		<category><![CDATA[neo-viridian]]></category>
		<category><![CDATA[Nokia's ImageSpace]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[open distributed AR]]></category>
		<category><![CDATA[OpenGeo]]></category>
		<category><![CDATA[paige saez]]></category>
		<category><![CDATA[photo-based positioning systems]]></category>
		<category><![CDATA[physical world platform]]></category>
		<category><![CDATA[placemarks]]></category>
		<category><![CDATA[Planetwork]]></category>
		<category><![CDATA[Platial]]></category>
		<category><![CDATA[point and find]]></category>
		<category><![CDATA[proximity based social networks]]></category>
		<category><![CDATA[snaptell]]></category>
		<category><![CDATA[social cartography]]></category>
		<category><![CDATA[social commons]]></category>
		<category><![CDATA[social search]]></category>
		<category><![CDATA[SpinnyGlobe]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[trust filters]]></category>
		<category><![CDATA[Viridian]]></category>
		<category><![CDATA[viridiandesign]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[Wave]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[WhereCamp]]></category>
		<category><![CDATA[whurley]]></category>
		<category><![CDATA[yelp]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5050</guid>
		<description><![CDATA[Visual search is heating up, and with it a key stage of turning the physical world into a platform is underway as images become hyperlinks to the world in applications like Google Goggles, Point and Find, and SnapTell &#8211; see this post by Katie Boehret. And while there may be no truly game changing augmented [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselmhook.jpg"><img class="alignnone size-medium wp-image-5051" title="anselmhook" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselmhook-300x225.jpg" alt="anselmhook" width="300" height="225" /></a></p>
<p>Visual search is heating up, and with it a key stage of turning the physical world into a platform is underway as images become hyperlinks to the world in applications like <a href="http://www.google.com/mobile/goggles/#dc=gh0gg" target="_blank">Google Goggles</a>, <a href="http://pointandfind.nokia.com/" target="_blank">Point and Find</a>, and <a href="http://www.snaptell.com/" target="_blank">SnapTell</a> &#8211; <a href="http://solution.allthingsd.com/20100112/in-search-of-images-worth-1000-results/" target="_blank">see this post by Katie Boehret</a>. And while there may be no truly game-changing augmented reality goggles for a while, make no mistake, key aspects of our augmented view &#8211; factors that will have a lot to do with what we will actually see when an augmented vision of the world is commonplace &#8211; are already in the works. And, as Anselm Hook (pic above <a href="http://www.flickr.com/photos/caseorganic/2994952828/" target="_blank">from @caseorganic&#8217;s flickr</a>) notes:</p>
<p><strong>&#8220;There is a real risk of our augmented reality world being owned by interests which are not our own. There is a real question of when you hold up that AR goggle, what are you going to see?&#8221;</strong></p>
<p>Cooperating services, e.g., Google Earth, Maps, Street View, Google Goggles, and a leader in local search like Yelp (<a href="http://www.huffingtonpost.com/ramon-nuez/google-is-getting-ready-f_b_426493.html" target="_blank">see here</a>), would have an enormous ability to filter and control a mobile, social, context-aware view of the physical world, and Google themselves see an ethical quandary.</p>
<p><strong>&#8220;A Google spokesperson says this app has the ability to use facial recognition with Goggles, but hasn&#8217;t launched this feature because it hasn&#8217;t been built into an app that would provide real value for users. The spokesperson also cites &#8216;some important transparency and consumer-choice issues we need to think through&#8217;&#8221; </strong><strong>(quote from Wall Street Journal column</strong><a href="http://solution.allthingsd.com/20100112/in-search-of-images-worth-1000-results/" target="_blank"> by Katie Boehret)</a>.</p>
<p><a href="http://www.hook.org/" target="_blank">Anselm Hook</a> and <a href="http://paigesaez.org/" target="_blank">Paige Saez</a>, with great prescience, have been advocating a social commons for the placemarks and imagemarks of our physical world platform through a number of pioneering projects, including <a href="http://imagewiki.org/" target="_blank">imagewiki</a>. I have recently interviewed both Anselm and Paige (upcoming) in depth. My talk with Anselm was nearly three hours long! So I am publishing the transcript in two parts.</p>
<p>Understanding what it means to have a social commons for our physical world platform, and for augmented reality, is a key question for all of us to think about, and especially important for those of us involved in the emerging augmented reality industry.</p>
<p>Anselm <a href="http://blog.makerlab.org/2009/11/augmentia-redux/">notes</a>:</p>
<p><strong>&#8220;The placemarks and imagemarks in our reality are about to undergo that same politicization and ownership that already affects DNS and content. Creative Commons, Electronic Frontier Foundation and other organizations try to protect our social commons. When an image becomes a kind of hyperlink &#8211; there&#8217;s really a question of what it will resolve to. Will your heads up display of McDonalds show tasty treats at low prices or will it show alternative nearby places where you can get a local, organic, healthy meal quickly? Clearly there&#8217;s about to be a huge ownership battle for the emerging imageDNS.&#8221;</strong></p>
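<p>To make the &#8220;imageDNS&#8221; idea concrete, here is a minimal sketch (mine, not Anselm&#8217;s &#8211; the registry and its entries are invented for illustration) of a commons-style lookup table that resolves a perceptual fingerprint of an image to whatever that image links to. The fingerprint here is a crude but real perceptual-hash technique, the 8&#215;8 average hash:</p>

```python
# Toy "imageDNS": resolve a perceptual fingerprint of an image to a link.
# The average-hash is a real (if crude) perceptual-hash technique; the
# registry and its entries are invented for illustration.

def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale image (8 rows of 8 ints)."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # One bit per pixel: brighter than average -> 1, darker -> 0.
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def resolve(registry, pixels, max_distance=5):
    """Return the entry whose fingerprint is nearest the query image."""
    h = average_hash(pixels)
    best = min(registry, key=lambda k: hamming(k, h))
    return registry[best] if hamming(best, h) <= max_distance else None

# A toy "billboard": a bright band over a dark band.
billboard = [[200] * 8 for _ in range(4)] + [[30] * 8 for _ in range(4)]
registry = {average_hash(billboard): "community-edited notes on this billboard"}

# A slightly noisy photo of the same billboard still resolves.
noisy = [row[:] for row in billboard]
noisy[0][0] = 180
print(resolve(registry, noisy))
```

<p>The code itself is trivial &#8211; the politics Anselm is pointing at live entirely in who maintains the registry and what each fingerprint is allowed to resolve to.</p>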
<p>The mobile internet is moving beyond the internet-in-your-pocket phase of mobility, with mobile, social, proximity-based, context-aware networks like <a href="http://www.foursquare.com/">FourSquare</a>, <a href="http://gowalla.com/" target="_blank">Gowalla</a>, <a href="http://brightkite.com/" target="_blank">Brightkite</a> and <a href="http://www.geograffiti.com/">GraffitiGeo</a> (see <a href="http://smartdatacollective.com/Home/23811">Smart Data Collective</a>) likely, soon, to take precedence over other forms of social network.</p>
<p>Regardless of the timeline for true augmented reality &#8211; 3D images &amp; graphics tightly registered to the physical world &#8211; proximity-based social networking and real time search are already taking us into a hyper-local mode and into the realm of augmented reality, which is <strong>&#8220;inherently about who you are, where you are, what you are doing, and what is around you&#8221;</strong> (<a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> &#8211; see <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">here</a>). The ground is being prepared for augmented reality now.</p>
<p>If you have been reading Ugotrade, you will know I have been actively involved in developing an open, distributed AR platform/mobile social interaction utility for geolocated data based on the Wave Federation Protocol &#8211; AR Wave, a.k.a. Muku, &#8220;crest of a wave&#8221; (see my posts <a href="http://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/" target="_blank">here</a>, <a href="http://www.ugotrade.com/2009/12/04/ar-wave-project-an-introduction-and-faq-by-thomas-wrobel/" target="_blank">here</a> and <a href="http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/" target="_blank">here</a> for more on this project, and the <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">AR Wave Wiki</a>). Federation is, I believe, one vital aspect of developing a social commons for augmented reality and the physical world platform.</p>
<p>Also, a bit of news: I am co-chairing the upcoming <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">Augmented Reality Event (are2010)</a> with <a href="http://gamesalfresco.com/about/" target="_blank">Ori Inbar</a> of <a href="http://gamesalfresco.com/" target="_blank">Games Alfresco</a> and <a href="http://ogmento.com/" target="_blank">Ogmento</a>, and with <a href="http://whurley.com/" target="_blank">whurley</a>. Sean Lowery of <a href="http://www.innotechconference.com/pdx/Details/other.php" target="_blank">Prospera</a> is the event organizer, and <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">are2010</a> has the support of the <a href="http://www.arconsortium.org/" target="_blank">AR Consortium</a>. The <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">are2010</a> web site is live and there is an <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">Open Call For Speakers</a>. You can submit proposals and demos for one of the three tracks &#8211; business, technology, or production &#8211; <a href="http://augmentedrealityevent.com/speakers/call-for-proposals/" target="_blank">on the web site here</a>.</p>
<p><a href="http://augmentedrealityevent.com/" target="_blank"><img class="alignnone size-medium wp-image-5101" title="are2010" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/are20101-300x60.png" alt="are2010" width="300" height="60" /></a></p>
<p><a href="http://www.wired.com/beyond_the_beyond/" target="_blank">Bruce Sterling</a>, &#8220;prophet&#8221; of augmented reality and more, &#8220;will deliver the most anticipated <a href="http://augmentedrealityevent.com/speakers/" target="_blank">Augmented Reality keynote</a> of the year.&#8221;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/bruces-brasspost.jpg"><img class="alignnone size-medium wp-image-5105" title="bruces-brasspost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/bruces-brasspost-300x225.jpg" alt="bruces-brasspost" width="300" height="225" /></a></p>
<p>It didn&#8217;t surprise me when Anselm mentioned that Bruce Sterling was a key influence for his work on the geospatial web and augmented reality. Anselm explained:</p>
<p><strong>&#8220;I&#8217;d seen <a href="http://www.viridiandesign.org/notes/151-175/00155_planetwork_speech.html" target="_blank">a talk by Bruce Sterling</a> at an event called Planetwork [May, 2000]. And that event was, for me, a turning point where I decided to focus full time on exactly what I cared about instead of doing things that were kind of similar to what I cared about.</strong> <strong>So his influence was a pretty significant one for me at that exact moment.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b.png"><img title="dhj5mk2g_490gcp7q6fn_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b-300x80.png" alt="dhj5mk2g_490gcp7q6fn_b" width="300" height="80" /></a></p>
<p>For more see <a id="q2or" title="viridiandesign.org" href="http://www.viridiandesign.org/About.htm">viridiandesign.org</a> &#8211; it seems it is time for a &#8220;Neo-Viridian&#8221; revival!</p>
<p>This <a href="http://www.wired.com/beyond_the_beyond/2009/05/spime-watch-pachube-feeds/" target="_blank">post by Bruce Sterling on Pachube Feeds</a>, and Thomas Wrobel&#8217;s <a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">prototype design for open distributed augmented reality on IRC</a>, were key inspirations for me when I began thinking about the potential of the Google Wave Federation protocol for augmented reality. I had been exploring <a href="http://www.pachube.com/" target="_blank">Pachube</a> and was deeply interested in <a href="http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">the vision of Usman Haque</a>, but I had a real <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">aha moment</a> when I read this:</p>
<p><strong>&#8220;(((Extra credit for eager ubicomp hackers: combine this [pachube feeds] with Googlewave, then describe it in microsyntax. Hello, 2015!)))&#8221;</strong></p>
<p>I think the AR Wave group will earn the extra credit and more very soon! <a href="http://need2revolt.wordpress.com/about/" target="_blank">Davide Carnovale, need2revolt</a>, and <a href="http://www.lostagain.nl/" target="_blank">Thomas Wrobel</a> have been leading the coding charge, and there will be a very early AR Wave demo soon, perhaps as soon as the <a href="http://www.meetup.com/arny-Augmented-Reality-New-York/" target="_blank">Feb 16th ARNY Meetup</a>.</p>
<p>Open access to the creation of the view that will eventually find its way into AR goggles will depend on more than the power of an open, distributed platform for collaboration like the AR Wave project. Our augmented reality view will be constructed through complex &#8220;hybrid tracking and sensor fusion techniques&#8221; (Jarrell Pair), cooperating cloud data services, powerful search and computer vision algorithms, and apps that learn by context accumulation &#8211; and at the moment these kinds of resources, at least at scale, are for the most part in private hands.</p>
<p>In the interview below, Anselm discusses how trust filters, being able to publicly permission your searches so that other people can respond and can reach out to you, and the democratization of data in general, are even more of a concern with augmented reality and hyper-local search. The task of understanding what it means to have a social commons for the outernet remains an open, and pressing, question.</p>
<p>Anselm explains (see full interview below):</p>
<p><strong>&#8220;As we move towards a physical internet where there&#8217;s no clicking and there&#8217;s no interface and the computer&#8217;s just telling you what it thinks you&#8217;re looking at &#8211; translating, you know, an image of a billboard to the name of the rock star who&#8217;s on that billboard, or translating the list of ingredients on a can of soup to the source outlets where it thinks those ingredients came from &#8211; when you have that kind of automated mediation, the question of trust definitely arises.</strong></p>
<p><strong>And we haven&#8217;t seen the Clay Shirkys or the Larry Lessigs of the world start to talk about this yet. Although I suspect that in the next four or five years the zero-click interface will become the primary interface, that we&#8217;ll come to assume that what we see, with the extra enhanced data projected onto our view, is the truth. Yet, at the same time, there is just no structure or mechanism even being considered for a democratic ownership of it.&#8221;</strong></p>
<h3>Augmented Reality will emerge through sensor fusion techniques &amp; cooperating cloud services</h3>
<p>In 2010, sensor fusion techniques &#8211; computer vision technology in conjunction with GPS and compass data &#8211; will create the data linking that can enable the kind of augmented reality that has been the stuff of imagination for nearly four decades (see <a href="http://laboratory4.com/2010/01/the-reality-of-augmented-reality/" target="_blank">Jarrell Pair&#8217;s post</a>).</p>
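<p>As one small, hedged illustration of what &#8220;sensor fusion&#8221; means in practice (my sketch, not from Jarrell Pair&#8217;s post): a complementary filter can fuse a gyroscope, which is smooth but drifts, with a compass, which is noisy but absolute, into one heading estimate. Real AR trackers use much more (Kalman filters, vision), and this toy ignores 0/360-degree wrap-around:</p>

```python
# A minimal complementary filter for heading: integrate the gyro for
# smoothness and pull slowly toward the compass to cancel gyro drift.
# Toy example only: angle wrap-around at 0/360 degrees is ignored.

def fuse_heading(gyro_rates, compass_readings, dt=0.1, alpha=0.98):
    """Heading estimates (degrees) from gyro rates (deg/s) and compass samples."""
    heading = compass_readings[0]  # trust the compass for the initial fix
    estimates = []
    for rate, compass in zip(gyro_rates, compass_readings):
        predicted = heading + rate * dt                      # smooth, but drifts
        heading = alpha * predicted + (1 - alpha) * compass  # slow correction
        estimates.append(heading)
    return estimates

# Stationary user facing 90 degrees; the gyro has a +1 deg/s bias and the
# compass alternates +/-5 degrees of noise around the truth.
gyro = [1.0] * 50
compass = [90 + (5 if i % 2 else -5) for i in range(50)]
print(round(fuse_heading(gyro, compass)[-1], 1))
```

<p>The design choice is the blend factor <code>alpha</code>: closer to 1 trusts the gyro (smoother but slower to correct drift), closer to 0 trusts the compass (jittery but unbiased).</p>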
<p>Putting stuff in the world in 3D is of course key to the original vision of augmented reality, and one of its biggest challenges. Augmented reality is going to be implicated in a real time mapping of the world at an unprecedented scale and granularity. We have barely an inkling of the implications of this now.</p>
<p>Anselm and Paige have been working in the heart of the social cartography movement for nearly a decade. The vision and experience of this community is vital to understanding how augmented reality and the world as a physical platform can evolve into something that benefits people and allows them &#8220;to have a better understanding of the opportunities around them.&#8221;</p>
<p>We have been hacking maps for millennia &#8211; &#8220;from conceptual story mapping, to colloquial mapping in European development and the cartographic renaissance created by the global voyages and rediscovery of Ptolemy&#8217;s maps&#8221; (<a href="http://highearthorbit.com/" target="_blank">Andrew Turner</a>). And, recently, initiatives on a public-provided GIS, like <a href="http://opengeo.org/" target="_blank">OpenGeo</a>, have led the way toward more open, interoperable geospatial data.</p>
<p>Mapping takes on a new and crucial role in augmented reality. <a href="http://www.slashgear.com/nokia-image-space-adds-augmented-reality-for-s60-3067185/" target="_blank">Nokia&#8217;s ImageSpace</a> is beginning to do what many thought Microsoft would do with Photosynth two years ago.</p>
<p>And if we see this kind of project developed into a &#8220;photo-based positioning system&#8221; &#8211; &#8220;3d models of the environment to cover every possible angle, and then software that can work out in reverse, based on a picture, precisely where you are and where you&#8217;re facing&#8221; (Thomas Wrobel) &#8211; we would see augmented reality leap forward overnight.</p>
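<p>As a hint of the geometry behind &#8220;working out in reverse where you are&#8221;, here is a hedged sketch (mine, not Wrobel&#8217;s) of the simplest possible case: classical resection on a 2D map. A real photo-based positioning system would match image features against 3D models; this toy assumes you have already recognized two landmarks in the picture and know the absolute bearing to each:</p>

```python
import math

def resect(lm_a, bearing_a, lm_b, bearing_b):
    """Estimate the observer's 2D position from absolute bearings (radians,
    clockwise from north) to two landmarks with known (x=east, y=north)
    coordinates. Each bearing constrains the observer to a line through
    its landmark; the position is the intersection of the two lines."""
    da = (math.sin(bearing_a), math.cos(bearing_a))  # unit vector observer->A
    db = (math.sin(bearing_b), math.cos(bearing_b))  # unit vector observer->B
    # Solve lm_a - s*da = lm_b - t*db for s (the distance to landmark A).
    rx, ry = lm_a[0] - lm_b[0], lm_a[1] - lm_b[1]
    det = -da[0] * db[1] + da[1] * db[0]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; position is ambiguous")
    s = (-rx * db[1] + db[0] * ry) / det
    return (lm_a[0] - s * da[0], lm_a[1] - s * da[1])

# Standing at the origin: one landmark due north, one due east.
print(resect((0, 10), 0.0, (10, 0), math.pi / 2))  # approximately (0, 0)
```

<p>Two landmarks pin down position; a third would let you cross-check the fix, which is exactly why dense, community-built landmark databases matter so much here.</p>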
<p>It is time to take very seriously the vast opportunities and potential pitfalls of an augmented world.</p>
<p><strong>&#8220;When you are mediating the translation layer between the image and the data, then there is an opportunity for you to control it, and that opportunity is hard to resist. It is hard to choose not to own that opportunity. It is an advertising opportunity. It is a revenue opportunity. It is a chance to send a message and a tone.</strong></p>
<p><strong>I know that Google and companies like that are keenly aware of the kinds of roles they don&#8217;t want to hold, but it is sometimes seductive to think about them. And I am afraid that we, as a community, need to assert an ownership, kind of a commons, over how computers will translate what they see to information that we perceive.&#8221;</strong></p>
<p>There are some initiatives emerging. <a href="http://www.tonchidot.com/" target="_blank">Tonchidot</a> (who <a href="http://www.techcrunch.com/2009/12/08/tonchidot-sekai-camera-funding/" target="_blank">closed on $4 million of VC for augmented reality</a> last December) has helped create the <a href="http://translate.google.com/translate?client=tmpg&amp;hl=en&amp;u=http%3A%2F%2Fwww.arcommons.org%2F&amp;langpair=ja%7Cen" target="_blank">AR Commons</a> in Japan. As <a href="http://www.tonchidot.com/corporate-profile.html" target="_blank">CFO of Tonchidot</a> <a href="http://www.linkedin.com/ppl/webprofile?action=vmi&amp;id=499984&amp;pvs=pp&amp;authToken=r8TF&amp;authType=name&amp;trk=ppro_viewmore&amp;lnk=vw_pprofile" target="_blank">Ken Inoue</a> explained in <a href="http://www.ugotrade.com/2009/09/17/tonchidot-taking-augmented-reality-beyond-lab-science-with-fearless-creativity-and-business-savvy/" target="_blank">an interview with me in September 2009</a>:</p>
<p>&#8220;<strong>We feel that public data, such as landmarks, government facilities, and public transport should be shared. We see an AR world where people can readily and easily access information by just seeing &#8211; quick, easy, and efficient. And because of this ease and intuitiveness, children, the elderly and handicapped will surely benefit. AR could help create a safer society. Warnings, alerts, and safety information could save lives and avoid disasters. These are what we, and <a href="http://translate.google.com/translate?client=tmpg&amp;hl=en&amp;u=http%3A%2F%2Fwww.arcommons.org%2F&amp;langpair=ja%7Cen" target="_blank">AR Commons</a>, would like to tackle in the not so distant future.&#8221;</strong></p>
<p>But the task of building a social commons for the physical world platform has only just begun.</p>
<h3>Interview with Anselm Hook</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselm31.jpg"><img class="alignnone size-medium wp-image-5085" title="anselm3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselm31-300x225.jpg" alt="anselm3" width="300" height="225" /></a></p>
<p><em>photo from <a href="http://www.flickr.com/photos/anselmhook/3832691280/in/set-72157621946362509/" target="_blank">Anselm&#8217;s Flickr stream here</a></em></p>
<p><strong>Tish Shute:</strong> We <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">first met last year at Wherecamp</a>. The start of 2009 was, I think, the &#8220;OMG finally&#8221; moment for augmented reality, and in less than a year AR, at least in proto forms, is breaking into the mainstream! You are one of the founding visionaries/philosophers/hackers of the geo web and you have been thinking about the geo web and AR for a long time &#8211; <a href="http://hook.org/headmap" target="_blank">all the way back to the legendary Headmap Manifesto</a>, and before. Most recently you led the way in the very successful <a href="http://www.ardevcamp.org/wiki/index.php?title=Main_Page" target="_blank">ARDevCamp</a> in Mountain View. Could you start by telling me a little bit about the history of your pioneering work with geolocated data?</p>
<p><strong>Anselm Hook: </strong>I am a long time Geo fanatic. I&#8217;m really interested in social cartography and what some people call public-provided GIS, that&#8217;s some language that people use. Anyway, my personal interest, when I talk to people who are non-technical (and it&#8217;s been a long term interest in the way I phrase it) is that I want to help people see through walls. So, the goal is very simple. I want people to have a better understanding of opportunities around them, the landscape around them. I always get frustrated when people make bad decisions because of a lack of information, especially when it&#8217;s related to their community and related to their environment. But, plainly put, I really just want &#8220;to help people see through walls&#8221;. It&#8217;s a very simple goal.</p>
<p><strong>Tish Shute:</strong> I know you worked on <a href="http://platial.com/" target="_blank">Platial</a>, which is really one of my favorite social mapping applications. It really broke new ground. What was the history of that? How did you get involved with Platial?</p>
<p><strong>Anselm Hook:</strong> That&#8217;s an interesting question. It actually started around 2000, when I saw Bruce Sterling talk. I had been writing video games for many years, and I was quite good at it, and I enjoyed it. But the reasons I was doing it diverged from why the industry was doing it. I was making video games because I like to make shared spaces for my friends to play in and to share experience. I really enjoyed making shared environments. I worked on <a id="jrn-" title="BBS's" href="http://en.wikipedia.org/wiki/Bulletin_board_system">BBS&#8217;s</a> and my friends and I were always making these collaborative shared environments.</p>
<p>Once the video game industry started to take off, I started to do high performance, 3D interactive video games, making compelling shared spaces, and it was a lot of fun. But the frustration for me was that a huge industry grew around it and it became very commercial. Although it paid well, it started to diverge from my values, which were more centered around community environments and shared understanding.</p>
<p><strong>Tish Shute:</strong> Yes, very rapidly the big games kind of devolved from the social aspects and became more and more single player really, didn&#8217;t they?</p>
<p><strong>Anselm Hook:</strong> It was that way, actually, because even though you were often in a many-player world, you weren&#8217;t collaborating; everyone else became just a target. I liked the idea of deep collaboration that recalls the kind of playful space you see in IRC, or in the real world, where people are solving real world problems.</p>
<p>And I grew up in the Rockies, and I always had a lot of access to the outdoors. So I saw shared spaces and collaboration as a way to protect our environment. [To step back] I think people use different metrics for measuring their choices in the world, and many people have a value system centered around minimization of harm: making sure that people are not hurt. But my value system is different. I personally believe that protecting the planet is more important: to maximize biodiversity. I feel like protecting people around me comes from protecting the ecosystems they live in.</p>
<p><strong>Tish Shute:</strong> That&#8217;s interesting, isn&#8217;t it, because the history of Keyhole was really that, wasn&#8217;t it? Keyhole later became Google Earth, but it began out of a project to look at what was going on in the ecosystem over Africa at that time, didn&#8217;t it?</p>
<p><strong>Anselm Hook:</strong> Yes, in fact many people&#8217;s projects stem from an environmental concern. <a id="zxy9" title="Mikel Maron's" href="http://brainoff.com/weblog/">Mikel Maron&#8217;s</a> work for example &#8211; he&#8217;s doing <a id="euvm" title="Map Kibera" href="http://mapkibera.org/">Map Kibera</a>, and he also worked on OpenStreetMap.</p>
<p><strong>Tish Shute:</strong> Map Kibera &#8211; that is the new project?</p>
<p><strong>Anselm Hook:</strong> Oh yes, his project is called <a id="r7ie" title="Map Kibera" href="http://mapkibera.org/">Map Kibera</a>. He&#8217;s mapping a city in Africa.<br />
[For more see <a id="ngn." title="Map Kibera's YouTube Channel" href="http://www.youtube.com/user/mapkibera">Map Kibera&#8217;s YouTube Channel</a> &#8211; <a id="amqx" title="photo below" href="http://www.flickr.com/photos/junipermarie/4098163856/" target="_blank">photo below</a> from <a href="http://www.flickr.com/photos/junipermarie/">ricajimarie</a>]</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_487qfcv76ft_b.jpg"><img class="alignnone size-medium wp-image-5052" title="dhj5mk2g_487qfcv76ft_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_487qfcv76ft_b-300x199.jpg" alt="dhj5mk2g_487qfcv76ft_b" width="300" height="199" /></a></p>
<p><strong>Tish Shute:</strong> Right, great!</p>
<p><strong>Anselm Hook:</strong> When I started to look at GIS and mapping I started to meet people who had a very similar background. What happened to me is I kind of stepped away from games around the year 2000. I&#8217;d seen a talk by Bruce Sterling at an event called <a id="e8dn" title="PlaNetwork" href="http://www.conferencerecording.com/newevents/pla20.htm">PlaNetwork</a>. And that event was, for me, a turning point where I decided to focus full time on exactly what I cared about instead of doing things that were kind of similar to what I cared about. So his influence was a pretty significant one for me at that exact moment.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b.png"><img class="alignnone size-medium wp-image-5053" title="dhj5mk2g_490gcp7q6fn_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b-300x80.png" alt="dhj5mk2g_490gcp7q6fn_b" width="300" height="80" /></a></p>
<p>[For more see <a id="q2or" title="viridiandesign.org" href="http://www.viridiandesign.org/About.htm">viridiandesign.org</a> &#8211; it seems it is time for a &#8220;Neo-Viridian&#8221; revival.]</p>
<p><strong>Tish Shute:</strong> It&#8217;s interesting because now your paths are crossing again with augmented reality. You are on the same wavelength again.</p>
<p><strong>Anselm Hook:</strong> It&#8217;s funny, actually, I&#8217;ve had a couple of brief overlaps in that way. Well, so in 2000 I went to see this talk and I did a small project called &#8212; well, I called it <a id="bx3u" title="SpinnyGlobe" href="http://github.com/anselm/SpinnyGlobe">SpinnyGlobe</a>. What I did is I mapped protests from a number of websites onto a globe to show the level of community opposition to the pending war in Iraq. It was the first time there had been a protest before a war. So it was very interesting to me. [See <a href="http://hook.org/headmap" target="_blank">http://hook.org/headmap</a>]</p>
<p><strong>Tish Shute:</strong> That&#8217;s really fascinating. Do you have any pictures of that you could send me?</p>
<p><a href="http://www.flickr.com/photos/anselmhook/1747152617/sizes/m/in/set-72157602696188420/" target="_blank"><img class="alignnone size-medium wp-image-5054" title="dhj5mk2g_492ffct2df4_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_492ffct2df4_b-300x225.jpg" alt="dhj5mk2g_492ffct2df4_b" width="300" height="225" /></a></p>
<p><em>photo from <a id="j05v" title="anselm's flickrstream" href="http://www.flickr.com/photos/anselmhook/1747152617/sizes/m/in/set-72157602696188420/">anselm&#8217;s flickrstream</a></em></p>
<p><strong>Tish Shute:</strong> Yes, I&#8217;ll definitely look <a id="ua2l" title="SpinnyGlobe" href="http://github.com/anselm/SpinnyGlobe">SpinnyGlobe</a> up. It sounds very interesting. One of the aspects of your work on geo-located data projects like this and <a id="h.gx" title="Platial" href="http://platial.com/">Platial</a> is that you really started to develop this idea of a culture of place, about how people make place. This was the wake up call to me regarding the power of networks combined with geo-data.</p>
<p>We are hoping to extend this idea into augmented reality with an open, distributed platform for AR, so that we can collaboratively map our worlds from the perspective of who we are, where we are, and what we are doing. I know you&#8217;ve done some work recently in augmented reality, and I know you put the code up already.</p>
<p>By the way, I love the way you take your philosophy into the way you make code &#8211; the practice of making some code, trying some things out, making it all public and publishing your findings, you know, your comments on that experience. Perhaps you could recap how you recently picked up on the state of play with augmented reality, what aspects you looked at, and what came out of that experience?</p>
<p><strong>Anselm Hook:</strong> So, it&#8217;s a very simple trajectory. Coming out of the work I had done &#8211; <a id="cs18" title="Platial" href="http://platial.com/">Platial</a>, among other projects &#8211; I started to just look at the hyper-local, and I suddenly realized that even those services weren&#8217;t really speaking to living locally, to how to really see and solve local problems. What was missing was a sense of context.</p>
<p>The map doesn&#8217;t know how you&#8217;re feeling, it doesn&#8217;t know if you&#8217;re in a hurry, it doesn&#8217;t know what you want; it&#8217;s very static. Even the web maps are very static. And I started to recognize augmented reality as a combination of &#8212; well, it&#8217;s probably a collision of many forces, many forces that we&#8217;re all a part of. We&#8217;ve also all started to realize that the real-time web is really important; it&#8217;s part of what AR is about.</p>
<p>We have all started to realize that the context is important. You know, your personal disposition, your needs, if you want to be interrupted or not. That is the kind of thing that the ubiquitous computing crowd has talked about. We started to recognize that there are sensors everywhere, and the ambient sensing communities talked about that. So what is funny for me about augmented reality is I started realizing it is just a collision of many other trends into something bigger.</p>
<p>Everything else we thought was a separate thing is actually just part of this thing. Even things like Google Maps or mapping systems we think are so great are really just kind of almost an aspect of a hyper-local view. You actually don&#8217;t really care what is happening 10 blocks away or 100 blocks away. If you could satisfy those same interests and needs within a single block, one block away, you would probably be really happy. You really just want to satisfy needs and interests, find ways to contribute, or get yourself fed, or whatever it is you want. And AR seemed to be the playground to really explore the human condition.</p>
<p><strong>Tish Shute:</strong> Anyway, I think one of the things that has been very amazing this year is that we now have good mediating devices that, for the first time, give us compasses, GPS, and accelerometers. But one of the missing pieces with AR at the moment is [tracking, mapping, and registration] &#8211; the kind of thing colloquial mappings of the world could be of great help with.</p>
<p>We have seen mapping coming out of Flickr data &#8211; e.g., the University of Washington putting maps together from geo-tagged Flickr photos. Now if we could have that linked up with AR, then we would have the kind of mapping we need to really hook the geo-data onto the world in a way that goes beyond&#8230; you know, what compass and GPS can really deliver is pretty minimal at the moment.</p>
<p><strong>Anselm Hook</strong>: There is a real risk of our augmented reality world being owned by interests which are not our own. There is a real question of when you hold up that AR goggle, what are you going to see? Are you going to see corporate advertising? Are you going to see your friends&#8217; comments or criticisms? It is going to be an Iran or a democracy, right? It is unclear.</p>
<p>Right now there are some disturbing trends I have noticed. I am a big fan of Google Goggles. I think it is a great project. But when you are mediating the translation layer between the image and the data, then there is an opportunity for you to control it, and that opportunity is hard to resist. It is hard to choose not to own that opportunity. It is an advertising opportunity. It is a revenue opportunity. It is a chance to send a message and a tone.</p>
<p>I know that Google and companies like that are keenly aware of the kinds of roles they don&#8217;t want to hold, but it is sometimes seductive to think about them. And I am afraid that we, as a community, need to assert an ownership, kind of a commons, over how computers will translate what they see to information that we perceive.</p>
<p><strong>Tish Shute:</strong> Yes. And this is how we met again recently [over the project to create an open, distributed platform for AR using the Wave Federation Protocol]&#8230;</p>
<p>This is something I feel really deeply: we need the physical internet to be as open as the end-to-end internet has been. Or more so, actually, because on the end-to-end internet the trend has been toward walled gardens. Facebook became an enormous walled garden which, despite our predictions, is where much of the social experience on the web really happens. It&#8217;s very much in walled gardens still, and I really feel that with the physical internet we need to make great efforts for it not to be just a series of small pockets of privately funded walled gardens.</p>
<p>There needs to be some kind of communications infrastructure that keeps it open. That was why I got interested in looking at the Wave Federation Protocol: it is an open, real-time protocol that could possibly be a basis for that. But the point you&#8217;ve just talked to &#8211; the mapping of the world and who has the &#8220;goggles&#8221;, i.e., the image databases that make the world meaningful &#8211; that&#8217;s still a BIG question [i.e. who controls the view?].</p>
<p>When I saw <a id="ewxn" title="ImageWiki" href="http://imagewiki.org/">ImageWiki</a>, [I realized] that is a piece that is vital for augmented reality. We need a huge social effort to be involved in this &#8211; in creating the physical internet, in creating the image hyperlinks that will make it meaningful.</p>
<p><span title="Click to view full content"><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_493fv23rg33_b.png"><img class="alignnone size-medium wp-image-5055" title="dhj5mk2g_493fv23rg33_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_493fv23rg33_b-300x219.png" alt="dhj5mk2g_493fv23rg33_b" width="300" height="219" /></a></span></p>
<p><span id="e18n" title="Click to view full content"><strong>Anselm Hook:</strong> I think that&#8217;s a great point. The search interface, the kind of Internet that we&#8217;re used to, the way we talk to the network now, is fundamentally open end to end. Yes, you can have your oligarchies inside of it, as we see with Facebook, but you can always start your own venture, and you can do a search on something and find that website and join it, or you can put up your own webpage and people can find it. </span></p>
<p><span id="e18n" title="Click to view full content">The translation layer &#8211; the idea of text search, and the power, the serendipity, and the openness of that discovery &#8211; it&#8217;s pretty open right now. We do have some serious boundaries of language, which is one of the reasons I was working at <a id="xg:8" title="Meadan.org" href="http://www.imug.org/events/past2007.htm#meadan">Meedan.org</a> [hybrid distributed, natural language translation] for a couple of years, trying to bridge that issue.</span></p>
<p>But here, as we move towards a physical internet, there&#8217;s no clicking and there&#8217;s no interface; the computer&#8217;s just telling you what it thinks you&#8217;re looking at &#8211; translating, you know, an image of a billboard to the name of the rock star who&#8217;s on that billboard, or translating the list of ingredients on a can of soup to the source outlets where it thinks those ingredients came from. When you have that kind of automated mediation, the question of trust definitely arises.</p>
<p>And we haven&#8217;t seen the Clay Shirkys or the Larry Lessigs of the world start to talk about this yet. Although I suspect that in the next four or five years the zero-click interface will become the primary interface &#8211; that we&#8217;ll come to assume that what we see, with the extra enhanced data projected onto our view, is the truth. Yet, at the same time, there is just no structure or mechanism even being considered for democratic ownership of it.</p>
<p><span id="fv3x" title="Click to view full content">We have with DNS, for example, the idea that you can register a domain name and people can search for it, find it, and go to it. There&#8217;s no such thing as an Image DNS, or an image-to-DNS translation, right now. What does it mean when everything is just &#8220;magic&#8221;, when there&#8217;s no way for you to be a part of the conversation, where you&#8217;re just a consumer of what people tell you &#8211; or of what one company, right now, tells you &#8211; is reality? That&#8217;s a real concern.<br />
<strong><br />
Tish Shute: </strong>This, to me, is the most important question at the moment. I mean, it&#8217;s the big one, and it&#8217;s the place to put energy if you love the Internet [and what it can now become], right? You&#8217;ve got to put a lot of energy into this because this [a democratized view of the physical world as a platform] won&#8217;t just happen. There&#8217;s already a lot of momentum for it to be heavily privatized, partly because some of the computer vision algorithms that make sense of things like geotagged photographs are not open. I mean, for example, the beautiful maps that have been made at the University of Washington [from Flickr geotagged photo sets] &#8211; those aren&#8217;t in the public domain.</span></p>
<p><strong>Anselm Hook:</strong> Right, Tish, and in fact you&#8217;re referring [with the maps from the Flickr photos] to ordinary maps, and the fact that we&#8217;ve already seen that maps lie. We&#8217;ve already seen how much maps reflect a certain truth that becomes the normative truth. Google Maps reflects roads, because it is about roads and cars, right? Only recently have they thought about buses and walking. So the normative view that people assume is reality &#8211; showing off, you know, Starbucks, and roads, and cars &#8211; becomes the default, and those prejudices are just assumed to be, you know, the truth. But they&#8217;re not the truth at all.</p>
<p>I was talking to a friend of mine in Montreal, [Renee Sieber], and she said that Indian portage routes are a bridge across land and water: they don&#8217;t think of a piece of land and a piece of water as being different things, they think of them as one thing &#8211; a route. It&#8217;s already a different kind of language; we can&#8217;t even reflect it.</p>
<p>So not only is there this kind of formal, anthropological lie, in a sense, but there&#8217;s this way that we deceive ourselves because of our own prejudices.</p>
<p><strong>Tish Shute:</strong> Yes, I agree, and that&#8217;s why I think some of the things you had written on ImageWiki point clearly to the need to create a social commons. We need a social commons for the real-time physical internet, and we need it for the image hyperlinks that make sense of it.</p>
<p>And it&#8217;s a complicated thing, in a sense, because we don&#8217;t actually have a good distributed infrastructure for AR yet. Exploring AR Wave, I found that at last we have the suggestion of an open, federated protocol for real-time communication &#8211; the Wave Federation Protocol. [Real-time communication is a very important part of AR.] It isn&#8217;t an actuality yet where lots of people are able to use it and set up their own servers, and there&#8217;s not a standard all the way through [there is not a standard for how data is sent between the client and the server].</p>
<p>But the Wave Federation Protocol does make truly distributed social AR possible. When I saw ImageWiki, I started thinking about bringing ImageWiki together with the social collaborative power of distributed AR. This really would be the basis of creating a social commons for augmented reality and the physical world as a platform &#8211; the <span id="np6x" title="Click to view full content">start of a bottom-up, deeply social collaboration on how we create augmented reality colloquial maps that can inform a hyper-local view of the world.</span></p>
<p><strong>Anselm Hook:</strong> Yes. When Paige Saez, John Wiseman, and myself, and a few other folks &#8211; you know, Benjamin Foote, Marlin Pohlmann, and a couple of other people &#8211; started to play with this, we quickly started to realize: &#8220;Oh, this kind of thing will be at least as popular as IRC. There will be at least as many people doing this as chatting in little virtual spaces. There&#8217;ll be at least as many people decorating the world with augmented reality markup, and maybe using the real world as a kind of barcode for translating what you&#8217;re looking at into an artifact, a digital artifact.&#8221;</p>
<p>And<span id="csy2" title="Click to view full content"> the size of that space was going to be huge, basically. Maybe not quite as commodifiable as Twitter, but certainly very energetic.</span></p>
<p>Many of the projects we did were just looking at these kinds of issues from an artistic, technical, and political point of view. We weren&#8217;t so much posing complete solutions as simply using a praxis to explore the idea with an implementation, as a foundation for this discussion. So I think we opened that can of worms for sure.</p>
<p><strong>Tish Shute:</strong> Did you actually set ImageWiki up to work as a location-based app yet?</p>
<p><strong>Anselm Hook:</strong> It is a location-based app. It collects your latitude, longitude, and the image, and stores them. And then it uses that as a way to translate that image into anything else. It could be a piece of text or a URL.<br />
<strong><br />
Tish Shute:</strong> So there is a smartphone app, but you didn&#8217;t take it as far as an AR app yet?</p>
<p><strong>Anselm Hook:</strong> No. We didn&#8217;t do a heads-up view. There are apps on the iPhone store that do that, but they don&#8217;t do the brute force image recognition that we were using. We used a third-party, off-the-shelf algorithm that we found on Wikipedia, downloaded the source code, and threw it on the server. And John Wiseman in LA wrote the scalable database backend so that we could scale the actual&#8230;<br />
<strong><br />
Tish Shute:</strong> So how did you set the iPhone app up to work?</p>
<p><strong>Anselm Hook</strong>: The iPhone side was very simple. You take a picture of something and it tells you what it is. That is all it did. We would take the location, but the client side, the iPhone side, just rendered what was returned to you&#8230; It said, &#8220;Someone said that this picture of a barking dog is an advertisement for a local band.&#8221;</p>
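<p><em>[The flow Anselm describes &#8211; the client uploads a photo plus its latitude and longitude, and the server matches an image fingerprint against stored annotations &#8211; can be sketched roughly like this. Everything here, including the toy &#8220;difference hash&#8221;, is a hypothetical stand-in for the off-the-shelf matcher the team actually used, not ImageWiki&#8217;s code.]</em></p>

```python
def dhash(pixels):
    """Toy difference hash: one bit per horizontal neighbor comparison."""
    bits = []
    for row in pixels:
        for a, b in zip(row, row[1:]):
            bits.append(1 if a < b else 0)
    return tuple(bits)

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

class ImageWikiStore:
    """Hypothetical server side: store (fingerprint, location, annotation)."""

    def __init__(self):
        self.entries = []  # list of (fingerprint, lat, lon, annotation)

    def tag(self, pixels, lat, lon, annotation):
        self.entries.append((dhash(pixels), lat, lon, annotation))

    def lookup(self, pixels, max_distance=2):
        """Return the annotation of the nearest fingerprint, if close enough."""
        fp = dhash(pixels)
        best = min(self.entries, key=lambda e: hamming(e[0], fp), default=None)
        if best and hamming(best[0], fp) <= max_distance:
            return best[3]
        return None

store = ImageWikiStore()
poster = [[10, 80, 30, 90], [20, 70, 40, 60]]   # stand-in for image pixels
store.tag(poster, 37.77, -122.42, "Advertisement for a local band")

noisy = [[12, 78, 33, 88], [22, 69, 41, 58]]    # same scene, slight noise
print(store.lookup(noisy))  # prints: Advertisement for a local band
```

<p><em>[The point of the sketch is only the shape of the pipeline: fingerprint on upload, nearest-match on query, annotation back to the phone; a production system would need a scalable index, which is exactly the part Anselm says takes a Google-sized team.]</em></p>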
<p><strong>Tish Shute:</strong> Right. So basically it was geo-tagged?</p>
<p><strong>Anselm Hook:</strong> Yes. We are just collecting the geo information. Actually, there were a whole lot of technical challenges. The whole idea of ImageWiki is actually kind of beyond the technical ability of a small team like ours. It really does take a group like Google to do this kind of thing in a scalable way.<br />
<strong><br />
Tish Shute:</strong> Why is that?</p>
<p><strong>Anselm Hook:</strong> There are two sides. There is curating the images &#8211; I think that is the job of groups like us, open source groups who can curate images <span id="vxty" title="Click to view full content">that are owned by the community. And then there is the searching side, the algorithm side, where you are actually matching the fingerprint of one image against the images in your database &#8211; that is much more industrial. We built both sides, but ours is not a scalable solution. Mostly, proving that it could be done was what was important.<br />
</span><br />
<span id="a3ou" title="Click to view full content"><strong>Tish Shute: </strong>In terms of hooking ImageWiki up to the collaborative possibilities of AR Wave, wouldn&#8217;t federation pose some interesting possibilities for scaling the search algorithms and all that?</span></p>
<p><span id="vp27" title="Click to view full content"><strong>Anselm Hook:</strong> Yes. And what is funny also, incidentally, is that we did look for some financial support for it, but we just didn&#8217;t find the investors to scale it. Now, other companies like SnapTell took a shot at it. And they have an app in the iPhone store where you can point at a beer bottle and get back the name of the beer.</span></p>
<p>The classic example everyone uses is a book. Amazon has all the image jackets of all their books. You can point SnapTell at almost any book and get back links to buy that at Amazon, the price of the book, and user comments on the book. So they are treating Amazon as the canonical voice of the book, for better or worse. That is the state of the art so far, up until Google Goggles came out a little while ago, which actually blows it out of the water. But, that is where we are now.</p>
<p><strong>Tish Shute: </strong>Right. But the point you raise about how something like Amazon becomes the canonical voice of what a book is &#8211; this is the whole point, isn&#8217;t it?</p>
<p><strong>Anselm Hook:</strong> Is Amazon truth? It&#8217;s not bad. Jeff Bezos seems like a nice guy, but, you know.</p>
<p><strong>Tish Shute:</strong> And this is the point of having these open infrastructures. And this should be obvious in a way, but it comes back to what made the Internet great: even though, as you note, you get an oligarchy like Facebook, people could always just go off and do something else, right? Because the fundamental infrastructure was basically open and designed to be available to everyone. And many people have championed that and fought for it hard [to maintain this openness], haven&#8217;t they? They have devoted their lives to keeping it that way, even if the oligarchies have done their thing.<br />
<strong><br />
Anselm Hook:</strong> Yes. There are really some things underneath all of this that haven&#8217;t been solved yet.</p>
<p>One is that trust in social networks has not been built yet, so we can&#8217;t do peer-based recommendations very well. We can&#8217;t filter noise by peers. Twitter is kind of moving there, but I don&#8217;t just want to listen to my Twitter friends. I want to listen to my friends of friends. If I am getting truth from somebody, I want to get that truth from people my friends say that they trust.</p>
<p>Then the second problem is that there is a search business. My friend Ed Bice, who owns <a id="lir5" title="Meedan" href="http://beta.meedan.net/">Meedan</a>, always says that a search itself, a search request, is a publishing moment. It is an opportunity to say what you think. In the real world, if you are just hanging out with humans and you look somewhere, other people might follow your gaze and look at what you are looking at. Your gaze itself is a public act.</p>
<p>Gaze is a soft act, but it is one that is visible. With Google, the gaze<span id="zuat" title="Click to view full content"> of four billion people is invisible. We don&#8217;t know what people are looking at; there is no opportunity to participate. Let me give you a real example. Say I have taken an image of something &#8211; the bust of a figure, or a statue. Why can&#8217;t the museum in Cairo look at my request and tell me, oh yeah, that is Tutankhamen, or that is Nefertiti, right? Why can&#8217;t they have a chance to participate in the search and respond to me?</span></p>
<p><span id="zuat" title="Click to view full content"> Right now the only one who responds when I do a search is Google. We need to invert the search pyramid and open up search, so that search is a democratic act, so that you can publicly permission your searches, so that other people can respond and reach out to you &#8211; not just you having to carry on a dialogue. </span></p>
<p><span id="zuat" title="Click to view full content">The common example of this &#8211; and we see this everywhere &#8211; is: I am looking for a slice of pizza. Right now I am hungry, I want some pizza. I have to ask Google, find twelve websites, call twelve phone numbers, talk to each of the twelve stores, and ask them: are they open late, is the food organic, is the food any good, do my friends like it?</span></p>
<p>Whereas what I should be able to do is just say: this is a search moment and I am interested in pizza. If a pizza place meets my criteria &#8211; you know, my friends like it, it&#8217;s organic, it&#8217;s open &#8211; then that pizza place can call me. I have the money; why should I do the search? So the whole business of search, the whole structure of search, is predicated on a revenue model, but it&#8217;s a really short-sighted revenue model &#8211; it&#8217;s not a brokerage.</p>
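<p><em>[The &#8220;inverted search&#8221; Anselm sketches can be mocked up as a broker: the searcher publishes an intent with criteria, and providers that match respond. All names and the API below are hypothetical illustrations of the idea, not any existing service.]</em></p>

```python
class SearchBroker:
    """Hypothetical inverted-search broker: intents are published, not queried."""

    def __init__(self):
        self.providers = []  # callables mapping an intent to an offer or None

    def register(self, provider):
        self.providers.append(provider)

    def publish(self, intent):
        """Broadcast the intent; collect offers from providers that match."""
        offers = [provider(intent) for provider in self.providers]
        return [offer for offer in offers if offer is not None]

def pizza_place(name, organic, open_late):
    """A provider that answers only intents it can actually satisfy."""
    def respond(intent):
        if intent.get("want") != "pizza":
            return None
        if intent.get("organic") and not organic:
            return None
        if intent.get("open_late") and not open_late:
            return None
        return f"{name}: we match your criteria, come on by"
    return respond

broker = SearchBroker()
broker.register(pizza_place("Slice of Life", organic=True, open_late=True))
broker.register(pizza_place("Greasy Joe's", organic=False, open_late=True))

# The searcher states the criteria once; only matching places call back.
offers = broker.publish({"want": "pizza", "organic": True, "open_late": True})
print(offers)  # prints: ["Slice of Life: we match your criteria, come on by"]
```

<p><em>[The inversion is the whole point: the twelve phone calls become one published intent, and the revenue moment moves from selling the searcher&#8217;s attention to brokering the response.]</em></p>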
<p>Search isn&#8217;t search; search is hand-waving. These should be moments for us to have a discourse. So the problem we are seeing in AR with communicating the right information is actually underneath AR, at the level of the whole infrastructure.</p>
<p>Search needs to be inverted, trust filters need to be built. We need to democratically own our data institutions. We don&#8217;t right now. That will become more of a concern, especially with AR.</p>
<p><strong>Tish Shute: </strong>Yes, especially with AR, which is why I got all excited about federation. Do you think federation has the potential, an opportunity, to create [the new infrastructure you describe]?</p>
<p><strong>Anselm Hook:</strong> Absolutely, it&#8217;s absolutely what we must do. It is much harder to do. It is absolutely critical.</p>
<p><span id="lwzk" title="Click to view full content"><strong>Tish Shute:</strong> And why is it much harder to do? Could you explain that?</span></p>
<p><strong>Anselm Hook:</strong> Well, it&#8217;s very easy for a bunch of hackers to build a service that you log into and fetch some data from &#8211; it&#8217;s a single thing. They don&#8217;t have to talk to anybody, they can use their own protocols, they can hack it; it&#8217;s a big black box behind the scenes. There&#8217;s somebody running back and forth in a giant Chinese room delivering manuscripts and scrolls to you. Whatever is behind the black box, you don&#8217;t care, it just works. But when you federate, you need to actually publish and have standards, and then you&#8217;re talking about semantics, and everyone starts getting really excited and waving their hands. It becomes a disaster. It&#8217;s at least an order of magnitude more difficult than DIY, build-it-yourself.</p>
<p><strong>Tish Shute:</strong> So, in terms of what Google Wave has done with their approach to federation, what do you think have been their achievements, and what do you think are their obstacles? What do you think are the failings of Wave? Because it&#8217;s the first big, major-player-backed public approach to something federated in real time, isn&#8217;t it?</p>
<p><strong>Anselm Hook:</strong> Yes. I think the most important non-federated service on the planet today is Twitter. <a id="uhg3" title="Ident.ic.a" href="http://identi.ca/group/identica">Identi.ca</a> isn&#8217;t getting any traction relative to Twitter, [even though] Identi.ca is a federated version of Twitter and is very good. [Identi.ca is now <a id="w05j" title="Status.net" href="http://status.net/">Status.net</a>.] So we see already there that small players aren&#8217;t being competitive. Then look at other services like IRC. IRC is the secret backbone of the Net. All the open source projects, all the teams, all the people that work on open source projects are on IRC. It&#8217;s the only way they get anything done.</p>
<p>With Google Wave, and the protocols underneath Google Wave, we see an attempt to build a similar kind of real-time but distributed protocol. I think it&#8217;s the right direction. I think people should pick up the offering and make their own servers. I think that protocol is really great: the fact that it is compressed and high-performance, <span id="md2h" title="Click to view full content">that it is small, with real-time blobs of data flying around &#8211; all exactly the way it should be done. It is getting close to the kind of rewrite of the Internet that people keep talking about, because, you know, the net protocols are so bad; it is starting to treat intermittent exchanges as more transitory, volatile, and not heavy.</span></p>
<p><strong>&#8230;to be continued. Part 2 coming soon!<br />
</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/feed/</wfw:commentRss>
		<slash:comments>17</slash:comments>
		</item>
	</channel>
</rss>
