<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; Gene Becker</title>
	<atom:link href="http://www.ugotrade.com/tag/gene-becker/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Urban Augmented Realities and Social Augmentations that Matter: Talking with Bruce Sterling, Part 2</title>
		<link>http://www.ugotrade.com/2010/09/17/urban-augmented-realities-and-social-augmentations-that-matter-interview-with-bruce-sterling-part-2/</link>
		<comments>http://www.ugotrade.com/2010/09/17/urban-augmented-realities-and-social-augmentations-that-matter-interview-with-bruce-sterling-part-2/#comments</comments>
		<pubDate>Fri, 17 Sep 2010 21:43:35 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[3D point clouds]]></category>
		<category><![CDATA[an ARG for World Peace]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARWave Android client]]></category>
		<category><![CDATA[ARWave at Software Freedom Day]]></category>
		<category><![CDATA[augmented foraging]]></category>
		<category><![CDATA[augmented reality checkins]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[Bertine van Hovell]]></category>
		<category><![CDATA[Biological Globalisation]]></category>
		<category><![CDATA[Boskoi]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Crisis Filter]]></category>
		<category><![CDATA[cryptoforests]]></category>
		<category><![CDATA[Davide Carnovale]]></category>
		<category><![CDATA[deterritorialization]]></category>
		<category><![CDATA[difference between augmented reality and ubiquitous computing]]></category>
		<category><![CDATA[emergency response]]></category>
		<category><![CDATA[Favela Chic]]></category>
		<category><![CDATA[fightthegooglejugend]]></category>
		<category><![CDATA[Four Square]]></category>
		<category><![CDATA[Gamepocalypse]]></category>
		<category><![CDATA[gardens gone wild]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[gowalla]]></category>
		<category><![CDATA[homophilies]]></category>
		<category><![CDATA[hyperlocal experiences]]></category>
		<category><![CDATA[interview with Bruce Sterling]]></category>
		<category><![CDATA[JCPT the open Android 3D engine]]></category>
		<category><![CDATA[Jesse James Garrett]]></category>
		<category><![CDATA[Jesse Schell]]></category>
		<category><![CDATA[Joshua Kauffman]]></category>
		<category><![CDATA[Ken Eklund]]></category>
		<category><![CDATA[Kooaba]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Lightning Laboratories]]></category>
		<category><![CDATA[location based social networking]]></category>
		<category><![CDATA[Maarten Lens-FitzGerald]]></category>
		<category><![CDATA[machine intelligence]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[Mark Evin]]></category>
		<category><![CDATA[Markus Strickler]]></category>
		<category><![CDATA[NextHope]]></category>
		<category><![CDATA[NextHope AMD]]></category>
		<category><![CDATA[Occipital]]></category>
		<category><![CDATA[open distributed augmented reality]]></category>
		<category><![CDATA[open distributed platform for AR]]></category>
		<category><![CDATA[physical world platform]]></category>
		<category><![CDATA[proximity-based social networking]]></category>
		<category><![CDATA[psychogeography]]></category>
		<category><![CDATA[real-time information brokerages]]></category>
		<category><![CDATA[realtime information brokerages]]></category>
		<category><![CDATA[Shaping Things]]></category>
		<category><![CDATA[ShapingThings]]></category>
		<category><![CDATA[Sixth Sense for Autism]]></category>
		<category><![CDATA[SMSSlingshot]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[Social Augmented Experiences that Matter]]></category>
		<category><![CDATA[social mapping]]></category>
		<category><![CDATA[Software Freedom Day]]></category>
		<category><![CDATA[Swift]]></category>
		<category><![CDATA[territorialization]]></category>
		<category><![CDATA[The Cryptoforests of Utrecht]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[Ubistudio]]></category>
		<category><![CDATA[urban augmented realities]]></category>
		<category><![CDATA[Urban Edibles Amsterdam]]></category>
		<category><![CDATA[urban fallows]]></category>
		<category><![CDATA[urban forests]]></category>
		<category><![CDATA[urban informatic mapping]]></category>
		<category><![CDATA[urban informatics]]></category>
		<category><![CDATA[Ushahidi]]></category>
		<category><![CDATA[vision assisted augmented reality]]></category>
		<category><![CDATA[vision based augmented reality]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[Wave in a Box]]></category>
		<category><![CDATA[WaveinaBox]]></category>
		<category><![CDATA[Westraven Psychogeography]]></category>
		<category><![CDATA[Will Wright at Augmented Reality Event]]></category>
		<category><![CDATA[YDreams]]></category>
		<category><![CDATA[Zorop]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5627</guid>
		<description><![CDATA[Social Augmented Experiences leveraging geoawareness and human and machine intelligence to create real time information brokerages, combined with an augmented reality view, can create new opportunities to reimagine our relationships with each other and our environment. This summer, I have been on a blogging hiatus, which has meant I haven&#8217;t been sharing as frequently [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/augmentedforaging1.jpg"><img class="alignnone size-medium wp-image-5651" title="augmentedforaging" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/augmentedforaging1-200x300.jpg" alt="augmentedforaging" width="200" height="300" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/westraven81.JPG"><img class="alignnone size-medium wp-image-5652" title="westraven8" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/westraven81-225x300.jpg" alt="westraven8" width="225" height="300" /></a></p>
<p>Social Augmented Experiences leveraging geoawareness and human and machine intelligence to create real time information brokerages, combined with an augmented reality view, can create new opportunities to reimagine our relationships with each other and our environment.</p>
<p>This summer, I have been on a blogging hiatus, which has meant I haven&#8217;t been sharing as frequently and, unfortunately, the second halves of two conversations I had earlier this year, both of which have much influenced my thinking on social augmented reality, have languished in private mode: part 2 of my talk with Bruce Sterling (see <a title="Permanent Link to Interview with Bruce Sterling, Part I: At the 9am of the Augmented Reality Industry, are2010" rel="bookmark" href="../../2010/06/16/interview-with-bruce-sterling-part-i-at-the-9am-of-the-augmented-reality-industry-are2010/">Interview with Bruce Sterling, Part I: At the 9am of the Augmented Reality Industry, are2010</a>), and part 2 of my conversation with Anselm Hook (see <a title="Permanent Link to Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook" rel="bookmark" href="../../2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/">Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook, Part 1</a>). Time to get caught up on some blogging! The lightly edited transcript of part 2 of <a href="#tag1">my conversation with Bruce Sterling is posted in full below</a>.</p>
<p>Bruce Sterling has been blogging all the key developments in augmented reality (amongst other topics of interest!) on <a href="http://www.wired.com/beyond_the_beyond/" target="_blank">his Wired blog</a>, and <a href="http://www.wired.com/beyond_the_beyond/2010/08/augmented-reality-augmented-foraging/" target="_blank">he brought my attention</a> to <a href="http://libarynth.org/augmented_foraging">Boskoi</a>, the <a title="http://www.ushahidi.com/" rel="nofollow" href="http://www.ushahidi.com/">Ushahidi</a>-based <a href="http://lib.fo.am/augmented_foraging" target="_blank">augmented foraging</a> app for Android phones, pictured in use above &#8211; for more pics see <a href="http://fightthegooglejugend.com/index.html" target="_blank">fightthegooglejugend</a>.</p>
<p><span><br />
</span></p>
<h3><strong><strong>Augmented Reality and Real Time Information Brokerages</strong></strong></h3>
<p><span><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/Screen-shot-2010-08-28-at-12.53.54-AM.png"><img class="alignnone size-medium wp-image-5630" title="Screen shot 2010-08-28 at 12.53.54 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/Screen-shot-2010-08-28-at-12.53.54-AM-300x176.png" alt="Screen shot 2010-08-28 at 12.53.54 AM" width="300" height="176" /></a><br />
</span></p>
<p><em><span>Pictured above is the path the &#8220;nomads&#8221; took through the Westraven cryptoforest with Pieter Bol, co-author of the book <a href="http://www.biologicalglobalisation.com/">Biological Globalisation</a>, and Theun Karelse of <a href="http://urbanedibles.blogspot.com/">Urban Edibles Amsterdam</a>, &#8220;who presented his &#8216;augmented foraging&#8217; app <a href="http://libarynth.org/augmented_foraging">Boskoi</a>.&#8221; For more see <a href="http://fightthegooglejugend.com/cryptoforests.html" target="_blank">The Cryptoforests of Utrecht</a> and <a href="http://fightthegooglejugend.com/westraven.html" target="_blank">Westraven Psychogeography, 6 June 2010</a>. Note &#8211; cryptoforests: 1) urban forests hidden from view; 2) urban fallows that might or might not be considered as forests; 3) gardens gone wild.</span></em></p>
<p><strong> </strong></p>
<p>My interest in the Ushahidi family of ideas was already fired up by a conversation with <a href="http://www.hook.org/" target="_blank">Anselm Hook</a> early this year. We discussed a number of <a href="http://vimeo.com/ushahidi">Ushahidi</a>-related projects &#8211; <a href="http://swift.ushahidi.com/" target="_blank">Swift</a>, Crisis Filter, and Anselm&#8217;s project <a href="http://hook.org/" target="_blank">Angel</a> &#8211; augmented reality, and my own keen interest in an open, real time, distributed platform for augmented reality: <a href="http://www.arwave.org/" target="_blank">ARWave</a>.</p>
<p>The Ushahidi platform and the related project Swift have pioneered the real time brokerage of information, with people acting in curatorial or matchmaking roles coevolving with machine-assisted matching to connect wants to haves. Ushahidi uses multiple gateways, including SMS and Twitter. But the Ushahidi family of ideas is extremely interesting when combined with augmented reality and suggests many new possibilities for social augmented experiences &#8211; as Anselm pointed out, for human-to-human communications, human-to-civilization communications, and human-to-environment communications (e.g., perhaps, how machine intelligence can help bridge the difference in time scale that Kate Hartman explores in her <a href="http://vimeo.com/10352604">Research for Glacier-Human Communication Techniques</a>).</p>
<p>Ushahidi, which means &#8220;testimony&#8221; in Swahili, is a website that was initially developed to map reports of violence in Kenya after the post-election fallout at the beginning of 2008. It is now an open platform with a wide range of applications and a growing developer community. See <a href="http://vimeo.com/7838030">What is the Ushahidi Platform?</a> from <a href="http://vimeo.com/ushahidi">Ushahidi</a> on <a href="http://vimeo.com/">Vimeo</a>.</p>
<p><a href="http://swift.ushahidi.com/" target="_blank">Swift</a>, a project that emerged from the Ushahidi dev community, is a human-sensor/real-time brokerage for dealing with emergencies, enabling the filtering and verification of real-time data from channels such as Twitter, SMS, email, and RSS feeds.</p>
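The brokerage pattern described here &#8211; normalize reports arriving from many channels, filter them for relevance, and promote them to &#8220;verified&#8221; as independent confirmations accumulate &#8211; can be sketched in a few lines. This is an illustrative toy, not Swift&#8217;s actual code; the names and the two-confirmation threshold are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Report:
    channel: str          # e.g. "twitter", "sms", "email", "rss"
    text: str
    confirmations: int = 0
    verified: bool = False

def ingest(raw, channel):
    """Normalize a raw message from any channel into a common Report shape."""
    return Report(channel=channel, text=raw.strip())

def relevant(report, keywords):
    """Curator-style filter: keep reports mentioning a watched keyword."""
    return any(k in report.text.lower() for k in keywords)

def confirm(report):
    """Each independent confirmation nudges the report toward 'verified'."""
    report.confirmations += 1
    if report.confirmations >= 2:   # threshold is an arbitrary assumption
        report.verified = True
    return report

reports = [ingest("Flooding on 5th street", "twitter"),
           ingest("lost dog near the park", "sms")]
watched = [r for r in reports if relevant(r, ["flood", "fire"])]
confirm(watched[0]); confirm(watched[0])
print(watched[0].verified)  # True after two confirmations
```

The point of the sketch is the division of labor: ingestion is mechanical, relevance is where curators (or keyword heuristics) sit, and verification is a social act layered on top.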
<p><a href="http://libarynth.org/augmented_foraging">Boskoi</a> &#8211; <a href="http://lib.fo.am/augmented_foraging" target="_blank">augmented foraging</a> &#8211; is the first app I have seen to begin linking Ushahidi with augmented reality, although I don&#8217;t think a full augmented view has been developed for Boskoi yet.</p>
<h3><strong>&#8220;The whole point of AR is to see things from a different point of view&#8230;&#8221;</strong></h3>
<p><strong><br />
</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/ARWaveCurrentStatus3post.png"><img class="alignnone size-medium wp-image-5705" title="ARWaveCurrentStatus3post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/ARWaveCurrentStatus3post-300x212.png" alt="ARWaveCurrentStatus3post" width="300" height="212" /></a><br />
</strong></p>
<p><strong> </strong></p>
<p><em>Click to enlarge poster from upcoming ARWave demo at Software Freedom Day &#8211; for more see below</em></p>
<p>I am often asked what augmented reality brings to the table with respect to location based social networking, which is on the verge of going mainstream in smart phone apps like <a href="http://foursquare.com/">Four Square</a>. The first part of my answer is usually to explain what is unique to augmented reality.</p>
<p>As Bo Begole notes, the full vision of AR requires machine   perception  technologies to detect  the identity and physical   configuration of  objects relative to each  other to accurately project   information  alongside/overlaid with a physical object (see this post on the PARC Blog by Bo Begole on the <a href="http://bit.ly/9Rsh79">difference between AR and ubiquitous computing</a> &#8211; thank you <a href="http://gamesalfresco.com/2010/09/12/weekly-linkfest-62/" target="_blank">Rouli for bringing my attention to this</a>).</p>
<p>But it is only in recent months that we have begun to see the kind of tools that make this possible become freely available to developers &#8211; see <a href="http://www.ugotrade.com/2010/08/05/vision-based-augmented-reality-ar-in-smart-phones-qualcomms-ar-sdk-interview-with-jay-wright/" target="_blank">my interview with Jay Wright of Qualcomm here</a>. Also see this post on <a href="http://phototour.cs.washington.edu/bundler/" target="_blank">Bundler: Structure from Motion for Unordered Image Collections</a>, an open source system that allows the creation of 3D point clouds from unordered image collections, e.g. internet image collections. We now have many tools available to move mobile augmented reality beyond the recent crop of apps relying on GPS and compass alone for positioning into a new era of vision-assisted AR apps that will increasingly bring the full vision of AR into our daily lives.</p>
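For context on what &#8220;GPS and compass alone&#8221; means in practice: a sensor-only AR browser compares the great-circle bearing from the user to a point of interest with the device&#8217;s compass heading, and slides the overlay across the screen accordingly &#8211; so overlay accuracy can never beat GPS and compass error, which is exactly the gap vision-assisted registration closes. A rough sketch of that geometry (my own illustration, not any particular app&#8217;s implementation):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (degrees) from observer to point of interest."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(poi_bearing, heading, fov_deg, width_px):
    """Horizontal pixel position of a POI overlay, or None if outside the view."""
    offset = (poi_bearing - heading + 180) % 360 - 180  # signed angle to POI
    if abs(offset) > fov_deg / 2:
        return None
    return round(width_px / 2 + offset / fov_deg * width_px)

# POI due east of the observer, device facing east: overlay lands center-screen.
b = bearing_deg(52.0, 4.0, 52.0, 4.01)
print(round(b))                       # 90 (due east)
print(screen_x(b, 90.0, 60.0, 480))   # 240 (center of a 480 px wide view)
```

A few metres of GPS drift shifts that bearing, and hence the overlay, noticeably for nearby objects; vision-based tracking anchors the overlay to what the camera actually sees instead.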
<p>Further, the integration of visual search applications like <a href="http://www.google.com/mobile/goggles/#text">Google Goggles</a> and <a href="http://www.kooaba.com/">Kooaba</a>, which can detect the identity of particular objects, will add another vital tool to machine perception technologies, enabling AR &#8220;checkins&#8221; on potentially anything in the physical world around us, and more fuel for the <a href="http://gamepocalypsenow.blogspot.com/">Gamepocalypse</a> (e.g. it would be easy to turn every trash can in the city into a basketball hoop, as we discussed at the <a href="http://www.meetup.com/ARNY-Augmented-Reality-New-York/" target="_blank">ARNY</a> meetup last month). And soon, the Pandora&#8217;s box of facial recognition (Google Goggles has the capability, though it is not released to the public yet) will open up.</p>
<p>Jesse Schell described the importance of AR in a nutshell <a href="http://augmentedrealityevent.com/2010/08/25/are2010-keynote-by-jesse-schell-augmented-reality-will-define-the-21st-century/" target="_blank">in his keynote for are2010</a>:</p>
<p><strong>&#8220;The whole point of AR is to see things from a different point of view&#8230; How can there be a more powerful art form than one that actually changes what you see?&#8221;</strong></p>
<p>But how AR matures as a social experience will be the key to Jesse&#8217;s suggestion that:</p>
<p><strong>&#8220;Augmented Reality will be one of the things that fundamentally define the 21st century&#8221;</strong></p>
<p>There are many interesting forms of AR that are not reliant on a tight registration between media and physical objects &#8211; several are put forward by Bruce in the conversation below. And it is likely we will see AR eyewear as an occasional, useful accessory to a smart phone long before we have sexy, affordable augmented reality eyewear that we wear throughout the day. <a href="http://www.yankodesign.com/2010/08/31/speech-to-text-glasses/" target="_blank">These speech-to-text glasses</a> would be a very useful and viable accessory to a smart phone right now for the hearing impaired.</p>
<p>For the moment, as Bruce notes, some of the most interesting and useful augmented experiences to date have not been in the cell phone space:</p>
<p><strong>&#8220;There are other aspects of AR besides the cell phone space. There&#8217;s Total Immersion&#8217;s big display screens. There&#8217;s the web-based fiduciary stuff. And there&#8217;s projection mapping. And then there&#8217;s experience design just for people who need their reality augmented for whatever personal or social reason.&#8221;</strong></p>
<p>One of my favorite social AR experiences is this <a href="http://www.youtube.com/watch?v=oLnKSKaY1Yw&amp;feature=player_embedded" target="_blank">SMS Slingshot</a>.</p>
<p>But I have been excited for a long while about the intersection of mobile social augmented reality, real time communications, and ubiquitous computing &#8211; see <a title="Permanent Link to Total Immersion and the &#8220;Transfigured City:&#8221; Shared Augmented Realities, the &#8220;Web Squared Era,&#8221; and Google Wave" rel="bookmark" href="../../2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/">Total Immersion and the &#8220;Transfigured City:&#8221; Shared Augmented Realities, the &#8220;Web Squared Era,&#8221; and Google Wave</a>. And I have described in many places why I think real time, open, distributed communications for AR are so important to developing social augmented experiences &#8211; see <a href="http://www.slideshare.net/TishShute/ar-wave-a-proof-of-concept-federation-game-dynamics-semantic-search-mobile-social-communications" target="_blank">the slides for my talk at Augmented Reality Event here</a>, <a href="../../2010/04/02/ar-wave-at-where-2-0-exploring-social-augmented-experiences/" target="_blank">here</a>, and <a href="http://www.mobilemonday.nl/talks/tish-shute-the-next-wave-of-ar/" target="_blank">here</a> for starters.</p>
<p><strong><br />
</strong></p>
<h3><strong> ARWave at Software Freedom Day 2010, September 18th 2010<br />
</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/Screen-shot-2010-09-17-at-12.12.02-PM.png"><img class="alignnone size-medium wp-image-5683" title="Screen shot 2010-09-17 at 12.12.02 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/Screen-shot-2010-09-17-at-12.12.02-PM-300x38.png" alt="Screen shot 2010-09-17 at 12.12.02 PM" width="300" height="38" /></a></p>
<p>Thomas Wrobel and Bertine van Hovell will demo the first ARWave Android client <a href="http://www.sfd2010.nl/" target="_blank">at Software Freedom Day this weekend</a>!</p>
<p>A number of people (including Bruce) have asked me: what will be the future of ARWave now that Google Wave is no longer a stand-alone application? Yes, the recently announced release of <a href="http://googlewavedev.blogspot.com/2010/09/wave-open-source-next-steps-wave-in-box.html" target="_blank">Wave in a Box</a> (see <a href="http://arstechnica.com/web/news/2010/09/google-sticks-wave-source-in-a-box-sticks-a-bow-on-top.ars" target="_blank">here</a> and <a href="http://www.readwriteweb.com/archives/google_announces_wave_in_a_box.php" target="_blank">here</a>) is very exciting for the ARWave team.</p>
<p>The ARWave Android client is the first open AR client built on an open, real time, distributed platform &#8211; based on a server that anyone can download and set up, currently the &#8220;FedOne&#8221; server, though Wave in a Box will hopefully be even easier to deploy. Wave in a Box seems perfect for ARWave&#8217;s needs &#8211; for more, <a href="https://groups.google.com/group/wave-protocol/browse_thread/thread/70067fc740b4c8d3" target="_blank">see the WiaB Google Group here</a>. For more information on the ARWave client, click to enlarge the poster below, see the <a href="http://arwave.org/pages/Videos.php" target="_blank">ARWave concept video here</a>, and to learn how to get involved see <a href="http://arwave.org/new_index.php" target="_blank">arwave.org</a>. Props to <a href="http://www.lostagain.nl/#" target="_blank">Thomas Wrobel and Bertine van Hovell</a> (posters below from the demo for Software Freedom Day), Mark Evin, <a href="http://twitter.com/need2revolt" target="_blank">Davide Carnovale</a>, and <a href="http://twitter.com/kusako" target="_blank">Markus Strickler</a> for all their hard and brilliant work on ARWave. Also to <a href="http://www.jpct.net/" target="_blank">jPCT, the open Android 3D engine</a> that has saved a lot of work!</p>
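For readers wondering what the data moving through such an open, federated channel might look like: a geo-anchored annotation is essentially an authored, located note that any server in the federation can relay. A minimal sketch follows; the field names and structure are my own assumptions for illustration, not ARWave&#8217;s actual protocol:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ARAnnotation:
    """Hypothetical shape of a geo-anchored AR note shared over a
    federated, Wave-style channel. Field names are assumptions."""
    author: str       # federated id, e.g. user@some-wave-server
    lat: float        # WGS84 latitude of the anchor
    lon: float        # WGS84 longitude of the anchor
    alt_m: float      # height above ground, metres
    text: str         # the annotation itself

note = ARAnnotation(author="alice@example-server", lat=52.09, lon=5.12,
                    alt_m=2.0, text="Edible elderberries here")
payload = json.dumps(asdict(note))          # serialize for the wire
print(json.loads(payload)["text"])          # round-trips intact
```

The federation point is that nothing in the payload is tied to one provider: any client that understands the shape can render the note in its augmented view, whichever server it was posted to.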
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/ARWaveCurrentStatus1post.png"><img class="alignnone size-medium wp-image-5687" title="ARWaveCurrentStatus1post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/ARWaveCurrentStatus1post-212x300.png" alt="ARWaveCurrentStatus1post" width="212" height="300" /></a></p>
<p><em>click to enlarge slide</em></p>
<h3><strong>Social Augmented Experiences that Matter</strong></h3>
<p>My ideas on the future of social augmented experience have been deeply informed by the conversations I had with Bruce Sterling and Anselm Hook this year.</p>
<p>Bruce Sterling notes in the conversation below that location based social apps like Four Square are interesting because they are not <strong>&#8220;urban geography like Google&#8217;s satellite stare from above,&#8221;</strong> but rather <strong>&#8220;groups of citizens are doing portraits of their own region.&#8221;</strong> Augmented reality, with its oft-lauded power to make the invisible visible, is, of course, the ideal tool for taking &#8220;citizen portraits&#8221; to the next level. Cory Doctorow described to me three years ago (<a href="http://www.ugotrade.com/2007/10/31/cory-doctorow-a-reverse-surveillance-society/" target="_blank">see here</a>) an &#8220;inverse surveillance society,&#8221; enabled by an augmented view &#8211; <strong>&#8220;where all the data from the positional and temporal characteristics of all the objects that we own were in aggregate visible and available so that we can mix and match them, remix them, understand them, and have more agency in the world.&#8221;</strong></p>
<p>It is very cool to go back and reread <a href="http://www.ugotrade.com/2007/10/31/cory-doctorow-a-reverse-surveillance-society/" target="_blank">this conversation</a> now that it is becoming possible to build the kinds of apps Cory described, and Bruce Sterling envisioned in <strong><a href="http://mitpress.mit.edu/catalog/item/default.asp?tid=10603&amp;ttype=2" target="_blank">Shaping Things</a></strong> (see page 111).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/shapingthings.jpg"><img class="alignnone size-thumbnail wp-image-5689" title="shapingthings" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/shapingthings-150x150.jpg" alt="shapingthings" width="150" height="150" /></a></p>
<p><em>click to enlarge</em></p>
<p>My conversation with Bruce earlier this summer (see below) took place on the heels of <a href="http://augmentedrealityevent.com/">are2010 &#8211; Augmented Reality Event</a>. <a href="http://augmentedrealityevent.com/2010/06/06/are-2010-keynote-by-bruce-sterling-build-a-big-pie/" target="_blank">See the video of Bruce&#8217;s keynote, &#8220;Build a Big Pie,&#8221; here</a>, and the <a href="http://augmentedrealityevent.com/2010/08/25/are2010-keynote-by-jesse-schell-augmented-reality-will-define-the-21st-century/" target="_blank">final keynote, &#8220;Seeing,&#8221; by Jesse Schell (see video here)</a>, in which Jesse riffed on AR and the man with the X-ray eyes. Both these awesome talks are still fresh in my mind. Bruce noted how we should pay attention to augmentations for people and situations that could really use some augmentation&#8230; and not get too fixated on the coming of AR goggles. He elaborated on this in our conversation (again, full transcript below):</p>
<p><strong>&#8220;Well, it&#8217;s a matter of deciding whose reality it is that you&#8217;re trying to augment. I&#8217;m not trying to be a bleeding heart about it, but obviously there are people in our society right now with reality that could really use some augmentation. They are mostly disadvantaged people. They are vision impaired, or maybe they have autism. They might be senile and just can&#8217;t remember where they put their shoes. These are people who could really use some help, right?&#8221;</strong></p>
<p><strong>&#8220;So, start with people who really need sensory or cognitive help. Before you turn our geeks into Superman, why don&#8217;t you try turning some people who are harmed into more functional individuals? Then you&#8217;ll be able to learn how to do that. Then maybe you can ramp it up to these Nietzschean heights of the superb Man With the X-ray Eyes. Whatever.&#8221;</strong></p>
<p>What will make AR interesting and useful, long before and long after we see the full vision of AR eyewear manifest, is its social aspects. Bruce points out:</p>
<p><strong>&#8220;My argument would be that if you want people to be more sensitive toward certain, say, issues and problems, it&#8217;s better to find the people who are already sensitive to those issues and problems, and give them a bigger stake in your augmentation system.&#8221;</strong></p>
<p><strong>&#8220;Say that I am really worried about public health. Well, if I have a lot of nurses that are using my system, people who are aware of my issues, then I could be walking around and I&#8217;ll see a lot more tags saying, &#8216;This is where he got food poisoning!&#8217; &#8216;In this shooting gallery, many people have caught AIDS!&#8217; Or, you know, &#8216;Tuberculosis has been spotted over here in this building.&#8217;</strong></p>
<p><strong>&#8220;At that point, I could simply share their knowledge and get some social intelligence. As opposed to trying to amp the basements of my little hacker-mind and drag stuff up that&#8217;s escaped my conscious attention.&#8221;</strong></p>
<p>Finding new ways to broker information &#8211; bringing together needs with haves, and different participants, empowered and disempowered &#8211; is, as Anselm discussed with me, one way to change our view of human-to-human, human-to-environment, and human-to-civilization communication (particularly in light of the &#8220;sobering account of how open data is used against the poor in Bangalore&#8221; that <a href="http://twitter.com/timoreilly/status/23179898934" target="_blank">@timoreilly noted</a> recently &#8211; <a href="http://gurstein.wordpress.com/2010/09/02/open-data-empowering-the-empowered-or-effective-data-use-for-everyone/" target="_blank">Open Data: Empowering the Empowered</a>).</p>
<p>The key idea in a crisis filter, Anselm noted, was to break up the participants into different kinds, to connect wants with haves:</p>
<p><strong>&#8220;There are people who are in a situation. We call them citizens. And then there are reporters, people who report situations back to Twitter. And then there are curators, people who canvass Twitter looking for important tweets. And then there are first responders, people who take the curated collection of responses and then act on them.&#8221;</strong></p>
<p>This kind of brokerage, between people acting in curatorial or matchmaking roles with each other, can be extended into, and coevolve with, machine-assisted matching, as Anselm explains.</p>
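The machine-assisted half of that brokerage, pairing wants with haves, can be sketched as a simple tag-and-proximity match. Purely illustrative; the record shapes and the 5 km radius are my own assumptions:

```python
from math import radians, sin, cos, asin, sqrt

def km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def match(wants, haves, max_km=5.0):
    """Pair each 'want' with nearby 'haves' sharing a tag --
    the mechanical core a human matchmaker would otherwise perform."""
    pairs = []
    for w in wants:
        for h in haves:
            if w["tag"] == h["tag"] and km(w["lat"], w["lon"], h["lat"], h["lon"]) <= max_km:
                pairs.append((w["who"], h["who"]))
    return pairs

wants = [{"who": "citizen", "tag": "water", "lat": 52.00, "lon": 4.00}]
haves = [{"who": "responder", "tag": "water", "lat": 52.01, "lon": 4.00},
         {"who": "far-away",  "tag": "water", "lat": 53.00, "lon": 4.00}]
print(match(wants, haves))  # [('citizen', 'responder')]
```

In a real system the curators would sit on either side of this step, vetting what enters the wants and haves lists and acting on the pairs it emits.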
<p>It is also a vital part of creating social augmented experiences that matter.</p>
<p>One of Anselm Hook&#8217;s projects, <a href="http://hook.org/" target="_blank">Angel</a>, is the most radical expression of connecting wants with haves, in that the idea is that &#8220;you have a situation, you broadcast that situation, and help magically appears. You don&#8217;t even sign up for a service. You just get help&#8230;&#8221;</p>
<p>This is the same idea of a brokerage for dealing with emergencies, but applied to the long tail of crisis response. As Anselm describes it:</p>
<p><strong>&#8220;I am interested in personal crisis. &#8216;I lost my cat. Help. I can&#8217;t find where my kid is. I am out of gas. I have a flat tire. My house is on fire. My aunt is trapped in the bedroom.&#8217; The kind of personal crisis that is just as important, but is not enough to get a national movement to help you&#8230;&#8221;</strong></p>
<p>I will publish this conversation with Anselm in full in an upcoming post.</p>
<h3>Zorop &#8211; an ARG for World Peace</h3>
<p>If you want to be part of a really exciting experiment to reimagine our relationships with each other and can be in San Jose this weekend, I highly recommend exploring <a href="http://zorop.org" target="_blank">this &#8220;rabbit hole&#8221;</a>.</p>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="640" height="385" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/czUpYfme0kg?fs=1&amp;hl=en_US" /><param name="allowfullscreen" value="true" /><embed type="application/x-shockwave-flash" width="640" height="385" src="http://www.youtube.com/v/czUpYfme0kg?fs=1&amp;hl=en_US" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p>Thank you <a href="http://www.lightninglaboratories.com/tcw/about-2/" target="_blank">Gene Becker</a>, <a href="http://www.lightninglaboratories.com/" target="_blank">Lightning Laboratories</a> and <a href="http://ubistudio.org/" target="_blank">Ubistudio</a> for sending me this invite:</p>
<p><strong>&#8220;Ken  Eklund (<a href="http://twitter.com/writerguygames" target="_blank">@writerguygames</a>) is developing a wonderful game for the 01SJ  Biennial called ZOROP, aimed at creating World Peace(!). Some of you  might know Ken from his work on the amazing ARGs EVOKE and World Without  Oil. Anyway Ken, along with his collaborator Annette Mees, are  furiously working to get ZOROP ready to go for the Sept 17th premiere at  01SJ.</strong></p>
<p><strong>Are you intrigued? I thought so, and here are your next steps down the rabbit hole:</strong> <strong> </strong></p>
<p><strong>&gt; Check out </strong> <strong><a href="http://zorop.org/" target="_blank">http://zorop.org</a> to learn about the game</strong></p>
<p><strong>&gt; Follow @ZoropPrime to watch it unfold: </strong> <strong><a href="http://twitter.com/zoropprime" target="_blank">http://twitter.com/zoropprime</a></strong></p>
<p><strong>&gt; &#8216;Like&#8217; ZOROP on FB for a different view: </strong> <strong><a href="http://www.facebook.com/pages/Zorop/141140772593618" target="_blank">http://www.facebook.com/pages/Zorop/141140772593618</a></strong></p>
<p><strong>&gt; Become one with the game; consider volunteering as a Zoropathian: </strong> <strong><a href="mailto:curious@zorop.org">curious@zorop.org</a></strong></p>
<p><strong>&gt; Head down to San Jose on the 17th, play the game, and ride the ZOROP Mexican Party Bus. Seriously.&#8221;</strong></p>
<h3><strong>Interview with Bruce Sterling</strong><a name="tag1"></a></h3>
<p><a href="http://www.flickr.com/photos/brucesterling/4671866157/in/photostream/" target="_blank"><img class="alignnone size-medium wp-image-5676" title="Screen shot 2010-09-16 at 7.59.56 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/Screen-shot-2010-09-16-at-7.59.56-PM-300x180.png" alt="Screen shot 2010-09-16 at 7.59.56 PM" width="300" height="180" /></a></p>
<p><em>Click on image above to see video clip from</em> <a href="http://www.flickr.com/photos/brucesterling/4673885122/" target="_blank"><em>brucesflickr</em></a></p>
<p>[Note: the <a href="http://www.ugotrade.com/2010/06/16/interview-with-bruce-sterling-part-i-at-the-9am-of-the-augmented-reality-industry-are2010/" target="_blank">first part of this interview is here</a>, and I broke in anticipation of Part 2 just as I started experimenting with an idea <a href="http://www.linkedin.com/in/joshuakauffman" target="_blank">Joshua Kauffman</a> &#8211; an advisor and entrepreneur working on design in the public sphere &#8211; gave me for an interview technique: the All Souls College one-word question interview.  Although apparently <a href="http://www.nytimes.com/2010/05/28/world/europe/28oxford.html" target="_blank">they recently scrapped it</a>, and I am not very good at sticking to a single word!]</p>
<p><strong>Tish Shute:</strong> We were talking about these proximity-based social networks like Foursquare and Gowalla and how they may influence the emergence of social augmented experiences.</p>
<p>So Joshua&#8217;s suggestion for the first word was &#8220;territorialization&#8221; &#8211; e.g., how do these new mobile social experiences like Foursquare work, and the observation that, rather than breaking down territorialization &#8211; which would be a good thing &#8211; they actually tend to support territorialization&#8230;</p>
<p><strong>Bruce Sterling: Yeah, they&#8217;re re-intensifying it in a very odd, electronic fashion.</strong></p>
<p><strong>Tish Shute:</strong> Yes.</p>
<p><strong>Bruce Sterling:  It&#8217;s not true of projection mapping or the webcam fiducial display stuff. But with the handheld stuff, and especially the urban informatic stuff, it really can&#8217;t help but take on a local flavor. <a href="http://www.layar.com/" target="_blank">Layar</a> is like &#8220;Augmented Dutch Reality.&#8221;</strong></p>
<p><strong>And <a href="http://www.tonchidot.com/" target="_blank">TonchiDot</a> is &#8220;Augmented Japanese Reality.&#8221; It&#8217;s hard to imagine a Layar interface going gangbusters in Tokyo.  Whereas the TonchiDot interface, which is so clearly influenced by anime and cartoon graphics&#8230; Maybe it could find some niche of hipsters in Amsterdam hash bars&#8230;</strong></p>
<p><strong>Stuff that&#8217;s socially generated by people on the ground, as with Foursquare and Gowalla, is bound to take on a regional influence. Right? It&#8217;s like the New York hipsters who were early adopters of Foursquare. They&#8217;re not mapping New York! They&#8217;re mapping Hipster New York.</strong></p>
<p><strong>It&#8217;s all about Williamsburg and places where 24-year-olds go to drink&#8230; They found a demographic niche there. These guys are building the service for them. They&#8217;re people who are willing to work for Foursquare for free, because they want to wear the little king hat.</strong></p>
<p><strong>Tish Shute:</strong> I got the far far away badge &#8216;cos I live on the Upper West Side!</p>
<p><strong>Bruce Sterling: But that&#8217;s not urban geography, right? I mean, that&#8217;s not like Google&#8217;s satellite stare from above.  That&#8217;s a group of citizens doing a portrait of their own region.  You&#8217;re going to see interesting things happen because, of course, people who use Foursquare elsewhere are going to check into New York, and they&#8217;re going to look at the &#8220;New York Foursquare.&#8221;   They&#8217;re going to be aliens who interact with Foursquare people in New York and annotate what they&#8217;re seeing.</strong></p>
<p><strong>Tish Shute:</strong> Oh! Yes. Good point.</p>
<p><strong>Bruce Sterling:  That Foursquare community has a certain émigré soul.  It&#8217;s different from the normal émigré soul of simple tourists in New York. So your friend there is right about the territorialization.</strong></p>
<p><strong>Tish Shute:</strong> Yes, Joshua Kauffman is a smart guy!  Yes I am interested to see what interesting kinds of deterritorializations proximity based social networks and the hyperlocal view of augmented reality might bring, not just the new territorializations.</p>
<p><strong>Bruce Sterling: It&#8217;s not the intense kind of territorialization, like gangs putting down graffiti markers and beating people up.  It&#8217;s an inherent regional character that comes with using peer production to build your database.</strong></p>
<p><strong>Tish Shute:</strong> We were discussing whether AR could break down the walls between people &#8211; people who share the same physical space but actually inhabit different territories, even if they are sitting at the table next to you.</p>
<p><strong>Bruce Sterling: You know, I just wrote an article for my Italian magazine column. I think I mentioned this to you &#8211; a report about ARE 2010.   I titled it, &#8220;Chicks Dig Augmented Reality.&#8221;</strong></p>
<p><strong>Tish Shute:</strong> [laughs]</p>
<p><strong>Bruce Sterling:   There is a very heavy social element to AR, and a phone based element. So the question is: Why would a woman wear a fiducial marker? Like our <a href="http://www.metaio.com/" target="_blank">Metaio</a> speaker at ARE2010 who had a fiducial marker on her lapel pin.</strong></p>
<p><strong>Tish Shute:</strong> Right. Lisa!</p>
<p><strong>Bruce Sterling: Why would a woman go out in public with her Facebook profile on her body?</strong></p>
<p><strong>Tish Shute: </strong>Well I can think of some reasons&#8230;</p>
<p><strong>Bruce Sterling: So that men will approach her, of course.</strong></p>
<p><strong>Tish Shute:</strong> Yes the core of all successful social networks is always a form of dating app.</p>
<p><strong>Bruce Sterling: You do it as a social icebreaker.  It&#8217;s like: I&#8217;m a woman, I&#8217;m sitting here alone, and you can sort of glide by and, you know, take a snap of me.  Then you retreat and have a beer with your friends and you work up the courage, and then you come and say, &#8220;So! Susan!  I understand you like bicycling!  And, boy, me too!&#8221; Right?</strong></p>
<p><strong>Tish Shute:</strong> There are all kinds of social barriers between people in cities that AR might be helpful in breaking down.  An extreme example is the dilemma you actually quite often face as a New Yorker as you walk around a city.  There are people asleep on the pavement and you don&#8217;t know if they&#8217;re dead or alive.</p>
<p><strong>Bruce Sterling: Right.</strong></p>
<p><strong>Tish Shute:</strong> And you sort of like have this awful ethical dilemma of like, &#8220;Am I walking by someone I should be shaking by the shoulder, right, to wake them up so they don&#8217;t die, right?&#8221;</p>
<p><strong>Bruce Sterling: Yes.</strong></p>
<p><strong>Tish Shute: </strong> You said in your keynote that we should pay attention to augmentations for people and situations that could really use some augmentation.</p>
<p><strong>Bruce Sterling: Right. There actually is such an app in Britain right now.  I posted about it:  two Augmented Reality schemes for rubbish and hobos.</strong></p>
<p><strong>Tish Shute:</strong> Right. Yes I saw that!</p>
<p><strong>Bruce Sterling:  &#8220;Any sufficiently advanced technology is indistinguishable from garbage and hobos.&#8221;  You don&#8217;t need to personally find out whether this hobo is worth your help.  What you need is a good way to report the hobo to a hobo check-up service.   They come in, and they look on their own database or supply a database to you, or a facial recognition unit, whatever.  The service says: &#8220;Oh, well.  That&#8217;s Fred. He&#8217;s a paranoid schizophrenic. He always sleeps in that alley. Let him be.&#8221;</strong></p>
<p><strong>The same goes for the rubbish &#8212; although I don&#8217;t want to compare rubbish to hobos.   In fact, people do go out with their AR kits and take pictures of abandoned garbage bags and broken glass.  They upload them with geolocated tags for the local garbage guys.  Guys who are sitting around doing pretty much nothing because they don&#8217;t know where the rubbish is.</strong></p>
<p><strong>And they will come out and get the rubbish! I mean, they just deputize guys to go out and follow these alerts. Right?</strong></p>
<p><strong>But nobody predicted &#8212; least of all me &#8212; that you were going to have a high-tech Augmented Reality system that consisted of removing rubbish and derelicts. Right?   But rubbish and derelicts  always go profoundly under-reported. It&#8217;s just hard to get people&#8217;s attention.</strong></p>
<p><strong>But it&#8217;s very easy to set up a system so that, if you get  ten reports on the same piece of rubbish, that&#8217;s going to work its way to the top of the stack.   That&#8217;s why I was trying to get AR people away from the romance of  the hottest app for the shiniest machine.  More toward a design stance that&#8217;s more user-centric.</strong></p>
<p><strong>Where are the actual problems about stuff that we perceive?  Stuff we can&#8217;t do anything about?   Or people whose mechanisms of perception are harmed. They could be doing good work, being more participative, if they didn&#8217;t, basically, walk around without their glasses on.</strong></p>
<p><strong>Tish Shute:</strong> Well this leads well into the second word Joshua suggested as an interesting springboard &#8211; sensitivity.</p>
<p>On the one hand we can do these things for people who maybe need the augmentation because they have difficulty with one or another sense, e.g., their eyes are not functioning, or their ears are not functioning. But on the other hand, we can&#8217;t cross the social bridge to communicate with people who are temporarily disempowered in relation to the rest of society, e.g., hobos and people who sleep on the streets of New York City.  And even though Augmented Reality could potentially be helpful, it can even be more disempowering to the already disempowered.</p>
<p><strong>Bruce Sterling: Right.</strong></p>
<p><strong>Tish Shute:</strong> But re &#8220;sensitivity&#8221; &#8211; does augmentation increase or decrease our sensitivity?  This is a problem that Will Wright brought up [<a href="http://augmentedrealityevent.com/2010/06/14/are-2010-keynote-by-will-wright-brilliant-inspiration-for-the-augmented-reality-community/" target="_blank">see video of Will Wright&#8217;s keynote at are2010</a>], e.g., the problem of parking HUDs getting in the way of your intuitive parallel parking skills.  The Lexus that takes driving control from you when you look back, &#8216;cos it knows that you&#8217;re looking at the road, and it starts to brake. Right?</p>
<p><strong>Bruce Sterling: Right.</strong></p>
<p><strong>Tish Shute:</strong> The fact is that the problem with technology is that it makes us less sensitive, right? Augmentations sometimes get in our way?</p>
<p><strong>Bruce Sterling:  I suppose that&#8217;s true. But I&#8217;ve heard that said about practically every medium.  Especially television.</strong></p>
<p><strong>Everybody wants to blame machinery for their lack of morality.   It&#8217;s hard to top something like the Kitty Genovese killing in New York. This sort of legendary New York horror story from the 1960s. A woman is stabbed to death in public, no one does anything.</strong></p>
<p><strong>Tish Shute:</strong> Right.</p>
<p><strong>Bruce Sterling: I don&#8217;t think that our media is making us any less humane or more callous.</strong></p>
<p><strong>Tish Shute: </strong>All right. Oh no! I see what you&#8217;re saying. Perhaps I misrepresented what Will was suggesting by putting it that way.  The question is perhaps more how do we get the sensitivity into the technology.  Human bodies are fantastically sensitive and sensory.</p>
<p><strong>Bruce Sterling: Right.</strong></p>
<p><strong>Tish Shute: </strong>And we have these like sensitivities.  For instance, how could augmentations of reality be like a blush? You definitely want an interaction that&#8217;s not just this data being pushed at you. But what is the data that counts, right?  Will often shows a slide of an iceberg, with the tip of the iceberg being the conscious mind.</p>
<p><strong>Bruce Sterling: Oh, I see.  Yeah.</strong></p>
<p><strong>Tish Shute: </strong> And underneath it is all the preconscious stuff that really counts, right?  Any thoughts on that?</p>
<p><strong>Bruce Sterling:  I did take interest in that.  Will has obviously been spending a lot of time studying cognition.</strong></p>
<p><strong>Tish Shute:</strong> Yes.</p>
<p><strong>Bruce Sterling:  I&#8217;m not convinced that AR has got a lot to do with that.  There is certainly a trend there.  There are a lot of people who want to do body hacks and brain hacks.  I can imagine AR being used for that purpose, but it seems like a niche application.   What is the point of our accessing even more stuff that&#8217;s outside of our consciousness?</strong></p>
<p><strong>Tish Shute:</strong> One of the things he is talking about is game dynamics, isn&#8217;t it?  The role of the imagination in play.  For example, he shows the high dynamic range photos that make the world magical.  Something you want to engage with playfully.  This, he points out, increases a sense of agency because you are encouraged to engage and to play with the world.</p>
<p><strong>Bruce Sterling:  Well, I&#8217;m a literary guy.  Italo Calvino did a lot of writing about this.  He talked about the classics of literature.  Why do we read the classics?  Calvino said we do not read, but reread the classics.  And the reason we do that is that, at first, we read a classic book and we think, &#8220;Boy, this book is really good.&#8221;   Then, five years later, we read it again and we think, &#8220;Boy, this is a really good book, and it&#8217;s got so much more in it than I thought it had when I was 18.&#8221;  Then we read it again at 28, and it&#8217;s like, &#8220;OK, now I really seem to understand this book, and it means something to me now that I didn&#8217;t know when I was 18 and 25.&#8221;</strong></p>
<p><strong>What you are doing through that access is learning something about yourself.  So what Will is arguing is that what I really need is a better augmentation.  So that I can go in there and sop up the book all at once.  I can grab every cultural nuance in it, instead of the stuff that&#8217;s sliding past me because I&#8217;m 18 and kind of young and hasty.  Maybe I could have certain words and phrases helpfully underlined, that are like, &#8220;OK, well, this part is problematic for you.&#8221;  In some sense, that&#8217;s not allowing me to be 18.</strong></p>
<p><strong>I&#8217;m never going to have the experience of my own maturation against this text, because I&#8217;ve devoured it all in one gulp.</strong></p>
<p><strong>My argument would be that if you want people to be more sensitive toward certain, say, issues and problems, it&#8217;s better to find the people who are already sensitive to those issues and problems, and give them a bigger stake in your augmentation system.</strong></p>
<p><strong>Tish Shute:</strong> Yes the social augmented experiences are going to be the most valuable.</p>
<p><strong>Bruce Sterling:  Say that I am really worried about public health.  Well, if I have a lot of nurses that are using my system, people who are aware of my issues, then I could be walking around and I&#8217;ll see a lot more tags saying, &#8220;This is where he got food poisoning!&#8221;  &#8220;In this shooting gallery, many people have caught AIDS!&#8221;  Or, you know, &#8220;Tuberculosis has been spotted over here in this building.&#8221;</strong></p>
<p><strong>At that point, I could simply share their knowledge and get some social intelligence.  As opposed to trying to amp the basements of my little hacker-mind and drag stuff up that&#8217;s escaped my conscious attention.</strong></p>
<p><strong>Tish Shute:</strong> Interesting &#8211; that seems to bring us to another recurring theme in AR: people tend to pigeonhole it as &#8220;merely&#8221; a visual interface.  But actually, it&#8217;s the intersection, isn&#8217;t it, of social intelligence and augmentation.</p>
<p><strong>Bruce Sterling:  Well, it depends entirely on how you design the system.  If I&#8217;ve got a military augmented reality, I would expect that to be mostly about urban fighting.  It&#8217;s going to be about kicking in a door and shooting terrorists.   If I pull that helmet off my head and put that on the head of an emergency worker or a cop, I&#8217;m going to get a militarized cop or a militarized emergency worker.</strong></p>
<p><strong>Tish Shute:</strong> Well, the histories of the two great mass media of the twentieth century &#8211; TV and the atomic bomb &#8211; were intertwined, and I suppose the evolution of ubiquitous media, augmented reality and urban warfare is already intertwined too.  So how can we encourage augmented realities to move beyond the military roots common to much technology and into more peaceful urban realities?</p>
<p><strong>Bruce Sterling:  Well, it&#8217;s a matter of deciding whose reality it is that you&#8217;re trying to augment.  I&#8217;m not trying to be a bleeding heart about it, but obviously there are people in our society right now with reality that could really use some augmentation.  They are mostly disadvantaged people.  They are vision impaired, or maybe they have autism.  They might be senile and just can&#8217;t remember where they put their shoes.  These are people who could really use some help, right?</strong></p>
<p><strong>So, start with people who really need sensory or cognitive help. Before you turn our geeks into Superman, why don&#8217;t you try turning some people who are harmed into more functional individuals?  Then you&#8217;ll be able to learn how to do that. Then maybe you can ramp it up to these Nietzschean heights of the superb Man With the X-ray Eyes.  Whatever.</strong></p>
<p><strong>Tish Shute:</strong> Did you notice that a couple of companies, like <a href="http://www.tagwhat.com/" target="_blank">TagWhat</a>, actually have apps geared towards people with disabilities? I haven&#8217;t had a chance to check it out.</p>
<p><strong>Bruce Sterling: I&#8217;m sorry, I wasn&#8217;t looking at their tags.</strong></p>
<p><strong>Tish Shute:</strong> I was discussing this with Joshua, who mentioned <a href="http://www.eyewriter.org/" target="_blank">Zachary Lieberman&#8217;s EyeWriter</a>, which is for people with locked-in syndrome. Do you know that?</p>
<p><strong>Bruce Sterling: Sure. And people appreciate that because the poor guy, he&#8217;s laid up with Lou Gehrig&#8217;s Disease. Now they&#8217;ve given him a way out.  AR is like a spark of new hope that gives his life meaning. What&#8217;s wrong with that?</strong></p>
<p><strong>Tish Shute:</strong> Yeah. And <a href="http://www.youtube.com/watch?v=IJ8VMLECToQ" target="_blank">Tim Byrne using Sixth Sense</a> for Autism is interesting.</p>
<p><strong>Bruce Sterling: Let&#8217;s consider it the other way. Let&#8217;s say this graffiti writer there, instead of him being sick and weak, let&#8217;s say he&#8217;s an athlete.  So I want to make him into a super-human graffiti writer. I want him to run around graffiti-tagging the entire town before dawn. Is that a good idea? Do we need that? Super human, super taggers? What if he&#8217;s going to spray up stencils of Nietzsche?  I kinda wonder whether the game is worth the candle.</strong></p>
<p><strong>Tish Shute: </strong>Yes, I suppose it is not a great social scenario to be always augmenting the lives of the elites!  Hmm, the third single-word interview question is &#8220;homophily,&#8221; and earlier you were saying that we&#8217;ve kinda got to accept this is very much part of how AR works, because hyperlocal experiences get created by local communities &#8211; which up to now have tended to be homophilies.</p>
<p><strong>Bruce Sterling: Well, I think that&#8217;s easily handled with some design thinking. You&#8217;ve got to do some user observation and show some sympathy with the user, and to be aware that you&#8217;re designing for the user and you&#8217;re not designing for yourself.</strong></p>
<p><strong>In a field as young as this, it&#8217;s mostly geeks building cool stuff for geeks. In a lot of ways, it&#8217;s a &#8220;can you top this&#8221; contest. That&#8217;s OK, but it&#8217;s not good design to be your own client all the time. It&#8217;s like writing novels to amuse yourself, or sitting on the porch singing the blues on your own guitar with only yourself to hear.</strong></p>
<p><strong>Tish Shute:</strong> What will it take for AR to mature out of this &#8220;geeks building cool stuff for geeks&#8221; phase, do you think?</p>
<p><strong>Bruce Sterling:  It&#8217;s necessary to master some of the tools first.  I think of the way the web has developed over the years. When the World Wide Web first appeared, it was just for physicists, and was all line commands and quite unstable and difficult. Then you got usability studies, and things like Ajax and so forth. It&#8217;s a very painstaking thing.</strong></p>
<p><strong>We&#8217;re not best at building interfaces for the best computer scientists.  Web 2.0 was built from things like watching people cry while they were trying to fill out insurance forms. &#8220;Well, why are you so upset?&#8221;</strong></p>
<p><strong>&#8220;Well, I got to the end of the webpage, and then it said I took too long, and it cut me off and now I have to start all over!&#8221; <a href="http://blog.jjg.net/" target="_blank">Jesse James Garrett</a>, right? Benefactor of mankind.</strong></p>
<p><strong>If you&#8217;re experienced, you think:  &#8220;Why don&#8217;t I build a little module here, and kind of move the form over here, then I&#8217;ll periodically update it with some asynchronous JavaScript and XML.&#8221; And people are like, &#8220;Gee, how odd.&#8221; But that really works for real people. It comes from studying what people want to do.  Whereas, the current AR approach to a problem like the insurance form would be like, &#8220;I will give you the ability to record the entire insurance form, and it will flash before your eyes!&#8221;  OK great, that&#8217;s a cool hack, but I don&#8217;t really need X-Ray Eyes to fill out my insurance form. What I need is a more user-friendly interface.</strong></p>
<p><strong>Tish Shute:</strong> Well, it seems like we are moving into the terrain of Joshua&#8217;s fifth word, &#8220;ventilation&#8221; &#8211; if I understand it rightly, it is at least partially the antidote to territorialization, because it&#8217;s this idea that a place needs air, so we come out of the hermetically sealed boxes of the way we relate to a place &#8211; and what kind of augmentation would bring more oxygen to that space?</p>
<p>There was an interesting moment at the Auggies when <a href="http://twitter.com/dutchcowboy" target="_blank">Maarten Lens-FitzGerald</a> presented the guerrilla shopping Layar, and Mark Billinghurst and Jesse Schell, who spoke first, didn&#8217;t seem too impressed. They didn&#8217;t want to walk to shop &#8211; that was what web shopping did, it saved us from walking to shop&#8230; But it felt to me that you picked up on something which might have some bearing on &#8220;ventilation,&#8221; in that this AR shopping Layar was kind of squatting Prada &#8211; a favela chic AR shopping thing?</p>
<p><strong>Bruce Sterling: I wasn&#8217;t sure if I was interpreting what Maarten had in mind by that.  But I think Maarten sees his structure accurately as an experience thing rather than a mapping thing. I think he&#8217;s proudest of things like the Berlin Wall app on Layar, as opposed to Layars that help you go get a hamburger. It&#8217;s like&#8230; so when Layar inserts parasitic augmented shopping over other people&#8217;s real shopping? That was rather a subversive thing.</strong></p>
<p><strong>I think the key there is that his client is called &#8220;Hostage T-shirts,&#8221; right? I mean it&#8217;s actually kind of a transgressive little hippy T-shirt store that Layar can dump anywhere in the world. Layered right over, say, Versace and Prada.  I don&#8217;t know what becomes of that effort. And I&#8217;m not sure about the term &#8220;ventilation,&#8221; because that&#8217;s a term of art I haven&#8217;t heard much.</strong></p>
<p><strong>Tish Shute:</strong> Maybe it&#8217;s like in a cafe.  Ventilation would mean we were able to communicate with all these different categories of people that we normally would be unable to connect to, even though we might be sitting only a few feet apart.</p>
<p><strong>Bruce Sterling:   So it means ventilation in the bottles of our homophilies.</strong></p>
<p><strong>That&#8217;s not a personal problem for me.  I commonly live in foreign cities and, you know, spend a helluva lot of time talking to strangers at conferences. So I don&#8217;t think I&#8217;d have that particular tight little social island problem.</strong></p>
<p><strong>Tish Shute:</strong> Of the three judges at the Auggies, you seemed most enthusiastic about the Layar entry.</p>
<p><strong>Bruce Sterling: It may be they&#8217;re not as familiar with the business models of locative AR as I am, and as Maarten is. It was kind of a subtle in-joke he was making about Layar&#8217;s own business model there.</strong></p>
<p><strong>Tish Shute: </strong>How do you explain that?</p>
<p><strong>Bruce Sterling: Well, you know, Layar&#8217;s in the business of  selling software to make mapping and urban structures into ecommerce.</strong></p>
<p><strong>The ideal way to do that obviously would be to move the richest customers into the most expensive shops in the most rapid way possible. Or at least distribute them in the directions they want to go, a la Google. Whereas this app that Maarten was talking about puts big barnacles in the way that are selling punk t-shirts.</strong></p>
<p><strong>Tish Shute:</strong> Right! Right!</p>
<p><strong>Bruce Sterling:   The Dutch are a bit subtle in their humor.  I rather imagine there&#8217;s a lot of discussion in Layar&#8217;s inner circle about exactly what they want developers to do with their platform. They&#8217;re going to have considerable political difficulty deciding who can have a Layar key and how you discipline people when they start doing weird stuff. &#8220;The Oakland Medical Marijuana layar.&#8221;</strong></p>
<p><strong>Tish Shute:</strong> Well, finding nudists is one of the top layars at the moment.</p>
<p><strong>Bruce Sterling: You know, obviously so. And finding narcotics in Amsterdam, or a prostitution layer.  I warned them nine months ago this was bound to happen. Iâ€™m sure theyâ€™re aware of it.  I don&#8217;t think Layar wants Googleâ€™s style of cool, technocratic detachment.</strong></p>
<p><strong>Tish Shute:</strong> But thatâ€™s pretty difficult to do in current augmented reality because we donâ€™t have all the mathematical voodoo for full on AR search yet, do we?</p>
<p><strong>Bruce Sterling: Well, you can hire it out. Somebodyâ€™s going to do it, if they get interested enough.  Thereâ€™s Nokia-Yahoo. Nokia-Yahoo! just did a big corporate deal&#8230;involving Nokiaâ€™s mapping system and Yahooâ€™s localization. So the Nokia-Yahoo! mash-up is called Nooo!   Or could be called Yahno. Yakia!  Unfortunately ridiculous names.</strong></p>
<p><strong>Tish Shute:</strong> Itâ€™s interesting because you mentioned the spidersâ€™ mating problem at Google. Theyâ€™ve got all the pieces to make this kind of level of AR obviously right now. But they actually havenâ€™t done it yet.</p>
<p><strong>Bruce Sterling: There must be at least some discussion in Google, but the same goes for Microsoft. Iâ€™m frankly baffled by Microsoft, because itâ€™s just full of insanely brilliant people. What the hell are they doing in there? Name one serious innovation thatâ€™s come out of their labs in five years. They make Integral Research look dynamic. Itâ€™s really kind of sad.</strong></p>
<p><strong>Tish Shute:</strong> Itâ€™s a very curious situation with AR though, because AR more than any new technology relies on these big hordes of data particularly for the mapping, right? And only the big four have the data &#8211; although we are beginning to see upstarts, Earth Mine, Simple Geo&#8230; Did you get a chance to meet Di-Ann Eisnor  from <a href="http://www.waze.com/homepage/" target="_blank">Waze &#8211; real-time maps and traffic information based on the wisdom of the crowd</a>.Â  Waze is a very interesting project that is a potential giant killer.</p>
<p><strong>Bruce Sterling: No, I didnâ€™t talk to them.  Iâ€™ve seen people speculate that Earthmine and Apple are going to make an allegiance. I guess if youâ€™re thinking that urban informatic mapping is a super big thing for AR, that must be true.   But Iâ€™m not convinced thatâ€™s necessarily the case. People have pointed out that you can just use Google Maps, and you donâ€™t have to walk around with a little visor.  There are other aspects of AR besides the cell phone space. Thereâ€™s Total Immersion&#8217;s big display screens. Thereâ€™s the web-based fiduciary stuff. And thereâ€™s projection mapping. And then thereâ€™s experience design just for people who need their reality augmented for whatever personal or social reason. [dog barking]</strong></p>
<p><strong>Tish Shute:</strong> Right. Oh, I’m in the middle… My son’s come. What a good haircut!</p>
<p><strong>Bruce Sterling: Hi, there.</strong></p>
<p><strong>Tish’s Son</strong>: Hi.</p>
<p><strong>Bruce Sterling: How’s it going, sir? Good to see you…</strong></p>
<p><strong>Tish’s Son:</strong> Good.</p>
<p><strong>Tish:</strong> [laughs]</p>
<p><strong>Bruce Sterling: Yeah. Nice looking shirt. I like the back of it.</strong></p>
<p><strong>Tish Shute:</strong> That’s from the American Shaolin Temple. [laughs]</p>
<p><strong>Bruce Sterling: All right. Awesome. Kung Fu geek shirt.</strong></p>
<p><strong>Tish Shute:</strong> Yup, he is a bit of a Kung Fu geek. He and his dad did an iPhone app for Yu-Gi-Oh scoring.</p>
<p><strong>Bruce Sterling: Awesome. Plenty of Pokémon-style combat in Yu-Gi-Oh.</strong></p>
<p><strong>Tish Shute:</strong> Yeah. Well, it’s interesting because you’ve talked about this aspect &#8211; that the Pokémon aspect of AR hasn’t kicked in yet. But it’s obviously a match made in heaven to some degree, isn’t it?</p>
<p><strong>Bruce Sterling: One would think so, yeah.  The whole little kid gaming thing. What does that have to do with Google or Bing? You don’t need a massive database for stuff like that.</strong></p>
<p><strong>Tish Shute: </strong>Yeah, you’re right. But good tracking, mapping and registration requires a lot of mapping&#8230;</p>
<p><strong>Bruce Sterling: Well, our current tracking, mapping and registration requires that. Maybe there’s some other way to hack it that we don’t know about yet.</strong></p>
<p><strong>Tish Shute: </strong>That’s a very interesting point. We always have to stretch the way we think about mapping… Perhaps it’s a real-time understanding of the location you’re in&#8230; Perhaps the map is being negotiated through several social processes?</p>
<p><strong>Bruce Sterling: There are maps, and then there are maps. There’s a kind of artillery map where you need to know the precise location of target spaces. And then there’s the kind of social map where I’m really looking for the IN-N-OUT Burger where my sister went last Tuesday. That’s a different system.</strong></p>
<p><strong>Tish Shute:</strong> And I think with AR, at the moment, we’re certainly getting the most out of the social maps. And the other side [machine perception technologies to detect the identity and physical configuration of objects relative to each other, to accurately project information alongside/overlaid on a physical object] is still kind of the big dream, isn’t it?</p>
<p><strong>Bruce Sterling: They say that men never ask for directions and women never read maps. Clearly, the genders have different ways of navigating the world. Who’s to say what manner of augmenting our experiences is hottest?  I’m not convinced that today’s rather rigid geolocativity is really what our society wants from that particular service. Maybe what we want is something more folksy.   Some useful nudge in the right direction as opposed to grids with 200 meters here and instructions to turn such-and-such.</strong></p>
<p><strong>Besides, there are other hacks we haven’t considered.  We’re very dependent on GPS, but just suppose all those satellites are blown out of the sky in a solar storm. Would we really want to give up mapping? Wouldn’t we just come up with some other nifty hack?  Radio beacons, let’s just say. Atomic clock timers in towns. Or maybe just little QR codes on lampposts that give you the exact location of that lamppost, and just click the thing and have it calculate where you are.</strong></p>
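<p>That lamppost-beacon idea is simple enough to sketch. In the toy code below, the payload convention ("lat,lon" in decimal degrees) and the metre offsets are invented for illustration; a real beacon scheme could encode anything:</p>

```python
import math

def position_from_beacon(qr_payload, offset_east_m=0.0, offset_north_m=0.0):
    """Turn a lamppost QR payload into the viewer's position.

    Assumes a made-up payload convention of "lat,lon" in decimal
    degrees, plus the viewer's estimated offset from the post in
    metres (e.g. judged from the apparent size of the code).
    """
    lat_s, lon_s = qr_payload.split(",")
    lat, lon = float(lat_s), float(lon_s)
    # Rough conversion from metres to degrees near this latitude.
    lat += offset_north_m / 111_320.0
    lon += offset_east_m / (111_320.0 * math.cos(math.radians(lat)))
    return lat, lon
```

<p>Standing at the post, the payload is the answer; the offsets just nudge the fix for a viewer a few metres away.</p>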
<p><strong>Tish Shute:</strong> Yes, the <a href="http://thenexthope.org/" target="_blank">NextHope</a> <a href="http://thenexthope.org/2010/07/hackable-badge-accessory-kits-available/" target="_blank">OpenAMD project</a> had a clever way of triangulating location indoors.</p>
<p><strong>Bruce Sterling: Well, GPS is there and people all want to use it. It’s got a good API, so of course you want to. And the guys who are good at doing it are real geolocative freaks. But the mere fact that we can do it this way, and that you can make it pay, doesn’t mean that it’s the ultimate way to provide that service to people.  It’s like saying that Egyptian hieroglyphics must be the greatest way to write, because we’ve got a lot of them and they’re hard to learn. What if somebody comes along with an alphabet? It’s going to be a little embarrassing.</strong></p>
<p><strong>Tish Shute:</strong> Yeah, that’s a very good point. Now, this is a simpler, more ordinary question about the event. <a href="http://www.ydreams.com/#/en/homepage/" target="_blank">YDreams</a> went off the map in the Auggie voting, and walked away with The Auggies. No one doubted that that was the most…</p>
<p><strong>Bruce Sterling: I don’t know. I thought those <a href="http://occipital.com/blog/" target="_blank">Occipital</a> guys with the panoramic painting&#8230;. That was hairy. I would have been tempted to give them the prize myself, actually.</strong></p>
<p><strong>Tish Shute:</strong> And what did you like best about that? Because I agree. I love <strong><a href="http://occipital.com/blog/" target="_blank">Occipital</a></strong>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/Screen-shot-2010-09-17-at-6.20.58-PM.png"><img class="alignnone size-medium wp-image-5704" title="Screen shot 2010-09-17 at 6.20.58 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/Screen-shot-2010-09-17-at-6.20.58-PM-300x41.png" alt="Screen shot 2010-09-17 at 6.20.58 PM" width="300" height="41" /></a></p>
<p><em>click to enlarge</em></p>
<p><strong>Bruce Sterling: I thought it was a more technically difficult stunt than the hand registration thing.  Using a hand as a 3-D cursor is hot, but  not like painting a panorama in 3-D in real time.  That was an impressive technical feat.</strong></p>
<p><strong>Tish Shute: </strong>And they hinted at a more AR version of that in 2.1.1. What do you see coming out of that as possibilities?</p>
<p><strong>Bruce Sterling: Well, I’d heard of <a href="http://www.ydreams.com/#/en/homepage/" target="_blank">YDreams</a>, so I wasn’t stunned. But I’d never heard of those guys. I wonder what the heck else they’ve got in the attic.</strong></p>
<p><strong>Tish Shute:</strong> Very cool stuff&#8230;</p>
<p><strong>Bruce Sterling: Well, more power to them. But clearly YDreams was the popular favorite. And who couldn’t like it? It was just so AR.</strong></p>
<p><strong>Tish Shute</strong>: It was so AR and so gorgeous.</p>
<p><strong>Bruce Sterling: It was pretty, actually. Except for their ugly menu button and poor font choice.</strong></p>
<p><strong>Tish Shute:</strong> Oh, yes. You didn’t like that, did you? [laughs] But with the Occipital panorama, what do you see as the next stage of that?</p>
<p><strong>Bruce Sterling: Well, obviously quicker and faster. Quicker and faster and more accurate in a network. Let’s just say I’m in New York and you’re in New York and I’m calling you for help. And you say, where are you?  I just whirl around like this and I mail it to you on a Google Wave. And you whirl around like that, and then we compare the two panoramas and do an instant triangulation. And you say: I’m over here on this red dot of your screen.</strong></p>
<p><strong>Tish Shute: </strong>Yeah, exactly.</p>
<p><strong>Bruce Sterling:  We’re navigating with panoramas by having two connected panoramas and considering the difference.</strong></p>
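<p>At its core, the two-panorama trick is bearing intersection: if both panoramas contain the same landmark, the two lines of sight cross at a single point, and that fixes everything else. A minimal flat-plane sketch (the points and bearings below are invented for illustration):</p>

```python
import math

def intersect_bearings(a, bearing_a, b, bearing_b):
    """Intersect two bearing rays cast from known points a and b.

    Bearings are degrees clockwise from north on a flat x/y plane
    (east = +x, north = +y). Returns the crossing point, or None
    if the rays are parallel.
    """
    da = (math.sin(math.radians(bearing_a)), math.cos(math.radians(bearing_a)))
    db = (math.sin(math.radians(bearing_b)), math.cos(math.radians(bearing_b)))
    # Solve a + t*da = b + s*db for t (2x2 system via Cramer's rule).
    denom = da[1] * db[0] - da[0] * db[1]
    if abs(denom) < 1e-12:
        return None  # parallel lines of sight: no unique fix
    bx, by = b[0] - a[0], b[1] - a[1]
    t = (by * db[0] - bx * db[1]) / denom
    return (a[0] + t * da[0], a[1] + t * da[1])
```

<p>The hard part in practice, which the panoramas would supply, is recognizing the shared landmark in both images; the compass arithmetic itself is the easy bit.</p>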
<p><strong>Tish Shute: </strong> Yeah, very interesting&#8230;</p>
<p><strong>Bruce Sterling: Not shabby, right?</strong></p>
<p><strong>Tish Shute:</strong> Not shabby at all.</p>
<p><strong>Bruce Sterling: If you could do it in real time.</strong></p>
<p><strong>Tish Shute:</strong> Then the other thing I missed, because I was going to meet Will, was the Launch Pad competition. Did you catch that?</p>
<p><strong>Bruce Sterling: I didn’t see it either. I thought of another app though.</strong></p>
<p><strong>Tish Shute:</strong> Oh!</p>
<p><strong>Bruce Sterling: You’ve got a panorama maker in your home office, and it just scans the office 24 hours, 365, and tags anything that moves, right? OK, where’s the clipboard? At 8:15 it was over here.  Now it’s vanished. Now another object is viewed over here. So, logically, ping, you hit it with a sticky light and there it is, right?</strong></p>
<p><strong>Tish Shute:</strong> Oh, that&#8217;s cool. Also, knowing what has changed in any environment would be a big enabler for a lot of AR visions.</p>
<p><strong>Bruce Sterling:  I’m sure there are many other things you could do with panoramas.</strong></p>
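<p>That office-scanner idea boils down to diffing successive object inventories. A toy sketch (the object names and positions are invented; a real system would get them from the panorama’s tracker):</p>

```python
def diff_snapshots(before, after):
    """Compare two object->position inventories from successive scans
    and report what moved, what vanished, and what newly appeared."""
    moved = {name: (before[name], after[name])
             for name in before.keys() & after.keys()
             if before[name] != after[name]}
    vanished = {name: before[name] for name in before.keys() - after.keys()}
    appeared = {name: after[name] for name in after.keys() - before.keys()}
    return moved, vanished, appeared
```

<p>The “sticky light” then just points at the last position in the <code>moved</code> or <code>appeared</code> report.</p>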
<p><strong>Tish Shute:</strong> My jet lag is beginning to kick in big time &#8211; so many ideas to pursue from are2010 &#8211; those panoramas are very exciting though.</p>
<p><strong>Bruce Sterling: Oh, well, it’s all right.  We can augment reality!   I’ve got three heads and six hands!</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/09/17/urban-augmented-realities-and-social-augmentations-that-matter-interview-with-bruce-sterling-part-2/feed/</wfw:commentRss>
		<slash:comments>8</slash:comments>
		</item>
		<item>
		<title>Vision Based Augmented Reality (AR) in Smart Phones &#8211; Qualcomm&#8217;s AR SDK: Interview with Jay Wright</title>
		<link>http://www.ugotrade.com/2010/08/05/vision-based-augmented-reality-ar-in-smart-phones-qualcomms-ar-sdk-interview-with-jay-wright/</link>
		<comments>http://www.ugotrade.com/2010/08/05/vision-based-augmented-reality-ar-in-smart-phones-qualcomms-ar-sdk-interview-with-jay-wright/#comments</comments>
		<pubDate>Thu, 05 Aug 2010 22:56:11 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Android]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR HMDs]]></category>
		<category><![CDATA[AR standards]]></category>
		<category><![CDATA[AR version of Rock'em Sock'em]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality standards]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Chokkan Nabi]]></category>
		<category><![CDATA[Christian Doppler Handheld AR LAB in Graz]]></category>
		<category><![CDATA[Davide Carnovale]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[going beyond compass/gps based AR]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[InsideAR]]></category>
		<category><![CDATA[Junaio]]></category>
		<category><![CDATA[Junaio glue]]></category>
		<category><![CDATA[Karma Augmented Reality Mobile Architecture]]></category>
		<category><![CDATA[Kooaba]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Maarten Lens-FitzGerald]]></category>
		<category><![CDATA[markerless tracking]]></category>
		<category><![CDATA[Markus Strickler]]></category>
		<category><![CDATA[Metaio]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[open Android JPCT 3D engine]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Patrick O'Shaughnessey]]></category>
		<category><![CDATA[point and find]]></category>
		<category><![CDATA[Qualcomm]]></category>
		<category><![CDATA[Qualcomm AR Competition]]></category>
		<category><![CDATA[Qualcomm Augmented Reality Competition]]></category>
		<category><![CDATA[Qualcomm Augmented Reality Developer Challenge]]></category>
		<category><![CDATA[Qualcomm Augmented reality SDK]]></category>
		<category><![CDATA[Qualcomm Developer Challenge]]></category>
		<category><![CDATA[Simulation3D]]></category>
		<category><![CDATA[Snapdragon]]></category>
		<category><![CDATA[Thomas Alt]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Total Immersion]]></category>
		<category><![CDATA[Unifeye Mobile SDK]]></category>
		<category><![CDATA[Unifeye SDK]]></category>
		<category><![CDATA[Unity for AR]]></category>
		<category><![CDATA[Unity for augmented reality]]></category>
		<category><![CDATA[Unity3D]]></category>
		<category><![CDATA[Upliq 2010]]></category>
		<category><![CDATA[vision based AR]]></category>
		<category><![CDATA[vision based augmented reality]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[Yohan Baillot]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5593</guid>
		<description><![CDATA[Recently, Qualcomm announced an SDK for vision based augmented reality &#8211; currently in private beta and open to the public this fall. The Qualcomm augmented reality (AR) bonanza will launch with a $200,000 developer challenge and an SDK that will put vision based augmented reality into the hands of developers without licensing fees. This is [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.qualcomm.com/videos/explore?search=mattel&amp;sort=&amp;channel=All" target="_blank"><img class="alignnone size-medium wp-image-5616" title="Screen shot 2010-08-05 at 6.07.36 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/Screen-shot-2010-08-05-at-6.07.36-PM-300x212.png" alt="Screen shot 2010-08-05 at 6.07.36 PM" width="300" height="212" /></a></p>
<p>Recently, <a href="http://www.qualcomm.com/" target="_blank">Qualcomm</a> announced <a href="http://qdevnet.com/ar" target="_blank">an SDK for vision based augmented reality</a> &#8211; currently in <a href="http://qdevnet.com/dev/augmented-reality/private-beta-program" target="_blank">private beta</a> and open to the public this fall.  The Qualcomm augmented reality (AR) bonanza will launch with a <a href="http://qdevnet.com/dev/augmented-reality/developer-challenge" target="_blank">$200,000 developer challenge</a> and an SDK that will put vision based augmented reality into the hands of developers without licensing fees.</p>
<p>This is a big step forward for augmented reality and a very important move by an industry giant to support the rapidly evolving AR industry.  Innovation at all levels of the AR stack, particularly at the hardware level (CPU/GPU optimization), is vital for the full vision of augmented reality &#8211; media tightly registered to physical space &#8211; to take center stage.   Vision based AR takes mobile AR beyond compass/GPS based AR post-its, which are only loosely connected to the world (but the staple of most current AR apps), towards the holy grail of AR &#8211; markerless tracking with the whole world as the platform.</p>
<p>Click on the image above or <a href="http://www.qualcomm.com/videos/explore?search=mattel&amp;sort=&amp;channel=All" target="_blank">see here</a> for a video demo of an AR version of the Rock&#8217;em Sock&#8217;em Robots game. <a href="http://www.mattel.com/">Mattel</a>, one of the first companies working with the SDK, demoed AR Rock&#8217;em Sock&#8217;em at the <a href="http://uplinq.com/">Uplinq 2010</a> conference (see <a href="http://www.readwriteweb.com/archives/qualcomm_launching_mobile_sdk_for_vision-based_ar_on_android_this_fall.php" target="_blank">Chris Cameron’s ReadWriteWeb write-up</a> on <a href="http://uplinq.com/">Uplinq 2010</a>).</p>
<p>The Qualcomm AR stack, which reaches from the metal to developer APIs, will give Android developers an important edge in AR development.   And, when vision based AR starts getting integrated with visual search capabilities, and combined with cool tools like <a href="http://unity3d.com/" target="_blank">Unity</a>, we will start to see the augmented world get really interesting.</p>
<p>Visual search is already an area of AR getting a lot of attention, with <a href="http://www.google.com/mobile/goggles/#text" target="_blank">Google Goggles</a>, <a href="http://europe.nokia.com/services-and-apps/nokia-point-and-find" target="_blank">Point and Find</a>, <a href="http://www.cnet.com.au/augmented-reality-taking-off-on-japanese-smartphones-339304998.htm" target="_blank">Japan&#8217;s NTT DoCoMo set to launch &#8220;chokkan nabi,&#8221;</a> or &#8220;intuitive navigation,&#8221; in September, and the <a href="http://www.layarnews.com/2010/07/kooaba-meets-layar.html" target="_blank">recent partnership between Layar and Kooaba</a>.  <a href="http://www.metaio.com/" target="_blank">Metaio’s</a> mobile augmented reality platform <a href="http://www.metaio.com/products/junaio/" target="_blank">Junaio</a> is already integrated with <a href="http://www.kooaba.com/" target="_blank">Kooaba’s</a> computer vision capabilities.</p>
<p>And, of course, I am particularly excited about including open distributed real time communications for AR in this stack, which is why I asked a group of developers who have been inputting into the <a href="http://arwave.org/" target="_blank">ARWave</a> project if they had questions for Jay Wright, Qualcomm. Thank you <a href="http://www.linkedin.com/in/yohanbaillot" target="_blank">Yohan Baillot</a>, <a href="http://lightninglaboratories.com/" target="_blank">Gene Becker</a>, <a href="http://www.hook.org/" target="_blank">Anselm Hook</a>, <a href="http://patchedreality.com/about/" target="_blank">Patrick O&#8217;Shaughnessey</a>, <a href="http://www.lostagain.nl/" target="_blank">Thomas Wrobel</a>, <a href="http://twitter.com/kusako" target="_blank">Markus Strickler</a>, and <a href="http://twitter.com/need2revolt" target="_blank">Davide Carnovale</a> for your input. [Note: see my upcoming post about the future of <a href="http://arwave.org/">ARWave</a> and real time distributed communications for AR, following <a href="http://googleblog.blogspot.com/2010/08/update-on-google-wave.html" target="_blank">this Google announcement</a>.]</p>
<p><a href="http://www.linkedin.com/in/jaywright" target="_blank">Jay Wright</a> “is responsible for developing and driving Qualcomm’s augmented reality commercialization strategy.” He “handles partnerships with leading innovators in industry and academia and leads Qualcomm’s efforts in enabling augmented reality within the mobile ecosystem.”  In the interview below, Jay very generously answers our questions in detail.</p>
<p>A key contributor of questions for this interview is Yohan Baillot.  Yohan is working on a full vision of AR &#8211; integrating computer vision, visual search, open distributed real time communications and AR eyewear.  Yohan Baillot is founder of <a href="http://www.simulation3d.biz/" target="_blank">Simulation3D</a>, a consulting and system integration company specializing in interactive visualization systems and eyewear-based AR systems.  (I hope to bring you an interview with Yohan soon!).</p>
<p>Qualcomm was the title sponsor for <a href="http://augmentedrealityevent.com/" target="_blank">are2010, Augmented Reality Event</a>, and  played a vital role in making this event an historic gathering of the talent and creative minds at the heart of the emerging AR industry.  Watch out for the videos of the are2010 sessions to be posted at the end of August.  My are2010 co-chair, <a href="http://ogmento.com/team" target="_blank">Ori Inbar</a>, is preparing them to go online while kicking his newly funded start up, <a href="http://ogmento.com/" target="_blank">Ogmento</a>, into high gear! Ogmento is also one of the start ups pioneering vision based AR.</p>
<p><a href="http://www.metaio.com/" target="_blank">Metaio</a> (with <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a>, one of the first augmented reality companies) has played a key role in bringing a vision component to smart phone augmented reality apps with their <a href="http://www.metaio.com/products/" target="_blank">Unifeye mobile SDK</a>. Junaio, Metaio’s own mobile augmented reality platform, has gone beyond location based AR with “junaio glue” &#8211; “the camera&#8217;s eye is now able to identify objects and &#8220;glue&#8221; object specific real-time, dynamic, social and 3D information onto the object itself” (see my upcoming interview with Metaio founder, Thomas Alt).  Also, recently, Layar &#8211; who continue to innovate at a breathtaking pace &#8211; announced a partnership with the computer vision company Kooaba.</p>
<p>Both Maarten Lens-FitzGerald of Layar and Thomas Alt of Metaio, when I spoke to them recently, saw the Qualcomm SDK as a very positive development for AR, and they look forward to exploring its capabilities and integrating it where appropriate with their AR tools. See more about <a href="http://site.layar.com/company/blog/layar-will-visit-the-us/" target="_blank">Layar&#8217;s upcoming visit to the US here &#8211; </a><a href="http://site.layar.com/company/blog/layar-will-visit-the-us/" target="_blank">August 10th NYC, and August 12th SF</a>. Also save the date, Sept 27th, Munich, for <a href="http://www.metaio.com/index.php?id=1103" target="_blank">InsideAR</a>, Metaio&#8217;s upcoming conference.</p>
<p>It is clear that vision based AR will be driving the next wave of AR apps.  And, as Maarten and Thomas both pointed out, it will be interesting to see which use cases capture the imagination of users the most.  Having more tools freely available to AR developers will certainly be a boost to creativity.  And Qualcomm’s SDK is going to give Android developers, in particular, a big opportunity to take the lead.</p>
<h3>Interview with Jay Wright, Director, Business Development, Qualcomm</h3>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/JayWright.jpg"><img class="alignnone size-medium wp-image-5598" title="JayWright" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/JayWright-300x255.jpg" alt="JayWright" width="300" height="255" /></a><br />
</strong></p>
<p><strong>Tish Shute:</strong> Before I start with questions on the new Qualcomm vision based augmented reality SDK, I want to briefly look ahead to what many people feel is vital for the full realization of augmented reality &#8211; head mounted displays, or more specifically, comfortable, sexy AR eyewear.  Is Qualcomm going to be involved in the development of augmented eyewear and wearable displays?</p>
<p><strong>Jay Wright:   I think there’s some core technology that needs to come together so we can have what we think needs to be a see-through head mounted display with a decent field of view.  And that looks like something that is quite possibly further than a three to five year horizon.</strong></p>
<p><strong>Tish Shute:</strong> Gene Becker asked some interesting general questions about the Qualcomm AR initiatives.  He said, “I’m unclear exactly what Qualcomm’s goal is.”  It would be interesting to hear the Qualcomm view from you, from the top down.</p>
<p><strong>Jay Wright:</strong> <strong> Our largest revenue stream comes from sales of chipsets.    And we see augmented reality as a technology that drives demand for increasing amounts of processing power.  So we want to create demand for chips, higher-end chips, and augmented reality does that.  Specifically vision based augmented reality because it is so computationally intensive.</strong></p>
<p><strong>Tish Shute:</strong> Yes.  And I think that is why people are very excited by the Qualcomm SDK.  It is the first free toolkit for developers to build vision apps from, isn’t it?  There’s been nothing freely available before this, has there?  But Qualcomm is also paying attention to the complete AR stack to support vision based AR development, from the chips to game/app development tools like Unity.</p>
<p><strong>Jay Wright:  That’s really the goal.  We’re not here to be in the augmented reality applications business.  Qualcomm’s role in the ecosystem has been to serve as an enabler.  And that’s what we want to do with augmented reality: provide the enabling technology that allows the entire ecosystem to flourish.</strong></p>
<h3>&#8220;Augmented Reality has a number of attributes that make it a  great fit for Qualcomm&#8217;s core competencies&#8221;</h3>
<p><strong>Augmented Reality has a number of attributes that make it a great fit for Qualcomm&#8217;s core competencies. It’s very computationally intensive, algorithmically complex, requires tight integration of hardware and software, and benefits from tight integration of multiple hardware components.  And that’s the kind of problem we like here, where we can apply our core competence of really optimizing complex systems for performance, while at the same time minimizing power consumption.</strong></p>
<p><strong>And as you know, Tish, mobile AR is really extremely power sensitive.  We sometimes talk about it as a battery’s worst nightmare.  It’s roughly equivalent to playing a 3D game and recording a video all at the same time.</strong></p>
<p><strong>Whenever there is something that takes a lot of power, that’s a definite opportunity for us to optimize it.</strong></p>
<p><strong>Tish Shute:</strong> Right.  One of the core businesses is chips, right? But for Qualcomm there’s basically a lot of profit in licensing.  When I talked to the developer community about the Qualcomm SDK, developers’ first question was, “What’s the licensing?  What’s this going to cost us in the long run to develop on this SDK?”  And they all had different takes on this; everyone had different ideas about what your approach to licensing might or might not be.  Could you clarify the approach to licensing, as I think this is a core concern for developers.</p>
<p><strong>Jay Wright:   Anytime you see something for free, you kind of say, “Hey, what’s the hook?”  So yes, it’s definitely a logical question.  Our intent is not to generate licensing revenue from application developers using the SDK.  So the SDK will be made available free of charge for development, and it will also be free of charge for developers to deploy applications.</strong></p>
<p><strong>Tish Shute:</strong> Now, this is another question.  You include not just image recognition capabilities but also Unity in the package you are offering developers.  Unity products usually involve a license.  They do have some free products too, I think.  But how does this work?  And how do you separate your part from their part, or don’t you?</p>
<p><strong>Jay Wright:  That’s a good question.  What we’re trying to do with the platform is incorporate it into tools that people already know how to use.  So we’re actually going to have the SDK support two different tool chains.  One of them is the Android SDK and NDK.  And the other one is Unity.</strong></p>
<p><strong>We’re working with Unity to create an extension to the Unity environment that will be available as part of the Unity installer when you install Unity from the Unity website.  Developers will still be paying whatever license fees are associated with Unity’s products on their existing pricing schedule.</strong></p>
<p><strong>Tish Shute:</strong> One of Thomas Wrobel’s questions is whether developers can just use the image recognition without Unity.  Your answer is yes, you can work with the computer vision component of the SDK separate from Unity?</p>
<p><strong>Jay Wright:  Yes, you can.</strong></p>
<p><strong>Tish Shute:</strong> Good, because we would like to build a completely open Android client for ARWave, and not tie it to Unity unless people choose to.  He’s using the <a href="http://www.jpct.net/" target="_blank">open Android JPCT 3D engine</a>, which he’s adapting for AR.  So he could actually use the part of the SDK that does image recognition and association with that, right?</p>
<p><strong>Jay Wright:  That’s correct.  You are not required to use Unity.  Unity is just one option for building the application.</strong></p>
<p><strong>Tish Shute:</strong> Great! That’s very good.  I’m sure many developers are going to jump on the chance to use Unity.  But it’s nice to be flexible, because it’s so early for AR that people have different ideas and new use cases coming up all the time.  I think it’s excellent you’ve divided that.</p>
<p>Another of Thomas’s questions was, “Can developers use their own positioning data sharing solution?”  He’s really talking about AR blips.</p>
<p><strong>Jay Wright:  With data sharing solutions, I am assuming that by data he means augmentation data or graphics?</strong></p>
<p><strong>Tish Shute:</strong> Yes, and I’ll ask him to elaborate.  But, at the moment, everyone is using different ideas for POI, aren’t they?</p>
<h3>&#8220;The goal with our platform is to make it just as easy for a developer to create 3D content for the real world as it is for a game world or a virtual world.&#8221;</h3>
<p><strong>Jay Wright:  Yes.  So let me answer it this way, Tish.  The goal with our platform is to make it just as easy for a developer to create 3D content for the real world as it is for a game world or a virtual world.  So all we’re really trying to do is provide the computer vision piece that makes the real world look like a bunch of geometric surfaces, and potentially some metadata that is associated with this so you know what you are looking at.</strong></p>
<p><strong>So that means, from a developer’s perspective, you are still doing all of the 3D content, all of the animations, all of the game logic, all of the rendering.  You are still doing that all yourself.  So if you think about doing an AR game, you are doing everything you used to do, except you are not creating a virtual terrain.  You are just going to map it in the real world.</strong></p>
<p><strong>So if you want to do a browser that is doing POIs, your POI data, or augmentation, or metadata, or whatever it is, that can be in your application, it can be in the cloud, it can be wherever you want to put it.  We’re not putting any constraints on what that content is or where it’s stored.</strong></p>
<p><strong>Tish Shute:</strong> Right, and that’s what I hoped for.  And I think that does answer the question.  People are interested to know how far Qualcomm is going with this.  For instance, Gene Becker asked: “do they see a business at a certain level in the AR stack?”  As you said, AR development basically feeds into the core business of chip development, right?  But does Qualcomm also see some new business models developing?</p>
<p><strong>Jay Wright:   I think it’s foreseeable that Qualcomm could identify other business opportunities down the line.  But we’re certainly not there today.  Today, our motivation for the investment in AR is to create technology that is going to advance the chipset business.</strong></p>
<p><strong>Tish Shute:</strong> When the news came out about Qualcomm’s support of a game development studio at Georgia Tech at the same time as the SDK, I wondered what the scope of Qualcomm’s interest was [for more on using Unity for AR development see the <a href="http://www.qualcomm.com/partials/service/video/14230?primary=0x319cb5&amp;secondary=0xffffff&amp;simple_endScreen=true&amp;disable_embed=false&amp;disable_send=false&amp;send_mailto=http://www.uplinq.com/&amp;disable_embedViewMore=true&amp;simple_infoPanel=true" target="_blank">Vision-Based Augmented Reality Technical Super Session video</a> from <a href="http://uplinq.com/">Uplinq 2010</a>]. For example, I am interested to know how the Qualcomm initiative in developing an AR stack connects to the effort to introduce an AR browser based on web standards, i.e., the <a href="https://research.cc.gatech.edu/polaris/content/home" target="_blank">Kharma/Kamra KML/HTML Augmented Reality Mobile Architecture from Blair MacIntyre and the Georgia Tech team</a> (image below)?  Are you supporting the open standards based browser development too?</p>
<p><strong>Jay Wright:   Blair is going to continue to work on the browser effort.  And it&#8217;s our expectation that he will use our SDK and technologies for vision pieces of the browser effort where appropriate.  So they are certainly not mutually exclusive.  I would just think about our technology as one element of what may be used in that browser, as I expect it would be an element of what any other app developer would put in their application, whether it be browser, or game, or whatever.</strong></p>
<p><strong>Tish Shute:</strong> Yes.  Now, this is an interesting question, which is sort of connected&#8230;I&#8217;m trying to keep some form of narrative for this!  It follows from the question about Blair&#8217;s web standards based browser.  A few people have asked me why we haven&#8217;t heard more from Qualcomm in all these various standards discussions that are starting to come up.  I mean, is it just too early, or are you too busy, or what?</p>
<p><strong>Jay Wright:  No, let me explain.  The type of standards that have come up so far have been around how HTML should be extended for geo-browser type applications.  And while that&#8217;s interesting, I think the standards efforts that Qualcomm would be more likely to be associated with in the near term are those related to APIs that are hardware accelerated.</strong></p>
<p><strong>So one of the things that we are in the process of doing right now, Tish &#8211; because as you know, Qualcomm is a company that adheres to standards and strives to produce a leading implementation of those standards on our hardware and software &#8211; is we are in the process of determining what API set within the existing SDK should be standardized.</strong></p>
<p><strong>Tish Shute:</strong> Right.</p>
<p>Now, my next question is, &#8220;Who are the other players at this level of the AR stack in the standards conversation?  Who else is working at that level?&#8221;  Obviously, the AR Lab in Graz was, but now they are Qualcomm, right?</p>
<p><strong>Jay Wright:   They are still independent.  Qualcomm is the exclusive industrial partner of the Christian Doppler Handheld AR Lab in Graz.</strong></p>
<p><strong>Tish Shute:</strong> Does this compete with, say, the work that other AR startups are doing?</p>
<p><strong>Jay Wright:  Our intent is not to compete with companies that have done augmented reality technology.  Our intent is to enable the entire ecosystem.  So we would like to work with both Metaio and Total Immersion to find ways that they can benefit from our technology.  That would be the hope &#8211; that our technology can kind of lift and float all boats in the ecosystem.</strong></p>
<p><strong>Tish Shute: </strong>There are not many implementations of vision based AR right now?  I mean obviously Microsoft is doing stuff because they have <a href="http://www.robots.ox.ac.uk/~gk/" target="_blank">Georg Klein</a> now, right, and there is Google Goggles, Total Immersion, Metaio, and it will be interesting to see where Layar&#8217;s partnership with Kooaba will lead?</p>
<p><strong>Jay Wright:  Yes.  I think there are relatively few commercial implementations of vision based AR stacks.</strong></p>
<p><strong>Tish Shute:</strong> One of Patrick O&#8217;Shaughnessy&#8217;s questions: he wants to understand, very specifically, what features are going to be in the vision component.  Patrick O&#8217;Shaughnessy of <a href="http://patchedreality.com/" target="_blank">Patched Reality</a>, working with <a title="Circ.us" href="http://circ.us/" target="_blank">Circ.us</a>, <a title="Edelman" href="http://edelman.com/" target="_blank">Edelman</a>, and <a title="metaio" href="http://metaio.com/" target="_blank">Metaio</a>, used the Unifeye SDK to do <a href="http://mashable.com/2010/07/09/ben-and-jerrys-iphone-app/" target="_blank">a vision based AR app for Ben and Jerry&#8217;s</a> that&#8217;s been getting all the attention lately.  He was a speaker at ARE2010.</p>
<p>He very specifically wants to know what features will be included in the computer vision component.  He says, &#8220;I&#8217;m most interested in understanding what features are going to be in the vision component.  Is it marker based?&#8221;  Well, I know it&#8217;s more than marker based.  I saw some of it in <a href="http://www.readwriteweb.com/archives/qualcomm_launching_mobile_sdk_for_vision-based_ar_on_android_this_fall.php" target="_blank">Chris Cameron&#8217;s ReadWriteWeb write-up</a> on <a href="http://uplinq.com/">Uplinq 2010</a>.  &#8220;Is it NFT?  PTAM?  Other?  Also, are you integrating any backend services?&#8221;  That is an interesting question!</p>
<p><strong>Jay Wright:  So let&#8217;s get to the features on the client side, the vision based features.  There&#8217;s support for what AR aficionados would know as natural feature targets, or image based targets.  And we use those to represent, obviously, 2D planar surfaces.</strong></p>
<p><strong>The other thing that we are trying to do to set expectations, Tish, about where these can be used is to let people know that they work best in what we&#8217;re calling near-field environments.  So the idea isn&#8217;t that you use the system to create a large scale AR system that can recognize buildings indoors and outdoors.  It&#8217;s the idea where I can recreate 3D experiences that take place on surfaces that are in my immediate field of view, whether that be on the table in front of me, or on the floor, or on the wall, or on the shelf.</strong></p>
<p><strong>Also, when you talk about near field experiences, there are some other constraints that are implied.  Like, if it&#8217;s in front of me, my immediate field of view is probably going to be pretty well lit.  And lighting, of course, is an important requirement.</strong></p>
<p><strong>So we&#8217;ll support these natural feature targets, or image targets.  And we also have support for sort of a hybrid marker image type.  It&#8217;s something called a frame marker, which has kind of a black border with some dots on it.</strong></p>
<p><strong><a href="http://www.qualcomm.com/partials/service/video/14230?primary=0x319cb5&amp;secondary=0xffffff&amp;simple_endScreen=true&amp;disable_embed=false&amp;disable_send=false&amp;send_mailto=http://www.uplinq.com/&amp;disable_embedViewMore=true&amp;simple_infoPanel=true" target="_blank"><img class="alignnone size-medium wp-image-5610" title="Screen shot 2010-08-05 at 5.13.50 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/Screen-shot-2010-08-05-at-5.13.50-PM-300x166.png" alt="Screen shot 2010-08-05 at 5.13.50 PM" width="300" height="166" /></a><br />
</strong></p>
<p>Click on the image above or <a href="http://www.qualcomm.com/partials/service/video/14230?primary=0x319cb5&amp;secondary=0xffffff&amp;simple_endScreen=true&amp;disable_embed=false&amp;disable_send=false&amp;send_mailto=http://www.uplinq.com/&amp;disable_embedViewMore=true&amp;simple_infoPanel=true" target="_blank">here to view Vision-Based Augmented Reality Technical Super Session video</a> from <a href="http://uplinq.com/">Uplinq 2010</a></p>
<p><strong>Jay Wright:  So there&#8217;s this additional type.  And the reason for this additional hybrid marker type is it has a lower computational requirement than a natural feature target.  So the idea is these things can be used as game pieces or elements of play where I want to have a large number of them detected and tracked simultaneously.</strong></p>
<p><strong>So you can have, for example, one big natural feature target that serves as a game board or game surface, and you can use these other things as smaller game pieces.  And when you put them out, different types of content can appear on them and do different things.</strong></p>
<p><strong>Tish Shute:</strong> Yes, that&#8217;s nice!  And the other thing I noticed was the virtual buttons.  How well developed is that?</p>
<p><strong>Jay Wright:  The idea behind virtual buttons is, in addition to supporting augmentation, we want to support interaction.  And we think there are going to be different types of user interaction with augmented reality content.  It may be hand tracking and finger tracking, but another compelling form we&#8217;ve identified so far is the ability for me to touch particular surfaces and have an event fire within the application.</strong></p>
<p><strong>So virtual buttons are rectangular areas on image targets that a developer can define, and they serve as buttons.  So you can create a target that is a game board, for example, and define certain regions.  And when the user covers that region with his hand, like pushing a button, your application can detect that event and take some action.</strong></p>
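<p>In other words, a virtual button boils down to an occlusion test over a named rectangle on the tracked target, with an edge-triggered event when the region is covered.  The sketch below is a hypothetical Python illustration of that idea only &#8211; it is not the Qualcomm SDK API, and every class and parameter name here is invented:</p>

```python
# Hypothetical illustration of the virtual-button idea described above.
# This is NOT the Qualcomm AR SDK API; all names are invented.

class VirtualButton:
    def __init__(self, name, rect, on_press):
        self.name = name          # developer-chosen identifier
        self.rect = rect          # (x_min, y_min, x_max, y_max) in target coordinates
        self.on_press = on_press  # callback fired when the region is covered
        self.pressed = False

    def update(self, covered_fraction):
        """Fire the callback once when enough of the rect is occluded (e.g. by a hand)."""
        covered = covered_fraction > 0.5   # arbitrary occlusion threshold
        if covered and not self.pressed:
            self.on_press(self.name)       # edge-triggered: fires once per press
        self.pressed = covered


events = []
button = VirtualButton("start", (0, 0, 40, 40), on_press=events.append)

button.update(0.1)   # mostly visible: no event
button.update(0.9)   # hand covers the region: event fires
button.update(0.9)   # still covered: no duplicate event
button.update(0.0)   # hand removed: button released
```

The edge-triggering matters for the game-board use case Jay describes: covering a region should act like one button push, not a stream of events on every camera frame.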
<p><strong>Tish Shute:</strong> Nice!  And what documentation on these capabilities is Qualcomm offering?  For example, Yohan Baillot, who is interested in integrating eyewear-based AR systems with smartphones, asked: How deep does this go?  Will there be full documentation on <a href="http://www.qualcomm.com/products_services/chipsets/snapdragon.html" target="_blank">Snapdragon</a> for people who want to work at that level?  Is there a chip SDK?</p>
<p><strong>Jay Wright:   Qualcomm&#8217;s model is to work with providers of the operating systems and deliver the functionality of the chip through the operating system.  So many operating system APIs will take advantage of functionality that&#8217;s in the chip.  But there is no separate chip SDK per se.</strong></p>
<p><strong>Tish Shute:</strong> I suppose that does come up a little bit with one of Anselm Hook&#8217;s questions, because there is some overlap with Google Goggles here, isn&#8217;t there, in terms of what you&#8217;re doing, right?  Are you going to work closely with Google Goggles?</p>
<p><strong>Jay Wright: Google Goggles is performing what we&#8217;ve described as &#8216;visual search&#8217;.  So the idea is you take a picture, send it to the cloud, it gets identified, and the results come back.  I think if we see Google Goggles go in a direction where there&#8217;s an AR experience, that would be a good area for us to collaborate with Google.</strong></p>
<p><strong>Tish Shute:</strong> <a href="http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/" target="_blank">Anselm Hook</a> is very interested in having some kind of open standard around this physical tagging of the world &#8211; the physical world as a platform.  I suppose that&#8217;s down the road, but is there a plan to start talking about open standards here &#8211; visual search with image recognition?  That&#8217;s a very powerful combination.</p>
<p><strong>Jay Wright:    I think it is.  And we&#8217;re very interested to hear from developers and others that have ideas about how they would want to integrate with the functionality that we have to best enable those kinds of combined experiences.</strong></p>
<p><strong>Tish Shute:</strong> Well, I know Anselm has a lot of very important ideas on that.</p>
<p><strong>Jay Wright: I&#8217;d be very interested in hearing those because we want to do everything we can to enable the maximum number of applications and best user experience for anything that people want to do.</strong></p>
<p><strong>Tish Shute:</strong> Let&#8217;s go back to some specific questions about the platform, right?  For example, Yohan Baillot asked, &#8220;Is arbitrary image/tag recognition supported?  Is the tag/image specifiable by the user?  Is face recognition supported?&#8221;  Not yet, face recognition, right?</p>
<p><strong>Jay Wright:    Not yet.</strong></p>
<p><strong>Tish Shute:</strong> What are the plans with that?</p>
<p><strong>Jay Wright:    I think we&#8217;ve identified it as an interesting area and something that there&#8217;s some interest in, but have not made a decision on a particular technology direction.</strong></p>
<p><strong>Tish Shute:</strong> You&#8217;ve answered some of these, but 3D model based vision tracking?  Yohan&#8217;s question was, &#8220;Is 3D model based vision tracking supported (that is, recovering the pose of the camera using a known 3D model and a 2D camera view of this model)?&#8221;</p>
<p><strong>Jay Wright:    That&#8217;s something we&#8217;re looking at very closely, but again, don&#8217;t have a plan, or don&#8217;t have a future date for.</strong></p>
<p><strong>Tish Shute:</strong> And you said the natural landmark tracking is not supported, right?</p>
<p><strong>Jay Wright:    I don&#8217;t know if I know what that means, Tish.  But we don&#8217;t have any APIs that provide compass or GPS functionality other than what already exists in the operating system.  So if you want to take advantage of the compass or other sensors, you can absolutely do that, but the SDK does not currently provide anything different or anything more than what already exists in the OS.</strong></p>
<p><strong>Tish Shute:</strong> This is an interesting question, &#8220;Is Snapdragon offloading some processing to the GPU, if any?&#8221;</p>
<p><strong>Jay Wright:    Certainly rendering functionality that utilizes OpenGL is being offloaded to the GPU.  We&#8217;re currently in the process of determining multiple methods for offloading functionality between both symmetric and heterogeneous cores on Snapdragon, which would include the GPU, the apps processor, and DSPs.</strong></p>
<p><strong>Tish Shute: </strong> No one has truly solved optimizing the GPU/CPU for mobile AR yet have they?</p>
<p><strong>Jay Wright:    That really gets to the heart of the optimization here.  Which pieces ought to be operating on which cores and when, and why?  And that&#8217;s something that we&#8217;re looking at very closely.</strong></p>
<p><strong>Tish Shute: </strong> Right.  The only AR that is truly 3D media tightly registered to the physical world has been done for military and medical applications (and that has often been with a locked-off camera!).  But to take mobile AR to the next level, I think many developers would like access to the CPU/GPU &#8211; for example, a developer interested in the future of eyewear like Yohan?</p>
<p><strong>Jay Wright:     We&#8217;re very interested in hearing what kinds of tools developers would like to see.</strong></p>
<p><strong>Tish Shute:</strong> What is the best forum for discussing feature specifics?</p>
<p><strong>Jay Wright:    To provide feature requests to us?</strong></p>
<p><strong>Tish Shute:</strong> Yes. And discuss them.</p>
<p><strong>Jay Wright:    If people go to <a href="http://qdevnet.com/ar" target="_blank">qdevnet.com/ar</a>, there&#8217;s an application up there for the private beta program.  So if people do have ideas about features or other things they would like to see, they&#8217;re welcome to submit [their requests and ideas] there.</strong></p>
<p><strong>Tish Shute:</strong> I also have some questions about the specifics of the competition.  Some people are a little confused about some things.  Yohan asked, &#8220;What is the expected form of the project?  Lab demonstration?  Specific capability?  Complete end to end system?&#8221;</p>
<p><strong>Jay Wright:  The only requirement is that they submit an Android application that we can then get running on a device.  So if it has a backend component or backend server that it works against, great.  If it does, it does.  If it doesn&#8217;t, it doesn&#8217;t.  But that&#8217;s really it.  There&#8217;s no limit to the application category.  It can be a game, it can be a museum tour, it can be a children&#8217;s learning game or learning experience.  It can really be anything.  The idea is we want to find experiences for which AR delivers some unique value.  We&#8217;ll be announcing more specifics about the competition in the near future.</strong></p>
<p><strong>Tish Shute:</strong> Right, because some people weren&#8217;t sure, with the Unity track being separate, whether the competition was biased towards games.  And it&#8217;s not really, is it?</p>
<p><strong>Jay Wright:  Unity is biased toward just rapid development for 3D, I think.  It&#8217;s most commonly associated with games, but there are also a lot of Unity customers that use it for medical simulations and other types of applications that aren&#8217;t really games at all.</strong></p>
<p><strong>Tish Shute:</strong> Yes.  It&#8217;s very flexible, I know.  You did bring up the backend services again.  Are you thinking of offering any of that?</p>
<p><strong>Jay Wright:  There is a backend tool that we offer.  And the backend tool is what you use to generate your targets.  So if you want to create or use a particular image for a target in your application, you upload it to our target management application, and then it will evaluate that target and tell you how well it will work.  So as you know, certain images are more likely to be recognizable than others.  And so there are metrics in that application that will give you some feedback.</strong></p>
<p><strong>And then you can download your target resource from the website that you can then incorporate into your application project.</strong></p>
<p><strong>Tish Shute:</strong> So all of this information and documentation is available at the moment to people who are in the private beta, and not to everyone else, right?</p>
<p><strong>Jay Wright:  Thatâ€™s correct.</strong></p>
<p><strong>Tish Shute: </strong>So that&#8217;s an incentive to encourage people to apply to the private beta.  Now, the other thing that people seem confused about: in one part you say 25 developers.  And some people have thought that meant it was limited to 25 individuals.  And some people have maybe four people on their team, so they were going, &#8220;Well, are we going to be accepted because we have four developers, or do we count as one because we are all working on the same project?&#8221;</p>
<p><strong>Jay Wright:   It&#8217;s just 25 companies.</strong></p>
<p><strong>Tish Shute: </strong> OK.  I think we&#8217;ve gone through the questions.  Just to clarify and maybe give some incentive for people to apply to the private beta&#8230;the big advantage of getting in the private beta, aside from getting a month&#8217;s start on the competition, is that you get a chance to input, right?</p>
<p><strong>Jay Wright:  Yes.  A chance to provide feedback, get early access to the technology.  And then we are also providing a free HTC phone.</strong></p>
<p><strong>Tish Shute:</strong> Oh, yes.  I forgot the phone.  Yes, right.  In the requirements, though, you basically seem to be asking for sort of a full app&#8230;some people get reluctant about revealing their full application plan, right?</p>
<p><strong>Jay Wright:  Yes.  I understand that.  People should just reveal what they are comfortable talking about.  Just so you understand the constraint on this end, this is early technology and we&#8217;re trying to understand exactly what the support requirement is going to be.  And we have limited support resources at this time, so we want to make sure that we can focus the resources that we have on folks that are really going to use the technology and have a sound plan to actually build something.  So that&#8217;s really the motivation behind limiting the size of the private beta.</strong></p>
<p><strong>Tish Shute:</strong> OK.  Yes, it&#8217;s good to reiterate that.  We&#8217;re down to the last question that I have, and then I&#8217;ll ask you if there is anything that I missed.  You say you are partnering with Mattel.  Who are the developers?  Because I mean Mattel isn&#8217;t an augmented reality development team.</p>
<p><strong>Jay Wright:  Mattel used a subcontractor, <a href="http://www.aura.net.au/">Aura Interactive</a>.</strong></p>
<p><strong>Tish Shute: </strong> Nice.  But that&#8217;s your only partner that I saw, right?  Why Mattel?</p>
<p><strong>Jay Wright:  Well, to launch a new technology, companies will often find showcase partners to demonstrate compelling uses of it.  And we thought Mattel and the Rock&#8217;em Sock&#8217;em&#8482; toy was a great example of combining augmented reality with an existing toy.</strong></p>
<p><strong>Tish Shute:</strong> And I think people agree with you on Rock&#8217;em Sock&#8217;em (see <a href="http://www.readwriteweb.com/archives/qualcomm_launching_mobile_sdk_for_vision-based_ar_on_android_this_fall.php" target="_blank">Chris Cameron&#8217;s RWW post</a>).</p>
<p><strong>Jay Wright:  And there are other showcase partners and applications that we will continue to work on to kind of spur the ecosystem and show what is possible.</strong></p>
<p><strong>Tish Shute: </strong>OK.  Now, is there anything I&#8217;ve left out that you think is important?  What&#8217;s the core of this narrative that we need to get across?  Have I missed any key piece?</p>
<p><strong>Jay Wright:  I think you&#8217;ve done an excellent job of covering all the bases, Tish.</strong></p>
<p><strong>Tish Shute: </strong> [laughs]</p>
<p><strong>Jay Wright:  I think the important overriding message to get across is that we really see ourselves in an enablement role here, and that we are trying to provide&#8230;we&#8217;d like to provide fundamental technology that helps all developers build content for the real world.</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/08/05/vision-based-augmented-reality-ar-in-smart-phones-qualcomms-ar-sdk-interview-with-jay-wright/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>AR Wave: Layers and Channels of Social Augmented Experiences</title>
		<link>http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/</link>
		<comments>http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/#comments</comments>
		<pubDate>Tue, 13 Oct 2009 18:52:42 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Amphibious Architecture]]></category>
		<category><![CDATA[AR Blip]]></category>
		<category><![CDATA[AR Browser]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[augmentaion]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Channels and Social Augmented Realities]]></category>
		<category><![CDATA[citi sensing]]></category>
		<category><![CDATA[citizen sensing]]></category>
		<category><![CDATA[Clayton Lilly]]></category>
		<category><![CDATA[cybernetics vs ecology and human waste]]></category>
		<category><![CDATA[distributed]]></category>
		<category><![CDATA[eco mapping]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[geoAR]]></category>
		<category><![CDATA[geospatial web]]></category>
		<category><![CDATA[geospatial web and augmented reality]]></category>
		<category><![CDATA[Goggle Wave Federation Protocol]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Google Wave as an AR enabler]]></category>
		<category><![CDATA[Google Wave enable augmented reality]]></category>
		<category><![CDATA[Google Wave Protocols]]></category>
		<category><![CDATA[green tech augmented reality]]></category>
		<category><![CDATA[immersive sight]]></category>
		<category><![CDATA[Jeremy Hight]]></category>
		<category><![CDATA[Joe Lamantia]]></category>
		<category><![CDATA[Layers]]></category>
		<category><![CDATA[layers and channels of augmented reality]]></category>
		<category><![CDATA[Life Clipper]]></category>
		<category><![CDATA[life streaming]]></category>
		<category><![CDATA[location based media]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[locative media]]></category>
		<category><![CDATA[locative narratives]]></category>
		<category><![CDATA[Mannahatta]]></category>
		<category><![CDATA[map based augmentation]]></category>
		<category><![CDATA[mapping]]></category>
		<category><![CDATA[modulated mapping]]></category>
		<category><![CDATA[modulated napping]]></category>
		<category><![CDATA[multi-user]]></category>
		<category><![CDATA[narrative archaeology]]></category>
		<category><![CDATA[Natural Fuse]]></category>
		<category><![CDATA[neogeography]]></category>
		<category><![CDATA[networked urbanism]]></category>
		<category><![CDATA[non euclidian geometry]]></category>
		<category><![CDATA[open augmented reality framework]]></category>
		<category><![CDATA[Seanseable Labs]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[shared augmented realities]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[social augmented reality experiences]]></category>
		<category><![CDATA[sound augmentation]]></category>
		<category><![CDATA[Thomas K. Carpenter]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Trash Track]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[virtual reality]]></category>
		<category><![CDATA[Wave as a platform for augmented reality]]></category>
		<category><![CDATA[Wave Blip]]></category>
		<category><![CDATA[Wave Bots]]></category>
		<category><![CDATA[Wave playback]]></category>
		<category><![CDATA[Wave playback feature]]></category>
		<category><![CDATA[Wave Robots]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4585</guid>
		<description><![CDATA[It is now nearly two weeks since the Google Wave preview launch and I am happy to say we have some AR Wave news. The diagram above shows Thomas Wrobel&#8217;s basic concept for a distributed, multi-user, open augmented reality framework based on the Google Wave Federation Protocol and servers (click on the image to see [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://lostagain.nl/tempspace/PrototypeDiagram3_wave.html" target="_blank"><img class="alignnone size-medium wp-image-4586" title="Screen shot 2009-10-12 at 2.40.39 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-12-at-2.40.39-PM-300x154.png" alt="Screen shot 2009-10-12 at 2.40.39 PM" width="300" height="154" /></a></p>
<p>It is now nearly two weeks since the <a href="http://wave.google.com/" target="_blank">Google Wave </a>preview launch and I am happy to say we have some AR Wave news. The diagram above shows Thomas Wrobel&#8217;s basic concept for a distributed, multi-user, open augmented reality framework based on the <a href="http://www.waveprotocol.org/" target="_blank">Google Wave Federation Protocol</a> and servers (click on the image to see the dynamic annotated sketch <a href="http://lostagain.nl/tempspace/PrototypeDiagram3_wave.html" target="_blank">or here</a>).</p>
<p>Even in the short time we have had to explore Wave, some very exciting possibilities are becoming clear. Thomas puts some of the virtues of Wave as an AR enabler succinctly when he writes:</p>
<p><strong>&#8220;Wave allows the advantages of both real-time communication, as well as the advantages of persistent hosting of data. It is both like IRC, and like a Wiki. It allows anyone to create a Wave, and share it with anyone else. It allows Waves to be edited at the same time by many people, or used as a private reference for just one person.</strong></p>
<p><strong>These are all incredibly useful properties for any AR experience; more so, Wave is open. Anyone can make a server or client for Wave. Better yet, these servers will exchange data with each other, providing a seamless world for the user&#8230;a single login will let you browse the whole world of public waves, regardless of who&#8217;s providing or hosting the data. Wave is also quite scalable and secure&#8230;data is only exchanged when necessary, and will stay local if no one else needs to view it.</strong></p>
<p><strong>Wave allows bots to run on it&#8230;allowing blips in a wave to be automatically updated, created or destroyed based on any criteria the coders choose. Wave even allows the playback of all edits since the wave was created.</strong></p>
<p><strong>For all these reasons and more, Wave makes a great platform for AR.&#8221;</strong></p>
<p>There will be much more coming soon on Wave enabled AR because the Google Wave invites have begun to flow out to a wider community now. This week, many of our small ad-hoc group looking at the development challenges and implications of Google Wave for AR actually got into Wave for the first time.</p>
<p>Many thanks to all the people who have contributed to this discussion so far including: Thomas Wrobel, Thomas K. Carpenter, Jeremy Hight, Joe Lamantia, Clayton Lilly, Gene Becker and many others.</p>
<p>We will be setting up some public AR Framework Development Waves this week.  If you have any trouble finding them, or adding yourself to them, please add Thomas and me to your contact list.  I am tishshute@googlewave.com and Thomas is darkflame@googlewave.com.  The first two are currently called:</p>
<p><strong>AR Wave: Augmented Reality Wave Framework Development</strong> (developer forum)</p>
<p><strong>AR Wave: Augmented Reality Wave Development</strong> (for general discussion)</p>
<p>The discussion so far has been in two areas. On the one hand, it is gear-heady and focused on the <a href="http://www.waveprotocol.org/" target="_blank">Google Wave Federation Protocol</a>, code, development challenges, and interfacing to mobile, while on the other hand people have been looking at use cases and questions of user experience.</p>
<p>Distributed &#8220;shared augmented realities,&#8221; or &#8220;social augmented experiences&#8221; &#8211; which not only allow mashups &amp; multisource data flows, but dynamic overlays (not limited to 3D), created by users, linked to location/place/time, and distributed to other users who wish to engage with the experience by viewing and co-creating elements for their own goals and benefit &#8211; are something very new for us to think about.</p>
<p>As Joe Lamantia puts it:</p>
<p><strong>&#8220;there&#8217;s a feedback loop between which interactions are made easy by any given combo of device / hardware / software / connectivity, and the ways that people really work in real life (without any mediation / permeation by tech).&#8221;</strong></p>
<p>Joe Lamantia, whose term <strong>&#8220;social augmented experiences&#8221;</strong> I borrow for this post title, has done some thinking about <strong>&#8220;concepts and models for understanding and contributing to shared augmented experiences, such as the social scales for interaction, and the challenges attendant to designing such interactions.&#8221; </strong>Check out <a href="http://www.joelamantia.com/" target="_blank">Joe Lamantia&#8217;s blog</a> for more on this later this week.</p>
<p>It is very helpful, as Joe points out, to shift the focus back and forth between the experience and the medium.</p>
<p>It is super exciting to have clear evidence that shared augmented realities are no longer merely possible, but highly probable and actually do-able now.</p>
<p>I should be absolutely clear about what Google Wave does to enable AR, because obviously Wave plays no role in solving image recognition and tracking/registration issues. But, for example, Wave protocols and servers do provide a means to exchange, edit, and read data, and that enables distributed, social augmented realities.</p>
<p>Thomas explains how the newly named &#8220;AR Blip&#8221; works:</p>
<p><strong>&#8220;An AR Blip is simply a Blip in a wave containing AR data. Typically this would be the positional and URL data telling an AR browser to position a 3D object at a location in space.</strong></p>
<p><strong>In more generic terms, an AR Blip allows data of various forms (meshes, text, sound) to be given a real-world position.&#8221;</strong></p>
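<p>Thomas's description can be pictured as a small data record. Here is a minimal, hypothetical sketch in Python; the field names are illustrative assumptions on my part, not part of any published Wave or AR specification:</p>

```python
# Hypothetical sketch of the fields an "AR Blip" might carry.
# Field names are assumptions, not a published spec.
from dataclasses import dataclass

@dataclass
class ARBlip:
    lat: float                # WGS84 latitude of the anchor point
    lon: float                # WGS84 longitude
    alt: float = 0.0          # metres above ground level
    media_url: str = ""       # URL of the payload (3D mesh, text, sound, ...)
    media_type: str = "mesh"  # hint telling an AR browser how to render it

# A browser reading this blip would fetch media_url and place the object
# at (lat, lon, alt) in the user's view.
statue = ARBlip(lat=40.7128, lon=-74.0060,
                media_url="http://example.com/statue.obj")
```

<p>Because a blip is just structured data inside a wave, editing it (moving the object, swapping the mesh) would propagate to every participant the same way any other wave edit does.</p>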
<p>I have mentioned in other posts (<a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">here</a> and <a href="http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/" target="_blank">here</a>) that Wave can be used for AR as precise or as loose as the current generation of devices can handle. And as the hardware and software mature for the kind of AR that can put media out in the world and truly immerse you in a mixed space, the framework should be able to handle this too.</p>
<p>(A note on the Wave playback feature: this opens up a whole new world of possibilities. Check out <a href="http://snarkmarket.com/2009/3605" target="_blank">this post</a> on some of the implications of playback for writing!)</p>
<p>The use cases we have been coming up with are too numerous to go into in detail in this post. The open nature of an AR framework/Wave standard will lead to many new applications we have barely begun to imagine. As Thomas points out, different client software can be made for browsing, potentially allowing for various specialist browsers, as well as more generic ones for typical use. The multitudes of different kinds of data input/output that could be integrated into an open AR framework as it evolves are mind-boggling.</p>
<p>But, for now, some obvious use cases do come to mind, e.g.:</p>
<p>- Historical environmental overlays showing how a city used to be/and how this vision may be constructed differently by different communities</p>
<p>- Proposed building work showing future changes to a structure, and the negotiation of this future (both the public and professionals could submit their own comments on the plans, in context); seeing pipes, cables and other invisible elements could help builders and engineers collaborate and do their work.</p>
<p>- Skinning the world with interactive fantasies</p>
<p>I asked Thomas to help people understand how Wave enables new interactions with data by explaining how Wave could enable city sensing and citizen sensing projects (e.g. <a href="http://tinyurl.com/y97d5zr" target="_blank">this one being pioneered by Griswold</a>):</p>
<p><strong><strong>&#8220;Sensors, both mobile and static could contribute environmental data into city overlays;</strong></strong></p>
<div><strong><strong>&#8212;temperature, windspeed, air quality (amounts of certain particles), water quality, amount of sunlight, CO2 emissions could all be fed into different waves. The AR Wave Framework makes it easy to see any combination of these at the same time.&#8221;</strong></strong></div>
<div><strong><strong><br />
</strong></strong></div>
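<p>The &#8220;any combination of these at the same time&#8221; idea can be sketched as merging whichever feeds the viewer selects, keyed by location. Everything here (the feed names, the dict-of-dicts shape) is an illustrative assumption, not how an actual Wave client would store the data:</p>

```python
# Illustrative sketch: merging several sensor "waves" into one overlay
# view. Each feed maps a (lat, lon) point to its latest reading.
def combined_overlay(feeds, selected):
    """Return readings from the selected feeds only, grouped by location."""
    view = {}
    for name in selected:
        for loc, value in feeds.get(name, {}).items():
            view.setdefault(loc, {})[name] = value
    return view

# Hypothetical feeds for a single street corner.
feeds = {
    "temperature": {(52.37, 4.90): 18.5},
    "air_quality": {(52.37, 4.90): 41},
    "windspeed":   {(52.37, 4.90): 12.0},
}
overlay = combined_overlay(feeds, ["temperature", "air_quality"])
```

<p>A viewer toggling windspeed on would simply re-run the merge with a different <code>selected</code> list; no feed has to know about any other.</p>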
<p>Having these invisible aspects of the world made visible would create ways to improve sustainability, social equity, urban management, energy efficiency, and public health, and allow communities to understand and become active participants in the ecosystems and infrastructure of their neighborhoods.</p>
<p>The key is reflecting this kind of data back to people, &#8220;making it not back story but fore story,&#8221; right where we are, right where it happens, as well as having it available for analysis.</p>
<p>As well as creating new opportunities to interact with, respond to, and enhance data, making visible the invisible, as <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko&#8217;s</a> work on <a href="http://www.amphibiousarchitecture.net/" target="_blank">Amphibious Architecture</a> and <a href="http://www.haque.co.uk/" target="_blank">Usman Haque&#8217;s</a> project <a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a> show, can also create new connections and understandings between humans and the non-humans that share our world, e.g. fish, plants, waterways.</p>
<p>At a more prosaic level, potential buyers of property could see more clearly what they are buying, city planners could see better what needs to be worked on, and environmental researchers could see more clearly the impact people are having on an area.</p>
<p>Also, Wave can provide some of the framework necessary to begin to address tricky problems of privacy. Sensitive data can be stored on private waves, e.g. medical data for doctors and researchers, but the analysis of the data could still be of benefit to all, e.g. if it tied disease occurrences to locations, and the relationships between environmental data and health were&#8230;quite literally&#8230;made visible.</p>
<p><strong>&#8220;The publication of energy consumption, and making it visible as overlays, could help influence the public into supporting more energy efficiency companies and businesses. It could also help citizens to try to keep their own energy usage down, to try to keep their street in &#8220;the green.&#8221;</strong></p>
<p>Thomas notes:</p>
<p><strong>&#8220;With all of the above, it becomes fairly trivial to write persistent Wave-bots that automatically send notice when certain criteria are met (pollutants over a certain level, for example). On publicly readable waves, anyone can use the data on their local computers, process it, and contribute results back on a new wave. Alternatively, persistent remote servers could run cron jobs, or other automated processing, using services such as App Engine to run wave robots.</strong></p>
<p><strong>All these possibilities become &#8220;free&#8221; when using Wave as a platform for geographically tied data.&#8221;</strong></p>
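<p>The kind of threshold-watching bot Thomas describes could be as simple as the sketch below. The reading format and the alert limit are invented for illustration; the actual Wave robot API for appending alerts to a wave is not shown:</p>

```python
# Sketch of a persistent "wave-bot" check: flag every reading that
# crosses a threshold. POLLUTANT_LIMIT is an assumed value.
POLLUTANT_LIMIT = 50.0

def check_readings(readings, limit=POLLUTANT_LIMIT):
    """Return alert messages for every (location, value) over the limit."""
    alerts = []
    for location, value in readings:
        if value > limit:
            alerts.append(f"ALERT: {value} at {location} exceeds {limit}")
    return alerts

# In a real deployment this would run on a schedule (e.g. a cron job on
# App Engine) and post its alerts to a new wave rather than returning them.
alerts = check_readings([("5th Ave", 63.2), ("Main St", 12.0)])
```
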
<p>But of course this is just the beginning!</p>
<p><em>Recently, I talked at length with Jeremy Hight, who has been thinking about, designing, and creating shared augmented realities that anticipate the kind of dynamic, real-time, large-scale architecture we now have available through Wave, for quite some time now. This is exciting stuff.</em></p>
<p><em><br />
</em></p>
<h3><strong>Modulated Mapping:</strong> Talking with Jeremy Hight about Layers, Channels and Social Augmented Experiences</h3>
<p><strong><strong> </strong></strong></p>
<p><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping5.jpg"><img class="alignnone size-medium wp-image-4611" title="modulatedmapping5" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping5-230x300.jpg" alt="modulatedmapping5" width="230" height="300" /></a><br />
</strong></strong></p>
<p><strong><strong><em><span>image from Volume Magazine (Hight/Wehby)</span></em></strong></strong></p>
<p><strong><strong>Tish Shute:</strong></strong> I know you have been involved in locative media from its early days. Perhaps we can talk about how AR continues the locative media journey?</p>
<p><a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> gave me this distinction, recently:<em> &#8220;AR is about systems that put media out in the world, and immerse you in a mixed space. Â Even the current &#8220;not really registered&#8221; mobile phone AR systems are still &#8220;sort of&#8221; AR (e.g., Layar, etc).</em></p>
<p><em>Locative media/ubicomp/etc are very different, in that they tend to display media on a device (phone screen) that is relevant to your context, but does not attempt to merge it with the world.<br />
The difference is significant, and making it clear helps people think about what they do and what they want to do, with their work. The locative media space though points toward future AR systems (when the technology catches up!).&#8221;</em></p>
<p><strong><strong>Jeremy Hight: The need is to finish the arc that locative media and early AR have started and to now truly return to the map itself, but as an internet of data, interactivity, channels of data, end user options like analog machines once had but in high end tools, a smart AI-ish ability for it to cull data for the user, and to allow social networking to be in real world places on the map, both in building augmentation and in using and appreciating it..not hacks..which have their place&#8230;but a rhizome, a branched system with shared root, end user adjustable and variable..this is the key.</strong></strong></p>
<p><strong><strong>This takes AR and mapping and makes a possible world of channels in space and this eventually can be a kind of net we see in our field of vision with a selected percentage of visual field and placement so a geo-spatial net, a local to world wide fusion of lm into a tool and educational tool</strong></strong></p>
<p><strong><strong><span>VR [virtual reality] has greatly advanced, but in nodes, as it has limitations&#8230;LM [locative media] is the same&#8230;AR [augmented reality] is the way..</span></strong><strong> it now has locative elements and aspects of VR integrated into its functionality and nodes&#8230;it is the best option with all of these elements, greater hybridity and data-level potential as well as end user and community sourcing potential</strong></strong></p>
<p><strong><strong>I wrote an essay for Archis&#8217; Volume, the architecture magazine, on a near-future sense of some of this&#8230;a visual net on the lens like ar but with smart objects and social networking and dissent.</strong></strong></p>
<p><strong><strong>I also wrote of these things for immersive graphic design, spatially aware museum augmentation, education through ar and lm, and nod to the base interface of eye to cerebral cortex in layered and malleable augmentation in my essay <a href="http://www.neme.org/main/645/immersive-sight" target="_blank">&#8220;Immersive Sight&#8221;</a> a few years back</strong></strong></p>
<div id="gqg9" style="text-align: left;"><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_3dj7g8zf7_b.jpg"><img class="alignnone size-medium wp-image-4601" title="dgznj3hp_3dj7g8zf7_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_3dj7g8zf7_b-300x225.jpg" alt="dgznj3hp_3dj7g8zf7_b" width="300" height="225" /></a></strong></div>
<p><strong><strong>image [above] is a simple illustration of a possible example on a screen or in front of the eye where, in a Mondrian show, the graphic design of information actually builds as one moves</strong></strong></p>
<p><strong><strong>(key is calibrated spatial intervals and related layers of further augmentation which is logical due to location and proximity)</strong></strong></p>
<p><strong><strong>from immersive sight on immersive graphic design:</strong> <em>&#8220;The design can work with this in a way that creates an interactive supplemental set of information that is malleable, shifts based on location, builds and peels away as one moves closer to a work and plays with the forms of the works and the elements of the space itself. The sequence can contain many different elements and their interplay (both in the field of vision and in terms of context and layers of information). This is the model of sections of augmentation turning on and off at key points as individual spatial and concepts moments and nodes.</em></strong></p>
<p><strong><em>Another interesting possibility is that individual points of augmentation don&#8217;t turn off, but instead are designed to build as one moves in a direction toward a specific part of the exhibit. The design can work in a sequence both content-wise and visually in terms of a delay-powered compositional development and style in which each discrete layer of text and image does not fade out, but builds on the others into a final composition. This can form paintings similar to Mondrian, perhaps, if it is a show of similar works of that era, or it can form a much more metaphorical and open interpretation of the space and content, utilizing a sense of emergence spatially in terms of the composition (pieces laid bare until final approach for effect). </em></strong></p>
<p><strong><em>Each section will be well designed, but they build in layers as one moves until finally forming the final composition both visually and in terms of scope of information or building immediacy. The effect can be akin to taking a painting and slicing it into onion skin layers laid out in the air at intervals, each the same dimensions, but only one section compositionally of the greater whole. This has many semiotic applications beyond its potential aesthetically and as spatialized information possessing a sense of inter-relationship as one moves.</em>&#8221;</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>One of the things I found very inspiring when I read your papers was that your ideas are not all dependent on a model of AR that would necessarily require goggles, backpacks and lots of CPU/GPU &#8211; not that that wouldn&#8217;t be nice &#8211; but that even using the &#8220;magic lens&#8221; AR of the kind smartphones have enabled, in an open distributed framework, would open up a lot of new possibilities for what you call modulated mapping, wouldn&#8217;t it? What kind of social augmented realities might be enabled by a distributed infrastructure like this [AR Wave]?</p>
<p><strong><strong>Jeremy Hight: right&#8230;I see that as wayyy down the road&#8230;most important is the one you talk about, as it is more immediate and thus more essential and needed. Eventually the goggles will be like a contact lens and a deep immersive ar version of this will come; that to me is certain, but a ways down the road. An incredible amount is possible now, and this is a more pragmatic move as opposed to the more theoretical of what is a few steps from here. Thus it is more important and essential now. Tools like Google Wave are taking what even 2 years ago were more theoretical discussions of what may be, and instead introducing key elements to a more immediate, powerful, flexible level of augmentation. What have been hacks and isolated elements are to be integrated: social networking, task completion, shared tools and graphics building, and geo-location.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>I think some people question what augmented reality has to bring to the continuum of location based experiences that other forms of interface/mapping do not?</p>
<p><strong><strong><span>Jeremy Hight: right&#8230;and the schism between its commercial </span></strong><strong>flat self and tests with physics etc and in between&#8230;there are a lot of unfortunate assumptions, it seems, as to where ar and lm cross, and how ar can be many things beyond deep immersion or the opposite pole of a hockey puck having a magic purple line etc&#8230;like lm is seen as either car directions or situationist experiments with deep data&#8230;the progression to me is deeply organic&#8230;and now augmentation can be more malleable, variable and end user controlled.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yes, it is a really exciting time for AR. Historically, AR research has gone after the hard problems of image recognition, tracking and registration because we have not had dynamic, real-time, large-scale architectures like Wave available (until now!), so less work has been done on fully exploring the possibilities for distributed AR integrated with the internet and WWW, hasn&#8217;t it?</p>
<p>A distributed augmented reality framework such as we have envisaged on Wave would allow people to see many layers from many different people at the same time. And this kind of model has been part of your thinking and fundamental to your work for a while, hasn&#8217;t it? But it is a very new idea to most people to think about collaboratively editing layers on the world, and to be able to view augmented space through channels and networked communities. Could you explain some of the ways you have explored these ideas, and how they could be explored further now to create meaningful experiences for people?</p>
<p><strong><strong><span>Jeremy Hight: right..exactly&#8230;modulated mapping to me can be an amazing tool for students&#8230;back end searching data visualizations and augmentations based on their needs&#8230;while they do something else on their computer or iphone&#8230;that can be amazing..and not deep </span></strong><strong>immersive..The map can be active, malleable, open source fed, and even, in a sense, intelligent and able to adapt. The possibility also exists for this map to have a function that, based on key words, will search databases on-line to find maps, animations, histories and stories etc to place within it for your study and engagement. The map is thus a platform and yet is active. Community is possible as people can communicate graphically in works placed on the map and in building mode in the tool. All the tropes of locative media are to be in a mapping system of channels of augmentation and a spatial net. The software by design will allow development on the map and communication like programs such as Second Life, but in mapping itself.</strong></strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modultedmapping1.jpg"><img class="alignnone size-medium wp-image-4607" title="interactive 3d map copy" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modultedmapping1-246x300.jpg" alt="interactive 3d map copy" width="246" height="300" /></a></strong></p>
<p><strong><strong><em><strong><span>image from Parsons Journal of Information Mapping Volume 2 (Hight/Wehby)</span></strong></em></strong></strong></p>
<p><strong><strong><span>I wrote an essay a few years ago for the Sarai reader questioning the traditional map and its semiotics and the need to reconsider &#8211; then did work looking into it and what those dynamics were, and they got into 2 group shows in museums in Russia&#8230;so it actually was my arc toward modulated mapping&#8230;an interesting way to it! But yes, the map itself..this is a huge area of potential, and non-screen-based navigation etc. I see now that my 2 dozen or so essays in lm, ar, interface design and augmentation have all also been leading in this direction for about 10 years now</span></strong></strong></p>
<p><strong><strong>Tish Shute: </strong>I love immersive visualization, but can we &#8220;return to the map &#8211; the internet of data&#8221; as you mentioned earlier, and produce interesting augmentation experiences that go beyond locative media&#8217;s device display mode without the goggles, for example through the magic lens of our smartphones?</strong></p>
<p><strong><strong>Jeremy Hight: yes, absolutely. the map in the older paradigm is an artifice born often of war and border dispute, and not of the earth itself and its processes&#8230;the new mapping like google maps is malleable, can be open source, can read spaces and can be layers of info in the related space, not plucked from it as in the past..this is amazing. the old map also was born of false semiotics/semantics like &#8220;discovery of new lands&#8221; or &#8220;pioneer&#8221; while the places were there already and names often were of empire&#8230;now this is no longer the case</strong></strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping2.jpg"><img class="alignnone size-medium wp-image-4608" title="jeremy map small2 copy" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping2-300x233.jpg" alt="jeremy map small2 copy" width="300" height="233" /></a></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>So geoAR is a better way to express a new social relationship to mapping? And how does this fit into the evolving arc of locative media into augmented reality?</p>
<p><strong><strong>Jeremy Hight:&#8230;early lm was mostly geocaching and drawing with gps..it took new paradigms to invigorate the field. a lot of folks focus on tools and what already is; cross pollination can ground ideas that are more radical&#8230;a metaphor in a sense to place what can be in a familiar context.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>One of the great disappointments in VR has been its isolation from networked computing, and also, up to now, augmented reality &#8211; to achieve an immersive experience with tight registration of media/graphics, you have had to create a separate system isolated from the internet and the power of the web.</p>
<p><strong><strong>Jeremy Hight: yes&#8230;this will change. vr is to me an island, but ar takes a part of it and shifts the paradigm, and new things open this way. Do you know the project <a href="http://www.lifeclipper.net/EN/process.html" target="_blank">&#8220;life clipper&#8221;</a>? friends of mine..doing interesting things..they are a clear bridge between lm and ar&#8230;and from vr</strong></strong></p>
<p><strong><strong>in ar, augmentation and what is being augmented become fused, or in collision, or in complex interactions, as a means to a larger contextualization and exploration of what is being augmented..this is true in immersive or non-immersive ar&#8230;huge potential</strong></strong></p>
<p><strong><strong>vr is a space, now it can be surgery, which is amazing. but not layered interaction, thus an island. and graphic iconography on a location can use symbolic icons, which opens up even more layers (graphic designer/information designer in me talking there I suppose..)</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yes! Talk to me more about layers and channels. I think this is one of the most interesting questions for me in augmented reality at the moment &#8211; what can we do with layers and channels, and what new possibilities for connections between people and environments can these create?</p>
<p>The ability for anyone to post something is critical to the distributed idea, but one of the reasons I am so excited by Google Wave is that I am fascinated by the playback function. How do you think this will enable new forms of collaborative locative narratives? (<a href="http://snarkmarket.com/2009/3605" target="_blank">nice post on Wave playback here</a>)</p>
<p><strong><strong>Jeremy Hight: We are in an age of cartographic awareness unseen in hundreds of years. When was the last time that new mapping tools were sold in chain stores and installed in most vehicles? When was the last time that the augmentation of maps was done by millions (Google map hacks, etc)? Ubiquitous gps maps run in automobiles while people post pictures and graphic pins to denote specific places on on-line maps.</strong></strong></p>
<p><strong><strong>The need is for a tool that combines all of these new elements into an open source, intuitive, layered and rhizomatic map that is porous (like pumice, organic in form yet with &#8220;breathing room&#8221;), ventilated (i.e. adjustable, a flow in and out), and open (open source, open access, open spatialized dialog).</strong></strong></p>
<p><strong><strong><span>I wrote of this in my essay &#8220;Revising the Map: Modulated Mapping and the Spatial Interface&#8221; (</span></strong><a id="h0qr" title="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" href="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf"><span>http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf</span></a><span>)</span></strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping3.jpg"><img class="alignnone size-medium wp-image-4609" title="jeremy map small2 copy" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping3-300x206.jpg" alt="jeremy map small2 copy" width="300" height="206" /></a></strong></p>
<p><strong><em><strong><span>image from Parsons Journal of Information Mapping (Hight/Wehby)</span></strong></em></strong></p>
<p><strong><strong>Tish Shute:</strong></strong> One mapping project I really like is <a href="http://themannahattaproject.org/" target="_blank">Mannahatta</a>. How could distributed AR contribute to a project like <a href="http://themannahattaproject.org/" target="_blank">Mannahatta</a>?</p>
<p><strong><strong>Jeremy Hight: that is a good example..imagine taking Manhattan and having channels of options to overlay, that being an excellent option, and imagine being able to even run a few at once with delineating icons..you can augment a space with history, data, erasure, narrative, scientific analysis, timeline of architecture, infrastructure, archaeological record etc&#8230;endless possibilities, and this agitates place, and place on a map, into an active field of information with end user control&#8230;and open options for new layers</strong></strong></p>
<p><strong><strong>Tish Shute: </strong></strong>And do you think we could do interesting things with AR on a project like Mannahatta even with the current mediating devices we have available &#8211; i.e. our smartphones &#8211; as obviously the rich pc experience Mannahatta has built for its web interface would not be available as AR at this point?</p>
<p><strong><strong>Jeremy Hight: yes&#8230;k.i.s.s right? these projects do not have to only be immersive and graphics-intensive&#8230;take how people upload photos onto google maps&#8230;just put that on a menu of options; there are some pretty cool hacks already..<br />
&#8230;options is key, a space can have a community as well, building on it in software, and others navigating it, i see it near future and down the road..always have with ar really</strong></strong></p>
<p><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/locativenarratives1.jpg"><img class="alignnone size-medium wp-image-4596" title="locativenarratives1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/locativenarratives1-230x300.jpg" alt="locativenarratives1" width="230" height="300" /></a><br />
</strong></strong></p>
<p><strong><em><strong><span>image from Volume Magazine (Hight/Wehby)</span></strong></em></strong></p>
<p><strong><strong>Jeremy Hight: and yes, a lot of people focus on ar&#8217;s limitations and processing power needs as a major roadblock</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>so do you see AR on smart phones adding any value to a project like Mannahatta?</p>
<p><strong><strong>Jeremy Hight: yes&#8230;that it can be integrated into other similar works and even disparate but cloud-linked ones&#8230;so a place can be &#8220;read&#8221; in diff ways on the iphone&#8230;beyond its map location, and more can be possible if you are there&#8230;others away, so it becomes channels of augmentation</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>AR, like locative media, puts who you are, where you are, what you are doing, and what is around you center stage in online experience, but it also &#8220;puts media out in the world&#8221; &#8211; people, I think, understand this well as a single-user experience, but we are only just beginning to think about how this will manifest as a social experience &#8211; could you explain more about modulated mapping as an experience of social augmentation?</p>
<p><strong><strong>Jeremy Hight: Modulated Mapping is a tool that will allow channels to be run along the map itself. This will allow one to view different icons and augmentations both as systems on the map and in deeper layers of information (photos, videos, animations, visualizations, etc) that can be turned on and off as desired. The different layers of icons and data may be history, dissent, artworks, spatialized narratives, and annotations developed communally based on shared interests, placed spatially, and far beyond. The use of chat functionality, in text or audio, will be open in building mode and in mapping navigation/usage as desired. This also allows a community to develop or augment in the spaces on the earth. These nodes can be larger and open, or small and set by groups in their channel. The end result is an open source sense of mapping that will also have a needed sense of user control, as one can select which layers of augmentation they wish to see and interact with at any time. It also will incorporate all the functionality of locative media in mapping software and mapping. In building mode and in map mode, icons will be coded to represent within channels (remember that the person using it has selected channels of augmentation from many, based on their current interests and needs). Icons will be coded as active to show work in progress in cities and the globe, both to invite participation and to further agitate the map from the sense of the static, as action is visible even in its icons as people are working and community is formed in common interest/need.</strong></strong></p>
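<p>The channel model described here (publish annotations to named channels, toggle channels on and off, display only the active layers) can be sketched roughly as follows. This is a minimal illustration of the idea, with invented names throughout, not a design for the actual tool:</p>

```python
# Minimal sketch of modulated mapping's channel model: a map carries
# many augmentation layers, and the viewer selects which are visible.
class ModulatedMap:
    def __init__(self):
        self.layers = {}      # channel name -> list of annotations
        self.active = set()   # channels the viewer has switched on

    def publish(self, channel, annotation):
        """Anyone can add an annotation to a named channel."""
        self.layers.setdefault(channel, []).append(annotation)

    def toggle(self, channel):
        self.active ^= {channel}  # flip the channel on or off

    def view(self):
        """Only annotations on active channels reach the display."""
        return [a for ch in sorted(self.active)
                for a in self.layers.get(ch, [])]

m = ModulatedMap()
m.publish("history", "1850s shoreline")
m.publish("dissent", "site of 1968 protest")
m.toggle("history")  # only the history channel is now visible
```

<p>The point of the sketch is that publishing and viewing are decoupled: layers accumulate from many contributors, while each viewer composes their own map from the channels they care about.</p>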
<p><strong><strong>locative media got a buzz for &#8220;reading&#8221; places&#8230;when I helped create locative narrative, that was what blew me away back in 2001&#8230;that we could give places a voice by placing data from research and icons on a map&#8230;this meant lost history or augmentation was possible as kind of voices of a place and its layers&#8230;I called it &#8220;narrative archaeology.&#8221; We now have tools that can push these ideas and concepts farther..much farther&#8230;and with a range beyond what was before, and then the map was just a tool&#8230;but now we are returning to the map itself&#8230;and this as place as much as marker..this is where ar takes the ball, to use a bad metaphor</strong></strong></p>
<p><strong><strong>also that project could only work if you came to our spot of a 4 block augmentation and with us there to lend you our gear&#8230;we are far beyond that now but it had its place</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>How do you see &#8220;in context&#8221; AR and something we might call &#8220;context aware&#8221; cloud computing models interacting?</p>
<p><strong>Jeremy Hight: sure&#8230;and I must add that I have issues with cloud computing as much as it is a good idea&#8230;</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>because of loss of autonomy?</p>
<p><strong>Jeremy Hight: tivo is simply a hard drive&#8230;but it keyword reads and gives suggestions..that is the cro magnon link to what can be</strong></p>
<p><strong>Tish Shute:</strong> The nice thing about Wave is because of the Federation model, the cloud model and local store-your-own-data models should work together.</p>
<p><strong>Jeremy Hight: yes..that is better&#8230;..loss of autonomy also opens up the arbitrary which is the flaw of search engines as we know it&#8230;even Bing fails to me in that sense</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>how do you mean, could you explain?</p>
<p><strong>Jeremy Hight: spiders cull from words but cull like trawlers at sea&#8230;. tested Bing with very specific requests.. it spat out the same mass of mostly off topic results&#8230;.<br />
I wonder if there is a way to cull from key words and topics from a user&#8230;not Orwellian back end of course&#8230;but from their preferences, their searches etc..</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>did you see the discussion on search in the AR Framework doc? AR search will be a massively important thing that will take a lot of intelligence and all sorts of algorithm development won&#8217;t it?</p>
<p><strong>Jeremy Hight: It also has one area of key functionality that moves into more intuitive software. Upon continued usage, the mapping software will &#8220;learn&#8221; and search based on key words used and spheres of interest the user is mapping or observing as mapped and will integrate deeper data and types of animations, etc. into the map or will have them waiting to be integrated upon user approval as desired. Over time the level of sophistication of additions and of search intuition will increase dramatically. The search can also, if the user wishes, run in the back end while working in the mapping program, or in off time as selected while doing other tasks. It also can never be used if one is not interested. One of the key elements of this mapping is that it is not composed of a closed set or needs user hacks to augment, but instead is to evolve and deepen by user controls and desires as designed. Pre-existing data, visualizations and augmentations can be integrated with relative ease.</strong></p>
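The &#8220;learning&#8221; search described here — ranking results by key words accumulated from the user&#8217;s own mapping activity — could look something like the following sketch. The class name, scoring scheme, and sample data are all illustrative assumptions, not any real system&#8217;s API:

```python
from collections import Counter

# Sketch of a search that "learns" from usage: every word the user maps or
# searches nudges that word's weight upward, and new results are ranked by
# overlap with the accumulated interest profile. Names are illustrative.
class InterestProfile:
    def __init__(self):
        self.weights = Counter()

    def observe(self, text):
        # record key words from the user's mapping/search activity
        self.weights.update(text.lower().split())

    def rank(self, results):
        # score each candidate result by how many weighted words it shares
        def score(result):
            return sum(self.weights[w] for w in result.lower().split())
        return sorted(results, key=score, reverse=True)

profile = InterestProfile()
profile.observe("urban history dissent mapping")
profile.observe("history walking tour")
ranked = profile.rank(["celebrity gossip feed", "history of the rail yard"])
print(ranked[0])  # history of the rail yard
```

Because the profile is built only from what the user chooses to do in the software, it stays on the user&#8217;s side of the line — consistent with the point above that it &#8220;can never be used if one is not interested.&#8221;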
<p><strong><strong>Tish Shute: </strong></strong>One of the things that Joe Lamantia points out about social augmented experiences is that they will operate across a number of different scales &#8211; conversation &gt; product design &amp; build team &gt; neighborhood / town fixing potholes &gt; global community for causes. How do designs for channels and layers change across these different social scales?</p>
<p><strong><strong>Jeremy Hight:</strong> to quote myself&#8230;&#8220;The &#8220;frontier&#8221; is often defined as the space just ahead of the known edge and limit, and where it may be pushed out deeper into the previously unknown. The frontier in the world of ideas is not the warm comfort of what has been long assimilated; and the frontier in the landscape is not of maps, but of places beyond and before them.</strong></p>
<p><strong>The border along what has been claimed is not only that of maps &#8211; it is of concepts, functions, inventions and related emergent industries. Ideas and innovations are like the cloud shape that briefly forms around a jet breaking the sound barrier, tangible yet not fully mapped into measure. It is when things are nailed down into specific entities, calibrated and assessed, that the dangers may inflict themselves &#8211; greed, competition, imitation, anger, jealousy, a provincial sense of ownership either possessed or demanded&#8221;. (from essay in Sarai reader). Otherwise channels and augmentation do not have to be socio-economically stratifying or defined by them. We built 34n for almost nothing on older tools.</strong></p>
<div id="yqjj" style="text-align: left;"><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_1g3svj8fq_b.jpg"><img class="alignnone size-medium wp-image-4599" title="dgznj3hp_1g3svj8fq_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_1g3svj8fq_b-300x225.jpg" alt="dgznj3hp_1g3svj8fq_b" width="300" height="225" /></a></strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_1g3svj8fq_b.jpg"><span> </span></a></strong></div>
<p><strong><em>image from 34north 118west (Spellman/Hight/Knowlton)</em></strong></p>
<p><strong><strong>The ar that is not deep immersion can be more readily available and channels can be what end users need like the diversity of chat rooms or range of Facebook users among us.</strong></strong></p>
<p><strong>I had two moments yesterday that totally fit what we talked about. I went to west hollywood book fair and traditional directions off of mapping for driving directions were wrong and we got lost&#8230;our friend could only get a wireless signal to map on itouch and we had to roam neighborhoods then we called a friend who google mapped it and we found we were a block away&#8230;.so a fast geomapping overlay with an icon for the book fair on some optional grid service or community would have made it immediate. Then at the book fair talked to a small press publisher who is trying to map works about los angeles by los angeles authors on a map..she was stunned when I told her it could be a kind of google map feature option</strong></p>
<p><strong><strong>it also has great potential to publish and place writing and art in places..both for commentary and access. imagine reading joyce in chapters where it was written about and then another similar experience but with writers who published on a service into their city.</strong></strong></p>
<p><strong>Tish Shute:</strong> The challenge of shared augmented realities is not just a matter of shipping bits around, but also of how we will use channels and layers &#8211; to create and negotiate different, distributed perspectives, understand a shared common core and/or expressions of dissent (this came up in an email conversation with <a href="http://www.oreillynet.com/pub/au/166" target="_blank">Simon St Laurent</a>).</p>
<p><strong><strong><strong>Jeremy Hight:</strong> well my example earlier could have been communal in a way too..a tribe sort of augmentation channeling &#8230;.like subscribing to list servs back in the day but of augmentation communities/channels, and for folks to build and use in shared live form, coordinating too</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong> </strong>one good thing though about building an open AR Framework is that as bandwidth/CPU/hardware gets better shared high def immersive experiences could be supported by the same framework..</p>
<p><strong><strong>Jeremy Hight: excellent</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>were you thinking of the image recognition and tracking with this example?</p>
<p><strong><strong>Jeremy Hight:</strong> yeah&#8230;.like scanning across a multi channeled google map augmentation with diff icons and their connected data&#8230;and poss social networking and file sharing even in that mode&#8230;and rastering etc&#8230;.could be cool with google wave - on the map..then zooming in a la powers of ten..(eames film).</strong></p>
<p><strong>I have pictured variations of this for a few years now in my head like the example of my friends and I yesterday&#8230;we could have correlated a destination by icons in diff channels..one being lit events within lit channel in l.a map&#8230;maybe things streaming on it too&#8230;remote info and video etc&#8230; that would be awesome</strong></p>
<p><strong>Tish Shute:</strong> So many of the ideas in your paper on modulated mapping (see <a href="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" target="_blank">here</a>) are brilliant use cases for shared augmented realities. Perhaps you could talk more about your ideas on locative narrative because this is something I think is at the core of the kinds of experiences that a distributed AR Framework would make possible?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> on the project &#8220;34 north 118 west&#8221; we mapped out a 4 block area for augmentation of sound files triggered by latitude and longitude on the gps grid and map and the map on the screen had pink rectangles that were the &#8220;hot spots&#8221; where the augmentation had been placed.</strong></strong></p>
<div id="nwc6" style="text-align: left;"><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_0gg994bf9_b.jpg"><img class="alignnone size-medium wp-image-4600" title="dgznj3hp_0gg994bf9_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_0gg994bf9_b-300x225.jpg" alt="dgznj3hp_0gg994bf9_b" width="300" height="225" /></a></strong></strong></div>
<p><strong><em><strong><span>image of interactive map with map based augmentation connected to audio augmentation on site for 34north 118west (Spellman/Hight/Knowlton)</span></strong></em></strong></p>
<p><strong>We researched the history of the area and placed moments in time of what had been there at specific locations&#8230;.I called this <a href="http://www.xcp.bfn.org/hight.html" target="_blank">&#8220;narrative archaeology&#8221;</a> as it allowed places to be &#8220;read&#8221; by their augmentations&#8230;info that was of the place beyond the immediate experience (diff types of info) that otherwise would be lost or only found in books or web sites elsewhere. there now are locative narratives around the world but they need to be linked. from humble origins &#8220;narrative archaeology&#8221; went on to recently be named one of the 4 primary texts in locative media which is pretty amazing to me&#8230;but it is growing</strong></p>
<p><strong>- the limitations then were what I called the &#8220;bowling alley conundrum&#8221; &#8211; the specific data had to reset like pins&#8230;..and was isolated&#8230;.this led me to think about ar back then and up to now. How these could lead to much more from that point, data that would be more layered, variable, fluid..yet still augmented place and sense of place and social networking within data and software</strong></p>
<p><strong><strong><a href="http://34n118w.net/34N/" target="_blank">lifeclipper</a> to me is a bridge</strong></strong></p>
<p><strong>Tish Shute:</strong> But Life Clipper is isolated from the internet currently, isn&#8217;t it?</p>
<p><strong>Jeremy Hight: yes&#8230;ours was too.. that is what google wave makes possible.. our project only ran on our gear..in 4 blocks&#8230;with additional auxiliary info online, and not malleable..but hey 2001 and all..</strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>so the sites for 34 north 118 west are still active though?</p>
<p><strong>Jeremy Hight: oh yeah!</strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>nice I really like sound augmentation &#8211; have you seen <a href="http://www.soundwalk.com/blog/tag/augmented-reality/" target="_blank">Soundwalk</a>?</p>
<p><strong><strong><span>Jeremy Hight: yes, very cool..</span> </strong><strong>we chose sound only as it fought the power of image..instead caused a person to be in a sense of two places and times at once</strong></strong></p>
<p><strong><strong>Tish Shute:</strong></strong> and in 2001 that was definitely a visionary project!</p>
<p>You must be very excited that finally the pieces are coming together to make this stuff scale!</p>
<p><strong><strong><strong>Jeremy Hight:</strong> I can&#8217;t even tell you!! it is funny..i have known that this would come..just waited and waited&#8230;</strong></strong></p>
<p><strong><strong>..knew it needed the right people and tools..</strong></strong></p>
<p><strong>..so the bowling alley conundrum led me to develop my project shortlisted for the iss (international space station) as I thought a lot about how points and works are not to be isolated&#8230;but connected and should be flowing in diff parts of a map&#8230;.to open up perspective and connected augmentations, but also to think about the map again&#8230;not as a base only. then moved into my work with new ways to visualize time and it all really began to gel. The ideas first were published as an essay <a id="qw.2" title="(http://www.fylkingen.se/hz/n8/hight.html)" href="http://www.fylkingen.se/hz/n8/hight.html">(http://www.fylkingen.se/hz/n8/hight.html)</a> and later my project blog (<a id="bp.b" title="http://floatingpointsspace.blogspot.com/)" href="http://floatingpointsspace.blogspot.com/%29">http://floatingpointsspace.blogspot.com/)</a></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>One thing I noticed when I was reading your paper is how you have been exploring non-euclidian geometries.Â  Could you explain how this is part of your idea of modulated mapping?</p>
<p><strong>Jeremy Hight: Yes, this first came to me when my wife was reading to me from a book on the Poincare Conjecture and I was hit with a new way to measure events in time and after months of sketches, schematics and research came to see how it could also be connected to a geo-spatial web of projects and augmentations. It was published in the inaugural issue of Parsons School of Design&#8217;s Journal of Information Mapping which was an exciting fit. I call it &#8220;Immersive Event Time&#8221; (<a id="o3rt" title="http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf)" href="http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf%29">http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf)</a></strong></p>
<p><span><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_4cxz57xgv_b.jpg"><img class="alignnone size-medium wp-image-4634" title="dgznj3hp_4cxz57xgv_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_4cxz57xgv_b-195x300.jpg" alt="dgznj3hp_4cxz57xgv_b" width="195" height="300" /></a></strong></span></p>
<p><span><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_5g68k9ggh_b.jpg"><img class="alignnone size-medium wp-image-4635" title="dgznj3hp_5g68k9ggh_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_5g68k9ggh_b-300x225.jpg" alt="dgznj3hp_5g68k9ggh_b" width="300" height="225" /></a><br />
</strong></span></p>
<p><strong>so the last 3 years I have been working on how it could all work as channels of augmentation, and building and navigation as open and community in a sense as well as ai capability that was the time work especially. how time as experienced within an event is not a time &#8220;line&#8221; but points on and within a form&#8230;.and how this model is better for visualizing events in time and documenting them. it actually sprang from reading a book on the poincare conjecture, popped a bunch of other stuff together so one could visualize an event in time as like being in the belly of a whale..with time as the ribs..and our measure of time as the skin&#8230;and moving within it&#8230;.hoping this will be used as educational tool</strong></p>
<p><strong><strong>and this also can be tied to ar and map again&#8230;how documentation of important events can be kept within icons on a google map..then download varying visualizations based on bandwidth and desired format</strong></strong></p>
<p><strong>Tish Shute:</strong> I have been thinking about the new forms of social interaction/agency that these kinds of augmentations of space/place/time will create. it seems there are two poles &#8211; one is the area Natalie Jeremijenko explores of shifting social relations from institutions/statistics to real time/location based/interactions and new forms of social agency. The other pole completely is more like the cloud based AI and perhaps crowd sourced machine learning.</p>
<p>Your ideas explore the possibilities of both these poles. And certainly one of the big deals of distributed AR would be the possibilities it opened up both for new forms of networked social relationships and for new ways to draw on network effects.</p>
<p><strong><strong><strong>Jeremy Hight:</strong> and cross pollinations within &#8230;that is what my mind goes to</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>The other night I met Assaf Biderman, MIT, from the <a href="http://senseable.mit.edu/trashtrack/" target="_blank">Trash Track</a> team. Trash Track doesn&#8217;t utilize AR but I could see that there are possibilites there.<br />
What do you think?</p>
<p><strong>Jeremy Hight: yes, absolutely, there can be sort of skins on locations that user end selection can yield&#8230;like channels of place&#8230;.and can range from pragmatic core to art and play and places between&#8230;.how this recalibrates the semiotics of map&#8230;more than just augmentation as seen as a kind of piggy back on map..map becomes interface and defanged platform if you will, interestingly my more poetic/philosophic writing led me here too</strong></p>
<p><strong>Tish Shute:</strong> I know they are at very different poles of the system but I do wonder how AR can bring some of the level of social agency/interaction that <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko</a> works on into a productive interaction with the kind of innovations in machine learning that Dolores Labs and others are pioneering?</p>
<p><strong><strong>Jeremy Hight:</strong> Natalie&#8217;s genius to me is in practical functional tech that also opens deeper questions and even new openings of what is needed..amazing layers in her work that way.. succinct yet deep..very deep</strong></p>
<p><strong>Tish Shute:</strong> Yes &#8211; I am just writing a post about her work &#8211; I find it deeply moving the way she has delved into the possibilities of using technology to open us up to our world. One of the reasons I find distributed AR so interesting is because it will make it possible for all kinds of people to create and use augmentation in their lives and communities.</p>
<p>So to return to how a distributed AR framework could contribute to a project like Trash Track?</p>
<p><strong><strong>Jeremy Hight:</strong> what about using it for community, dissent and awareness raising then? like Natalie&#8217;s work but building like a communal work of multiple points, like the old adage of the elephant and the blind men&#8230; sorry..metaphor &#8211; like one of my points in immersive sight was how one could take augmentation as multiple works sort of turning the faces of a thing or place&#8230;and how this would make a larger work even in such a flow so people moving in a space could also build..</strong></p>
<p><strong>what of ar traces left as people move calibrated to user traffic and trash as estimated in an urban space&#8230;like it goes back to chris burden in the 70&#8242;s making you know that as you turn the turnstile you are drilling into the foundation and may be the one that collapses the building?</strong></p>
<p><strong>so their movements leave trash. Natalie is all about raising awareness to cause and effect and data, space and ecology. love that. so maybe&#8230;<br />
a feedback loop, artifact and user end responsibility can leave traces&#8230;trash&#8230;</strong></p>
<p><strong><strong>.. cybernetics vs ecology and human waste</strong></strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>could you elaborate?</p>
<p><strong><strong>Jeremy Hight:</strong> brain fart&#8230;that the mass of trash people leave is a piece at a time&#8230;.In a sense it is like the space shuttle mission when arguably the first true cybernaut occurred&#8230;.one cord to air for astronaut..one for computer on their back to fix broken bay arm&#8230;if there is a way to build on that and in relation to the topic&#8230;..how this can go further, that machines do not waste as much&#8230;as ar is a means to cybernetically raise awareness..eh.. hmmm... sensors etc&#8230;wearables too &#8211; could be eco awareness with data and machine and human</strong></p>
<p><strong>what about a cloud computing system with a slight ai in the sense of intuitive word cloud and interest scans&#8230;..so as one moves through say new york they can be offered new ai data and services as they move? could also be of eco interests? concerns about urban farming, eco waste, air pollution etc&#8230;.perhaps with (jeremijenko element here) sensors placed in locations and these also giving data reads in public areas with no input but hard data itself&#8230;&#8230;hmm..could be interesting</strong></p>
<p><strong>it can also give info of the carbon footprints (estimated prob unless data is public record somehow) of chain businesses and data on which are more eco friendly as well as an iconography color coded and icon coded to the best places to go to support greening and eco friendly business? and the companies could promote themselves on this service to attract eco aware customers who would be seeing them as kindred spirits and helping the<br />
larger effort?</strong></p>
<p><strong><strong>kind of eco mapping..and ar on mobile app</strong></strong></p>
<p><strong>what about sensors that read air pollution levels, levels of solar radiation (to aid with skin protection in shifting light values in a city space..ie put on some skin cream now&#8230;), light sensors that detect density and over density in public spaces&#8230;to use the old trope in art of reading crowds in a space..but instead could indicate overcrowding, failing infrastructure in public spaces (which is a congestion that leads to greater pollution levels as well as flaws in city planning over time..), and perhaps a tie in to wearables&#8230;&#8230;worn sensors on smart clothes&#8230;.this could form a node network of people in the crowds&#8230;.and also send data within moving in a space&#8230;</strong></p>
<p><strong>here is a kooky thought&#8230; what of taking the computing power and data of people moving in a space..and not only get eco data and make available to them levels of<br />
data..but make possibly a roving super computer&#8230;crunching the deeper data of people open to this&#8230;&#8230;a hive crunching deeper analysis of the space, scan properties from sensors, and even a game theory esque algorithm of meta data if say 40 people out of 50 hit on a certain spike or reading&#8230;and even their input&#8230;..I worked in game theory for paleontology in this manner for a time as a teen&#8230;.a private project&#8230;&#8230; the reading can lead to a sort of meta read by what hits most consistently..as well as in their input..text of what they experienced, observed, postulated, analyzed even&#8230;. this could be really interesting&#8230;even if just the last part from collected data and not from any complex branching of servers..</strong></p>
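The &#8220;40 people out of 50&#8221; idea above amounts to a threshold vote over participants&#8217; readings. Here is a minimal sketch, with the threshold value, function name, and sample labels all illustrative assumptions rather than any real system:

```python
# Sketch of the consensus "meta read": a label flagged by enough of the
# people moving through a space is promoted to a shared reading.
def meta_read(readings, threshold=0.8):
    """readings: one set of flagged labels per participant."""
    if not readings:
        return set()
    counts = {}
    for flagged in readings:
        for label in flagged:
            counts[label] = counts.get(label, 0) + 1
    # keep only labels flagged by at least `threshold` of participants
    return {label for label, n in counts.items()
            if n / len(readings) >= threshold}

# 40 of 50 participants flag an air-quality spike -> it survives the cut
readings = [{"air_spike"}] * 40 + [set()] * 10
print(meta_read(readings))  # {'air_spike'}
```

The same filter works over free-text input if each participant&#8217;s report is first reduced to a set of labels; what hits most consistently becomes the shared read, exactly as described above.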
<p><strong>I thought at 19 or so that the flaw in paleontology was in how so many larger theories were shifting exhibitions and larger senses of things like were there pre-historic birds that were mistaken for amphibian and then back again&#8230;.so why not make a computer program and feed all the papers published into it and see what hits were counted in terms of an emerging meta theory&#8230;and landscape of key points being agreed upon&#8230;this data would be in a sense both algorithmic and a sort of unspoken dialogue&#8230;came from a lot of study of game theory one summer&#8230;</strong></p>
<p><strong><strong>hope this makes some sense&#8230;I forgot to mention that I originally planned to be a research meteorologist and my plan in middle school or so was to get a phd and develop new software to have a global map and then run models of hypothetical storms across it in real time animations of cloud forms, radar and wind analysis/fields, barometric pressure spaghetti charts etc&#8230;.and to also do 3d cut away models of storm architectures&#8230;so been into visualizations of complex data and mapping for a long time!</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>Wow let me think about this one!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/feed/</wfw:commentRss>
		<slash:comments>18</slash:comments>
		</item>
		<item>
		<title>Total Immersion and the &#8220;Transfigured City:&#8221; Shared Augmented Realities, the &#8220;Web Squared Era,&#8221; and Google Wave</title>
		<link>http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/</link>
		<comments>http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/#comments</comments>
		<pubDate>Sun, 27 Sep 2009 04:42:42 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[3D Interactive Live Show]]></category>
		<category><![CDATA[Acrossair]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[Amphibious Architecture]]></category>
		<category><![CDATA[anime]]></category>
		<category><![CDATA[Apple iPhone]]></category>
		<category><![CDATA[AR baseball cards for Topps]]></category>
		<category><![CDATA[AR Consortium]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR goggles]]></category>
		<category><![CDATA[Architectural League of New York]]></category>
		<category><![CDATA[ARML]]></category>
		<category><![CDATA[ARN]]></category>
		<category><![CDATA[Augmented City]]></category>
		<category><![CDATA[augmented city lab]]></category>
		<category><![CDATA[augmented reality books]]></category>
		<category><![CDATA[augmented reality entrepreneurship]]></category>
		<category><![CDATA[augmented reality goggles]]></category>
		<category><![CDATA[augmented reality making visible the invisible]]></category>
		<category><![CDATA[augmented reality mark-up language]]></category>
		<category><![CDATA[augmented reality pollution meter]]></category>
		<category><![CDATA[augmented reality standards]]></category>
		<category><![CDATA[augmented reality toys]]></category>
		<category><![CDATA[augmented virtuality]]></category>
		<category><![CDATA[Bionic Eye]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Bruno Uzzan]]></category>
		<category><![CDATA[Conflux]]></category>
		<category><![CDATA[cross platform compatibility for augmented reality]]></category>
		<category><![CDATA[D'Fusion]]></category>
		<category><![CDATA[Daniel Wagner]]></category>
		<category><![CDATA[Denno Coil]]></category>
		<category><![CDATA[distributed]]></category>
		<category><![CDATA[elements of networked urbanism]]></category>
		<category><![CDATA[Elizabeth Goodman]]></category>
		<category><![CDATA[everyware]]></category>
		<category><![CDATA[Fish 'n Microchips]]></category>
		<category><![CDATA[Flickr]]></category>
		<category><![CDATA[Gavin Starks]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[geo spatial web]]></category>
		<category><![CDATA[geoAR]]></category>
		<category><![CDATA[geoaugmentation]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Google Wave Protocol]]></category>
		<category><![CDATA[Gov 2.0 Expo Showcase]]></category>
		<category><![CDATA[Gov 2.0 Summit]]></category>
		<category><![CDATA[Graz University of Technology]]></category>
		<category><![CDATA[Imagination]]></category>
		<category><![CDATA[Incheon Free Economic Zone]]></category>
		<category><![CDATA[information shadows]]></category>
		<category><![CDATA[Int13]]></category>
		<category><![CDATA[Interaction Design for Augmented Reality]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[Jeremy Hight]]></category>
		<category><![CDATA[Joe Lamantia]]></category>
		<category><![CDATA[Jonathan Laventhol]]></category>
		<category><![CDATA[Korea's u-Cities]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Layar 3D]]></category>
		<category><![CDATA[magic lens augmented reality]]></category>
		<category><![CDATA[manga]]></category>
		<category><![CDATA[Mark Shepard]]></category>
		<category><![CDATA[Mark Weiser]]></category>
		<category><![CDATA[markerless mobile augmented reality]]></category>
		<category><![CDATA[Metaio]]></category>
		<category><![CDATA[Microsoft Bing]]></category>
		<category><![CDATA[Mike Kuniavsky]]></category>
		<category><![CDATA[Mobilizy]]></category>
		<category><![CDATA[multiuser augmented reality]]></category>
		<category><![CDATA[Natalie Jeremijenko]]></category>
		<category><![CDATA[Natural Fuse]]></category>
		<category><![CDATA[near-field object recognition and tracking]]></category>
		<category><![CDATA[Networked City]]></category>
		<category><![CDATA[networked urbanism]]></category>
		<category><![CDATA[newer urbanism]]></category>
		<category><![CDATA[open]]></category>
		<category><![CDATA[open augmented reality framework]]></category>
		<category><![CDATA[open augmented reality network]]></category>
		<category><![CDATA[Orange Cone]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[realtime panorama mapping on mobile phones]]></category>
		<category><![CDATA[RobotVision]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[Sentient City Survival Kit]]></category>
		<category><![CDATA[Shangri La]]></category>
		<category><![CDATA[shared augmented realities]]></category>
		<category><![CDATA[Sky Writer]]></category>
		<category><![CDATA[Steven Feiner]]></category>
		<category><![CDATA[symbiosis between augmented reality and brands]]></category>
		<category><![CDATA[the internet of things]]></category>
		<category><![CDATA[the LAN of things]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[the web squared era]]></category>
		<category><![CDATA[ThingM]]></category>
		<category><![CDATA[things as services]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Tim O'Reilly]]></category>
		<category><![CDATA[Tod E. Kurt]]></category>
		<category><![CDATA[Total Immersion]]></category>
		<category><![CDATA[Toward the Sentient City]]></category>
		<category><![CDATA[Transfigured City]]></category>
		<category><![CDATA[twitter]]></category>
		<category><![CDATA[u-City]]></category>
		<category><![CDATA[ubiquitous computing and augmented reality]]></category>
		<category><![CDATA[uCity]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[Weisarian Ubiquitous Computing]]></category>
		<category><![CDATA[Wikitude]]></category>
		<category><![CDATA[xClinic]]></category>
		<category><![CDATA[XMPP versus HTTP]]></category>
		<category><![CDATA[Yochai Benkler]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4439</guid>
		<description><![CDATA[Above is an image from Total Immersion&#8217;s augmented reality experience developed for the &#8220;Networked City&#8221; exhibition in South Korea &#8211; &#8220;a fun scenario created for a u-City&#8217;s infrastructure and city management service.&#8221; &#8220;To the naked eye, the exhibit looks like a bare bones model of a city. But when visitors put on the special [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_338cwpzntgp_b.jpg"><img class="alignnone size-medium wp-image-4440" title="dhj5mk2g_338cwpzntgp_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_338cwpzntgp_b-300x170.jpg" alt="dhj5mk2g_338cwpzntgp_b" width="300" height="170" /></a></p>
<p><em>Above is an image from <a href="http://www.t-immersion.com/" target="_blank">Total Immersion&#8217;s</a> augmented reality experience developed for the <a id="winm" title="&quot;Networked City&quot; exhibition in South Korea, &quot;" href="http://www.tomorrowcity.or.kr/sv_web/en_US/space.SpaceInfo.web?targetMethod=DoUe04Sub1" target="_blank">&#8220;Networked City&#8221; exhibition in South Korea</a> &#8211; &#8220;a fun scenario created for a <a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">u-City&#8217;s</a> infrastructure and city management service.&#8221;</em></p>
<p><strong>&#8220;To the naked eye, the exhibit looks like a bare bones model of a city. But when visitors put on the special AR goggles a whole new world unfolds &#8211; as graphics overlaid on the city model.&#8221;</strong> <em>(<a href="http://gamesalfresco.com/2009/09/14/total-immersion-brings-augmented-reality-to-tomorowcity-todaytomorrow/" target="_blank">Games Alfresco</a>)</em></p>
<p>&#8220;The Networked City&#8221; is a large scale augmented virtuality of a scenario for a networked city. But my guess, reading the <em><a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">Korea IT Times</a></em>, is that the plan is to move from an augmented virtuality to an augmented reality as Incheon Free Economic Zone (IFEZ) realizes its vision to become a leading u-City &#8211; where reality is turned &#8220;inside out&#8221; (see <a id="x:2w" title="Inside Out Reality" href="http://www.uxmatters.com/mt/archives/2009/08/inside-out-interaction-design-for-augmented-reality.php">Inside Out: Interaction Design for Augmented Reality</a>). If you are not familiar with South Korea&#8217;s u-Cities, <a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">check out this post</a> for a short primer (and note a <a href="http://www.google.com/trends?q=augmented+reality&amp;ctab=1986817859&amp;geo=all&amp;date=all" target="_blank">Google Trends search on Augmented Reality</a> shows South Korea leaving everyone else in the dust).</p>
<h3>Ubiquitous computing and augmented reality are like adenine and thymine &#8211; a DNA base pair.</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-24-at-11.34.35-PM.png"><img class="alignnone size-medium wp-image-4442" title="Screen shot 2009-09-24 at 11.34.35 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-24-at-11.34.35-PM-300x256.png" alt="Screen shot 2009-09-24 at 11.34.35 PM" width="300" height="256" /></a></p>
<p><em>A sky view of Incheon Free Economic Zone (<a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">from Korean IT Times</a>). For more on the IFEZ vision to become a leading u-City <a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">see here</a>.</em></p>
<p><a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">Korea IT Times</a> writes about the u-city concept:</p>
<p><strong>&#8220;Korea began using the term u-City after accepting the concept of ubiquitous computing, a post-desktop model of human-computer interaction created by Mark Weiser, the chief technologist of the Xerox Palo Alto Research Center in California, in 1998. There has been a lot of research in this field since 2002. As a result, many local governments in Korea have applied this concept to various development projects since 2005 based on a practical approach to it.&#8221;</strong></p>
<p>The back story to many of my recent posts, including this one, is an understanding of a relationship between ubiquitous computing and augmented reality that emerged, for me, in a February conversation with Adam Greenfield, <a title="Permanent Link to Towards a Newer Urbanism: Talking Cities, Networks, and Publics with Adam Greenfield" rel="bookmark" href="../../2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/">Towards a Newer Urbanism: Talking Cities, Networks, and Publics with Adam Greenfield</a>. In case you missed it, here is the link again, because I think it holds up very well considering the rapid developments of recent months. Also, importantly for this post, it includes a discussion of moving on from Weiserian visions.</p>
<p><a href="http://speedbird.wordpress.com/" target="_blank">Adam Greenfield&#8217;s Speedbird</a> is one of my key sources for understanding &#8220;networked urbanism,&#8221; and the list he makes of <a href="http://speedbird.wordpress.com/2009/03/22/the-elements-of-networked-urbanism/" target="_blank">the elements of networked urbanism here</a> (also see the comments) &#8211; is my mantra for thinking about the DNA base pair relationship of augmented reality and ubiquitous computing.</p>
<p>Adam Greenfield&#8217;s <strong>&#8220;summary of what those of us who are thinking, writing and speaking about networked urbanism seem to be seeing&#8221;</strong> is:</p>
<p><strong>1. From <em>latent</em> to <em>explicit</em>; 2. From <em>browse</em> to <em>search</em>; 3. From <em>held</em> to <em>shared</em>; 4. From <em>expiring</em> to <em>persistent</em>; 5. From <em>deferred</em> to <em>real-time</em>; 6. From <em>passive</em> to <em>interactive</em>; 7. From <em>component</em> to <em>resource</em>; 8. From <em>constant</em> to <em>variable</em>; 9. From <em>wayfinding</em> to <em>wayshowing</em>; 10. From <em>object</em> to <em>service</em>; 11. From <em>vehicle</em> to <em>mobility</em>; 12. From <em>community</em> to <em>social network</em>; 13. From <em>ownership</em> to <em>use</em>; 14. From <em>consumer</em> to <em>constituent</em>.</strong></p>
<h3>Augmented Reality &#8211; Making Visible the Invisible</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-2.44.27-PM.png"><img class="alignnone size-medium wp-image-4509" title="Screen shot 2009-09-26 at 2.44.27 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-2.44.27-PM-300x229.png" alt="Screen shot 2009-09-26 at 2.44.27 PM" width="300" height="229" /></a></p>
<p>The screenshot above is one of the coolest &#8220;making visible the invisible&#8221; AR applications. It was developed at the Columbia University Graphics and User Interface Lab, where <a href="http://www1.cs.columbia.edu/%7Efeiner/" target="_blank">Steven Feiner</a> is Director (see the deep list of projects from the lab <a href="http://graphics.cs.columbia.edu/top.html" target="_blank">here</a>). This app &#8220;shows carbon monoxide levels projected over New York City. The height of each ball reflects concentrations of the pollutant.&#8221; Credit: Sean White and Steven Feiner (<a href="http://www.technologyreview.com/computing/23515/page2/" target="_blank">via Technology Review</a>).</p>
<p>The recent emergence of &#8220;magic lens&#8221; augmented reality apps for our smartphones &#8211; <a href="http://www.wikitude.org/" target="_blank">Wikitude</a>, <a href="http://layar.com/" target="_blank">Layar</a>, <a href="http://www.acrossair.com/" target="_blank">Acrossair</a>, <a href="http://support.sekaicamera.com/">Sekai Camera</a>, and many others now &#8211; has given us a new window into our cities. But we have yet to realize the full potential of the AR/ubicomp base pair that can &#8220;make visible the invisible&#8221; and give us new opportunities to relate to the invisible data ecosystems of our cities, not merely as a spectator experience, but as an interactive, in-context, real-time opportunity to reimagine social relations.</p>
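<p>To make the &#8220;magic lens&#8221; idea concrete: at their core, geoAR browsers of this kind take the device&#8217;s GPS fix and compass heading, compute the bearing to each point of interest, and map that bearing onto a screen position. The sketch below is a minimal, illustrative Python version of that calculation &#8211; the function names, field of view, and screen width are my own assumptions, not any vendor&#8217;s actual code.</p>

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (degrees clockwise from north)
    from the viewer at (lat1, lon1) to a point of interest."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def screen_x(poi_bearing, device_heading, fov_deg=60.0, screen_w=320):
    """Map a POI bearing onto a horizontal pixel column for a camera
    facing device_heading with the given field of view; None if off-screen."""
    # Signed angular offset in [-180, 180)
    offset = (poi_bearing - device_heading + 540.0) % 360.0 - 180.0
    if abs(offset) > fov_deg / 2.0:
        return None  # POI is outside the current view
    return round((offset / fov_deg + 0.5) * screen_w)
```

<p>A real browser adds a vertical axis from device pitch and POI altitude, and scales labels by distance, but the horizontal placement above is the essential step.</p>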
<p><a href="http://www.sentientcity.net/exhibit/?p=3" target="_blank">Mark Shepard</a> says in <a href="http://www.sentientcity.net/exhibit/?p=3" target="_blank">his curatorial statement</a> for <a href="http://www.sentientcity.net/exhibit/" target="_blank">&#8220;Toward the Sentient City&#8221;</a> (much more soon on this very significant exhibit, which runs from Sept. 17 to Nov. 7, 2009):</p>
<p><strong>&#8220;In place of natural weather systems, however, today we find the dataclouds of 21st century urban space increasingly shaping our experience of this city and the choices we make there.&#8221;</strong></p>
<p>Augmented Reality, as Joe Lamantia points out, is becoming the great &#8220;<a id="o0mh" title="ambassador of ubiqitous computing" href="http://www.uxmatters.com/mt/archives/2009/08/inside-out-interaction-design-for-augmented-reality.php">ambassador of ubiquitous computing</a>.&#8221; AR is &#8220;<strong>&#8230;mak[ing] it possible to experience the new world of ubiquitous computing by reifying the digital layer that permeates our inside-out world,&#8221;</strong> and we are only just glimpsing the razor-thin end of the wedge in this regard.</p>
<p>I am still working on my <a href="http://www.gov2summit.com/" target="_blank">Gov 2.0 Summit</a> write-up and, amongst other things, I will talk about how an emerging new social contract around open data, here in the US, will put augmented reality apps center stage &#8211; &#8220;doing stuff that matters.&#8221; At the <a href="http://www.gov2expo.com/gov2expo2009" target="_blank">Gov 2.0 Expo Showcase</a> Tim O&#8217;Reilly tweeted:</p>
<p><a id="i23q" title="Tim O'Reilly" href="http://twitter.com/timoreilly">Tim O&#8217;Reilly</a>: Really enjoyed @capttaco (Digital Arch Design) @ #gov20e: &#8220;Augmented Reality could be a new public infrastructure&#8221; <a href="http://bit.ly/18iCx" target="_blank">http://bit.ly/18iCx</a></p>
<p>Also see Tim O&#8217;Reilly and Jennifer Pahlka on Forbes.com discussing <a href="http://www.forbes.com/2009/09/23/web-squared-oreilly-technology-breakthroughs-web2point0.html" target="_blank">The &#8220;Web Squared&#8221; Era</a> &#8211; <strong>&#8220;the Web Squared era is an era of augmented reality arriving (like the sensor revolution) stealthily, in more pedestrian clothes than we expected&#8230; &#8230;our world will have &#8216;<a href="http://www.orangecone.com/archives/2009/02/smart_things_an.html" target="_blank">information shadows</a>.&#8217; Augmented reality amounts to information shadows made visible.&#8221;</strong></p>
<p>Again, there is a back story to how I came to think about information shadows in relation to augmented reality. So in case you missed it the first time, here is the link to a conversation that began in a hallway meeting between Tim O&#8217;Reilly; Mike Kuniavsky, <a href="http://thingm.com/" target="_blank">ThingM</a>; Usman Haque, <a href="http://www.pachube.com/" target="_blank">Pachube</a>; and Gavin Starks, <a href="http://www.amee.com/" target="_blank">AMEE</a>, at <a href="http://en.oreilly.com/et2009/" target="_blank">ETech earlier this year</a>: <a title="Permanent Link to Dematerializing the World, Shadows, Subscriptions and Things as Services: Talking With Mike Kuniavsky at ETech 2009" rel="bookmark" href="../../2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/">&#8220;Dematerializing the World, Shadows, Subscriptions and Things as Services: Talking With Mike Kuniavsky at ETech 2009.&#8221;</a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-9.32.09-PM.png"><img class="alignnone size-medium wp-image-4547" title="Screen shot 2009-09-26 at 9.32.09 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-9.32.09-PM-300x225.png" alt="Screen shot 2009-09-26 at 9.32.09 PM" width="300" height="225" /></a></p>
<p><a href="http://www.slideshare.net/rlenz/augmented-city-lab-picnic-09" target="_blank">Slide from Augmented City Lab</a> @ <a href="http://www.picnicnetwork.org/" target="_blank">Picnic &#8217;09</a></p>
<h3>So What&#8217;s Next for Mobile Augmented Reality?</h3>
<p><a href="http://www.youtube.com/watch?v=434zw201iN8&amp;feature=player_embedded" target="_blank"><img class="alignnone size-medium wp-image-4513" title="Screen shot 2009-09-26 at 3.45.45 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-3.45.45-PM-300x186.png" alt="Screen shot 2009-09-26 at 3.45.45 PM" width="300" height="186" /></a></p>
<p>These videos from Daniel Wagner&#8217;s team at Graz University of Technology, showing <a href="http://www.youtube.com/watch?v=434zw201iN8&amp;feature=player_embedded" target="_blank">Realtime Panorama Mapping and Tracking on Mobile Phones</a> and <a href="http://www.youtube.com/watch?v=W-mJG3peIXA&amp;feature=player_embedded" target="_blank">Creating an Indoor Panorama in Realtime</a>, as Rouli from Games Alfresco points out, indicate that there is a lot in store for us at <a href="http://www.icg.tugraz.at/Members/daniel/MultipleTargetDetectionAndTrackingWithGuaranteedFrameratesOnMobilePhones/inproceedings_view">ISMAR09</a>.</p>
<p>We may not be so impressed by directory-style/&#8220;post it&#8221; AR anymore, as these applications have become commonplace so quickly! But while these early mobile AR apps may be disappointing in relation to some futurist visions of AR &#8211; merely AR/ubicomp appetizers &#8211; there are still good implementations of this model coming out (see newcomers to the app store <a id="tzvf" title="Bionic Eye" href="http://mashable.com/2009/09/24/bionic-eye/" target="_blank">Bionic Eye</a> and <a href="http://www.readwriteweb.com/archives/robotvision_a_bing-powered_iphone_augmented_realit.php" target="_blank">RobotVision</a>). And <a href="http://layar.com/" target="_blank">Layar</a>, always on the ball, has upped the ante for the new cohort of AR browsers with <a href="http://layar.com/3d/" target="_blank">Layar 3D</a>.</p>
<p>But as Bruce Sterling <a href="http://www.wired.com/beyond_the_beyond/2009/09/augmented-reality-robotvision/" target="_blank">notes here</a>:</p>
<p><strong>&#8220;In AR, everybody wants to be the platform and the browser, and nobody wants to be the boring old geolocative database. Look how Tim [creator of RobotVision] here, who is like one guy working on his weekends, can boldly fold in the multi-billion dollar, multi-million user empires of Apple iPhone, Microsoft Bing, Flickr, and Twitter, all under his right thumb.&#8221;</strong></p>
<p> (watch <a id="qxek" title="video here" href="http://www.youtube.com/watch?v=hWC9gax7SCA&amp;feature=player_embedded">video here</a>)</p>
<p>But if you are looking for something more from AR, you probably won&#8217;t have to wait too long. The two pioneering companies in AR, <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a>, founded in 1999, and <a href="http://www.metaio.com/" target="_blank">Metaio</a>, founded in 2003, are both coming out with &#8220;mobile augmented reality platforms&#8221; in a matter of weeks (see press releases <a href="http://augmented-reality-news.com/2009/09/14/bringing-its-augmented-reality-to-mobile-applications-total-immersion-partners-with-smartphones-app-provider-int13/" target="_blank">here</a> and <a href="http://gamesalfresco.com/2009/09/18/metaio-announcing-mobile-augmented-reality-platform-junaio/" target="_blank">here</a>). And both companies, it seems, will deploy much more sophisticated AR rendering and tracking than we have seen to date.</p>
<p>I approached Bruno Uzzan, founder and CEO of Total Immersion, for an interview as part of my look at the new industry of augmented reality through the eyes of the founding members of the <a href="http://www.arconsortium.org/" target="_blank">AR Consortium</a>. These consortium members are some of the first commercial augmented reality companies.</p>
<p><a href="#jumpto">The interview below</a> with Bruno began early this summer; then we both went on vacation, and it picks up after the announcement of the <a href="http://www.int13.net/blog/en/" target="_blank">partnership between Total Immersion and Int13</a>.</p>
<p>The significance of this announcement is that Total Immersion is now positioned to take the augmented reality experiences they have developed for a number of top brands onto multiple mobile platforms with &#8220;<strong>Int13&#8217;s very clever embedded solution that allows our [Total Immersion's] solutions to work across many [mobile] platforms,&#8221;</strong> while Int13 gets to extend their reach.</p>
<p>Total Immersion has a 50-person R&amp;D team, and their two main focuses have been, firstly, getting:</p>
<p><strong>&#8220;Augmented Reality to work with as many platforms as possible &#8211; PC, Mac, Mobile, Game Consoles, all those are the platforms that we are targeting. We are currently doing a lot of work in the R&amp;D team in cross platform compatibility&#8230;&#8221;</strong></p>
<p>and, secondly:</p>
<p><strong>&#8220;Our R&amp;D guys are working on the real world interacting more with the virtual world.Â  And I have started seeing some results which are pretty much crazy and this will be ready for next year.&#8221;</strong></p>
<h3>Pandora&#8217;s Box &#8211; Shared Augmented Realities</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-25-at-1.18.15-AM.png"><img class="alignnone size-medium wp-image-4450" title="Screen shot 2009-09-25 at 1.18.15 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-25-at-1.18.15-AM-186x300.png" alt="Screen shot 2009-09-25 at 1.18.15 AM" width="186" height="300" /></a></p>
<p>Spes or &#8220;Hope&#8221;; <a title="Engraving" href="http://en.wikipedia.org/wiki/Engraving">engraving</a> by <a title="Sebald Beham" href="http://en.wikipedia.org/wiki/Sebald_Beham">Sebald Beham</a>, German c1540 (see <a href="http://en.wikipedia.org/wiki/Pandora%27s_box" target="_blank">Wikipedia article on Pandora&#8217;s Box</a>)</p>
<p>There are many weaknesses to the mobile smartphone AR experiences we have now, and the lack of near field object recognition (to date) and difficulties with accurate positioning aren&#8217;t the only ones. Note, re solving positioning problems in mobile AR: we have yet to see AR leverage public libraries for analyzing scenes, like Flickr&#8217;s geotagged photos &#8211; see Aaron Straup Cope&#8217;s work on <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a>, and for more on this, <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">my post here</a>.</p>
<p>But, as Joe Lamantia points out:</p>
<p><strong>&#8220;One of the weakest aspects of the existing interaction patterns for augmented reality is their reliance on single-person, socially disconnected user experiences.&#8221;</strong></p>
<p>In my view, <strong>The Pandora&#8217;s Box of Augmented Realities</strong> is an open, distributed, multiuser augmented reality framework, fully integrated with the internet and world wide web.</p>
<p>As Yochai Benkler has pointed out many times, and argues again in <a href="Capital, Power, and the Next Step in Decentralization" target="_blank">Capital, Power, and the Next Step in Decentralization</a>, it is &#8220;open, collaborative, distributed practices that have been at the core of what made the Internet.&#8221; We have to try to make sure that open, collaborative, distributed practices are at the core of mobile augmented reality.</p>
<h3>Can Google Wave be the basis for an Open, Distributed, Multiuser Augmented Reality Framework?</h3>
<p><a href="http://www.lostagain.nl/tempspace/PrototypeDiagram.html" target="_blank"><img class="alignnone size-medium wp-image-4492" title="Screen shot 2009-09-25 at 11.51.20 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-25-at-11.51.20-PM-300x141.png" alt="Screen shot 2009-09-25 at 11.51.20 PM" width="300" height="141" /></a></p>
<p>I have been exploring the idea of using the <a href="http://wave.google.com/" target="_blank">Google Wave</a> protocol as the basis for a distributed, multiuser open augmented reality framework with a small group of AR enthusiasts and developers. And I am happy to say the proposal is beginning to get fleshed out a little. New collaborators are welcome, both for &#8220;gear heady&#8221; input and use case suggestions (but re the latter, you can&#8217;t just say everything you see in <a href="http://en.wikipedia.org/wiki/Denn%C5%8D_Coil" target="_blank">Denno Coil</a>!).</p>
<p>This effort started with Thomas Wrobel&#8217;s proposal for an open AR framework prototyped on IRC &#8211; see <a id="s336" title="here" href="../../2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/">here</a>, and click to enlarge the image above of <a href="http://www.lostagain.nl/tempspace/PrototypeDiagram.html" target="_blank">&#8220;Sky Writer: Basic Concept for an Open Multi-source AR Framework.&#8221;</a></p>
<p>But recently we began looking at the <a href="http://www.waveprotocol.org/" target="_blank">Wave Federation Protocol</a>. And if you check out <a id="ogbq" title="this post," href="http://www.jasonkolb.com/weblog/2009/09/why-google-wave-is-the-coolest-thing-since-sliced-bread.html#more" target="_blank">this post</a> and <a id="c0ep" title="this post" href="http://reuvencohen.sys-con.com/node/980762" target="_blank">this post</a>, you may get a glimpse of why the Google Wave protocol might be a good basis for an open, distributed AR framework. You will notice, if you study what Google Wave has done with the XMPP protocol, that many of <a href="http://speedbird.wordpress.com/2009/03/22/the-elements-of-networked-urbanism/" target="_blank">the elements of networked urbanism</a> that Adam Greenfield describes resonate strongly with what is being attempted in Wave.</p>
<p>But enough said for now! Regardless of the details of implementation &#8211; Google Wave or an AR protocol built from scratch (phew! the latter does seem like a lot of work) &#8211; an open, distributed, multiuser AR framework integrated with the internet and web would explode the potential of AR, creating new possibilities for data flows, mashups, and shared augmented realities.</p>
<p>And we are excited by Google Wave because, as Thomas puts it:</p>
<p><strong>&#8220;The really great thing wave does&#8230; (aside from being an open standard backed by a major player&#8230; hopefully leading to thousands of worldwide servers)&#8230; is that it allows anyone to create any number of waves, set precisely who can view or edit them, and for them to be able to be updated quickly and continuously (and even simultaneously!)</strong> <strong>Better yet, changes will (if necessary) propagate to all the other servers sharing that wave. It does all this right now. From my eyes this does a lot of the work of an AR infrastructure already.</strong></p>
<p><strong>I can&#8217;t see any other protocol actually doing anything like this at the moment, although correct me if I&#8217;m wrong, as alternatives are always welcome :)&#8221;</strong></p>
<p>Also, Thomas notes, <strong>&#8220;even the playback system (that is, the ability to playback the changes made to a wave since its creation) &#8230;this could give us automatically some of the ideas Jeremy Hight has mentioned in <a href="http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" target="_blank">his visionary work here</a>,Â  and <a href="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" target="_blank">here</a> on &#8220;the geo spatial web, interlinked locations and data, immersive augmentation and open source geo augmentation.&#8221;</strong></p>
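<p>A toy sketch may help make the attraction concrete. The Python below models just the three properties Thomas highlights &#8211; per-wave participant control, continuously appended updates, and playback of the edit history. It is an illustrative data model only, with invented names; the real Wave Federation Protocol defines wavelets, signed deltas, and XMPP transport, none of which are modeled here.</p>

```python
from dataclasses import dataclass, field

@dataclass
class Wave:
    """Toy model of a shared, access-controlled document with an
    append-only operation log. Illustrative sketch only, not the
    actual Google Wave data model."""
    participants: set = field(default_factory=set)
    ops: list = field(default_factory=list)  # log of (author, text) edits

    def apply(self, author, text):
        """Append an edit, enforcing per-wave access control."""
        if author not in self.participants:
            raise PermissionError(f"{author} may not edit this wave")
        self.ops.append((author, text))

    def state(self):
        """Current document state: the result of replaying every op."""
        return " ".join(text for _, text in self.ops)

    def playback(self, upto):
        """Replay only the first `upto` ops -- the 'playback' idea above."""
        return " ".join(text for _, text in self.ops[:upto])
```

<p>Federation, in this picture, is just shipping new entries of <code>ops</code> to every other server hosting a copy of the wave, which is exactly the kind of update propagation a shared AR scene needs.</p>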
<p>One of the many reasons an open, distributed AR framework would be so cool is that it would open up all kinds of possibilities for <span>GeoAR</span>, by providing the overarching standard protocol for communicating updates on which the substandards that will facilitate <span>GeoAR</span> can build.</p>
<p>Also important to note is that the <a id="o0is" title="Wave Federation Protocol docs which are all publicly available here" href="http://www.waveprotocol.org/" target="_blank">Wave Federation Protocol</a> allows anyone:</p>
<p><strong>&#8220;to run wave servers and become wave providers, for themselves, or as services for their users, and to &#8216;federate&#8217; waves, that is, to share waves with each other and with Google Wave&#8221;</strong> &#8211; via &#8220;the federation gateway and a federation proxy,&#8221; &#8220;based on an open extension to the <a href="http://www.waveprotocol.org/draft-protocol-spec#RFC3920">XMPP core</a> [RFC3920] protocol to allow near real-time communication between two wave servers.&#8221; See Reuven Cohen&#8217;s blog for more <a id="rmr3" title="here" href="http://reuvencohen.sys-con.com/node/980762" target="_blank">here</a> and <a id="mqxr" title="&quot;HTTP is Dead, Long Live the Real Time Cloud.&quot;" href="http://www.elasticvapor.com/2009/05/http-is-dead-long-live-realtime-cloud.html" target="_blank">here, &#8220;HTTP is Dead, Long Live the Real Time Cloud.&#8221;</a></p>
<p>Still, some people have expressed concern that an AR framework using the Google Wave protocol would give Google disproportionate influence. Will Google-specific functionality be an issue? How much stuff is Google-specific just because no one else is using it (yet)? And how much is Google-specific because it holds no value to anyone else but Google? These are some of the questions that have come up.</p>
<p>You are going to see a variety of suggestions for standards and specs for open AR coming out in the next few months, which, as Robert Rice of the <a href="http://www.arconsortium.org/" target="_blank">AR Consortium</a> points out, is <strong>&#8220;a good thing, we need that competition early on to settle down on best case.&#8221;</strong> Recently, <a href="http://www.mobilizy.com/" target="_blank">Mobilizy</a> has offered up ARML (&#8220;an augmented reality mark-up language specification based on the OpenGIS&#174; KML Encoding Standard (OGC KML) with extensions&#8221;) for consideration; see <a href="http://www.mobilizy.com/enpress-release-mobilizy-proposes-arml" target="_blank">here</a>.</p>
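<p>For a sense of the general shape of such a spec: since ARML, as proposed, extends KML, a point of interest stays an ordinary KML Placemark with AR-specific metadata carried in an extension namespace. The fragment below is an illustrative sketch only &#8211; the <code>ar:</code> element names and namespace URI are invented for this example and are not Mobilizy&#8217;s published ARML vocabulary.</p>

```xml
<!-- Illustrative only: a KML Placemark plus a hypothetical AR
     extension namespace. The ar: names here are invented for this
     sketch, not the actual ARML specification. -->
<kml xmlns="http://www.opengis.net/kml/2.2"
     xmlns:ar="http://example.org/ar/1.0">
  <Placemark>
    <name>Networked City exhibit</name>
    <description>A point of interest visible in an AR browser</description>
    <ar:provider>example-content-provider</ar:provider>
    <ar:icon>http://example.org/icons/exhibit.png</ar:icon>
    <Point>
      <!-- longitude, latitude, altitude -->
      <coordinates>126.6328,37.3891,0</coordinates>
    </Point>
  </Placemark>
</kml>
```

<p>The appeal of this approach is that any existing KML toolchain can still parse the geometry, while AR browsers read the extension elements they understand and ignore the rest.</p>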
<p>So it is, perhaps, also important to note that an open AR framework should be neutral/transparent to techniques of &#8220;reality recognition&#8221; and methodologies of registration/tracking, allowing various ones to work on the system as new techniques evolve, and supporting as many evolving standards as possible.</p>
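<p>In code terms, being &#8220;neutral to techniques of reality recognition&#8221; just means the framework talks to trackers through a narrow interface and never assumes how a target was found. A minimal Python sketch of such a plugin contract &#8211; with hypothetical names of my own, not any real framework&#8217;s API &#8211; might look like this:</p>

```python
from abc import ABC, abstractmethod

class Tracker(ABC):
    """Hypothetical plugin contract: the framework stays neutral to how
    'reality recognition' happens, so marker, markerless, and geo-based
    trackers can coexist behind the same interface."""

    @abstractmethod
    def recognize(self, frame):
        """Return a list of (target_id, pose) pairs found in a frame."""

class GeoTracker(Tracker):
    """Degenerate example: 'recognition' from a GPS fix alone."""

    def __init__(self, poi_db):
        self.poi_db = poi_db  # {target_id: (lat, lon)}

    def recognize(self, frame):
        # frame is assumed to carry the device's location fix;
        # the 'pose' here is just a lat/lon offset to each POI
        lat, lon = frame["lat"], frame["lon"]
        return [(tid, (plat - lat, plon - lon))
                for tid, (plat, plon) in self.poi_db.items()]
```

<p>A vision-based tracker (say, one wrapping Total Immersion&#8217;s or Metaio&#8217;s recognition engines) would implement the same <code>recognize</code> method against camera pixels, and the framework above it would not need to change.</p>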
<p>Augmented reality developers like Total Immersion, and others with powerful rendering/tracking AR software, should be able to use an open AR framework to exchange the data which their tracking will use. And the tracking/rendering problems they and other researchers have solved are much harder than figuring out data exchange on a standard infrastructure or protocol!</p>
<p>So I pricked up my ears when I heard Bruno Uzzan, CEO of <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a> &#8211; the first and currently the largest augmented reality company, with a 50-person R&amp;D team in France and offices in LA, where Bruno himself is now based &#8211; say: <strong>&#8220;Total Immersion is only months away from launching shared mobile augmented reality experiences using near field object recognition/tracking across multiple platforms&#8221;</strong> (for more details read my conversation with Bruno Uzzan <a href="#jumpto">below</a>).</p>
<p>I was happy when I asked Bruno about the possibilities for developing an open, distributed, multiuser augmented reality framework fully integrated with the internet and world wide web (possibly using Google Wave protocols), and he replied:</p>
<p><span id="pnk:" title="Click to view full content"><strong>&#8220;I think this is feasible. I think that&#8217;s doable, that&#8217;s just in my opinion. I mean some people might have another kind of opinion but I think that that&#8217;s definitely doable.&#8221;</strong></span></p>
<h3>Total Immersion &#8211; working with the &#8220;symbiosis between augmented reality and brands&#8221;</h3>
<p><a href="http://www.youtube.com/watch?v=I7jm-AsY0lU" target="_blank"><img class="alignnone size-medium wp-image-4457" title="dhj5mk2g_344g64g96cq_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_344g64g96cq_b-300x224.png" alt="dhj5mk2g_344g64g96cq_b" width="300" height="224" /></a></p>
<p>Total Immersion has created many of the best known and most ambitious augmented reality experiences for major brands to date, including Mattel&#8217;s <a title="new toys" href="http://www.readwriteweb.com/archives/mattels_new_web-enabled_avatar_toys_will_offer_augmented_reality.php">new AR toys</a> to be released in conjunction with the James Cameron film Avatar, <a id="dmas" title="AR baseball cards for Topps" href="http://www.youtube.com/watch?v=I7jm-AsY0lU">AR baseball cards for Topps</a>, <a href="http://www.youtube.com/watch?v=I7jm-AsY0lU" target="_blank">video here</a> (or click screenshot above), and the <a href="http://www.publishersweekly.com/article/CA6698612.html?industryid=47152" target="_blank">UK&#8217;s first augmented reality books</a>.</p>
<p>Bruno founded Total Immersion 10 years ago when he was just 27. And the kind of conviction it took to survive as an augmented reality business in the decade before augmented reality captured the world&#8217;s attention is remarkable.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_343dbsph2fz_b1.png"><img class="alignnone size-medium wp-image-4456" title="dhj5mk2g_343dbsph2fz_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_343dbsph2fz_b1-300x225.png" alt="dhj5mk2g_343dbsph2fz_b" width="300" height="225" /></a></p>
<p>AR&#8217;s first steps out into the world after 17 years as predominantly a lab science may be &#8220;wobbly&#8221; (what new technology isn&#8217;t?), and sometimes gloriously kitsch &#8211; check out <a id="d_eu" title="the riotous video of an AR Live Show Total Immersion produced in Korea" href="http://www.t-immersion.com/en,video-gallery,36.html" target="_blank">this riotous video of the 3D Interactive Live Show Total Immersion produced in Korea</a> (also see the <a href="http://augmented-reality-news.com/2009/09/15/entertainment-first-interactive-3d-live-show-now-open-in-south-korea/" target="_blank">Total Immersion Augmented Reality Blog</a> for more on TI&#8217;s turn-key Interactive 3D Live Show Solution).</p>
<p>As Lamantia points out <a id="eo6x" title="here" href="http://www.uxmatters.com/mt/archives/2009/08/inside-out-interaction-design-for-augmented-reality.php" target="_blank">here</a>, &#8220;projecting mixed realities into public, common, or social spaces makes them social by default.&#8221;</p>
<p>However, the potential for shared location-based augmented reality experiences is as yet untapped. So I see the entry of the most experienced commercial augmented reality company into mobile as pretty interesting. While smart phone AR still has significant limitations, and it certainly does differ from some of the futurist dreams of AR (see <a id="x3:y" title="Mok Oh's post here on his disappointment in this regard" href="http://allthingsv.com/2009/09/03/you-know-what-really-grinds-my-gears-augmented-reality/">Mok Oh&#8217;s post here on his disappointment in this regard</a>), it is significant that Total Immersion is committing to becoming a leader in mobile AR.</p>
<p>Our smart phones, the powerful networked sensor devices that so many people carry in their pockets, have proved themselves a &#8220;good enough for now&#8221; mediating device for early manifestations of the ubiquitous computing and augmented reality base pair. And now that AR and ubicomp are mixed into the rich, messy soup of everyday life, commerce, business, marketing, art, entertainment, and government, we should get ready to see these technologies grow up fast, and unfold in some surprising ways that lab science didn&#8217;t necessarily predict.</p>
<p>And, perhaps, the new dialogue between scientists and entrepreneurs may spur both communities to outdo themselves.</p>
<p>Particularly, as <a href="http://programmerjoe.com/" target="_blank">Joe Ludwig</a> notes: &#8220;It seems to me that the biggest disconnect between the academics and the entrepreneurs is that they disagree on how far we are from the finish line.&#8221;</p>
<p>See the comments on Ori Inbar&#8217;s post, <a title="Augmented Reality Entrepreneurship: Natural Evolution or Intelligent Design?" rel="bookmark" href="http://gamesalfresco.com/2009/09/22/augmented-reality-entrepreneurship-natural-evolution-or-intelligent-design/">Augmented Reality Entrepreneurship: Natural Evolution or Intelligent Design?</a>, for a courteous but spirited discussion of the potential benefits and frictions of the newly expanded AR community of researchers and entrepreneurs.</p>
<p>As <a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre </a>(see my long conversation with Blair<a href="http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/" target="_blank"> here</a>) notes:</p>
<p><strong>&#8220;not all academics and researchers are only interested in the traditional models of impact. Case in point: I wouldn&#8217;t be building unpublishable games, nor investing so much time talking to the press, entrepreneurs and VCs, if I did not believe strongly in the value of the impact I am having by doing that &#8211; and I know others with the same attitude.&#8221;</strong></p>
<p>In this vein, check out the Marble Game (<a href="http://www.youtube.com/watch?v=6AKgH4On65A&amp;feature=player_embedded" target="_blank">video here</a>) developed by Steve Feiner and his team at Columbia University. It&#8217;s enabled by Goblin XNA, an open source AR framework built on top of Microsoft&#8217;s XNA, which powers Xbox Live games, Zune games, and some Windows games. For more about Goblin XNA and AR from Columbia, <a href="http://graphics.cs.columbia.edu/projects/goblin/index.htm" target="_blank">see here</a>. (Hat tip to <a href="http://www.oreillynet.com/pub/au/125" target="_blank">Brian Jepson</a> for this link.)</p>
<p><a href="http://www.youtube.com/watch?v=6AKgH4On65A&amp;feature=player_embedded" target="_blank"><img class="alignnone size-medium wp-image-4528" title="Screen shot 2009-09-26 at 5.16.56 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-5.16.56-PM-300x182.png" alt="Screen shot 2009-09-26 at 5.16.56 PM" width="300" height="182" /></a></p>
<p>We are still waiting for AR specs sexy enough to get wide adoption &#8211; there is nothing totally game-changing in <a href="http://gigantico.squarespace.com/336554365346/2009/9/20/eye-for-an-iphone.html" target="_blank">Gigantico&#8217;s AR eyewear roundup</a> (though <a href="http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;d=PG01&amp;p=1&amp;u=%2Fnetahtml%2FPTO%2Fsrchnum.html&amp;r=1&amp;f=G&amp;l=50&amp;s1=%2220080088937%22.PGNR.&amp;OS=DN/20080088937&amp;RS=DN/20080088937" target="_blank">maybe note this Apple patent</a>). But at least researchers are not afraid to explore the possibilities of AR goggles.</p>
<p>But how far are we now, with or without sexy goggles, from a fuller expression of the base pair DNA of ubiquitous computing and augmented reality?</p>
<h3>We may have a LAN of things before we have an Internet of Things</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_345g9bxbwd3_b1.jpg"><img class="alignnone size-medium wp-image-4534" title="dhj5mk2g_345g9bxbwd3_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_345g9bxbwd3_b1-300x199.jpg" alt="dhj5mk2g_345g9bxbwd3_b" width="300" height="199" /></a></p>
<p><em>The picture above is from a workshop I attended at <a href="http://confluxfestival.org/2009/about/" target="_blank">Conflux</a> last weekend &#8211; <a href="http://confluxfestival.org/2009/events/workshops/natalie-jeremijenko/" target="_blank">Fish &#8217;n microChips</a>, with <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko</a>. We are at the site of the <a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">Amphibious Architecture</a> project (a commissioned work for <a href="http://www.sentientcity.net/exhibit/?cat=3" target="_blank">Toward the Sentient City</a>) and &#8220;a collaborative project with <a href="http://www.environmentalhealthclinic.net/environmental-health-clinic/" target="_blank">xClinic</a>, The Living and other intelligent creatures.&#8221;</em></p>
<p>We are probably as far from some grand futurist visions of ubiquitous computing as we are from some of the futurist visions of augmented reality. But as it turns out, that may not be a bad thing! Recently, <a href="http://twitter.com/mikekuniavsky" target="_blank">@mikekuniavsky</a> noted in a tweet:</p>
<p><span><span>&#8220;Another argument for the LAN of Things before the Internet of Things: <a rel="nofollow" href="http://tinyurl.com/lgp9uq" target="_blank">http://tinyurl.com/lgp9uq&#8221;</a></span></span></p>
<p>Bert Moore, <a href="http://www.aimglobal.org/members/news/templates/template.aspx?articleid=3553&amp;zoneid=24" target="_blank">in the article Mike linked to</a>, points out that the grand vision of an &#8220;internet of things&#8221; with everything connected to everything can &#8220;distract people from thinking about the benefits of RFID in smaller, more easily implemented and cost-justified applications.&#8221; The same argument, I think, applies to sensor networks and augmented reality.</p>
<p>In New York City, a series of commissioned works for the <a href="http://www.archleague.org/" target="_blank">Architectural League of New York&#8217;s</a> exhibit, <a href="http://www.sentientcity.net/exhibit/?cat=3" target="_blank">&#8220;Toward the Sentient City,&#8221;</a> is giving us the opportunity to dip our toes into the ocean of a &#8220;networked urbanism.&#8221; On only a small budget, two of the <a href="http://www.sentientcity.net/exhibit/?cat=4" target="_blank">five commissioned works</a>, <a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">Amphibious Architecture</a> and <a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a>, demonstrate how sensor networks can allow us to explore new kinds of communities &#8211; connecting people to environments in interesting ways to create new forms of social agency.</p>
<p><a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">&#8220;Amphibeous Architecture</a>&#8221; -Â  from The Living Architecture Lab at Columbia University Graduate School of Architecture, Planning and Preservation (Directors David Benjamin and Soo-in Yang) and Natalie Jeremijenko, Environmental Health Clinic at New York University, uses a skillfully built (electronics and water are notoriously hard to mix) array of partially submerged sensors to pierce the blinding, reflective surfaces of the riversÂ  surrounding Manhattan and to create a new two way relationship with the ecosystem below &#8211; the water, our neighbors the fish and even a beaver that lives in the water surrounding Manhattan.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-6.34.56-PM.png"><img class="alignnone size-medium wp-image-4536" title="Screen shot 2009-09-26 at 6.34.56 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-6.34.56-PM-300x125.png" alt="Screen shot 2009-09-26 at 6.34.56 PM" width="300" height="125" /></a></p>
<p><em>Image from <a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">Toward the Sentient City</a></em></p>
<p>In a similar spirit, &#8220;<a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a>&#8221; &#8211; Usman Haque, creative director; Nitipak &#8216;Dot&#8217; Samsen, designer; Ai Hasegawa, designer; Cesar Harada, designer; Barbara Jasinowicz, producer &#8211; creates a network of people and electronically assisted plants to explore what it takes to work together on energy consumption, and to experience the consequences of &#8220;selfish&#8221; and &#8220;unselfish&#8221; behavior interactively before it is too late to modify our actions.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-6.55.29-PM.png"><img class="alignnone size-thumbnail wp-image-4537" title="Screen shot 2009-09-26 at 6.55.29 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-6.55.29-PM-150x150.png" alt="Screen shot 2009-09-26 at 6.55.29 PM" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-9.37.06-PM.png"><img class="alignnone size-thumbnail wp-image-4548" title="Screen shot 2009-09-26 at 9.37.06 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-9.37.06-PM-150x150.png" alt="Screen shot 2009-09-26 at 9.37.06 PM" width="150" height="150" /></a></p>
<p><em>The &#8220;Greedy Switch&#8221; from <a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a> on the left. On the right, &#8220;The System&#8221; &#8211; click to enlarge.</em></p>
<p>Much more to come in another post on these works and &#8220;Toward the Sentient City.&#8221; Also an update on how <a href="http://www.pachube.com/">Pachube</a> &#8211; an important part of both these projects, and a very important contribution to ubiquitous computing because it creates the opportunity to connect environments and create mashups from diverse sensor data feeds &#8211; has matured since my interview with Pachube founder Usman Haque, <a href="http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">&#8220;Pachube, Patching the Planet,&#8221;</a> in January this year.</p>
<p>In the picture above, <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko</a> and <a id="r_oi" title="Jonathan Laventhol, Imagination" href="http://www.laventhol.com/about" target="_blank">Jonathan Laventhol</a> give the <a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">Amphibious Architecture</a> sensor array a last look over before it is lowered into the East River. Jonathan was on a busman&#8217;s holiday, helping out at the pre-launch of Amphibious Architecture near Manhattan Bridge, NYC.</p>
<p>I was very happy to get a chance to talk to <a id="r_oi" title="Jonathan Laventhol, Imagination" href="http://www.laventhol.com/about" target="_blank">Jonathan Laventhol</a> &#8211; more on our conversation in another post. Jonathan is <a title="Jonathan Laventhol, Imagination" href="http://www.laventhol.com/about" target="_blank">CTO of Imagination &#8211; one of the world&#8217;s leading design, events, and branding agencies</a>. We talked about the importance of <a href="http://www.pachube.com/" target="_blank">Pachube</a>, which Jonathan called &#8220;The Facebook of Data,&#8221; and about how the <strong>symbiosis between brands and augmented reality</strong>, along with healthcare applications, would be key to augmented reality emerging into the mainstream.</p>
<p><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_340djvd2thc_b.jpg"><img class="alignnone size-medium wp-image-4453" title="dhj5mk2g_340djvd2thc_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_340djvd2thc_b-235x300.jpg" alt="dhj5mk2g_340djvd2thc_b" width="235" height="300" /></a></em></p>
<p>Natalie Jeremijenko&#8217;s workshop at Conflux &#8211; on the social negotiation of technology, and how <a href="http://speedbird.wordpress.com/my-book-everyware-the-dawning-age-of-ubiquitous-computing/" target="_blank">&#8220;everyware&#8221;</a> can give us the chance to experience new forms of agency and connection &#8211; was totally inspiring. I will cover this too in another post. I have so much awesome stuff to write about at the moment!</p>
<p>None of the projects in &#8220;Toward the Sentient City&#8221; included a mobile augmented reality, or &#8220;magic lens,&#8221; component, but they all pointed to why &#8220;enchanted windows into our newly inside-out reality&#8221; are going to be so important &#8211; and why the DNA base pair of ubicomp and augmented reality can really do stuff that matters.</p>
<h3>Shangri-La &#8211; &#8220;Transfigured City&#8221;</h3>
<p><a href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/"><img class="alignnone size-medium wp-image-4452" title="dhj5mk2g_342g43n6w7k_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_342g43n6w7k_b-300x249.png" alt="dhj5mk2g_342g43n6w7k_b" width="300" height="249" /></a></p>
<p><em>Screenshot from the <a href="http://en.wikipedia.org/wiki/Shangri-La_%28novel%29" target="_blank">Shangri-La</a> episode <a id="cwnc" title="The Transfigured City" href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/" target="_blank">Transfigured City</a></em></p>
<p>In my AR Consortium founder member interview series, I have found that, understandably, the visionary founders of these first augmented reality companies are a little reticent about sharing their full vision. They are basically in stealth mode in this regard. So, since you will not get a fully drawn scenario of his vision for a next generation of shared augmented reality experiences from my interview with <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a> founder and CEO Bruno Uzzan, here&#8217;s a really interesting episode from the anime Shangri-La, <a id="cwnc" title="The Transfigured City" href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/" target="_blank">Transfigured City</a>, to mull over instead.</p>
<p>As you can tell from this rather long and circuitous intro to my conversation with Bruno Uzzan, I have been investigating shared augmented realities pretty intensively recently. And Mike Kuniavsky pointed me to <em><a href="http://en.wikipedia.org/wiki/Shangri-La_%28novel%29" target="_blank">Shangri-La</a></em>, and <a id="cwnc" title="The Transfigured City" href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/" target="_blank">Transfigured City</a>, in a conversation with Mark Shepard, after Mark&#8217;s presentation at Conflux, <a href="http://confluxfestival.org/2009/events/workshops/mark-shepard/" target="_blank">Sentient City Survival Kit</a>.</p>
<p><a href="http://thingm.com/about-us/team/mike-kuniavsky.html">Mike Kuniavsky</a>, with <a href="http://thingm.com/about-us/team/tod-e-kurt.html">Tod E. Kurt</a>, is founder of <a href="http://thingm.com/home.html" target="_blank">ThingM</a>, a ubiquitous computing device studio. Mike also researches, designs and writes about people&#8217;s experiences at the intersection of technology and everyday life &#8211; see his blog <a href="http://www.orangecone.com/" target="_blank">Orange Cone</a>. And I interviewed Mike at ETech &#8211; see <a href="../../2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/" target="_blank">here</a>.</p>
<p>In <a id="cwnc" title="The Transfigured City" href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/" target="_blank">Transfigured City</a>, the &#8220;Metal Age&#8221; group has to figure out how to share and communicate in a city transfigured by augmented realities/virtualities, where no one sees the same place in the same way. Only one character can figure out, from her previous experience of the city, the relationship between the transfigured city and how it used to be.</p>
<p>The conversation I had with <a href="http://www.orangecone.com/" target="_blank">Mike Kuniavsky</a> on <a id="cwnc" title="The Transfigured City" href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/" target="_blank">The Transfigured City</a> continued at a picnic in Washington Square Park the next day with Elizabeth Goodman, whom I met at ETech when she gave a brilliant presentation, <a id="eag1" title="Designing for Urban Green Space" href="http://en.oreilly.com/et2009/public/schedule/detail/5562" target="_blank">Designing for Urban Green Space</a>. We covered so many areas at the picnic related to ubiquitous computing and augmented realities that this conversation probably deserves a post of its own (my writing to-do list is growing longer!).</p>
<p><a id="on28" title="The plot synopsis for Shangri-La" href="http://en.wikipedia.org/wiki/Shangri-La_%28novel%29" target="_blank">The plot synopsis for Shangri-La</a>:</p>
<p><strong>&#8220;In the mid-21st century, the international committee decided to forcefully reduce CO2 emission levels to mitigate the global warming crisis. As a result, the economic market was transferred mainly into the trade of carbon. A great earthquake destroys much of Japan, yet the carbon tax placed on the country is not lifted, so Tokyo is turned into the world&#8217;s largest &#8220;jungle-polis&#8221; that absorbs carbon dioxide. Project Atlas is commenced to plan the rebuilding of Tokyo and oversee the government organization, which the Metal Age group opposes due to its oppressive nature. However, Atlas is only built with enough room for 3,500,000 people and most people are not allowed to migrate into the city. The disparity between the elite within Atlas and the refugees living in the jungles outside its walls sets up the background of the story.&#8221;</strong></p>
<p><a name="jumpto"><span style="font-size: medium;"><strong> Talking With Bruno Uzzan</strong></span></a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/BrunoUzzanpost.jpg"><img class="alignnone size-medium wp-image-4494" title="BrunoUzzanpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/BrunoUzzanpost-225x300.jpg" alt="BrunoUzzanpost" width="225" height="300" /></a></p>
<p><strong>Tish Shute:</strong> We won&#8217;t have fully opened the Pandora&#8217;s Box of Augmented Realities until we have ubiquitous, shared augmented realities, will we?</p>
<p><span id="p-xo" title="Click to view full content"><strong>Bruno Uzzan: Yes. The most important thing for augmented reality is the experience we want to share. Now we are working on the cell phone; we can potentially do some marketing components that we have already developed on the cell phone. Done. It&#8217;s working.</strong></span></p>
<p><strong>But the most interesting part of it is how these new components [cell phone AR] will be used for marketing campaigns by brands. And we are also pretty well positioned to take some of the AR that we currently have working on Mac and PC and transform it into applications working on mobile devices.</strong></p>
<p><strong>Tish Shute:</strong> We haven&#8217;t really experienced yet what it means to actually share mobile AR experiences?</p>
<p><strong>Bruno Uzzan: It&#8217;s hard &#8211; we did a Facebook app. It&#8217;s a first try; it has a way to go. But </strong><span id="c8ek" title="Click to view full content"><strong>to go more and more into social is the way forward for us &#8211; to share and expand AR experiences. But yes, I mean, what you&#8217;re seeing is how two people on two different applications can share that same experience. For sure we are going in that direction. We are currently working on those kinds of solutions &#8211; how people can share an experience together at the same time. That&#8217;s how we start creating excitement in augmented reality, and it&#8217;s coming up.</strong></span></p>
<p><strong>It&#8217;s a new market and there&#8217;s so much more in store for augmented reality. You know, some people are telling me, don&#8217;t you believe that augmented reality is a gimmick? It will be a trend for a few weeks or a few months and then gone? I say, you&#8217;re kidding me. This is only the beginning. I mean I can assure you that the applications that are on the market today are one percent of what we will have five years from now.</strong></p>
<p><strong>Tish Shute: </strong>I agree.</p>
<p><strong>Bruno Uzzan: And I&#8217;m sure that augmented reality will be a part of a lot of components that we are currently using today &#8211; GPS, web browser, glasses. I mean, there are so many applications that will come up shortly. This is only the beginning. I&#8217;m completely convinced that augmented reality will be in three years from now what virtual reality is today, which is a billion dollar market. I know that it&#8217;s not just a gimmick of a few weeks or a few months, because so many brands are jumping into it, spending money, exploring solutions. I know that what they are willing to do, and what we are willing to do, is not just short term but also middle and long term. And that&#8217;s what makes this adventure pretty much unique, and what makes creating a cutting edge technology very, very exciting for us.</strong></p>
<p><span id="pb9s" title="Click to view full content"><strong>Tish Shute:</strong> First, could you explain more to me about your partnership with Int13? I am not sure I understand what is in the arrangement from Total Immersion&#8217;s point of view. I mean, what happens with your own mobile software development? Haven&#8217;t you only been licensed the Int13 SDK for a limited period of time, with limited access to all its power? </span><span id="p_2y" title="Click to view full content"><a href="http://gamesalfresco.com/2009/09/15/why-int13-got-in-bed-with-total-immersion/" target="_blank">Stephane from Int13 said to Ori on Games Alfresco, here,</a> &#8220;we have licensed the SDK4 for two years,&#8221; and then Ori asks, &#8220;but you have basically kept the power to yourselves, right?&#8221; So if they are the only ones who can enhance it and develop the software, where will TI be in two years in mobile if you haven&#8217;t really had the chance to develop your own software?</span></p>
<p><span id="j5co" title="Click to view full content"><strong>Bruno Uzzan: Actually it&#8217;s a real win-win situation. Int13 is a very small company and they have so many requests they can&#8217;t possibly fulfill them all. So this is a way for both of us to be, as quickly as possible, the first mobile provider for all the requests we have. Also they give us exclusivity, so nobody else can use the Int13 SDK for such applications. I think that it is a good partnership.</strong></span></p>
<p><strong>And concerning our own mobile application&#8230; First of all, we currently have some mobile applications working. But with Int13 we have a mobile solution that can work on many different devices. That&#8217;s a fact and that&#8217;s working. And, believe me, you will hear from us a lot more about this soon. We are fully independent in our mobile development. The reason we closed the partnership with Int13 is to be able to deploy mobile in a broad way.</strong></p>
<p><strong>I mean, you know that the difficulty with mobile AR is that each separate device needs some customization. Working on the iPhone is different from working on the Nokia, different from working on the Palm, different from working on the Samsung. Each of them has its own operating system inside, and so we were interested in Int13&#8217;s very clever embedded solution that allows our solutions to work across many platforms.</strong></p>
<p><strong>The reason we are working with Int13 is that we are able to work on so many mobile devices, thanks to Int13. And in the mobile AR race that we are currently in, the next two years will be extremely important to us&#8230;</strong></p>
<p><span id="z_5s" title="Click to view full content"><strong>Tish Shute:</strong> OK, that definitely clarifies it a lot. So Int13 has done an embedded solution to allow TI developed AR solutions to work easily across many devices?</span></p>
<p><span id="y.wt" title="Click to view full content"><strong>Bruno Uzzan: Yes, they have a kind of embedded solution, a way to address new cell phones extremely quickly&#8230; But, currently on our side, we are in discussions with a mobile company&#8230; and that only refers to some very specific mobile devices. And what they have is also a way to embed our technology deeper into mobile, so that we can more quickly have&#8230; applications that work on a large number of cell phones.</strong></span><span id="mufh" title="Click to view full content"> </span></p>
<p><strong>Tish Shute:</strong> So, basically it means you don&#8217;t have to go through some complicated negotiations with each of the cell phone companies, is what you are saying?</p>
<p><strong>Bruno Uzzan: Not only negotiations, but also hard development. You know? Working on Windows Mobile is completely different from working on Palm OS. You know, that&#8217;s different! It&#8217;s a big job to have a mobile application working on many other devices. So Int13 provides a way for us to save some time, and some development cost too.</strong></p>
<p><strong>Tish Shute:</strong> And Int13 doesn&#8217;t have powerful AR development tools like <a href="http://www.t-immersion.com/en,interactive-kiosk,32.html" target="_blank">D&#8217;fusion</a>, right?</p>
<p><strong> Bruno Uzzan: Right! That&#8217;s right. That&#8217;s why we say it&#8217;s a true win-win solution. They can benefit from our work too. And we can benefit from their work, in order to deploy quicker and faster mobile solutions. </strong></p>
<p><strong>Tish Shute:</strong> Now, the second thing is&#8230; there is a lot of debate and disagreement about how far mobile augmented reality is from delivering something more than the &#8220;post-it&#8221; approach that has been much publicized in recent months via all the AR browser apps.</p>
<p>But from my understanding from the conversation we had earlier this summer (see below), Total Immersion is targeting a much higher level of mobile augmented reality than we&#8217;ve seen to date?</p>
<p><strong>Bruno: Yes, the browser apps we have seen are a kind of augmented reality, but not exactly the way we see it. Let me explain why. With this kind of application it&#8217;s true that you can overlay 3D information and video. That&#8217;s a fact. So, in a sense, that&#8217;s augmented reality. But the way that they work out the position of the 3D on that video is by using compass and GPS information&#8230; so it means that this AR solution will work only on buildings and physical objects that are FIXED &#8211; in a fixed and known position.</strong></p>
<p><strong>So you want to go to a theater?</strong></p>
<p><span id="a9qv" title="Click to view full content"><strong>The theater is here; for sure it will not move, so you know the position of the theater, and that&#8217;s a fact that you can superimpose an object on the theater. That&#8217;s what can be done currently. What we are achieving and what we are doing on mobile is more than that. We want to be able to port our solution with trading cards, with brands, into a smart phone.</strong></span></p>
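As an aside, the &#8220;fixed position&#8221; overlay Bruno describes here is simple enough to sketch: with GPS coordinates for the user and the point of interest, plus a compass heading, screen placement is pure geometry, with no recognition of the object itself. The following is an illustrative Python sketch, not code from any actual AR browser; the function names, the 60-degree field of view, and the 320-pixel screen width are assumptions:

```python
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Great-circle initial bearing from the user (lat1, lon1)
    to the point of interest (lat2, lon2), in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(user_lat, user_lon, heading, poi_lat, poi_lon,
             fov=60.0, screen_w=320):
    """Horizontal pixel position of a fixed POI, or None if it falls
    outside the camera's field of view. `heading` comes from the
    compass, the coordinates from GPS -- no image recognition at all."""
    rel = (bearing_to(user_lat, user_lon, poi_lat, poi_lon)
           - heading + 180) % 360 - 180  # signed offset from the view axis
    if abs(rel) > fov / 2:
        return None
    return round((rel / fov + 0.5) * screen_w)

# Facing due north (heading 0), a POI due north of the user lands
# in the middle of the screen:
print(screen_x(40.700, -73.990, 0.0, 40.710, -73.990))  # -> 160
```

A can of soda has no fixed coordinates, which is exactly why this scheme cannot handle the use case Bruno describes next.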
<p><strong>I&#8217;m assuming that you want a can, a drink can, to be able to trigger an experience. The only way you can do that is to be able to recognize what the can is. And the current solutions that are out there can&#8217;t do that; it&#8217;s impossible. </strong></p>
<p><strong>Tish Shute:</strong> Right, yes. There&#8217;s no near-field object recognition at all in these early browser apps.</p>
<p><strong>Bruno Uzzan: And the solution we have is that we can recognize a can &#8212; in a very, very precise way &#8212; and that activates geo-location, so we can superimpose 3D. In that case, it opens up all the applications that we currently have, so they could work on mobile.</strong></p>
<p><strong>Tish Shute:</strong> So for example, if youâ€™re working with a soft drink company, people can trigger that experience wherever they see that can?</p>
<p><strong>Bruno Uzzan: Correct. </strong></p>
<p><strong>Tish Shute:</strong> Yes. Yes, I assumed that was what you&#8217;re doing.</p>
<p><strong>Bruno Uzzan: We believe &#8212; and maybe thatâ€™s not the case, but we believe that our marker-less tracking technology is pretty much unique on the mobile devices.</strong></p>
<p><strong>I haven&#8217;t seen yet, from anyone, a full augmented reality mobile solution working.</strong></p>
<p><strong>I really see AR being part of the Web 3.0 next generation. I mean, the vision I have is that, you know, today, when you want information, you go to a website and then you find your information. With AR, the future I see is the opposite. You want information about a product, you just show it to your computer and the information will automatically pop up. I see here a new way to market some key messages, a new way to get information: some physical products, by themselves, could be a way to get information, and you don&#8217;t have to search for it anymore; it comes out to you.</strong></p>
<p><strong>AR is definitely, for me, one of these components. It&#8217;s not just that AR is a solution; AR itself will create this kind of change in how information is displayed. I&#8217;m seeing here something that could be part of a new way to have access to information. And that&#8217;s part of the vision I have. Whether it is through mobile phone or web, PC, Mac, whatever, I really believe that this new generation of receiving information will come shortly and could be part of the new 3.0 generation of the web. </strong></p>
<p><strong>Tish Shute:</strong> My friend <a id="evae" title="Gene Becker" href="http://www.genebecker.com/" target="_blank">Gene Becker</a> did <a href="http://www.genebecker.com/2009/09/thinking-about-design-strategies-for-magic-lens-ar/" target="_blank">an interesting post recently on some of the current limitations of mobile AR</a> where he pointed out the problem of:</p>
<p><em><strong>&#8220;Simplistic, non-standard data formats</strong> &#8211; POIs, the geo-annotated data that many of these apps display, are mostly very simple one-dimensional points of lat/long coordinates, plus a few bytes of metadata. Despite their simplicity there has been no real standardization of POI formats; so far, data providers and AR app developers are only giving lip service to open interoperability. Furthermore, they are not looking ahead to future capabilities that will require more sophisticated data representations. At the same time, there is a large community of GIS, mapping and Geoweb experts who have defined open formats such as <a href="http://georss.org/" target="_blank">GeoRSS</a>, <a href="http://geojson.org/" target="_blank">GeoJSON</a> and <a href="http://code.google.com/apis/kml/documentation/" target="_blank">KML</a> that may be suitable for mobile AR use and standardization.&#8221;</em></p>
<p><strong>Bruno Uzzan: That&#8217;s interesting. I know exactly what he is referring to. He is mainly referring to localization and how you can have quick, accurate localization. If you look at current solutions, and you look at the 3D superimposed on the video, the 3D is shaking a lot. I don&#8217;t know if you have seen that in some of these early efforts.</strong></p>
<p><strong>It&#8217;s hard to use, because part of the magic of augmented reality is when the 3D is inserted in a very easy and smooth way into your solution. Here, when you see this 2D or 3D overlay on the video, it&#8217;s shaking a lot. One reason for this is that the GPS and compass are not accurate enough to pin down the exact location of the user. And here, what Gene says is interesting. I think we are addressing this localization issue in a pretty smart way.</strong></p>
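<p><em>[The shaking Bruno describes comes from feeding raw GPS/compass readings straight into the renderer. One common, generic mitigation, not Total Immersion&#8217;s method, is to low-pass filter the sensor stream. A minimal Python sketch for a compass heading, with 360&#176; wraparound handled; the function name and sample values are illustrative:]</em></p>

```python
def smooth_heading(prev_deg, new_deg, alpha=0.2):
    """Exponentially smooth a noisy compass heading (degrees),
    handling wraparound so 359 -> 1 moves forward, not backward."""
    # Smallest signed angular difference, mapped into [-180, 180).
    diff = (new_deg - prev_deg + 180.0) % 360.0 - 180.0
    return (prev_deg + alpha * diff) % 360.0

# Jittery samples around north: the smoothed value drifts gently
# instead of snapping with every reading (values are illustrative).
heading = 350.0
for reading in [352.0, 8.0, 355.0, 2.0]:
    heading = smooth_heading(heading, reading)
print(round(heading, 1))
```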
<p><strong>But to be frank with you, I don&#8217;t believe mobile augmented reality in the extremely short term (I&#8217;m talking about three weeks, one, two months) is mature enough for good AR applications. It will be shortly. But for now it is more proof of concept than a true and easy application to use. </strong></p>
<p><strong>We are starting to see a lot of new applications coming out, but I really believe that marketing and entertainment are the two key markets for AR right now.</strong></p>
<p><strong>I&#8217;ve been working ten years in augmented reality. And, eight years ago, when I was talking about augmented reality, I was E.T., you know? Nobody understood what I said, and they thought it was crazy. And now, today, yes, it&#8217;s completely different.</strong></p>
<p><strong>Tish Shute:</strong> The Pandora&#8217;s Box of Augmented Realities, in my view, is an open, universal, standard, distributed, multiuser augmented reality framework fully integrated with the internet and world wide web. I have been looking into Google Wave protocols as a basis for this. Would you be interested in this? Do you think it is feasible?</p>
<p><strong>Bruno Uzzan: I think this is feasible. I think that&#8217;s doable; that&#8217;s just my opinion. I mean, some people might have another kind of opinion, but I think that&#8217;s definitely doable.</strong></p>
<p><strong>Tish Shute:</strong> Yes I suppose an open AR Framework involves cooperation and collaboration, it is more about business and politics than technological problems.</p>
<p><strong> Bruno Uzzan: Yes! Actually the Web is politics. Business is politics. </strong></p>
<p><strong>Tish Shute: </strong>I would be interested if anyone in your R&amp;D team would be interested in looking at some of the ideas that are emerging in our little discussion of Google Wave and an Open AR Framework, to offer feedback. It is an interesting time now to input on the Wave Federation Protocol docs, because nothing is set in stone right now.</p>
<p><strong>Bruno Uzzan: Just shoot me an email. I&#8217;ll try to put you in touch with the right person, a team member who can input on this.</strong></p>
<p><strong>Tish Shute: </strong>For mobile augmented reality the best thing we&#8217;ve got now is the phone, right?</p>
<p><strong>Bruno Uzzan: Right. </strong></p>
<p><strong>Tish Shute:</strong> And the only way we can use the phone is by holding it up, right? Isn&#8217;t this a bit of an obstacle as you introduce better object recognition and tracking? People are going to have to stop moving to use their phone. What do you feel about that experience? Isn&#8217;t AR eyewear an essential part of a tightly registered AR experience?</p>
<p><strong>Bruno Uzzan: We don&#8217;t do hardware, and we don&#8217;t have a current solution for eyewear that would do all we need for a good mobile AR experience, so I guess we don&#8217;t have the answer for that yet. But we are beginning to see the next generation of these glasses.</strong></p>
<p><strong>Tish Shute:</strong> But you&#8217;re happy enough with the mobile experience of augmented reality on smart phones that you&#8217;re investing in this next generation of software for this.</p>
<p><strong>Bruno Uzzan: Yes, I know. We know that some applications will not work on the iPhone. And yes, whatever you do, you still need to hold the iPhone, so it means you can&#8217;t play with your hands anymore. So we know that, partially, some AR solutions we have on other platforms will lose their magical effect on the iPhone.</strong></p>
<p><strong>But I&#8217;m starting to see on the market some glasses that could perhaps be not too expensive &#8212; that&#8217;s a challenge! And easy to use &#8212; that&#8217;s another big challenge. And that could fit on anybody&#8217;s face and head &#8212; there&#8217;s another big challenge. So yes, I&#8217;m starting to see that, but so far AR glasses are only practical for some very, very specific applications, like design or theme parks, or some specific location where it makes sense to move forward with glasses.</strong></p>
<p><strong>I don&#8217;t believe that kids will use glasses in our toys and for games in the next months, or maybe the next one or two years. But maybe something will come out shortly that could be a big breakthrough and enable us to think another way. But from what we have seen so far and from what we know in this hardware market, I don&#8217;t believe that currently there is a workable solution.</strong></p>
<p><strong>Note: The following section of the interview took place earlier in the Summer.</strong></p>
<p><strong>Tish Shute:</strong> You are the first commercial AR company &#8211; you started in 1999 right?</p>
<p><strong>Bruno Uzzan: Yes, you are right. We started extremely early in this augmented reality market. We were the first company worldwide to start doing augmented reality and to start promoting it. So it&#8217;s true, we are pretty old players, although the market has only been getting bigger for the last year and a half. For a long time we were alone in the market, and the market was not really there.</strong></p>
<p><strong>But for the past 8 months, the company has been growing really fast.</strong></p>
<p><strong>Tish Shute:</strong> Yes, I&#8217;m sure. Congratulations on hanging in there long enough to get the payoff!</p>
<p><strong> Bruno Uzzan: You know, my background is in finance. So I have been driving the company for many years in a very cash-efficient way. We have been waiting for the market to reach maturity before starting to make investments. That&#8217;s the reason we are still here, and that&#8217;s the reason I think we managed pretty smartly the cash that we raised for the company.</strong></p>
<p><strong>Tish Shute:</strong> Yes, there is a saying that when a market takes off you can tell the pioneers because they are the ones with the arrows in their backs. But I am glad you are dodging the arrows!</p>
<p><strong>Bruno Uzzan: You know, I&#8217;ve always driven the company with revenue. And because revenue was not there at the beginning I was extremely cautious about the cash. So now that the company is getting some revenue, for sure we are making more and more investments, and taking advantage of our situation as a worldwide leader of augmented reality.</strong></p>
<p><strong>This situation was not as easy as it appears today, but it&#8217;s now getting better. As you can see, AR, Augmented Reality, has very good momentum, and we are benefiting a lot from all this momentum for augmented reality right now.</strong></p>
<p><strong>Tish Shute:</strong> You&#8217;ve been very involved in researching and developing augmented reality tools. Are you still as active in the research area, or are you too busy keeping up with work for hire now to be working on research and building new technology for Augmented Reality?</p>
<p><strong>Bruno Uzzan: Both. First of all, we are part of a lot of projects, either directly with clients like Mattel or with partners that are using our technology to promote and develop other AR projects. From what we have seen, many, many augmented reality projects are currently being done with our solutions.</strong></p>
<p><strong>To continue with your previous question: so we are being perceived as the leader in that space, and we have some pretty heavy demand for our services. But we are coming up with new technology, of course, still connected to Augmented Reality. Our R &amp; D is working in two different directions, which of course also bind together.</strong></p>
<p><strong>The first one is platform development. We want Augmented Reality to work with as many platforms as possible &#8211; PC, Mac, mobile, game consoles; all those are the platforms that we are targeting. We are currently doing a lot of work in the R &amp; D team on cross-platform compatibility.</strong></p>
<p><strong>Tish Shute:</strong> Robert Rice said recently, &#8220;markers and webcams equal Photoshop page curls&#8230;&#8221;</p>
<p><strong>Bruno Uzzan: Yes. There are so many concerns with markers. The quality is extremely bad. As soon as you hide a part of the marker, even a slight part of the marker, you&#8217;re dead; you can&#8217;t track the object any more. Compare that to our solution: whether you play with cards or with a Mattel toy, even if you hide a part of the toy, it&#8217;s still working.</strong></p>
<p><strong> Tish Shute:</strong> But you haven&#8217;t offered the public an SDK for your engine, right? Basically the way people get access to your tools is by working in a partnership with Total Immersion, right?</p>
<p><strong>Bruno Uzzan: Correct. </strong></p>
<p><strong>Tish Shute:</strong> Do you think in the future you might open your SDK? Are you considering that?</p>
<p><strong>Bruno Uzzan: Yes, it would be interesting.</strong></p>
<p><strong>Tish Shute:</strong> So that is something we can see coming soon?</p>
<p><strong>Bruno Uzzan: Maybe, because it&#8217;s true that Total Immersion is starting to be mature enough for this kind of tool. The only thing is that we have to respect good timing for that. It&#8217;s a big decision. You know what I mean? It is a big, big decision. We would then compete with others using our technology. </strong></p>
<p><strong>Tish Shute:</strong> Oh I know, it is a big decision when you have so much skin in the game! But it would be nice to have your SDK being THE platform for AR, wouldn&#8217;t it?</p>
<p><strong> Bruno Uzzan: It is a really big decision that we can&#8217;t just take like that, you know. A lot of friends have told me you have to be extremely careful about timing. This timing is pretty much connected to the maturity of the market. For sure, we see the market becoming more and more mature. But there is a lot of low-hanging fruit we still want to address.</strong></p>
<p><strong>To get the best value possible from all the publicity we have and all the clients we have now. </strong></p>
<p><strong>Tish Shute:</strong> Yes, I know. You&#8217;ve been in this game so long. Now, there is an interesting question here about tools and platforms, because, you know, A.R., augmented reality, has already expanded beyond its original purist definition. And when I talk to people about augmented reality, there are actually a lot of different ideas and priorities about where the tools should go right now. Obviously we have these kind of browser-like applications, but these browser-like applications are not dealing with recognizing near-field objects yet. What are your priorities for tool development, and what are your priorities for AR development in the future? What areas are you going to focus on? Oh dear, that is a rambling question!</p>
<p><strong>Bruno Uzzan: [laughter] So, one of our first priorities is that we need to create our software with one development, one installer, one software that can be spread across different platforms. The same application, the same software, can be used either on a PC, Mac, phone or console. That&#8217;s a lot of work, because it means that our platform has to address many, many different devices, and that&#8217;s a big priority for us because we received this request from our clients. We want to be able to use one application on many different platforms and devices. So, that&#8217;s the first one.</strong></p>
<p><strong>And the second one is to add more and more interactivity between the real and the virtual world. So, we are working on some improvements to add real components that will interact with the virtual. That is also part of our big strategy and direction: these two worlds can more and more be bridged together, linked together, so they can interact one with the other.</strong></p>
<p><strong>Our R&amp;D guys are working on the real world interacting more with the virtual world. And I have started seeing some results which are pretty much crazy, and this will be ready for next year.</strong></p>
<p><strong>There are so many different directions for interaction between the real world and the virtual world to develop. I&#8217;m sure ten years from now you&#8217;re going to have AR applications everywhere. It&#8217;s not just temporary fashion stuff or a gimmick for a few months. I mean, we are getting there; it&#8217;s getting stronger and stronger and we are getting a good adoption rate from our consumers. They like it, they test it, they play with it, and brands want more, people want more, and it&#8217;s getting bigger and bigger.</strong></p>
<p><strong>Tish Shute:</strong> Yes, and I totally agree; it&#8217;s not a gimmick, because the interaction between &#8220;virtual&#8221; and &#8220;real&#8221; enhances the magic of both. Another question, about your R&amp;D operation: is your R&amp;D still in France, or have you moved totally out to LA?</p>
<p><strong>Bruno Uzzan: We are 50 people in France, and I started this LA office two years ago and moved to LA permanently two years ago. So I&#8217;m now permanently located in the US to take care of the US office, knowing that revenues are really getting bigger and bigger in the US. It means that we are getting a lot of traction, working with large companies, and now I&#8217;m currently located in the US.</strong></p>
<p><strong>Tish Shute:</strong> My sister lives in Paris. Could I visit your R&amp;D lab at some point? I&#8217;d love to visit!</p>
<p><strong>Bruno Uzzan: Yeah, sure, sure, sure. I mean, if you want to go. You won&#8217;t have access to all the research, but if you want to go out and meet the team, please do.</strong></p>
<p><strong>Tish Shute:</strong> I&#8217;d love to.</p>
<p><strong> Bruno Uzzan: No problem. Shoot me an email and I will introduce you to Eric Gehl; he is the COO of the French team, and he can definitely take care of that. </strong></p>
<p><strong>Tish Shute:</strong> That would be fun. Thank you!</p>
<p>Recently, AR browser applications have really caught the imagination of the web community, e.g. Layar and Wikitude. Where do you think the most important market for AR is at the moment: entertainment, green tech, business, education?</p>
<p><strong>Bruno Uzzan: I think that all of those you mention will be important. The first one that grabbed my attention is entertainment, particularly digital marketing, because they are always searching for new ways to interact with players or consumers. But it&#8217;s just the tip of the iceberg; I mean, medical applications using augmented reality could be huge. Education and edutainment are definitely using more and more augmented reality components, and we are talking with big companies that are considering using augmentation for education. Museums are very important too. Also augmentation as a kind of free sales tool. You know, there are so many applications, design, architecture &#8211; so many directions that it&#8217;s hard to say today which one will take the lead.</strong></p>
<p><strong>But I do believe that in the short term the ones that are really, really moving fast are the entertainment business and the digital marketing business. </strong></p>
<p><strong>Tish Shute:</strong> What do you think are the biggest shortcomings with current augmented reality and what are the obstacles that no one has solved yet?</p>
<p><strong>Bruno Uzzan: I think the cell phone is not fully ready for augmented reality. A lot of people are working on that, but there are still a lot of constraints to getting augmented reality working on a cell phone. And from what I have heard, a lot of manufacturers and a lot of companies are working in directions that are going to help us a lot to develop some great cell phone applications.</strong></p>
<p><strong>And I think that&#8217;s one of the biggest parts of the game. All the applications that you see on cell phones so far are just gimmicks. The next big key is how to transform gimmick cell phone applications into real, industrial, robust applications that are going to work on a cell phone. So I think that&#8217;s a big challenge for this year. </strong></p>
<p><strong>Most of what we see now is just matching and overlaying some 2D components on a video. This is not what I call AR. With this kind of application, you are far away from doing the registration that we need to do; you can&#8217;t do it. So here&#8217;s the challenge: &#8220;How can you get a Topps application working on a cell phone? That&#8217;s the big challenge. How can we make that work!&#8221; You can&#8217;t today get a real AR Topps application working on a cell phone, because there&#8217;s no cell phone that&#8217;s actually ready. But we are working on it, and for the first one that can make that work, it&#8217;s going to be huge.</strong></p>
<p><strong>When you are working with good AR components you need a lot of CPU and GPU power. Today, new cell phones have started to be more and more ready for augmented reality, but you need a really good cell phone to make it work. You can&#8217;t choose an old cell phone to make it work, because you have some recognition, some tracking, some rendering; so you can&#8217;t choose a two-year-old Nokia cell phone to make that work. For sure the newest iPhone is the one that can make it work, but that&#8217;s it for now. There is a lot of research, from large cell phone companies, to get more CPU and GPU into their cell phones. But so far we are also waiting for these devices to be released to consumers.</strong></p>
<p><strong>Tish Shute: </strong>And the current economic climate has put a damper on MIDs, hasn&#8217;t it? But who can tell? It depends on what price point a new MID comes out at, right?</p>
<p><strong>Bruno Uzzan: Correct.</strong></p>
<p><strong>Tish Shute:</strong> Yes, I agree. But basically the interesting thing is, the iPhone can deliver so much of what is necessary, and even if Apple hasn&#8217;t given AR developers access to the full power of the iPhone yet, there is really no going back now &#8211; the mobile augmented reality cat is out of the bag!</p>
<p><strong>Bruno Uzzan: You&#8217;re right, you&#8217;re fully right. </strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/feed/</wfw:commentRss>
		<slash:comments>36</slash:comments>
		</item>
		<item>
		<title>Location Becomes Oxygen at Where 2.0 &amp; WhereCamp</title>
		<link>http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/</link>
		<comments>http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/#comments</comments>
		<pubDate>Tue, 02 Jun 2009 21:43:49 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Aaron Straup Cope]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[bottom up urban informatics]]></category>
		<category><![CDATA[Brady Forrest]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[community sensing]]></category>
		<category><![CDATA[curating big data]]></category>
		<category><![CDATA[Dan Catt]]></category>
		<category><![CDATA[Eric Horvitz]]></category>
		<category><![CDATA[everyware]]></category>
		<category><![CDATA[FireEagle]]></category>
		<category><![CDATA[Flickr Corrections]]></category>
		<category><![CDATA[Flickr Nearby]]></category>
		<category><![CDATA[Food Genome]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[geo platform]]></category>
		<category><![CDATA[geo platforms]]></category>
		<category><![CDATA[geoblogging]]></category>
		<category><![CDATA[geoplanet]]></category>
		<category><![CDATA[geotagging]]></category>
		<category><![CDATA[geowanking]]></category>
		<category><![CDATA[GigaPan]]></category>
		<category><![CDATA[gigapanning]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[googlewave]]></category>
		<category><![CDATA[headmap manifesto]]></category>
		<category><![CDATA[J.G. Ballard]]></category>
		<category><![CDATA[Jo Walsh]]></category>
		<category><![CDATA[Joshua Schachter]]></category>
		<category><![CDATA[location awareness]]></category>
		<category><![CDATA[location versus place]]></category>
		<category><![CDATA[locative media]]></category>
		<category><![CDATA[machine intelligence and human intelligence]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[magic words and microsyntax]]></category>
		<category><![CDATA[Mapping Hacks]]></category>
		<category><![CDATA[Marc Powell]]></category>
		<category><![CDATA[Microsyntax]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[neogeography]]></category>
		<category><![CDATA[Odeo Yokai]]></category>
		<category><![CDATA[OpenGeo]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[paleogeography]]></category>
		<category><![CDATA[Papernet]]></category>
		<category><![CDATA[personal informatics]]></category>
		<category><![CDATA[Placemaker]]></category>
		<category><![CDATA[privacy and community sensing]]></category>
		<category><![CDATA[privacy and sensor networks]]></category>
		<category><![CDATA[psychogeography]]></category>
		<category><![CDATA[psychosynthography]]></category>
		<category><![CDATA[Raven Zachary]]></category>
		<category><![CDATA[real time web based visualization and mapping]]></category>
		<category><![CDATA[reality mining]]></category>
		<category><![CDATA[Rich Gibson]]></category>
		<category><![CDATA[Schuyler Erie]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[shape files]]></category>
		<category><![CDATA[shapefiles]]></category>
		<category><![CDATA[smart cities]]></category>
		<category><![CDATA[smart phones]]></category>
		<category><![CDATA[social geography]]></category>
		<category><![CDATA[social networks]]></category>
		<category><![CDATA[social reality mining]]></category>
		<category><![CDATA[Sophia Parafina]]></category>
		<category><![CDATA[Stamen Design]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[The Ubiquitous Media Studio]]></category>
		<category><![CDATA[the web in the world]]></category>
		<category><![CDATA[Tom Carden]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubicomp hackers]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[wearable sensory substitution devices for navigation]]></category>
		<category><![CDATA[Where2.0]]></category>
		<category><![CDATA[WhereCamp]]></category>
		<category><![CDATA[WOEID]]></category>
		<category><![CDATA[yahoo! geotechnologies group]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3567</guid>
		<description><![CDATA[curatingbigdatapost]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/anselmcircletime.jpg"><img class="alignnone size-medium wp-image-3578" title="anselmcircletime" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/anselmcircletime-300x199.jpg" alt="anselmcircletime" width="300" height="199" /></a></p>
<p>The biggest news at <a href="http://en.oreilly.com/where2009/" target="_blank">Where 2.0, 2009</a> came from the <a href="http://developer.yahoo.com/geo/" target="_blank">Yahoo! Geo Technologies Group</a>. Tyler Bell announced Yahoo! <a href="http://developer.yahoo.com/geo/placemaker">Placemaker</a> and the opening up of the <a href="http://developer.yahoo.com/geo/geoplanet/" target="_blank">GeoPlanet</a> data set, &#8220;all of the WOEIDs [<a href="http://developer.yahoo.com/geo/">Where On Earth (WOE)</a> IDs] available as a free download under Creative Commons in June&#8221; (see <a href="http://radar.oreilly.com/brady/" target="_blank">Brady Forrest&#8217;s post</a> for more details).</p>
<p><a id="qa9y" title="WhereCamp 2009" href="http://wherecamp.pbworks.com/WhereCamp2009" target="_blank">WhereCamp 2009</a> was held immediately after <a href="http://en.oreilly.com/where2009/" target="_blank">Where 2.0</a> and was a great place to chew on the events and ideas of Where 2.0. In the picture above, Anselm Hook addresses the WhereCamp morning circle in the courtyard outside the <a id="i:ij" title="Social Text" href="http://www.socialtext.com/" target="_blank">Social Text</a> offices in Palo Alto. Anselm pointed out to me:</p>
<p><strong>&#8220;there are interesting implications of placemaker in combination with other yahoo assets &#8211; in particular <a href="http://developer.yahoo.com/yql/" target="_blank">YQL</a> &#8211; placemaker by itself is neat &#8211; but placemaker combined with everything else is a natural missing piece that is a big enabler.Â  Yahoo has been impressive.&#8221;</strong></p>
<p>With all the Geo platform power available to us now (also see <a href="http://radar.oreilly.com/2009/05/new-geo-for-devs-from-google-i.html" target="_blank">New Geo for Devs from Google I/O</a>), there isn&#8217;t a shadow of a doubt in my mind that Brady was right when he said, just before the Where 2.0 conference: <strong>&#8220;Location is no longer a differentiator, it&#8217;s going to become oxygen&#8221;</strong> <a href="http://www.webmonkey.com/blog/New_Wave_of_Apps_Build__Where__Into_the_Web" target="_blank">(quote from WebMonkey)</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/spatialjunkies1.jpg"><img class="alignnone size-medium wp-image-3612" title="spatialjunkies1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/spatialjunkies1-300x199.jpg" alt="spatialjunkies1" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/yahoogeo41.jpg"><img class="alignnone size-medium wp-image-3614" title="yahoogeo41" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/yahoogeo41-300x199.jpg" alt="yahoogeo41" width="300" height="199" /></a></p>
<p><em>The Yahoo! GeoPlanet team at WhereCamp &#8211; Tyler Bell (talking to Brady Forrest in the picture on the left) is sporting his spatial junkies T-shirt. Photo on right: Aaron Cope, Tyler Bell, Martin Barnes, Gary Gale.</em></p>
<p>WhereCamp was alive with key figures from the social geography movement who knew the power of these new tools (see <a href="http://www.flickr.com/photos/ugotrade/sets/72157618662411286/" target="_blank">some of my photos of WhereCamp on Flickr here</a>).</p>
<p>The importance of the Yahoo! announcement really became clear to me at <a href="http://www.socialtext.net/wherecamp/index.cgi" target="_blank">WhereCamp</a>, where I attended sessions all day Saturday, including the Curating Big Data session led by <a href="http://stamen.com/studio/tom" target="_blank">Tom Carden, Stamen Design</a>, and <a href="http://www.aaronstraupcope.com/" target="_blank">Aaron Straup Cope</a>, Flickr (see Aaron&#8217;s slides from his <a href="http://en.oreilly.com/where2009/public/schedule/detail/7212" target="_blank">Where 2.0 presentation on &#8220;The Shape of Alpha&#8221; here</a> and video <a href="http://where.blip.tv/file/2167471/" target="_blank">here</a>).</p>
<p>Anselm Hook, a prime mover for WhereCamp, is a leading philosopher of place making and a veteran software developer who led <a href="http://platial.com/" target="_blank">Platial</a> engineering and is now at web consultancy <a rel="nofollow" href="http://makerlab.com/">http://makerlab.com</a><span class="bio">. If you missed Anselm at WhereCamp, he will be presenting on <a href="http://opensourcebridge.org/sessions/246" target="_blank">Ubiquitous Angels</a> at <a href="http://opensourcebridge.org/users/288" target="_blank">Open Source Bridge</a>, Portland, Oregon, June 17th-19th, 2009.</span></p>
<p>Anselm describes where he thinks the challenges are:</p>
<p><strong>&#8220;We should be mapping information that in some ways has been historically unmappable because it is 1) not valued or is 2) actively seen as threatening or is 3) simply too hard to map using traditional tools.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/wherecampschedul.jpg"><img class="alignnone size-medium wp-image-3680" title="wherecampschedul" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/wherecampschedul-300x199.jpg" alt="wherecampschedul" width="300" height="199" /></a></p>
<p><em>The WhereCamp Schedule</em></p>
<p><strong><span style="font-size: medium;">The Shape of Alpha</span></strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-57.png"><img class="alignnone size-medium wp-image-3647" title="picture-57" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-57-300x220.png" alt="picture-57" width="300" height="220" /></a></p>
<p><em>Screen capture from Aaron&#8217;s <a href="http://en.oreilly.com/where2009/public/schedule/detail/7212" target="_blank">Where 2.0 presentation on &#8220;The Shape of Alpha.&#8221;</a> Original photo from Flickr user <a href="http://www.flickr.com/photos/nickisconfused/3291840240/" target="_blank">&#8220;NickIsConfused&#8221;</a>.</em></p>
<p>Aaron Straup Cope&#8217;s work on <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a> puts key questions about curating big data center stage.</p>
<p>First, it explores what it means to curate and collaborate over meaning drawn from &#8220;the abundance of data produced in the precise but distant language of machines&#8221; (also see <a href="http://www.archimuse.com/mw2009/abstracts/prg_335001944.html" target="_blank">The Interpretation of Bias (and the bias of interpretation)</a>). The Shape of Alpha uses a process of <a href="http://code.flickr.com/blog/2008/09/04/whos-on-first/">reverse-geocoding</a> to translate machine-generated geographic data into place names that people can understand and relate to.</p>
<p>The <a href="http://en.wikipedia.org/wiki/Shapefile" target="_blank">shapefiles</a> are built with nothing but geotagged photos and some code called clustr (written by the brilliant <a href="http://iconocla.st/cv.html" target="_blank">Schuyler Erle</a> &#8211; co-author of <a href="http://search.barnesandnoble.com/Mapping-Hacks/Schuyler-Erie/e/9780596007034" target="_blank">Mapping Hacks</a>). Anyone can make these <a href="http://en.wikipedia.org/wiki/Shapefile" target="_blank">shapefiles</a>, and you can get them out of the <a href="http://www.flickr.com/services/api">Flickr API</a>. Aaron has been keying off WOEIDs (<a href="http://developer.yahoo.com/geo/">Where On Earth (WOE)</a> IDs) but, as Aaron noted, you can key off anything you like &#8211; tags are an obvious choice.</p>
<p>Wow! You can reinvent mapping with this stuff.</p>
<p>Very importantly, <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a> tells us something about how we relate to place versus location. The emotions, disputes and behavior related to place also emerge through crowdsourced corrections. For more, <a href="http://www.aaronland.info/weblog/2008/07/27/invisible/#corrections" target="_blank">see this very evocative post by Aaron about corrections and treating airports as cities</a>. There is a glorious thread/riff and ode to the genius of J.G. Ballard pursued by Aaron and Dan Catt in their posts (also see Dan Catt&#8217;s <a title="J.G. Ballard, Flickr, naked singularities and 3-letter airport codes" rel="bookmark" href="http://geobloggers.com/2009/05/11/j-g-ballard-flickr-naked-singularities-and-3-letter-airports-code/">J.G. Ballard, Flickr, naked singularities and 3-letter airport codes</a>; Aaron also pointed me to <a href="http://www.ballardian.com/the-real-concrete-island" target="_blank">this brilliant &#8220;geo-detective work&#8221;</a> on <a href="http://www.ballardian.com/biblio-concrete-island">Concrete Island</a> by Mike Bonsall).</p>
<p>Dan Catt created <a href="http://geobloggers.com/" target="_blank">geobloggers</a> and &#8220;seeded the geotagging community around the Web.&#8221; I met Reverend Dan Catt (Twitter: @revdancatt) at Where 2.0 when he was kind enough to share part of his seat so I could join a very interesting discussion with Aaron on The Shape of Alpha.</p>
<p>As <a href="http://www.aaronland.info/weblog/2008/07/27/invisible/#corrections" target="_blank">Aaron points out</a>, they decided to treat &#8220;the airport itself <em>as</em> the town&#8230;&#8221; not (only) because they admired the work of <a href="http://www.jgballard.com/airports.htm">J.G. Ballard</a>, &#8220;but because it is the right thing to do.&#8221;</p>
<p>Dan Catt has excellent <a href="http://blog.flickr.net/en/2008/08/08/introducing-a-new-way-to-geotag/">blog posts</a> &#8220;describing the nuts and bolts of how &#8216;corrections&#8217; works.&#8221; Aaron points out, &#8220;in <a href="http://code.flickr.com/blog/2008/08/08/location-keeping-it-real-on-the-streets-yo/">the nerdier of the two</a> Dan sums it up nicely by saying&#8221;:</p>
<blockquote class="hier"><p><strong>&#8220;On a slightly more philosophical level, it&#8217;s a never ending process. We&#8217;ll never reach a point where we can say &#8220;Right that&#8217;s in, all borders between places have been decided.&#8221; But what we should end up with are boundaries as defined by Flickr users.</strong></p>
<p><strong>&#8230;</strong></p>
<p><strong>For us, it&#8217;s a first small step into an experiment, and actually a pretty big experiment as we&#8217;re potentially accepting &#8220;corrections&#8221; from our millions and millions of users. We&#8217;re not quite sure how it&#8217;ll all turn out, but we&#8217;re armed with Maths, Algorithms and kitten photos.&#8221;</strong></p></blockquote>
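<p>As a toy sketch of how such &#8220;corrections&#8221; might be aggregated &#8211; the names, votes and majority threshold below are all invented for illustration, not Flickr&#8217;s actual algorithm:</p>

```python
from collections import Counter

def consensus(corrections, threshold=0.5):
    """Pick the users' name for a place once a strict majority agrees.

    corrections: list of (user, place_name) votes, one vote per user.
    Returns the winning name, or None while users still disagree.
    """
    votes = Counter(name for _user, name in corrections)
    name, count = votes.most_common(1)[0]
    return name if count / len(corrections) > threshold else None

# Hypothetical votes on where a photo was really taken:
votes = [("alice", "Mission District"), ("bob", "Mission District"),
         ("carol", "Noe Valley")]
print(consensus(votes))  # a 2/3 majority settles on "Mission District"
```

<p>The point, as in the quote above, is that the boundary is never &#8220;decided&#8221; &#8211; it is whatever the current vote tally says it is.</p>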
<h3>Psychosynthography &#8211; &#8220;Wearing Geography as a Perfume&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-59.png"><img class="alignnone size-medium wp-image-3649" title="picture-59" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-59-300x224.png" alt="picture-59" width="300" height="224" /></a><em> </em></p>
<p><em>Psychosynthography screen capture from Aaron Straup Cope&#8217;s </em><a href="http://en.oreilly.com/where2009/public/schedule/detail/7212" target="_blank">Where 2.0 presentation</a><em>. Original photo from Flickr user <a href="http://www.flickr.com/photos/nitelynx/44189973/" target="_blank">&#8220;NiteLynx.&#8221;</a></em></p>
<p>As I mentioned before, many of the ideas raised at Where 2.0 were unpacked and worked through at WhereCamp. For example, Aaron introduced the word <strong>psychosynthography</strong> in the last 24 seconds of his talk at Where 2.0.</p>
<p>So I spent as much time as I could listening to Aaron at WhereCamp, asking him about psychosynthography and more (a post of this interview is upcoming).</p>
<p>Aaron urged the Where 2.0 audience to pay attention to the Psychogeography movement seeded by <a title="Guy Debord" href="http://en.wikipedia.org/wiki/Guy_Debord">Guy Debord</a>, and <strong>&#8220;to wear geography like a perfume.&#8221;</strong></p>
<p>Joseph Hart, writing in <a href="http://www.utne.com/2004-07-01/a-new-way-of-walking.aspx" target="_blank">&#8220;A New Way of Walking,&#8221;</a> describes psychogeography as:</p>
<p><strong>&#8220;a whole toy box full of playful, inventive strategies for exploring cities&#8230;just about anything that takes <span class="mw-redirect">pedestrians</span> off their predictable paths and jolts them into a new awareness of the urban landscape.&#8221;</strong></p>
<h3>Curating Big Data</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/tomcarden.jpg"><img class="alignnone size-medium wp-image-3625" title="tomcarden" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/tomcarden-300x199.jpg" alt="tomcarden" width="300" height="199" /></a></p>
<p><em><a href="http://stamen.com/studio/tom" target="_blank">Tom Carden, Stamen</a>, (picture above) paired with Aaron for the Curating Big Data session. Tom noted: </em></p>
<p><strong>&#8220;The Curating Big Data session for me was an attempt to learn from other attendees (as opposed to teach/lead, as with the Stamen session, &#8220;Real Time Web-Based Visualization and Mapping&#8221;). Also, it was an excuse to get Aaron to recap parts of the Flickr Shapefile story for WhereCamp folks, and to get *input* on how to do more things like it. I was a bit disappointed that nobody had really good examples for us, but I was happy with Brad Stenger&#8217;s suggestion to look into the upcoming census data as a relevant area.&#8221;</strong></p>
<p>Aaron&#8217;s work on The Shape of Alpha and the Corrections project shows, as Tom noted:</p>
<p><strong>&#8220;what you can do once you have 150 million geotagged photos, and millions of users who are willing to say I took this thing here and my name for that place is&#8230;&#8221;</strong></p>
<p>And part of the significance of opening up the GeoPlanet data set is that now:</p>
<p><strong>&#8220;we can try and start talking about the same places, as far as, [for example], these shape files go. So if you are interested in what comes out of the Flickr shape files project, but you also have your own opinion about what shape those places are, the IDs have to be open &#8211; you have to be sure that you are talking about the same thing in the first place.&#8221;</strong></p>
<p>And, as Tom pointed out, collaborating over geo data informs us about curating any big dataset:</p>
<p><strong>&#8220;it should lead to an overarching discussion about any kind of dataset, geo or otherwise, and ways in which we can talk about it, and think about patterns for improving that data, for collaborating, even on things like cleanup.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/realtimewebbased-visualizationandmapping.jpg"><img class="alignnone size-medium wp-image-3681" title="realtimewebbased-visualizationandmapping" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/realtimewebbased-visualizationandmapping-300x199.jpg" alt="realtimewebbased-visualizationandmapping" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/curatingbigdatapost.jpg"><img class="alignnone size-medium wp-image-3739" title="curatingbigdatapost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/curatingbigdatapost-300x199.jpg" alt="curatingbigdatapost" width="300" height="199" /></a></p>
<p><em>Warp speed geo-genius Andrew Turner, <a href="http://www.fortiusone.com/" target="_blank">FortiusOne</a>, took these excellent notes for &#8220;Real Time Web-Based Visualization and Mapping&#8221; (on left) and &#8220;Curating Big Data&#8221; (on the right).</em></p>
<p>On my way to Where 2.0 I took the train from SFO to San Jose, which was a delight but a little slower than I imagined. So, unfortunately, I arrived on Tuesday just after <a href="http://en.oreilly.com/et2009/public/schedule/speaker/3486">Michal Migurski</a> (Stamen Design) and <a href="http://en.oreilly.com/et2009/public/schedule/speaker/40013">Shawn Allen</a> (Stamen Design) presented <a class="attach" href="http://assets.en.oreilly.com/1/event/20/Maps%20from%20Scratch_%20Online%20Maps%20from%20the%20Ground%20Up%20Presentation.pdf">Maps from Scratch: Online Maps from the Ground Up</a>. This was on my MUST attend list, and it was a wonderful opportunity to get into &#8220;Real Time Web-Based Visualization and Mapping.&#8221; I did get a chance to talk to Michal and Shawn a bit later in the conference, but I will try to catch up with them soon for an in depth story. Below is Shawn Allen&#8217;s map of overlapping data sets from <a href="http://www.flickr.com/photos/shazbot/3282821808/" target="_blank">&#8220;Trees, cabs and crime in San Francisco&#8221;:</a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/treescrimecabs.png"><img class="alignnone size-medium wp-image-3743" title="treescrimecabs" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/treescrimecabs-300x273.png" alt="treescrimecabs" width="300" height="273" /></a></p>
<p>Another follow up I am really looking forward to making is with <a href="http://lizbarry.com/s+em/contact.htm" target="_blank">Liz Barry</a> and her work on <a href="http://lizbarry.com/s+em/about.htm" target="_blank">S+EM</a>, &#8220;an environmental mapping and social networking design project that links New York City trees with the people who care for them&#8221; (also see <a href="http://fuf.net/" target="_blank">Creating a Greener San Francisco Tree by Tree</a>). I also got a chance to talk to another fellow New Yorker (we have to travel to the West Coast to find time to chat!), <a href="http://radar.oreilly.com/jgeraci/" target="_blank">John Geraci</a> of <a href="http://diycity.org/" target="_blank">DIY City</a>, who presented <a class="attach" href="http://assets.en.oreilly.com/1/event/25/DIY%20City_%20An%20Operating%20System%20for%20Cities%20Presentation.zip">DIY City: An Operating System for Cities</a>.</p>
<h3>Machine Intelligence and Human Intelligence</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronandandrew.jpg"><img class="alignnone size-medium wp-image-3622" title="aaronandandrew" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronandandrew-300x199.jpg" alt="aaronandandrew" width="300" height="199" /></a></p>
<p><em>Aaron Cope, Flickr, on the left, talking to Andrew Turner, CTO of FortiusOne, on the right (see Andrew&#8217;s presentation at Where 2.0, <a href="http://blip.tv/file/2167650" target="_blank">&#8220;Your Own Private Geo Cloud&#8221;</a>)</em></p>
<p>Many of the most interesting conversations happened in between sessions at WhereCamp and Where 2.0.</p>
<p>I caught this one, in which Aaron Cope and Andrew Turner were discussing some of the ideas Aaron raised in his presentation, <a href="http://www.slideshare.net/straup/capacity-planning-for-meaning-presentation-637370?type=powerpoint" target="_blank">&#8220;Capacity planning for meaning in the age of personal informatics&#8221;</a> (see Aaron&#8217;s blog post, <a href="http://www.aaronland.info/weblog/2008/10/08/tree/" target="_blank">Tree planting and tree hugging in the age of personal informatics</a>). The core question they were discussing was what happens when you wire the world at the scale people are talking about and it breaks&#8230; Aaron argues that you already have a whole class of people in systems operations who can tell us a lot about how to answer this question.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/rossmayfieldsocialtextpost.jpg"><img class="alignnone size-medium wp-image-3594" title="rossmayfieldsocialtextpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/rossmayfieldsocialtextpost-300x199.jpg" alt="rossmayfieldsocialtextpost" width="300" height="199" /></a></p>
<p><em><span class="bio">Ryan and Anselm shared the morning circle pulpit with <a href="http://ross.typepad.com/" target="_blank">Ross Mayfield</a> of <a href="http://www.socialtext.com/" target="_blank">Socialtext</a>, the generous host of WhereCamp.</span></em></p>
<h3>Social Reality Mining</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/benjaminbratton1.jpg"><img class="alignnone size-medium wp-image-3651" title="benjaminbratton1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/benjaminbratton1-300x199.jpg" alt="benjaminbratton1" width="300" height="199" /></a></p>
<p><strong>&#8220;As it stands today, we have no idea what terms and limits of a cloud based citizenship of the Google Caliphate will entail and curtail. Some amalgam of post-secular cosmopolitanism, agonistic radical democracy, and post-rational actor microeconomics, largely driven by intersecting petabyte at-hand datasets and mutant strains of Abrahamic monotheism. But specifically, what is governance (let alone government) within this?&#8221;</strong> <a href="http://bratton.info/" target="_blank">from Benjamin Bratton&#8217;s</a> talk at ETech 2009 (picture above), <a href="http://www.bratton.info/emergency.html" target="_blank">Undesigning the Emergency: Against Prophylactic Urban Membranes</a>.</p>
<p>The other big takeaway from WhereWeek &#8211; Where 2.0 and WhereCamp &#8211; was not so much news, but a confirmation of something that has been pretty clear for a while now. (Check out <a href="http://radar.oreilly.com/2008/05/the-results-of-reality-mining.html" target="_blank">Brady&#8217;s posts on reality mining at Where 2.0 last year</a>).</p>
<p>We are moving headlong into the era of reality mining with all its myriad possibilities, from &#8220;hedonistic optimization&#8221; (this term came from <a href="http://brainofstig.ai/" target="_blank">Stig Hackvan</a> when I asked him about some of the ideas central to the <a href="http://docs.google.com/tecfa.unige.ch/%7Enova/headmap-manifesto.PDF" target="_blank">HeadMap Manifesto</a> &#8211; more about HeadMap later in this post); to new forms of marketing (social reality mining to predict whether someone is going to trade business cards in the next 120 seconds &#8211; <a href="http://en.oreilly.com/where2009/public/schedule/speaker/46016" target="_blank">Alex &#8220;Sandy&#8221; Pentland, MIT, Where 2.0</a>); to stuff that matters to save us from mass extinction, like distributed sustainability &#8211; greening production and consumption and our cities; to open government; to empowering indigenous communities (also see Rebecca Moore&#8217;s <a class="attach" href="http://assets.en.oreilly.com/1/event/25/Indigenous%20Mapping_%20Emerging%20Cultures%20on%20the%20Geoweb%20Presentation.ppt">Indigenous Mapping: Emerging Cultures on the Geoweb Presentation</a>); and, not to be forgotten, the troubling possibility of new forms of social control.</p>
<h3>Smart phones are powerful networked sensor devices in the palm of our hand</h3>
<p>As Sandy Pentland, MIT, pointed out in his Where 2.0 keynote, <a href="http://en.oreilly.com/where2009/public/schedule/detail/7956" target="_blank">&#8220;Reality Mining for Companies, or, How Social Networks Network Best,&#8221;</a> mobile phones have created a ubiquitous instrumented reality that goes way deeper than location awareness. Smart phones are powerful networked sensor devices in the palm of our hand that know a lot more about us than location. With proximity, motion (accelerometers), voice, images, call logs and email, what is enabled is not just knowing where people are but knowing more about them.</p>
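<p>A minimal sketch of the kind of co-location inference this enables, assuming nothing more than timestamped GPS logs from two phones &#8211; the logs, radius and time window below are all invented for illustration:</p>

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6_371_000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def colocated(log_a, log_b, radius_m=50, window_s=300):
    """Yield timestamp pairs where two phones were near each other in time and space."""
    for t1, lat1, lon1 in log_a:
        for t2, lat2, lon2 in log_b:
            if (abs(t1 - t2) <= window_s
                    and haversine_m(lat1, lon1, lat2, lon2) <= radius_m):
                yield (t1, t2)

# Hypothetical location logs: (unix_time, lat, lon)
alice = [(1000, 37.7749, -122.4194), (2000, 37.7790, -122.4313)]
bob   = [(1100, 37.7750, -122.4195), (9000, 37.8044, -122.2712)]
print(list(colocated(alice, bob)))  # the two phones met once: [(1000, 1100)]
```

<p>Proximity is the crudest of the signals Pentland described, but even this toy version shows how quickly &#8220;where&#8221; turns into &#8220;with whom.&#8221;</p>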
<p>Many of the issues raised by <a href="http://speedbird.wordpress.com/" target="_blank">Adam Greenfield</a> in <a href="http://speedbird.wordpress.com/my-book-everyware-the-dawning-age-of-ubiquitous-computing/" target="_blank">Everyware</a> and in <a href="../../2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/" target="_blank">my interview with Adam</a> were on my mind during WhereWeek, as were questions that were distilled and explored in last year&#8217;s presentation by Matt Jones, <a href="http://www.slideshare.net/blackbeltjones/polite-pertinent-and-pretty-designing-for-the-newwave-of-personal-informatics-493301" target="_blank">Polite, Pertinent, and&#8230; Pretty: Designing for the New-wave of Personal Informatics</a>, and <a href="http://www.slideshare.net/tmo/the-web-in-the-world-presentation" target="_blank">Timo Arnall&#8217;s presentation, The Web in the World</a>.</p>
<h3>Google Wave, Pachube Feeds, Sensor Networks and Microsyntax!</h3>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="560" height="340" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/pi4MhQgGNqI&amp;hl=en&amp;fs=1" /><param name="allowfullscreen" value="true" /><embed type="application/x-shockwave-flash" width="560" height="340" src="http://www.youtube.com/v/pi4MhQgGNqI&amp;hl=en&amp;fs=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p><em><a id="o_ok" title="Visualizing 24 hours of @pachube" href="http://is.gd/IYOj" target="_blank">Visualizing 24 hours of Pachube</a> logs &#8211; feeds all around the world, built with Processing.</em></p>
<p>I found myself really wishing <a href="http://www.pachube.com/" target="_blank">Pachube</a> founder Usman Haque had been able to come to Where 2.0 this year &#8211; Usman was originally on the Where 2.0 schedule but had to drop out. My small contribution to WhereCamp was to discuss <a href="http://www.pachube.com/" target="_blank">Pachube</a>, <a href="http://www.haque.co.uk/naturalfuse.php" target="_blank">Natural Fuse</a> and <a href="http://www.shaspa.com/" target="_blank">OpenShaspa</a> in the Urban Eco-Management session (<a href="../../2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">see my interview with Pachube founder Usman Haque here</a>).</p>
<p>Just before Where 2.0, Pachube announced <a id="du7_" title="mapping mobile feeds in realtime" href="http://is.gd/BjJT" target="_blank">mapping of mobile feeds in realtime</a>, with 3D graphing of datastream values by time &amp; location.</p>
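<p>A hedged sketch of what consuming such sensor feeds can look like: a snapshot of a feed&#8217;s datastreams reduced to a simple id/value mapping. The format, stream names and readings below are invented for illustration, not Pachube&#8217;s actual API:</p>

```python
import csv
import io

# A hypothetical snapshot of a Pachube-style feed: one row per datastream,
# "id,value". The IDs and readings below are invented for illustration.
snapshot = """temperature,22.5
humidity,41
co2_ppm,455"""

def parse_feed(text):
    """Return {datastream_id: float_value} from a CSV snapshot."""
    return {row[0]: float(row[1]) for row in csv.reader(io.StringIO(text))}

readings = parse_feed(snapshot)
print(readings["co2_ppm"])  # 455.0
```

<p>Poll a feed like this every few seconds and you have the raw material for the realtime visualizations above.</p>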
<p>And, as I was writing up this post, I was delighted to see <a href="http://www.wired.com/beyond_the_beyond/2009/05/spime-watch-pachube-feeds/" target="_blank">this post by Bruce Sterling on Pachube Feeds</a> and his challenge, offering:</p>
<p><strong>&#8220;(((Extra credit for eager ubicomp hackers: combine this [pachube feeds] with Googlewave, then describe it in microsyntax. Hello, 2015!)))&#8221;</strong></p>
<p>Also Anselm Hook, who has an extensive background in video game development, made an interesting point about Google Wave to me:</p>
<p><strong>&#8220;btw &#8211; there is a preexisting metaphor for the wave &#8211; the wave is notable in that it is making the web like a videogame &#8211; its bringing real time many participant shared interaction to the web&#8221;</strong></p>
<div id="a9iz" style="text-align: left;">And see <a href="http://radar.oreilly.com/2009/05/google-wave-what-might-email-l.html" target="_blank">Tim O&#8217;Reilly&#8217;s post</a> for more on the significance of Wave, which <a href="http://www.techcrunch.com/2009/05/28/google-wave-drips-with-ambition-can-it-fulfill-googles-grand-web-vision/">Google previewed for developers at its I/O conference</a>:</div>
<p><strong>&#8220;Jens, Lars, and team re-imagined email and instant-messaging in a connected world, a world in which messages no longer need to be sent from one place to another, but could become a conversation in the cloud. Effectively, a message (a wave) is a shared communications space with elements drawn from email, instant messaging, social networking, and even wikis.&#8221;</strong></p>
<p>For more on microsyntax, see <a href="http://www.microsyntax.org/" target="_blank">microsyntax.org</a>.</p>
<p>Aaron pointed out to me regarding microsyntax:</p>
<p><strong>&#8220;This is ultimately the &#8220;magic word&#8221; problem, which is essentially the semweb vs. google-is-smarter-than-you problem.&#8221;</strong></p>
<p>I will have some more questions for Aaron on the &#8220;magic word&#8221; problem in my upcoming interview post. At the moment I am busy studying some of the thoughts in these links:</p>
<p><a href="http://delicious.com/straup/magicwords" target="_blank">http://delicious.com/straup/magicwords</a></p>
<p><a href="http://www.slideshare.net/straup/the-papernet/22" target="_blank">http://www.slideshare.net/straup/the-papernet/22</a></p>
<p><a href="http://www.xml.com/pub/a/2005/02/16/edfg.html" target="_blank">http://www.xml.com/pub/a/2005/02/16/edfg.html</a></p>
<p><a href="http://xtech06.usefulinc.com/schedule/paper/135" target="_blank">http://xtech06.usefulinc.com/schedule/paper/135</a></p>
<h3>Privacy: Towards a Win Win and Community Sensing</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/communitysensing.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/erichorvitz21.jpg"><img class="alignnone size-medium wp-image-3659" title="erichorvitz21" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/erichorvitz21-300x199.jpg" alt="erichorvitz21" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/communitysensing.jpg"><img class="alignnone size-medium wp-image-3655" title="communitysensing" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/communitysensing-300x199.jpg" alt="communitysensing" width="300" height="199" /></a></p>
<p><a href="http://fireeagle.yahoo.net/" target="_blank">FireEagle</a>, a key element of the Yahoo! Geo Technologies portfolio of platforms, not only gives people an important set of tools to &#8220;share their location with sites and services through the Web or a mobile device&#8221; but also offers up some vital privacy tools. The community sensing work of Eric Horvitz, however, takes privacy and data sharing into new terrain.</p>
<p>Eric didn&#8217;t have time to discuss his privacy work in his Where 2.0 presentation, <a href="http://en.oreilly.com/where2009/public/schedule/detail/8911" target="_blank">Where, When, Why, and How: Directions in Machine Learning and Reasoning about Location</a> &#8211; it came up in his very last slide. But I ran up after his talk with my trusty old iPod recorder in hand, and got the part we missed! Fascinating stuff that will be the subject of an upcoming interview post. Here&#8217;s a little taste of what is to come. Eric describes one of the directions his team will be exploring:</p>
<p><strong>&#8220;One thing I want to do, on our research team, I&#8217;d like to develop something very simple for people to use. A challenging problem with privacy is usability and controls. Aunt Polly and Uncle Herbie just don&#8217;t get all these authentication controls and sliders, nor do they want to invest in figuring them out. They also don&#8217;t get why they&#8217;re being asked with pop up windows to say yes or no to various questions and so on. One idea is having a usable privacy lens that you can hold up anywhere and it tells you what you&#8217;re showing anybody or any organization &#8211; what does the world know about you. And you would like to have buttons to turn sharing off for some items. You&#8217;d also like to have a way to go back in time and view prior sharing and logging over periods of time, and to have buttons to push to say erase that segment of your logs.&#8221;</strong></p>
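<p>A toy sketch of what such a &#8220;privacy lens&#8221; might look like in code &#8211; every name and field below is invented for illustration, not anything Eric&#8217;s team has built:</p>

```python
from dataclasses import dataclass, field

# Sketch of the "privacy lens" idea: a sharing log you can inspect
# ("what am I showing, and to whom?") and selectively erase by time segment.

@dataclass
class SharingLog:
    entries: list = field(default_factory=list)  # (timestamp, audience, item)

    def record(self, ts, audience, item):
        self.entries.append((ts, audience, item))

    def lens(self, audience):
        """What has this audience been shown?"""
        return [(ts, item) for ts, aud, item in self.entries if aud == audience]

    def erase(self, start, end):
        """'Erase that segment of your logs': drop entries with start <= ts <= end."""
        self.entries = [e for e in self.entries if not (start <= e[0] <= end)]

log = SharingLog()
log.record(100, "advertisers", "coarse city location")
log.record(200, "friends", "exact location")
log.record(300, "advertisers", "visited cafe")
log.erase(250, 400)                # take back the afternoon
print(log.lens("advertisers"))     # [(100, 'coarse city location')]
```

<p>The hard part, of course, is exactly what Eric names: making those two operations &#8211; inspect and erase &#8211; usable by Aunt Polly and Uncle Herbie.</p>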
<p>Understanding the social implications of what it means to live in an instrumented world is a topic that we cannot afford not to think about. But luckily there are a lot of people who have been thinking pretty deeply about this for a while now.</p>
<p>And I did my best at both Where 2.0 and WhereCamp to seek out as many geothinkers as I could, and to do interviews wherever possible (I have not had time to mention everyone I talked to in this post, but hopefully all the interviews will get on UgoTrade soon!)</p>
<h3>HeadMap Manifesto</h3>
<p>In the bar of The Fairmont on the last night of Where 2.0, I heard some of the history of Where 2.0, <a href="http://geowanking.org/mailman/listinfo/geowanking_geowanking.org" target="_blank">GeoWanking</a>, and <a href="http://docs.google.com/tecfa.unige.ch/%7Enova/headmap-manifesto.PDF" target="_blank">The HeadMap Manifesto</a> from Sophia Parafina, Director of Operations for <a href="http://opengeo.org/" target="_blank">OpenGeo</a>, and <a href="http://testingrange.com/" target="_blank">Rich Gibson</a>, programmer, <a href="http://geowanking.org/mailman/listinfo/geowanking_geowanking.org" target="_blank">GeoWanker</a>, <a href="http://gigapan.org/index.php" target="_blank">Gigapanner</a> and co-author of <a href="http://mappinghacks.com/" target="_blank">Mapping Hacks</a> with <a href="http://iconocla.st/cv.html" target="_blank">Schuyler Erle</a> and <a href="http://frot.org/" target="_blank">Jo Walsh</a> (Jo did a lot <a href="http://frot.org/s/semantic_city.html" target="_blank">of key early work on bottom up urban informatics</a> but unfortunately couldn&#8217;t make it to WhereWeek this year).</p>
<p>Check <a id="zaq4" title="Gigapan.org" href="http://www.gigapan.org/index.php" target="_blank">Gigapan.org</a> out! <strong>&#8220;The GigaPan<span class="trademark">SM</span> process allows users to upload, share, and explore brilliant gigapixel+ panoramas from around the globe.&#8221;</strong></p>
<p>I also interviewed Paul Ramsey, Senior Consultant at OpenGeo, so more on OpenGeo is upcoming (see Paul&#8217;s <a href="http://blog.cleverelephant.ca/2009/05/where-re-cap.html" target="_blank">Where ReCap</a>). <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43773">Justin Deoliveira</a> (OpenGeo) and <a href="http://en.oreilly.com/where2009/public/schedule/speaker/59688">Sophia Parafina</a> did a session, <a class="url uid" name="session7165" href="http://en.oreilly.com/where2009/public/schedule/detail/7165">GeoServer, GeoWebCache + OpenLayers: The OpenGeo Stack</a>, which unfortunately I missed as it was before I arrived on Tuesday.</p>
<div id="page_title"><strong> </strong></div>
<p><span class="bio"><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/sophiaandrich.jpg"><img class="alignnone size-medium wp-image-3631" title="sophiaandrich" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/sophiaandrich-300x199.jpg" alt="sophiaandrich" width="300" height="199" /></a></span></p>
<p>I met Rich Gibson <a href="http://www.flickr.com/photos/ugotrade/sets/72157615022689427/" target="_blank">at Etech 2009 playing Werewolf</a>, and he introduced me to his co-author on <a href="http://search.barnesandnoble.com/Mapping-Hacks/Schuyler-Erie/e/9780596007034" target="_blank">Mapping Hacks</a> and alpha geek supreme, Schuyler Erie, who also wrote the clustr code that The Shape of Alpha uses.</p>
<p><a href="http://joshua.schachter.org/" target="_blank">Joshua Schachter</a> founder of Delicious and the <a href="http://geowanking.org/mailman/listinfo/geowanking_geowanking.org" target="_blank">GeoWanking mailing list</a>, [and <a href="http://geourl.org/" target="_blank">GEOURL </a>- and <a href="http://memepool.com/" target="_blank">MemePool!] </a> now at Google came to WhereCamp and was mobbed by a small crowd eager to get their hands on one of the developer G Phones he was handing out from a large box.</p>
<p>GeoWanking, which is now run by O&#8217;Reilly Media, has been the incubator for all things location-aware and &#8220;neogeography&#8221; discussions since 2003 &#8211; check out <a href="http://sproke.blogspot.com/2009/05/paleogeography-vs-neography.html" target="_blank">sproke</a> for a <a href="http://sproke.blogspot.com/2009/05/paleogeography-vs-neography.html">Paleogeography vs Neogeography</a> smackdown (a common topic of discussion at Where 2.0, as Sophia notes) in which GeoWanking rules in the form of a list-traffic comparison.</p>
<p>Sophia and Rich shared some of their perspective on the early days of GeoWanking and the creation of the HeadMap Manifesto with me, and pointed me to many other people to talk to. The prime mover of the HeadMap Manifesto, Ben Russell, has retired from the scene &#8211; perhaps bored by seeing a radical vision go thoroughly mainstream, or exhausted by the rigors of carrying an idea through the early blue-sky years, or just simply doing something else? I don&#8217;t know.</p>
<p><a href="http://docs.google.com/tecfa.unige.ch/%7Enova/headmap-manifesto.PDF" target="_blank">The HeadMap Manifesto</a> is still vibrant today even as much of what it envisaged has already been realized. HeadMap assembled the future in a poetry of fragments:</p>
<p><strong>&#8220;you can search for sadness in new york &#8230; people within a mile of each other who have never met stop what they are doing and organize spontaneously to help with some task or other.&#8221;</strong></p>
<p>Anselm explained to me that what powered all this social cartography revolution, from his POV, was actually IRC.</p>
<p><strong>&#8220;We had a channel on IRC called &#8220;#geo&#8221;. And many of us met there. I met Ben Russell at MathEngine in the UK. Ben and I were fascinated by the future of maps. Ben, Jo and I met Schuyler, Dav, Dan Brickley (who worked for Tim Berners-Lee, who invented the Web), Rich Gibson, Joshua Schachter (who was just a geek at Morgan Stanley at the time) &#8230; and the snowball took off&#8230; many others.</strong></p>
<p><strong>We stormed ETECH (Schuyler met Jo there). We got invited to FooCamp. Schuyler was married to Jo by Marc Powell (Food Genome) and lived at his house. We pushed so hard on the social cartography revolution.</strong></p>
<p><strong>I did a spinny globe for GeoURL &#8211; a project by some hacker named Joshua Schachter&#8230; we were all friends for years and we had never even met.&#8221;</strong></p>
<h3>&#8220;Can AR researchers harness these new approaches to index reality?&#8221;</h3>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="425" height="344" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/y_LXpqmdk9U&amp;hl=en&amp;fs=1" /><param name="allowfullscreen" value="true" /><embed type="application/x-shockwave-flash" width="425" height="344" src="http://www.youtube.com/v/y_LXpqmdk9U&amp;hl=en&amp;fs=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p>Radiohead&#8217;s laser (as opposed to video) clip, made using <a href="http://www.velodyne.com/lidar/" target="_blank">Lidar</a></p>
<p><a id="t7u3" title="If you have read my interview with Ori Inbar," href="../../2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">If you have read my interview with Ori Inbar,</a> you will know how excited I was to attend The Mobile Reality panel.Â  <a href="http://en.oreilly.com/where2009/public/schedule/detail/7197" target="_blank">The video is up</a> and it is really awesome to hear <a href="http://en.oreilly.com/where2009/public/schedule/speaker/35457">Raven Zachary</a> (on twitter @<a href="http://www.twitter.com/ravenme">ravenme</a>) get into the fray with augmented reality.</p>
<p>The main takeaway for me from the Mobile Reality panel was that we shouldn&#8217;t get too hung up on the difficulties of achieving fully immersive visual augmented reality, twiddling our thumbs waiting for the long-anticipated sexy lightweight eyewear, which is still in a coming-soon phase (for more on immersive augmented reality see my upcoming interview with <a href="http://www.cc.gatech.edu/%7Eblair/home.html" target="_blank">Blair MacIntyre</a>). In the meantime, there are plenty of delightful and useful ways to augment our experience of the world &#8211; and not all of these augmented realities rely solely on smart phones, as John S. Zelek showed in his presentation on <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43786" target="_blank">&#8220;Wearable Sensory Substitution Device for Navigation.&#8221;</a> I also had an interesting discussion at lunch with Ori Inbar about the use of audio for augmented reality projects.</p>
<p>Where 2.0 clearly demonstrated that we have an unprecedented amount of information from mapping our world, <a href="http://gamesalfresco.com/2009/05/26/where-2-0-the-world-is-mapped-now-use-it-to-augmented-our-reality/" target="_blank">Ori Inbar noted in his conference roundup</a>. Ori writes:</p>
<p><strong>&#8220;My point is not a shocker: all we need is to tap into this information and bring it, in context, into people&#8217;s field of view.&#8221;</strong></p>
<p>As Ori noted <strong><a href="http://www.earthmine.com/" target="_blank">Earthmine</a></strong> and <strong><a href="http://www.velodyne.com/lidar/" target="_blank">Velodyne&#8217;s Lidar</a></strong> showed off two new approaches to mapping the world that have potential to create new opportunities for augmented reality:</p>
<p><strong><strong><a href="http://www.earthmine.com/" target="_blank">&#8220;Earthmine</a></strong> uses its own camera-based device to index reality, at the street level, one pixel at a time. They have just announced <a href="http://wildstylecity.com/wsc/" target="_blank">Wild Style City</a> an application that allows anyone to create virtual graffitis on top of designated public spaces. However, at this point, you can only experience it on a pc!&#8221;</strong></p>
<p><a href="http://www.velodyne.com/lidar/" target="_blank">Lidar</a>, Ori notes, has also embarked on a mission to map the outdoors. But, the question Ori highlights is:</p>
<p><strong>&#8220;Can AR researchers harness these new approaches to index reality?&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/johnzelekandbradyforrest.jpg"><img class="alignnone size-medium wp-image-3660" title="johnzelekandbradyforrest" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/johnzelekandbradyforrest-300x199.jpg" alt="johnzelekandbradyforrest" width="300" height="199" /></a></p>
<p>Brady Forrest inspects John S. Zelek&#8217;s <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43786" target="_blank">&#8220;Wearable Sensory Substitution Device for Navigation&#8221;</a> at Where Fair, before putting it on and being guided by sensory nudges at the cardinal points in the belt.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/bradyforrestpost.jpg"><img class="alignnone size-medium wp-image-3661" title="bradyforrestpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/bradyforrestpost-199x300.jpg" alt="bradyforrestpost" width="199" height="300" /></a></p>
<h3>Coolest Mobile Locative Media App at Where Fair</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-61.png"><img class="alignnone size-full wp-image-3682" title="picture-61" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-61.png" alt="picture-61" width="176" height="269" /></a></p>
<p><a href="http://www.sonycsl.co.jp/person/shio.html" target="_blank">Atsushi Shionozaki </a>of<strong> <a href="http://www.placeengine.com/en" target="_blank">Place Engine</a></strong> &#8211; &#8220;<strong>a core technology that enables a device equipped with Wi-Fi such as a laptop PC or smart phone to determine its current location,&#8221; </strong>demoed the coolest location aware mobile app in Where Fair &#8211; <a id="uwuf" title="Oedo Yokai" href="http://service.koozyt.com/oedo/" target="_blank">Oedo Yokai</a>. Working with ethnologist, Dr. Hiro Kubota and artist Atsushi Morioka, &#8220;Oedo Yokai&#8221; is <a id="gtb2" title="Koozyt's" href="http://www.koozyt.com/" target="_blank">Koozyt&#8217;s</a> <strong>&#8220;first attempt to cross IT (Location Information) and Folkloristics.&#8221; </strong></p>
<p><strong>&#8220;The Japanese &#8220;Yokai&#8221; are known to dwell and appear at specific locations. They can frequently be seen within the grounds of shrines and temples, believed to be the border between this world and the afterlife, or in more common places like on a hill or at a crossroads. If the &#8220;Yokai&#8221; symbolize the mystery, legend, and lore associated with places, as our interests fade from actual locations, the roles they play in modern day society will diminish, and the &#8220;Yokai&#8221; might then cease to appear at all.&#8221;</strong></p>
<p>I love this idea of bringing the ancient spirits of place back into our lives with our new tools of location awareness.</p>
<p>Oedo Yokai also reminds me of Aaron Straup Cope&#8217;s work on <a href="http://www.aaronland.info/weblog/2008/07/27/invisible/#historybox" target="_blank">the idea of every spot being a &#8220;history box&#8221;</a>, which he explained is &#8220;one of the threads behind <a href="http://blog.flickr.net/en/2009/02/24/an-abundant-present/" target="_blank">the &#8216;nearby&#8217; project at Flickr</a>.&#8221;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/oedoyokai.jpg"><img class="alignnone size-medium wp-image-3683" title="oedoyokai" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/oedoyokai-300x199.jpg" alt="oedoyokai" width="300" height="199" /></a></p>
<h3>The Food Genome</h3>
<p>I cannot end this roundup of WhereWeek without a mention of <a href="http://www.foodgenome.com/home" target="_blank">The Food Genome</a>.</p>
<p><strong>&#8220;Food Genome is a big hungry brain that scours the internet, trying to learn everything there is to know about food.&#8221;</strong></p>
<p>Watch out for the upcoming launch of this project; it stole the show with an exciting presentation at WhereCamp. You can follow <a href="http://twitter.com/foodgenome">@foodgenome on Twitter</a> now.</p>
<p>To get one of the gorgeous Food Genome brochures you had to ask Marc Powell a good question. Notice an eager hand reaching out in the picture below. I asked, &#8220;How would the basic building blocks of the Food Genome be licensed?&#8221; I got my brochure and a rain check on an answer to my question.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/foodgenomepost.jpg"><img class="alignnone size-medium wp-image-3664" title="foodgenomepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/foodgenomepost-199x300.jpg" alt="foodgenomepost" width="199" height="300" /></a></p>
<h3>The Ubiquitous Media Studio</h3>
<p>Another highlight of WhereCamp was hearing from <a id="nfup" title="Gene Becker" href="http://lightninglaboratories.com/about.html" target="_blank">Gene Becker</a> about his new project, the <a id="bs9-" title="Ubiquitous Media Studio" href="http://ubistudio.org/" target="_blank">Ubiquitous Media Studio</a>, which will be located in Palo Alto. The project is still in the early stages of development, but it sounds really exciting. I am looking forward to being involved from the East Coast. If you&#8217;re curious where this is going, <strong><a href="http://twitter.com/ubistudio">follow @ubistudio on Twitter</a></strong> to stay updated.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/gene.jpg"><img class="alignnone size-medium wp-image-3684" title="gene" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/gene-300x300.jpg" alt="gene" width="300" height="300" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/feed/</wfw:commentRss>
		<slash:comments>14</slash:comments>
		</item>
	</channel>
</rss>
