<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; the shape of alpha</title>
	<atom:link href="http://www.ugotrade.com/tag/the-shape-of-alpha/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Total Immersion and the &#8220;Transfigured City:&#8221; Shared Augmented Realities, the &#8220;Web Squared Era,&#8221; and Google Wave</title>
		<link>http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/</link>
		<comments>http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/#comments</comments>
		<pubDate>Sun, 27 Sep 2009 04:42:42 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[3D Interactive Live Show]]></category>
		<category><![CDATA[Acrossair]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[Amphibious Architecture]]></category>
		<category><![CDATA[anime]]></category>
		<category><![CDATA[Apple iPhone]]></category>
		<category><![CDATA[AR baseball cards for Topps]]></category>
		<category><![CDATA[AR Consortium]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR goggles]]></category>
		<category><![CDATA[Architectural League of New York]]></category>
		<category><![CDATA[ARML]]></category>
		<category><![CDATA[ARN]]></category>
		<category><![CDATA[Augmented City]]></category>
		<category><![CDATA[augmented city lab]]></category>
		<category><![CDATA[augmented reality books]]></category>
		<category><![CDATA[augmented reality entrepreneurship]]></category>
		<category><![CDATA[augmented reality goggles]]></category>
		<category><![CDATA[augmented reality making visible the invisible]]></category>
		<category><![CDATA[augmented reality mark-up language]]></category>
		<category><![CDATA[augmented reality pollution meter]]></category>
		<category><![CDATA[augmented reality standards]]></category>
		<category><![CDATA[augmented reality toys]]></category>
		<category><![CDATA[augmented virtuality]]></category>
		<category><![CDATA[Bionic Eye]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Bruno Uzzan]]></category>
		<category><![CDATA[Conflux]]></category>
		<category><![CDATA[cross platform compatibility for augmented reality]]></category>
		<category><![CDATA[D'Fusion]]></category>
		<category><![CDATA[Daniel Wagner]]></category>
		<category><![CDATA[Denno Coil]]></category>
		<category><![CDATA[distributed]]></category>
		<category><![CDATA[elements of networked urbanism]]></category>
		<category><![CDATA[Elizabeth Goodman]]></category>
		<category><![CDATA[everyware]]></category>
		<category><![CDATA[Fish 'n Microchips]]></category>
		<category><![CDATA[Flickr]]></category>
		<category><![CDATA[Gavin Starks]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[geo spatial web]]></category>
		<category><![CDATA[geoAR]]></category>
		<category><![CDATA[geoaugmentation]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Google Wave Protocol]]></category>
		<category><![CDATA[Gov 2.0 Expo Showcase]]></category>
		<category><![CDATA[Gov 2.0 Summit]]></category>
		<category><![CDATA[Graz University of Technology]]></category>
		<category><![CDATA[Imagination]]></category>
		<category><![CDATA[Incheon Free Economic Zone]]></category>
		<category><![CDATA[information shadows]]></category>
		<category><![CDATA[Int13]]></category>
		<category><![CDATA[Interaction Design for Augmented Reality]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[Jeremy Hight]]></category>
		<category><![CDATA[Joe Lamantia]]></category>
		<category><![CDATA[Jonathan Laventhol]]></category>
		<category><![CDATA[Korea's u-Cities]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Layar 3D]]></category>
		<category><![CDATA[magic lens augmented reality]]></category>
		<category><![CDATA[manga]]></category>
		<category><![CDATA[Mark Shepard]]></category>
		<category><![CDATA[Mark Weiser]]></category>
		<category><![CDATA[markerless mobile augmented reality]]></category>
		<category><![CDATA[Metaio]]></category>
		<category><![CDATA[Microsoft Bing]]></category>
		<category><![CDATA[Mike Kuniavsky]]></category>
		<category><![CDATA[Mobilizy]]></category>
		<category><![CDATA[multiuser augmented reality]]></category>
		<category><![CDATA[Natalie Jeremijenko]]></category>
		<category><![CDATA[Natural Fuse]]></category>
		<category><![CDATA[near-field object recognition and tracking]]></category>
		<category><![CDATA[Networked City]]></category>
		<category><![CDATA[networked urbanism]]></category>
		<category><![CDATA[newer urbanism]]></category>
		<category><![CDATA[open]]></category>
		<category><![CDATA[open augmented reality framework]]></category>
		<category><![CDATA[open augmented reality network]]></category>
		<category><![CDATA[Orange Cone]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[realtime panorama mapping on mobile phones]]></category>
		<category><![CDATA[RobotVision]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[Sentient City Survival Kit]]></category>
		<category><![CDATA[Shangri La]]></category>
		<category><![CDATA[shared augmented realities]]></category>
		<category><![CDATA[Sky Writer]]></category>
		<category><![CDATA[Steven Feiner]]></category>
		<category><![CDATA[symbiosis between augmented reality and brands]]></category>
		<category><![CDATA[the internet of things]]></category>
		<category><![CDATA[the LAN of things]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[the web squared era]]></category>
		<category><![CDATA[ThingM]]></category>
		<category><![CDATA[things as services]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Tim O'Reilly]]></category>
		<category><![CDATA[Tod E. Kurt]]></category>
		<category><![CDATA[Total Immersion]]></category>
		<category><![CDATA[Toward the Sentient City]]></category>
		<category><![CDATA[Transfigured City]]></category>
		<category><![CDATA[twitter]]></category>
		<category><![CDATA[u-City]]></category>
		<category><![CDATA[ubiquitous computing and augmented reality]]></category>
		<category><![CDATA[uCity]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[Weisarian Ubiquitous Computing]]></category>
		<category><![CDATA[Wikitude]]></category>
		<category><![CDATA[xClinic]]></category>
		<category><![CDATA[XMPP versus HTTP]]></category>
		<category><![CDATA[Yochai Benkler]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4439</guid>
		<description><![CDATA[Above is an image from Total Immersion&#8217;s augmented reality experience developed for the &#8220;Networked City&#8221; exhibition in South Korea &#8211; &#8220;a fun scenario created for a u-City&#8217;s infrastructure and city management service&#8221; &#8220;To the naked eye, the exhibit looks like a bare bones model of a city. But when visitors put on the special [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_338cwpzntgp_b.jpg"><img class="alignnone size-medium wp-image-4440" title="dhj5mk2g_338cwpzntgp_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_338cwpzntgp_b-300x170.jpg" alt="dhj5mk2g_338cwpzntgp_b" width="300" height="170" /></a></p>
<p><em>Above is an image from <a href="http://www.t-immersion.com/" target="_blank">Total Immersion&#8217;s</a> augmented reality experience developed for the <a id="winm" title="&quot;Networked City&quot; exhibition in South Korea, &quot;" href="http://www.tomorrowcity.or.kr/sv_web/en_US/space.SpaceInfo.web?targetMethod=DoUe04Sub1" target="_blank">&#8220;Networked City&#8221; exhibition in South Korea</a> &#8211; &#8220;a fun scenario created for a <a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">u-City&#8217;s</a> infrastructure and city management service.&#8221;</em></p>
<p><strong>&#8220;To the naked eye, the exhibit looks like a bare bones model of a city. But when visitors put on the special AR goggles a whole new world unfolds &#8211; as graphics overlaid on the city model.&#8221;</strong> <em>(<a href="http://gamesalfresco.com/2009/09/14/total-immersion-brings-augmented-reality-to-tomorowcity-todaytomorrow/" target="_blank">Games Alfresco</a>)</em></p>
<p>&#8220;The Networked City&#8221; is a large scale augmented virtuality of a scenario for a networked city. But my guess, reading the <em><a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">Korea IT Times</a></em>, is that the plan is to move from an augmented virtuality to an augmented reality as Incheon Free Economic Zone (IFEZ) realizes its vision to become a leading u-City &#8211; where reality is turned &#8220;inside out&#8221; (see <a id="x:2w" title="Inside Out Reality" href="http://www.uxmatters.com/mt/archives/2009/08/inside-out-interaction-design-for-augmented-reality.php">Inside Out: Interaction Design for Augmented Reality</a>). If you are not familiar with South Korea&#8217;s u-Cities, <a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">check out this post</a> for a short primer (and note the <a href="http://www.google.com/trends?q=augmented+reality&amp;ctab=1986817859&amp;geo=all&amp;date=all" target="_blank">Google Trends search on Augmented Reality</a> shows South Korea leaving everyone else in the dust).</p>
<h3>Ubiquitous computing and augmented reality are like adenine and thymine &#8211; a DNA base pair.</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-24-at-11.34.35-PM.png"><img class="alignnone size-medium wp-image-4442" title="Screen shot 2009-09-24 at 11.34.35 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-24-at-11.34.35-PM-300x256.png" alt="Screen shot 2009-09-24 at 11.34.35 PM" width="300" height="256" /></a></p>
<p><em>A sky view of Incheon Free Economic Zone (<a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">from Korean IT Times</a>). For more on the IFEZ vision to become a leading u-City <a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">see here</a>.</em></p>
<p><a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">Korea IT Times</a> writes about the u-city concept:</p>
<p><strong>&#8220;Korea began using the term u-City after accepting the concept of ubiquitous computing, a post-desktop model of human-computer interaction created by Mark Weiser, the chief technologist of the Xerox Palo Alto Research Center in California, in 1998. There has been a lot of research in this field since 2002. As a result, many local governments in Korea have applied this concept to various development projects since 2005 based on a practical approach to it.&#8221;</strong></p>
<p>The back story to many of my recent posts, including this one, is an understanding of a relationship between ubiquitous computing and augmented reality that emerged, for me, in a February conversation with Adam Greenfield, <a title="Permanent Link to Towards a Newer Urbanism: Talking Cities, Networks, and Publics with Adam Greenfield" rel="bookmark" href="../../2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/">Towards a Newer Urbanism: Talking Cities, Networks, and Publics with Adam Greenfield</a>. In case you missed it, here is the link again, because I think it holds up very well considering the rapid developments of recent months. Also, importantly for this post, it includes a discussion of moving on from Weiserian visions.</p>
<p><a href="http://speedbird.wordpress.com/" target="_blank">Adam Greenfield&#8217;s Speedbird</a> is one of my key sources for understanding &#8220;networked urbanism,&#8221; and the list he makes of <a href="http://speedbird.wordpress.com/2009/03/22/the-elements-of-networked-urbanism/" target="_blank">the elements of networked urbanism here</a> (also see the comments) &#8211; is my mantra for thinking about the DNA base pair relationship of augmented reality and ubiquitous computing.</p>
<p>Adam Greenfield&#8217;s, <strong>&#8220;summary of what those of us who are thinking, writing and speaking about networked urbanism seem to be seeing&#8221;</strong> is:</p>
<p><strong>1. From <em>latent</em> to <em>explicit</em>; 2. From <em>browse</em> to <em>search</em>; 3. From <em>held</em> to <em>shared</em>; 4. From <em>expiring</em> to <em>persistent</em>; 5. From <em>deferred</em> to <em>real-time</em>; 6. From <em>passive</em> to <em>interactive</em>; 7. From <em>component</em> to <em>resource</em>; 8. From <em>constant</em> to <em>variable</em>; 9. From <em>wayfinding</em> to <em>wayshowing</em>; 10. From <em>object</em> to <em>service</em>; 11. From <em>vehicle</em> to <em>mobility</em>; 12. From <em>community</em> to <em>social network</em>; 13. From <em>ownership</em> to <em>use</em>; 14. From <em>consumer</em> to <em>constituent</em>.</strong></p>
<h3>Augmented Reality &#8211; Making Visible the Invisible</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-2.44.27-PM.png"><img class="alignnone size-medium wp-image-4509" title="Screen shot 2009-09-26 at 2.44.27 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-2.44.27-PM-300x229.png" alt="Screen shot 2009-09-26 at 2.44.27 PM" width="300" height="229" /></a></p>
<p>The screenshot above is one of the coolest &#8220;making visible the invisible&#8221; AR applications. It was developed at the Columbia University Graphics and User Interface Lab, where <a href="http://www1.cs.columbia.edu/%7Efeiner/" target="_blank">Steven Feiner</a> is Director (see the deep list of projects from the lab <a href="http://graphics.cs.columbia.edu/top.html" target="_blank">here</a>). This app &#8220;shows carbon monoxide levels projected over New York City. The height of each ball reflects concentrations of the pollutant.&#8221; Credit: Sean White and Steven Feiner (<a href="http://www.technologyreview.com/computing/23515/page2/" target="_blank">via Technology Review</a>).</p>
<p>The recent emergence of &#8220;magic lens&#8221; augmented reality apps for our smart phones &#8211; <a href="http://www.wikitude.org/" target="_blank">Wikitude</a>, <a href="http://layar.com/" target="_blank">Layar</a>, <a href="http://www.acrossair.com/" target="_blank">Acrossair</a>, <a href="http://support.sekaicamera.com/">Sekai Camera</a>, and now many others &#8211; has given us a new window into our cities. But we have yet to realize the full potential of the AR/ubicomp base pair that can &#8220;make visible the invisible&#8221; and give us new opportunities to relate to the invisible data ecosystems of our cities, not merely as a spectator experience, but as an interactive, in-context, real-time opportunity to reimagine social relations.</p>
<p><a href="http://www.sentientcity.net/exhibit/?p=3" target="_blank">Mark Shepard</a> says in <a href="http://www.sentientcity.net/exhibit/?p=3" target="_blank">his curatorial statement</a> for <a href="http://www.sentientcity.net/exhibit/" target="_blank">&#8220;Toward the Sentient City&#8221;</a> (much more soon on this very significant exhibit, which runs from Sept. 17 to Nov. 7, 2009):</p>
<p><strong>&#8220;In place of natural weather systems, however, today we find the dataclouds of 21st century urban space increasingly shaping our experience of this city and the choices we make there.&#8221;</strong></p>
<p>Augmented Reality, as Joe Lamantia points out, is becoming the great &#8220;<a id="o0mh" title="ambassador of ubiquitous computing" href="http://www.uxmatters.com/mt/archives/2009/08/inside-out-interaction-design-for-augmented-reality.php">ambassador of ubiquitous computing</a>.&#8221; AR is &#8220;<strong>&#8230;mak[ing] it possible to experience the new world of ubiquitous computing by reifying the digital layer that permeates our inside-out world,&#8221;</strong> and we are only just glimpsing the razor-thin end of the wedge in this regard.</p>
<p>I am still working on my <a href="http://www.gov2summit.com/" target="_blank">Gov 2.0 Summit</a> write-up, and, amongst other things, I will talk about how an emerging new social contract around open data, here in the US, will put augmented reality apps center stage &#8211; &#8220;doing stuff that matters.&#8221; At the <a href="http://www.gov2expo.com/gov2expo2009" target="_blank">Gov 2.0 Expo Showcase</a> Tim O&#8217;Reilly tweeted:</p>
<p><a id="i23q" title="Tim O'Reilly" href="http://twitter.com/timoreilly">Tim O&#8217;Reilly</a> Really enjoyed @capttaco (Digital Arch Design) @ #gov20e: &#8220;Augmented Reality could be a new public infrastructure&#8221; <a href="http://bit.ly/18iCx" target="_blank">http://bit.ly/18iCx</a></p>
<p>Also see Tim O&#8217;Reilly and Jennifer Pahlka on Forbes.com discuss <a href="http://www.forbes.com/2009/09/23/web-squared-oreilly-technology-breakthroughs-web2point0.html" target="_blank">the &#8220;Web Squared&#8221; Era</a> &#8211; <strong>&#8220;the Web Squared era is an era of augmented reality arriving (like the sensor revolution) stealthily, in more pedestrian clothes than we expected&#8230; our world will have &#8220;<a href="http://www.orangecone.com/archives/2009/02/smart_things_an.html" target="_blank">information shadows</a>.&#8221; Augmented reality amounts to information shadows made visible.&#8221;</strong></p>
<p>Again, there is a back story to how I came to think about information shadows in relation to augmented reality. So in case you missed it the first time, here is the link to a conversation that began in a hallway meeting between Tim O&#8217;Reilly; Mike Kuniavsky, <a href="http://thingm.com/" target="_blank">ThingM</a>; Usman Haque, <a href="http://www.pachube.com/" target="_blank">Pachube</a>; and Gavin Starks, <a href="http://www.amee.com/" target="_blank">AMEE</a>, at <a href="http://en.oreilly.com/et2009/" target="_blank">ETech earlier this year</a>: <a title="Permanent Link to Dematerializing the World, Shadows, Subscriptions and Things as Services: Talking With Mike Kuniavsky at ETech 2009" rel="bookmark" href="../../2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/">&#8220;Dematerializing the World, Shadows, Subscriptions and Things as Services: Talking With Mike Kuniavsky at ETech 2009&#8221;</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-9.32.09-PM.png"><img class="alignnone size-medium wp-image-4547" title="Screen shot 2009-09-26 at 9.32.09 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-9.32.09-PM-300x225.png" alt="Screen shot 2009-09-26 at 9.32.09 PM" width="300" height="225" /></a></p>
<p><a href="http://www.slideshare.net/rlenz/augmented-city-lab-picnic-09" target="_blank">Slide from Augmented City Lab</a> @ <a href="http://www.picnicnetwork.org/" target="_blank">Picnic &#8217;09</a></p>
<h3>So What&#8217;s Next for Mobile Augmented Reality?</h3>
<p><a href="http://www.youtube.com/watch?v=434zw201iN8&amp;feature=player_embedded" target="_blank"><img class="alignnone size-medium wp-image-4513" title="Screen shot 2009-09-26 at 3.45.45 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-3.45.45-PM-300x186.png" alt="Screen shot 2009-09-26 at 3.45.45 PM" width="300" height="186" /></a></p>
<p>These videos from Daniel Wagner&#8217;s team at Graz University of Technology, showing <a href="http://www.youtube.com/watch?v=434zw201iN8&amp;feature=player_embedded" target="_blank">Realtime Panorama Mapping and Tracking on Mobile Phones</a> and <a href="http://www.youtube.com/watch?v=W-mJG3peIXA&amp;feature=player_embedded" target="_blank">Creating an Indoor Panorama in Realtime</a>, indicate, as Rouli from Games Alfresco points out, that there is a lot in store for us at <a href="http://www.icg.tugraz.at/Members/daniel/MultipleTargetDetectionAndTrackingWithGuaranteedFrameratesOnMobilePhones/inproceedings_view">ISMAR09</a>.</p>
<p>We may not be so impressed by directory-style/&#8220;post it&#8221; AR anymore, as these applications have become commonplace so quickly! But while these early mobile AR apps may be disappointing in relation to some futurist visions of AR &#8211; merely AR/ubicomp appetizers &#8211; there are still good implementations of this model coming out (see newcomers to the app store <a id="tzvf" title="Bionic Eye" href="http://mashable.com/2009/09/24/bionic-eye/" target="_blank">Bionic Eye</a> and <a href="http://www.readwriteweb.com/archives/robotvision_a_bing-powered_iphone_augmented_realit.php" target="_blank">RobotVision</a>). And <a href="http://layar.com/" target="_blank">Layar</a>, always on the ball, has upped the ante for the new cohort of AR browsers with <a href="http://layar.com/3d/" target="_blank">Layar 3D</a>.</p>
<p>But as Bruce Sterling <a href="http://www.wired.com/beyond_the_beyond/2009/09/augmented-reality-robotvision/" target="_blank">notes here</a>:</p>
<p><strong>&#8220;In AR, everybody wants to be the platform and the browser, and nobody wants to be the boring old geolocative database. Look how Tim [creator of RobotVision] here, who is like one guy working on his weekends, can boldly fold-in the multi-billion dollar, multi-million user empires of Apple iPhone, Microsoft Bing, Flickr, and Twitter, all under his right thumb.&#8221;</strong></p>
<p>(Watch the <a id="qxek" title="video here" href="http://www.youtube.com/watch?v=hWC9gax7SCA&amp;feature=player_embedded">video here</a>.)</p>
<p>But if you are looking for something more from AR, you probably won&#8217;t have to wait too long. The two pioneering companies in AR, <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a>, founded in 1999, and <a href="http://www.metaio.com/" target="_blank">Metaio</a>, founded in 2003, are both coming out with &#8220;mobile augmented reality platforms&#8221; in a matter of weeks (see press releases <a href="http://augmented-reality-news.com/2009/09/14/bringing-its-augmented-reality-to-mobile-applications-total-immersion-partners-with-smartphones-app-provider-int13/" target="_blank">here</a> and <a href="http://gamesalfresco.com/2009/09/18/metaio-announcing-mobile-augmented-reality-platform-junaio/" target="_blank">here</a>). And both companies, it seems, will deploy much more sophisticated AR rendering and tracking than we have seen to date.</p>
<p>I approached Bruno Uzzan, founder and CEO of Total Immersion, for an interview as part of my look at the new industry of augmented reality through the eyes of the founding members of the <a href="http://www.arconsortium.org/" target="_blank">AR Consortium</a>. These consortium members are some of the first commercial augmented reality companies.</p>
<p><a href="#jumpto">The interview below</a> with Bruno began early this summer; then we both went on vacation, and it picks up after the announcement of the <a href="http://www.int13.net/blog/en/" target="_blank">partnership between Total Immersion and Int13</a>.</p>
<p>The significance of this announcement is that Total Immersion is now positioned to take the augmented reality experiences they have developed for a number of top brands onto multiple mobile platforms with &#8220;<strong>Int13&#8217;s very clever embedded solution that allows our [Total Immersion's] solutions to work across many [mobile] platforms,&#8221;</strong> while Int13 gets to extend their reach.</p>
<p>Total Immersion has a 50-person R&amp;D team, and their two main focuses have been, first, getting:</p>
<p><strong>&#8220;Augmented Reality to work with as many platforms as possible &#8211; PC, Mac, Mobile, Game Consoles, all those are the platforms that we are targeting. We are currently doing lot of work in the R &amp; D team in cross platform compatibility&#8230;.&#8221;</strong></p>
<p>and, second:</p>
<p><strong>&#8220;Our R&amp;D guys are working on the real world interacting more with the virtual world. And I have started seeing some results which are pretty much crazy and this will be ready for next year.&#8221;</strong></p>
<h3>Pandora&#8217;s Box &#8211; Shared Augmented Realities</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-25-at-1.18.15-AM.png"><img class="alignnone size-medium wp-image-4450" title="Screen shot 2009-09-25 at 1.18.15 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-25-at-1.18.15-AM-186x300.png" alt="Screen shot 2009-09-25 at 1.18.15 AM" width="186" height="300" /></a></p>
<p>Spes, or &#8220;Hope&#8221;; <a title="Engraving" href="http://en.wikipedia.org/wiki/Engraving">engraving</a> by <a title="Sebald Beham" href="http://en.wikipedia.org/wiki/Sebald_Beham">Sebald Beham</a>, German, c. 1540 (see the <a href="http://en.wikipedia.org/wiki/Pandora%27s_box" target="_blank">Wikipedia article on Pandora&#8217;s Box</a>)</p>
<p>There are many weaknesses to the mobile smart phone AR experiences we have now, and the lack of near-field object recognition (to date) and difficulties with accurate positioning aren&#8217;t the only ones. Note, re solving positioning problems in mobile AR: we have yet to see AR leverage public libraries for analyzing scenes, like Flickr&#8217;s geotagged photos &#8211; see Aaron Straup Cope&#8217;s work on <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha.&#8221;</a> For more on this, see <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">my post here</a>.</p>
<p>But, as Joe Lamantia points out:</p>
<p><strong>&#8220;One of the weakest aspects of the existing interaction patterns for augmented reality is their reliance on single-person, socially disconnected user experiences.&#8221;</strong></p>
<p>In my view, <strong>The Pandora&#8217;s Box of Augmented Realities</strong> is an open, distributed, multiuser augmented reality framework, fully integrated with the internet and world wide web.</p>
<p>As Yochai Benkler has pointed out many times, and argues again in <a href="Capital, Power, and the Next Step in Decentralization" target="_blank">Capital, Power, and the Next Step in Decentralization</a>, it is &#8220;open, collaborative, distributed practices that have been at the core of what made the Internet.&#8221; We have to try to make sure that open, collaborative, distributed practices are at the core of mobile augmented reality.</p>
<h3>Can Google Wave be the basis for an Open, Distributed, Multiuser Augmented Reality Framework?</h3>
<p><a href="http://www.lostagain.nl/tempspace/PrototypeDiagram.html" target="_blank"><img class="alignnone size-medium wp-image-4492" title="Screen shot 2009-09-25 at 11.51.20 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-25-at-11.51.20-PM-300x141.png" alt="Screen shot 2009-09-25 at 11.51.20 PM" width="300" height="141" /></a></p>
<p>I have been exploring the idea of using the <a href="http://wave.google.com/" target="_blank">Google Wave</a> protocol as the basis for a distributed, multiuser open augmented reality framework with a small group of AR enthusiasts and developers. And I am happy to say the proposal is beginning to get fleshed out a little. New collaborators are welcome, both for &#8220;gear heady&#8221; input and use case suggestions (but re the latter, you can&#8217;t just say everything you see in <a href="http://en.wikipedia.org/wiki/Denn%C5%8D_Coil" target="_blank">Denno Coil</a>..!).</p>
<p>This effort started with Thomas Wrobel&#8217;s proposal for an Open AR Framework prototyped on IRC &#8211; see <a id="s336" title="here" href="../../2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/">here</a>, and click to enlarge the image above of <a href="http://www.lostagain.nl/tempspace/PrototypeDiagram.html" target="_blank">&#8220;Sky Writer: Basic Concept for an Open Multi-source AR Framework.&#8221;</a></p>
<p>But recently we began looking at the <a href="Wave Federation Protocol" target="_blank">Wave Federation Protocol</a>. And if you check out <a id="ogbq" title="this post," href="http://www.jasonkolb.com/weblog/2009/09/why-google-wave-is-the-coolest-thing-since-sliced-bread.html#more" target="_blank">this post</a> and <a id="c0ep" title="this post" href="http://reuvencohen.sys-con.com/node/980762" target="_blank">this post</a>, you may get a glimpse of why the Google Wave protocol might be a good basis for an open, distributed AR framework. You will notice, if you study what Google Wave has done with the XMPP protocol, that many of <a href="http://speedbird.wordpress.com/2009/03/22/the-elements-of-networked-urbanism/" target="_blank">the elements of networked urbanism</a> that Adam Greenfield describes resonate strongly with what is being attempted in Wave.</p>
<p>But enough said for now! Regardless of the details of implementation &#8211; Google Wave or an AR protocol built from scratch (phew! the latter does seem like a lot of work) &#8211; an open, distributed, multiuser AR framework integrated with the internet and web would explode the potential of AR, creating new possibilities for data flows, mashups, and shared augmented realities.</p>
<p>And we are excited by Google Wave because, as Thomas puts it:</p>
<p><strong>&#8220;The really great thing wave does &#8230; (aside from being an open standard backed by a major player &#8230; hopefully leading to thousands of worldwide servers) &#8230; is that it allows anyone to create any number of waves, set precisely who can view or edit them, and for them to be able to be updated quickly and continuously (and even simultaneously!)</strong> <strong>Better yet, changes will (if necessary) propagate to all the other servers sharing that wave. It does all this right now. From my eyes this does a lot of the work of an AR infrastructure already.</strong></p>
<p><strong>I can&#8217;t see any other protocol actually doing anything like this at the moment, although correct me if I&#8217;m wrong, as alternatives are always welcome :)&#8221;</strong></p>
<p>Also, Thomas notes, <strong>&#8220;even the playback system (that is, the ability to play back the changes made to a wave since its creation) &#8230; this could give us automatically some of the ideas Jeremy Hight has mentioned in <a href="http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" target="_blank">his visionary work here</a> and <a href="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" target="_blank">here</a> on &#8216;the geo spatial web, interlinked locations and data, immersive augmentation and open source geo augmentation.&#8217;&#8221;</strong></p>
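<p>For readers who like to see things concretely, the wave model Thomas describes &#8211; named, access-controlled documents whose edits propagate to every federated server and can be played back from an operation log &#8211; can be sketched in a few lines of Python. This is a hypothetical illustration only: the class and method names below are made up for this sketch and are not the actual Google Wave API.</p>

```python
# Hypothetical sketch of the wave model described above: a named document
# ("wave") with per-user access control, an append-only operation log that
# supports playback, and propagation of changes to federated servers.
# All names here are illustrative, not the real Wave/XMPP protocol.

class Wave:
    def __init__(self, wave_id, creator):
        self.wave_id = wave_id
        self.participants = {creator: "edit"}  # user -> "view" | "edit"
        self.ops = []                          # append-only log of edits
        self.federated = set()                 # remote servers sharing this wave

    def add_participant(self, user, role="view"):
        self.participants[user] = role

    def apply(self, user, op):
        if self.participants.get(user) != "edit":
            raise PermissionError(f"{user} cannot edit {self.wave_id}")
        self.ops.append((user, op))
        for server in self.federated:
            server.receive(self.wave_id, (user, op))  # near-real-time push

    def playback(self):
        # Replay the log from creation -- the "playback" feature above.
        return " ".join(op for _user, op in self.ops)


class RemoteServer:
    """Stand-in for a federated wave server receiving pushed operations."""
    def __init__(self):
        self.inbox = []

    def receive(self, wave_id, op):
        self.inbox.append((wave_id, op))


wave = Wave("ar-layer/eiffel-tower", creator="alice")
wave.add_participant("bob", role="edit")
peer = RemoteServer()
wave.federated.add(peer)
wave.apply("alice", "place 3D marker")
wave.apply("bob", "annotate marker")
print(wave.playback())  # replays both edits in order
```

In an AR setting the &#8220;wave&#8221; would hold annotations anchored to a place or object, and federation is what lets two people on different providers see the same augmentation update in near real time.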
<p>One of the many reasons an open, distributed AR framework would be so cool is that it would open up all kinds of possibilities for <span>GeoAR</span>, by providing the overarching protocol for communicating updates on top of which the substandards that will facilitate <span>GeoAR</span> can be built.</p>
<p>Also important to note is that the <a id="o0is" title="Wave Federation Protocol docs which are all publicly available here" href="http://www.waveprotocol.org/" target="_blank">Wave Federation Protocol</a> allows anyone:</p>
<p><strong>&#8220;to run wave servers and become wave providers, for themselves, or as services for their users, and to &#8216;federate&#8217; waves, that is, to share waves with each other and with Google Wave.&#8221; The protocol consists of &#8220;a federation gateway and a federation proxy and is based on an open extension to the <a href="http://www.waveprotocol.org/draft-protocol-spec#RFC3920">XMPP core</a> [RFC3920] protocol to allow near real-time communication between two wave servers.&#8221; See Reuven Cohen&#8217;s blog for more <a id="rmr3" title="here" href="http://reuvencohen.sys-con.com/node/980762" target="_blank">here</a> and <a id="mqxr" title="&quot;HTTP is Dead, Long Live the Real Time Cloud.&quot;" href="http://www.elasticvapor.com/2009/05/http-is-dead-long-live-realtime-cloud.html" target="_blank">here, &#8220;HTTP is Dead, Long Live the Real Time Cloud.&#8221;</a></strong></p>
<p>Still, some people have expressed concern that an AR framework using the Google Wave protocol would give Google disproportionate influence. Will Google-specific functionality be an issue? How much stuff is Google-specific just because no one else is using it (yet)? And how much is Google-specific because it holds no value to anyone but Google? These are some of the questions that have come up.</p>
<p>You are going to see a variety of suggestions for standards and specs for open AR coming out in the next few months, which, as Robert Rice of the <a href="http://www.arconsortium.org/" target="_blank">AR Consortium</a> points out, is <strong>&#8220;a good thing, we need that competition early on to settle down on best case.&#8221;</strong> Recently, <a href="http://www.mobilizy.com/" target="_blank">Mobilizy</a> offered up ARML (&#8220;an augmented reality mark-up language specification based on the OpenGIS&#174; KML Encoding Standard (OGC KML) with extensions&#8221;) for consideration; see <a href="http://www.mobilizy.com/enpress-release-mobilizy-proposes-arml" target="_blank">here</a>.</p>
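<p>To make the flavor of such a proposal concrete: since ARML builds on OGC KML, an AR point of interest is essentially a KML Placemark plus AR-specific extension elements. The snippet below is a hypothetical sketch only &#8211; the <code>ar:</code> namespace and element names are invented for illustration and are not taken from Mobilizy&#8217;s actual specification.</p>

```python
# Hypothetical illustration of a KML-based AR markup in the spirit of ARML.
# The "ar:" extension namespace and element names are made up for this
# sketch -- consult the actual ARML proposal for the real element names.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
AR_NS = "http://example.org/ar-extensions"  # placeholder namespace

ET.register_namespace("", KML_NS)
ET.register_namespace("ar", AR_NS)

kml = ET.Element(f"{{{KML_NS}}}kml")
placemark = ET.SubElement(kml, f"{{{KML_NS}}}Placemark")
ET.SubElement(placemark, f"{{{KML_NS}}}name").text = "Manhattan Bridge POI"
point = ET.SubElement(placemark, f"{{{KML_NS}}}Point")
# KML coordinates are lon,lat[,alt]
ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = "-73.9904,40.7075,0"
# Hypothetical AR extension: which provider serves this augmentation
ET.SubElement(placemark, f"{{{AR_NS}}}provider").text = "example-ar-provider"

print(ET.tostring(kml, encoding="unicode"))
```

The appeal of this approach is that any existing KML tooling can still read the Placemark, while AR browsers that understand the extension namespace get the extra information they need.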
<p>It is, perhaps, also important to note that an Open AR Framework should be neutral and transparent with respect to techniques of &#8220;reality recognition&#8221; and methodologies of registration/tracking, allowing different ones to work on the system as new techniques evolve, and supporting as many evolving standards as possible.</p>
<p>Augmented reality developers, like Total Immersion and others with powerful rendering/tracking AR software, should be able to use an Open AR Framework to exchange the data their tracking will use. And the tracking/rendering problems they and other researchers have solved are much harder than figuring out data exchange on a standard infrastructure or protocol!</p>
<p>So I pricked up my ears when I heard Bruno Uzzan, CEO of <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a> &#8211; the first and currently the largest augmented reality company, with a 50-person R&amp;D team in France and offices in LA, where Bruno himself is now based &#8211; say: <strong>&#8220;Total Immersion is only months away from launching shared mobile augmented reality experiences using near field object recognition/tracking across multiple platforms&#8221;</strong> (for more details read my conversation with Bruno Uzzan <a href="#jumpto">below</a>).</p>
<p>I was happy when I asked Bruno about the possibilities for developing an open, distributed, multiuser augmented reality framework fully integrated with the internet and world wide web (possibly using Google Wave protocols), and he replied:</p>
<p><span id="pnk:" title="Click to view full content"><strong>&#8220;I think this is feasible. I think that&#8217;s doable &#8211; that&#8217;s just my opinion, I mean some people might have another kind of opinion &#8211; but I think that&#8217;s definitely doable.&#8221;</strong></span></p>
<h3>Total Immersion &#8211; working with the &#8220;symbiosis between augmented reality and brands&#8221;</h3>
<p><a href="http://www.youtube.com/watch?v=I7jm-AsY0lU" target="_blank"><img class="alignnone size-medium wp-image-4457" title="dhj5mk2g_344g64g96cq_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_344g64g96cq_b-300x224.png" alt="dhj5mk2g_344g64g96cq_b" width="300" height="224" /></a></p>
<p>Total Immersion has created many of the best known and most ambitious augmented reality experiences for major brands to date, including Mattel&#8217;s <a title="new toys" href="http://www.readwriteweb.com/archives/mattels_new_web-enabled_avatar_toys_will_offer_augmented_reality.php">new AR toys</a><a title="new toys" href="http://www.readwriteweb.com/archives/mattels_new_web-enabled_avatar_toys_will_offer_augmented_reality.php"><img src="http://www.uxmatters.com/mt/archives/images/new-window-arrow.gif" alt="" width="14" height="12" /></a> to be released in conjunction with the James Cameron film Avatar, <a id="dmas" title="AR baseball cards for Topps" href="http://www.youtube.com/watch?v=I7jm-AsY0lU">AR baseball cards for Topps</a>, <a href="http://www.youtube.com/watch?v=I7jm-AsY0lU" target="_blank">video here</a> (or click the screenshot above), and the <a href="http://www.publishersweekly.com/article/CA6698612.html?industryid=47152" target="_blank">UK&#8217;s first augmented reality books</a>.</p>
<p>Bruno founded Total Immersion 10 years ago when he was just 27. And the kind of conviction it took to survive as an augmented reality business in the decade before augmented reality captured the world&#8217;s attention is remarkable.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_343dbsph2fz_b1.png"><img class="alignnone size-medium wp-image-4456" title="dhj5mk2g_343dbsph2fz_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_343dbsph2fz_b1-300x225.png" alt="dhj5mk2g_343dbsph2fz_b" width="300" height="225" /></a></p>
<p>AR&#8217;s first steps out into the world, after 17 years as predominantly a lab science, may be &#8220;wobbly&#8221; (what new technology isn&#8217;t?), and sometimes gloriously kitsch &#8211; check out <a id="d_eu" title="the riotous video of the AR Live Show Total Immersion produced in Korea here." href="http://www.t-immersion.com/en,video-gallery,36.html" target="_blank">this riotous video of the 3D Interactive Live Show Total Immersion produced in Korea</a> (also see the <a href="http://augmented-reality-news.com/2009/09/15/entertainment-first-interactive-3d-live-show-now-open-in-south-korea/" target="_blank">Total Immersion Augmented Reality Blog</a> for more on TI&#8217;s turnkey Interactive 3D Live Show Solution).</p>
<p>As Lamantia points out <a id="eo6x" title="here" href="http://www.uxmatters.com/mt/archives/2009/08/inside-out-interaction-design-for-augmented-reality.php" target="_blank">here</a>, &#8220;projecting mixed realities into public, common, or social spaces makes them social by default.&#8221;</p>
<p>However, the potential for shared location-based augmented reality experiences is as yet untapped. So I see the entry of the most experienced commercial augmented reality company into mobile as pretty interesting. While smart phone AR still has significant limitations, and it certainly does differ from some of the futurist dreams of AR (see <a id="x3:y" title="Mok Oh's post here on his disappointment in this regard" href="http://allthingsv.com/2009/09/03/you-know-what-really-grinds-my-gears-augmented-reality/">Mok Oh&#8217;s post here on his disappointment in this regard</a>), it is significant that Total Immersion is committing to becoming a leader in mobile AR.</p>
<p>Our smart phones, the powerful networked sensor devices that so many people carry in their pockets, have proved themselves a &#8220;good enough for now&#8221; mediating device for early manifestations of the ubiquitous computing and augmented reality base pair. And now that AR and ubicomp are mixed into the rich, messy soup of everyday life, commerce, business, marketing, art, entertainment, and government, we should get ready to see these technologies grow up fast, and unfold in some surprising ways that lab science didn&#8217;t necessarily predict.</p>
<p>And, perhaps, the new dialogue between scientists and entrepreneurs may spur both communities to outdo themselves.</p>
<p>Particularly, as <a href="http://programmerjoe.com/" target="_blank">Joe Ludwig</a> notes: &#8220;It seems to me that the biggest disconnect between the academics and the entrepreneurs is that they disagree on how far we are from the finish line.&#8221;</p>
<p>See the comments on Ori Inbar&#8217;s post, <a title="Augmented Reality Entrepreneurship: Natural Evolution or Intelligent Design?" rel="bookmark" href="http://gamesalfresco.com/2009/09/22/augmented-reality-entrepreneurship-natural-evolution-or-intelligent-design/">Augmented Reality Entrepreneurship: Natural Evolution or Intelligent Design?</a>, for a courteous but spirited discussion on the potential benefits and frictions of the newly expanded AR community of researchers and entrepreneurs.</p>
<p>As <a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> (see my long conversation with Blair <a href="http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/" target="_blank">here</a>) notes:</p>
<p><strong>&#8220;not all academics and researchers are only interested in the traditional models of impact. Case in point: I wouldn&#8217;t be building unpublishable games, nor investing so much time talking to the press, entrepreneurs and VCs if I did not believe strongly in the value of the impact I am having by doing that &#8211; and I know others with the same attitude.&#8221;</strong></p>
<p>In this vein, check out the Marble Game (<a href="http://www.youtube.com/watch?v=6AKgH4On65A&amp;feature=player_embedded" target="_blank">video here</a>) developed by Steve Feiner and his team at Columbia University. It&#8217;s enabled by Goblin XNA, an open source AR framework built on top of Microsoft&#8217;s XNA, which powers Xbox Live games, Zune games, and some Windows games. For more about Goblin XNA and AR from Columbia, <a href="http://graphics.cs.columbia.edu/projects/goblin/index.htm" target="_blank">see here</a>. (Hat tip to <a href="http://www.oreillynet.com/pub/au/125" target="_blank">Brian Jepson</a> for this link.)</p>
<p><a href="http://www.youtube.com/watch?v=6AKgH4On65A&amp;feature=player_embedded" target="_blank"><img class="alignnone size-medium wp-image-4528" title="Screen shot 2009-09-26 at 5.16.56 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-5.16.56-PM-300x182.png" alt="Screen shot 2009-09-26 at 5.16.56 PM" width="300" height="182" /></a></p>
<p>We are still waiting for the kind of sexy AR specs that might get wide adoption &#8211; there is nothing totally game changing in <a href="http://gigantico.squarespace.com/336554365346/2009/9/20/eye-for-an-iphone.html" target="_blank">Gigantico&#8217;s AR eyewear roundup</a> (though <a href="http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;d=PG01&amp;p=1&amp;u=%2Fnetahtml%2FPTO%2Fsrchnum.html&amp;r=1&amp;f=G&amp;l=50&amp;s1=%2220080088937%22.PGNR.&amp;OS=DN/20080088937&amp;RS=DN/20080088937" target="_blank">maybe note this Apple patent</a>). But at least researchers are not afraid to explore the possibilities of AR goggles.</p>
<p>But how far are we now, with or without sexy goggles, from a fuller expression of the base pair DNA of ubiquitous computing and augmented reality?</p>
<h3>We may have a LAN of things before we have an Internet of Things</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_345g9bxbwd3_b1.jpg"><img class="alignnone size-medium wp-image-4534" title="dhj5mk2g_345g9bxbwd3_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_345g9bxbwd3_b1-300x199.jpg" alt="dhj5mk2g_345g9bxbwd3_b" width="300" height="199" /></a></p>
<p><em>The picture above is from a workshop I attended at <a href="http://confluxfestival.org/2009/about/" target="_blank">Conflux</a> last weekend &#8211; <a href="http://confluxfestival.org/2009/events/workshops/natalie-jeremijenko/" target="_blank">Fish &#8216;n microChips</a>, with <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko</a>. We are at the site of the <a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">Amphibious Architecture</a> project (a commissioned work for <a href="http://www.sentientcity.net/exhibit/?cat=3" target="_blank">Toward the Sentient City</a>) and &#8220;a collaborative project with <a href="http://www.environmentalhealthclinic.net/environmental-health-clinic/" target="_blank">xClinic</a>, The Living and other intelligent creatures.&#8221;</em></p>
<p>We are probably as far off from some grand futurist visions of ubiquitous computing as we are from some of the futurist visions of augmented reality. But as it turns out, that may not be a bad thing! Recently, <a href="http://twitter.com/mikekuniavsky" target="_blank">@mikekuniavsky</a> noted in a tweet:</p>
<p><span><span>&#8220;Another argument for the LAN of Things before the Internet of Things: <a rel="nofollow" href="http://tinyurl.com/lgp9uq" target="_blank">http://tinyurl.com/lgp9uq&#8221;</a></span></span></p>
<p>Bert Moore, <a href="http://www.aimglobal.org/members/news/templates/template.aspx?articleid=3553&amp;zoneid=24" target="_blank">in the article Mike linked to</a>, points out that the grand vision of an &#8220;internet of things&#8221; with everything connected to everything can &#8220;distract people from thinking about the benefits of RFID in smaller, more easily implemented and cost-justified applications.&#8221; The same argument, I think, applies to sensor networks and augmented reality.</p>
<p>In New York City, a series of commissioned works for the <a href="http://www.archleague.org/" target="_blank">Architectural League of New York&#8217;s</a> exhibit, <a href="http://www.sentientcity.net/exhibit/?cat=3" target="_blank">&#8220;Toward the Sentient City,&#8221;</a> are giving us the opportunity to dip our toes into the ocean of a &#8220;networked urbanism.&#8221; On only a small budget, two of the <a href="http://www.sentientcity.net/exhibit/?cat=4" target="_blank">five commissioned works</a>, <a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">Amphibious Architecture</a> and <a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a>, demonstrate how sensor networks can allow us to explore new kinds of communities &#8211; connecting people to environments in interesting ways to create new forms of social agency.</p>
<p><a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">&#8220;Amphibeous Architecture</a>&#8221; -Â  from The Living Architecture Lab at Columbia University Graduate School of Architecture, Planning and Preservation (Directors David Benjamin and Soo-in Yang) and Natalie Jeremijenko, Environmental Health Clinic at New York University, uses a skillfully built (electronics and water are notoriously hard to mix) array of partially submerged sensors to pierce the blinding, reflective surfaces of the riversÂ  surrounding Manhattan and to create a new two way relationship with the ecosystem below &#8211; the water, our neighbors the fish and even a beaver that lives in the water surrounding Manhattan.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-6.34.56-PM.png"><img class="alignnone size-medium wp-image-4536" title="Screen shot 2009-09-26 at 6.34.56 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-6.34.56-PM-300x125.png" alt="Screen shot 2009-09-26 at 6.34.56 PM" width="300" height="125" /></a></p>
<p><em>Image from <a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">Toward the Sentient City</a></em></p>
<p>In a similar spirit, &#8220;<a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a>&#8221; &#8211; Usman Haque, creative director; Nitipak &#8216;Dot&#8217; Samsen, designer; Ai Hasegawa, designer; Cesar Harada, designer; Barbara Jasinowicz, producer &#8211; creates a network of people and electronically assisted plants to explore what it takes to work together on energy consumption and to experience the consequences of &#8220;selfish&#8221; and &#8220;unselfish&#8221; behavior interactively before it is too late to modify our actions.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-6.55.29-PM.png"><img class="alignnone size-thumbnail wp-image-4537" title="Screen shot 2009-09-26 at 6.55.29 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-6.55.29-PM-150x150.png" alt="Screen shot 2009-09-26 at 6.55.29 PM" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-9.37.06-PM.png"><img class="alignnone size-thumbnail wp-image-4548" title="Screen shot 2009-09-26 at 9.37.06 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-9.37.06-PM-150x150.png" alt="Screen shot 2009-09-26 at 9.37.06 PM" width="150" height="150" /></a></p>
<p><em>The &#8220;Greedy Switch&#8221; from <a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a> on the left. On the right, &#8220;The System&#8221; &#8211; click to enlarge.</em></p>
<p>Much more to come in another post on these works and &#8220;Toward the Sentient City.&#8221; Also an update on how <a href="http://www.pachube.com/">Pachube</a> &#8211; an important part of both these projects, and a very important contribution to ubiquitous computing because it creates the opportunity to connect environments and create mashups from diverse sensor data feeds &#8211; has matured since my interview with Pachube founder Usman Haque, <a href="http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">&#8220;Pachube, Patching the Planet,&#8221;</a> in January this year.</p>
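<p>As a rough sketch of why this matters to developers: a Pachube feed exposes an environment as a set of datastreams with current values, which makes mashing up readings from different environments a few lines of code. The JSON below only approximates the shape of a Pachube feed, and the feed contents are invented for illustration; in practice you would fetch a real feed over HTTP with an API key.</p>

```python
import json

# Invented sample data approximating the shape of a Pachube feed (a title
# plus datastreams with current values). A real feed would be fetched over
# HTTP from the Pachube API rather than embedded as a string.
river_feed = json.loads("""{
  "title": "Amphibious Architecture - East River (illustrative)",
  "datastreams": [
    {"id": "dissolved_oxygen", "current_value": "6.2"},
    {"id": "water_temp_c", "current_value": "17.5"}
  ]
}""")

def current_values(feed):
    """Map each datastream id to its numeric current value."""
    return {ds["id"]: float(ds["current_value"]) for ds in feed["datastreams"]}

values = current_values(river_feed)
print(values)  # {'dissolved_oxygen': 6.2, 'water_temp_c': 17.5}
```

Once every environment speaks a common feed format like this, combining the river sensors with, say, a Natural Fuse plant network is just a matter of merging dictionaries &#8211; which is exactly the mashup opportunity described above.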
<p>In the picture above <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko</a> and <a id="r_oi" title="Jonathan Laventhol, Imagination" href="http://www.laventhol.com/about" target="_blank">Jonathan Laventhol</a> give the <a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">Amphibious Architecture</a> sensor array a last look over before it is lowered into the East River. Jonathan was on a busman&#8217;s holiday, helping out at the pre-launch of Amphibious Architecture near the Manhattan Bridge, NYC.</p>
<p>I was very happy to get a chance to talk to <a id="r_oi" title="Jonathan Laventhol, Imagination" href="http://www.laventhol.com/about" target="_blank">Jonathan Laventhol</a> &#8211; more on our conversation in another post. Jonathan Laventhol is <a id="r_oi" title="Jonathan Laventhol, Imagination" href="http://www.laventhol.com/about" target="_blank">CTO of Imagination &#8211; one of the world&#8217;s leading design, events, and branding agencies.</a> We talked about the importance of <a id="r_oi" title="Jonathan Laventhol, Imagination" href="http://www.laventhol.com/about" target="_blank">Pachube</a>, which Jonathan called &#8220;The Facebook of Data,&#8221; and how the <strong>symbiosis between brands and augmented reality</strong>, along with healthcare applications, would be key to augmented reality emerging into the mainstream.</p>
<p><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_340djvd2thc_b.jpg"><img class="alignnone size-medium wp-image-4453" title="dhj5mk2g_340djvd2thc_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_340djvd2thc_b-235x300.jpg" alt="dhj5mk2g_340djvd2thc_b" width="235" height="300" /></a></em></p>
<p>Natalie Jeremijenko&#8217;s workshop at Conflux, on the social negotiation of technology and how <a href="http://speedbird.wordpress.com/my-book-everyware-the-dawning-age-of-ubiquitous-computing/" target="_blank">&#8220;everyware&#8221;</a> can give us the chance to experience new forms of agency and connection, was totally inspiring. I will cover this too in another post. I have so much awesome stuff to write about at the moment!</p>
<p>None of the projects in, &#8220;Toward the Sentient City,&#8221; included a mobile augmented reality, or &#8220;magic lens&#8221; component, but they all pointed to why &#8220;enchanted windows into our newly inside-out reality&#8221; are going to be so important. And why the DNA base pair of ubicomp and augmented reality can really do stuff that matters.</p>
<h3>Shangri-La &#8211; &#8220;Transfigured City&#8221;</h3>
<p><a href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/"><img class="alignnone size-medium wp-image-4452" title="dhj5mk2g_342g43n6w7k_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_342g43n6w7k_b-300x249.png" alt="dhj5mk2g_342g43n6w7k_b" width="300" height="249" /></a></p>
<p>Screenshot from the <a href="http://en.wikipedia.org/wiki/Shangri-La_%28novel%29" target="_blank">Shangri-La</a> episode <a id="cwnc" title="The Transfigured City," href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/" target="_blank">Transfigured City</a></p>
<p>In my AR Consortium founder member interview series, I have found that, understandably, the visionary founders of these first augmented reality companies are a little reticent about sharing their full vision. They are basically in stealth mode in this regard. So, since my interview with <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a> founder and CEO, Bruno Uzzan, will not give you a fully drawn scenario of his vision for a next generation of shared augmented reality experiences, here&#8217;s a really interesting episode from the anime Shangri-La, called <a id="cwnc" title="The Transfigured City," href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/" target="_blank">Transfigured City</a>, to mull over instead.</p>
<p>As you can tell from this rather long and circuitous intro to my conversation with Bruno Uzzan, I have been investigating shared augmented realities pretty intensively recently. Mike Kuniavsky pointed me to <em><a href="http://en.wikipedia.org/wiki/Shangri-La_%28novel%29" target="_blank">Shangri-La</a></em> and <a id="cwnc" title="The Transfigured City," href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/" target="_blank">Transfigured City</a> in a conversation with Mark Shepard, after Mark&#8217;s presentation at Conflux, <a href="http://confluxfestival.org/2009/events/workshops/mark-shepard/" target="_blank">Sentient City Survival Kit</a>.</p>
<p><a href="http://thingm.com/about-us/team/mike-kuniavsky.html">Mike Kuniavsky</a>, with <a href="http://thingm.com/about-us/team/tod-e-kurt.html">Tod E. Kurt</a>, is a founder of <a href="http://thingm.com/home.html" target="_blank">ThingM</a>, a ubiquitous computing device studio. Mike also researches, designs, and writes about people&#8217;s experiences at the intersection of technology and everyday life &#8211; see Mike&#8217;s blog, <a href="http://www.orangecone.com/" target="_blank">Orange Cone</a>. I interviewed Mike at Etech &#8211; see <a href="../../2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/" target="_blank">here</a>.</p>
<p>In <a id="cwnc" title="The Transfigured City," href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/" target="_blank">Transfigured City</a>, the &#8220;Metal Age&#8221; group has to figure out how to share and communicate in a city transfigured by augmented realities/virtualities, where no one sees the same place in the same way. Only one character can figure out, from her previous experience of the city, the relationship between the transfigured city and how it used to be.</p>
<p>The conversation I had with <a href="http://www.orangecone.com/" target="_blank">Mike Kuniavsky</a> on <a id="cwnc" title="The Transfigured City," href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/" target="_blank">The Transfigured City</a> continued at a picnic in Washington Square Park the next day with Elizabeth Goodman, whom I met at Etech when she gave a brilliant presentation, <a id="eag1" title="Designing for Urban Green Space" href="http://en.oreilly.com/et2009/public/schedule/detail/5562" target="_blank">Designing for Urban Green Space</a>. We covered so many areas at the picnic related to ubiquitous computing and augmented realities that this conversation probably deserves a post of its own (my writing to-do list is growing longer!).</p>
<p><a id="on28" title="The Plot Synopsis for Shangri La" href="http://en.wikipedia.org/wiki/Shangri-La_%28novel%29" target="_blank">The Plot Synopsis for Shangri La</a>:</p>
<p><strong>&#8220;In the mid-21st century, the international committee decided to forcefully reduce CO2 emission levels to mitigate the global warming crisis. As a result, the economic market was transferred mainly into the trade of carbon. A great earthquake destroys much of Japan, yet the carbon tax placed on the country is not lifted, so Tokyo is turned into the world&#8217;s largest &#8220;jungle-polis&#8221; that absorbs carbon dioxide. Project Atlas is commenced to plan the rebuilding of Tokyo and oversee the government organization, which the Metal Age group opposes due to its oppressive nature. However, Atlas is only built with enough room for 3,500,000 people and most people are not allowed to migrate into the city. The disparity between the elite within Atlas and the refugees living in the jungles outside of its walls sets up the background of the story.&#8221;</strong></p>
<p><a name="jumpto"><span style="font-size: medium;"><strong> Talking With Bruno Uzzan</strong></span></a></p>
<p><span style="font-size: medium;"><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/BrunoUzzanpost.jpg"><img class="alignnone size-medium wp-image-4494" title="BrunoUzzanpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/BrunoUzzanpost-225x300.jpg" alt="BrunoUzzanpost" width="225" height="300" /></a></strong></span></p>
<p><strong>Tish Shute:</strong> We won&#8217;t have fully opened the Pandora&#8217;s Box of Augmented Realities until we have ubiquitous, shared augmented realities, will we?</p>
<p><span id="p-xo" title="Click to view full content"><strong>Bruno Uzzan: Yes. The most important thing for augmented reality is the experience we want to share. Now we are working on the cell phone, we can potentially do some marketing components that we have already developed on the cell phone. Done. It&#8217;s working.</strong></span></p>
<p><strong>But the most interesting part of it is how these new components [cell phone AR] will be used for marketing campaigns by brands. And we are also pretty much well positioned to transform some of the AR that we currently have working on Mac and PC and to transform these to applications working on mobile devices. </strong></p>
<p><strong>Tish Shute:</strong> We haven&#8217;t really experienced yet what it means to actually share mobile AR experiences?</p>
<p><strong>Bruno Uzzan: It&#8217;s hard &#8211; we did a Facebook app. It&#8217;s a first try, it has a way to go. But</strong> <span id="c8ek" title="Click to view full content"><strong>to go more and more into social is the way forward for us &#8211; to share and expand AR experiences. But yes, I mean what you&#8217;re seeing is how two people on two different applications can share that same experience. For sure we are going in that direction. We are currently working on those kinds of solutions &#8211; how people can share and experience together at the same time. That&#8217;s how we start creating excitement in augmented reality, and it&#8217;s coming up.</strong></span></p>
<p><strong>It&#8217;s a new market and there&#8217;s so much more in store for augmented reality. You know, some people are telling me, don&#8217;t you believe that augmented reality is a gimmick? It will be a trend for a few weeks or a few months and then gone? I say, you&#8217;re kidding me. This is only the beginning. I mean I can assure you that the applications that are on the market today are one percent of what we will have five years from now.</strong></p>
<p><strong>Tish Shute: </strong>I agree.</p>
<p><strong>Bruno Uzzan: And I&#8217;m sure that augmented reality will be a part of a lot of components that we are currently using today &#8211; GPS, web browser, glasses. I mean there are so many applications that will come up shortly. This is only the beginning. I&#8217;m completely convinced that augmented reality will be in three years from now what virtual reality is today, which is a billion dollar market. I know that it&#8217;s not just a gimmick of a few weeks or a few months, because so many brands are jumping into it, spending money, exploring solutions. I know that it&#8217;s not just short term, what they are willing to do and we are willing to do, but also middle and long term. And that&#8217;s what makes this adventure pretty much unique, and what makes creating a cutting edge technology very, very exciting for us.</strong></p>
<p><span id="pb9s" title="Click to view full content"><strong>Tish Shute:</strong> First, could you explain more to me about your partnership with Int13? I am not sure I understand what is in the arrangement from Total Immersion&#8217;s POV. I mean, what happens regarding your own mobile software development? Haven&#8217;t you only been licensed the Int13 SDK for a limited period of time, with limited access to all its power? </span><span id="p_2y" title="Click to view full content"><a href="http://gamesalfresco.com/2009/09/15/why-int13-got-in-bed-with-total-immersion/" target="_blank">Stephane from Int13 said to Ori on Games Alfresco, here,</a> &#8220;we have licensed the SDK4 for two years,&#8221; and then Ori asks, &#8220;but you have basically kept the power to yourselves, right?&#8221; So if they are the only ones that can enhance it and develop the software, where will TI be in two years in mobile, if you haven&#8217;t really had the chance to develop your own software?</span></p>
<p><span id="j5co" title="Click to view full content"><strong>Bruno Uzzan: Actually it&#8217;s a real win-win situation. Int13 is a very small company, and they have so many requests they can&#8217;t possibly fulfill them all. So this is a way for both of us to be, as quickly as possible, the first mobile provider for all the requests we have. Also, they give us exclusivity, so nobody else can use the Int13 SDK for such applications. I think that it is a good partnership.</strong></span></p>
<p><strong>And concerning our own mobile application&#8230; First of all, we currently have some mobile applications working. But with Int13 we have a mobile solution that can work on many different devices. That&#8217;s a fact, and that&#8217;s working. And believe me, you will hear a lot more from us about this soon. We are fully independent in our mobile development. The reason we closed the partnership with Int13 is to be able to deploy mobile in a broad way.</strong></p>
<p><strong>I mean, you know that the difficulty with mobile AR is that each separate device needs some customization. Working on the iPhone is different from working on the Nokia, different from working on the Palm, different from working on the Samsung. Each of them has its own operating system inside, and so we were interested in Int13&#8217;s very clever embedded solution that allows our solutions to work across many platforms.</strong></p>
<p><strong>The reason we are working with Int13 is that we are able to work on so many mobile devices, thanks to Int13. And in the mobile AR race that we are currently in, the next two years will be extremely important to us&#8230;</strong></p>
<p><span id="z_5s" title="Click to view full content"><strong>Tish Shute:</strong> OK, that definitely clarifies it a lot. So Int13 has done an embedded solution to allow TI-developed AR solutions to work easily across many devices?</span></p>
<p><span id="y.wt" title="Click to view full content"><strong>Bruno Uzzan: Yes, they have a kind of embedded solution, a way to address new cell phones extremely quickly&#8230; But currently, on our side, we are in discussions with a mobile company&#8230; and that only refers to some very specific mobile devices. And what they have is also a way to embed our technology deeper into mobile, so that we can more quickly have&#8230; applications that work on a large number of cell phones.</strong></span></p>
<p><strong>Tish Shute:</strong> So, basically it means you don&#8217;t have to go through some complicated negotiations with each of the cell phone companies, is what you are saying?</p>
<p><strong>Bruno Uzzan: Not only negotiations, but also hard development. You know? Working on Windows Mobile is completely different from working on Palm OS. You know, that&#8217;s different! It&#8217;s a lot of work to have a mobile application working on many other devices. So Int13 provides a way for us to save some time and some development cost too.</strong></p>
<p><strong>Tish Shute:</strong> And Int13 doesn&#8217;t have powerful AR development tools like <a href="http://www.t-immersion.com/en,interactive-kiosk,32.html" target="_blank">D&#8217;fusion</a> right?</p>
<p><strong> Bruno Uzzan: Right! That&#8217;s right. That&#8217;s why we say it&#8217;s a true win-win solution. They can benefit from our work too. And we can benefit from their work, in order to deploy quicker and faster mobile solutions. </strong></p>
<p><strong>Tish Shute:</strong> Now, the second thing is&#8230; there is a lot of debate and disagreement about how far mobile augmented reality is from delivering something more than the &#8220;post-it&#8221; approach that has been much publicized in recent months, via all the AR browser apps.</p>
<p>But my understanding from the conversation we had earlier this summer (see below) is that Total Immersion is targeting a much higher level of mobile augmented reality than we&#8217;ve seen to date?</p>
<p><strong>Bruno: Yes, the browser apps we have seen are a kind of augmented reality, but not exactly the way we see it. Let me explain why. With this kind of application it&#8217;s true that you can overlay 3-D information and video. That&#8217;s a fact. So, in a sense, that&#8217;s augmented reality. But the way they are working out the position of the 3-D on that video is that they are using compass and GPS information&#8230; so it means that this AR solution will work only on buildings and physical objects that are FIXED, in a fixed and known position.</strong></p>
<p><strong>So you want to go to a theater?</strong></p>
<p><span id="a9qv" title="Click to view full content"><strong>The theater is here; for sure it will not move. So you know the position of the theater, and it&#8217;s a fact that you can superimpose an object on the theater. That&#8217;s what can be done currently. What we are achieving and what we are doing on mobile is more than that. We want to be able to port our solution, with trading cards, with brands, onto a smartphone.</strong></span></p>
<p><strong>I&#8217;m assuming that you want a can, a drink can, to be able to trigger an experience. The only way you can do it is to be able to understand what the can is. And the current solutions that are out there can&#8217;t do that; it&#8217;s impossible.</strong></p>
<p><strong>Tish Shute:</strong> Right, yes. There&#8217;s no near-field object recognition at all in these early browser apps.</p>
<p><strong>Bruno Uzzan: And the solution we have is that we can recognize a can in a very, very precise way, and that activates geo-location, so we can superimpose 3-D. I mean, in that case, it opens up all the applications that we currently have, so they could work on mobile.</strong></p>
<p><strong>Tish Shute:</strong> So for example, if you&#8217;re working with a soft drink company, people can trigger that experience wherever they see that can?</p>
<p><strong>Bruno Uzzan: Correct. </strong></p>
<p><strong>Tish Shute:</strong> Yes. Yes, I assumed that was what you&#8217;re doing.</p>
<p><strong>Bruno Uzzan: We believe &#8212; and maybe that&#8217;s not the case, but we believe that our marker-less tracking technology is pretty much unique on mobile devices.</strong></p>
<p><strong>I haven&#8217;t yet seen, from anyone, a full augmented reality mobile solution working.</strong></p>
<p><span id="rzqr" title="Click to view full content"><strong>I really see AR being part of the next-generation Web 3.0. I mean, the vision I have is that today, when you want information, you go to a website and then you find your information. With AR, the future is the opposite: you want information about a product, you just show it to your computer, and the information will automatically pop up. I see here a new way to market some key messages, a new way to get information: some physical products by themselves could be a way to get information, and you don&#8217;t have to search for it anymore; it comes out to you.</strong></span></p>
<p><strong>AR is definitely, for me, one of these components. It&#8217;s one thing that AR is a solution; it&#8217;s another thing that AR itself will create these kinds of results in how information is being displayed. But I&#8217;m seeing here something that could be part of a new way to have access to information. And that&#8217;s part of the vision I have. Whether it is through a mobile phone or the web, or PC, Mac, whatever, I really believe that this new generation of receiving information will come shortly and could be part of the new 3.0 generation of the web.</strong></p>
<p><strong>Tish Shute:</strong> My friend <a id="evae" title="Gene Becker" href="http://www.genebecker.com/" target="_blank">Gene Becker</a> did <a href="http://www.genebecker.com/2009/09/thinking-about-design-strategies-for-magic-lens-ar/" target="_blank">an interesting post recently on some of the current limitations of mobile AR</a>, where he pointed out the problem of:</p>
<p><em><strong>&#8220;Simplistic, non-standard data formats</strong> &#8211; POIs, the geo-annotated data that many of these apps display, are mostly very simple one-dimensional points of lat/long coordinates, plus a few bytes of metadata. Despite their simplicity there has been no real standardization of POI formats; so far, data providers and AR app developers are only giving lip service to open interoperability. Furthermore, they are not looking ahead to future capabilities that will require more sophisticated data representations. At the same time, there is a large community of GIS, mapping and Geoweb experts who have defined open formats such as <a href="http://georss.org/" target="_blank">GeoRSS</a>, <a href="http://geojson.org/" target="_blank">GeoJSON</a> and <a href="http://code.google.com/apis/kml/documentation/" target="_blank">KML</a> that may be suitable for mobile AR use and standardization.&#8221;</p>
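<p>To make Becker&#8217;s point concrete, the kind of POI these apps exchange can be expressed as a minimal GeoJSON Feature: a point plus a few bytes of metadata. This is just an illustrative sketch; the property names (<code>name</code>, <code>description</code>) are arbitrary, which is exactly the interoperability gap he describes.</p>

```python
import json

# A minimal GeoJSON Feature for a single POI: lat/long plus a
# few bytes of metadata. The geometry structure is defined by
# the GeoJSON format; the property names are illustrative only.
poi = {
    "type": "Feature",
    "geometry": {
        # GeoJSON orders coordinates as [longitude, latitude]
        "type": "Point",
        "coordinates": [-73.9857, 40.7484],
    },
    "properties": {
        "name": "Empire State Building",
        "description": "Example POI annotation",
    },
}

print(json.dumps(poi, indent=2))
```

<p>Everything an AR browser of this era actually renders fits in that handful of fields, which is why richer, standardized representations were the open question.</p>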
<p></em><span id="v68s" title="Click to view full content"><strong>Bruno Uzzan: That&#8217;s interesting. I mean, I know exactly what he is referring to. He is mainly referring to localization, and how you can have quick, accurate localization. If you look at current solutions, and you look at the 3-D superimposed on the video, the 3-D is shaking a lot. I don&#8217;t know if you have seen that in some of these early efforts.</strong></span></p>
<p><strong>It&#8217;s hard to use, because the 3-D, you know, is part of the magic of augmented reality; the magic is when the 3-D is inserted in a very easy and smooth way into your solution. Here, when you see this 2-D or 3-D overlaid on the video, it&#8217;s shaking a lot. One reason for this is that the GPS and compass are not accurate enough to pin down the exact location of the user. And here, what Gene says is interesting. I think we are addressing this localization issue in a pretty smart way.</strong></p>
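<p>The shaking Uzzan describes comes from noisy GPS and compass readings jittering the overlay frame to frame. One generic mitigation (not necessarily Total Immersion&#8217;s approach, just a common low-pass-filter sketch) is to exponentially smooth the sensor stream before positioning the 3-D:</p>

```python
def smooth(readings, alpha=0.2):
    """Exponentially smooth a stream of noisy compass headings
    (in degrees). Lower alpha means a steadier but laggier
    overlay. Note: a production filter would also handle the
    359-to-0 wrap-around; this sketch assumes it never occurs."""
    out = []
    est = readings[0]
    for r in readings:
        # Blend the new reading with the running estimate.
        est = alpha * r + (1 - alpha) * est
        out.append(est)
    return out

# Noisy readings jittering around 90 degrees come out steadier.
print(smooth([90, 94, 87, 92, 88, 91]))
```

<p>The trade-off is latency: the smoother the overlay, the more it lags behind a fast camera pan, which is part of why marker-less visual tracking (rather than GPS/compass alone) was the harder, more valuable problem.</p>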
<p><strong>But to be frank with you, I don&#8217;t believe mobile augmented reality in the extremely short term &#8212; I&#8217;m talking about three weeks, one, two months &#8212; is mature enough for good AR applications. It will be shortly. But for now it is more proof of concept than a true and easy-to-use application.</strong></p>
<p><strong>We are starting to see a lot of new applications coming out, but I really believe that marketing and entertainment are the two key markets for AR right now.</strong></p>
<p><strong>I&#8217;ve been working ten years in augmented reality. And eight years ago, when I was talking about augmented reality, I was E.T., you know? Nobody understood what I said, and they thought it was crazy. And now, today, yes, it&#8217;s completely different.</strong></p>
<p><strong>Tish Shute:</strong> The Pandora&#8217;s Box of Augmented Realities, in my view, is an open, universal, standard, distributed, multiuser augmented reality framework fully integrated with the internet and the world wide web. I have been looking into Google Wave protocols as a basis for this. Would you be interested in this? Do you think it is feasible?</p>
<p><span id="vz68" title="Click to view full content"><strong>Bruno Uzzan: I think this is feasible. I think that&#8217;s doable; that&#8217;s just my opinion. I mean, some people might have another kind of opinion, but I think that that&#8217;s definitely doable.</strong></span></p>
<p><strong>Tish Shute:</strong> Yes, I suppose an open AR framework involves cooperation and collaboration; it is more about business and politics than about technological problems.</p>
<p><strong>Bruno Uzzan: Yes! Actually, the Web is politics. Business is politics.</strong></p>
<p><span id="yeg4" title="Click to view full content"><strong>Tish Shute: </strong>I would be interested if anyone on your R&amp;D team would like to look at some of the ideas that are emerging in our little discussion of Google Wave and an open AR framework, and offer feedback. It is an interesting time to give input on the Wave Federation Protocol docs, because nothing is set in stone right now.</span></p>
<p><span id="hzrf" title="Click to view full content"><strong>Bruno Uzzan: Just shoot me an email; I&#8217;ll try to put you in touch with the right person, a team member who can give input on this.</strong></span></p>
<p><span id="hbcd" title="Click to view full content"><strong>Tish Shute: </strong>For mobile augmented reality, the best thing we&#8217;ve got now is the phone, right?</span></p>
<p><strong>Bruno Uzzan: Right. </strong></p>
<p><strong>Tish Shute:</strong> And the only way we can use the phone is by holding it up, right? Isn&#8217;t this a bit of an obstacle as you introduce better object recognition and tracking? People are going to have to stop moving to use their phone. What do you feel about that experience? Isn&#8217;t AR eyewear an essential part of a tightly registered AR experience?</p>
<p><strong>Bruno Uzzan:</strong> We don&#8217;t do hardware, and we don&#8217;t have a current solution for eyewear that would do all we need for a good mobile AR experience, so I guess we don&#8217;t have the answer for that yet. But we are beginning to see the next generation of these glasses.</p>
<p><strong>Tish Shute:</strong> But you&#8217;re happy enough with the mobile experience of augmented reality on smartphones that you&#8217;re investing in this next generation of software for it?</p>
<p><strong>Bruno Uzzan: Yes, I know. We know that some applications will not work on the iPhone. And yes, whatever you do, you still need to hold the iPhone, so it means that you can&#8217;t play with your hands anymore. So we know that, partially, some AR solutions we have on other platforms will lose their magical effect on the iPhone.</strong></p>
<p><strong>But I&#8217;m starting to see on the market some glasses that could perhaps be not too expensive &#8212; that&#8217;s a challenge! And easy to use &#8212; that&#8217;s another big challenge. And that could fit on anybody&#8217;s face and head &#8212; there&#8217;s another big challenge. So yes, I&#8217;m starting to see that, but so far AR glasses are only applicable for some very, very specific applications, like design or theme parks, or some specific location where it makes sense to move forward with glasses.</strong></p>
<p><strong>I don&#8217;t believe that kids will use glasses in our toys and for games in the next few months, or maybe the next one or two years. But maybe something will come out shortly that could be a big breakthrough and enable us to think another way. But from what we have seen so far, and from what we know of this hardware market, I don&#8217;t believe that there is currently a workable solution.</strong></p>
<p><span style="font-size: small;"><strong>Note: The following section of the interview took place earlier in the summer.</strong></span></p>
<p><strong>Tish Shute:</strong> You are the first commercial AR company &#8211; you started in 1999, right?</p>
<p><span id="yvdi" title="Click to view full content"><strong>Bruno Uzzan: Yes, you are right. We started extremely early in this augmented reality market. We were the first company worldwide to start doing augmented reality and to start promoting it. So it&#8217;s true, we are pretty old players, although the market has been getting bigger and bigger for the last year and a half. So for a long time we were the only ones in the market, and the market was not really there.</strong></span></p>
<p><strong>But for the past 8 months, the company has been growing really fast.</strong></p>
<p><strong>Tish Shute:</strong> Yes, I&#8217;m sure. Congratulations on hanging in there long enough to get the payoff!</p>
<p><strong>Bruno Uzzan: You know, my background is financial. So I have been driving the company for many years in a very cash-efficient way. We have been waiting for the market to reach maturity before starting to make investments. That&#8217;s the reason we are still here, and that&#8217;s the reason I think we managed pretty smartly the cash that we raised for the company.</strong></p>
<p><strong>Tish Shute:</strong> Yes, there is a saying that when a market takes off you can tell the pioneers because they are the ones with the arrows in their backs. But I am glad you are dodging the arrows!</p>
<p><strong>Bruno Uzzan: You know, I&#8217;ve always driven the company with revenue. And because revenue was not there at the beginning I was extremely cautious about the cash. So now that the company is getting some revenue, for sure we are making more and more investments, and taking advantage of our situation as a worldwide leader of augmented reality.</strong></p>
<p><strong>This situation is not as easy as it appears today, but it&#8217;s now getting better. As you can see, AR, augmented reality, has very good momentum, and we are benefiting a lot from all this momentum right now.</strong></p>
<p><strong>Tish Shute:</strong> You&#8217;ve been very involved in researching and developing augmented reality tools. Are you still as active in the research area, or are you too busy keeping up with work for hire now to be working on research and building new technology for augmented reality?</p>
<p><strong>Bruno Uzzan: Both. First of all, we are part of a lot of projects, either directly with clients like Mattel or with partners that are using our technology to promote and develop other AR projects. From what we have seen, many, many augmented reality projects are currently being done with our solutions.</strong></p>
<p><strong>To continue with your previous question: we are being perceived as the leader in that space, and we have some pretty heavy demand for our services. But we are coming up with new technology, of course, still connected to augmented reality. Our R&amp;D is working in two different directions, which of course are also bound together.</strong></p>
<p><strong>The first one is platform development. We want augmented reality to work with as many platforms as possible &#8211; PC, Mac, mobile, game consoles; all those are the platforms that we are targeting. We are currently doing a lot of work in the R&amp;D team on cross-platform compatibility.</strong></p>
<p><strong>Tish Shute:</strong> Robert Rice said recently, &#8220;markers and webcams equal Photoshop page curls&#8230;&#8221;</p>
<p><span id="dulu" title="Click to view full content"><strong>Bruno Uzzan: Yes. There are so many concerns with markers. The quality is extremely bad. As soon as you hide a part of the marker, even a slight part of the marker, you&#8217;re dead: you can&#8217;t track the object anymore. Compare that to our solution, where, say, you play with cards or with a Mattel toy: even if you hide a part of the toy, it&#8217;s still working.</strong></span></p>
<p><strong>Tish Shute:</strong> But you haven&#8217;t offered the public an SDK for your engine, right? Basically, the way people get access to your tools is by working in a partnership with Total Immersion, right?</p>
<p><strong>Bruno Uzzan: Correct. </strong></p>
<p><strong>Tish Shute:</strong> Do you think in the future you might open your SDK? Are you considering that?</p>
<p><strong>Bruno Uzzan: Yes, it would be interesting.</strong></p>
<p><strong>Tish Shute:</strong> So that is something we can see coming soon?</p>
<p><span id="short_transcription0" title="Click to view full content"><strong>Bruno Uzzan: Maybe, because it&#8217;s true that Total Immersion is starting to be mature enough for these kinds of tools. The only thing is that we have to respect good timing for that. It&#8217;s a big decision. You know what I mean? It is a big, big decision. We would then compete with others using our technology.</strong></span></p>
<p><strong>Tish Shute:</strong> Oh, I know, it is a big decision when you have so much skin in the game! But it would be nice to have your SDK be THE platform for AR, wouldn&#8217;t it?</p>
<p><strong>Bruno Uzzan: It is a really big decision that we can&#8217;t just take like that, you know. A lot of friends have told me you have to be extremely careful about timing. This timing is pretty much connected to the maturity of the market. For sure, we see the market becoming more and more mature. But there is a lot of low-hanging fruit we still want to address, to get the best value possible from all the publicity we have and all the clients we have now.</strong></p>
<p><strong>Tish Shute:</strong> Yes, I know. You&#8217;ve been in this game so long. Now, there is an interesting question here about tools and platforms, because, you know, AR, augmented reality, has already expanded beyond its kind of original purist definition. And when I talk to people about augmented reality, there are actually a lot of different ideas and priorities about where the tools should go right now. Obviously we have these kinds of browser-like applications, but these browser-like applications are not dealing with recognizing near-field objects yet. What are your priorities for tool development, and what are your priorities for AR development in the future? What areas are you going to focus on? Oh dear, that is a rambling question!</p>
<p><strong>Bruno Uzzan: [laughter] So, one of our first priorities is that we need to create our software with one development, one installer, one piece of software that can be spread across different platforms. The same application, the same software, can be used either on a PC, Mac, phone or console. That&#8217;s a lot of work, because it means that our platform has to address many, many different devices, and that&#8217;s a big priority for us because we received this request from our clients. We want to be able to use one application on many different platforms and devices. So, that&#8217;s the first one.</strong></p>
<p><strong><span id="hk3z" title="Click to view full content">And the second one is to add more and more interactivity between the real and the virtual world. So we are working on some improvements to add real components that will interact with the virtual, and that is also part of our big strategy and direction: these two worlds can more and more be bridged together, linked together, so they can interact with each other.</span></strong></p>
<p><strong>Our R&amp;D guys are working on the real world interacting more with the virtual world. And I have started seeing some results which are pretty much crazy, and this will be ready for next year.</strong></p>
<p><span id="b1qt" title="Click to view full content"><strong>There are so many different directions for interaction between the real world and the virtual world to develop. I&#8217;m sure ten years from now you&#8217;re going to have AR applications everywhere. It&#8217;s not just temporary fashion stuff or a gimmick for a few months. I mean, we are getting there; it&#8217;s getting stronger and stronger, and we are getting a good adoption rate from consumers. They like it, they test it, they play with it, and brands want more, people want more, and it&#8217;s getting bigger and bigger.</strong></span></p>
<p><strong>Tish Shute:</strong> Yeah, and I totally agree; it&#8217;s not a gimmick, because the interaction between &#8220;virtual&#8221; and &#8220;real&#8221; enhances the magic of both. Another question, about your R&amp;D operation: is your R&amp;D still in France, or have you moved totally out to LA?</p>
<p><strong>Bruno Uzzan: We are 50 people in France; I started this LA office two years ago and moved permanently to LA. So I&#8217;m now permanently located in the US to take care of the US office, knowing that revenues are really getting bigger and bigger in the US. It means that we are getting a lot of traction, working with large companies, and now I&#8217;m currently located in the US.</strong></p>
<p><strong>Tish Shute:</strong> My sister lives in Paris. Could I visit your R&amp;D lab at some point? I&#8217;d love to visit!</p>
<p><span id="bt1e" title="Click to view full content"><strong>Bruno Uzzan: Yeah, sure, sure, sure. I mean, if you want to go &#8211; you won&#8217;t have access to all the research, but if you want to go and meet the team, please do.</strong></span></p>
<p><strong>Tish Shute:</strong> I&#8217;d love to.</p>
<p><strong>Bruno Uzzan: No problem. Shoot me an email and I will introduce you to Eric Gehl; he is the COO of the French team. And he can definitely take care of that.</strong></p>
<p><strong>Tish Shute:</strong> That would be fun. Thank you!</p>
<p>Recently, AR browser applications have really caught the imagination of the web community, e.g., Layar and Wikitude. Where do you think the most important market for AR is at the moment<span id="k6fx" title="Click to view full content"> &#8211; entertainment, green tech, business, education?</span></p>
<p><strong>Bruno Uzzan: I think that all you mention will be important. The first one that grabbed my attention is entertainment, particularly digital marketing, because they are always searching for new ways to interact with players or consumers. But it&#8217;s just the tip of the iceberg, you know; I mean, medical applications using augmented reality could be huge. Education, and edutainment, is definitely using more and more augmented reality components. And we are talking with big companies that are considering using augmentation for education. Museums are very important too. Also augmentation as a kind of free sales tool; you know, there are so many applications &#8211; design, architecture &#8211; so many directions that it&#8217;s hard to say today which one will take the lead.</strong></p>
<p><strong>But I do believe that in the short term the ones that are really, really moving fast are the entertainment business and the digital marketing business.</strong></p>
<p><strong>Tish Shute:</strong> What do you think are the biggest shortcomings with current augmented reality and what are the obstacles that no one has solved yet?</p>
<p><strong>Bruno Uzzan: I think the cell phone is not fully ready for augmented reality. A lot of people are working on that, but there are still a lot of constraints to getting augmented reality working on a cell phone, and from what I have heard, a lot of manufacturers and a lot of companies are working in directions that are going to help us a lot to develop some great cell phone applications.</strong></p>
<p><strong>And I think that&#8217;s one of the biggest parts of the game. All the applications that you see on cell phones so far are just gimmicks. The next big key is how to transform a gimmicky cell phone application into a real, industrial, robust application that&#8217;s going to work on a cell phone. So I think that&#8217;s a big challenge for this year.</strong></p>
<p><strong>Most of what we see now is just matching and overlaying some 2-D components on a video. This is not what I call AR. With this kind of application, you are far away from doing the registration that we need to do; you can&#8217;t do it. So here&#8217;s the challenge: how can you get a Topps application working on a cell phone? That&#8217;s the big challenge: how we can make that work! You can&#8217;t today get a real AR Topps application working on a cell phone, because there&#8217;s no cell phone that&#8217;s actually ready. But we are working on it, and for the first one that can make that work, it&#8217;s going to be huge.</strong></p>
<p><span id="b9-2" title="Click to view full content"><strong>When you are working with good AR components you need a lot of CPU and GPU power. Today, new cell phones have started to be more and more ready for augmented reality, but you need a really good cell phone to make it work. You can&#8217;t choose an old cell phone, because you have some recognition, some tracking, some rendering; you can&#8217;t choose a two-year-old Nokia cell phone to make that work. For sure the newest iPhone is one that can make it work, but that&#8217;s it for now. There is a lot of research, from large cell phone companies, into getting more CPU and GPU into their cell phones. But so far we are also waiting for these devices to be released to consumers.</strong></span></p>
<p><strong>Tish Shute: </strong>And the current economic climate has put a damper on MIDs, hasn&#8217;t it? But who can tell? It depends what price point a new MID comes out at, right?</p>
<p><strong>Bruno Uzzan: Correct.</strong></p>
<p><strong>Tish Shute:</strong> Yes, I agree. But basically, the interesting thing is that the iPhone can deliver so much of what is necessary, and even if Apple hasn&#8217;t given AR developers access to the full power of the iPhone yet, there is really no going back now &#8211; the mobile augmented reality cat is out of the bag!</p>
<p><strong>Bruno Uzzan: You&#8217;re right, you&#8217;re fully right.</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/feed/</wfw:commentRss>
		<slash:comments>36</slash:comments>
		</item>
		<item>
		<title>Mobile Augmented Reality and Mirror Worlds: Talking with Blair MacIntyre</title>
		<link>http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/</link>
		<comments>http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/#comments</comments>
		<pubDate>Fri, 12 Jun 2009 05:07:01 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[3D mirror world]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Android and augmented reality]]></category>
		<category><![CDATA[ARhrrrr]]></category>
		<category><![CDATA[Art of Defense]]></category>
		<category><![CDATA[augmented reality on the gphone]]></category>
		<category><![CDATA[augmented reality on the iphone]]></category>
		<category><![CDATA[augmented reality shooter games]]></category>
		<category><![CDATA[Aware Home Research]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bragfish]]></category>
		<category><![CDATA[Dark Star]]></category>
		<category><![CDATA[geolocation]]></category>
		<category><![CDATA[geotagging]]></category>
		<category><![CDATA[google earth]]></category>
		<category><![CDATA[handheld AR games]]></category>
		<category><![CDATA[handheld augmented reality]]></category>
		<category><![CDATA[Immersive augmented reality]]></category>
		<category><![CDATA[Information Landscapes]]></category>
		<category><![CDATA[instrumented homes]]></category>
		<category><![CDATA[instrumented world]]></category>
		<category><![CDATA[iphone 3Gs]]></category>
		<category><![CDATA[iphone games]]></category>
		<category><![CDATA[ISMAR]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[location aware applications]]></category>
		<category><![CDATA[minimally immersive augmented reality]]></category>
		<category><![CDATA[MMO of the real world]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[MS Virtual Earth]]></category>
		<category><![CDATA[NVidia Tegra devkits]]></category>
		<category><![CDATA[Open Sim]]></category>
		<category><![CDATA[OpenSim and Augmented Reality]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[outdoor tracking and markerless AR]]></category>
		<category><![CDATA[parallel mirror worlds]]></category>
		<category><![CDATA[persistent immersive mirror worlds]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[Sun's Wonderland]]></category>
		<category><![CDATA[Texas Instrument's OMAP3 devkits]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[Unity3D]]></category>
		<category><![CDATA[Unity3D and Augmented Reality]]></category>
		<category><![CDATA[virtual pets]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3691</guid>
		<description><![CDATA[Blair MacIntyre is one of the original pioneers of augmented reality and an extraordinary amount of creative work is coming out of his Augmented Environments Laboratory at Georgia Tech &#8211; see YouTube videos here. The screenshot below is from ARhrrrr, a very impressive augmented reality shooter game created at Georgia Tech Augmented Environments Lab and [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf.jpg"></a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf2.jpg"><img class="alignnone size-full wp-image-3732" title="arf2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf2.jpg" alt="arf2" width="259" height="239" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/droppedimage1.jpg"><img class="alignnone size-full wp-image-3725" title="droppedimage1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/droppedimage1.jpg" alt="droppedimage1" width="271" height="240" /></a></p>
<p><a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> is one of the original pioneers of augmented reality, and an extraordinary amount of creative work is coming out of his <a href="http://www.cc.gatech.edu/ael/" target="_blank">Augmented Environments Laboratory</a> at Georgia Tech &#8211; see <a href="http://www.youtube.com/user/AELatGT" target="_blank">YouTube videos here</a>. The screenshot below is from <strong>ARhrrrr</strong>, a very impressive augmented reality shooter game created at the Georgia Tech <span class="description">Augmented Environments Lab </span>and the <span class="description">Savannah College of Art and Design </span>(SCAD-Atlanta), and produced on the <strong>NVidia Tegra devkits</strong> &#8211; <a href="http://www.youtube.com/watch?v=cNu4CluFOcw" target="_blank">watch the demo here</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-63.png"><img class="alignnone size-medium wp-image-3799" title="picture-63" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-63-300x169.png" alt="picture-63" width="300" height="169" /></a></p>
<p>Blair has spent much of his career working on immersive augmented reality and more recently the integration of augmented reality with mirror worlds. Blair explains:</p>
<p><strong>&#8220;I am interested in the intersection of mobile devices &#8211; whether they are head mounts or handhelds &#8211; and parallel mirror worlds&#8230; I think that parallel mirror worlds are a direct manifestation of the intersection of the virtual world we now live in (the web) and geotagging. As more and more information is tied to place, and as more of our searching becomes place-based, we will want to do those searches about places we are not at. A 3D mirror world may provide one interface to that data. Want to plan your trip to London? Go there virtually and look around, see what is there (both physically and virtually), teleport between areas you want to learn about, and so on. More interestingly, talk to people who are there now, and retrieve your location-based notes when you are on your trip.&#8221;</strong></p>
<p>But, at a time when many augmented reality developers are focusing on AR apps for smart phones, including Blair (the picture on the left opening this post is from Blair&#8217;s augmented reality <a href="http://www.youtube.com/watch?v=_0bitKDKdg0&amp;feature=channel_page" target="_blank">iPhone app ARf</a>), I was interested in finding out from Blair what the state of play was for the real deal, Rainbow&#8217;s End-style AR, as well as the potential he sees in smart phones to mediate meaningful AR experiences.</p>
<p>There is an enormous amount of innovation in mapping our world &#8211; see my post, <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">&#8220;Location Becomes Oxygen at Where 2.0 and WhereCamp,</a>&#8221; and <a href="http://gamesalfresco.com/2009/05/26/where-2-0-the-world-is-mapped-now-use-it-to-augmented-our-reality/" target="_blank">Ori Inbar&#8217;s Where 2.0 conference roundup</a>. But as Ori notes, to move augmented reality forward:</p>
<p><strong>My point is not a shocker: all we need is to tap into this information and bring it, in context, into people&#8217;s field of view.</strong></p>
<p>And this is what Blair MacIntyre&#8217;s work is all about.</p>
<h3>Talking With Blair MacIntyre</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-62.png"><img class="alignnone size-medium wp-image-3728" title="picture-62" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-62-300x257.png" alt="picture-62" width="300" height="257" /></a></p>
<p><strong>Tish Shute:</strong> There do seem to be broader implications to augmented reality today than when the term was first coined. I am interested in your perspective on how augmented reality may go beyond some of our early definitions.</p>
<p><strong>Blair MacIntyre: I still think the original definition of the term is useful: media (typically graphics) tightly registered (aligned) with the physical world, in real time. Many people talk about many things that relate virtual worlds to places, spaces, objects and people. There is room for many of them, and they don&#8217;t all have to &#8220;be&#8221; augmented reality. I like using Milgram&#8217;s definition of Mixed Reality as everything from the physical world (at one end) to the virtual world (at the other); it&#8217;s a spectrum, and augmented reality just sits at one point.</strong></p>
<p><strong>The reason I like the old definition is I believe there is something special about graphics that are tightly, rigidly aligned with the physical world. When things appear to stick to the world at an obviously identifiable location, people can start leveraging their natural perceptual, physical and social abilities and interact with the mixed world as they do the physical world. We&#8217;ve found this with the two studies we&#8217;ve done of tabletop AR games (<a href="http://www.augmentedenvironments.org/lab/research/handheld-ar/artofdefense/" target="_blank">Art of Defense</a> and <a href="http://www.youtube.com/watch?v=w3iBrj_zfTM&amp;feature=channel_page" target="_blank">Bragfish</a>); one key to those games is that the graphics were tightly aligned with identifiable landmarks in the physical world (the gameboard).</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/aod-sandbox-video-15.png"><img class="alignnone size-medium wp-image-3729" title="aod-sandbox-video-15" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/aod-sandbox-video-15-300x225.png" alt="aod-sandbox-video-15" width="300" height="225" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/imgp0782-2.jpg"><img class="alignnone size-medium wp-image-3782" title="imgp0782-2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/imgp0782-2-300x225.jpg" alt="imgp0782-2" width="300" height="225" /></a></p>
<p><em><a href="http://www.augmentedenvironments.org/lab/research/handheld-ar/artofdefense/" target="_blank">Art of Defense</a> (pic on left) <a href="http://www.youtube.com/watch?v=w3iBrj_zfTM&amp;feature=channel_page" target="_blank">Bragfish</a> (pic on right)<br />
</em></p>
<p><strong>Tish:</strong> I know that you are involved with <a id="b-c6" title="ISMAR 2009" href="http://www.ismar09.org/" target="_blank">ISMAR 2009</a>, which is the key US augmented reality conference. What do you think will be the hot themes, applications, and innovations at this year&#8217;s conference? Do you think this will be the year that AR really breaks out of eye candy into truly useful and sustained experiences?</p>
<p><strong>Blair: Unfortunately, I won&#8217;t be involved this year. I was supposed to be helping run the technical program, as well as the art/media program, but sickness in my family prevented me from having the time, so I am not helping this year.</strong></p>
<p><strong>First, I would not agree with the implication of the last question &#8212; I don&#8217;t think AR has just been eye candy up to now. I do agree that the &#8220;high profile&#8221; uses of it have largely been that, which is mostly because of the limits of the technology. I don&#8217;t think we&#8217;ll see huge changes in that regard by ISMAR this year. However, we will hopefully see a mixing of communities that hasn&#8217;t happened at ISMAR before, and I do believe that this year (independent of ISMAR) we will see more and more AR apps. Whether they go beyond eye candy is still a question. I&#8217;m hoping that some folks (including myself and other ISMAR folks!) will help push AR in new directions. But I also expect many folks new to ISMAR and AR to play a big role, because it is this new blood, especially those folks with real problems to solve, new art and game ideas, and a fresh perspective, that will open new doors.</strong></p>
<p><strong>Tish:</strong> You have been working on integrating augmented reality with virtual worlds. You mentioned that the way you use <a href="https://lg3d-wonderland.dev.java.net/" target="_blank">Sun&#8217;s Wonderland</a> is really about pulling the virtual world into the real world, i.e., Wonderland &#8220;is just a place to put data.&#8221; How is your use of the persistent virtual space different from what we have become accustomed to call virtual worlds?</p>
<p><strong>Blair: The approach we are taking in our project at Georgia Tech is to use the virtual world as the central hub of the information space, and allow the virtual world to be the element that enables distributed workers to collaborate more smoothly. This is work we are doing with Sun and Steelcase (and the NSF), and is an outgrowth of a project (the InSpace project) that&#8217;s been going on for a few years.</strong></p>
<p><strong>What we are trying to do is use mixed reality and ubicomp techniques to pull as much of the physical activity as possible into the virtual world, and then reflect that activity back out to the different participants as best suits their situation. So, folks in highly instrumented team rooms will collaborate in one way, and their activity will be reflected in the virtual world; remote participants (e.g., those at home, or in a cafe or hotel) may control their virtual presence in different ways, but the presence of all participants will be reflected back out to the other sides in analogous ways. We may see ghosts of participants at the interactive displays, or hear their voices in 3D space around us; everyone will hopefully be able to manipulate content on all displays and tell who is making those changes.</strong></p>
<p><strong>A secondary benefit, I hope, is that by putting the data in the virtual world and making that the place that gives you more powerful and flexible access to the data (e.g., by leveraging space and giving access to history), distributed teams will begin to have the virtual space become a place they go to work, bump into each other, and have those casual contacts co-located workers take for granted.</strong></p>
<h3><strong>Creating the Information Landscape of the Future</strong></h3>
<p><strong>Tish: </strong>At the end of <a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">my interview with Ori Inbar</a> he said, in order to have a ubiquitous experience <em>&#8220;you&#8217;ll need to 3D map the world. Google Earth-like apps are going to help but it is not going to be sufficient. So let&#8217;s leverage people. Google became successful in part by making people work with them. Each time you create a link from your blog to my blog their search engines learn from it. So let&#8217;s find ways to make people create information that can be used for AR.&#8221;</em> What ways do you think people can create information that can be used for AR?</p>
<p><strong>Blair: I think the big part of that is the creation of models and environments, the necessary &#8220;baseline&#8221; for specifying experiences. Google and Microsoft are clearly working toward this; recent videos from Microsoft show them starting to move the Photosynth work toward Virtual Earth. Similarly, I came across a page where people are finally starting to mine geotagged Flickr images to create models [see my post, <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">&#8220;Location Becomes Oxygen,&#8221;</a> and <a href="http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/" target="_blank">here</a> for more on the <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a> project from Flickr]. It&#8217;s that kind of thing that will be useful first; using the data we all create to enable modeling and (eventually) vision-based tracking in the real world.</strong></p>
<p><strong>After that, it&#8217;s a matter of time till more of what we &#8220;create&#8221; (e.g., Tweets and blog posts and so on) is geo-referenced; these will become the information landscape of the future, the kinds of things people think about when they read &#8220;Rainbow&#8217;s End&#8221;. The big problem will be filtering, searching and sorting. And, of course, safety and security.</strong></p>
<p><strong>Tish: </strong>You are working with <a href="http://unity3d.com/" target="_blank">Unity3D</a> to research the integration of mobile location-based AR with persistent mirror-world-like spaces. What has attracted you to Unity? What is the difference between this and your Wonderland project? I know you mentioned you will be using head-mounted displays as part of this Unity project. What are your goals for this project?</p>
<p><strong>Blair: We started to use <a href="http://unity3d.com/" target="_blank">Unity3D</a> because it gave us what we wanted in a game engine. Most importantly, it&#8217;s very open and let us trivially expose AR technologies into the editor. Similarly, it can target the iPhone, so we can begin to work with it on that platform, too. The biggest problem with creating compelling experiences is content; and a show stopper for creating content is not getting it into your engine. Unity has a nice content workflow.</strong></p>
<p><strong>Unity3D is a front-end engine, for creating the game; Wonderland is both a front end and a back end. We are actually looking into using the Wonderland backend with Unity as well. Wonderland also has growing support for doing &#8220;real work&#8221; in a virtual world, which is key to our other projects.</strong></p>
<p><strong>Eventually, we&#8217;ll be using HMDs. The goal for the Unity3D project, initially, was to explore what you can do with an AR/VR mirror world; this is a project we are working on with Alcatel-Lucent, and demo&#8217;d at CTIA this year. It&#8217;s continuing to grow, though, and now includes a number of our projects, including some work on mobile social AR and, soon, some performance and experience design projects in the area of AR ARGs. It&#8217;s really quite interesting to imagine what you can do when you have an &#8220;MMO of the real world&#8221; (which we now have for part of campus) that supports both VR-style desktop access simultaneously with mobile AR access.</strong></p>
<p><strong>Tish: </strong>Have you taken another look at <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a> as a possible backend for augmented reality? Recently I talked to David Levine of IBM, and he is thinking about some possibilities to optimize OpenSim to dynamically load a large number of objects at once (i.e., how fast OpenSim can bulk load into an existing sim) and make it better suited to augmented reality/mirror world type projects.</p>
<p><strong>Blair: I haven&#8217;t looked at OpenSim recently. We will probably look at it this summer.</strong></p>
<p><strong>Tish:</strong> Why did you select Unity as a good client for augmented reality?</p>
<p><strong>Blair: Unity is a 3D game authoring environment, so at some level it is no different from using Ogre, if all the associated stuff were just as well done. It has integrated physics, scripting, debugging, etc. &#8211; you can write code in JavaScript or C# or whatever. It has a good content pipeline, as well, and supports a range of platforms.</strong></p>
<p><strong>It has simple networking built in, so multiple Unity engines can talk to each other, but it is not a virtual world platform out of the box &#8211; there is no back end&#8230;</strong></p>
<p><strong>Tish: </strong>Someone described Unity to me as a great client waiting for a great backend. So what are you going to use as a back end?</p>
<p><strong>Blair: There is no real processing except in the client right now. We will eventually have to create a back end. We are thinking of using Darkstar, because someone on the Sun Wonderland community forums has already built a set of scripts connecting Unity to Darkstar.</strong></p>
<p><strong>But for us, we are not proposing right now to build a real product. This is research to demonstrate what you could do if you actually had the back end.</strong></p>
<p><strong>Tish:</strong> What are the most important aspects of the backend from your POV?</p>
<p><strong>Blair: We want to simulate a variety of the interesting aspects of the back end. So I very much care about notions of privacy and security and how these sorts of AR/VR mirror worlds would work in practice. But I care about those things as they impact user experience, not really about how we would actually implement them.</strong></p>
<p><strong>Tish:</strong> So you are looking at some of the big problems from the perspective of user experience? Are we going to go through the same growing pains that the web and virtual worlds have seen &#8211; for example, will we have to type in passwords to get into everyone&#8217;s little worlds&#8230;</p>
<p><strong>Blair: Well, you know the SciFi background to this; you&#8217;ve mentioned it in other posts on your blog. Because when you look at the Rainbow&#8217;s End model where you have security certificates flying around, that is in effect what cookies and so on are now. You can authenticate yourself once and then have those certificates hang around. So you can easily imagine how it could be done. But the big question is how does that change user experience. There are all kinds of things that start coming into play &#8211; like what happens if nearby people see different things &#8211; it goes on and on!</strong></p>
<p><strong>Tish:</strong> Sounds like this is very valuable research. It seems to me that there will be a lot of investment soon in putting the pieces together to do location-based markerless AR, and it would be nice if we knew more about it from the user experience POV.</p>
<p>Isn&#8217;t it vital, for a productive intersection between mobile AR and persistent mirror world spaces, for us to have markerless AR? Aren&#8217;t we right at the beginning of people really saying, yeah, markerless AR is doable now? But it seems to me not many people are researching or working on fully immersive AR and its integration with mirror worlds?</p>
<p><strong>Blair: I think some of the AR community is thinking about this. There are probably people doing stuff in some other, non-technical communities. It wouldn&#8217;t surprise me to find out that there are people in the digital performance or Ars Electronica world who are thinking a little bit about these sorts of things &#8211; although not necessarily at the level of actually trying to build it, because they probably can&#8217;t right now, but experimenting with the precursors. My colleagues in digital media like to point out that this is often the purpose of digital art, to point out new directions and push the boundaries.</strong></p>
<p><strong>Obviously Science Fiction has explored the possibilities because that is what Rainbow&#8217;s End and the Matrix were all about.</strong></p>
<p><strong>Tish:</strong> and <a href="http://en.wikipedia.org/wiki/Denn%C5%8D_Coil" target="_blank">Denno Coil</a>&#8230;</p>
<p><strong>Blair: There has been some research &#8211; people like my adviser Steve Feiner up at Columbia, Mark Billinghurst in New Zealand, myself, and people at Graz University in Austria. But partly it has been so hard to do mobile AR up to now &#8211; so many people mock head-worn displays and can&#8217;t get past current technology &#8211; you have had to be willing to ignore the bulky backpacks and cables and batteries and so on. That is changing, which is good.</strong></p>
<p><strong>My current response to the anti-head-mounted-display people is: if 5 years ago you told me that fabulously dressed people who care about their looks and wear stylish clothes would have big things hanging from their ears that blink bright blue light, so they could talk on the phone, many of us would have said you were crazy, because it would be ugly and so on. But there is an intersection of demonstrable need and benefit &#8211; Bluetooth headsets are really useful &#8211; and the sort of early gestalt feeling that grew up around them &#8211; that people who use them are so important that they always have to be in touch, so they wear these things &#8211; so people accept them.</strong></p>
<p><strong>It will likely be a similar thing with head-mounted displays. And I don&#8217;t know if it will be that people wear them so that they can read their mail while driving, god forbid. But it will be something. And when we get the 2nd generation of the wrap glasses that look more like sunglasses and are not bulky and so on, we will have the potential for them catching on, because you will look at them and you will think that the person is wearing them because they are doing x&#8230;</strong></p>
<p><strong>X might be surfing a virtual world or reading their email or keeping in touch, or being aware. It will happen. But they have to get unbulky enough, and there has to be more than one important application, not just watching TV.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/karmablair-fix.jpg"><img class="alignnone size-medium wp-image-3787" title="karmablair-fix" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/karmablair-fix-300x227.jpg" alt="karmablair-fix" width="300" height="227" /></a><br />
</strong></p>
<p><em>The picture above shows an outside view of the KARMA AR system, the knowledge-based maintenance system Blair built in his first year of grad school (<strong>&#8220;first AR system Steve Feiner, Doree Seligmann, and I worked on&#8221;</strong>). Blair noted, &#8220;<strong>The Communications of the ACM paper on it (from 1993) is a pretty widely cited AR paper.&#8221;</strong></em></p>
<p><strong>Tish:</strong> I think the need for full-on transparent, immersive, wraparound, Gucci-stylish eyewear with a decent field of view is the elephant in the room in terms of realizing the full potential of augmented reality. There are a few new players in the field &#8211; <a href="http://www.sbglabs.com/" target="_blank">Digilens</a>, <a href="http://www.vuzix.com/home/index.html" target="_blank">Vuzix</a>, others? What is the progress in this area, and what do you hope for in terms of near-term solutions?</p>
<p><strong>Blair: I agree with that sentiment. I think that, in the near term, there is a lot we can do with handhelds, as we&#8217;ve been doing in the lab. However, because it&#8217;s awkward and tiring to hold up a device, even a small one, for any length of time, handhelds will only be good for &#8220;focused&#8221; uses of AR &#8211; such as the table-top games we&#8217;ve been doing, or the constellation viewing app that I heard came out recently for the Android G1. I don&#8217;t even see something like Wikitude as that compelling (beyond the &#8220;gee whiz&#8221; factor) for a handheld form factor. Many proposed AR apps only really become compelling when users have constant awareness of them, and that requires a see-through head-worn display.</strong></p>
<p><strong>I&#8217;ve seen the mockups of the Vuzix ones; they seem pretty interesting, and are getting to where early adopters could use them (they will be cheap enough, and will hopefully be good enough). Microvision&#8217;s virtual retinal display is also promising; the contact lens displays will be the most interesting, if anyone can ever make them work. I don&#8217;t know of anything else out there.</strong></p>
<h3><strong>&#8220;it&#8217;s not really a killer app you care about, it is the killer existence that all of the technology and small applications taken together facilitate&#8221;</strong></h3>
<p><strong>Tish:</strong> While location-based services are accepted now, and people understand that location opens up a new relationship to everything, we still haven&#8217;t found the experience that will get everyone holding up their mobile devices?</p>
<p><strong>Blair: Well, that is actually the killer problem. Gregory Abowd is one of my colleagues who does ubiquitous computing research here at Tech. Way back when we started the Aware Home project (<a href="http://www.awarehome.gatech.edu/">Aware Home Research Institute at Georgia Tech</a>), when I first got here about ten years ago, there was always this question of what is the killer app. So Gregory commented in a meeting once that it&#8217;s not really a killer app you care about, it is the killer existence that all of the technology and small applications taken together facilitate. It is not that any one of these AR demos we see, whether it is seeing your photos in the world or whatever, is important. It&#8217;s that, taken together, there is enough of a benefit that you would use the whole environment.</strong></p>
<p><strong>In the original context we were talking about an instrumented home, but it is the same thing here with AR.</strong></p>
<p><strong>The problem with the mobile phone as an AR device is that problem of awareness. If I have a head mount on and I walk down the street and there is a bunch of probably-not-useful-but-potentially-useful information floating by me, that&#8217;s a good thing, because I may see something that is useful or makes me think of something else. But if I have to hold up my phone to see if something might be interesting nearby, I will never hold up my phone, because at any given time there is a high probability that there won&#8217;t be anything particularly important there. You might imagine you can get around this by using alerts or something like that, but then you overload whatever alert channel you use. For example, I forward maybe 5 or 6 people&#8217;s updates from Facebook to my phone &#8211; it started with my wife, a few friends, my brother &#8211; and the net result of that is I never get SMSs anymore, because when my phone buzzes, usually I ignore it; it is probably just somebody&#8217;s random Facebook update. So if we start overloading channels like that with &#8220;oh, there might be something useful here in the real world; if you pick up the phone and look through it you will see it&#8230; and I will buzz you,&#8221; people just start ignoring the buzzes.</strong></p>
<p><strong>So it is a very hard problem if you think about the kinds of applications that people always imagine with global AR &#8212; names over peoples heads and other random information floating in the world &#8212; until you have a head mount and all that information is around you all the time. That is when those sort of applications will actually happen.</strong></p>
<p><strong>Tish:</strong> <a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> notes: <strong>&#8220;AR is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.&#8221;</strong> (see my interview with Robert, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it &#8216;OMG Finally&#8217; for Augmented Reality?&#8221;</a>). And I think the iPhone experience has laid the foundation for the increasing desire to experience the network wherever we are &#8211; and not be stuck behind a PC. We cannot perhaps do all we want to do yet. But even in the range of things we can do now, we are not yet sure exactly what it is we want to do where, are we?</p>
<h3><strong>&#8220;imagine your iphone Facebook client supports AR and that all data on Facebook might be georeferenced &#8211; pictures, status updates etc&#8230;&#8230;.&#8221;</strong></h3>
<p><strong>Blair: Yes, that is a huge problem. I have been lucky to be able to teach two fun classes this year that let the students and me start to explore some of the potential that handheld AR might bring. Last fall I taught a handheld AR game design class &#8212; coordinated with a class at the Savannah College of Art and Design&#8217;s Atlanta campus &#8212; and we had the students build a sequence of prototype handheld AR games, which was a lot of fun. This spring I taught a mixed reality/augmented reality design class with Jay Bolter (a professor in the School of Literature, Communication, and Culture here at GT). Jay and I have been teaching this class off and on for about 9 years; this semester we decided to say to the students &#8220;imagine your iphone Facebook client supports AR and that all data on Facebook might be georeferenced &#8211; pictures, status updates etc&#8230;&#8230;.&#8221; and have them do projects aimed at such an environment.</strong></p>
<p><strong>Tish: </strong>Not many of our favorite social media today have much sense of location, do they? But Flickr is utilizing geo-referenced pictures to create vernacular maps &#8211; The Shape of Alpha&#8230;</p>
<p><strong>Blair: Yes, that is because lots of cameras put geolocation data into the EXIF data, so they can extract it&#8230;</strong></p>
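<p>The extraction Blair mentions boils down to reading the GPS tags that EXIF stores as degree/minute/second values and converting them to signed decimal degrees. A minimal sketch of that conversion (the sample coordinates are purely illustrative):</p>

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS degrees/minutes/seconds to signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # South latitudes and West longitudes are negative
    return -value if ref in ("S", "W") else value

# e.g. GPSLatitude (33, 46, 33.6) with GPSLatitudeRef "N",
#      GPSLongitude (84, 23, 47.4) with GPSLongitudeRef "W"
lat = dms_to_decimal(33, 46, 33.6, "N")   # 33.776
lon = dms_to_decimal(84, 23, 47.4, "W")   # -84.3965
```

<p>A service like Flickr only has to run this conversion over each upload&#8217;s EXIF GPS block to place the photo on a map.</p>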
<p><strong>Some mobile Twitter clients, like the one I use on my iphone, will let you add your location. But in general Facebook and other sites don&#8217;t have any notion of location. If you look at all the things people do in Facebook, such as sending gifts and other games, it&#8217;s easy to imagine what these might look like with geo-reference data. So the high-level project for the class is that the groups have to design experiences people might have using mobile AR Facebook. We told them to assume Facebook as it stands now, but add geolocation and AR to the client. The class boiled down to &#8220;What would you imagine people doing?&#8221; So it has been kind of fun.</strong></p>
<p><strong>And we are using Unity for the class too &#8211; the same infrastructure I am working on in my research linking mobile AR to persistent immersive mirror-world-type spaces &#8211; and we are having the students mock up what a mobile AR Facebook experience would be like.</strong></p>
<p><strong>Tish: </strong>Can you describe some of the ideas your class came up with that you think have potential? I know Ori mentioned that from the games class he liked <a href="http://www.youtube.com/watch?v=Rqcp8hngdBw&amp;feature=channel_page" target="_blank">Candy Wars</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/candywars-6.png"><img class="alignnone size-medium wp-image-3693" title="candywars-6" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/candywars-6-300x225.png" alt="candywars-6" width="300" height="225" /></a></p>
<p><em>Candy Wars</em></p>
<p><strong>Blair: In the end, they had a nice range of projects in the Spring class. One created tag clouds out of status messages over spaces, others looked at analogies to virtual pets and gift giving out in the world, one looked at leveraging geolocation to help with crowd-sourced cultural translation, and three groups did straight-up social games.</strong></p>
<p><strong>[See <a href="http://www.youtube.com/user/AELatGT" target="_blank">all of the projects from the handheld AR games class on YouTube here</a>]</strong></p>
<h3><strong>iphone, Android, NVidia Tegra devkits, or Texas Instruments&#8217; OMAP3 devkits?</strong></h3>
<p><strong>Tish:</strong> Is anyone in the class working on Android?</p>
<p><strong>Blair: Nobody is using Android because no one in the class has the phones. We have ATT microcell infrastructure on campus. Some ATT people joke that we are better off than they are, because we have a head office on campus, so we can build in-network applications that even people at ATT research can&#8217;t. But because we have this infrastructure on campus, and a great relationship with ATT and the other sponsors, we have the ability to provision our own phones without having to pay for long-term contracts, which is vital for research and teaching.</strong></p>
<p><strong>Tish:</strong> So does this lock you into the iphone?</p>
<p><strong>Blair: Well the G1 is of course not AT&amp;T but it is GSM so we could probably buy them unlocked and put them on our AT&amp;T network. But the students I work with are much more interested in the iphone right now.</strong></p>
<p><strong>Tish:</strong> Is that because the iphone has the market?</p>
<p><strong>Blair: For me the reason I am not interested in the G1 is that you can&#8217;t do AR on it &#8211; there is <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> and a few other apps, but it is all hideously slow. Worse, because the Java code isn&#8217;t compiled like it would be on the desktop, you can&#8217;t do computer vision with it, so you can&#8217;t do anything particularly interesting on the current commercial G1s. We could probably take the NVidia Tegra devkits or the Texas Instruments OMAP3 devkits (both are chipsets for next-gen phones &#8212; high-end graphics, fast processing), and install Android on those, and we may actually do that yet. But it seems like a lot of work right now for not much benefit.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/pastedgraphic.jpg"><img class="alignnone size-medium wp-image-3730" title="pastedgraphic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/pastedgraphic-300x166.jpg" alt="pastedgraphic" width="300" height="166" /></a></p>
<p><em>Augmented Reality shooter game <strong>ARrrrr</strong> from Georgia Tech and SCAD Atlanta on the <strong>NVidia Tegra devkits</strong> &#8211; <a href="http://www.youtube.com/watch?v=cNu4CluFOcw" target="_blank">watch the demo on YouTube here</a>.</em></p>
<p><strong>Tish: </strong>Everyone seems very excited about the iphone OS 3.0 and the addition of a compass. A compass is pretty essential for AR, right?</p>
<p><strong>Blair: It is necessary if you can&#8217;t do other forms of outdoor tracking, but the problem is that the compass on the G1 isn&#8217;t very good, relatively speaking, and the iPhone one probably won&#8217;t be much better. It does not have very high accuracy, nor is it very fast (compared to, say, the high-end 3D orientation sensors we use, from Intersense and MotionNode). As far as I can tell, it doesn&#8217;t even give full 3D orientation. I don&#8217;t have a G1 (although I have pre-ordered an iPhone 3Gs), but people have told me it only has absolute 2D orientation, so you can only line things up if you are careful. You can&#8217;t look around arbitrarily&#8230;</strong></p>
<p><strong>Tish: </strong>You can&#8217;t sweep your phone?</p>
<p><strong>Blair: You can look left and right, but if it doesn&#8217;t have full 3D orientation, you can&#8217;t go up and down. You can&#8217;t tilt it in weird directions. It is not fast in the way you would want for looking around quickly. So it is a nice demo. And it is good for what the Android people use it for, which is to let you browse Google Street View by looking around, which is actually really useful.</strong></p>
<p><strong>I think there are lots of really useful things you can do with such a compass.</strong></p>
<p><strong>And, it is clear that a compass is a necessary feature if we want to do AR. It&#8217;s just not sufficient.</strong></p>
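<p>The distinction Blair is drawing &#8211; absolute 2D heading versus full 3D orientation &#8211; can be sketched in a few lines. A 2D-only compass gives you a direction in the ground plane; to aim a virtual camera up or down you also need pitch (and roll to keep the overlay level on screen, omitted here for brevity):</p>

```python
import math

def heading_to_ground_vector(heading_deg):
    """All a 2D compass gives you: a unit direction in the east/north
    ground plane, with heading in degrees clockwise from north."""
    h = math.radians(heading_deg)
    return (math.sin(h), math.cos(h))      # (east, north)

def view_vector(heading_deg, pitch_deg):
    """A full 3D view direction needs pitch as well - which, per the
    reports Blair cites, the G1-era compass did not provide."""
    h, p = math.radians(heading_deg), math.radians(pitch_deg)
    return (math.cos(p) * math.sin(h),     # east
            math.cos(p) * math.cos(h),     # north
            math.sin(p))                   # up
```

<p>With only the first function available, an AR view can pan left and right but has no idea whether the phone is tilted at the sky or the sidewalk.</p>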
<h3><strong>Outdoor Tracking and Markerless AR</strong></h3>
<p><strong>Tish:</strong> Isn&#8217;t it essential for markerless AR? Or maybe not &#8211; I just saw this post about <a href="http://artimes.rouli.net/2009/04/srengine-in-english.html" target="_blank">SREngine on Augmented Times</a>!</p>
<p>This wasn&#8217;t up when we spoke so perhaps you have some comments about what it brings to the table?</p>
<p><strong>Blair: Maybe. The folks at Nokia are working on outdoor tracking; they demoed some stuff at ISMAR last year on the N95 handsets that is all image-based. We are trying to do some work with them &#8211; one of my students is working on it. And Microsoft is probably going to do more on this as well; they had a video up showing that they are also working on vision-based techniques. If you give the phone the equivalent of those panoramic Google Street View images (assuming they are up to date) and you are standing at the right place, you don&#8217;t really need a compass &#8211; you can figure out which way you are looking from the camera video. Ulrich Neumann (USC) did some work on tracking from panoramas years ago; I don&#8217;t know whatever became of it.</strong></p>
<p><strong>Regarding SREngine, that project appears to be a pretty simple first step, and is probably just a demo at this point; limitations like &#8220;only works on static scenes&#8221; and &#8220;doesn&#8217;t work for simple scenes&#8221; mean it&#8217;s probably extracting some simple features out of the image and then matching those to some database. The trick would be getting this to work on a large scale, where the world changes a lot. It&#8217;s not obvious how to get there.</strong></p>
<p><strong>Tish:</strong> So forget RFID for AR&#8230;</p>
<p><strong>Blair: RFID is not really useful.</strong></p>
<p><strong>Tish:</strong> Not at all?</p>
<p><strong>Blair: RFID is useful for telling you what things are near you. The problem is it doesn&#8217;t give you any directional information &#8211; it just tells you you&#8217;re in range of the tag. So you can use it to tell you when you are near a certain product, for example. It is useful in terms of telling you what thing you are near, and then you can load up a vision system or something else that will recognize that thing.</strong></p>
<p><strong>In that way, it could be useful as a good starting point.</strong></p>
<p><strong>Similarly for computer vision, the compass and the GPS are very useful for giving you an initial guess at what you may be looking at, which can then speed up the rest of the process. But computer vision by itself will not be a complete solution, because if I have my panoramic Google Street View (or whatever image database I use for tracking) and you are standing between me and the building, I am not going to see what I expect to see &#8211; I am going to see you.</strong></p>
<p><strong>So I think it is all going to be part of one big package &#8211; you are going to see accelerometers, digital compasses, and GPS combined with computer vision and other sensors, and then maybe we are going to start getting the things that we have always dreamed about. I like to show <a href="http://mi.eng.cam.ac.uk/~gr281/outdoortracking.html" target="_blank">this video</a> from the University of Cambridge (work done by Gerhard Reitmayr and Tom Drummond) of an outdoor tracking demo, because it gives a sense of what will be possible. Techniques like this will be an ingredient in the future of things. It becomes especially interesting when you have these highly detailed mirror worlds. It is one of those chicken-and-egg problems: if I have a highly detailed model of the world, then techniques like theirs can be used to track. But that mirror world needs to be accurate or you can&#8217;t use it for tracking &#8211; and why would you create the mirror world if you couldn&#8217;t track?</strong></p>
<p><strong>Tish:</strong> I noticed in your comment on <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;my interview with Robert Rice&#8221;</a> that you said you thought it was important not to collapse AR into ubicomp &#8211; &#8220;forgetting what originally inspired us about AR&#8221; was, if I remember correctly, the phrase you used. But aren&#8217;t ubiquitous computing and AR basically coextensive?</p>
<p>The <a href="http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/" target="_blank">vision of ubicomp Mike Kuniavsky describes</a> &#8211; &#8220;sharing data through open APIs and the promise of embedded information processing and networking distributed through the environment&#8221; &#8211; demonstrates how much can be done with very little processing power. In its most immersive form, augmented reality requires a lot of processing power, and I think we have all become very conscious about trying to minimize levels of consumption. Can you explain why you think people shouldn&#8217;t see AR as the Hummer (energy-squandering indulgence) of ubiquitous computing?</p>
<p><strong>Blair: I think there will be a hierarchy of interfaces. You are going to have the rich Rainbow&#8217;s End-like experience &#8211; totally submerged in a mixed environment &#8211; if you have a head mount on (it&#8217;s not going to be Rainbow&#8217;s End for a while). But if you don&#8217;t have the head mount on, that information might be available to you in other ways, whether it is a 3D overlay using your handheld or just a 2D mashup with Google Maps. And there will be some circumstances, and some people, where only the head mount gives the compelling experience.</strong></p>
<p><strong>Tish:</strong> Are you doing any research on how all these hierarchies of experiences will fit together &#8211; what aspects of this are you looking at?</p>
<p><strong>Blair: The thing that really needs to happen is a backend architecture that allows you to collect your data from different sources and aggregate it, much like the web. Right now Google Earth and Microsoft&#8217;s Virtual Earth are much like the old pre-web hypertext systems that were all centralized. What we really need is the web equivalent, where Georgia Tech can publish their building models and IBM can publish their building and campus models, and your client can aggregate them &#8211; as opposed to Microsoft or IBM putting their building models into Google Earth, and you somehow getting them out with Google&#8217;s Google Earth browser. That&#8217;s just not going to fly.</strong></p>
<p><strong>Tish:</strong> So what does it take, then, to get us to this backend architecture? Because I&#8217;m in total agreement.</p>
<p><strong>Blair: The nice thing about augmented reality versus virtual reality is that you don&#8217;t need everything modeled. You can do interesting AR apps like <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> with absolutely no world model.</strong></p>
<p><strong>Tish:</strong> So that means we can start with what we have &#8211; utilize cloud services without a full blown backend architecture?</p>
<p><strong>Blair: It may very well be that Google Earth and MS Virtual Earth act as a portal, because people go and build models and link them with KML, and they can see them in Google Earth but can also download the KML through some other channel. So it may be that those things end up feeding some of this along. Then people start seeing a benefit to having these highly accurate models, and then you start integrating the Microsoft Photosynth stuff and leveraging photographs to generate models.</strong></p>
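<p>The decentralized aggregation Blair describes could start with something as simple as pulling placemarks out of anyone&#8217;s published KML. A minimal sketch using Python&#8217;s standard library (the sample document below is made up):</p>

```python
import xml.etree.ElementTree as ET

# Official KML 2.2 namespace
KML_NS = {"kml": "http://www.opengis.net/kml/2.2"}

def placemark_points(kml_text):
    """Extract (name, lon, lat) from each Placemark in a KML document -
    the raw material an AR browser could aggregate from many publishers."""
    root = ET.fromstring(kml_text)
    points = []
    for pm in root.iter("{http://www.opengis.net/kml/2.2}Placemark"):
        name = pm.findtext("kml:name", default="", namespaces=KML_NS)
        coords = pm.findtext(".//kml:coordinates", default="", namespaces=KML_NS)
        if coords.strip():
            # KML coordinates are "lon,lat[,alt]"
            lon, lat = coords.strip().split(",")[:2]
            points.append((name, float(lon), float(lat)))
    return points
```

<p>A client that can do this against any URL, rather than against one vendor&#8217;s database, is the kind of &#8220;Mosaic for AR&#8221; aggregation discussed below.</p>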
<p><strong>Keeping up with it and building it in real time is the challenge. A lot of folks think it will be tourist applications &#8211; models of Times Square and Central Park, models of Notre Dame and the big square around it in Paris and along the river, or models of Italian and Greek historic sites &#8211; the virtual Rome. As those things start happening, people start building onto the edges, and when Microsoft Photosynth and similar technologies become more pervasive, you can start building models of the world in a semi-automated way from photographs and more structured, intentional drive-bys and so on. So I think it&#8217;ll just sort of happen &#8211; as long as there&#8217;s a way to have the equivalent of Mosaic for AR, the early web browser that allowed you to aggregate all these things. It&#8217;s not going to be a Wikitude &#8211; a thing that lets you get a certain kind of data from a specific source &#8211; rather, it&#8217;s the browser that allows you to link through into these data sources.</strong></p>
<p><strong>So it&#8217;s that end that interests me. It&#8217;s questions like &#8220;what is the user experience&#8221;, how do we create an interface that allows us to layer all these different kinds of information together such that I can use it for all my things. I imagine that I open up my future iphone and I look through it. The background of the iphone, my screen, is just the camera and it&#8217;s always AR.</strong></p>
<p><strong>I want the camera on my phone to always be on, so it&#8217;s not just that when I hold it a certain way it switches to camera mode, but literally it&#8217;s always in video mode so whenever there&#8217;s an AR thing it&#8217;s just there in the background.</strong></p>
<p><strong>When we can do that, I can have little alerts, so when I have my phone open I can look around and see them independent of the buttons and things that I&#8217;m tapping and pushing to use the phone. That&#8217;ll be a really different kind of experience.</strong></p>
<p><strong>Of course, it is not known yet whether the next-gen iphone will have an open video API. And the current camera is pretty low quality, so why would they give it an open API until they put in a better camera? I am not expecting anything one way or the other until the 3Gs comes out and people start using it.</strong></p>
<p><strong>But there are many things about the iphone 3.0 OS that are hugely important, like the discovery API that allows people to play games with other people nearby, that don&#8217;t have much to do with AR.</strong></p>
<p><strong>Tish:</strong> You have an iphone AR virtual pet application, ARf.</p>
<p><a href="http://www.macrumors.com/2009/04/08/video-in-and-magnetometers-could-introduce-interesting-iphone-app-possibilites/" target="_blank">Macrumors wrote it up</a> and suggested that the next-gen iphone will have a compass and an open video API. What are your plans for ARf?</p>
<p><strong>Blair: ARf is just a demo right now. I know what we&#8217;d like to do with it, but it would require tons of work; imagine what it would take to do a multiplayer, social version of Nintendogs? It&#8217;s not clear what we&#8217;d really learn by doing that, but there are lots of other game ideas we have that we want to explore.</strong></p>
<p><strong>Tish:</strong> I think it was on Twitter that Tim O&#8217;Reilly said, &#8220;saying everything must have a RFID tag is like saying we can&#8217;t recognize each other unless we wear name tags. Look at what&#8217;s happening with speech recognition, image recognition et al. and tell me you really think we need embedded metadata.&#8221; What would you say to that?</p>
<p><strong>Blair: I think that whatever extra data is there will be used. So if we put machine readable labels on some objects then they&#8217;ll be used if they make the identification and tracking problem easier. But it&#8217;s pretty clear that people are already working on tracking and so on.</strong></p>
<p><strong>A lot of these mobile AR apps are clearly putting ideas in people&#8217;s minds about things that won&#8217;t really be doable in the near future &#8211; like being able to look down the aisle of a store and have it recognize all of the products. Given the distances and complexity of the scene, the number of pixels devoted to each of those objects, and so on, you just can&#8217;t recognize things in that context. But suppose I&#8217;m standing in front of a small set of objects, or looking at one thing, or standing in front of a building &#8211; or I&#8217;m in the store, and the location API (imagine an enhanced one that can tell me within a few feet where I am) combines with some use of the discovery API that allows the store to tell your device you&#8217;re in the toothpaste section. Now you only have to look for different brands of toothpaste. So now you can recognize the big letters &#8220;Crest&#8221; or whatever. It&#8217;s all about constraining the problem.</strong></p>
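<p>The constraining Blair describes is simple to sketch: instead of matching a camera frame against every product in existence, a location hint shrinks the candidate set to a handful of labels. The catalog and section names below are obviously made up for illustration:</p>

```python
# Hypothetical catalog keyed by aisle section; a real system would get the
# section from an indoor-location/discovery API, as described above.
CATALOG = {
    "toothpaste": ["Crest", "Colgate", "Sensodyne"],
    "cereal": ["Cheerios", "Corn Flakes"],
}

def candidate_labels(section):
    """Only these labels need to be matched against the camera image,
    turning an open-ended recognition problem into a small lookup."""
    return CATALOG.get(section, [])
```

<p>The vision system then only has to discriminate among three toothpaste logos instead of recognizing arbitrary objects, which is what makes the problem tractable.</p>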
<p><strong>That&#8217;s why I like that particular piece of Drummond&#8217;s work, the tracking web site I mentioned above. The general tracking problem of looking around and recognizing objects and tracking is still impossible. But if I know roughly what direction I&#8217;m looking in and I have a good estimate of my position, and I have models of what I should be seeing when I look in that direction, then it becomes a tractable problem. And so it&#8217;s not that a compass and a GPS are 100% necessary. But if you have them it certainly makes things possible that you wouldn&#8217;t otherwise be able to do.</strong></p>
<p><strong>Imagine, for example, if there&#8217;s a new version of GPS &#8211; I just noticed that some of the new satellites going up have this new L5 channel. There are the L1 &amp; L2 signals that military and civilian receivers use, and they added this civilian L5 signal, which should make GPS more accurate. I haven&#8217;t found anything online that says how much more accurate.</strong></p>
<p><strong>But someday, hopefully, all GPS will get to be the quality of survey-grade GPS. Right now, if you get an RTK GPS from one of the companies that make survey-grade GPS systems, they give you position estimates in the range of two centimeters, updated 10 to 20 times a second. When you have that kind of positional accuracy combined with the kind of orientational accuracy you get from the orientation sensors we use in the lab from Intersense and MotionNode, everything is easier because you&#8217;ve pretty much got absolute position. Put that into a phone, and when I look up it&#8217;s still not perfectly aligned, because there will still be errors (especially in orientation, since the compasses are affected by metal and other magnetic noise). But it does mean that if you and I are standing 5 feet apart and look at each other, I can pretty much put a little smiley face above your head. Whereas now, with GPS, if I look at you and we&#8217;re 5 feet apart, our GPS units might think we&#8217;re on opposite sides of each other, because they&#8217;re only accurate to two to five meters.</strong></p>
<p><strong>And that&#8217;s depending on the time of day and the weather!</strong></p>
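<p>Blair&#8217;s 5-feet-apart example is easy to quantify. If both phones can each be off by the GPS position error, a label anchored on your friend can be drawn at an angle of roughly atan(2&#183;error / distance) away from where they actually stand. A back-of-the-envelope sketch:</p>

```python
import math

def label_angle_error_deg(distance_m, gps_error_m):
    """Worst-case angular error when anchoring a label on someone
    distance_m away, if both endpoints can each be off by gps_error_m."""
    return math.degrees(math.atan2(2 * gps_error_m, distance_m))

# ~5 feet apart (1.5 m): consumer GPS (~3 m error) vs RTK GPS (~2 cm error)
consumer = label_angle_error_deg(1.5, 3.0)   # ~76 degrees - hopeless
rtk = label_angle_error_deg(1.5, 0.02)       # ~1.5 degrees - usable
```

<p>With consumer-grade error larger than the separation itself, the label can land behind you instead of over your friend&#8217;s head &#8211; exactly the &#8220;opposite sides of each other&#8221; failure described above.</p>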
<p><strong>Putting RFID tags everywhere is easy; the problem is the readers &#8211; they currently require lots of power and they have a limited range. Sprinkling RFID tags everywhere is fine. But you have to be able to activate those tags and read back the signal. In certain contexts it works.</strong></p>
<p><strong>Tish:</strong> And one final question! What do you think can be done about beginning to think about standards for AR? Is there a meaningful discussion going on yet? Thomas Wrobel left this comment on my blog recently, and I was wondering what your position was on some of the ideas he raises.</p>
<p>Wrobel wrote: <em>&#8220;The AR has to come to the users, they can&#8217;t keep needing to download unique bits of software for every bit of content! We need an AR browsing standard that lets users log into and out of channels (like IRC) and toggle them as layers on their visual view (like Photoshop). Channels need to be public or private, hosted online (making them shared spaces) or offline (private spaces). They need to be able to be both open (chat channel) or closed (city map channel) as needed. Created by anyone anywhere. Really, IRC itself provides a great starting point. Most data doesn&#8217;t need to be persistent, after all. I look forward to seeing the world through new eyes. I only hope I will be toggling layers rather than alt+tabbing and only seeing one &#8216;reality addition&#8217; at a time.&#8221;</em></p>
<p><strong>Blair: I agree with him, in principle. But, I&#8217;m not sure there&#8217;s a point yet. It can&#8217;t hurt to try, of course, from a research perspective, and I&#8217;m interested in the experience such an infrastructure would enable (as we&#8217;ve talked about already).</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>Location Becomes Oxygen at Where 2.0 &amp; WhereCamp</title>
		<link>http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/</link>
		<comments>http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/#comments</comments>
		<pubDate>Tue, 02 Jun 2009 21:43:49 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Aaron Straup Cope]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[bottom up urban informatics]]></category>
		<category><![CDATA[Brady Forrest]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[community sensing]]></category>
		<category><![CDATA[curating big data]]></category>
		<category><![CDATA[Dan Catt]]></category>
		<category><![CDATA[Eric Horvitz]]></category>
		<category><![CDATA[everyware]]></category>
		<category><![CDATA[FireEagle]]></category>
		<category><![CDATA[Flickr Corrections]]></category>
		<category><![CDATA[Flickr Nearby]]></category>
		<category><![CDATA[Food Genome]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[geo platform]]></category>
		<category><![CDATA[geo platforms]]></category>
		<category><![CDATA[geoblogging]]></category>
		<category><![CDATA[geoplanet]]></category>
		<category><![CDATA[geotagging]]></category>
		<category><![CDATA[geowanking]]></category>
		<category><![CDATA[GigaPan]]></category>
		<category><![CDATA[gigapanning]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[googlewave]]></category>
		<category><![CDATA[headmap manifesto]]></category>
		<category><![CDATA[J.G. Ballard]]></category>
		<category><![CDATA[Jo Walsh]]></category>
		<category><![CDATA[Joshua Schachter]]></category>
		<category><![CDATA[location awaeness]]></category>
		<category><![CDATA[location versus place]]></category>
		<category><![CDATA[locative media]]></category>
		<category><![CDATA[machine intelligence and human intelligence]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[magic words and microsyntax]]></category>
		<category><![CDATA[Mapping Hacks]]></category>
		<category><![CDATA[Marc Powell]]></category>
		<category><![CDATA[Microsyntax]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[neogeography]]></category>
		<category><![CDATA[Odeo Yokai]]></category>
		<category><![CDATA[OpenGeo]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[paleogeography]]></category>
		<category><![CDATA[Papernet]]></category>
		<category><![CDATA[personal informatics]]></category>
		<category><![CDATA[Placemaker]]></category>
		<category><![CDATA[privacy and community sensing]]></category>
		<category><![CDATA[privacy and sensor networks]]></category>
		<category><![CDATA[psychogeography]]></category>
		<category><![CDATA[psychosynthography]]></category>
		<category><![CDATA[Raven Zachary]]></category>
		<category><![CDATA[real time web based visualization and mapping]]></category>
		<category><![CDATA[reality mining]]></category>
		<category><![CDATA[Rich Gibson]]></category>
		<category><![CDATA[Schuyler Erie]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[shape files]]></category>
		<category><![CDATA[shapefiles]]></category>
		<category><![CDATA[smart cities]]></category>
		<category><![CDATA[smart phones]]></category>
		<category><![CDATA[social geography]]></category>
		<category><![CDATA[social networks]]></category>
		<category><![CDATA[social reality mining]]></category>
		<category><![CDATA[Sophia Parafina]]></category>
		<category><![CDATA[Stamen Design]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[The Ubiquitous Media Studio]]></category>
		<category><![CDATA[the web in the world]]></category>
		<category><![CDATA[Tom Carden]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubicomp hackers]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[wearable sensory substitution devices for navigation]]></category>
		<category><![CDATA[Where2.0]]></category>
		<category><![CDATA[WhereCamp]]></category>
		<category><![CDATA[WOEID]]></category>
		<category><![CDATA[yahoo! geotechnologies group]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3567</guid>
		<description><![CDATA[curatingbigdatapost]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/anselmcircletime.jpg"><img class="alignnone size-medium wp-image-3578" title="anselmcircletime" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/anselmcircletime-300x199.jpg" alt="anselmcircletime" width="300" height="199" /></a></p>
<p>The biggest news at <a href="http://en.oreilly.com/where2009/" target="_blank">Where 2.0, 2009</a> came from the <a href="http://developer.yahoo.com/geo/" target="_blank">Yahoo! Geo Technologies Group</a>. Tyler Bell announced Yahoo! <a href="http://developer.yahoo.com/geo/placemaker">Placemaker</a> and the opening up of the <a href="http://developer.yahoo.com/geo/geoplanet/" target="_blank">GeoPlanet</a> data set &#8211; &#8220;all of the WOEIDs [<a href="http://developer.yahoo.com/geo/">Where On Earth (WOE)</a> IDs] available as a free download under Creative Commons in June&#8221; (see <a href="http://radar.oreilly.com/brady/" target="_blank">Brady Forrest&#8217;s post</a> for more details).</p>
<p><a id="qa9y" title="WhereCamp 2009" href="http://wherecamp.pbworks.com/WhereCamp2009" target="_blank">WhereCamp 2009</a> was held immediately after <a href="http://en.oreilly.com/where2009/" target="_blank">Where 2.0</a> and was a great place to chew on the events and ideas of Where 2.0. In the picture above, Anselm Hook addresses the WhereCamp morning circle in the courtyard outside the <a id="i:ij" title="Socialtext" href="http://www.socialtext.com/" target="_blank">Socialtext</a> offices in Palo Alto. Anselm pointed out to me:</p>
<p><strong>&#8220;there are interesting implications of placemaker in combination with other yahoo assets &#8211; in particular <a href="http://developer.yahoo.com/yql/" target="_blank">YQL</a> &#8211; placemaker by itself is neat &#8211; but placemaker combined with everything else is a natural missing piece that is a big enabler.Â  Yahoo has been impressive.&#8221;</strong></p>
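<p>To make Anselm&#8217;s point concrete, here is a minimal sketch of pairing Placemaker-style place data with YQL. It assumes the public YQL endpoint and the <code>geo.places</code> table roughly as they were documented circa 2009 (the service has long since been retired), so treat it purely as an illustration:</p>

```python
# Sketch: resolving free text to GeoPlanet places via YQL.
# Endpoint and table name are as documented circa 2009 (now retired),
# so this is illustrative rather than something to run today.
from urllib.parse import urlencode

YQL_ENDPOINT = "https://query.yahooapis.com/v1/public/yql"

def yql_places_url(place_text: str) -> str:
    """Build a YQL request URL that resolves free text to places and WOEIDs."""
    query = 'select woeid, name from geo.places where text="%s"' % place_text
    return YQL_ENDPOINT + "?" + urlencode({"q": query, "format": "json"})

url = yql_places_url("Palo Alto, CA")
```

<p>Fetching a URL like this returned JSON listing candidate places and their WOEIDs, which other YQL tables could then join against &#8211; the &#8220;combined with everything else&#8221; part of Anselm&#8217;s observation.</p>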
<p>With all the Geo platform power available to us now (also see <a href="http://radar.oreilly.com/2009/05/new-geo-for-devs-from-google-i.html" target="_blank">New Geo for Devs from Google I/O</a>), there isn&#8217;t a shadow of a doubt in my mind that Brady was right when he said, just before the Where 2.0 conference: <strong>&#8220;Location is no longer a differentiator; it&#8217;s going to become oxygen&#8221;</strong> <a href="http://www.webmonkey.com/blog/New_Wave_of_Apps_Build__Where__Into_the_Web" target="_blank">(quote from WebMonkey)</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/spatialjunkies1.jpg"><img class="alignnone size-medium wp-image-3612" title="spatialjunkies1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/spatialjunkies1-300x199.jpg" alt="spatialjunkies1" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/yahoogeo41.jpg"><img class="alignnone size-medium wp-image-3614" title="yahoogeo41" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/yahoogeo41-300x199.jpg" alt="yahoogeo41" width="300" height="199" /></a></p>
<p><em>The Yahoo! GeoPlanet team at WhereCamp &#8211; Tyler Bell (talking to Brady Forrest in the picture on the left) is sporting his spatial junkies T-shirt. Photo on right: Aaron Cope, Tyler Bell, Martin Barnes, Gary Gale.</em></p>
<p>WhereCamp was alive with key figures from the social geography movement who knew the power of these new tools (see <a href="http://www.flickr.com/photos/ugotrade/sets/72157618662411286/" target="_blank">some of my photos of WhereCamp on Flickr here</a>).</p>
<p>The importance of the Yahoo! announcement really became clear to me at <a href="http://www.socialtext.net/wherecamp/index.cgi" target="_blank">WhereCamp</a>, where I attended sessions all day Saturday, including the Curating Big Data session led by <a href="http://stamen.com/studio/tom" target="_blank">Tom Carden, Stamen Design</a> and <a href="http://www.aaronstraupcope.com/" target="_blank">Aaron Straup Cope</a>, Flickr (see Aaron&#8217;s slides from his <a href="http://en.oreilly.com/where2009/public/schedule/detail/7212" target="_blank">Where 2.0 presentation on &#8220;The Shape of Alpha&#8221; here</a> and video <a href="http://where.blip.tv/file/2167471/" target="_blank">here</a>).</p>
<p>Anselm Hook, a prime mover for WhereCamp, is a leading philosopher of place making and a veteran software developer who led <a href="http://platial.com/" target="_blank">Platial</a> engineering and is now at web consultancy <a rel="nofollow" href="http://makerlab.com/">http://makerlab.com</a><span class="bio">. If you missed Anselm at WhereCamp, he will be presenting on <a href="http://opensourcebridge.org/sessions/246" target="_blank">Ubiquitous Angels</a> at <a href="http://opensourcebridge.org/users/288" target="_blank">The OpenSource Bridge</a>, Portland, Oregon, June 17th&#8211;19th, 2009.</span></p>
<p>Anselm describes where he thinks the challenges are:</p>
<p><strong>&#8220;We should be mapping information that in some ways has been historically unmappable because it is 1) not valued or is 2) actively seen as threatening or is 3) simply too hard to map using traditional tools.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/wherecampschedul.jpg"><img class="alignnone size-medium wp-image-3680" title="wherecampschedul" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/wherecampschedul-300x199.jpg" alt="wherecampschedul" width="300" height="199" /></a></p>
<p><em>The WhereCamp Schedule</em></p>
<p><strong><span style="font-size: medium;">The Shape of Alpha</span></strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-57.png"><img class="alignnone size-medium wp-image-3647" title="picture-57" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-57-300x220.png" alt="picture-57" width="300" height="220" /></a></p>
<p><em>Screen capture from Aaron&#8217;s <a href="http://en.oreilly.com/where2009/public/schedule/detail/7212" target="_blank">Where 2.0 presentation on &#8220;The Shape of Alpha.&#8221;</a> Original photo from Flickr user <a href="http://www.flickr.com/photos/nickisconfused/3291840240/" target="_blank">&#8220;NickIsConfused&#8221;</a>.</em></p>
<p>Aaron Straup Cope&#8217;s work on <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a> puts key questions about curating big data center stage.</p>
<p>First, there is the exploration of what it means to curate and collaborate over meaning drawn from &#8220;the abundance of data produced in the precise but distant language of machines&#8221; (also see <a href="http://www.archimuse.com/mw2009/abstracts/prg_335001944.html" target="_blank">The Interpretation of Bias (and the bias of interpretation)</a>). The Shape of Alpha uses a process of <a href="http://code.flickr.com/blog/2008/09/04/whos-on-first/">reverse-geocoding</a> to translate machine-generated geographic data into place names that people can understand and relate to.</p>
<p>The <a href="http://en.wikipedia.org/wiki/Shapefile" target="_blank">shapefiles</a> are built with nothing but geotagged photos and some code called clustr (written by the brilliant <a href="http://iconocla.st/cv.html" target="_blank">Schuyler Erle</a> &#8211; co-author of <a href="http://search.barnesandnoble.com/Mapping-Hacks/Schuyler-Erie/e/9780596007034" target="_blank">Mapping Hacks</a>). Anyone can make these <a href="http://en.wikipedia.org/wiki/Shapefile" target="_blank">shapefiles</a>; you can get them out of the <a href="http://www.flickr.com/services/api">Flickr API</a>. Aaron has been keying off WOEIDs (<a href="http://developer.yahoo.com/geo/">Where On Earth (WOE)</a> IDs) but, as Aaron noted, you can key off anything you like &#8211; tags are an obvious choice.</p>
<p>Wow! You can reinvent mapping with this stuff.</p>
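<p>As a toy illustration of what clustr does (and emphatically not Aaron&#8217;s or Schuyler&#8217;s actual code): clustr derives alpha shapes from point clouds; the sketch below uses a plain convex hull, the limiting case of an alpha shape, to boil a bag of geotagged photo coordinates down to a boundary polygon.</p>

```python
# Toy stand-in for clustr: derive a boundary polygon from geotagged
# photo coordinates. clustr computes alpha shapes; here we use the
# plain convex hull (Andrew's monotone chain algorithm) instead.
def convex_hull(points):
    """Return the convex hull of (lon, lat) pairs, counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Photos geotagged around a neighbourhood collapse to its outline:
photos = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0), (1.0, 1.0)]
hull = convex_hull(photos)
```

<p>An alpha shape tightens this hull so the polygon hugs concavities in the photo cloud, which is why Flickr&#8217;s shapes can follow coastlines and city edges instead of stamping out convex blobs.</p>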
<p>Very importantly, <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a> tells us something about how we relate to place versus location. The emotions, disputes and behavior related to place also emerge through crowd-sourced corrections. For more, <a href="http://www.aaronland.info/weblog/2008/07/27/invisible/#corrections" target="_blank">see this very evocative post by Aaron about corrections and treating airports as cities</a>. There is a glorious thread/riff and ode to the genius of J. G. Ballard pursued by Aaron and Dan Catt in their posts (also see Dan Catt&#8217;s <a title="J.G. Ballard, Flickr, naked singularities and 3-letter airport codes" rel="bookmark" href="http://geobloggers.com/2009/05/11/j-g-ballard-flickr-naked-singularities-and-3-letter-airports-code/">J.G. Ballard, Flickr, naked singularities and 3-letter airport codes</a>; Aaron also pointed me to <a href="http://www.ballardian.com/the-real-concrete-island" target="_blank">this brilliant &#8220;geo-detective work&#8221;</a> on <a href="http://www.ballardian.com/biblio-concrete-island">Concrete Island</a> by Mike Bonsall).</p>
<p>Dan Catt created <a href="http://geobloggers.com/" target="_blank">geobloggers</a> and &#8220;seeded the geotagging community around the Web.&#8221; I met Reverend Dan Catt (Twitter: @revdancatt) at Where 2.0 when he was kind enough to share part of his seat so I could join a very interesting discussion with Aaron on The Shape of Alpha.</p>
<p>As <a href="http://www.aaronland.info/weblog/2008/07/27/invisible/#corrections" target="_blank">Aaron points out</a>, they decided to treat &#8220;the airport itself <em>as</em> the town&#8230;&#8221; not (only) because they admired the work of <a href="http://www.jgballard.com/airports.htm">J.G. Ballard</a>, &#8220;but because it is the right thing to do.&#8221;</p>
<p>Dan Catt has excellent <a href="http://blog.flickr.net/en/2008/08/08/introducing-a-new-way-to-geotag/">blog posts</a> &#8220;describing the nuts and bolts of how &#8216;corrections&#8217; works.&#8221; Aaron points out, &#8220;in <a href="http://code.flickr.com/blog/2008/08/08/location-keeping-it-real-on-the-streets-yo/">the nerdier of the two</a> Dan sums it up nicely by saying&#8221;:</p>
<blockquote class="hier"><p><strong>&#8220;On a slightly more philosophical level, it&#8217;s a never ending process. We&#8217;ll never reach a point where we can say &#8220;Right that&#8217;s in, all borders between places have been decided.&#8221; But what we should end up with are boundaries as defined by Flickr users.</strong></p>
<p><strong>&#8230;</strong></p>
<p><strong>For us, it&#8217;s a first small step into an experiment, and actually a pretty big experiment as we&#8217;re potentially accepting &#8220;corrections&#8221; from our millions and millions of users. We&#8217;re not quite sure how it&#8217;ll all turn out, but we&#8217;re armed with Maths, Algorithms and kitten photos.&#8221;</strong></p></blockquote>
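<p>The mechanics Dan describes can be caricatured in a few lines: accept a crowd-sourced place correction only once enough independent users agree. The quorum threshold and data shapes below are invented for illustration; this is not Flickr&#8217;s actual algorithm.</p>

```python
# Toy "corrections" aggregator: a photo's place assignment changes only
# when a quorum of distinct users proposes the same alternative WOEID.
from collections import Counter

def resolve_place(current_woeid, corrections, quorum=3):
    """corrections: iterable of WOEIDs proposed by distinct users."""
    tally = Counter(corrections)
    if not tally:
        return current_woeid
    proposed, votes = tally.most_common(1)[0]
    return proposed if votes >= quorum else current_woeid
```

<p>With something like this, the boundaries that emerge really are &#8220;as defined by Flickr users&#8221; &#8211; no single correction moves a border, but consensus does.</p>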
<h3>Psychosynthography &#8211; &#8220;Wearing Geography as a Perfume&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-59.png"><img class="alignnone size-medium wp-image-3649" title="picture-59" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-59-300x224.png" alt="picture-59" width="300" height="224" /></a><em> </em></p>
<p><em>Psychosynthography screen capture from Aaron Straup Cope&#8217;s </em><a href="http://en.oreilly.com/where2009/public/schedule/detail/7212" target="_blank">Where 2.0 presentation</a><em>. Original photo from Flickr user <a href="http://www.flickr.com/photos/nitelynx/44189973/" target="_blank">&#8220;NiteLynx.&#8221;</a></em></p>
<p>As I mentioned before, many of the ideas raised at Where 2.0 were unpacked and worked through at WhereCamp. For example, Aaron introduced the word <strong>psychosynthography</strong> in the last 24 seconds of his talk at Where 2.0.</p>
<p>So I spent as much time as I could listening to Aaron at WhereCamp, and asking him about psychosynthography and more (post of this interview upcoming).</p>
<p>Aaron urged the Where 2.0 audience to pay attention to the Psychogeography movement seeded by <a title="Guy Debord" href="http://en.wikipedia.org/wiki/Guy_Debord">Guy Debord</a>, and <strong>&#8220;to wear geography like a perfume.&#8221;</strong></p>
<p>Joseph Hart, writing in <a href="http://www.utne.com/2004-07-01/a-new-way-of-walking.aspx" target="_blank">&#8220;A New Way of Walking,&#8221;</a> describes psychogeography as:</p>
<p><strong>&#8220;a whole toy box full of playful, inventive strategies for exploring cities&#8230;just about anything that takes <span class="mw-redirect">pedestrians</span> off their predictable paths and jolts them into a new awareness of the urban landscape.&#8221;</strong></p>
<h3>Curating Big Data</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/tomcarden.jpg"><img class="alignnone size-medium wp-image-3625" title="tomcarden" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/tomcarden-300x199.jpg" alt="tomcarden" width="300" height="199" /></a></p>
<p><em><a href="http://stamen.com/studio/tom" target="_blank">Tom Carden, Stamen</a>, (picture above) paired with Aaron for the Curating Big Data session. Tom noted: </em></p>
<p><strong>&#8220;The Curating Big Data session for me was an attempt to learn from other attendees (as opposed to teach/lead, as with the Stamen session, &#8220;Real Time Web-Based Visualization and Mapping&#8221;). Also, it was an excuse to get Aaron to recap parts of the Flickr Shapefile story for WhereCamp folks, and to get *input* on how to do more things like it. I was a bit disappointed that nobody had really good examples for us, but I was happy with Brad Stenger&#8217;s suggestion to look into the upcoming census data as a relevant area.&#8221;</strong></p>
<p>Aaron&#8217;s work on The Shape of Alpha and The Corrections project shows, as Tom noted:</p>
<p><strong>&#8220;what you can do once you have 150 million geotagged photos, and millions of users who are willing to say I took this thing here and my name for that place is&#8230;&#8221;</strong></p>
<p>And part of the significance of opening up the GeoPlanet data set is that now:</p>
<p><strong>&#8220;we can try and start talking about the same places, as far as, [for example], these shape files go. So if you are interested in what comes out of the Flickr shape files project but you also have your own opinion about what shape those places are, the IDs have to be open &#8211; you have to be sure that you are talking about the same thing in the first place.&#8221;</strong></p>
<p>And, as Tom pointed out, collaborating over geo data informs us about curating any big dataset:</p>
<p><strong>â€œit should lead to an overarching discussion about any kind of dataset geo or otherwise and ways in which we can talk about it, and think about patterns for improving that data, for collaborating, even on things like cleanup.â€</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/realtimewebbased-visualizationandmapping.jpg"><img class="alignnone size-medium wp-image-3681" title="realtimewebbased-visualizationandmapping" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/realtimewebbased-visualizationandmapping-300x199.jpg" alt="realtimewebbased-visualizationandmapping" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/curatingbigdatapost.jpg"><img class="alignnone size-medium wp-image-3739" title="curatingbigdatapost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/curatingbigdatapost-300x199.jpg" alt="curatingbigdatapost" width="300" height="199" /></a></p>
<p><em>Warp speed geo-genius Andrew Turner, <a href="http://www.fortiusone.com/" target="_blank">Fortius One</a>, took these excellent notes for the &#8220;Real Time Web-Based Visualization and Mapping&#8221; (on left) and &#8220;Curating Big Data&#8221; (on the right) sessions.</em></p>
<p>On my way to Where 2.0 I took the train from SFO to San Jose, which was a delight but a little slower than I imagined. So, unfortunately, I arrived on Tuesday just after <a href="http://en.oreilly.com/et2009/public/schedule/speaker/3486">Michal Migurski</a> (Stamen Design) and <a href="http://en.oreilly.com/et2009/public/schedule/speaker/40013">Shawn Allen</a> (Stamen Design) presented <a class="attach" href="http://assets.en.oreilly.com/1/event/20/Maps%20from%20Scratch_%20Online%20Maps%20from%20the%20Ground%20Up%20Presentation.pdf">Maps from Scratch: Online Maps from the Ground Up</a>. This was on my MUST attend list and it was a wonderful opportunity to get into &#8220;Real Time Web-Based Visualization and Mapping.&#8221; I did get a chance to talk to Michal and Shawn a bit later in the conference, but I will try to catch up with them soon for an in-depth story. Below is Shawn Allen&#8217;s map of overlapping data sets from <a href="http://www.flickr.com/photos/shazbot/3282821808/" target="_blank">&#8220;Trees, cabs and crime in San Francisco&#8221;</a>:</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/treescrimecabs.png"><img class="alignnone size-medium wp-image-3743" title="treescrimecabs" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/treescrimecabs-300x273.png" alt="treescrimecabs" width="300" height="273" /></a></p>
<p>Another follow-up I am really looking forward to making is with <a href="http://lizbarry.com/s+em/contact.htm" target="_blank">Liz Barry</a> and her work on <a href="http://lizbarry.com/s+em/about.htm" target="_blank">S+EM</a>, &#8220;an environmental mapping and social networking design project that links New York City trees with the people who care for them&#8221; (also see <a href="http://fuf.net/" target="_blank">Creating a Greener San Francisco Tree by Tree</a>). Also, I got a chance to talk to another fellow New Yorker (we have to travel to the West Coast to find time to chat!), <a href="http://radar.oreilly.com/jgeraci/" target="_blank">John Geraci</a> of <a href="http://diycity.org/" target="_blank">DIY City</a>, who presented <a class="attach" href="http://assets.en.oreilly.com/1/event/25/DIY%20City_%20An%20Operating%20System%20for%20Cities%20Presentation.zip">DIY City: An Operating System for Cities</a>.</p>
<h3>Machine Intelligence and Human Intelligence</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronandandrew.jpg"><img class="alignnone size-medium wp-image-3622" title="aaronandandrew" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronandandrew-300x199.jpg" alt="aaronandandrew" width="300" height="199" /></a></p>
<p><em>Aaron Cope, Flickr (on the left), talking to Andrew Turner (on the right), CTO of FortiusOne (see Andrew&#8217;s presentation at Where 2.0, <a href="http://blip.tv/file/2167650" target="_blank">&#8220;Your Own Private Geo Cloud&#8221;</a>)</em></p>
<p>Many of the most interesting conversations happened in between sessions at WhereCamp and Where 2.0.</p>
<p>I caught this one, in which Aaron Cope and Andrew Turner were discussing some of the ideas Aaron raised in his presentation, <a href="http://www.slideshare.net/straup/capacity-planning-for-meaning-presentation-637370?type=powerpoint" target="_blank">&#8220;Capacity planning for meaning in the age of personal informatics&#8221;</a> (see Aaron&#8217;s blog post, <a href="http://www.aaronland.info/weblog/2008/10/08/tree/" target="_blank">Tree planting and tree hugging in the age of personal informatics</a>). The core question they were discussing was what happens when you wire the world at the scale people are talking about and it breaks&#8230; Aaron argues that you already have a whole class of people in systems operations who can tell us a lot about how to answer this question.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/rossmayfieldsocialtextpost.jpg"><img class="alignnone size-medium wp-image-3594" title="rossmayfieldsocialtextpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/rossmayfieldsocialtextpost-300x199.jpg" alt="rossmayfieldsocialtextpost" width="300" height="199" /></a></p>
<p><em><span class="bio">Ryan and Anselm shared the morning circle pulpit with <a href="http://ross.typepad.com/" target="_blank">Ross Mayfield</a> of <a href="http://www.socialtext.com/" target="_blank">Social Text</a>, who generously hosted WhereCamp.</span></em></p>
<h3>Social Reality Mining</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/benjaminbratton1.jpg"><img class="alignnone size-medium wp-image-3651" title="benjaminbratton1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/benjaminbratton1-300x199.jpg" alt="benjaminbratton1" width="300" height="199" /></a></p>
<p><strong>&#8220;As it stands today, we have no idea what terms and limits of a cloud based citizenship of the Google Caliphate will entail and curtail. Some amalgam of post-secular cosmopolitanism, agonistic radical democracy, and post-rational actor microeconomics, largely driven by intersecting petabyte at-hand datasets and mutant strains of Abrahamic monotheism. But specifically, what is governance (let alone government) within this?&#8221;</strong> <a href="http://bratton.info/" target="_blank">from Benjamin Bratton&#8217;s</a> talk at ETech 2009 (picture above), <a href="http://www.bratton.info/emergency.html" target="_blank">Undesigning the Emergency: Against Prophylactic Urban Membranes</a>.</p>
<p>The other big takeaway from WhereWeek &#8211; Where 2.0 and WhereCamp &#8211; was not so much news as a confirmation of something that has been pretty clear for a while now. (Check out <a href="http://radar.oreilly.com/2008/05/the-results-of-reality-mining.html" target="_blank">Brady&#8217;s posts on reality mining at Where 2.0 last year</a>.)</p>
<p>We are moving headlong into the era of reality mining, with all its myriad possibilities: &#8220;hedonistic optimization&#8221; (this term came from <a href="http://brainofstig.ai/" target="_blank">Stig Hackvan</a> when I asked him about some of the ideas central to the <a href="http://docs.google.com/tecfa.unige.ch/%7Enova/headmap-manifesto.PDF" target="_blank">HeadMap Manifesto</a> &#8211; more about HeadMap later in this post); new forms of marketing (social reality mining to predict if someone is going to trade business cards in the next 120 seconds &#8211; <a href="http://en.oreilly.com/where2009/public/schedule/speaker/46016" target="_blank">Alex &#8220;Sandy&#8221; Pentland, MIT, Where 2.0</a>); stuff that matters to save us from mass extinction, like distributed sustainability &#8211; greening production and consumption and our cities; open government; empowering indigenous communities (also see Rebecca Moore&#8217;s <a class="attach" href="http://assets.en.oreilly.com/1/event/25/Indigenous%20Mapping_%20Emerging%20Cultures%20on%20the%20Geoweb%20Presentation.ppt">Indigenous Mapping: Emerging Cultures on the Geoweb Presentation</a>); and, not to be forgotten, the troubling possibility of new forms of social control.</p>
<h3>Smart phones are powerful networked sensor devices in the palm of our hand</h3>
<p>As Sandy Pentland, MIT, pointed out in his Where 2.0 keynote, <a href="http://en.oreilly.com/where2009/public/schedule/detail/7956" target="_blank">&#8220;Reality Mining for Companies, or, How Social Networks Network Best,&#8221;</a> mobile phones have created a ubiquitous instrumented reality that goes way deeper than location awareness. Smart phones are powerful networked sensor devices in the palm of our hand that know a lot more about us than location. With proximity, motion (accelerometers), voice, images, call logs and email, what is enabled is not just knowing where people are but knowing more about them.</p>
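<p>A tiny sketch of the kind of inference Pentland is pointing at: even bare (user, time, place) sightings from phones reveal who is together, not just where people are. The data model here is invented for illustration and is far cruder than real reality-mining pipelines.</p>

```python
# Toy co-presence miner: from (user, hour, place) sightings, count how
# often each pair of users turns up in the same place at the same time.
from collections import defaultdict
from itertools import combinations

def co_presence(sightings):
    """sightings: iterable of (user, hour, place) tuples."""
    by_slot = defaultdict(set)
    for user, hour, place in sightings:
        by_slot[(hour, place)].add(user)
    pairs = defaultdict(int)
    for users in by_slot.values():
        for a, b in combinations(sorted(users), 2):
            pairs[(a, b)] += 1
    return dict(pairs)

logs = [("ann", 9, "cafe"), ("bob", 9, "cafe"),
        ("ann", 10, "office"), ("bob", 11, "office")]
together = co_presence(logs)
```

<p>Layer accelerometer, voice and call-log signals on top of this and you get the richer social inferences Pentland describes &#8211; which is exactly why the privacy questions later in this post matter.</p>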
<p>Many of the issues raised by <a href="http://speedbird.wordpress.com/" target="_blank">Adam Greenfield</a> in <a href="http://speedbird.wordpress.com/my-book-everyware-the-dawning-age-of-ubiquitous-computing/" target="_blank">Everyware</a> and in <a href="../../2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/" target="_blank">my interview with Adam</a> were on my mind during WhereWeek, as were questions distilled and explored last year in Matt Jones&#8217;s presentation, <a href="http://www.slideshare.net/blackbeltjones/polite-pertinent-and-pretty-designing-for-the-newwave-of-personal-informatics-493301" target="_blank">Polite, Pertinent, and&#8230; Pretty: Designing for the New-wave of Personal Informatics</a>, and <a href="http://www.slideshare.net/tmo/the-web-in-the-world-presentation" target="_blank">Timo Arnall&#8217;s presentation, The Web in the World</a>.</p>
<h3>Google Wave, Pachube Feeds, Sensor Networks and Microsyntax!</h3>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="560" height="340" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/pi4MhQgGNqI&amp;hl=en&amp;fs=1" /><param name="allowfullscreen" value="true" /><embed type="application/x-shockwave-flash" width="560" height="340" src="http://www.youtube.com/v/pi4MhQgGNqI&amp;hl=en&amp;fs=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p><em><a id="o_ok" title="Visualizing 24 hours of @pachube" href="http://is.gd/IYOj" target="_blank">Visualizing 24 hours of Pachube</a> logs &#8211; feeds all around the world &#8211; built with Processing.</em></p>
<p>I found myself really wishing <a href="http://www.pachube.com/" target="_blank">Pachube</a> founder Usman Haque had been able to come to Where 2.0 this year &#8211; Usman was originally on the Where 2.0 schedule but had to drop out. My small contribution to WhereCamp was to discuss <a href="http://www.pachube.com/" target="_blank">Pachube</a>, <a href="http://www.haque.co.uk/naturalfuse.php" target="_blank">Natural Fuse</a> and <a href="http://www.shaspa.com/" target="_blank">OpenShaspa</a> in the Urban Eco-Management session (<a href="../../2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">see my interview with Pachube founder Usman Haque here</a>).</p>
<p>Just before Where 2.0, Pachube announced <a id="du7_" title="mapping mobile feeds in realtime" href="http://is.gd/BjJT" target="_blank">mapping mobile feeds in realtime</a>, with 3D time- and location-based graphing of datastream values.</p>
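<p>A Pachube feed is, at bottom, structured data &#8211; a list of datastreams plus a location &#8211; which is what makes realtime mapping and graphing straightforward. The sample document below is hypothetical, loosely following the general shape of Pachube&#8217;s JSON feed format rather than quoting it exactly:</p>

```python
# Parse a (hypothetical) Pachube-style feed into plottable points.
import json

sample_feed = """{
  "title": "Office energy monitor",
  "location": {"lat": 51.52, "lon": -0.08},
  "datastreams": [
    {"id": "watts", "current_value": "312"},
    {"id": "temp_c", "current_value": "21.5"}
  ]
}"""

def feed_points(doc):
    """Yield (lat, lon, stream_id, value) tuples ready for mapping."""
    feed = json.loads(doc)
    lat, lon = feed["location"]["lat"], feed["location"]["lon"]
    for stream in feed["datastreams"]:
        yield lat, lon, stream["id"], float(stream["current_value"])

points = list(feed_points(sample_feed))
```

<p>Poll feeds like this every few seconds and hand the tuples to a renderer (Processing, in the visualization above) and you have realtime mapping of sensor data.</p>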
<p>And, as I was writing up this post, I was delighted to see <a href="http://www.wired.com/beyond_the_beyond/2009/05/spime-watch-pachube-feeds/" target="_blank">this post by Bruce Sterling on Pachube Feeds</a> and his challenge, offering:</p>
<p><strong>&#8220;(((Extra credit for eager ubicomp hackers: combine this [pachube feeds] with Googlewave, then describe it in microsyntax. Hello, 2015!)))&#8221;</strong></p>
<p>Also Anselm Hook, who has an extensive background in video game development, made an interesting point about Google Wave to me:</p>
<p><strong>&#8220;btw &#8211; there is a preexisting metaphor for the wave &#8211; the wave is notable in that it is making the web like a videogame &#8211; its bringing real time many participant shared interaction to the web&#8221;</strong></p>
<div id="a9iz" style="text-align: left;">And see <a href="http://radar.oreilly.com/2009/05/google-wave-what-might-email-l.html" target="_blank">Tim O&#8217;Reilly&#8217;s post</a> for more on the significance of Wave, which <a href="http://www.techcrunch.com/2009/05/28/google-wave-drips-with-ambition-can-it-fulfill-googles-grand-web-vision/">Google previewed for developers at its I/O conference</a>:</div>
<p><strong>&#8220;Jens, Lars, and team re-imagined email and instant-messaging in a connected world, a world in which messages no longer need to be sent from one place to another, but could become a conversation in the cloud. Effectively, a message (a wave) is a shared communications space with elements drawn from email, instant messaging, social networking, and even wikis.&#8221;</strong></p>
<p>For more on microsyntax see <a href="http://www.microsyntax.org/" target="_blank">microsyntax.org</a></p>
<p>Aaron pointed out to me re microsyntax:</p>
<p><strong>&#8220;This is ultimately the &#8220;magic word&#8221; problem, which is essentially the semweb vs. google-is-smarter-than-you problem.&#8221;</strong></p>
<p>I will have some more questions for Aaron on the &#8220;magic word&#8221; problem in my upcoming interview post. At the moment I am busy studying some of the thoughts in these links.</p>
<p><a href="http://delicious.com/straup/magicwords" target="_blank">http://delicious.com/straup/magicwords</a></p>
<p><a href="http://www.slideshare.net/straup/the-papernet/22" target="_blank">http://www.slideshare.net/straup/the-papernet/22</a></p>
<p><a href="http://www.xml.com/pub/a/2005/02/16/edfg.html" target="_blank">http://www.xml.com/pub/a/2005/02/16/edfg.html</a></p>
<p><a href="http://xtech06.usefulinc.com/schedule/paper/135" target="_blank">http://xtech06.usefulinc.com/schedule/paper/135</a></p>
<h3>Privacy: Towards a Win Win and Community Sensing</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/erichorvitz21.jpg"><img class="alignnone size-medium wp-image-3659" title="erichorvitz21" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/erichorvitz21-300x199.jpg" alt="erichorvitz21" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/communitysensing.jpg"><img class="alignnone size-medium wp-image-3655" title="communitysensing" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/communitysensing-300x199.jpg" alt="communitysensing" width="300" height="199" /></a></p>
<p><a href="http://fireeagle.yahoo.net/" target="_blank">FireEagle</a>, a key element of the Yahoo! Geo Technologies portfolio of platforms, not only gives people an important set of tools to &#8220;share their location with sites and services through the Web or a mobile device&#8221; but also offers up some vital privacy tools. The community sensing work of Eric Horvitz, though, takes privacy and data sharing into new terrain.</p>
<p>Eric didn&#8217;t have time to discuss his privacy work in his Where 2.0 presentation, <a href="http://en.oreilly.com/where2009/public/schedule/detail/8911" target="_blank">Where, When, Why, and How: Directions in Machine Learning and Reasoning about Location</a> &#8211; it came up in his very last slide. But I ran up after his talk with my trusty old iPod recorder in hand, and got the part we missed! Fascinating stuff that will be the subject of an upcoming interview post. Here&#8217;s a little taste of what is to come. Eric describes one of the directions his team will be exploring:</p>
<p><strong>&#8220;One thing I want to do, on our research team, I&#8217;d like to develop something very simple for people to use. A challenging problem with privacy is usability and controls. Aunt Polly and Uncle Herbie just don&#8217;t get all these authentication controls and sliders, nor do they want to invest in figuring them out. They also don&#8217;t get why they&#8217;re being asked with pop up windows to say yes or no to various questions and so on. One idea is having a usable privacy lens, that you can hold up anywhere and it tells you what you&#8217;re showing anybody or any organization, what does the world know about you. And you would like to have buttons to turn sharing off for some items. You&#8217;d also like to have a way to go back in time and view prior sharing and logging over periods of time, and to have buttons to push to say erase that segment of your logs.&#8221;</strong></p>
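<p>Eric&#8217;s &#8220;privacy lens&#8221; can be restated as a toy data model &#8211; entirely hypothetical, just his description turned into code: one view of what you are sharing with whom, plus switches to stop sharing and to erase past log segments.</p>

```python
# Toy model of Horvitz's "privacy lens" idea: inspect current sharing,
# turn items off, and erase segments of the disclosure log.
class PrivacyLens:
    def __init__(self):
        self.shared = {}   # item -> set of audiences that can see it
        self.log = []      # (timestamp, item) disclosure history

    def share(self, item, audience, timestamp=0):
        self.shared.setdefault(item, set()).add(audience)
        self.log.append((timestamp, item))

    def view(self, audience):
        """What does this audience currently see about me?"""
        return {i for i, who in self.shared.items() if audience in who}

    def stop_sharing(self, item):
        self.shared.pop(item, None)

    def erase(self, start, end):
        """Drop the disclosure log for a past time segment."""
        self.log = [(t, i) for t, i in self.log if not (start <= t <= end)]
```

<p>The hard part, as Eric says, is not the data model but the usability: making these three operations legible to Aunt Polly and Uncle Herbie.</p>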
<p>Understanding the social implications of what it means to live in an instrumented world is a topic that we cannot afford not to think about. But luckily there are a lot of people who have been thinking pretty deeply about this for a while now.</p>
<p>And I did my best at both Where 2.0 and WhereCamp to seek out as many geothinkers as I could, and to do interviews wherever possible. (I have not had time to mention everyone I talked to in this post, but hopefully all the interviews will get on UgoTrade soon!)</p>
<h3>HeadMap Manifesto</h3>
<p>In the bar of The Fairmont on the last night of Where 2.0, I heard some of the history of Where 2.0, <a href="http://geowanking.org/mailman/listinfo/geowanking_geowanking.org" target="_blank">GeoWanking</a>, and <a href="http://docs.google.com/tecfa.unige.ch/%7Enova/headmap-manifesto.PDF" target="_blank">The HeadMap Manifesto</a> from Sophia Parafina, Director of Operations for <a href="http://opengeo.org/" target="_blank">OpenGeo</a>, and <a href="http://testingrange.com/" target="_blank">Rich Gibson</a>: programmer, <a href="http://geowanking.org/mailman/listinfo/geowanking_geowanking.org" target="_blank">GeoWanker</a>, <a href="http://gigapan.org/index.php" target="_blank">Gigapanner</a> and co-author of <a href="http://mappinghacks.com/" target="_blank">Mapping Hacks</a> with <a href="http://iconocla.st/cv.html" target="_blank">Schuyler Erle</a> and <a href="http://frot.org/" target="_blank">Jo Walsh</a> (Jo did a lot <a href="http://frot.org/s/semantic_city.html" target="_blank">of key early work on bottom-up urban informatics</a> but unfortunately couldn&#8217;t make it to WhereWeek this year).</p>
<p>Check <a id="zaq4" title="Gigapan.org" href="http://www.gigapan.org/index.php" target="_blank">Gigapan.org</a> out! <strong>&#8220;The GigaPan<span class="trademark">SM</span> process allows users to upload, share, and explore brilliant gigapixel+ panoramas from around the globe.&#8221;</strong></p>
<p>Also I interviewed Paul Ramsey, Senior Consultant at OpenGeo, so more on OpenGeo is upcoming (see Paul&#8217;s <a href="http://blog.cleverelephant.ca/2009/05/where-re-cap.html" target="_blank">Where ReCap</a>). <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43773">Justin Deoliveira</a> (OpenGeo) and <a href="http://en.oreilly.com/where2009/public/schedule/speaker/59688">Sophia Parafina</a> did a session, <a class="url uid" name="session7165" href="http://en.oreilly.com/where2009/public/schedule/detail/7165">GeoServer, GeoWebCache + OpenLayers: The OpenGeo Stack</a>, which unfortunately I missed as it was before I arrived Tuesday.</p>
<p><span class="bio"><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/sophiaandrich.jpg"><img class="alignnone size-medium wp-image-3631" title="sophiaandrich" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/sophiaandrich-300x199.jpg" alt="sophiaandrich" width="300" height="199" /></a></span></p>
<p>I met Rich Gibson <a href="http://www.flickr.com/photos/ugotrade/sets/72157615022689427/" target="_blank">at Etech 2009 playing Werewolf</a>, and Rich introduced me to his co-author on <a href="http://search.barnesandnoble.com/Mapping-Hacks/Schuyler-Erie/e/9780596007034" target="_blank">Mapping Hacks</a> and alpha geek supreme, Schuyler Erle, who also wrote the clustr code that The Shape of Alpha uses.</p>
<p><a href="http://joshua.schachter.org/" target="_blank">Joshua Schachter</a>, founder of Delicious and the <a href="http://geowanking.org/mailman/listinfo/geowanking_geowanking.org" target="_blank">GeoWanking mailing list</a> [and <a href="http://geourl.org/" target="_blank">GEOURL</a> &#8211; and <a href="http://memepool.com/" target="_blank">MemePool</a>!], now at Google, came to WhereCamp and was mobbed by a small crowd eager to get their hands on one of the developer G Phones he was handing out from a large box.</p>
<p>GeoWanking, which is now run by O&#8217;Reilly Media, has been the incubator for all things location aware and &#8220;neogeography&#8221; discussions since 2003 &#8211; check out <a href="http://sproke.blogspot.com/2009/05/paleogeography-vs-neography.html" target="_blank">sproke</a> for a <a href="http://sproke.blogspot.com/2009/05/paleogeography-vs-neography.html">Paleogeography vs Neogeography</a> smackdown (which, as Sophia notes, was a common topic of discussion at Where 2.0) in which GeoWanking rules in the form of a list-traffic comparison.</p>
<p>Sophia and Rich shared some of their perspective on the early days of GeoWanking and the creation of the HeadMap Manifesto with me and pointed me to many other people to talk to. The prime mover of the HeadMap Manifesto, Ben Russell, has retired from the scene &#8211; perhaps bored by seeing a radical vision gone thoroughly mainstream, or exhausted by the rigors of carrying an idea through the early blue-sky years, or just simply doing something else? I don&#8217;t know.</p>
<p><a href="http://docs.google.com/tecfa.unige.ch/%7Enova/headmap-manifesto.PDF" target="_blank">The HeadMap Manifesto</a> is still vibrant today even as much of what it envisaged has already been realized. HeadMap assembled the future in a poetry of fragments:</p>
<p><strong>&#8220;you can search for sadness in new york people within a mile of each other who have never met stop what they are doing and organize spontaneously to help with some task or other.&#8221;</strong></p>
<p>Anselm explained to me that, from his POV, what powered all this social cartography revolution was actually IRC.</p>
<p><strong>&#8220;We had a channel on IRC called &#8220;#geo&#8221;. And many of us met there. I met Ben Russell at MathEngine in the UK. Ben and I were fascinated by the future of maps. Ben, Jo and I met Schuyler, Dav, Dan Brickley (who worked for Tim Berners-Lee, who invented the Web), Rich Gibson, Joshua Schachter (who was just a geek at Morgan Stanley at the time) &#8230; and the snowball took off&#8230; many others.</strong></p>
<p><strong>We stormed ETECH (Schuyler met Jo there). We got invited to FooCamp. Schuyler was married to Jo by Marc Powell (Food Genome) and lived at his house. We pushed so hard on the social cartography revolution.</strong></p>
<p><strong>I did a spinny globe for geourl &#8211; a project by some hacker named Joshua Schachter&#8230; we were all friends for years and we had never even met.&#8221;</strong></p>
<h3>&#8220;Can AR researchers harness these new approaches to index reality?&#8221;</h3>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="425" height="344" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/y_LXpqmdk9U&amp;hl=en&amp;fs=1" /><param name="allowfullscreen" value="true" /><embed type="application/x-shockwave-flash" width="425" height="344" src="http://www.youtube.com/v/y_LXpqmdk9U&amp;hl=en&amp;fs=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p>Radiohead&#8217;s laser (as opposed to video) clip, made using <a href="http://www.velodyne.com/lidar/" target="_blank">Lidar</a></p>
<p><a id="t7u3" title="If you have read my interview with Ori Inbar," href="../../2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">If you have read my interview with Ori Inbar,</a> you will know how excited I was to attend the Mobile Reality panel. <a href="http://en.oreilly.com/where2009/public/schedule/detail/7197" target="_blank">The video is up</a> and it is really awesome to hear <a href="http://en.oreilly.com/where2009/public/schedule/speaker/35457">Raven Zachary</a> (on twitter @<a href="http://www.twitter.com/ravenme">ravenme</a>) get into the fray with augmented reality.</p>
<p>The main takeaway for me from the Mobile Reality panel was that we shouldn&#8217;t get too hung up on the difficulties of achieving fully immersive visual augmented reality and twiddle our thumbs waiting for the long-anticipated sexy lightweight eyewear, which is still in a coming-soon phase (for more on immersive augmented reality see my upcoming interview with <a href="http://www.cc.gatech.edu/%7Eblair/home.html" target="_blank">Blair MacIntyre</a>). Because, in the meantime, there are plenty of delightful and useful ways to augment our experience of the world &#8211; and not all of these augmented realities rely solely on smart phones, as John S. Zelek showed in his presentation on <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43786" target="_blank">&#8220;Wearable Sensory Substitution Device for Navigation.&#8221;</a> Also I had an interesting discussion at lunch with Ori Inbar about the use of audio for augmented reality projects.</p>
<p>Where 2.0 clearly demonstrated that we have an unprecedented amount of information from mapping our world, <a href="http://gamesalfresco.com/2009/05/26/where-2-0-the-world-is-mapped-now-use-it-to-augmented-our-reality/" target="_blank">Ori Inbar noted in his conference roundup. </a> Ori writes:</p>
<p><strong>&#8220;My point is not a shocker: all we need is to tap into this information and bring it, in context, into people&#8217;s field of view.&#8221;</strong></p>
<p>As Ori noted <strong><a href="http://www.earthmine.com/" target="_blank">Earthmine</a></strong> and <strong><a href="http://www.velodyne.com/lidar/" target="_blank">Velodyne&#8217;s Lidar</a></strong> showed off two new approaches to mapping the world that have potential to create new opportunities for augmented reality:</p>
<p><strong><strong><a href="http://www.earthmine.com/" target="_blank">&#8220;Earthmine</a></strong> uses its own camera-based device to index reality, at the street level, one pixel at a time. They have just announced <a href="http://wildstylecity.com/wsc/" target="_blank">Wild Style City</a>, an application that allows anyone to create virtual graffiti on top of designated public spaces. However, at this point, you can only experience it on a PC!&#8221;</strong></p>
<p><a href="http://www.velodyne.com/lidar/" target="_blank">Lidar</a>, Ori notes, has also embarked on a mission to map the outdoors. But, the question Ori highlights is:</p>
<p><strong>&#8220;Can AR researchers harness these new approaches to index reality?&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/johnzelekandbradyforrest.jpg"><img class="alignnone size-medium wp-image-3660" title="johnzelekandbradyforrest" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/johnzelekandbradyforrest-300x199.jpg" alt="johnzelekandbradyforrest" width="300" height="199" /></a></p>
<p>Brady Forrest inspects John S. Zelek&#8217;s <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43786" target="_blank">&#8220;Wearable Sensory Substitution Device for Navigation&#8221;</a> at Where Fair before putting it on and being guided by sensory nudges at the cardinal points in the belt.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/bradyforrestpost.jpg"><img class="alignnone size-medium wp-image-3661" title="bradyforrestpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/bradyforrestpost-199x300.jpg" alt="bradyforrestpost" width="199" height="300" /></a></p>
<h3>Coolest Mobile Locative Media App. at Where Fair</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-61.png"><img class="alignnone size-full wp-image-3682" title="picture-61" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-61.png" alt="picture-61" width="176" height="269" /></a></p>
<p><a href="http://www.sonycsl.co.jp/person/shio.html" target="_blank">Atsushi Shionozaki</a> of <strong><a href="http://www.placeengine.com/en" target="_blank">Place Engine</a></strong> &#8211; &#8220;<strong>a core technology that enables a device equipped with Wi-Fi such as a laptop PC or smart phone to determine its current location&#8221;</strong> &#8211; demoed the coolest location-aware mobile app in Where Fair: <a id="uwuf" title="Oedo Yokai" href="http://service.koozyt.com/oedo/" target="_blank">Oedo Yokai</a>. Created with ethnologist Dr. Hiro Kubota and artist Atsushi Morioka, &#8220;Oedo Yokai&#8221; is <a id="gtb2" title="Koozyt's" href="http://www.koozyt.com/" target="_blank">Koozyt&#8217;s</a> <strong>&#8220;first attempt to cross IT (Location Information) and Folkloristics.&#8221;</strong></p>
<p><strong>&#8220;The Japanese &#8220;Yokai&#8221; are known to dwell and appear at specific locations. They can frequently be seen within the grounds of shrines and temples, believed to be the border between this world and the afterlife, or in more common places like on a hill or at a crossroads. If the &#8220;Yokai&#8221; symbolize the mystery, legend, and lore associated with places, as our interests fade from actual locations, the roles they play in modern day society will diminish, and the &#8220;Yokai&#8221; might then cease to appear at all.&#8221;</strong></p>
<p>I love this idea of bringing the ancient spirits of place back into our lives with our new tools of location awareness.</p>
<p>Oedo Yokai also reminds me of Aaron Straup Cope&#8217;s work on &#8220;<a href="http://www.aaronland.info/weblog/2008/07/27/invisible/#historybox" target="_blank">the idea of every spot being a &#8220;history box&#8221;</a>,&#8221; which he explained is &#8220;one of the threads behind <a href="http://blog.flickr.net/en/2009/02/24/an-abundant-present/" target="_blank">the &#8216;nearby&#8217; project at Flickr</a>.&#8221;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/oedoyokai.jpg"><img class="alignnone size-medium wp-image-3683" title="oedoyokai" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/oedoyokai-300x199.jpg" alt="oedoyokai" width="300" height="199" /></a></p>
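<p>For the curious, the flavor of Wi-Fi positioning that Place Engine popularized can be sketched very loosely: estimate the device&#8217;s location as a signal-strength-weighted centroid of access points with known coordinates. This is an illustrative toy, not Koozyt&#8217;s actual algorithm; every name and number in it is an assumption for the sake of the sketch.</p>

```python
# Toy Wi-Fi positioning sketch (NOT Place Engine's real algorithm):
# estimate position as the RSSI-weighted centroid of known access points.

def estimate_position(scan, ap_db):
    """scan: {bssid: rssi_dbm} from a Wi-Fi scan;
    ap_db: {bssid: (lat, lon)} of access points with known locations.
    Returns (lat, lon) or None if no scanned AP is in the database."""
    total_w = 0.0
    lat = lon = 0.0
    for bssid, rssi in scan.items():
        if bssid not in ap_db:
            continue  # unknown AP, no location hint
        w = 10 ** (rssi / 10.0)  # dBm -> linear power, used as a weight
        ap_lat, ap_lon = ap_db[bssid]
        lat += w * ap_lat
        lon += w * ap_lon
        total_w += w
    if total_w == 0:
        return None
    return (lat / total_w, lon / total_w)
```

<p>A real system like Place Engine layers far more on top of this (fingerprint databases, filtering, crowd-sourced AP surveys), but the weighted-centroid idea is a common starting point.</p>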
<h3>The Food Genome</h3>
<p>I cannot end this roundup of WhereWeek without a mention of <a href="http://www.foodgenome.com/home" target="_blank">The Food Genome</a>.</p>
<p><strong>&#8220;Food Genome is a big hungry brain that scours the internet, trying to learn everything there is to know about food.&#8221;</strong></p>
<p>Watch out for the upcoming launch of this project; it stole the show with an exciting presentation at WhereCamp. You can follow <a href="http://twitter.com/foodgenome">@foodgenome on Twitter</a> now.</p>
<p>To get one of the gorgeous Food Genome brochures you had to ask Mark Powell a good question. Notice an eager hand reaching out in the picture below. I asked, &#8220;how would the basic building blocks of the Food Genome be licensed?&#8221; I got my brochure and a rain check on an answer to my question.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/foodgenomepost.jpg"><img class="alignnone size-medium wp-image-3664" title="foodgenomepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/foodgenomepost-199x300.jpg" alt="foodgenomepost" width="199" height="300" /></a></p>
<h3>The Ubiquitous Media Studio</h3>
<p>Another highlight of WhereCamp was hearing from <a id="nfup" title="Gene Becker" href="http://lightninglaboratories.com/about.html" target="_blank">Gene Becker</a> about his new project, the <a id="bs9-" title="Ubiquitous Media Studio" href="http://ubistudio.org/" target="_blank">Ubiquitous Media Studio</a>, which will be located in Palo Alto. The project is still in the early stages of development but it sounds really exciting. I am looking forward to being involved from the East Coast. If you&#8217;re curious where this is going, <strong><a href="http://twitter.com/ubistudio">follow @ubistudio on Twitter</a></strong> to stay updated.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/gene.jpg"><img class="alignnone size-medium wp-image-3684" title="gene" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/gene-300x300.jpg" alt="gene" width="300" height="300" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/feed/</wfw:commentRss>
		<slash:comments>14</slash:comments>
		</item>
		<item>
		<title>Creating the Information Landscapes of the Future: Locative Media, Loose Interaction Topologies, and The Shape of Alpha</title>
		<link>http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/</link>
		<comments>http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/#comments</comments>
		<pubDate>Sun, 17 May 2009 20:13:49 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[3D mapping for AR]]></category>
		<category><![CDATA[Aaaron Straup Cope]]></category>
		<category><![CDATA[augmented reality systems]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[body controllers]]></category>
		<category><![CDATA[community mapping]]></category>
		<category><![CDATA[Etech 2009]]></category>
		<category><![CDATA[experimental human-computer interfaces]]></category>
		<category><![CDATA[flea market mapping]]></category>
		<category><![CDATA[geotagged photos]]></category>
		<category><![CDATA[image recognition]]></category>
		<category><![CDATA[Information Landscapes]]></category>
		<category><![CDATA[information landscapes of the future]]></category>
		<category><![CDATA[information shadows]]></category>
		<category><![CDATA[internet 2.0]]></category>
		<category><![CDATA[ITP Spring Show 2009]]></category>
		<category><![CDATA[jim purbrick]]></category>
		<category><![CDATA[locative media]]></category>
		<category><![CDATA[locative media manifesto]]></category>
		<category><![CDATA[loose interaction topologies]]></category>
		<category><![CDATA[Mike Kuniavsky]]></category>
		<category><![CDATA[mining geotagged photos]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[mud pong]]></category>
		<category><![CDATA[Mud Tub]]></category>
		<category><![CDATA[multi-touch surfaces]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[S Ring]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[shapefiles]]></category>
		<category><![CDATA[smart mud]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[Where 2.0]]></category>
		<category><![CDATA[Where Week 2009]]></category>
		<category><![CDATA[WhereCamp]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3521</guid>
		<description><![CDATA[I am excited about going to Where Week 2009 &#8211; Where 2.0 and WhereCamp, this week (for more see Brady Forrest&#8217;s post).Â  Where Week will be total immersion for five days in a think tank with creators of the information landscapes of the future. As you know, if you have read my previous post &#8211; [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/looseinteractionphilosophiespost.jpg"><strong></strong></a><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/shapefiles.jpg"><img class="alignnone size-medium wp-image-3533" title="shapefiles" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/shapefiles-150x300.jpg" alt="shapefiles" width="150" height="300" /></a></strong></p>
<p>I am excited about going to <a href="http://radar.oreilly.com/2009/05/where-week-2009.html" target="_blank">Where Week 2009</a> &#8211; <a href="http://en.oreilly.com/where2009/" target="_blank">Where 2.0</a> and <a href="http://wherecamp2009.eventbrite.com/" target="_blank">WhereCamp</a> &#8211; this week (for more <a href="http://radar.oreilly.com/2009/05/where-week-2009.html" target="_blank">see Brady Forrest&#8217;s post</a>). Where Week will be total immersion for five days in a think tank with creators of the information landscapes of the future.</p>
<p>As you know, if you have read <a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">my previous post &#8211; here</a>, I think the <a href="http://en.oreilly.com/where2009/public/schedule/detail/7197" target="_blank">&#8220;Mobile Reality&#8221;</a> panel is a must. And I have been looking forward to hearing more about <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">The Shape of Alpha</a> from <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43824" target="_blank">Aaron Straup Cope</a>, Flickr, since <a href="http://en.oreilly.com/et2009" target="_blank">Etech 2009</a> when I was introduced to Aaron by <a href="http://www.orangecone.com/" target="_blank">Mike Kuniavsky</a> (see <a href="http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/" target="_blank">my interview with Mike Kuniavsky at Etech here</a> and more on Mike&#8217;s concept of &#8220;information shadows&#8221; <a href="http://www.orangecone.com/archives/2009/03/etech_2009_the.html">in his Etech talk</a>).</p>
<p>The Shape of Alpha is revealing some fascinating possibilities for mining geotagged Flickr images.</p>
<p>As <a href="http://twitter.com/timoreilly/statuses/1777871797" target="_blank">Tim O&#8217;Reilly noted in a tweet</a>, Aaron Straup Cope&#8217;s recent post,<strong> <a href="http://code.flickr.com/blog/2009/05/06/the-absence-and-the-anchor/" target="_blank">The Absence and the Anchor, </a></strong>describes, <strong>&#8220;some of <span class="status-body"><span class="entry-content">the surprising things Flickr is learning about people from geotagged photos.&#8221;</span></span></strong> Aaron&#8217;s post also announces that the &#8220;donut hole shapes&#8221; are available for developers to use with their developer magic via the <a href="http://www.flickr.com/services/api">Flickr API</a>.</p>
<p><strong>&#8220;If the shapefiles themselves are uncharted territory, the donut holes are the fuzzy horizon even further off in the distance. Weâ€™re not really sure where this will take us but weâ€™re pretty sure thereâ€™s something to it all so weâ€™re eager to share it with people and see what they can make of it too.&#8221;</strong></p>
<p>For more on shapefiles see Aaron&#8217;s blog post about <strong>&#8220;<a href="http://code.flickr.com/blog/2009/01/12/living-in-the-donut-hole/">some experimental work that I&#8217;d been doing with the shapefile data</a> we derive from geotagged photos.&#8221;</strong></p>
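<p>As a rough sketch of how a developer might get at this shape data: the Flickr API&#8217;s <code>flickr.places.getInfo</code> method returns a place&#8217;s derived shape alongside its metadata. The helper below only builds the REST request URL; the API key and WOE ID are placeholders, and you would still need to make the HTTP call and parse the response yourself.</p>

```python
# Minimal sketch of building a flickr.places.getInfo request, whose
# response carries the shapefile-derived polygons for a place.
# "YOUR_API_KEY" is a placeholder; 2459115 is the WOE ID for New York, NY.
import urllib.parse

API_ENDPOINT = "https://api.flickr.com/services/rest/"

def build_places_info_url(api_key, woe_id):
    """Return the REST URL asking Flickr for a place's info (and shape)."""
    params = {
        "method": "flickr.places.getInfo",
        "api_key": api_key,
        "woe_id": woe_id,
        "format": "json",
        "nojsoncallback": "1",
    }
    return API_ENDPOINT + "?" + urllib.parse.urlencode(params)

url = build_places_info_url("YOUR_API_KEY", "2459115")
print(url)
```

<p>Fetching that URL (with a real key) returns JSON whose shape element contains the polygon rings Aaron describes &#8211; donut holes and all.</p>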
<h3>Creating the Information Landscapes of the Future</h3>
<p>I have been thinking and writing a lot about augmented reality lately. And key thought leaders in this space like <a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a>, <a href="http://www.curiousraven.com/" target="_blank">Robert Rice</a> (<a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">see my interview here</a>), and <a href="http://gamesalfresco.com/about/" target="_blank">Ori Inbar</a> (<a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">see my interview here</a>) have clued me in to how vital it is, for a ubiquitous experience, that we find ways to allow people to fill in the stories that can be used for augmented reality.</p>
<p>As Ori noted in conclusion to our recent conversation:</p>
<p><strong> &#8220;in order to have a ubiquitous experience like <a href="http://www.curiousraven.com/" target="_blank">Robert Rice</a> and others are striving for, youâ€™ll need to 3d map the world. Google earth like apps are going to help but it is not going to be sufficient. So letâ€™s leverage people. Google became successful in part by making people work with them.Â  Each time you create a link from your blog to my blog their search engines learn from it.Â  So letâ€™s find ways to make people create information that can be used for AR.&#8221;</strong></p>
<p><a href="http://jimpurbrick.com/" target="_blank">Jim Purbrick,</a> another key thinker in this area (interview upcoming), also notes:</p>
<p><strong>&#8220;you can imagine a crowd sourced set of hints for any location so, AR knows roughly where it is and can do photosynth style matchingÂ  to find out exactly what it&#8217;s looking at and get the extra data it needs about that thing (humans are really good image recognition systems, and are also pretty good at interfacing with networks) instead of marking up real objects with ids you take pictures of real objects, tag them and then search them based on images from your ar system.&#8221;</strong></p>
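<p>The crowd-sourced hints Jim describes could be sketched, very roughly, as a coarse geographic index of tagged photos that an AR client queries for candidate matches near its rough position. The class, grid size, and method names below are illustrative assumptions, not any real system&#8217;s API.</p>

```python
# Illustrative sketch of a crowd-sourced "hint index" for AR:
# photos of real objects are tagged and bucketed into coarse location
# cells, so a client can ask "what might I be looking at near here?"
from collections import defaultdict

GRID = 0.001  # cell size in degrees, roughly a city block (assumption)

def cell(lat, lon):
    """Snap a coordinate to its coarse grid cell."""
    return (round(lat / GRID), round(lon / GRID))

class HintIndex:
    def __init__(self):
        self._cells = defaultdict(list)

    def add_photo(self, lat, lon, tags):
        """Record a geotagged, tagged photo of some real object."""
        self._cells[cell(lat, lon)].append(tags)

    def hints_near(self, lat, lon):
        """Candidate tags near this location, to seed image matching."""
        found = set()
        for tags in self._cells[cell(lat, lon)]:
            found.update(tags)
        return found
```

<p>An AR system would then use these candidate tags to narrow the image-recognition search, rather than matching against everything ever photographed.</p>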
<p>Ori Inbar suggested to me an idea that I really liked &#8211; the notion of breadcrumbs where <strong>&#8220;</strong><span class="ru_50CCC5_tx"><strong>You don&#8217;t have a constant view of what is happening when you walk but you get images and text and all sorts of things from people who walked there before &#8211; like breadcrumbs.</strong>&#8221; And as </span><a href="http://www.designundersky.com/dus/2008/10/31/geotagged-photo-cartography.html" target="_blank">Design Under Sky</a> points out about The Shape of Alpha:</p>
<p><strong>&#8220;The truly amazing part of this process is how the &#8220;community&#8221; has the authority to provide areas previously unmapped. By uploading personal photos of areas not covered by mapping software, members have the power of further shrinking our world through greater visual access and understanding of locations one might not be willing or unable to visit.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronmiketod.jpg"><img class="alignnone size-medium wp-image-3536" title="aaronmiketod" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronmiketod-300x265.jpg" alt="aaronmiketod" width="300" height="265" /></a></p>
<p><em>Aaron Straup Cope, Flickr, Todd E. Kurt, <a href="http://thingm.com/" target="_blank">ThingM</a> and Mike Kuniavsky, <a href="http://thingm.com/" target="_blank">ThingM</a></em></p>
<h3>The Locative Media Manifesto</h3>
<p><a href="http://stamen.com/" target="_blank">@stamen&#8217;s</a> tweet brought André Lemos&#8217; brilliant, thought-provoking &#8220;<a href="http://www.andrelemos.info/2009/05/locative-media-manifesto.html" target="_blank">Locative Media Manifesto</a>&#8221; to my attention. I am also looking forward to hearing about how old maps &#8220;can shed light on modern geography when placed in counterpoint to the state of art in modern maps from Google or Microsoft&#8221; from <a href="http://en.oreilly.com/where2009/public/schedule/speaker/3486">Michal Migurski</a>, Stamen Design, who will present <a href="http://en.oreilly.com/where2009/public/schedule/detail/7276" target="_blank">Flea Market Mapping</a> at Where 2.0.</p>
<p>André Lemos writes:</p>
<p><strong>&#8220;After uploading to Matrix up there &#8211; Internet 1.0 &#8211; now is the time to &#8220;download cyberspace,&#8221; information about things down here &#8211; Internet 2.0. We are not dealing with what is virtual up there, but of what to do with all this information about things and places down here! How can we relate to things and places? And now that these things and places are provided with digital information and Internet connections? Do we invoke Heidegger and Lefebvre?&#8221;</strong></p>
<p>I will leave it to people smarter than I to invoke Heidegger and Lefebvre as André Lemos does so eloquently in the Locative Media Manifesto. But by reminding us that artists and activists created the term &#8220;locative media&#8221; to &#8220;question the mass use of LBS (location based services) and LBT (location based technologies),&#8221; the manifesto delivers 30 principles to inspire creators of locative media and explorers of the <strong>&#8220;current dimension of cyberculture, comprising the era of &#8220;cyberspace leaking into the real world&#8221; (Russell, 1999); an era of the &#8220;internet of things.&#8221;</strong></p>
<p>I feel well primed for Where Week by my visit to the <a href="http://itp.nyu.edu/sigs/news/itp-spring-show-2009/" target="_blank">ITP Spring Show 2009</a> last Sunday. It was an interaction riot, jam-packed with brilliance and offbeat explorations of locative media, which I experienced through the senses of my 9-year-old. His pick for best of show is below. But he had many favorites and I have <a href="http://www.flickr.com/photos/ugotrade/sets/72157618216853047/" target="_blank">put some pictures up on my Flickr stream</a> with links to the creators&#8217; sites. One of my favorite projects, Alexander Reeder&#8217;s <a href="http://artandprogram.com/sring/" target="_blank">S Ring</a> &#8211; <a href="http://tishshute.com/seducing-people-by-talking-with-your-hands" target="_blank">&#8220;seducing people by talking with your hands&#8221; &#8211; is up on my Posterous blog</a>. You can see a list of the extensive <a href="http://itp.nyu.edu/sigs/news/itp-spring-show-2009/" target="_blank">media coverage the show got here</a>.</p>
<h3>Loose Interaction Topologies</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/mudpongpost.jpg"><img class="alignnone size-medium wp-image-3528" title="mudpongpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/mudpongpost-300x199.jpg" alt="mudpongpost" width="300" height="199" /></a></p>
<p>The picture above is of a game of mud pong in <a href="http://dirtycomputing.com/" target="_blank">Tom Gerhardt&#8217;s Mud Tub</a>. The mud interface &#8211; &#8220;a smart tub with some mud&#8221; &#8211; knows the topology of the mud and where your hand is. Mud Tub takes advantage of a complex material to explore loose interaction topologies, including, as seen above, a game of Mud Pong. Loose interaction topologies are a way we can explore meaning in &#8220;the internet of things.&#8221;</p>
<p>Tom explained his own exploration of the internet of things to me very succinctly:</p>
<p><strong>&#8220;I am not trying to make mud better. I am trying to make computers better with mud.&#8221;</strong></p>
<p>He elaborates on the value of Mud Tub in this regard on his site, <a href="http://dirtycomputing.com/" target="_blank">dirtycomputing</a>:</p>
<p><strong>&#8220;The Mud Tub occupies a space similar to other experimental human-computer interfaces, like, multi-touch surfaces, body controllers, augmented reality systems, etc, which push the boundaries of codified interaction models, and drive the development of innovative software applications. Beyond its role as a research topic, the Mud Tub also exists as an open-sourced hardware/software platform on which interactive artists and designers explore new methods for creating and displaying their work.&#8221;</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Composing Reality and Bringing Games into Life: Talking with Ori Inbar about Mobile Augmented Reality</title>
		<link>http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/</link>
		<comments>http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/#comments</comments>
		<pubDate>Wed, 06 May 2009 14:50:30 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[CurrentCost]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[HomeCamp]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Kids With Cameras]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Adam Greenfield]]></category>
		<category><![CDATA[alternate reality games]]></category>
		<category><![CDATA[alternative reality gaming]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR goggles]]></category>
		<category><![CDATA[ARToolkit]]></category>
		<category><![CDATA[augmented reality games]]></category>
		<category><![CDATA[augmented times]]></category>
		<category><![CDATA[Better Place]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Caryatids]]></category>
		<category><![CDATA[Come Out and Play]]></category>
		<category><![CDATA[composing reality]]></category>
		<category><![CDATA[Cory Doctorow]]></category>
		<category><![CDATA[eyewear for augmented reality]]></category>
		<category><![CDATA[game development conference]]></category>
		<category><![CDATA[Games Alfresco]]></category>
		<category><![CDATA[games for preschoolers on the iphone]]></category>
		<category><![CDATA[games on the iphone]]></category>
		<category><![CDATA[GDC 2009]]></category>
		<category><![CDATA[GE augmented reality ad]]></category>
		<category><![CDATA[google earth]]></category>
		<category><![CDATA[green technology]]></category>
		<category><![CDATA[image recognition]]></category>
		<category><![CDATA[Immersive augmented reality]]></category>
		<category><![CDATA[Int 13]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[iphone games]]></category>
		<category><![CDATA[iPhone OS 3]]></category>
		<category><![CDATA[iphone versus the android]]></category>
		<category><![CDATA[ISMAR]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[Jane McGonigal]]></category>
		<category><![CDATA[Julian Bleeker]]></category>
		<category><![CDATA[Kati London]]></category>
		<category><![CDATA[Kweekies]]></category>
		<category><![CDATA[Loopt]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[Microsoft Tag]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile gaming]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Netweaver]]></category>
		<category><![CDATA[open source augmented reality]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Pookatak]]></category>
		<category><![CDATA[Pookatak Games]]></category>
		<category><![CDATA[reality experiences]]></category>
		<category><![CDATA[RFID]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[Rouli Nir]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[Shai Agassi]]></category>
		<category><![CDATA[smart environments]]></category>
		<category><![CDATA[smart objects]]></category>
		<category><![CDATA[The End of Hardware]]></category>
		<category><![CDATA[the Pong for augmented reality]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubiquitous augmented reality]]></category>
		<category><![CDATA[ubiquitous experience]]></category>
		<category><![CDATA[virtual reality]]></category>
		<category><![CDATA[WARM 09]]></category>
		<category><![CDATA[Wattzon]]></category>
		<category><![CDATA[Where 2.0]]></category>
		<category><![CDATA[WikiMouse]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3448</guid>
		<description><![CDATA[Recently, I talked to Ori Inbar (above), formerly senior vice-president at SAP. Ori is on a mission to make augmented reality commercially successful not in 5, 10, or 15 years, but now. Ori is the founder of Pookatak Games &#8211; a video game company, &#8220;with a vision to upgrade the way people experience the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/oriinbarpost.jpg"><img class="alignnone size-medium wp-image-3449" title="oriinbarpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/oriinbarpost-300x199.jpg" alt="oriinbarpost" width="300" height="199" /></a></p>
<p>Recently, I talked to <a href="http://gamesalfresco.com/">Ori Inbar</a> (above), formerly senior vice-president at <a href="http://www.sap.com/">SAP</a>. Ori is on a mission to make augmented reality commercially successful not in 5, 10, or 15 years, but now. Ori is the founder of <a href="http://gamesalfresco.com/about/" target="_blank">Pookatak Games</a> &#8211; a video game company <strong>&#8220;with a vision to upgrade the way people experience the world.&#8221;</strong> Ori will be participating May 20th in <a href="http://en.oreilly.com/where2009/public/schedule/detail/7197" target="_blank">O&#8217;Reilly&#8217;s Where 2.0 panel, &#8220;Mobile Reality&#8221;</a> &#8211; an event not to be missed, IMO.</p>
<p>The taste for computing anywhere, anytime has entered human culture via the iPhone and is spreading like chocolate cake and pizza at a preschool party (see <a href="http://gamesalfresco.com/2009/03/23/gdc-2009-why-the-iphone-just-changed-everything/" target="_self">why the iPhone changed everything</a>). And while the full flowering of the next step &#8211; computing anywhere, anytime, by anyone and <strong>anything</strong> <a href="http://en.wikipedia.org/wiki/Internet_of_Things" target="_blank">(&#8220;the internet of things&#8221;)</a> &#8211; is yet to come, our love for these first devices capable of being <strong>mediating artifacts for ubiquitous computing</strong> (Adam Greenfield) is a vital first step to free us from our tethers to computer screens and fulfill the promise of augmented reality.</p>
<p>If you need more convincing on the pivotal role augmented reality will play as the web moves into the world, check out Tim O&#8217;Reilly&#8217;s recent comments in <a id="iz1_" title="this video clip on Augmented Times" href="http://artimes.rouli.net/2009/04/tim-oreilly-on-recognition-rfid-and-web.html" target="_blank">this video clip posted on Augmented Times</a> and <a id="wtf4" title="here" href="http://radar.oreilly.com/2008/02/augmented-reality-a-practical.html" target="_blank">here</a> early last year.</p>
<p>From another perspective, the gloomy specter of economic and environmental catastrophe is driving a movement to &#8220;<a id="h5pf" title="infuse intelligence into the way the world works" href="http://news.bbc.co.uk/2/hi/technology/7992480.stm" target="_blank">infuse intelligence into the way the world works.&#8221;</a> But the challenge for a smart planet is not just about making environments smart; it is about using smart environments to enable people to act smarter (<a href="http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/" target="_blank">see my interview with Adam Greenfield</a>).</p>
<p>We need a rapid upgrade in both the way the world works, and the way we experience the world.</p>
<p>(Note: It is time to read (if you haven&#8217;t already) <a href="http://search.barnesandnoble.com/The-Caryatids/Bruce-Sterling/e/9780345460622" target="_blank">Bruce Sterling&#8217;s Caryatids</a> (Cory Doctorow&#8217;s book of the year for 2009) &#8220;as a software design manual&#8221; (<a href="http://www.nearfuturelaboratory.com/2009/03/17/design-fiction-a-short-essay-on-design-science-fact-and-fiction/" target="_blank">see Julian Bleeker</a>), because Caryatids reveals the Gordian knots of human folly, greed, compassion and desire entwined in near-future designs for technologies to save the world.)</p>
<p>Ori Inbar worked with Shai Agassi (Shai is now leading the world-changing <a id="v5ow" title="Better Place" href="http://www.betterplace.com/" target="_blank">Better Place</a>), driving <a id="gf_5" title="Netweaver" href="http://en.wikipedia.org/wiki/NetWeaver" target="_blank">Netweaver</a> from a mere concept to a &#8220;major, major business for SAP.&#8221; So Ori has already been through the cycle of working in a very small startup and growing it into a billion-dollar business. He has both the experience and the passion to realize his vision for augmented reality.</p>
<p>At Pookatak, he explains:</p>
<p><strong>&#8220;We design &#8216;reality experiences&#8217; that make users&#8217; immediate environments more significant to them. We wish to free young and old from getting lost in front of the screen. By delivering the world&#8217;s information to people&#8217;s field of view, and by weaving real world objects into interactive narratives, we help people rediscover the real world.&#8221;</strong></p>
<p>Pookatak will release their first game this summer. Currently it is under wraps. But Ori gives us some glimpses of what is to come in the interview below.</p>
<p>In addition to founding Pookatak, Ori is involved in a broader effort to move augmented reality forward. On his blog, <a id="ie5s" title="Games Alfresco" href="http://gamesalfresco.com/" target="_blank">Games Alfresco</a> &#8211; where he recently welcomed <a href="http://gamesalfresco.com/about/" target="_blank">a new partner, Rouli Nir</a> &#8211; Ori has focused his eye of wisdom on every significant recent advance in augmented reality (check out <a id="zr9y" title="this essence of Ori's thinking in a fast paced video" href="http://gamesalfresco.com/2009/03/09/augmented-reality-today-ori-inbar-speaks-at-warm-2009/" target="_blank">this essence of Ori&#8217;s thinking in a fast-paced video</a> presentation for <a href="http://gamesalfresco.com/2009/02/12/live-from-warm-09-the-worlds-best-winter-augmented-reality-event/" target="_blank">WARM &#8217;09</a>).</p>
<p>Ori is also one of the organizers of the interactive media track at <a id="b-c6" title="ISMAR 2009" href="http://www.ismar09.org/" target="_blank">ISMAR 2009</a>. At ISMAR this year, Ori explained, <strong>&#8220;we are trying to bring in people that develop interactive experiences for consumers, beyond the traditional attendees coming from a research perspective.&#8221;</strong></p>
<p>In the interview below, Ori explains much of his thinking on how augmented reality will become commercially successful. Enjoy it, think about it, and share it. And most importantly, if you can, get involved with ISMAR 2009.</p>
<p>Ori has inspired me to participate in <a id="seky" title="ISMAR" href="http://www.ismar09.org/" target="_blank">ISMAR</a> this year. Ori pointed out:</p>
<p><strong>The <a href="http://campwww.informatik.tu-muenchen.de/ismar09/lib/exe/fetch.php?id=ismar09%253Astart&amp;cache=cache&amp;media=ismar09:ismar09-cfp_090211_final.pdf" target="_blank">call for papers</a> is on, and this year it targets well beyond the typical research-papers audience and into interactive media and art folks.</strong></p>
<p><strong>There are plenty of opportunities such as:</strong></p>
<p><strong>Art Gallery</strong></p>
<p><strong>Demonstrations</strong></p>
<p><strong>Tutorial</strong></p>
<p><strong>Workshops</strong></p>
<p>It&#8217;s a huge opportunity to shape the emergence of augmented reality.</p>
<h2><strong> Interview With Ori Inbar</strong></h2>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-41.png"><img class="alignnone size-full wp-image-3479" title="picture-41" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-41.png" alt="picture-41" width="107" height="146" /></a></p>
<h3>Making Augmented Reality Commercially Successful</h3>
<p><strong>Tish Shute: </strong>You are considered a key trailblazer in AR and you have the go-to blog for augmented reality! What are the most important lessons you have learned researching, writing, and developing AR in the last couple of years?</p>
<p><strong>Ori Inbar: You need to have a vision. You need to know where this is going to go in ten or fifteen or twenty years. But you&#8217;ve got to start with something really simple that makes use of the technology you have on hand. And do something that is practical, that people will like, and something they would actually want to buy. It&#8217;s as simple as that. I&#8217;m currently looking at what we could do with existing technology. First of all, you have to put it in front of people. Right now most people have never heard the term augmented reality. Go into the street and ask 100 people about it; maybe 2 would know about it. So you need to put it in front of people, because most people think it&#8217;s still science fiction or a special effect you see in movies, not something you can experience in real life.</strong></p>
<p><strong>Tish: </strong>It seems to me that for augmented reality applications to become popular with existing technology, the key breakthrough would be getting people to hold up their phones. What are the obstacles to getting people to use their mobile devices like this?</p>
<p><strong>Ori: There&#8217;s a really nice cartoon by <a href="http://www.tonchidot.com/">Tonchidot</a> (below) &#8211; the Japanese company behind the Sekai Camera. It&#8217;s an illustration showing the evolution of man, from ape to man (holding a cell phone, looking down), to the developed man holding a device like a camera in front of his eyes.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-37.png"><img class="alignnone size-medium wp-image-3454" title="picture-37" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-37-300x221.png" alt="picture-37" width="300" height="221" /></a></p>
<p><strong>Which is exactly what you&#8217;re talking about. People ask, &#8220;are people going to walk with this like that all day long?&#8221; Probably not. I mean you have to build it in a way that doesn&#8217;t require them to hold it like that all the time. People are used to this gesture with the ubiquitous digital cameras. I tested one of my prototypes on a two-and-a-half-year-old girl. She had no problem holding it just like she holds a camera.</strong></p>
<p><strong>Tish:</strong> <a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> mentioned, &#8220;The problem with the mobile phone as an AR device is a problem of awareness,&#8221; i.e., you have to have a way of letting people know when there&#8217;s something interesting wherever they are. One issue is that if you get too many alerts, you tune them out.</p>
<p><strong>Ori: First of all, Blair is one of the people in academia that gets it, because he looks at it from an experience perspective, not just as an interesting technical problem to solve. Let&#8217;s start with getting people to enjoy this new experience. The AR demos so far were mostly eye candy, and mostly for advertising &#8211; the <a href="http://ge.ecomagination.com/smartgrid/#/landing_page" target="_blank">GE AR ad</a> created a lot of buzz, but you look at it for 10 seconds and you forget about it. You need to build something that people would want to experience over time and would be willing to pay for. I think that&#8217;s the big test, right?</strong></p>
<p><strong>Now in terms of having a ubiquitous experience where you&#8217;re continuously connected, it doesn&#8217;t have to be an overwhelming experience. Just like with some of the social media tools we&#8217;re using today, we decide when to connect, and we filter out the trash. You could get alerts only for things that really matter to you, not for everything that happens in your immediate environment.</strong></p>
<p><strong>There will be many layers of information, and it&#8217;ll be up to you to pick the ones you want to experience. The real benefit is that you get the information in your own field of view and in context of where you are or what you do.</strong></p>
<p><strong>Tish:</strong> So what are you working on these days?</p>
<p><strong>Ori: We are working on a little app that targets a very different audience than what you&#8217;d expect: preschoolers. We think we can encourage them to get away from a PC or TV screen and learn something while playing &#8211; in the real world. You&#8217;ll hear more about it as soon as this summer. Nuff said.</strong></p>
<p><strong>But, it is a small application that will run on the iPhone. People ask how many pre-schoolers own iPhones? Well, their parents do. </strong></p>
<p><strong>Tish:</strong> Yes, there are certainly many New York kids with iPhones &#8211; my kid now has my old iPhone. He has pretty much switched from playing games on his DS to the iPhone. I noticed in your WARM video you place a big emphasis on AR as something that will get kids away from screens and engaged with reality. This is something parents will approve of!</p>
<p><strong>Ori: Yes, I saw something really interesting at my kids&#8217; party one day; they were all sitting around the room, looking down at their own DS screens. You could play the DS anywhere, but kids would usually play it on the sofa, looking at the screen, isolated from the world. With an iPhone and a camera, and the application we&#8217;re producing, reality becomes part of the game. That makes it all of a sudden much more interesting for parents. Because kids are spending so much time in front of the screen, all of a sudden there&#8217;s something that will encourage them to interact with real objects, real things. Every parent I&#8217;ve talked to loves that idea.</strong></p>
<p><strong>Tish:</strong> Yes that is what is cool about the work of <a href="http://www.katilondon.com/" target="_blank">Kati London</a> &#8211; I think I saw someone say this on Twitter, &#8220;Kati puts the computer in the game not the game in the computer.&#8221;</p>
<p><strong>Ori: Yes, kids are spending more time in front of games and the computer because it&#8217;s more interesting. It captivates them with &#8220;<a id="x_z0" title="game pleasures" href="http://8kindsoffun.com/">game pleasures</a>&#8221; that tap into their brain&#8217;s dopamine circuitry &#8211; constantly seeking reward and satisfaction. So you&#8217;re not going to be able to tell them to go back to playing in reality without these pleasures. We have to study these mechanics from games and bring them into reality. It&#8217;s about programming real life; and augmented reality helps you achieve that.</strong></p>
<p><strong>Here&#8217;s an example: cause and effect; in a game when you do something you always get an immediate effect. You&#8217;re good, you get a reward. You&#8217;re not good, you get a cue to improve. In real life you do things and you could wait 2 or 3 years until you actually get feedback (if you&#8217;re lucky). Augmented Reality allows you to bring these mechanics into the real world. I think that&#8217;s going to help kids rediscover reality, in a new sense, which is what every parent is dreaming about.</strong></p>
<p><strong>Tish:</strong> I don&#8217;t know how much you can say about your app. But in regard to doing augmented reality on the iPhone &#8211; there&#8217;s no compass. Is this a limitation?</p>
<p><strong>Ori: True, no compass yet. But the camera gives you a lot of information that you can interact with. When you run the application, you see the world in front of you, and if the app can recognize real life objects &#8211; it can put virtual elements on top of it.</strong></p>
<p><strong>Tish:</strong> But not with any accuracy unless you&#8217;re using markers. Are you using markers?</p>
<p><strong>Ori: We&#8217;re using natural feature recognition. It doesn&#8217;t have to be an ugly looking marker. It can be any image.</strong></p>
<p><strong>Tish:</strong> So you&#8217;re using image recognition. Are you working with one of these image recognition startup companies (<a id="nws6" title="list here" href="http://www.educatingsilicon.com/2008/11/25/a-round-up-of-mobile-visual-search-companies/" target="_blank">list here</a> )?</p>
<p><strong>Ori: We&#8217;re working with one of those. What&#8217;s unique about it is it runs very nicely on any cell phone, and on the iPhone it works the best. For this first app, it doesn&#8217;t really matter where you are physically; the geolocation is not part of the experience.</strong></p>
<p><strong>Tish: </strong>For a truly engaging AR experience we will need more of a backend than is currently available?</p>
<p><strong>Ori: I call the backend the cloud, where you have all this information and ways to access it from anywhere. Actually I think it&#8217;s become pretty mature today. If you look at the different elements required to enable an augmented reality experience to work, you have &#8211; first &#8211; the user, who is always in the center. Then you have the lens. The lens can be an iPhone, or glasses, even a projector. The lens allows you to watch, sense and track information in the real world: people, places, things. Then in the backend you have the cloud, where you store and retrieve information.</strong></p>
<p><strong>So if you look at the maturity of these different elements, I think the cloud is in pretty good shape, because there&#8217;s so much information we&#8217;re collecting and storing. Anything from Google, Wikipedia, Facebook, all that kind of stuff &#8211; it&#8217;s a lot of useful information you can access from anywhere using APIs. And a lot of it is also starting to include geolocation information. Take <a id="zhag" title="Loopt" href="http://www.loopt.com/" target="_blank">Loopt</a> or Google&#8217;s <a href="http://www.google.com/latitude/intro.html" target="_blank">friends service</a> that allows you to see where your friends are and what they&#8217;re doing. There&#8217;s tons of information out there and it&#8217;s pretty easy to access. Now the question is: what do you do with it?</strong></p>
<p><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> is such a simple and brilliant application and nobody thought about doing it until this guy from Salzburg did. It doesn&#8217;t have any sophisticated visual tracking. It knows your position and it&#8217;s simply looking at the angle you&#8217;re pointing to. Based on these parameters it brings information from Wikipedia that pertains to your field of view. So most of it was already there. It&#8217;s just a matter of connecting the pieces in an experience that is valuable for people.</strong></p>
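<p>Wikitude&#8217;s own code isn&#8217;t public, but the mechanism Ori describes &#8211; keep only the points of interest whose compass bearing from the user falls inside the camera&#8217;s horizontal field of view &#8211; can be sketched in a few lines of Python. This is a minimal illustration; the function names are mine, not Wikitude&#8217;s API:</p>

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def in_view(user_lat, user_lon, heading_deg, fov_deg, pois):
    """Return names of POIs whose bearing lies within the camera's field of view.

    pois is a list of (name, lat, lon) tuples; heading_deg is the compass
    direction the phone is pointing; fov_deg is the horizontal field of view.
    """
    visible = []
    for name, lat, lon in pois:
        # Normalize the bearing difference to [-180, 180) so the test
        # works even when the heading wraps around north.
        delta = (bearing_deg(user_lat, user_lon, lat, lon) - heading_deg + 180) % 360 - 180
        if abs(delta) <= fov_deg / 2:
            visible.append(name)
    return visible
```

<p>The <code>(&#8230; + 180) % 360 - 180</code> step is the important design choice: it normalizes the bearing difference so a POI at bearing 5&#176; is still &#8220;in view&#8221; when the phone points at 350&#176;.</p>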
<p><strong>Tish: </strong>It is the uptake of even a very simple technology that puts the magic in it.</p>
<p><strong>Ori: Yes, take Twitter. If you go to its homepage it looks like a very simple, boring app, but it is something that is both enjoyable and very useful to people.</strong></p>
<h3><strong>Why you should participate in ISMAR 2009</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-40.png"><img class="alignnone size-medium wp-image-3478" title="picture-40" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-40-222x300.png" alt="picture-40" width="222" height="300" /></a><br />
<strong>Tish: </strong>I know that you are involved in organizingÂ  <a id="seky" title="ISMAR" href="http://www.ismar09.org/" target="_blank">ISMAR</a> (picture above from Ori&#8217;s post on <a href="http://gamesalfresco.com/2009/02/23/ismar-2009-the-worlds-best-augmented-reality-event-wants-you-to-contribute/" target="_blank">&#8220;ISMAR 2009: The World&#8217;s Best Augmented Reality Event&#8230;,</a>&#8220;) and there is a call out for papers and for volunteers, can you tell me more about it?</p>
<p><strong>Ori: Yes, we hope to have the first ISMAR where we practice what we have just discussed: let&#8217;s build on all the research invested so far and, instead of thinking only about 5-10 years from now, let&#8217;s see what we can do today. So we are bringing people in from other disciplines &#8211; artists, interactive media developers and people from the entertainment industry. The goal is to use the technology to make something interesting for people &#8211; again, something that people would buy &#8211; and to make it commercially successful. Many people don&#8217;t know about ISMAR because in the past it was a pure engineering-oriented event, and people with a commercial perspective on AR weren&#8217;t attracted to it. The Chair of the event this year is based in Florida and he is going to bring in a lot of people from the entertainment industry, such as Disney. I think this will transform the event into something more like SIGGRAPH &#8211; more of an industry event. As one of the organizers of the interactive media track, we are trying to bring in people that want to build applications for consumers.</strong></p>
<p><strong>Tish:</strong> In terms of AR applications what are the flagships today?</p>
<p><strong>Ori: There are very few because it&#8217;s just the beginning. There&#8217;s one tiny studio in France called <a id="z1ln" title="Int 13" href="http://www.int13.net/en/" target="_blank">Int 13</a>. They&#8217;ve created maybe the first commercial game running on a mobile device using AR technology. It&#8217;s called <a href="http://www.youtube.com/watch?v=Te9gj22M_aU" target="_blank">Kweekies</a>. It was one of the contenders for the Nokia mobile innovation awards; they were one of the ten finalists, but they didn&#8217;t win. It looks really cool. It&#8217;s something that runs on your desk, with a marker. Many AR folks say markers are the past, markers are ugly. But it&#8217;s still a cool experience. I think people will go for it.</strong></p>
<p><strong>Tish:</strong> Yes, I think we will have to look to small companies that are free to think creatively to lead the way. It seems many games companies are tied up pulling off huge big-budget projects, and enterprise is still catching up on how to use social media!</p>
<p><strong>Ori: Yes, last year I was at the game development conference (GDC); there was no mention of augmented reality &#8211; not on the exhibition floor, not in any of the sessions; nobody talked about it. I was stunned. Then this year, there was a little change. There were three demos on the exhibition floor: <a href="http://www.metaio.com/" target="_blank">Metaio</a>, <a href="http://www.vuzix.com/home/index.html" target="_blank">Vuzix</a> and a Dutch company called <a href="http://www.augmented-reality-games.com/" target="_blank">Beyond Reality</a>. And then there was Blair&#8217;s talk, which was very, very cool. The room was packed with people, and after the talk there were dozens of people lining up to talk with him about the topic. There was definitely interest, but still on the very edge. The video game industry is still a hit-driven business, and publishers spend upward of 20-30 million dollars to create the best AAA game possible. They just can&#8217;t take the risk. So it&#8217;s going to come from smaller companies, from outsiders coming in with a vision and an understanding of how to put the AR pieces together to create a totally new experience.</strong></p>
<p><strong>Tish:</strong> But the basic tool set is there isn&#8217;t it?</p>
<p><strong>Ori: I talked to some folks at the game developer conference, many with MMO backgrounds, and they have great ideas about AR. It&#8217;s great to see different people with different views on what&#8217;s needed first. &#8220;Joe the Programmer&#8221; had this idea of creating a small piece of hardware that you can put in every house to provide accurate geospatial information in your home. That could open up many opportunities for AR experiences in homes.</strong></p>
<p><strong>Tish:</strong> Don&#8217;t you think we have enormous resources in terms of image databases that provide a great basis for augmented reality? I was talking to Aaron Cope at ETech about <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">The Shape of Alpha</a> &#8211; Flickr&#8217;s vernacular mapping project using all the geotagged photos in Flickr. That is such a cool project. <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43824" target="_blank">Aaron will be speaking at Where 2.0</a> also.</p>
<p><strong>Ori: Think of Google Earth. Google Earth leveraged communities to basically map all the major cities around the world into 3D models. And that is an essential step to be able to do augmented reality outdoors. Because if you had to model everything from scratch, it wouldn&#8217;t be realistic.</strong></p>
<h3><strong>Augmented Reality and Becoming Greener.</strong></h3>
<p><strong>Tish:</strong> I am really interested in how AR interfaces might be useful to some of the emerging energy identity/metering projects like <a href="http://www.amee.com/" target="_blank">AMEE</a> and <a href="http://www.wattzon.com/" target="_blank">WATTZON</a> because I think it is very important that people have very intuitive, immediate, and enjoyable ways to relate to energy data so they can make greener choices.</p>
<p><strong>Ori: Back in the day I had an idea to build an augmented reality application for becoming greener. You look at things around your home with the camera, and it recognizes their greenhouse gas footprint and makes recommendations to reduce it. I guess it was a bit too early to do that based on visual recognition alone&#8230; you&#8217;d need additional sensors that would provide related information about what you are looking at.</strong></p>
<p><strong>Tish:</strong> Well, as there is more interest in green technology, do you think we may see VC interest in some green AR projects now?</p>
<p><strong>Ori: I talked to some of the investment folks, angels as well as VCs, about AR and they had no clue what it is. There&#8217;s a need for a whole lot of education. And there are no proof points (as in successful investments in this domain), and counter to popular belief &#8211; they don&#8217;t like risk so much&#8230;</strong></p>
<p><strong>Tish:</strong> And consumer adoption must lead the way, right?</p>
<p><strong>Ori: Just like with every emerging technology in history, people never bought the technology; they bought the content, the apps, the benefits that came on top of the technology. Whether it was VHS winning over Betamax, or Blu-ray winning over HD DVD, it&#8217;s always because of more and better content. Look at the video game console war: Xbox and Nintendo did better than Sony just because they had more and better games. Even Windows was a success thanks to its applications. People bought it for the applications, not the OS. Content is the first thing to drive demand.</strong></p>
<p><strong>Tish:</strong> One of the challenges in giving people new ways to relate to their energy consumption is that if you just have them looking at graphs of how bad they have been in the past, that may make them feel bad, but it doesn&#8217;t necessarily give them ways or motivation to change. There perhaps needs to be a more immediate relationship to the data to facilitate change. I think the mantra for optimization of anything from energy usage to supply chains is timely, actionable data.</p>
<p><strong>Ori: There are a lot of ideas about measuring information and displaying it to people. For example, the Prius hybrid car, one of its interesting features &#8211; which is kind of game like &#8211; is a constant display of your current fuel consumption. That alone changes how people drive because they try to beat the &#8220;Score&#8221; and as a result conserve more fuel. That model can be applied to our homes&#8230;</strong></p>
<p><strong>Tish:</strong> Yes, that is something I am very interested in. I have been following several projects in this area &#8211; one of my favorites is the <a href="http://www.arduino.cc/" target="_blank">Arduino</a>, <a href="http://www.currentcost.com/" target="_blank">Current Cost</a>/<a href="http://www.ladyada.net/make/tweetawatt/" target="_blank">Tweetawatt</a>, <a href="http://www.pachube.com/" target="_blank">Pachube</a> integrations <a href="http://www.ugotrade.com/2009/04/24/homecamp-2-home-energy-management-and-distributed-sustainability/" target="_blank">I saw at Homecamp</a>.</p>
<p>You joined a startup with Shai Agassi which was bought out by SAP, right? He has a brilliant approach with Better Place.</p>
<p><strong>Ori: I think what&#8217;s really unique about Better Place&#8217;s approach is that he doesn&#8217;t require people to change their behavior. People are still going to have their own cars. They&#8217;ll be able to drive as far as they want, and for the same (or lower) cost. It&#8217;s not necessarily about a new technology; electric cars have been around for a long time, but there was no way people were going to be limited by a 50 or 70 mile range, and Better Place is solving that problem. With its infrastructure of charging spots and battery switching stations, drivers are going to be able to drive anywhere. And it&#8217;ll be similar to having to stop once in a while to refuel your car. The price may be even lower than what you pay today for your transportation needs &#8211; and you&#8217;ll stop generating greenhouse gas. It&#8217;s a clever way of taking technology to a whole new level without changing the behavior of people.</strong></p>
<p><strong>Tish: </strong>Better Place is a classic example of things as a service, isn&#8217;t it? It is basically a utility company.</p>
<p><strong>Ori: It is similar to a phone carrier model. You pay for a membership that gives you access to the car (equivalent to the phone) and electricity (equivalent to the phone line) for the same price as fuel costs today. And as a bonus you get to save the world.</strong></p>
<h3><strong>How the iPhone changed the game for AR &#8211; and the iPhone versus Android</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-38.png"><img class="alignnone size-medium wp-image-3472" title="picture-38" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-38-300x198.png" alt="picture-38" width="300" height="198" /></a></p>
<p><em>Picture from Ori&#8217;s post</em><strong><em>, <a href="http://gamesalfresco.com/2009/03/23/gdc-2009-why-the-iphone-just-changed-everything/" target="_blank">&#8220;GDC 2009: Why the iPhone just changed everything&#8221;</a></em></strong></p>
<p><strong>Ori: And back to AR, you have to take the same approach, because nobody wants to don those huge head mounted displays or backpacks. You have to take advantage of people&#8217;s current behavior: they already carry their iPhones or similar devices.</strong></p>
<p><strong>Tish:</strong> As we discussed, you just have to get people raising up their phones and looking through them when that is a useful thing to do. Both Wikitude and Nathan Freitas&#8217;s graffiti app were enough to get me interested in the evolutionary step of raising my phone! Nathan&#8217;s graffiti app is nice. You leave a marker for your graffiti so other people can find, view, and add their own &#8211; a nice primal experience, like pissing on the lamp post to let your pack know where you&#8217;ve been. Also, the graffiti app taps into a long history of NYC street culture around tagging and graffiti art (see my interview, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it OMG finally for Augmented Reality?&#8221;</a>).</p>
<p><strong>Ori: The App Store has fundamentally changed the mobile gaming industry. Last year it was in shambles. There was no growth. Everybody was complaining, &#8220;we can&#8217;t handle it, there&#8217;s a million phones, and you have to test it on each phone. And carriers suck, they don&#8217;t care about sharing and promoting your content.&#8221; Everything was bad. This year mobile gaming is the hottest thing. And it&#8217;s all because of the iPhone. It changed the game.</strong></p>
<p><strong>Tish: </strong>How do you think Android is going to get traction against the iPhone?</p>
<p><strong>Ori: Well, the number one thing is the form factor &#8211; the iPhone is just much cooler than the G1. It&#8217;s OK but it doesn&#8217;t have the same feel. People thought it was going to be easy to clone the iPhone, but none of the attempts has succeeded so far.</strong></p>
<p><strong>Tish: </strong>How much does it matter for AR not being able to run things persistently in the background on the iPhone?</p>
<p><strong>Ori: Actually, they have added such a capability in OS 3. You can now make use of a background service.</strong></p>
<p><strong>Tish:</strong> OS 3 will open up new possibilities for AR?</p>
<p><strong>Ori: Access to the video API is still not public. But there is a new Microsoft application &#8211; Microsoft Tag &#8211; that makes use of that API, which means it is probably OK to use it.</strong></p>
<p><strong>Tish: </strong>(I ask Ori for his card and he shows me how to read it with my iPhone.) Oh nice, you have an AR card, of course!</p>
<h3><strong>In Search of Pong for Augmented Reality</strong></h3>
<p><strong>Tish: </strong>So how will AR begin to, as Blair&#8217;s friend puts it, &#8220;facilitate a killer existence,&#8221; particularly as we are probably looking at some new and perhaps pricey hardware?</p>
<p><strong>Ori: You could take the Better Place approach. We&#8217;re going to give you a great experience and we&#8217;ll include the devices as part of that experience for the same price. Let&#8217;s say you subscribe to an AR experience which offers multiuser support and all the information you need wherever you go &#8211; exactly according to the vision. You pay for a subscription on a monthly basis, and included in that cost we give you a better device that offers a better AR experience. It&#8217;s following the phone carrier approach, but in a good way.</strong></p>
<p><strong>But first of all, we do need our Pong! I was sitting with a couple of AR game enthusiasts at GDC and we were asking ourselves, &#8220;how do we create the first Pong for AR?&#8221;</strong></p>
<p><strong>Was Pong a multiplayer game? Not necessarily! Did it connect to the network? No! We have to create the first dot in a long line of dots that will bring us to our destination.</strong></p>
<p><strong>Tish: </strong>You haven&#8217;t seen a Pong yet have you?</p>
<p><strong>Ori: Not yet. I mean there&#8217;s maybe a handful of games and apps out there, but I don&#8217;t think any of them is a Pong yet. Still, it&#8217;s getting closer.</strong></p>
<p><strong>Tish: </strong>Kati London is doing some very interesting work on bringing games into reality, isn&#8217;t she?</p>
<p><strong>Ori: Yes, she works with Frank Lantz at <a href="http://playareacode.com/" target="_blank">Area/Code</a>. He teaches at NYU and has designed games for the <a href="http://www.comeoutandplay.org/" target="_blank">&#8220;Come Out and Play&#8221;</a> festival here in Manhattan. And a lot of these games are actually low tech.</strong></p>
<p><strong>Tish:</strong> Yes I have a big alternate reality game blog brewing that I haven&#8217;t had time to write yet!</p>
<p><strong>Ori: &#8220;The city is the gameboard&#8221; is their slogan. It&#8217;s going to be a great playground for AR games. The city becomes a theme park. The city could become an even bigger tourist attraction. People will come to the city to be part of these games. So you&#8217;ll have thousands of people running around the city playing all sorts of games, from laser-tag style to history adventures to treasure hunts.</strong></p>
<h3><strong>Composing Reality</strong></h3>
<p><strong>Tish: </strong>So why haven&#8217;t you focused on one of these kinds of games with your company?</p>
<p><strong>Ori: We have a couple of scenarios along these lines that we&#8217;re planning for 2010-11. But first we&#8217;re focusing on what&#8217;s possible today.</strong></p>
<p><strong>Tish: </strong>And what&#8217;s stopping you from doing those kind of games today?</p>
<p><strong>Ori: Many things. The devices are not there yet, location services are not accurate enough, and ubiquitous sensors are not there yet.</strong></p>
<p><strong>Tish: </strong>You think alternate reality gaming needs more &#8220;ubiquity&#8221; than is currently available?</p>
<p><strong>Ori: Not necessarily. People are doing alternate reality games with no &#8220;ubiquity&#8221; at all. But my interest is to add the visual aspect. I believe humans are mostly driven visually.</strong></p>
<p><strong>Jane McGonigal said in a talk at GDC that AR would allow us to program reality, which is exactly how I look at it. You can recognize things with WiFi and RFID and all sorts of sensors, but visual sensing is always going to be the ultimate way to recognize things. And once you recognize things and know what they are, and can pull information about those things (or people and places) from the internet, you can program it (visually). You could program it to be fictional, like in a video game, or it could be programmed as non-fictional, like a documentary. And that allows you to do things that before were unimaginable.</strong></p>
<p><strong>Tish: </strong>But you can&#8217;t forget the visual; it is the connection to people&#8217;s primary sensory relationships.</p>
<p><strong>Ori: Yes, it&#8217;s like you go to a grocery store and you pick your vegetables, a lot of it is by sight and by touch. And what if you could also see just by looking at it that it&#8217;s from a local store, and that it&#8217;s organic?</strong></p>
<p><strong>Tish:</strong> It goes beyond overlays really?</p>
<p><strong>Ori: By the way, I don&#8217;t like the term &#8216;overlay&#8217;. I know that&#8217;s how it looks: you either overlay or superimpose, but I&#8217;m still searching for a better term. A term I prefer to use is &#8220;composing reality&#8221;. Just like painters, they use brushstrokes and colors and compose a painting. We need to take the real element and the virtual element and compose them into something new. It&#8217;s not just about slapping one on top of the other.</strong></p>
<p><strong>Tish: </strong>Yes, I think the idea of dashboards is not so appealing.</p>
<h3><strong>Pookatak Games</strong></h3>
<p><strong>Tish: </strong>Do you want to explain the evolution of your company? You have an interesting history of success with high end enterprise applications.</p>
<p><strong>Ori: Since I was a kid I wanted to invent and create things. When I discovered software, that was a really cool way of actually creating things from nothing. From thin air; and you can do it very quickly. That&#8217;s what brought me into software. But I was always looking for the intersection between technology and art, looking for ways to bring these things together. In the early nineties virtual reality was doing it. It had the appeal of cutting edge technology that could be combined with art. But then, as we all know, it crashed. So back in the early nineties I joined the startup of Shai Agassi (who is now doing Better Place). I was one of the first employees in his startup, which was developing multimedia products, and I was leading the development of one of its flagship products. At some point we realized the technology could be great for an enterprise environment.</strong></p>
<p><strong>It was a really great experience, first going through this cycle from a very small startup and growing into this multi-billion dollar business. I was responsible for defining and marketing SAP&#8217;s platform, which was called NetWeaver. It was just an idea when we joined SAP, and by the time I left it was a major, major business for SAP. I learned about the challenges of building a platform. No matter what purpose you&#8217;re building it for, it typically has similar rules. It&#8217;s definitely not just about the technology; the content that comes with it is really key to making a platform successful.</strong></p>
<p><strong>The third part of this platform trifecta is the community. If you don&#8217;t build a community, you won&#8217;t get the critical mass required for adoption. It may be your own platform but it&#8217;s not necessarily the people&#8217;s platform. That experience is very key to what we&#8217;re doing today. Now a new industry is being born on the basis of a remarkable technology. But to drive adoption, first we&#8217;ll need good content. The content will be created using today&#8217;s technology, with internal tools developed to simplify the process. The next step would be to make the tools used internally available to other developers &#8211; help scale the industry, enable innovation on a larger scale. That way we have a chance to create a platform. So it isn&#8217;t really just about my company. I&#8217;m so passionate about augmented reality, I want it to become a healthy and successful industry for the next 5, 10, 15 years.</strong></p>
<p><strong>Tish: </strong>Yes, I am so ready to be liberated from sitting behind a computer screen! And I know that all this hardware is murdering the environment.</p>
<p><strong>Ori: There&#8217;s a book by Rolf Hainich called &#8220;<a id="ba8p" title="The End Of Hardware" href="http://www.theendofhardware.com/">The End Of Hardware</a>.&#8221; It&#8217;s about hardware for augmented reality. Once you use goggles or other AR interfaces you eliminate the need for screens, laptops, etc. It&#8217;s going to be great for the environment. You have read Rainbows End, right? According to the book, in a few years there will barely be any (visible) hardware. At least it&#8217;ll have a much smaller footprint for the environment. And it&#8217;ll touch every aspect of life, everything you do. It&#8217;ll change the way you interact with the world.</strong></p>
<h3><strong>The Elusive Eyewear for Immersive AR.</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/retroar-googlespost.jpg"><img class="alignnone size-medium wp-image-3469" title="retroar-googlespost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/retroar-googlespost-300x225.jpg" alt="retroar-googlespost" width="300" height="225" /></a><br />
<em>Friend of Ori&#8217;s in San Francisco wearing retro AR goggles (from <a href="http://gamesalfresco.com/2009/05/04/gdc-2009-roundup-a-tiny-spark-of-augmented-reality/" target="_blank">Games Alfresco, Ori&#8217;s roundup of GDC 2009</a>)</em></p>
<p><strong>Tish:</strong> OK, let&#8217;s talk about goggles.</p>
<p><strong>Ori: Goggles are going to happen, we want to be hands free.</strong></p>
<p><strong>It&#8217;s going to happen because it&#8217;s just a more intuitive way to use this technology. But above all it has to look cool. Because if it&#8217;s not, if it&#8217;s a big headset, then maybe a small percent of the population might use it, but most people won&#8217;t. It has to look like an accessory, like new cool eyeglasses that you just must wear.</strong></p>
<p><strong>I recently talked to a friend who runs an industrial design firm and has experience designing such glasses for companies like Microvision and Lumus. He says that when you try to bring images that close to the eyes, there are some really hard problems to solve. Otherwise it can become really annoying and cause dizziness.</strong></p>
<p><strong>But I&#8217;m optimistic. I believe it&#8217;s going to happen 3 to 5 years from now. It&#8217;s already starting: Vuzix announced goggles that will be available this year, and some AR apps are going to take advantage of them next year. Initially only a fraction of the population will use it. And that&#8217;s going to help advance it and make it better and better. But it&#8217;s going to take time until it reaches the mass market.</strong></p>
<p><strong>Tish:</strong> In virtual worlds we have seen, I think, a lot of mistakes in terms of reinventing the wheel and producing too many proprietary versions of the same thing, and not enough concerted effort on standards and open platforms that could create a vibrant ecosystem. How can augmented reality avoid making the same mistakes?</p>
<p><strong>Ori: There are some early AR open source efforts &#8211; ARToolKit, ARTag &#8211; but it is not a movement yet. One of the things we&#8217;re trying to do at ISMAR this year is to put together discussions around key industry issues, such as standards. Some people say it&#8217;s too early, that you have to have a de facto standard to start from. But pretty soon it&#8217;s going to be too late. Just like with virtual worlds, all of a sudden you have all these islands that don&#8217;t talk to each other. Why get to that point if we can plan to avoid it? Let&#8217;s start thinking about it right now. On the other front there are devices. There are pockets of people working on adapting devices for AR, second-guessing the hardware companies. Why not get them together with the Intels and Nvidias of the world and discuss what this device should be able to do &#8211; and then compete to make it happen.</strong></p>
<p><strong>Tish: </strong>How much luck are you having with this discussion part?</p>
<p><strong>Ori: People are very interested in doing this. We proposed these panels for ISMAR. And I&#8217;ve got some key people already on board. They have tons of input, they want to get involved. We&#8217;ll see how much we can actually get out of it.</strong></p>
<p><strong>Tish: </strong>In virtual worlds it was a while before vibrant open source communities developed. OpenSim has, I think, been the breakthrough community in this regard.</p>
<p><strong>Ori: You have to think about the elements up front. The dream job is to architect the industry. Say we agree on the required pieces. Then we could help the right companies succeed in delivering those pieces. Next, we have to collaborate so that these pieces talk to each other. And eventually these communication methods will become de facto standards and most developers will adopt them.</strong></p>
<p><strong>Tish: </strong>So I&#8217;m going to put you in the role. You&#8217;ve got your dream job. You&#8217;re going to architect this community. So what are the key pieces and where would you like to see the open source communities take hold first?</p>
<p><strong>Ori: Open source will not be exclusive. It&#8217;s going to live side by side with proprietary technology.</strong></p>
<p><strong>The key pieces? You have the user at the center. And the user interacts with a lens. The lens includes both the hardware and the software. And then the lens senses and interacts with the world, which includes people, things and places. And these people-things-places emit information &#8211; about who they are, where they are, what they&#8217;re doing, etc. &#8211; which is then stored in the cloud.</strong></p>
<p><strong>And then you have the content providers &#8211; the people and companies, composers who weave AR experiences through the pieces we mentioned before. These composers need a platform that glues the pieces together. Pieces of the platform will be on the lens, and in the world, and in the cloud. If you manage to remove the friction and connect these pieces into an experience that people like, then you have a platform. What the platform does is reduce the overhead and accelerate innovation.</strong></p>
<p><strong>Tish: </strong>Another problem virtual worlds faced in their development was their isolation from the world wide web. Will augmented reality avoid this plight?</p>
<p><strong>Ori: Yes, I believe the key, like you said before, is not to reinvent the wheel. The cloud is already there. Take Wikitude, for example: all <a href="http://www.mobilizy.com/" target="_blank">Mobilizy</a> had to do was build a relatively simple client app connected to Wikipedia, and all of a sudden it offered a wealth of information in your field of view.</strong></p>
<p><strong>I think we can learn a lot from web 2.0. For example, in order to have a ubiquitous experience like <a href="http://www.curiousraven.com/" target="_blank">Robert Rice</a> and others are striving for, you&#8217;ll need to 3D-map the world. Google Earth-like apps are going to help, but they are not going to be sufficient. So let&#8217;s leverage people. Google became successful in part by making people work for them: each time you create a link from your blog to my blog, their search engine learns from it. So let&#8217;s find ways to make people create information that can be used for AR.</strong></p>
<p><object width="425" height="344" data="http://www.youtube.com/v/GTXtW3W8mzQ&amp;hl=en&amp;fs=1" type="application/x-shockwave-flash"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/GTXtW3W8mzQ&amp;hl=en&amp;fs=1" /><param name="allowfullscreen" value="true" /></object></p>
<p><em>Ori Inbar directed <a title="Wiki Mouse" href="http://www.youtube.com/watch?v=GTXtW3W8mzQ" target="_blank">Wiki Mouse</a> &#8211; a WIKI Film co-created by a swarm of movie makers around the world.</em></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/feed/</wfw:commentRss>
		<slash:comments>12</slash:comments>
		</item>
		<item>
		<title>Dematerializing the World, Shadows, Subscriptions and Things as Services: Talking With Mike Kuniavsky at ETech 2009</title>
		<link>http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/</link>
		<comments>http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/#comments</comments>
		<pubDate>Thu, 19 Mar 2009 03:16:11 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[#etech]]></category>
		<category><![CDATA[Aaaron Straup Cope]]></category>
		<category><![CDATA[Adam Greenfield]]></category>
		<category><![CDATA[Ambient Orb]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[BlinkM]]></category>
		<category><![CDATA[Bocci at ETech]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[data shadows]]></category>
		<category><![CDATA[dematerializing products]]></category>
		<category><![CDATA[dematerializing the world]]></category>
		<category><![CDATA[dressing the shadows]]></category>
		<category><![CDATA[ecology of services]]></category>
		<category><![CDATA[econolypse]]></category>
		<category><![CDATA[embodied energy data]]></category>
		<category><![CDATA[energy identity]]></category>
		<category><![CDATA[Etech 2009]]></category>
		<category><![CDATA[Gavin Starks]]></category>
		<category><![CDATA[green technology]]></category>
		<category><![CDATA[information shadows]]></category>
		<category><![CDATA[item level identification]]></category>
		<category><![CDATA[LilyPad]]></category>
		<category><![CDATA[LoveM]]></category>
		<category><![CDATA[Maker culture]]></category>
		<category><![CDATA[Makershed]]></category>
		<category><![CDATA[Mike Kuniavsky]]></category>
		<category><![CDATA[Moore's Law]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[Path Intelligence]]></category>
		<category><![CDATA[RFID tracking]]></category>
		<category><![CDATA[servicization of things]]></category>
		<category><![CDATA[smart LED]]></category>
		<category><![CDATA[spimes]]></category>
		<category><![CDATA[Stamen Design]]></category>
		<category><![CDATA[Steven Levy]]></category>
		<category><![CDATA[sustainable design]]></category>
		<category><![CDATA[the dotted line world]]></category>
		<category><![CDATA[the internet of things]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[Thinglink project]]></category>
		<category><![CDATA[ThingM]]></category>
		<category><![CDATA[things as services]]></category>
		<category><![CDATA[Tim O'Reilly]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubicomp hardware]]></category>
		<category><![CDATA[urban green space]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[Wattzon]]></category>
		<category><![CDATA[WineM]]></category>
		<category><![CDATA[wireless networks]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3191</guid>
		<description><![CDATA[ETech 2009 was all about making interesting and deeply socially effective technological interventions in the world. And dematerializing products into services seemed to be one of the most powerful concepts elaborated there to accomplish this. Mike Kuniavsky in his presentation, &#8220;The dotted-line world, shadows, services, subscriptions,&#8221; noted: &#8220;There&#8217;s great opportunity here to create an ecology [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/bicycleriderdatashadows.jpg"><img class="alignnone size-medium wp-image-3192" title="bicycleriderdatashadows" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/bicycleriderdatashadows-300x230.jpg" alt="bicycleriderdatashadows" width="300" height="230" /></a></p>
<p><a href="http://en.oreilly.com/et2009" target="_blank">ETech 2009</a> was all about making interesting and deeply socially effective technological interventions in the world. And dematerializing products into services seemed to be one of the most powerful concepts elaborated there to accomplish this. Mike Kuniavsky in his presentation, <a href="http://en.oreilly.com/et2009/public/schedule/speaker/1947" target="_blank"><strong>&#8220;The dotted-line world, shadows, services, subscriptions,&#8221;</strong></a> noted:</p>
<p><strong>&#8220;There&#8217;s great opportunity here to create an ecology of services embodied as robust, valuable, exciting new tools with focused, limited functionality, tied together with item-level identification and wireless networks. Whole classes of things that can enrich our lives and bank accounts are now possible thanks to the way ubiquitous computing interweaves services and devices at an intimate, everyday level&#8230;.<br />
</strong><br />
<strong>We now have the technology to create whole new classes of tools for living in a way that is more useful and fun for individuals, more sustainable for society, and more profitable for companies. That way is to recognize the connectedness of all everyday things, and to build on it, rather than ignoring it.&#8221;</strong></p>
<p>The picture opening this post is from Mike&#8217;s presentation (see <a id="zuqd" title="Mike's blog" href="http://www.orangecone.com/archives/2009/03/etech_2009_the.html">Mike&#8217;s blog</a> for <a href="http://www.orangecone.com/tm_etech_2009_0.1.pdf">a PDF with all of the images and notes</a> (884 PDF), and the original presentation description).</p>
<p>An ecosystem using item-level identification, wireless networking, and data visualization is evolving that links everyday objects to information about those objects &#8211; what Kuniavsky calls their &#8220;information shadow.&#8221; Because every object can be uniquely identified and that identification can be associated with a cluster of metadata, it &#8220;exists simultaneously in the physical world and in the world of data.&#8221;</p>
<p>Mike mentioned Tom Coates&#8217; <a href="http://www.plasticbag.org/archives/2005/04/the_age_of_pointatthings/" target="_blank">&#8220;The Age of Point-at-Things&#8221;</a> blog post to say that although Tom was talking about TV listings data, the same ideas can be applied to anything that&#8217;s uniquely identified. Also, Mike noted, he often references Ulla-Maaria Mutanen&#8217;s <a href="http://aula.org/people/ulla/thinglink_white_paper.pdf" target="_blank">Thinglink project</a> and her observation about Amazon ASINs to explain this concept, which is, of course, closely related to <a href="http://en.wikipedia.org/wiki/Internet_of_things" target="_blank">the internet of things.</a></p>
<p>Until recently, Mike explained, accessing the information shadow was difficult. The world of objects and the world of information shadows were separated by the difficulty of getting at the information. But now, increasingly:</p>
<p><strong>&#8220;we can instantaneously see the world of information shadows as we&#8217;re interacting with the world of objects.&#8221; </strong></p>
<p>Mike is not only conceptualizing these ideas; his company with partner Tod E. Kurt, <a id="zh2z" title="Thingm" href="http://thingm.com/" target="_blank">Thing<span class="ru_CC6D50_bk">M,</span></a> is producing hardware that will enable this vision.</p>
<p><strong>&#8220;We&#8217;re a ubiquitous computing consumer electronics company, which sounds fancy, but we&#8217;re pretty small. We design, manufacture and sell ubicomp hardware.&#8221;</strong></p>
<p>ThingM may be small now but they are at the leading edge of a huge transformation. When asked, &#8220;How do you see the near-future city working with ubiquitous computing&#8230;&#8221; Adam Greenfield put it succinctly to Lalie Nicolas for <a href="http://www.lehub-agence.com/site.php">Le Hub</a>&#8217;s <a href="http://www.ludigo.net/index.php?rub=0">Ludigo</a> project:</p>
<p><strong>&#8220;I would go so far as to say that there will be no area or domain of urban activity that is not somehow disassembled and recomposed as a digital, networked, interactive process over the next few years. Objects, buildings and spaces will be reconceived as network resources; cars, subways and bicycles will be reimagined as on-demand mobility services; human communities are already well on the way to becoming self-conscious &#8216;social networks.&#8217;&#8221;</strong></p>
<p>For the rest of this short interview <a href="http://speedbird.wordpress.com/2009/03/16/ludigo-interview/" target="_blank">see Adam&#8217;s post</a>, and for my recent long interview with Adam <a href="http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/" target="_blank">see here</a>.</p>
<h3>&#8220;&#8216;Almost everything in this room is in a landfill, but just doesn&#8217;t know it yet.&#8217; This needs to change&#8221;</h3>
<p>(Tim O&#8217;Reilly responding on Twitter to a quote from <a href="http://twitter.com/AlexSteffen" target="_blank">@AlexSteffen</a>&#8216;s talk)</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/picture-5.png"><img class="alignnone size-medium wp-image-3194" title="picture-5" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/picture-5-300x241.png" alt="picture-5" width="300" height="241" /></a></p>
<p><em><span class="caps">Chart above from Jeremy Faludi&#8217;s presentation</span> <a class="attach" href="http://assets.en.oreilly.com/1/event/20/Priorities%20for%20a%20Greener%20World_%20If%20You%20Could%20Design%20Anything,%20What%20Should%20You%20Do_%20Presentation.pdf">Priorities for a Greener World: If You Could Design Anything, What Should You Do? Presentation</a> <span class="en_filetype">[PDF]</span></em> <span class="caps"> </span></p>
<p>Interconnecting themes at ETech, <a id="nn8n" title="Inhabitat notes" href="http://www.inhabitat.com/2009/03/13/the-best-of-green-at-etech-2009/" target="_blank">Inhabitat noted,</a> &#8220;formed bridges between luminary speakers from a variety of backgrounds, as <a href="http://www.inhabitat.com/2006/10/26/worldchanging-the-book-is-out/">Alex Steffen</a>, <a href="http://www.inhabitat.com/2008/02/20/mary-lou-jepsen-at-greener-gadgets/">Mary Lou Jepsen</a>, <a href="http://www.faludidesign.com/">Jeremy Faludi</a>, and others reinforced the need to create repairable, open-source, <a href="http://www.inhabitat.com/2009/03/02/greener-gadgets-2009/">long lasting products</a>, reveal energy usage, and pursue forward-thinking strategies for a greener tomorrow.&#8221; But <a href="http://www.faludidesign.com/" target="_blank">Jeremy Faludi</a>, a sustainable design strategist and researcher<span class="caps">, </span><span class="caps">put the design challenge most directly:</span></p>
<p><span class="caps"> <strong>&#8220;</strong></span><strong>If you really care you need to dematerialize, turn products into services&#8230;&#8221; </strong></p>
<p>The idea of data shadows has been a part of the conversation in ubiquitous computing for a long time (since Marshall McLuhan perhaps?). But, at ETech 2009, it seemed to have come of age.</p>
<p>It came up again and again in the imperative to dematerialize stuff that seemed to be part of every conversation, from Faludi&#8217;s comments on the amount of toxic mining waste created in the manufacture of one laptop, to Raffi Krikorian&#8217;s presentation of <a href="http://www.wattzon.com/" target="_blank">WattzOn&#8217;s</a> Embodied Energy Database (<a href="http://www.slideshare.net/raffikrikorian/wattzon-etech-2009" target="_blank">see slides here</a>), to <a id="lnyt" title="AMEE" href="http://www.amee.com/" target="_blank">AMEE</a> founder Gavin Starks&#8217;s presentation<a name="session7799"></a> (also see <a href="http://www.amee.com/blog/2009/03/19/energy-identity/">Gavin&#8217;s blog on Energy Identity here</a>).</p>
<p>The path to dematerializing the burdensome stuff that spells doom for our environment was not only presented conceptually and in creative solutions to specific problems (e.g. ThingM) at ETech. There were also hands-on workshops (see <a href="http://www.ugotrade.com/2009/03/10/making-a-rfid-to-web-interface-and-lilypad-electronic-fashion-at-etech-2009/" target="_blank">my post on the two I attended</a>) from Maker gurus, who were also often to be found in the <a href="http://en.oreilly.com/et2009/public/schedule/detail/7281" target="_blank">Makershed</a>, providing opportunities to experiment with and prototype your own solutions (my hat is off to <a href="http://en.oreilly.com/et2009/public/content/about" target="_blank">Brady Forrest and the ETech committee</a> for pulling all this together).</p>
<h3>Connecting the dots&#8230;</h3>
<p>In the wake of an &#8220;econolypse&#8221; (a neologism pulled from Bruce Sterling&#8217;s twitter feed &#8211; @bruces) and on the eve of environmental catastrophe, we may well have, as Adam Greenfield <a href="http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/" target="_blank">said to me here</a>, &#8220;seriously screwed the pooch.&#8221;</p>
<p>But that does not mean we should not do everything we can to try to save the day.</p>
<p>And in the serendipity peculiar to a conference, I was talking in the corridor to Gavin Starks of <a id="lnyt" title="AMEE" href="http://www.amee.com/" target="_blank">AMEE</a>, who is working to create &#8220;the world&#8217;s energy meter&#8221; (on the right in the picture below), Tony Mak from <a id="hc7p" title="O'Reilly AlphaTech Ventures" href="http://www.oatv.com/" target="_blank">O&#8217;Reilly AlphaTech Ventures</a> (to Gavin&#8217;s right), and Usman Haque of <a id="vp25" title="Pachube" href="http://www.pachube.com/">Pachube</a> (on Tony&#8217;s right &#8211; <a id="ihta" title="-see my earlier interview here" href="../../2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">see my earlier interview with Usman here</a>), when Tim O&#8217;Reilly (far left) came by with Steven Levy of Wired (to Tim&#8217;s left). More on <a id="vp25" title="Pachube" href="http://www.pachube.com/">Pachube</a>, <a id="vwro" title="WattzOn" href="http://www.wattzon.com/" target="_blank">WattzOn</a>, <a id="lnyt" title="AMEE" href="http://www.amee.com/" target="_blank">AMEE</a> and <a href="http://www.pathintelligence.com/" target="_blank">Path Intelligence</a>, and how these projects may connect, in an upcoming post. Path Intelligence, like AMEE, is funded by the O&#8217;Reilly venture group.</p>
<p>No sooner had I snapped the photo below than Mike Kuniavsky arrived.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/timoreillytalkingtogavinstarkspost2.jpg"><img class="alignnone size-medium wp-image-3276" title="timoreillytalkingtogavinstarkspost2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/timoreillytalkingtogavinstarkspost2-300x180.jpg" alt="timoreillytalkingtogavinstarkspost2" width="300" height="180" /></a></p>
<p>It seemed such an historic meeting, I asked everyone if I could switch my recorder on.</p>
<p>Tim had just been explaining how the concept of &#8220;data shadows&#8221; fit with something he&#8217;d learned from Gavin in a breakfast conversation. Gavin was talking about what AMEE is learning from smart meter data collected from 1.2 million homes in the UK. The energy signature from each device is so distinctive that you can tell not only the make and model of the major appliances in each home, but also their age. Gavin is worried about the privacy implications (as we all should be), but nonetheless, you can see the implications for business. Tim framed a vital question:<strong> What new businesses are growing in the data shadows?</strong></p>
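<p>To make the appliance-signature idea concrete (a hypothetical sketch of the general technique &#8211; the wattage figures are invented, and this is not AMEE&#8217;s actual method or data), identifying a device from its energy signature can be as simple as a nearest match between an observed power trace and a library of known traces:</p>

```python
# Hypothetical illustration: match an observed power trace (watts sampled
# over time) to the closest known appliance signature by summed squared
# difference. All numbers below are invented for the sketch.
KNOWN_SIGNATURES = {
    "fridge (2004 model)": [120, 118, 121, 119, 120],
    "fridge (2012 model)": [85, 84, 86, 85, 84],
    "kettle":              [2200, 2210, 2195, 2205, 2200],
}

def identify(observed):
    """Return the name of the known signature closest to `observed`."""
    def distance(sig):
        return sum((a - b) ** 2 for a, b in zip(observed, sig))
    return min(KNOWN_SIGNATURES, key=lambda name: distance(KNOWN_SIGNATURES[name]))

print(identify([86, 85, 85, 84, 86]))  # closest to the 2012 fridge
```

Even this toy version shows why the privacy worry is real: the trace alone is enough to distinguish not just a fridge from a kettle, but one model year of fridge from another.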
<p><strong>Tim O&#8217;Reilly: </strong>Here&#8217;s the other member of this conversation I was trying to broker. This is Mike Kuniavsky, Gavin Starks. I was talking in your session about the point he made in his session&#8230;Steve Levy from Wired&#8230;</p>
<p><strong>Tish Shute:</strong> Sorry, could you recap the point?</p>
<p><strong>Tim O&#8217;Reilly:</strong> &#8230;just the idea about data shadows, I just think it&#8217;s just such a powerful metaphor that every&#8230; and you went on to explain that potential for subscriptions and so on&#8230;</p>
<p><strong>Mike Kuniavsky:</strong> Yes well what I was saying was that essentially every object that has an identifier associated with it, and there are a number of different kinds of identifiers out there, simultaneously lives in kind of the world of physical objects, and of the world of data. And the identifier links those two.</p>
<p><strong>Steven Levy:</strong> Just like Sterling&#8217;s Spimes?</p>
<p><strong>Mike Kuniavsky:</strong> A spime, it&#8217;s related obviously because we&#8217;re talking about RFIDs, but I&#8217;m really specifically talking about the fact that there is this information shadow that exists out there.</p>
<p><strong>Tim O&#8217;Reilly:</strong> I think we&#8217;ll find it lots of different ways, that was my excitement in connecting these points.</p>
<p><strong>Gavin Starks:</strong> My take on it is energy identity &#8211; that everything and everybody ends up with an energy identity that is the embodiment of their physical consumption.</p>
<p><strong>Mike Kuniavsky:</strong> And I would say, not to argue, I would say that energy comes as part of my information shadow. Like I carry this baggage of data along with me. And whatever data is potentially appropriate can be glommed on to that. And then that can then be carried to something else that can manipulate it. And also that&#8217;s true about every object. And now that we have RFID tracking of individual objects, it&#8217;s true about literally every object, not just every class of objects.</p>
<p><strong>Usman Haque:</strong> There&#8217;s a really beautiful story by Julio Cortazar where he uses the phrase &#8220;dressing the shadows&#8221; and it&#8217;s about the idea that the shadow is not this sort of flat black thing but we can sort of put things onto it and slowly sort of grow it into something. It&#8217;s actually sort of more of a love story. But it&#8217;s a really interesting idea that the shadow&#8217;s not just the absence of but that it&#8217;s kind of the important part of it. [For more see Usman&#8217;s paper, <a href="http://www.haque.co.uk/papers/dressingshadowsofarch.pdf" target="_blank">Dressing the shadows of architecture</a> &#8211; which is also available in Spanish <a href="http://www.tintank.es/articulo_vestirsombras.html" target="_blank">here</a>.]</p>
<p><strong>Mike Kuniavsky:</strong> It&#8217;s the Peter Pan Barrie [JM Barrie, the author] thing. When Peter Pan&#8217;s shadow gets cut off and Wendy has to resew it back on. Potentially what all of these item level identification technologies are doing is they&#8217;re sewing the shadow back to the objects that they came from. And so you&#8217;re getting the information.</p>
<p><strong>Gavin Starks:</strong> It&#8217;s like the two and a half kilo Macbook which has a 460 kilo carbon shadow.</p>
<p><strong>Tim O&#8217;Reilly:</strong> It&#8217;s just a very powerful concept. That&#8217;s all I&#8217;m saying. I think it&#8217;s a metaphor that as soon as you have it, it makes it very easy to understand and to see a whole lot of things. So I&#8217;m very fond of it. Already it&#8217;s my new favorite toy. And it is great running into you all in the same place in the hall so I could introduce you all.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_173c5f8nvcm_b.png"><img class="alignnone size-medium wp-image-3203" title="dhj5mk2g_173c5f8nvcm_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_173c5f8nvcm_b-300x231.png" alt="dhj5mk2g_173c5f8nvcm_b" width="300" height="231" /></a></p>
<p><em>Image from Mike&#8217;s ETech presentation</em><br />
<strong><br />
&#8220;To create these new experiences we need to think about the design of both digital devices and infrastructures differently. We need to step back from standalone tools and think about what service those tools deliver, then construct new avatars that fit better into people&#8217;s everyday experiences. We also need to step back from our infrastructural products and think about what services they enable. The electrical grid did not first start out as an abstract electrical grid in South Manhattan; it started as a way to deliver electric light. The electric bulb was not a standalone device, it was an avatar of Edison&#8217;s light delivery service and it was, first and foremost, designed to solve a specific problem for a large consumer market. Only then did the infrastructure it created expand to solve other kinds of problems.&#8221; Mike Kuniavsky&#8217;s ETech presentation, 2009</strong></p>
<h3><strong>Talking With Mike Kuniavsky</strong></h3>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/elizabethgoodmanandmikekuniavskyballpost.jpg"><img class="alignnone size-medium wp-image-3280" title="elizabethgoodmanandmikekuniavskyballpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/elizabethgoodmanandmikekuniavskyballpost-300x199.jpg" alt="elizabethgoodmanandmikekuniavskyballpost" width="300" height="199" /></a><br />
</strong></p>
<p><em>Mike Kuniavsky and Elizabeth Goodman playing Bocci after ETech</em></p>
<p>The conversation with Mike began with a discussion about how to encourage participation. Usman Haque was present but was called away to lunch shortly after. The question of encouraging participation in deep social change was another recurring theme at ETech. And, as Mike noted in his presentation:</p>
<p><strong>&#8220;The design of these avatars [Kuniavsky's term for objects that are closely tied to services] is quite challenging. They can&#8217;t really be as personalized. You just can&#8217;t pimp your City Carshare car. You only get one kind of bike in the Call a Bike program. That&#8217;s an important problem to solve. We love to have our stuff be ours. However, the same technologies can bring that, too. Our key fob can bring our whole world with us, and whether we sit down in a minivan, on a chair or in a plane we can bring our world with us. The thing can become our preferred colors, with our favorite music, and a picture of our loved ones on the dashboard, desk, or wall. Is it the same thing as owning it and leaving your stuff in it? No, but it&#8217;s closer.&#8221;<br />
</strong></p>
<p>Moreover:</p>
<p><strong>&#8230;objects have to change at a fundamental level. They have to be designed differently and they have to be described and discussed differently. The &#8220;owner&#8217;s&#8221; relationship to the object changes. The very idea of ownership changes. The solid object grows a dotted line that is filled-in as-needed, when-needed, and with the features that are needed. This is not the same thing as renting or co-ownership; its anytime/anywhere nature, enabled by the underlying technology, makes these new service objects fundamentally new (Kuniavsky&#8217;s presentation at ETech).<br />
</strong><br />
Elizabeth Goodman&#8217;s brilliant presentation at ETech, <a id="eag1" title="Designing for Urban Green Space" href="http://en.oreilly.com/et2009/public/schedule/detail/5562" target="_blank">Designing for Urban Green Space,</a> discussed a study of urban green space volunteership as a way &#8220;to rethink urban green space as a spectrum of places with varying types of ownership and management.&#8221; Mike began the conversation by citing Elizabeth&#8217;s work.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_178gdn22ngf_b.png"><img class="alignnone size-medium wp-image-3208" title="dhj5mk2g_178gdn22ngf_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_178gdn22ngf_b-300x219.png" alt="dhj5mk2g_178gdn22ngf_b" width="300" height="219" /></a></p>
<p><em>Picture from <a href="http://en.oreilly.com/et2009/public/schedule/detail/5562" target="_blank">Elizabeth Goodman&#8217;s presentation</a>.</em></p>
<p><strong>Mike Kuniavsky:</strong> Well what I was saying [re participation], citing my wife Elizabeth Goodman&#8217;s work &#8230;She did all this work at Intel on people&#8217;s health practices and the issues [around] instrumenting people&#8217;s lives in order to produce behavioral change and the problems with that.</p>
<p>The question is how do you sense to encourage, rather than sense to punish, when all the indicators are going down &#8211; economic indicators, ecological indicators. They&#8217;re just not going to be going up perceptibly for a very long time. You don&#8217;t want to discourage people. The way to create behavioral change is not to essentially keep punishing people for the past. And so I don&#8217;t know if I have a good answer for this, but there is this entire kind of thinking about how do you encourage people to keep doing things even when the actual easy-to-measure indicators, the first order indicators, are all pointing down. It&#8217;s the classic thing about how do you get people to stay fit even as they&#8217;re aging. They are never going to be as healthy as they were when they were 50 again.</p>
<p><strong>Usman Haque:</strong> I think you really hit on it when you said it&#8217;s not about the first order but about the second order measurements because that is exactly the kind of thing you want to change. It&#8217;s not that you want to stop it from falling because sometimes it&#8217;s impossible, you want to slow its rate.</p>
<p><strong>Mike Kuniavsky:</strong> Exactly. You want to slow the rate because at the bottom maybe you can start looking at the first order indicator. But you can&#8217;t look at the first order indicator while things are going to hell. And so you can just say it&#8217;s less bad than it would have been. And figuring out how to take the first order sensory data and turn it into this kind of second order data that might be helpful for actually creating behavioral change, because ultimately that&#8217;s what all of this is talking about.</p>
<p><strong>Tish Shute: </strong>This discussion about behavioral change wasn&#8217;t elaborated in your presentation was it?</p>
<p><strong>MK:</strong> I presented on essentially the combination of being able to identify individual objects and the idea of providing services as a way of creating things&#8230; the servicization of things&#8230; turning things into services is greatly accelerated by network technologies and the ability to track things, and what this leads to is the potential of having fundamentally different relationships to the devices in our lives and to things like ownership.</p>
<p>Like we now have the technology to create objects that are essentially representatives of services &#8211; things like City Car Share. What you own is not a thing but a possibility space of a thing. This fundamentally changes the design challenges. I am pretty convinced that how we should be using a lot of these technologies is to shift objects from ownership models to service models. We can do that but there are significant challenges with it. What is happening is that we have had the technology to do this for a while, but we haven&#8217;t been thinking about how to design these services. We haven&#8217;t been thinking about how to design what I call the avatars of these services &#8211; the physical objects that are the manifestation of them, like an ATM is the avatar of a banking service. It is useless without the banking service it is a representative of, essentially.</p>
<p>If you imagine this as an abstract idea, the ATM pokes out of [the service and into] a specific thing, but so do the bank tellers and so does the web site.</p>
<p><strong>TS:</strong> It seems like this is a major shift in how we conceptualize our economy, culture and even government &#8211; what are the avatars of government?</p>
<p><strong>MK:</strong> I think change in government is very hard. The example I have been using is the light bulb. Start by solving a problem. The interesting thing about lightbulbs is that it was not the invention of an incandescent filament that glowed in a vacuum &#8211; that had been invented long before &#8211; it was the system that it was part of. And that it was part of a much larger design project that was created specifically for delivering the service of light to lower Manhattan in 1884.</p>
<p><strong>TS:</strong> The grid hasn&#8217;t changed since Edison right &#8211; one of the earlier speakers mentioned this, that if Edison came back now he would say, &#8220;the grid is where I left it.&#8221;</p>
<p><strong>MK:</strong> My point is that he wasn&#8217;t creating an abstract electrical grid, he was solving a problem by creating a system that had as its avatar &#8211; as its end point this bulb. But the bulb is actually not the system, it is merely the end point.</p>
<p>As we are thinking about the capabilities of these technologies my argument is we have to be designing service systems along that model.</p>
<p><strong>TS:</strong> Web services?</p>
<p><strong>MK:</strong> Not just designing Web services. I am a big fan of thinking about digital tools outside the context of general purpose computing devices. I consider laptops general purpose and I consider phones general purpose. Yes, originally the handset started out just as a phone but now it is essentially a computer terminal, and now you have netbooks, and netbooks are essentially this halfway point between a phone and a laptop because now you are going to get netbooks with 3G cards. Essentially it is already a big phone. Those are general purpose computing platforms, and I am not very interested in those.</p>
<p><strong>TS:</strong> What motivated you to make that move in your thinking?</p>
<p><strong>MK:</strong> I thought it was a very narrow kind of thinking. I thought that the costs of computing represented by the technologies in the middle of the Moore&#8217;s Law curve &#8211; rather than on the right &#8211; had dropped so far that it seemed we could be making all kinds of devices that had information processing as part of what they are without being general purpose computing platforms.</p>
<p>The iPod is a good example. The iPod is a computer and you can run Linux on it. It has more computing power than a computer did in the seventies. But who cares? The point of it is that you are using that power to solve a problem. You are applying the capabilities of information processing to solve specific problems. I have actually worked on infrastructural stuff. Twenty years ago I was associated with some early distributed computing stuff, then I did ten years of web site design stuff, but I am essentially done with that. Because what I am really interested in is creating new kinds of tools, new classes of tools that use information processing as the core of what makes them interesting and valuable.</p>
<p><strong>TS:</strong> Do these tools have to leverage networks to be useful?</p>
<p><strong>MK: </strong> No, I think it is possible to use information processing on a small scale without having to be online all the time. That is another one of the big toolboxes. It creates a deep shift in the capabilities of what you can do if you have a network. But the network can be really, really low bandwidth and simple for it still to be useful. You get these things that wake up once a month and spit out a packet with their telemetry. And they are incredibly valuable, but they are not what you would normally consider to be an always-on device. It changes what they can do very fundamentally. But it is not this thing that requires there to be blanket wifi.</p>
<p>You can have devices out there &#8211; and this is a sort of cliched example &#8211; but the guy riding a bicycle around with a wifi access point in a rural area where you have no infrastructure to do it otherwise. You have a little computer in every area and as he rides by they will exchange some data.</p>
<p><strong>You don&#8217;t have to have fibre at the curb to really, really make interesting deeply socially effective technological interventions in the world. </strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/aaaroncopetodekurtmikekuniavskypost.jpg"><img class="alignnone size-medium wp-image-3210" title="aaaroncopetodekurtmikekuniavskypost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/aaaroncopetodekurtmikekuniavskypost-300x199.jpg" alt="aaaroncopetodekurtmikekuniavskypost" width="300" height="199" /></a></strong></p>
<p><em><a id="d3_j" title="Aaron Starup Cope," href="http://en.oreilly.com/where2009/public/schedule/speaker/43824" target="_blank">Aaron Straup Cope,</a> Flickr, Tod E. Kurt, and Mike Kuniavsky &#8211; discussing <a id="rzgd" title="The Shape of Alpha" href="http://en.oreilly.com/where2009/public/schedule/detail/7212" target="_blank">The Shape of Alpha</a> (more on this upcoming!)<strong><br />
</strong></em><br />
<strong>MK:</strong> What we are trying to do is to do that. We make a BlinkM &#8211; we make hardware &#8211; you saw my business partner Tod E. Kurt, he does all the heavy engineering and I am the guy who waves his hands around a lot and sends faxes. Our first product, which came out a year ago, was a smart LED. It is a very simple RGB LED; it has a microcontroller and the microcontroller has firmware on it that kind of abstracts out the complexity of incorporating LEDs into a hobbyist product. So you can do arbitrary colors, it can do smooth fades between any two points in RGB space, and you don&#8217;t need to know anything about Pulse Width Modulation or even microcontrollers. You don&#8217;t have to know anything about anything except a little bit about electricity to use the thing. [In addition to <a id="hy-z" title="Blinkm" href="http://thingm.com/products/blinkm.html" target="_blank">BlinkM</a> and <a id="g8y3" title="Blinkm Maxm" href="http://thingm.com/products/blinkm-maxm.html" target="_blank">BlinkM MaxM</a> &#8211; the smart LEDs &#8211; ThingM has developed prototypes for other products such as the <a id="hqwc" title="Winem" href="http://thingm.com/products/winem.html" target="_blank">WineM</a> RFID wine rack and the <a href="http://thingm.com/sketches/lovem.html" target="_blank">LoveM LCD chocolate box</a>.]</p>
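<p>To give a flavor of what that firmware abstracts away (an illustrative sketch of the general idea, not BlinkM&#8217;s actual code): a smooth fade is essentially a linear interpolation between two points in RGB space, which the on-board microcontroller then renders via PWM so the hobbyist never has to think about either step:</p>

```python
# Sketch of a smooth RGB fade: linearly interpolate between two colors.
# This is the kind of work a smart LED's firmware does on-board.
def fade(start, end, steps):
    """Yield steps+1 RGB tuples moving linearly from `start` to `end`."""
    for i in range(steps + 1):
        t = i / steps  # 0.0 at the start color, 1.0 at the end color
        yield tuple(round(s + (e - s) * t) for s, e in zip(start, end))

# Red to blue in four steps: first frame pure red, last frame pure blue.
frames = list(fade((255, 0, 0), (0, 0, 255), 4))
```

On the actual device you would send the start and end colors over a simple interface and the microcontroller would generate the in-between frames itself; that hiding of the interpolation and the PWM is exactly the abstraction Kuniavsky describes.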
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_174cf26bcgn_b.png"><img class="alignnone size-medium wp-image-3211" title="dhj5mk2g_174cf26bcgn_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_174cf26bcgn_b-224x300.png" alt="dhj5mk2g_174cf26bcgn_b" width="224" height="300" /></a></p>
<p><strong>TS:</strong> I made a <a href="http://www.arduino.cc/en/Main/ArduinoBoardLilyPad" target="_blank">LilyPad</a>-enabled T-shirt yesterday; if I used your LED, what difference would that make to my T-shirt?</p>
<p><strong>MK:</strong> You could add the LED without changing the circuit at all, and you could have it blink in any pattern, be any color, fade between colors. With our new one, which is bigger than the old one, we actually have inputs. You could stick a wire on it or weave it into your shirt, and when you touch the wire it would change the behaviour of the LED.</p>
<p><strong>TS:</strong> Nice, you are giving me even more incentive to finish my T-shirt. I noticed that Tim O&#8217;Reilly was connecting you to Gavin Starks, CEO of AMEE, just now, and Usman Haque of Pachube. What is the connection between your work on ThingM and these projects?</p>
<p><strong>MK:</strong> I think what Gavin&#8217;s doing, as I understand it from Tim, is essentially creating this new kind of sensor network that monitors electrical usage and allows you to feed it back. What that does is create a new kind of data in the data shadow of your house, your refrigerator or whatever. It suddenly grows this extra lobe out in the data world that then has these new capabilities that can be attached to.</p>
<p><strong>TS: </strong>In terms of what you do with ThingM how are these ideas expressed through BlinkM?</p>
<p><strong>MK:</strong> We&#8217;re still building stuff that&#8217;s on a slightly lower level &#8211; components. Our corporate goal this year is to make our first product that is a standalone solution to something. One of the easiest things you can do with our technology right now is replicate an Ambient Orb in about ten minutes. You could tie into their work. But you could also tie into it in a more subtle way where you could make lights smart, so that when the net electricity cost goes above a certain threshold the lights know to dim or to turn off. And that can be dependent on how people use them. So rather than having a light, you essentially associate a function or purpose with a light. Then the light knows, based on electricity usage, when its purpose has high enough priority to be on.</p>
<p>Not all of these ideas pour into our products, we can only afford to make LEDs.</p>
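<p>A minimal sketch of that purpose-aware light (my own illustration, not a ThingM product &#8211; the price threshold and priority cutoffs are invented): the light carries a priority for its purpose and compares it against the current electricity price before deciding what to do.</p>

```python
# Hypothetical demand-aware light: decide on/dim/off from the priority of
# the light's purpose and the current electricity price. All numbers are
# invented for illustration.
PRICE_THRESHOLD = 0.30  # $/kWh above which low-priority lights should yield

def light_state(purpose_priority, price_per_kwh, threshold=PRICE_THRESHOLD):
    """Return 'on', 'dim', or 'off' for a light with the given priority."""
    if price_per_kwh <= threshold:
        return "on"                  # cheap power: everything stays on
    if purpose_priority >= 0.8:
        return "on"                  # e.g. safety lighting: always on
    if purpose_priority >= 0.4:
        return "dim"                 # e.g. reading light: dim instead
    return "off"                     # e.g. decorative accent light
```

The design point is the one Kuniavsky makes: the decision attaches to the light&#8217;s purpose, not to a manual switch, so the behavior can differ between two physically identical lamps.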
<p><strong>TS:</strong> Still, it is amazing how ThingM really is a flagship for what is a big and important shift in the way we can relate to stuff. And what about Usman&#8217;s Pachube? Where does ThingM fit with that?</p>
<p><strong>MK:</strong> I see Pachube less as a monolithic service than as a standard for device communication. Essentially it&#8217;s a proposal for interdevice communication, and potentially an easy way for people to define the way devices behave within their own personal ecology of smart devices. It&#8217;s something that&#8217;s in the early stages, and I think the barriers are not technological, the barriers are social. The barriers are understanding what this is for and why to use it. It&#8217;s not about will it work. It&#8217;ll work.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_177pc5g76g5_b.png"><img class="alignnone size-medium wp-image-3213" title="dhj5mk2g_177pc5g76g5_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_177pc5g76g5_b-300x230.png" alt="dhj5mk2g_177pc5g76g5_b" width="300" height="230" /></a></p>
<p><em>Image from Mike&#8217;s ETech presentation &#8211; original image source: Yottamark</em></p>
<p><strong>&#8220;You can, hypothetically, look at any object and know where it was made, what it is made of, what your friends think of it, how much it sells for on Ebay, how to cook it, how to fix it, how to recycle it, whatever. Any information that&#8217;s available about an object can now be available immediately and associated with that object.&#8221; </strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_179fkxx3bg9_b.png"><img class="alignnone size-medium wp-image-3214" title="dhj5mk2g_179fkxx3bg9_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_179fkxx3bg9_b-300x231.png" alt="dhj5mk2g_179fkxx3bg9_b" width="300" height="231" /></a></strong></p>
<p><strong>&#8220;Connect it with location information and you have Location Based Services for anything. This is Cabspotting by Stamen. As Tom Coates says, once we have a handle, you can throw the data around.&#8221; (Kuniavsky)</strong></p>
<p>More to come on Stamen Design later! <a href="http://en.oreilly.com/public/schedule/speaker/2156">Tom Carden</a> (Stamen Design) ran a workshop at ETech 2008, <a id="bcqk" title="&quot;Live, Vast and Deep: Web-native Information Visualization,&quot;" href="http://en.oreilly.com/et2008/public/schedule/detail/1585" target="_blank">&#8220;Live, Vast and Deep: Web-native Information Visualization,&#8221;</a> outlining the process of taking a real data set from an online <span class="caps">API</span> (such as <a href="http://flickr.com/services/api">Flickr</a> or <a href="http://dopplr.pbwiki.com/">Dopplr</a>) and shaping it into an informative, beautiful, and useful interactive graphic presentation. This year, <a href="http://en.oreilly.com/et2009/public/schedule/speaker/3486">Michal Migurski</a> and <a href="http://en.oreilly.com/et2009/public/schedule/speaker/40013">Shawn Allen</a> (both Stamen Design) gave a workshop on <a id="nbzw" title="&quot;Maps from Scratch: Online Maps from the Ground Up.&quot;" href="http://en.oreilly.com/et2009/public/schedule/detail/5555" target="_blank">&#8220;Maps from Scratch: Online Maps from the Ground Up,&#8221;</a> and <a id="k6oi" title="Eric Rodenbeck" href="http://en.oreilly.com/et2009/public/schedule/speaker/2160" target="_blank">Eric Rodenbeck</a>, founder and creative director of Stamen Design, presented <a id="q4up" title="&quot;New Data Visualization: Reaching Through Maps.&quot;" href="http://en.oreilly.com/et2009/public/schedule/detail/5438" target="_blank">&#8220;New Data Visualization: Reaching Through Maps.&#8221;</a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/ercirodenbeckandshawnallenpost.jpg"><img class="alignnone size-medium wp-image-3279" title="ercirodenbeckandshawnallenpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/ercirodenbeckandshawnallenpost-300x199.jpg" alt="ercirodenbeckandshawnallenpost" width="300" height="199" /></a></p>
<p><em>The picture above is of Eric Rodenbeck and Shawn Allen playing bocce.</em></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/feed/</wfw:commentRss>
		<slash:comments>16</slash:comments>
		</item>
	</channel>
</rss>
