<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; AMEE</title>
	<atom:link href="http://www.ugotrade.com/tag/amee/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Total Immersion and the &#8220;Transfigured City:&#8221; Shared Augmented Realities, the &#8220;Web Squared Era,&#8221; and Google Wave</title>
		<link>http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/</link>
		<comments>http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/#comments</comments>
		<pubDate>Sun, 27 Sep 2009 04:42:42 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[3D Interactive Live Show]]></category>
		<category><![CDATA[Acrossair]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[Amphibious Architecture]]></category>
		<category><![CDATA[anime]]></category>
		<category><![CDATA[Apple iPhone]]></category>
		<category><![CDATA[AR baseball cards for Topps]]></category>
		<category><![CDATA[AR Consortium]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR goggles]]></category>
		<category><![CDATA[Architectural League of New York]]></category>
		<category><![CDATA[ARML]]></category>
		<category><![CDATA[ARN]]></category>
		<category><![CDATA[Augmented City]]></category>
		<category><![CDATA[augmented city lab]]></category>
		<category><![CDATA[augmented reality books]]></category>
		<category><![CDATA[augmented reality entrpreneurship]]></category>
		<category><![CDATA[augmented reality goggles]]></category>
		<category><![CDATA[augmented reality making visible the invisible]]></category>
		<category><![CDATA[augmented reality mark-up language]]></category>
		<category><![CDATA[augmented reality pollution meter]]></category>
		<category><![CDATA[augmented reality standards]]></category>
		<category><![CDATA[augmented reality toys]]></category>
		<category><![CDATA[augmented virtuality]]></category>
		<category><![CDATA[Bionic Eye]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Bruno Uzzan]]></category>
		<category><![CDATA[Conflux]]></category>
		<category><![CDATA[cross platform compatibility for augmented reality]]></category>
		<category><![CDATA[D'Fusion]]></category>
		<category><![CDATA[Daniel Wagner]]></category>
		<category><![CDATA[Denno Coil]]></category>
		<category><![CDATA[distributed]]></category>
		<category><![CDATA[elements of networked urbanism]]></category>
		<category><![CDATA[Elizabeth Goodman]]></category>
		<category><![CDATA[everyware]]></category>
		<category><![CDATA[Fish 'n Microchips]]></category>
		<category><![CDATA[Flickr]]></category>
		<category><![CDATA[Gavin Starks]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[geo spatial web]]></category>
		<category><![CDATA[geoAR]]></category>
		<category><![CDATA[geoaugmentation]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Google Wave Protocol]]></category>
		<category><![CDATA[Gov 2.0 Expo Showcase]]></category>
		<category><![CDATA[Gov 2.0 Summit]]></category>
		<category><![CDATA[Graz University of Technology]]></category>
		<category><![CDATA[Imagination]]></category>
		<category><![CDATA[Incheon Free Economic Zone]]></category>
		<category><![CDATA[information shadows]]></category>
		<category><![CDATA[Int13]]></category>
		<category><![CDATA[Interaction Design for Augmented Reality]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[Jeremy Hight]]></category>
		<category><![CDATA[Joe Lamantia]]></category>
		<category><![CDATA[Jonathan Laventhol]]></category>
		<category><![CDATA[Korea's u-Cities]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Layar 3D]]></category>
		<category><![CDATA[magic lens augmented reality]]></category>
		<category><![CDATA[manga]]></category>
		<category><![CDATA[Mark Shepard]]></category>
		<category><![CDATA[Mark Weiser]]></category>
		<category><![CDATA[markerless mobile augmented reality]]></category>
		<category><![CDATA[Metaio]]></category>
		<category><![CDATA[Microsoft Bing]]></category>
		<category><![CDATA[Mike Kuniavsky]]></category>
		<category><![CDATA[Mobilizy]]></category>
		<category><![CDATA[multiuser augmented reality]]></category>
		<category><![CDATA[Natalie Jeremijenko]]></category>
		<category><![CDATA[Natural Fuse]]></category>
		<category><![CDATA[near-field object rcognition and tracking]]></category>
		<category><![CDATA[Networked City]]></category>
		<category><![CDATA[networked urbanism]]></category>
		<category><![CDATA[newer urbanism]]></category>
		<category><![CDATA[open]]></category>
		<category><![CDATA[open augmented reality framework]]></category>
		<category><![CDATA[open augmented reality network]]></category>
		<category><![CDATA[Orange Cone]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[realtime panorama mapping on mobile phones]]></category>
		<category><![CDATA[RobotVision]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[Sentient City Survival Kit]]></category>
		<category><![CDATA[Shangri La]]></category>
		<category><![CDATA[shared augmented realities]]></category>
		<category><![CDATA[Sky Writer]]></category>
		<category><![CDATA[Steven Feiner]]></category>
		<category><![CDATA[symbiosis between augmented reality and brands]]></category>
		<category><![CDATA[the internet of things]]></category>
		<category><![CDATA[the LAN of things]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[the web squared era]]></category>
		<category><![CDATA[ThingM]]></category>
		<category><![CDATA[things as services]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Tim O'Reilly]]></category>
		<category><![CDATA[Tod E. Kurt]]></category>
		<category><![CDATA[Total Immersion]]></category>
		<category><![CDATA[Toward the Sentient City]]></category>
		<category><![CDATA[Transfigured City]]></category>
		<category><![CDATA[twitter]]></category>
		<category><![CDATA[u-City]]></category>
		<category><![CDATA[ubiquitous computing and augmented reality]]></category>
		<category><![CDATA[uCity]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[Weisarian Ubiquitous Computing]]></category>
		<category><![CDATA[Wikitude]]></category>
		<category><![CDATA[xClinic]]></category>
		<category><![CDATA[XMPP versus HTTP]]></category>
		<category><![CDATA[Yochai Benkler]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4439</guid>
		<description><![CDATA[Above is an image from Total Immersion&#8217;s augmented reality experience developed for the &#8220;Networked City&#8221; exhibition in South Korea &#8211; &#8220;a fun scenario created for a u-City&#8217;s infrastructure and city management service&#8221; &#8220;To the naked eye, the exhibit looks like a bare bones model of a city. But when visitors put on the special [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_338cwpzntgp_b.jpg"><img class="alignnone size-medium wp-image-4440" title="dhj5mk2g_338cwpzntgp_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_338cwpzntgp_b-300x170.jpg" alt="dhj5mk2g_338cwpzntgp_b" width="300" height="170" /></a></p>
<p><em>Above is an image from <a href="http://www.t-immersion.com/" target="_blank">Total Immersion&#8217;s</a> augmented reality experience developed for the <a id="winm" title="&quot;Networked City&quot; exhibition in South Korea, &quot;" href="http://www.tomorrowcity.or.kr/sv_web/en_US/space.SpaceInfo.web?targetMethod=DoUe04Sub1" target="_blank">&#8220;Networked City&#8221; exhibition in South Korea</a> &#8211; &#8220;a fun scenario created for a <a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">u-City&#8217;s</a> infrastructure and city management service&#8221;</em></p>
<p><strong>&#8220;To the naked eye, the exhibit looks like a bare bones model of a city. But when visitors put on the special AR goggles a whole new world unfolds &#8211; as graphics overlaid on the city model.</strong><em><strong>&#8221; </strong>(<a href="http://gamesalfresco.com/2009/09/14/total-immersion-brings-augmented-reality-to-tomorowcity-todaytomorrow/" target="_blank">Games Alfresco)</a></em></p>
<p>&#8220;The Networked City&#8221; is a large scale augmented virtuality of a scenario for a networked city. But my guess, reading the <em><a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">Korea IT Times</a></em>, is that the plan is to move from an augmented virtuality to an augmented reality as the Incheon Free Economic Zone (IFEZ) realizes its vision to become a leading u-City &#8211; where reality is turned &#8220;inside out&#8221; (see <a id="x:2w" title="Inside Out Reality" href="http://www.uxmatters.com/mt/archives/2009/08/inside-out-interaction-design-for-augmented-reality.php">Inside Out: Interaction Design for Augmented Reality</a>). If you are not familiar with South Korea&#8217;s u-Cities, <a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">check out this post</a> for a short primer (and note a <a href="http://www.google.com/trends?q=augmented+reality&amp;ctab=1986817859&amp;geo=all&amp;date=all" target="_blank">Google Trends search on Augmented Reality</a> shows South Korea leaving everyone else in the dust).</p>
<h3>Ubiquitous computing and augmented reality are like adenine and thymine &#8211; a DNA base pair.</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-24-at-11.34.35-PM.png"><img class="alignnone size-medium wp-image-4442" title="Screen shot 2009-09-24 at 11.34.35 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-24-at-11.34.35-PM-300x256.png" alt="Screen shot 2009-09-24 at 11.34.35 PM" width="300" height="256" /></a></p>
<p><em>A sky view of Incheon Free Economic Zone (<a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">from Korean IT Times</a>). For more on the IFEZ vision to become a leading u-City <a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">see here</a>.</em></p>
<p><a href="http://www.koreaittimes.com/story/4371/leading-global-u-city" target="_blank">Korea IT Times</a> writes about the u-city concept:</p>
<p><strong>&#8220;Korea began using the term u-City after accepting the concept of ubiquitous computing, a post-desktop model of human-computer interaction created by Mark Weiser, the chief technologist of the Xerox Palo Alto Research Center in California, in 1998. There have been a lot of research in this field since 2002. As a result, many local governments in Korea have applied this concept to various development projects since 2005 based on a practical approach to it.&#8221;</strong></p>
<p>The back story to many of my recent posts, including this one, is an understanding of a relationship between ubiquitous computing and augmented reality that emerged, for me, in a February conversation with Adam Greenfield, <a title="Permanent Link to Towards a Newer Urbanism: Talking Cities, Networks, and Publics with Adam Greenfield" rel="bookmark" href="../../2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/">Towards a Newer Urbanism: Talking Cities, Networks, and Publics with Adam Greenfield</a>. In case you missed it, here is the link again because I think it holds up very well considering the rapid developments of recent months. Also, importantly for this post, it includes a discussion of moving on from Weiserian visions.</p>
<p><a href="http://speedbird.wordpress.com/" target="_blank">Adam Greenfield&#8217;s Speedbird</a> is one of my key sources for understanding &#8220;networked urbanism,&#8221; and the list he makes of <a href="http://speedbird.wordpress.com/2009/03/22/the-elements-of-networked-urbanism/" target="_blank">the elements of networked urbanism here</a> (also see the comments) &#8211; is my mantra for thinking about the DNA base pair relationship of augmented reality and ubiquitous computing.</p>
<p>Adam Greenfield&#8217;s <strong>&#8220;summary of what those of us who are thinking, writing and speaking about networked urbanism seem to be seeing&#8221;</strong> is:</p>
<p><strong>1. From <em>latent</em> to <em>explicit</em>; 2. From <em>browse</em> to <em>search</em>; 3. From <em>held</em> to <em>shared</em>; 4. From <em>expiring</em> to <em>persistent</em>; 5. From <em>deferred</em> to <em>real-time</em>; 6. From <em>passive</em> to <em>interactive</em>; 7. From <em>component</em> to <em>resource</em>; 8. From <em>constant</em> to <em>variable</em>; 9. From <em>wayfinding</em> to <em>wayshowing</em>; 10. From <em>object</em> to <em>service</em>; 11. From <em>vehicle</em> to <em>mobility</em>; 12. From <em>community</em> to <em>social network</em>; 13. From <em>ownership</em> to <em>use</em>; 14. From <em>consumer</em> to <em>constituent</em>.</strong></p>
<h3>Augmented Reality &#8211; Making Visible the Invisible</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-2.44.27-PM.png"><img class="alignnone size-medium wp-image-4509" title="Screen shot 2009-09-26 at 2.44.27 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-2.44.27-PM-300x229.png" alt="Screen shot 2009-09-26 at 2.44.27 PM" width="300" height="229" /></a></p>
<p>The screenshot above is one of the coolest &#8220;making visible the invisible&#8221; AR applications. It was developed at the Columbia University Graphics and User Interface Lab, where <a href="http://www1.cs.columbia.edu/%7Efeiner/" target="_blank">Steven Feiner</a> is Director (see the deep list of projects from the lab <a href="http://graphics.cs.columbia.edu/top.html" target="_blank">here</a>). This app &#8220;shows carbon monoxide levels projected over New York City. The height of each ball reflects concentrations of the pollutant.&#8221; Credit: Sean White and Steven Feiner (<a href="http://www.technologyreview.com/computing/23515/page2/" target="_blank">via Technology Review</a>).</p>
<p>The recent emergence of &#8220;magic lens&#8221; augmented reality apps for our smart phones &#8211; <a href="http://www.wikitude.org/" target="_blank">Wikitude</a>, <a href="http://layar.com/" target="_blank">Layar,</a> <a href="http://www.acrossair.com/" target="_blank">Acrossair</a>, <a href="http://support.sekaicamera.com/">Sekai Camera</a>, and many others now &#8211; has given us a new window into our cities. But we have yet to realize the full potential of the AR/ubicomp base pair that can &#8220;make visible the invisible&#8221; and give us new opportunities to relate to the invisible data ecosystems of our cities, not merely as a spectator experience, but as an interactive, in-context, real-time opportunity to reimagine social relations.</p>
<p><a href="http://www.sentientcity.net/exhibit/?p=3" target="_blank">Mark Shepard</a> says in <a href="http://www.sentientcity.net/exhibit/?p=3" target="_blank">his curatorial statement</a> for <a href="http://www.sentientcity.net/exhibit/" target="_blank">&#8220;Toward the Sentient City&#8221;</a> (much more soon on this very significant exhibit, which runs from Sept. 17th to Nov. 7th, 2009):</p>
<p><strong>&#8220;In place of natural weather systems, however, today we find the dataclouds of 21st century urban space increasingly shaping our experience of this city and the choices we make there.&#8221;</strong></p>
<p>Augmented Reality, as Joe Lamantia points out, is becoming the great &#8220;<a id="o0mh" title="ambassador of ubiquitous computing" href="http://www.uxmatters.com/mt/archives/2009/08/inside-out-interaction-design-for-augmented-reality.php">ambassador of ubiquitous computing</a>.&#8221; AR is <strong>&#8220;&#8230;mak[ing] it possible to experience the new world of ubiquitous computing by reifying the digital layer that permeates our inside-out world,&#8221;</strong> and we are only just glimpsing the razor thin end of the wedge in this regard.</p>
<p>I am still working on my <a href="http://www.gov2summit.com/" target="_blank">Gov 2.0 Summit</a> write-up and, amongst other things, I will talk about how an emerging new social contract around open data, here in the US, will put augmented reality apps center stage &#8211; &#8220;doing stuff that matters.&#8221; At the <a href="http://www.gov2expo.com/gov2expo2009" target="_blank">Gov 2.0 Expo Showcase</a> Tim O&#8217;Reilly tweeted:</p>
<p><a id="i23q" title="Tim O'Reilly" href="http://twitter.com/timoreilly">Tim O&#8217;Reilly</a> Really enjoyed @capttaco (Digital Arch Design) @ #gov20e: &#8220;Augmented Reality could be a new public infrastructure&#8221; <a href="http://bit.ly/18iCx" target="_blank">http://bit.ly/18iCx</a></p>
<p>Also see Tim O&#8217;Reilly and Jennifer Pahlka on Forbes.com discuss <a href="http://www.forbes.com/2009/09/23/web-squared-oreilly-technology-breakthroughs-web2point0.html" target="_blank">The &#8220;Web Squared&#8221; Era</a> &#8211; <strong>&#8220;the Web Squared era is an era of augmented reality arriving (like the sensor revolution) stealthily, in more pedestrian clothes than we expected</strong>.<strong>&#8230; &#8230;our world will have &#8220;<a href="http://www.orangecone.com/archives/2009/02/smart_things_an.html" target="_blank">information shadows</a>.&#8221; Augmented reality amounts to information shadows made visible.&#8221;</strong></p>
<p>Again, there is a back story to how I came to think about information shadows in relation to augmented reality. So in case you missed it the first time, here is the link to a conversation that began in a hallway meeting between Tim O&#8217;Reilly; Mike Kuniavsky, <a href="http://thingm.com/" target="_blank">ThingM</a>; Usman Haque, <a href="http://www.pachube.com/" target="_blank">Pachube</a>; and Gavin Starks, <a href="http://www.amee.com/" target="_blank">AMEE</a>, at <a href="http://en.oreilly.com/et2009/" target="_blank">ETech earlier this year</a>: <a title="Permanent Link to Dematerializing the World, Shadows, Subscriptions and Things as Services: Talking With Mike Kuniavsky at ETech 2009" rel="bookmark" href="../../2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/">&#8220;Dematerializing the World, Shadows, Subscriptions and Things as Services: Talking With Mike Kuniavsky at ETech 2009</a>.&#8221;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-9.32.09-PM.png"><img class="alignnone size-medium wp-image-4547" title="Screen shot 2009-09-26 at 9.32.09 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-9.32.09-PM-300x225.png" alt="Screen shot 2009-09-26 at 9.32.09 PM" width="300" height="225" /></a></p>
<p><a href="http://www.slideshare.net/rlenz/augmented-city-lab-picnic-09" target="_blank">Slide from Augmented City Lab</a> @ <a href="http://www.picnicnetwork.org/" target="_blank">Picnic &#8217;09</a></p>
<h3>So What&#8217;s Next for Mobile Augmented Reality?</h3>
<p><a href="http://www.youtube.com/watch?v=434zw201iN8&amp;feature=player_embedded" target="_blank"><img class="alignnone size-medium wp-image-4513" title="Screen shot 2009-09-26 at 3.45.45 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-3.45.45-PM-300x186.png" alt="Screen shot 2009-09-26 at 3.45.45 PM" width="300" height="186" /></a></p>
<p>These videos from Daniel Wagner&#8217;s team at Graz University of Technology, showing <a href="http://www.youtube.com/watch?v=434zw201iN8&amp;feature=player_embedded" target="_blank">Realtime Panorama Mapping and Tracking on Mobile Phones</a> and <a href="http://www.youtube.com/watch?v=W-mJG3peIXA&amp;feature=player_embedded" target="_blank">Creating an Indoor Panorama in Realtime</a>, as Rouli from Games Alfresco points out, indicate that there is a lot in store for us at <a href="http://www.icg.tugraz.at/Members/daniel/MultipleTargetDetectionAndTrackingWithGuaranteedFrameratesOnMobilePhones/inproceedings_view">ISMAR09</a>.</p>
<p>We may not be so impressed by directory-style/&#8220;post it&#8221; AR anymore, as these applications have become commonplace so quickly! But while these early mobile AR apps may be disappointing in relation to some futurist visions of AR &#8211; merely AR/ubicomp appetizers &#8211; there are still good implementations of this model coming out (see newcomers to the app store <a id="tzvf" title="Bionic Eye" href="http://mashable.com/2009/09/24/bionic-eye/" target="_blank">Bionic Eye</a> and <a href="http://www.readwriteweb.com/archives/robotvision_a_bing-powered_iphone_augmented_realit.php" target="_blank">RobotVision</a>). And <a href="http://layar.com/" target="_blank">Layar</a>, always on the ball, has upped the ante for the new cohort of AR browsers with <a href="http://layar.com/3d/" target="_blank">Layar 3D</a>.</p>
<p>But as Bruce Sterling <a href="http://www.wired.com/beyond_the_beyond/2009/09/augmented-reality-robotvision/" target="_blank">notes here</a>:</p>
<p><strong>&#8220;In AR, everybody wants to be the platform and the browser, and nobody wants to be the boring old geolocative database. Look how Tim [creator of RobotVision] here, who is like one guy working on his weekends, can boldly fold-in the multi-billion dollar, multi-million user empires of Apple iPhone, Microsoft Bing, Flickr, and Twitter, all under his right thumb.&#8221;</strong></p>
<p>(watch the <a id="qxek" title="video here" href="http://www.youtube.com/watch?v=hWC9gax7SCA&amp;feature=player_embedded">video here</a>)</p>
<p>But if you are looking for something more from AR, you probably won&#8217;t have to wait too long. The two pioneering companies in AR, <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a> &#8211; founded in 1999 &#8211; and <a href="http://www.metaio.com/" target="_blank">Metaio</a> &#8211; founded in 2003 &#8211; are both coming out with &#8220;mobile augmented reality platforms&#8221; in a matter of weeks (see press releases <a href="http://augmented-reality-news.com/2009/09/14/bringing-its-augmented-reality-to-mobile-applications-total-immersion-partners-with-smartphones-app-provider-int13/" target="_blank">here</a> and <a href="http://gamesalfresco.com/2009/09/18/metaio-announcing-mobile-augmented-reality-platform-junaio/" target="_blank">here</a>). And both companies, it seems, will deploy much more sophisticated AR rendering and tracking than we have seen to date.</p>
<p>I approached Bruno Uzzan, founder and CEO of Total Immersion, for an interview as part of my look at the new industry of augmented reality through the eyes of the founding members of the <a href="http://www.arconsortium.org/" target="_blank">AR Consortium</a>. These consortium members are some of the first commercial augmented reality companies.</p>
<p><a href="#jumpto">The interview below</a> with Bruno began early this summer; then we both went on vacation, and it picks up after the announcement of the <a href="http://www.int13.net/blog/en/" target="_blank">partnership between Total Immersion and Int13</a>.</p>
<p>The significance of this announcement is that Total Immersion is now positioned to take the augmented reality experiences they have developed for a number of top brands onto multiple mobile platforms with &#8220;<strong>Int13&#8217;s very clever embedded solution that allows our [Total Immersion's] solutions to work across many [mobile] platforms,&#8221;</strong> while Int13 gets to extend their reach.</p>
<p>Total Immersion has a 50-person R&amp;D team, and their two main focuses have been, firstly, getting:</p>
<p><strong>&#8220;Augmented Reality to work with as many platforms as possible &#8211; PC, Mac, Mobile, Game Consoles, all those are the platforms that we are targeting. We are currently doing lot of work in the R &amp; D team in cross platform compatibility&#8230;.&#8221;</strong></p>
<p>and, secondly:</p>
<p><strong>&#8220;Our R&amp;D guys are working on the real world interacting more with the virtual world. And I have started seeing some results which are pretty much crazy and this will be ready for next year.&#8221;</strong></p>
<h3>Pandora&#8217;s Box &#8211; Shared Augmented Realities</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-25-at-1.18.15-AM.png"><img class="alignnone size-medium wp-image-4450" title="Screen shot 2009-09-25 at 1.18.15 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-25-at-1.18.15-AM-186x300.png" alt="Screen shot 2009-09-25 at 1.18.15 AM" width="186" height="300" /></a></p>
<p>Spes or &#8220;Hope&#8221;; <a title="Engraving" href="http://en.wikipedia.org/wiki/Engraving">engraving</a> by <a title="Sebald Beham" href="http://en.wikipedia.org/wiki/Sebald_Beham">Sebald Beham</a>, German c1540 (see <a href="http://en.wikipedia.org/wiki/Pandora%27s_box" target="_blank">Wikipedia article on Pandora&#8217;s Box</a>)</p>
<p>There are many weaknesses to the mobile smart phone AR experiences we have now, and the lack of near field object recognition (to date) and difficulties with accurate positioning aren&#8217;t the only ones. Note, re solving positioning problems in mobile AR: we have yet to see AR leverage public libraries for analyzing scenes, like Flickr&#8217;s geo-tagged photos &#8211; see Aaron Straup Cope&#8217;s work on <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha.&#8221;</a> And for more on this, see <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">my post here</a>.</p>
<p>But, as Joe Lamantia points out:</p>
<p><strong>&#8220;One of the weakest aspects of the existing interaction patterns for augmented reality is their reliance on single-person, socially disconnected user experiences.&#8221;</strong></p>
<p>In my view, <strong>The Pandora&#8217;s Box of Augmented Realities</strong> is an open, distributed, multiuser augmented reality framework, fully integrated with the internet and world wide web.</p>
<p>As Yochai Benkler has pointed out many times, and argues again in <a href="Capital, Power, and the Next Step in Decentralization" target="_blank">Capital, Power, and the Next Step in Decentralization</a>, it is &#8220;open, collaborative, distributed practices that have been at the core of what made the Internet.&#8221; We have to try to make sure that open, collaborative, distributed practices are at the core of mobile augmented reality.</p>
<h3>Can Google Wave be the basis for an Open, Distributed, Multiuser Augmented Reality Framework?</h3>
<p><a href="http://www.lostagain.nl/tempspace/PrototypeDiagram.html" target="_blank"><img class="alignnone size-medium wp-image-4492" title="Screen shot 2009-09-25 at 11.51.20 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-25-at-11.51.20-PM-300x141.png" alt="Screen shot 2009-09-25 at 11.51.20 PM" width="300" height="141" /></a></p>
<p>I have been exploring the idea of using the <a href="http://wave.google.com/" target="_blank">Google Wave</a> protocol as the basis for a distributed, multiuser open augmented reality framework with a small group of AR enthusiasts and developers. And I am happy to say the proposal is beginning to get fleshed out a little. New collaborators are welcome, both for &#8220;gear heady&#8221; input and use case suggestions (but re the latter, you can&#8217;t just say everything you see in <a href="http://en.wikipedia.org/wiki/Denn%C5%8D_Coil" target="_blank">Denno Coil</a>..!).</p>
<p>This effort started with Thomas Wrobel&#8217;s proposal for an Open AR Framework prototyped on IRC &#8211; see <a id="s336" title="here" href="../../2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/">here</a>, and click to enlarge the image above of <a href="http://www.lostagain.nl/tempspace/PrototypeDiagram.html" target="_blank">&#8220;Sky Writer: Basic Concept for an Open Multi-source AR Framework.&#8221;</a></p>
<p>But recently we began looking at the <a href="http://www.waveprotocol.org/" target="_blank">Wave Federation Protocol</a>. And, if you check out <a id="ogbq" title="this post," href="http://www.jasonkolb.com/weblog/2009/09/why-google-wave-is-the-coolest-thing-since-sliced-bread.html#more" target="_blank">this post</a> and <a id="c0ep" title="this post" href="http://reuvencohen.sys-con.com/node/980762" target="_blank">this post</a>, you may get a glimpse of why the Google Wave protocol might be a good basis for an open, distributed AR Framework. You will notice, if you study what Google Wave has done with the XMPP protocol, that many of <a href="http://speedbird.wordpress.com/2009/03/22/the-elements-of-networked-urbanism/" target="_blank">the elements of networked urbanism</a> that Adam Greenfield describes resonate strongly with what is being attempted in Wave.</p>
<p>But enough said for now! Regardless of the details of implementation &#8211; Google Wave or an AR protocol built from scratch (phew! the latter does seem like a lot of work) &#8211; an open, distributed, multiuser AR framework integrated with the internet and web would explode the potential of AR, creating new possibilities for data flows, mashups, and shared augmented realities.</p>
<p>And we are excited by Google Wave because, as Thomas puts it:</p>
<p><strong>&#8220;The really great thing wave does &#8230;.(aside from being an open standard backed by a major player&#8230;hopefully leading to thousands of worldwide servers )&#8230;.is that it allows anyone to create any number of waves, set precisely who can view or edit them, and for them to be able to be updated quickly and continuously (and even simultaneously!)</strong><strong> Better yet, changes will (if necessary) propagate to all the other servers sharing that wave. It does all this right now. From my eyes this does a lot of the work of an AR infrastructure already.</strong></p>
<p><strong>I can&#8217;t see any other protocol actually doing anything like this at the moment, although correct me if I&#8217;m wrong, as alternatives are always welcome :)&#8221;</strong></p>
<p>Also, Thomas notes, <strong>&#8220;even the playback system (that is, the ability to playback the changes made to a wave since its creation)&#8230; this could give us automatically some of the ideas Jeremy Hight has mentioned in <a href="http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" target="_blank">his visionary work here</a> and <a href="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" target="_blank">here</a> on &#8220;the geo spatial web, interlinked locations and data, immersive augmentation and open source geo augmentation.&#8221;</strong></p>
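<p>To make this concrete, here is a rough, hypothetical sketch of how an AR annotation could be modeled as a wave-like document: a participant list controlling who can edit, a stream of changes that propagates, and a replayable history like Wave&#8217;s playback feature. The names (<code>ARWave</code>, <code>Delta</code>) are invented for illustration &#8211; none of this is real Google Wave API:</p>

```python
# Hypothetical sketch: an AR annotation as a wave-like document.
# Names (ARWave, Delta) are illustrative, not any real Wave API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Delta:
    author: str
    op: str          # e.g. "move", "retext"
    payload: dict

@dataclass
class ARWave:
    wave_id: str
    participants: set = field(default_factory=set)
    history: List[Delta] = field(default_factory=list)

    def apply(self, delta: Delta):
        # Only listed participants may edit, mirroring Wave's
        # per-wave access control.
        if delta.author not in self.participants:
            raise PermissionError("not a participant")
        self.history.append(delta)

    def playback(self):
        # Replay every change since creation ("playback").
        return list(self.history)

wave = ARWave("ar!tag@example.com", participants={"alice@example.com"})
wave.apply(Delta("alice@example.com", "move", {"lat": 40.7, "lon": -73.9}))
print(len(wave.playback()))  # 1
```

In a federated setting, each `apply` would also be forwarded to every other server sharing the wave; the point of the sketch is only that the wave abstraction already carries identity, access control, and history.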
<p>One of the many reasons why an open, distributed AR framework would be so cool is that it would open up all kinds of possibilities for <span>GeoAR</span>, by providing the over-arching standard protocol for communicating updates that the sub-standards facilitating <span>GeoAR</span> will need.</p>
<p>Also important to note: the <a id="o0is" title="Wave Federation Protocol docs which are all publicly available here" href="http://www.waveprotocol.org/" target="_blank">Wave Federation Protocol</a> allows anyone:</p>
<p><strong>&#8220;to run wave servers and become wave providers, for themselves, or as services for their users, and to &#8220;federate&#8221; waves, that is, to share waves with each other and with Google Wave. &#8211; &#8220;the federation gateway and a federation proxy and is based on open extension to <a href="http://www.waveprotocol.org/draft-protocol-spec#RFC3920">XMPP core</a> [RFC3920] protocol to allow near real-time communication between two wave servers.&#8221; See Reuven Cohen&#8217;s blog for more <a id="rmr3" title="here" href="http://reuvencohen.sys-con.com/node/980762" target="_blank">here</a> and <a id="mqxr" title="&quot;HTTP is Dead, Long Live the Real Time Cloud.&quot;" href="http://www.elasticvapor.com/2009/05/http-is-dead-long-live-realtime-cloud.html" target="_blank">here, &#8220;HTTP is Dead, Long Live the Real Time Cloud.&#8221;</a></strong></p>
<p>Still, some people have expressed concern that an AR framework using the Google Wave protocol would give Google disproportionate influence. Will Google-specific functionality be an issue? How much stuff is Google-specific just because no one else is using it (yet)? And how much is Google-specific because it holds no value to anyone but Google? These are some of the questions that have come up.</p>
<p>You are going to see a variety of suggestions for standards and specs for open AR coming out in the next few months, which, as Robert Rice of the <a href="http://www.arconsortium.org/" target="_blank">AR Consortium</a> points out, is <strong>&#8220;a good thing, we need that competition early on to settle down on best case.&#8221;</strong> Recently, <a href="http://www.mobilizy.com/" target="_blank">Mobilizy</a> has offered up ARML (&#8220;an augmented reality mark-up language specification based on the OpenGIS&#174; KML Encoding Standard (OGC KML) with extensions&#8221;) for consideration &#8211; see <a href="http://www.mobilizy.com/enpress-release-mobilizy-proposes-arml" target="_blank">here</a>.</p>
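<p>To get a feel for what a KML-grounded spec like ARML starts from, here is a minimal sketch that generates a plain OGC KML Placemark with Python&#8217;s standard library. The ARML-specific extension elements are defined in Mobilizy&#8217;s proposal and are not reproduced here &#8211; this is vanilla KML only, with an example landmark chosen for illustration:</p>

```python
# Minimal sketch: generate an OGC KML Placemark, the building block
# that a KML-based AR markup like ARML extends. Plain KML only; the
# ARML extension elements live in Mobilizy's spec, not here.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)  # serialize without a prefix

def placemark(name, lon, lat):
    kml = ET.Element(f"{{{KML_NS}}}kml")
    pm = ET.SubElement(kml, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    pt = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    # KML coordinates are lon,lat,altitude
    ET.SubElement(pt, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

print(placemark("Empire State Building", -73.9857, 40.7484))
```

A mobile AR browser would attach tracking and rendering hints to such a placemark; the geographic anchor itself is just this.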
<p>So it is, perhaps, also important to note that an Open AR Framework should be neutral and transparent with respect to techniques of &#8220;reality recognition&#8221; and methodologies of registration/tracking, allowing different ones to work on the system as new techniques evolve, and supporting as many evolving standards as possible.</p>
<p>Augmented reality developers, like Total Immersion and others with powerful rendering/tracking AR software, should be able to use an Open AR Framework to exchange the data their tracking will use. And the tracking/rendering problems they and other researchers have solved are much harder than figuring out data exchange on a standard infrastructure or protocol!</p>
<p>So I pricked up my ears when I heard Bruno Uzzan, CEO of <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a> &#8211; the first and currently the largest augmented reality company, with a 50-person R&amp;D team in France and offices in LA, where Bruno himself is now based &#8211; say: <strong>&#8220;Total Immersion is only months away from launching shared mobile augmented reality experiences using near field object recognition/tracking across multiple platforms&#8221;</strong> (for more details read my conversation with Bruno Uzzan <a href="#jumpto">below</a>).</p>
<p>I was happy when I asked Bruno about the possibilities for developing an open, distributed, multiuser augmented reality framework fully integrated with the internet and world wide web (possibly using Google Wave protocols), and he replied:</p>
<p><span id="pnk:" title="Click to view full content"><strong>&#8220;I think this is feasible. I think that&#8217;s doable, that&#8217;s just in my opinion. I mean some people might have another kind of opinion but I think that that&#8217;s definitely doable.&#8221;</strong></span></p>
<h3>Total Immersion &#8211; working with the &#8220;symbiosis between augmented reality and brands&#8221;</h3>
<p><a href="http://www.youtube.com/watch?v=I7jm-AsY0lU" target="_blank"><img class="alignnone size-medium wp-image-4457" title="dhj5mk2g_344g64g96cq_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_344g64g96cq_b-300x224.png" alt="dhj5mk2g_344g64g96cq_b" width="300" height="224" /></a></p>
<p>Total Immersion has created many of the best known and most ambitious augmented reality experiences for major brands to date, including Mattel&#8217;s <a title="new toys" href="http://www.readwriteweb.com/archives/mattels_new_web-enabled_avatar_toys_will_offer_augmented_reality.php">new AR toys</a> to be released in conjunction with the James Cameron film Avatar, <a id="dmas" title="AR baseball cards for Topps" href="http://www.youtube.com/watch?v=I7jm-AsY0lU">AR baseball cards for Topps</a>, <a href="http://www.youtube.com/watch?v=I7jm-AsY0lU" target="_blank">video here</a> (or click screenshot above), and the <a href="http://www.publishersweekly.com/article/CA6698612.html?industryid=47152" target="_blank">UK&#8217;s first augmented reality books</a>.</p>
<p>Bruno founded Total Immersion 10 years ago when he was just 27. And the kind of conviction it took to survive as an augmented reality business in the decade before augmented reality captured the world&#8217;s attention is remarkable.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_343dbsph2fz_b1.png"><img class="alignnone size-medium wp-image-4456" title="dhj5mk2g_343dbsph2fz_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_343dbsph2fz_b1-300x225.png" alt="dhj5mk2g_343dbsph2fz_b" width="300" height="225" /></a></p>
<p>AR&#8217;s first steps out into the world after 17 years as predominantly a lab science may be &#8220;wobbly&#8221; (what new technology isn&#8217;t?), and sometimes gloriously kitsch &#8211; check out <a id="d_eu" title="the riotous video of an AR Live Show Total Immersion produced in Korea here." href="http://www.t-immersion.com/en,video-gallery,36.html" target="_blank">this riotous video of the 3D Interactive Live Show Total Immersion produced in Korea</a> (also see the <a href="http://augmented-reality-news.com/2009/09/15/entertainment-first-interactive-3d-live-show-now-open-in-south-korea/" target="_blank">Total Immersion Augmented Reality Blog</a> for more on TI&#8217;s turnkey Interactive 3D Live Show Solution).</p>
<p>As Lamantia points out <a id="eo6x" title="here" href="http://www.uxmatters.com/mt/archives/2009/08/inside-out-interaction-design-for-augmented-reality.php" target="_blank">here</a>, &#8220;projecting mixed realities into public, common, or social spaces makes them social by default.&#8221;</p>
<p>However, the potential for shared location-based augmented reality experiences is as yet untapped. So I see the entry of the most experienced commercial augmented reality company into mobile as pretty interesting. While smart phone AR still has significant limitations, and it certainly does differ from some of the futurist dreams of AR (see <a id="x3:y" title="Mok Oh's post here on his disappointment in this regard" href="http://allthingsv.com/2009/09/03/you-know-what-really-grinds-my-gears-augmented-reality/">Mok Oh&#8217;s post here on his disappointment in this regard</a>), it is significant that Total Immersion is committing to becoming a leader in mobile AR.</p>
<p>Our smart phones, the powerful networked sensor devices that so many people carry in their pockets, have proved themselves a &#8220;good enough for now&#8221; mediating device for early manifestations of the ubiquitous computing and augmented reality base pair. And now that AR and ubicomp are mixed into the rich, messy soup of everyday life, commerce, business, marketing, art, entertainment, and government, we should get ready to see these technologies grow up fast, and unfold in some surprising ways that lab science didn&#8217;t necessarily predict.</p>
<p>And, perhaps, the new dialogue between scientists and entrepreneurs may spur both communities to outdo themselves.</p>
<p>Particularly, as <a href="http://programmerjoe.com/" target="_blank">Joe Ludwig</a> notes: &#8220;It seems to me that the biggest disconnect between the academics and the entrepreneurs is that they disagree on how far we are from the finish line.&#8221;</p>
<p>See the comments on Ori Inbar&#8217;s post, <a title="Augmented Reality Entrepreneurship: Natural Evolution or Intelligent Design?" rel="bookmark" href="http://gamesalfresco.com/2009/09/22/augmented-reality-entrepreneurship-natural-evolution-or-intelligent-design/">Augmented Reality Entrepreneurship: Natural Evolution or Intelligent Design?</a>, for a courteous but spirited discussion of the potential benefits and frictions of the newly expanded AR community of researchers and entrepreneurs.</p>
<p>As <a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre </a>(see my long conversation with Blair<a href="http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/" target="_blank"> here</a>) notes:</p>
<p><strong>&#8220;not all academics and researchers are only interested in the traditional models of impact. Case in point: I wouldn&#8217;t be building unpublishable games, nor investing so much time talking to the press, entrepreneurs and VCs if I did not believe strongly in the value of the impact I am having by doing that &#8212; and I know others with the same attitude.&#8221;</strong></p>
<p>In this vein, check out the Marble Game (<a href="http://www.youtube.com/watch?v=6AKgH4On65A&amp;feature=player_embedded" target="_blank">video here</a>) developed by Steve Feiner and his team at Columbia University. It&#8217;s enabled by Goblin XNA, an open source AR framework built on top of Microsoft&#8217;s XNA, which powers Xbox Live games, Zune games, and some Windows games. For more about Goblin XNA and AR from Columbia, <a href="http://graphics.cs.columbia.edu/projects/goblin/index.htm" target="_blank">see here</a>. (Hat tip to <a href="http://www.oreillynet.com/pub/au/125" target="_blank">Brian Jepson</a> for this link.)</p>
<p><a href="http://www.youtube.com/watch?v=6AKgH4On65A&amp;feature=player_embedded" target="_blank"><img class="alignnone size-medium wp-image-4528" title="Screen shot 2009-09-26 at 5.16.56 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-5.16.56-PM-300x182.png" alt="Screen shot 2009-09-26 at 5.16.56 PM" width="300" height="182" /></a></p>
<p>We are still waiting for the kind of sexy AR specs that might get wide adoption &#8211; nothing totally game-changing in <a href="http://gigantico.squarespace.com/336554365346/2009/9/20/eye-for-an-iphone.html" target="_blank">Gigantico&#8217;s AR eyewear roundup</a> (though <a href="http://appft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&amp;Sect2=HITOFF&amp;d=PG01&amp;p=1&amp;u=%2Fnetahtml%2FPTO%2Fsrchnum.html&amp;r=1&amp;f=G&amp;l=50&amp;s1=%2220080088937%22.PGNR.&amp;OS=DN/20080088937&amp;RS=DN/20080088937" target="_blank">maybe note this Apple patent</a>). But at least researchers are not afraid to explore the possibilities of AR goggles.</p>
<p>But how far are we now, with or without sexy goggles, from a fuller expression of the base pair DNA of ubiquitous computing and augmented reality?</p>
<h3>We may have a LAN of things before we have an Internet of Things</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_345g9bxbwd3_b1.jpg"><img class="alignnone size-medium wp-image-4534" title="dhj5mk2g_345g9bxbwd3_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_345g9bxbwd3_b1-300x199.jpg" alt="dhj5mk2g_345g9bxbwd3_b" width="300" height="199" /></a></p>
<p><em>The picture above is from a workshop I attended at <a href="http://confluxfestival.org/2009/about/" target="_blank">Conflux</a> last weekend &#8211; <a href="http://confluxfestival.org/2009/events/workshops/natalie-jeremijenko/" target="_blank">Fish &#8216;n microChips</a>, with <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko</a>. We are at the site of the <a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">Amphibious Architecture</a> project (a commissioned work for <a href="http://www.sentientcity.net/exhibit/?cat=3" target="_blank">Toward the Sentient City</a>) and &#8220;a collaborative project with <a href="http://www.environmentalhealthclinic.net/environmental-health-clinic/" target="_blank">xClinic</a>, The Living and other intelligent creatures.&#8221;</em></p>
<p>We are probably as far off some of the grand futurist visions of ubiquitous computing as we are off some of the futurist visions of augmented reality. But as it turns out, that may not be a bad thing! Recently, <a href="http://twitter.com/mikekuniavsky" target="_blank">@mikekuniavsky</a> noted in a tweet:</p>
<p><span><span>&#8220;Another argument for the LAN of Things before the Internet of Things: <a rel="nofollow" href="http://tinyurl.com/lgp9uq" target="_blank">http://tinyurl.com/lgp9uq&#8221;</a></span></span></p>
<p><span><span>Bert Moore, <a href="http://www.aimglobal.org/members/news/templates/template.aspx?articleid=3553&amp;zoneid=24" target="_blank">in the article Mike linked to</a>, points out that the grand vision of an &#8220;internet of things&#8221; with everything connected to everything can &#8220;distract people from thinking about the benefits of RFID in smaller, more easily implemented and cost-justified applications.&#8221; The same argument, I think, applies to sensor networks and augmented reality.</span></span></p>
<p>In New York City, a series of commissioned works for the <a href="http://www.archleague.org/" target="_blank">Architectural League of New York&#8217;s</a> exhibit, <a href="http://www.sentientcity.net/exhibit/?cat=3" target="_blank">&#8220;Toward the Sentient City,&#8221;</a> is giving us the opportunity to dip our toes into the ocean of a &#8220;networked urbanism.&#8221; On only a small budget, two of the <a href="http://www.sentientcity.net/exhibit/?cat=4" target="_blank">five commissioned works</a>, <a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">Amphibious Architecture</a> and <a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a>, demonstrate how sensor networks can allow us to explore new kinds of communities &#8211; connecting people to environments in interesting ways to create new forms of social agency.</p>
<p><a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">&#8220;Amphibious Architecture&#8221;</a> &#8211; from The Living Architecture Lab at Columbia University Graduate School of Architecture, Planning and Preservation (directors David Benjamin and Soo-in Yang) and Natalie Jeremijenko of the Environmental Health Clinic at New York University &#8211; uses a skillfully built array of partially submerged sensors (electronics and water are notoriously hard to mix) to pierce the blinding, reflective surfaces of the rivers surrounding Manhattan and to create a new two-way relationship with the ecosystem below &#8211; the water, our neighbors the fish, and even a beaver that lives in the water surrounding Manhattan.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-6.34.56-PM.png"><img class="alignnone size-medium wp-image-4536" title="Screen shot 2009-09-26 at 6.34.56 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-6.34.56-PM-300x125.png" alt="Screen shot 2009-09-26 at 6.34.56 PM" width="300" height="125" /></a></p>
<p><em>Image from <a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">Toward the Sentient City</a></em></p>
<p>In a similar spirit, &#8220;<a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a>&#8221; &#8211; Usman Haque, creative director; Nitipak &#8216;Dot&#8217; Samsen, designer; Ai Hasegawa, designer; Cesar Harada, designer; Barbara Jasinowicz, producer &#8211; creates a network of people and electronically assisted plants to explore what it takes to work together on energy consumption, and to experience the consequences of &#8220;selfish&#8221; and &#8220;unselfish&#8221; behavior interactively before it is too late to modify our actions.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-6.55.29-PM.png"><img class="alignnone size-thumbnail wp-image-4537" title="Screen shot 2009-09-26 at 6.55.29 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-6.55.29-PM-150x150.png" alt="Screen shot 2009-09-26 at 6.55.29 PM" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-9.37.06-PM.png"><img class="alignnone size-thumbnail wp-image-4548" title="Screen shot 2009-09-26 at 9.37.06 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/Screen-shot-2009-09-26-at-9.37.06-PM-150x150.png" alt="Screen shot 2009-09-26 at 9.37.06 PM" width="150" height="150" /></a></p>
<p><em>The &#8220;Greedy Switch&#8221; from <a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a> on the left. On the right, &#8220;The System&#8221; &#8211; click to enlarge.</em></p>
<p>Much more to come in another post on these works and &#8220;Toward the Sentient City.&#8221; Also an update on how <a href="http://www.pachube.com/">Pachube</a> &#8211; an important part of both these projects and a very important contribution to ubiquitous computing, because it creates the opportunity to connect environments and create mashups from diverse sensor data feeds &#8211; has matured since my interview with Pachube founder Usman Haque, <a href="http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">&#8220;Pachube, Patching the Planet,&#8221;</a> in January this year.</p>
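<p>The mashup idea is simple to sketch. The feed shapes below (the ids, <code>datastreams</code>, and <code>current_value</code> fields, and the values themselves) are a simplified stand-in invented for illustration, not Pachube&#8217;s exact schema or live data; the point is just that once environments publish numeric feeds, combining them is trivial:</p>

```python
# Sketch of the kind of mashup a sensor-feed hub like Pachube enables:
# merging datastreams from two environment feeds into one view.
# The feed structure here is a simplified, invented stand-in.
east_river = {"id": 504, "datastreams": [
    {"id": "dissolved_oxygen", "current_value": "6.2"}]}
natural_fuse = {"id": 2232, "datastreams": [
    {"id": "co2_balance", "current_value": "-0.4"}]}

def values(feed):
    # Flatten a feed into {datastream_id: numeric value}.
    return {d["id"]: float(d["current_value"]) for d in feed["datastreams"]}

# A "mashup" is then just a merge of the flattened feeds.
combined = {**values(east_river), **values(natural_fuse)}
print(combined)  # {'dissolved_oxygen': 6.2, 'co2_balance': -0.4}
```

A real client would fetch each feed over HTTP from the hub&#8217;s API rather than use inline dictionaries, but the combining step looks the same.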
<p>In the picture above, <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko</a> and <a id="r_oi" title="Jonathan Laventhol, Imagination" href="http://www.laventhol.com/about" target="_blank">Jonathan Laventhol</a> give the <a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">Amphibious Architecture</a> sensor array a last look over before it is lowered into the East River. Jonathan was on a busman&#8217;s holiday to help out at the pre-launch of Amphibious Architecture, near the Manhattan Bridge, NYC.</p>
<p>I was very happy to get a chance to talk to <a id="r_oi" title="Jonathan Laventhol, Imagination" href="http://www.laventhol.com/about" target="_blank">Jonathan Laventhol</a> &#8211; more on our conversation in another post. Jonathan is <a href="http://www.laventhol.com/about" target="_blank">CTO of Imagination, one of the world&#8217;s leading design, events, and branding agencies</a>. We talked about the importance of <a href="http://www.pachube.com/" target="_blank">Pachube</a>, which Jonathan called &#8220;The Facebook of Data,&#8221; and how the <strong>symbiosis between brands and augmented reality</strong>, along with healthcare applications, would be key to augmented reality emerging into the mainstream.</p>
<p><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_340djvd2thc_b.jpg"><img class="alignnone size-medium wp-image-4453" title="dhj5mk2g_340djvd2thc_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_340djvd2thc_b-235x300.jpg" alt="dhj5mk2g_340djvd2thc_b" width="235" height="300" /></a></em></p>
<p>Natalie Jeremijenko&#8217;s workshop at Conflux on the social negotiation of technology, and how <a href="http://speedbird.wordpress.com/my-book-everyware-the-dawning-age-of-ubiquitous-computing/" target="_blank">&#8220;everyware&#8221;</a> can give us the chance to experience new forms of agency and connection, was totally inspiring. I will cover this too in another post. I have so much awesome stuff to write about at the moment!</p>
<p>None of the projects in, &#8220;Toward the Sentient City,&#8221; included a mobile augmented reality, or &#8220;magic lens&#8221; component, but they all pointed to why &#8220;enchanted windows into our newly inside-out reality&#8221; are going to be so important. And why the DNA base pair of ubicomp and augmented reality can really do stuff that matters.</p>
<h3>Shangri-La &#8211; &#8220;Transfigured City&#8221;</h3>
<p><a href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/"><img class="alignnone size-medium wp-image-4452" title="dhj5mk2g_342g43n6w7k_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/dhj5mk2g_342g43n6w7k_b-300x249.png" alt="dhj5mk2g_342g43n6w7k_b" width="300" height="249" /></a></p>
<p><em>Screenshot from the <a href="http://en.wikipedia.org/wiki/Shangri-La_%28novel%29" target="_blank">Shangri-La</a> episode <a id="cwnc" title="The Transfigured City," href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/" target="_blank">Transfigured City</a></em></p>
<p>In my AR Consortium founder member interview series, I have found that, understandably, the visionary founders of these first augmented reality companies are a little reticent about sharing their full vision. They are basically in stealth mode in this regard. Since you will not get a fully drawn scenario of his vision for a next generation of shared augmented reality experiences from my interview with <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a> founder and CEO Bruno Uzzan, here&#8217;s an interesting episode from the anime Shangri-La, <a id="cwnc" title="The Transfigured City," href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/" target="_blank">Transfigured City</a>, to mull over instead.</p>
<p>As you can tell from this rather long and circuitous intro to my conversation with Bruno Uzzan, I have been investigating shared augmented realities pretty intensively recently. Mike Kuniavsky pointed me to <em><a href="http://en.wikipedia.org/wiki/Shangri-La_%28novel%29" target="_blank">Shangri-La</a></em> and <a id="cwnc" title="The Transfigured City," href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/" target="_blank">Transfigured City</a> in a conversation with Mark Shepard, after Mark&#8217;s presentation at Conflux, <a href="http://confluxfestival.org/2009/events/workshops/mark-shepard/" target="_blank">Sentient City Survival Kit</a>.</p>
<p><a href="http://thingm.com/about-us/team/mike-kuniavsky.html">Mike Kuniavsky</a>, with <a href="http://thingm.com/about-us/team/tod-e-kurt.html">Tod E. Kurt</a>, is founder of <a href="http://thingm.com/home.html" target="_blank">ThingM</a>, a ubiquitous computing device studio. Mike also researches, designs, and writes about people&#8217;s experiences at the intersection of technology and everyday life &#8211; see his blog, <a href="http://www.orangecone.com/" target="_blank">Orange Cone</a>. I interviewed Mike at ETech &#8211; see <a href="../../2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/" target="_blank">here</a>.</p>
<p>In <a id="cwnc" title="The Transfigured City," href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/" target="_blank">Transfigured City</a>, the &#8220;Metal Age&#8221; group has to figure out how to share and communicate in a city transfigured by augmented realities/virtualities, where no one sees the same place in the same way. Only one character can figure out, from her previous experience of the city, the relationship between the transfigured city and how it used to be.</p>
<p>The conversation I had with <a href="http://www.orangecone.com/" target="_blank">Mike Kuniavsky</a> on <a id="cwnc" title="The Transfigured City," href="http://www.kazeebo.com/view/17506/shangrila-episode-14-transfigured-city/" target="_blank">The Transfigured City</a> continued at a picnic in Washington Square Park the next day with Elizabeth Goodman, whom I met at ETech when she gave a brilliant presentation, <a id="eag1" title="Designing for Urban Green Space" href="http://en.oreilly.com/et2009/public/schedule/detail/5562" target="_blank">Designing for Urban Green Space</a>. We covered so many areas at the picnic related to ubiquitous computing and augmented realities that this conversation probably deserves a post of its own (my writing to-do list is growing longer!).</p>
<p><a id="on28" title="The Plot Synopsis for Shangri La" href="http://en.wikipedia.org/wiki/Shangri-La_%28novel%29" target="_blank">The Plot Synopsis for Shangri La</a>:</p>
<p><strong>&#8220;In the mid-21st century, the international committee decided to forcefully reduce CO2 emission levels to mitigate the global warming crisis. As a result, the economic market was transferred mainly into the trade of carbon. A great earthquake destroys much of Japan, yet the carbon tax placed on the country is not lifted, so Tokyo is turned into the world&#8217;s largest &#8220;jungle-polis&#8221; that absorbs carbon dioxide. Project Atlas is commenced to plan the rebuilding of Tokyo and oversee the government organization, which the Metal Age group opposes due to its oppressive nature. However, Atlas is only built with enough room for 3,500,000 people and most people are not allowed to migrate into the city. The disparity between the elite within Atlas and the refugees living in the jungles outside of its walls sets up the background of the story.&#8221;</strong></p>
<p><a name="jumpto"><span style="font-size: medium;"><strong> Talking With Bruno Uzzan</strong></span></a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/BrunoUzzanpost.jpg"><img class="alignnone size-medium wp-image-4494" title="BrunoUzzanpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/09/BrunoUzzanpost-225x300.jpg" alt="BrunoUzzanpost" width="225" height="300" /></a></p>
<p><strong>Tish Shute:</strong> We won&#8217;t have fully opened the Pandora&#8217;s Box of Augmented Realities until we have ubiquitous, shared augmented realities, will we?</p>
<p><span id="p-xo" title="Click to view full content"><strong>Bruno Uzzan: Yes. The most important thing for augmented reality is the experience we want to share. Now we are working on the cell phone, we can potentially do some marketing components that we have already developed on the cell phone. Done. It&#8217;s working.</strong></span></p>
<p><strong>But the most interesting part of it is how these new components [cell phone AR] will be used for marketing campaigns by brands. And we are also pretty much well positioned to transform some of the AR that we currently have working on Mac and PC and to transform these to applications working on mobile devices. </strong></p>
<p><strong>Tish Shute:</strong> We haven&#8217;t really experienced yet what it means to actually share mobile AR experiences?</p>
<p><strong>Bruno Uzzan: It&#8217;s hard &#8212; we did a Facebook app. It&#8217;s a first try, it has a way to go. But </strong><span id="c8ek" title="Click to view full content"><strong>to go more and more into social is the way forward for us &#8211; to share and expand AR experiences. But yes, I mean what you&#8217;re seeing is how two people on two different applications can share that same experience. For sure we are going in that direction. We are currently working on those kinds of solutions. How people can share and experience together at the same time. That&#8217;s how we start creating excitement in augmented reality, and it&#8217;s coming up.</strong></span></p>
<p><strong>It&#8217;s a new market and there&#8217;s so much more in store for augmented reality. You know, some people are telling me, don&#8217;t you believe that augmented reality is a gimmick? It will be a trend for a few weeks or a few months and then gone? I say, you&#8217;re kidding me. This is only the beginning. I mean I can assure you that the applications that are on the market today are one percent of what we will have five years from now.</strong></p>
<p><strong>Tish Shute: </strong>I agree.</p>
<p><strong>Bruno Uzzan: And I&#8217;m sure that augmented reality will be a part of a lot of components that we are currently using today &#8211; GPS, web browser, glasses. I mean there are so many applications that will come up shortly. This is only the beginning. I&#8217;m completely convinced that augmented reality will be in three years from now what virtual reality is today, which is a billion dollar market. I know that it&#8217;s not just a gimmick of a few weeks or a few months, because so many brands are jumping into it, spending money, exploring solutions. I know that it&#8217;s not just short term &#8211; what they are willing to do and we are willing to do, but also middle and long term. And that&#8217;s what makes this adventure pretty much unique, and what makes creating a cutting edge technology very, very exciting for us.</strong></p>
<p><span id="pb9s" title="Click to view full content"><strong>Tish Shute:</strong> First, could you explain more to me about your partnership with Int13? I am not sure I understand what is in the arrangement from Total Immersion&#8217;s POV. I mean, what happens with your own mobile software development? Haven&#8217;t you only been licensed the Int13 SDK for a limited period of time, with limited access to all its power? </span><span id="p_2y" title="Click to view full content"><a href="http://gamesalfresco.com/2009/09/15/why-int13-got-in-bed-with-total-immersion/" target="_blank">Stephane from Int13 said to Ori on Games Alfresco, here,</a> &#8220;we have licensed the SDK4 for two years,&#8221; and then Ori asks, &#8220;but you have basically kept the power to yourselves, right?&#8221; So if they are the only ones that can enhance it and develop the software, where will TI be in two years in mobile if you haven&#8217;t really had the chance to develop your own software?</span></p>
<p><span id="j5co" title="Click to view full content"><strong>Bruno Uzzan: Actually it&#8217;s a real win-win situation. Int13 is a very small company and they have so many requests they can&#8217;t possibly fulfill them all. So this is a way for both of us to be, as quickly as possible, the first mobile provider for all the requests we have. Also, they give us exclusivity, so nobody else can use the Int13 SDK for such applications. I think that it is a good partnership.</strong></span></p>
<p><strong>And concerning our own mobile application&#8230; First of all, we currently have some mobile applications working. But with Int13 we have a mobile solution that can work on many different devices. That&#8217;s a fact and that&#8217;s working. And, believe me, you will hear from us a lot more about this soon. We are fully independent in our mobile development. The reason we closed the partnership with Int13 is to be able to deploy mobile in a broad way.</strong></p>
<p><strong>I mean, you know that the difficulty with AR mobile is that each separate device needs some customization. Working on the iPhone is different from working on the Nokia, different from working on the Palm, different from working on the Samsung. Each of them has its own operating system inside, and so we were interested in Int13&#8217;s very clever embedded solution that allows our solutions to work across many platforms.</strong></p>
<p><strong>The reason we are working with Int13 is that we are able to work on so many mobile devices, thanks to Int13. And in the mobile AR race that we are currently in, the next two years will be extremely important to us&#8230;</strong></p>
<p><span id="z_5s" title="Click to view full content"><strong>Tish Shute:</strong> OK, that definitely clarifies it a lot. So Int13 has done an embedded solution to allow TI developed AR solutions to work easily across many devices?</span></p>
<p><span id="y.wt" title="Click to view full content"><strong>Bruno Uzzan: Yes, they have a kind of embedded solution, a way to address new cell phones extremely quickly&#8230; But, currently on our side, we are in discussions with a mobile company&#8230; and that only refers to some very specific mobile devices. And what they have is also a way to embed our technology deeper into mobile, so that we can more quickly have&#8230; applications that work on a large number of cell phones.</strong></span></p>
<p><strong>Tish Shute:</strong> So, basically it means you don&#8217;t have to go through some complicated negotiations with each of the cell phone companies, is what you are saying?</p>
<p><strong>Bruno Uzzan: Not only negotiations, but also hard development. You know? Working on Windows Mobile is completely different from working on Palm OS. That&#8217;s different! It&#8217;s a lot of work to have a mobile application working on many different devices. So Int13 provides a way for us to save some time and some development cost too.</strong></p>
<p><strong>Tish Shute:</strong> And Int13 doesn&#8217;t have powerful AR development tools like <a href="http://www.t-immersion.com/en,interactive-kiosk,32.html" target="_blank">D&#8217;fusion</a> right?</p>
<p><strong>Bruno Uzzan: Right! That&#8217;s right. That&#8217;s why we say it&#8217;s a true win-win solution. They can benefit from our work too, and we can benefit from their work, in order to deploy mobile solutions faster.</strong></p>
<p><strong>Tish Shute:</strong> Now, the second thing is&#8230; there is a lot of debate and disagreement about how far mobile augmented reality is from delivering something more than the &#8220;post-it&#8221; approach that has been much publicized in recent months, via all the AR browser apps.</p>
<p>But from my understanding from the conversation we had earlier this summer (see below), Total Immersion is targeting a much higher level of mobile augmented reality than we&#8217;ve seen to date?</p>
<p><strong>Bruno: Yes, the browser apps we have seen are a kind of augmented reality, but not exactly the way we see it. Let me explain why. With this kind of application it&#8217;s true that you can overlay 3D information and video. That&#8217;s a fact. So, in a sense, that&#8217;s augmented reality. But the way that they are working on the position of the 3D on that video is that they are using compass and GPS information&#8230; so it means that this AR solution will work only on some buildings and some physical objects that are FIXED &#8211; in a fixed and known position.</strong></p>
<p><strong>So you want to go to a theater?</strong></p>
<p><span id="a9qv" title="Click to view full content"><strong>The theater is here; for sure it will not move, so you know the position of the theater, and it&#8217;s a fact that you can superimpose an object on the theater. That&#8217;s what can be done currently. What we are achieving and what we are doing on mobile is more than that. We want to be able to port our solution &#8211; with trading cards, with brands &#8211; onto a smart phone.</strong></span></p>
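The compass-and-GPS placement Bruno describes here can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor's actual code: the function names, screen width and field of view are all assumptions. The idea is that the app computes the bearing from the user's GPS fix to the fixed POI, subtracts the compass heading, and maps the angular offset to a horizontal screen position; any noise in the GPS fix or compass shows up directly as jitter in the overlay.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def overlay_x(user_lat, user_lon, poi_lat, poi_lon, heading_deg,
              screen_w=480, fov_deg=60):
    """Horizontal pixel at which to draw the POI label, given the camera's
    compass heading; None if the POI falls outside the field of view."""
    rel = (bearing_deg(user_lat, user_lon, poi_lat, poi_lon)
           - heading_deg + 180) % 360 - 180
    if abs(rel) > fov_deg / 2:
        return None  # POI is off-screen
    return screen_w / 2 + (rel / fov_deg) * screen_w

# A POI due north of the user, camera facing north: label lands at screen center.
x = overlay_x(40.0, -74.0, 40.01, -74.0, heading_deg=0.0)
```

Note that nothing here looks at the camera image itself, which is why this approach only works for objects in a fixed, known position; recognizing a movable object like a drink can requires computer vision, as Bruno goes on to say.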
<p><strong>I&#8217;m assuming that you want a can, a drink can, to be able to trigger an experience. The only way you can do it is to be able to understand what the can is. And the current solutions that are out there can&#8217;t do that; it&#8217;s impossible.</strong></p>
<p><strong>Tish Shute:</strong> Right, yes. There&#8217;s no near-field object recognition at all in these early browser apps.</p>
<p><strong>Bruno Uzzan: And the solution we have is that we can recognize a can &#8212; in a very, very precise way &#8212; and that activates geo-location, so we can superimpose 3D. In that case, it opens up all the applications that we currently have, so they could work on mobile.</strong></p>
<p><strong>Tish Shute:</strong> So for example, if you&#8217;re working with a soft drink company, people can trigger that experience wherever they see that can?</p>
<p><strong>Bruno Uzzan: Correct. </strong></p>
<p><strong>Tish Shute:</strong> Yes. Yes, I assumed that was what you were doing.</p>
<p><strong>Bruno Uzzan: We believe &#8212; and maybe that&#8217;s not the case, but we believe that our marker-less tracking technology is pretty much unique on mobile devices.</strong></p>
<p><strong>I haven&#8217;t yet seen, from anyone, a full augmented reality mobile solution working.</strong></p>
<p><span id="rzqr" title="Click to view full content"><strong>I really see AR being part of the Web 3.0 next generation. The vision I have is that today, when you want information, you go on a website and then you find your information. With AR, the future is the opposite: you want information about a product, you just show it to your computer, and the information will automatically pop up. I see here a new way to market some key messages, a new way to get information &#8211; some physical products, by themselves, could be a way to get information, and you don&#8217;t have to search for it anymore; it comes out to you.</strong></span></p>
<p><strong>AR is definitely, for me, one of these components. AR is one solution, and AR itself will create these kinds of results in how information is displayed. I&#8217;m seeing here something that could be part of a new way to have access to information. And that&#8217;s part of the vision I have. Whether it is through a mobile phone or the web, PC, Mac, whatever, I really believe that this new generation of receiving information will come shortly and could be part of the new 3.0 generation of the web.</strong></p>
<p><strong>Tish Shute:</strong> My friend <a id="evae" title="Gene Becker" href="http://www.genebecker.com/" target="_blank">Gene Becker</a> did <a href="http://www.genebecker.com/2009/09/thinking-about-design-strategies-for-magic-lens-ar/" target="_blank">an interesting post recently on some of the current limitations of mobile AR</a> where he pointed out the problem of:</p>
<p><em><strong>&#8220;Simplistic, non-standard data formats</strong> &#8211; POIs, the geo-annotated data that many of these apps display, are mostly very simple one-dimensional points of lat/long coordinates, plus a few bytes of metadata. Despite their simplicity there has been no real standardization of POI formats; so far, data providers and AR app developers are only giving lip service to open interoperability. Furthermore, they are not looking ahead to future capabilities that will require more sophisticated data representations. At the same time, there is a large community of GIS, mapping and Geoweb experts who have defined open formats such as <a href="http://georss.org/" target="_blank">GeoRSS</a>, <a href="http://geojson.org/" target="_blank">GeoJSON</a> and <a href="http://code.google.com/apis/kml/documentation/" target="_blank">KML</a> that may be suitable for mobile AR use and standardization.&#8221;</em></p>
<p><span id="v68s" title="Click to view full content"><strong>Bruno Uzzan: That&#8217;s interesting. I know exactly what he is referring to. He is mainly referring to localization and how you can have quick, accurate localization. If you look at current solutions, and you look at the 3D superimposed on the video, the 3D is shaking a lot. I don&#8217;t know if you have seen that in some of these early efforts.</strong></span></p>
<p><strong>It&#8217;s hard to use, because the 3D, you know, is part of the magic of augmented reality &#8211; that is, when the 3D is inserted in a very easy and smooth way into your solution. Here, when you see this 2D or 3D overlay on the video, it&#8217;s shaking a lot. One reason for this is that the GPS compass is not accurate enough to coordinate the exact location of the user. And here, what Gene says is interesting. I think we are addressing this localization issue in a pretty smart way.</strong></p>
<p><strong>But to be frank with you, I don&#8217;t believe mobile augmented reality in the extremely short term &#8212; I&#8217;m talking about three weeks, one, two months &#8212; is mature enough for good AR applications. It will be shortly. But for now it is more proof of concept than a true and easy-to-use application.</strong></p>
<p><strong>We are starting to see a lot of new applications coming out, but I really believe that marketing and entertainment are the two key markets for AR right now.</strong></p>
<p><strong>Bruno Uzzan: I&#8217;ve been working ten years in augmented reality. And, eight years ago, when I was talking about augmented reality, I was E.T., you know? Nobody understood what I said, and people thought it was crazy. And now, today, it&#8217;s completely different.</strong></p>
<p><strong>Tish Shute:</strong> The Pandora&#8217;s Box of Augmented Realities, in my view, is an open, universal, standard, distributed, multiuser augmented reality framework fully integrated with the internet and world wide web. I have been looking into Google Wave protocols as a basis for this &#8211; would you be interested in that? Do you think it is feasible?</p>
<p><span id="vz68" title="Click to view full content"><strong>Bruno Uzzan: I think this is feasible. I think that&#8217;s doable &#8211; that&#8217;s just my opinion; some people might have another opinion, but I think that that&#8217;s definitely doable.</strong></span></p>
<p><strong>Tish Shute:</strong> Yes, I suppose an open AR framework involves cooperation and collaboration; it is more about business and politics than technological problems.</p>
<p><strong>Bruno Uzzan: Yes! Actually the Web is politics. Business is politics.</strong></p>
<p><span id="yeg4" title="Click to view full content"><strong>Tish Shute: </strong>I would be interested if anyone in your R&amp;D team would like to look at some of the ideas that are emerging in our little discussion of Google Wave and an open AR framework, to offer feedback. It is an interesting time to give input on the Wave Federation Protocol docs, because nothing is set in stone right now.</span></p>
<p><span id="hzrf" title="Click to view full content"><strong>Bruno Uzzan: Just shoot me an email. I&#8217;ll try to put you in touch with the right person &#8211; a team member that can give input on this.</strong></span></p>
<p><span id="hbcd" title="Click to view full content"><strong>Tish Shute: </strong>For mobile augmented reality, the best thing we&#8217;ve got now is the phone, right?</span></p>
<p><strong>Bruno Uzzan: Right. </strong></p>
<p><strong>Tish Shute:</strong> And the only way we can use the phone is by holding it up, right? Isn&#8217;t this a bit of an obstacle as you introduce better object recognition and tracking? People are going to have to stop moving to use their phone. What do you feel about that experience? Isn&#8217;t AR eyewear an essential part of a tightly registered AR experience?</p>
<p><strong>Bruno Uzzan: </strong>We don&#8217;t do hardware and we don&#8217;t have a current solution for eyewear that would do all we need for a good mobile AR experience, so I guess we don&#8217;t have an answer for that right now. But we are beginning to see the next generation of these glasses.</p>
<p><strong>Tish Shute:</strong> But you&#8217;re happy enough with the mobile experience of augmented reality on smart phones that you&#8217;re investing in this next generation of software for it?</p>
<p><strong>Bruno Uzzan: Yes, we know that some applications will not work on the iPhone. And yes, whatever you do, you still need to hold the iPhone, which means that you can&#8217;t play with your hands anymore. So we know that some AR solutions we have on other platforms will partially lose their magical effect on the iPhone.</strong></p>
<p><strong>But I&#8217;m starting to see on the market some glasses that could perhaps be not too expensive &#8212; that&#8217;s a challenge! And easy to use &#8212; that&#8217;s another big challenge. And that could fit on anybody&#8217;s face and head &#8212; there&#8217;s another big challenge. So yes, I&#8217;m starting to see that, but so far AR glasses are only applicable for some very, very specific applications, like design or theme parks, or some specific location where it makes sense to move forward with glasses.</strong></p>
<p><strong>I don&#8217;t believe that kids will use glasses in our toys and for games in the next months, or maybe the next one or two years. But maybe something will come out shortly that could be a big breakthrough and enable us to think another way. But from what we have seen so far and from what we know of this hardware market, I don&#8217;t believe that there is currently a workable solution.</strong></p>
<p><span style="font-size: small;"><strong>Note: The following section of the interview took place earlier in the Summer.</strong></span></p>
<p><strong>Tish Shute:</strong> You are the first commercial AR company &#8211; you started in 1999, right?</p>
<p><span id="yvdi" title="Click to view full content"><strong>Bruno Uzzan: Yes, you are right. We started extremely early in this augmented reality market. We were the first company worldwide to start doing augmented reality and to start promoting augmented reality. So it&#8217;s true, we are pretty old players, although the market has only been getting bigger for the last year and a half. So for a long time we were the only ones in the market, and the market was not really there.</strong></span></p>
<p><strong>But for the past 8 months, the company has been growing really fast.</strong></p>
<p><strong>Tish Shute:</strong> Yes, I&#8217;m sure. Congratulations for hanging in there long enough to get the payoff!</p>
<p><strong>Bruno Uzzan: You know, my background is in finance. So I have been driving the company for many years in a very cash-efficient way. We have been waiting for the market to reach maturity before starting to make investments. That&#8217;s the reason we are still here, and that&#8217;s the reason I think we managed pretty smartly the cash that we raised for the company.</strong></p>
<p><strong>Tish Shute:</strong> Yes, there is a saying that when a market takes off you can tell the pioneers because they are the ones with the arrows in their backs. But I am glad you are dodging the arrows!</p>
<p><strong>Bruno Uzzan: You know, I&#8217;ve always driven the company with revenue. And because revenue was not there at the beginning I was extremely cautious about the cash. So now that the company is getting some revenue, for sure we are making more and more investments, and taking advantage of our situation as a worldwide leader of augmented reality.</strong></p>
<p><strong>This situation is not as easy as it appears today, but it&#8217;s now getting better. As you can see, AR, Augmented Reality, has very good momentum and we are benefiting a lot from all this momentum right now.</strong></p>
<p><strong>Tish Shute:</strong> You&#8217;ve been very involved in researching and developing augmented reality tools. Are you still as active in the research area, or are you too busy keeping up with work-for-hire now to be working on research and building new technology for augmented reality?</p>
<p><strong>Bruno Uzzan: Both. First of all, we are part of a lot of projects, either directly with clients like Mattel or with partners that are using our technology to promote and develop other AR projects. From what we have seen, many, many augmented reality projects are currently being done with our solutions.</strong></p>
<p><strong>To continue with your previous question: we are perceived as the leader in that space, and we have some pretty heavy demand for our services. But we are coming up with new technology, of course, still connected to augmented reality. Our R&amp;D is working in two different directions, which of course also bind together.</strong></p>
<p><strong>The first one is platform development. We want augmented reality to work with as many platforms as possible &#8211; PC, Mac, mobile, game consoles; all those are the platforms that we are targeting. The R&amp;D team is currently doing a lot of work on cross-platform compatibility.</strong></p>
<p><strong>Tish Shute:</strong> Robert Rice said recently, &#8220;markers and webcams equal Photoshop page curls&#8230;&#8221;</p>
<p><span id="dulu" title="Click to view full content"><strong>Bruno Uzzan: Yes. There are so many concerns with markers. The quality is extremely bad. As soon as you hide a part of the marker, even a slight part of the marker, you&#8217;re dead &#8211; you can&#8217;t track the object any more. Compare that to our solution, where, say, you play with cards or with a Mattel toy: even if you hide a part of the toy, it&#8217;s still working.</strong></span></p>
<p><strong>Tish Shute:</strong> But you haven&#8217;t offered the public an SDK for your engine, right? Basically, the way people get access to your tools is by working in a partnership with Total Immersion, right?</p>
<p><strong>Bruno Uzzan: Correct. </strong></p>
<p><strong>Tish Shute:</strong> Do you think in the future you might open your SDK? Are you considering that?</p>
<p><strong>Bruno Uzzan: Yes, it would be interesting.</strong></p>
<p><strong>Tish Shute:</strong> So that is something we can see coming soon?</p>
<p><span id="short_transcription0" title="Click to view full content"><strong>Bruno Uzzan: Maybe, because it&#8217;s true that Total Immersion is starting to be mature enough for these kinds of tools. The only thing is that we have to respect good timing for that. It&#8217;s a big decision. You know what I mean? It is a big, big decision. We would then compete with others using our technology.</strong></span></p>
<p><strong>Tish Shute:</strong> Oh I know, it is a big decision when you have so much skin in the game! But it would be nice to have your SDK being THE platform for AR, wouldn&#8217;t it?</p>
<p><strong>Bruno Uzzan: It is a really big decision that we can&#8217;t just take like that, you know. A lot of friends have told me you have to be extremely careful about timing, and this timing is pretty much connected to the maturity of the market. For sure, we see the market becoming more and more mature. But there is a lot of low-hanging fruit we still want to address, to get the best value possible from all the publicity we have and all the clients we have now.</strong></p>
<p><strong>Tish Shute:</strong> Yes, I know. You&#8217;ve been in this game so long. Now, there is an interesting question here about tools and platforms, because augmented reality has already expanded beyond its original, purist definition. And when I talk to people about augmented reality, there are actually a lot of different ideas and priorities for where the tools should go right now. Obviously we have these kinds of browser-like applications, but they are not dealing with recognizing near-field objects yet. What are your priorities for tool development, and what are your priorities for AR development in the future? What areas are you going to focus on? Oh dear, that is a rambling question!</p>
<p><strong>Bruno Uzzan: [laughter] So, one of our first priorities is that we need to create our software with one development, one installer, one software that can be spread across different platforms. The same application, the same software, can be used on a PC, Mac, phone or console. That&#8217;s a lot of work, because it means that our platform has to address many, many different devices, and that&#8217;s a big priority for us because we receive this request from our clients: they want to be able to use one application on many different platforms and devices. So, that&#8217;s the first one.</strong></p>
<p><strong><span id="hk3z" title="Click to view full content">And the second one is to add more and more interactivity between the real and the virtual world. So, we are working on some improvements to add some real components that will interact with the virtual, and that is also part of our big strategy and direction: these two worlds can more and more be bridged together, linked together, so they can interact one with the other.</span></strong></p>
<p><strong>Our R&amp;D guys are working on the real world interacting more with the virtual world. And I have started seeing some results which are pretty crazy, and this will be ready next year.</strong></p>
<p><span id="b1qt" title="Click to view full content"><strong>There are so many different directions to develop for interaction between the real world and the virtual world. I&#8217;m sure ten years from now you&#8217;re going to have AR applications everywhere. It&#8217;s not just temporary fashion stuff or a gimmick for a few months. We are getting there; it&#8217;s getting stronger and stronger, and we are getting a good adoption rate from consumers. They like it, they test it, they play with it; brands want more, people want more, and it&#8217;s getting bigger and bigger.</strong></span></p>
<p><strong>Tish Shute:</strong> Yeah, and I totally agree; it&#8217;s not a gimmick, because the interaction between &#8220;virtual&#8221; and &#8220;real&#8221; enhances the magic of both. Another question, about your R&amp;D operation: is your R&amp;D still in France, or have you moved totally out to LA?</p>
<p><strong>Bruno Uzzan: We are 50 people in France. I started this LA office two years ago and moved permanently to LA, so I&#8217;m now located in the US to take care of the US office, knowing that revenues are really getting bigger and bigger in the US. It means that we are getting a lot of traction, working with large companies.</strong></p>
<p><strong>Tish Shute:</strong> My sister lives in Paris. Could I visit your R&amp;D lab at some point? I&#8217;d love to visit!</p>
<p><span id="bt1e" title="Click to view full content"><strong>Bruno Uzzan: Yeah, sure, sure, sure. I mean, if you want to go &#8211; you won&#8217;t have access to all the research, but if you want to go out and meet the team, please do.</strong></span></p>
<p><strong>Tish Shute:</strong> I&#8217;d love to.</p>
<p><strong>Bruno Uzzan: No problem. Shoot me an email and I will introduce you to Eric Gehl; he is the COO of the French team. And he can definitely take care of that.</strong></p>
<p><strong>Tish Shute:</strong> That would be fun. Thank you!</p>
<p>Recently, AR browser applications have really caught the imagination of the web community, e.g., Layar and Wikitude. Where do you think the most important market for AR is at the moment<span id="k6fx" title="Click to view full content"> &#8211; entertainment, green tech, business, education?</span></p>
<p><strong>Bruno Uzzan: I think that all the ones you mention will be important. The first ones that grabbed my attention are entertainment and particularly digital marketing, because they are always searching for new ways to interact with players or consumers. But it&#8217;s just the tip of the iceberg &#8211; medical applications of augmented reality could be huge. Education, and edutainment, is definitely using more and more augmented reality components; I am in discussions with big companies that are considering using augmentation for education. Museums are very important too. Also augmentation as a kind of free sales tool &#8211; there are so many applications, design, architecture &#8211; so many directions that it&#8217;s hard to say today which one will take the lead.</strong></p>
<p><strong>But I do believe that in the short term the ones that are really, really moving fast are the entertainment business and the digital marketing business.</strong></p>
<p><strong>Tish Shute:</strong> What do you think are the biggest shortcomings with current augmented reality and what are the obstacles that no one has solved yet?</p>
<p><strong>Bruno Uzzan: I think the cell phone is not fully ready for augmented reality &#8211; a lot of people are working on that, but there are still a lot of constraints to getting augmented reality working on a cell phone. From what I hear, a lot of manufacturers and companies are working in directions that are going to help us a lot to develop some great cell phone applications.</strong></p>
<p><strong>And I think that&#8217;s one of the biggest parts of the game. All the applications that you see on cell phones so far are just gimmicks &#8211; the next big key is how to transform some gimmick cell phone applications into real, industrial, robust applications that are going to work on a cell phone. So I think that&#8217;s a big challenge for this year.</strong></p>
<p><strong>Most of what we see now is just matching and overlaying some 2D components on a video. This is not what I call AR. With this kind of application, you are far away from doing the registration that we need to do &#8211; you can&#8217;t do it. So here&#8217;s the challenge: how can you get a Topps application working on a cell phone? That&#8217;s the big challenge &#8211; how we can make that work! You can&#8217;t today get a real AR Topps application working on a cell phone, because there&#8217;s no cell phone that&#8217;s actually ready. But we are working on it, and for the first one that can make that work, it&#8217;s going to be huge.</strong></p>
<p><span id="b9-2" title="Click to view full content"><strong>When you are working with good AR components you need a lot of CPU and GPU power. Today, new cell phones have started to be more and more ready for augmented reality, but you need a really good cell phone to make it work. You can&#8217;t choose an old cell phone, because you have some recognition, some tracking, some rendering &#8211; you can&#8217;t choose a two-year-old Nokia cell phone to make that work. For sure the newest iPhone is one that can make it work, but that&#8217;s it for now. There is a lot of research &#8211; from large cell phone companies &#8211; into getting more CPU and GPU into their cell phones. But so far we are also waiting for these devices to be released to consumers.</strong></span></p>
<p><strong>Tish Shute: </strong>And the current economic climate has put a damper on MIDs, hasn&#8217;t it? But who can tell? It depends what price point some new MID comes out at, right?</p>
<p><strong>Bruno Uzzan: Correct.</strong></p>
<p><strong>Tish Shute:</strong> Yes, I agree. But basically what&#8217;s interesting is that the iPhone can deliver so much of what is necessary, and even if Apple hasn&#8217;t given AR developers access to the full power of the iPhone yet, there is really no going back now &#8211; the mobile augmented reality cat is out of the bag!</p>
<p><strong>Bruno Uzzan: You&#8217;re right, you&#8217;re fully right.</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/feed/</wfw:commentRss>
		<slash:comments>36</slash:comments>
		</item>
		<item>
		<title>Games, Goggles, and Going Hollywood&#8230;How AR is Changing the Entertainment Landscape: Talking with Brian Selzer, Ogmento</title>
		<link>http://www.ugotrade.com/2009/08/30/games-goggles-and-going-hollywood-how-ar-is-changing-the-entertainment-landscape-talking-with-brian-selzer-ogmento/</link>
		<comments>http://www.ugotrade.com/2009/08/30/games-goggles-and-going-hollywood-how-ar-is-changing-the-entertainment-landscape-talking-with-brian-selzer-ogmento/#comments</comments>
		<pubDate>Mon, 31 Aug 2009 03:38:38 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[nanotechnology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[alternate reality RPG]]></category>
		<category><![CDATA[ambient intelligence]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[AR Network]]></category>
		<category><![CDATA[AR spam]]></category>
		<category><![CDATA[ARBalloon]]></category>
		<category><![CDATA[ARN]]></category>
		<category><![CDATA[augmented reality baseball cards]]></category>
		<category><![CDATA[augmented reality development]]></category>
		<category><![CDATA[augmented reality eyewear]]></category>
		<category><![CDATA[augmented reality hotspots]]></category>
		<category><![CDATA[augmented reality industry]]></category>
		<category><![CDATA[augmented reality network]]></category>
		<category><![CDATA[augmented reality on the iphone]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[augmented reality toys]]></category>
		<category><![CDATA[Blockade]]></category>
		<category><![CDATA[Brad Foxhoven]]></category>
		<category><![CDATA[Brian Selzer]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Cyberpunk]]></category>
		<category><![CDATA[Evolutionary Reality]]></category>
		<category><![CDATA[EyeToy]]></category>
		<category><![CDATA[eyewear for AR]]></category>
		<category><![CDATA[Games Alfresco]]></category>
		<category><![CDATA[Green Tech AR]]></category>
		<category><![CDATA[jim purbrick]]></category>
		<category><![CDATA[Kensuke Tanabe]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Layar Developer Conference]]></category>
		<category><![CDATA[location based RPGs]]></category>
		<category><![CDATA[Lumus]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[markerless mobile augmented reality]]></category>
		<category><![CDATA[markerless natural feature tracking]]></category>
		<category><![CDATA[Masunaga]]></category>
		<category><![CDATA[Metroid]]></category>
		<category><![CDATA[Metroid Prime]]></category>
		<category><![CDATA[Mirrorshades]]></category>
		<category><![CDATA[multiperson mobile AR experiences]]></category>
		<category><![CDATA[Nano Air Vehicles]]></category>
		<category><![CDATA[near field object recognition]]></category>
		<category><![CDATA[new augmented reality trade jargon]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[Pentagon's Robot Hummingbirds]]></category>
		<category><![CDATA[Project Natale]]></category>
		<category><![CDATA[Put a Spell]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[Sekai camera]]></category>
		<category><![CDATA[social gaming platforms]]></category>
		<category><![CDATA[sticky light]]></category>
		<category><![CDATA[The Dawn of the Augmented Reality Industry]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[Topps AR baseball cards]]></category>
		<category><![CDATA[Total Immersion]]></category>
		<category><![CDATA[Vuzix]]></category>
		<category><![CDATA[Wikitude]]></category>
		<category><![CDATA[Yoshio Sakamoto]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4334</guid>
		<description><![CDATA[Picture on the left Mirrorshades, picture on the right a Metroid Hud. &#8220;Augmented Reality is like a Philip K Dick novel torn off its paperback rack and blasted out of iPhones,&#8221; Bruce Sterling in Beyond the Beyond &#8220;a techno visionary dream come true &#8211; those are rare, really rare, you have to be patient,Â  it&#8217;s [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/mirrorshadespost3.jpg"><img class="alignnone size-full wp-image-4349" title="mirrorshadespost3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/mirrorshadespost3.jpg" alt="mirrorshadespost3" width="124" height="204" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/metroid_hud1post2.jpg"><img class="alignnone size-medium wp-image-4350" title="metroid_hud1post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/metroid_hud1post2-300x204.jpg" alt="metroid_hud1post" width="300" height="204" /></a></p>
<p><em>Picture on the left <a href="http://www.amazon.com/Mirrorshades-Cyberpunk-Anthology-Greg-Bear/dp/0441533825" target="_blank">Mirrorshades</a>, picture on the right a <a href="http://en.wikipedia.org/wiki/Metroid" target="_blank">Metroid Hud</a>.</em></p>
<p><strong>&#8220;Augmented Reality is like a Philip K Dick novel torn off its paperback rack and blasted out of iPhones,&#8221; <a href="http://www.wired.com/beyond_the_beyond/2009/08/the-key-take-aways-for-investors-interested-in-the-augmented-reality-field/" target="_blank">Bruce Sterling in Beyond the Beyond</a></strong></p>
<p><strong>&#8220;a techno visionary dream come true &#8211; those are rare, really rare, you have to be patient, it&#8217;s super cyberpunk&#8221;&#8230; Bruce Sterling, <a href="http://vimeo.com/6189763" target="_blank">&#8220;At the Dawn of the Augmented Reality Industry.&#8221;</a></strong></p>
<p>The Dawn of the Augmented Reality Industry continues to brighten, and now we have two augmented reality companies, <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a> and <a href="http://ogmento.com/" target="_blank">Ogmento</a>, firmly established in Hollywood &#8211; the dream mother of so many of our augmented realities.</p>
<p><a href="http://ogmento.com/" target="_blank">Ogmento</a> is the most recent of these two pioneering augmented reality companies to set up shop in LA. <a href="http://www.t-immersion.com/" target="_blank">Total Immersion&#8217;s</a> CEO Bruno Uzzan moved to LA from France two years ago, although he still has a fifty-person R&amp;D team in France. Total Immersion began 10 years ago in the quiet, lonely hours before the dawn of an AR industry. But <a href="http://gamesalfresco.com/2009/07/23/mattel-launches-augmented-toys-at-comic-con/" target="_blank">Total Immersion&#8217;s AR toys for Mattel,</a> and augmented reality for <a href="http://www.youtube.com/watch?v=I7jm-AsY0lU" target="_blank">Topps baseball cards</a>, fired up CNet writer Daniel Terdiman enough for him to say, &#8220;I have seen the future of toys, and it is augmented reality&#8221; (<a href="http://news.cnet.com/8301-13772_3-10317117-52.html" target="_blank">see the full post here on CNet</a>).</p>
<p>Recently, I talked with <a href="http://www.ugotrade.com/2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/" target="_blank">Ori Inbar, one of the founders of Ogmento</a> and of the premier augmented reality blog <a href="http://gamesalfresco.com/" target="_blank">Games Alfresco</a>, about his new venture in Hollywood. Bruce Sterling, <a href="http://twitter.com/bruces" target="_blank">@bruces</a>, had some fun with my invention of <a href="http://www.wired.com/beyond_the_beyond/2009/08/augmented-reality-ogmento/" target="_blank">brand new augmented reality trade jargon here</a>! Ori pointed out that Ogmento brings two important new facets to the rapidly growing augmented reality field. First, it is bringing leadership from veterans of the entertainment industry into augmented reality development: <a id="squu" title="Brad Foxhoven" href="http://www.blockade.com.nyud.net:8080/about/about-blockade" target="_blank">Brad Foxhoven</a> and <a id="odvk" title="Brian Selzer" href="http://brianselzer.com/">Brian Selzer</a> from <a id="xow_" title="Blockade" href="http://www.blockade.com/" target="_blank">Blockade</a> have partnered with Ori on Ogmento. And, in another important step forward for a young industry, Ogmento announced it will act as a publisher for a fast-growing cohort of augmented reality application developers, helping AR development teams bring their concepts to market.</p>
<p>So I was very happy also to have the opportunity to talk with Brian Selzer. As Bruce Sterling pointed out in his seminal <a href="http://eurekadejavu.blogspot.com/2009/08/augmented-realitys-sermon-on-flatlands.html" target="_blank">sermon from the flatlands</a> at the <a href="http://layar.com/" target="_blank">Layar</a> Developer Conference, AR is kind of a &#8220;Hollywood scene.&#8221; We have seen the web early adopter/developer/blogger community embrace augmented reality browser experiences in recent weeks in an awesome wave of enthusiasm. Are Hollywood creatives equally smitten? For the answers see the full interview with Brian Selzer below.</p>
<p>Brian Selzer (<a href="http://brianselzer.com/" target="_blank">www.brianselzer.com</a> and <a href="http://twitter.com/brianse7en" target="_blank">twitter &#8211; brianse7en</a> ) has an extensive involvement with emerging platforms:</p>
<p><strong>&#8220;from launching dot com entertainment sites in the late 90&#8242;s to creating early versions of social gaming platforms, or bringing big brands like Spider-Man and X-Men into the mobile space for the first time. Â Last year I was focused on bringing video game characters and worlds into the online space as UGC [user generated content] projects (<a href="http://www.mashade.com/" target="_blank">mashade.com</a>, <a href="http://www.instafilms.com/" target="_blank">instafilms.com</a>).&#8221;</strong></p>
<p>I began my own career in Hollywood doing motion control photography and creating software that bridged the language of robotics and servo motors with the visions of film directors. Eventually our little company, NPlus1, moved on to 3D vision systems and image recognition work. So yes, I have been really, really patient waiting for this particular techno visionary dream. And, while I have been waiting for augmented reality to manifest, I have grown to love the internet. But now, how awesome: <a href="../../2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">it is OMG finally for mobile AR!</a></p>
<p>Augmented reality is busting out all over &#8211; through our laptops, our phones, on the streets, toys, baseball cards, art installations, <a href="http://www.youtube.com/watch?v=9noMfsg486Y" target="_blank">sticky light calligraphy</a> and more.</p>
<p>Many of my questions to Brian were directed at how and when we will see augmented realities with near field object recognition, image recognition and tracking and, of course, the elusive eyewear. As Bruce Sterling points out, we are just at the very, very beginning &#8211; the dawn of an industry. I created the photomontage below on the right to complement <em><a href="http://www.tonchidot.com/">Tonchidot&#8217;s</a></em> illustration suggesting the evolutionary inevitability of holding our phones up (below on the left). The Evolutionary Reality of AR will not end there. It is just a step on the way to eyewear, hummingbirds or <a href="http://gizmodo.com/5306679/pentagons-robot-hummingbird-christened-nano-air-vehicle" target="_blank">Nano Air Vehicles</a>, and more&#8230;</p>
<h3>The Evolutionary Reality of AR</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-96.png"><img class="alignnone size-medium wp-image-4359" title="Picture 96" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-96-300x97.png" alt="Picture 96" width="300" height="97" /></a></p>
<p><em>Cartoon on the left by <a href="http://www.tonchidot.com/">Tonchidot</a>; on the right a collage of a stock photo and the <a href="http://gizmodo.com/5306679/pentagons-robot-hummingbird-christened-nano-air-vehicle" target="_blank">Pentagon&#8217;s Robot Hummingbirds &#8211; &#8220;Nano Air Vehicles&#8221;</a>.</em></p>
<p>We finally have, in the iPhone, an affordable mediating device with the horsepower, mindshare and business model to bring AR mainstream. The much anticipated Apple 3.1 Beta SDK to be released in September will not, I am sure, open up the Video API at the levels that augmented realities with near field object recognition and tracking require (I would love to be proved wrong though). But the magic wand to deliver even tightly registered AR graphics/media (which require a lot of CPU and GPU) to a wide audience is in our hands, so full access may not be far off. And others, of course, can/will/might knock the iPhone off its current pedestal. AR made its mobile phone debut on Android, after all.</p>
<p>Like everyone else who loves AR, I wish that Apple would open up faster (and I wish Android would manifest on some rocking hardware). But we will see enough of the iPhone Video API open up for the next generation of mobile augmented reality games and applications to emerge in the coming months.</p>
<p>One of these will be Ogmento&#8217;s. Although Ogmento is in stealth mode, they have released <a href="http://www.youtube.com/watch?v=EB45O7-6Xrg&amp;eurl=http%3A%2F%2Fogmento.com%2F&amp;feature=player_embedded" target="_blank">a teaser for their first game, &#8220;Put A Spell,&#8221;</a> developed by ARBalloon &#8211; screenshot below. Ori did reveal to me in <a href="../../2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/" target="_blank">this interview</a> that they are doing image recognition and using the Imagination AR engine.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-95.png"><img class="alignnone size-medium wp-image-4356" title="Picture 95" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-95-300x177.png" alt="Picture 95" width="300" height="177" /></a></p>
<p>As Brian notes, Hollywood has had the AR bug for a long time. AR has been everywhere in science fiction movies and video games. Nintendo&#8217;s SPD3 head Kensuke Tanabe, &#8220;effectively the man in charge of overseeing all the <em>Metroid</em> franchise underneath original co-creator Yoshio Sakamoto,&#8221; explains the story of <em>Metroid</em> to Brandon Boyer of <a href="http://www.offworld.com/2009/08/retro-effect-a-day-in-the-stud.html" target="_blank">Offworld here</a> (an image of a Metroid Hud is on the right at the top of this post):</p>
<p><strong>&#8220;the idea of the different visors you use in the <em>Prime</em> games to interact with the world: the scan visor, for instance, set the game apart from other first person shooters in that the player was using it to proactively collect information from the world, rather than having the story come to them passively, in the form of cut-scenes or narration. &#8220;<em>Prime</em> could have adventure elements with the introduction of this visor,&#8221; says Tanabe, &#8220;That&#8217;s how we came up with the genre &#8212; first person adventure, instead of shooter.&#8221;</strong></p>
<p>But as Brian points out:</p>
<p><strong>&#8220;the light bulb has been lit and Hollywood is seeing that the software and hardware are here today to deliver these types of AR experiences in real life (to a lesser extent of course, but the path is getting clear).&#8221;</strong></p>
<h3>Talking with Brian Selzer</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/me.jpg"><img class="alignnone size-full wp-image-4363" title="me" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/me.jpg" alt="me" width="188" height="227" /></a></p>
<p><strong>Tish Shute: </strong>Bruce Sterling&#8217;s sermon at the Layar Developer Conference, <a href="http://www.wired.com/beyond_the_beyond/2009/08/at-the-dawn-of-the-augmented-reality-industry/" target="_blank">&#8220;At the Dawn of the Augmented Reality Industry,&#8221;</a> was absolutely awesome. He spread the future feast/orgy of augmented reality before us &#8211; and described many of the dishes we will be tasting, both delectable and diabolical. One of the many things he points out is that AR is kind of a &#8220;Hollywood scene.&#8221; And, as Ogmento is one of only two augmented reality companies in Hollywood, I am interested to hear how it looks from your neck of the woods. We have seen the web early adopter/developer/blogger community embrace augmented reality browsers in recent weeks in an awesome wave of enthusiasm &#8211; are Hollywood creatives catching the buzz?</p>
<p><strong>Brian Selzer: It was a thrill to hear Bruce Sterling mention Ogmento. I devoured all of his Cyberpunk books back in the 80&#8242;s, along with writers like Gibson, Rucker, Shirley&#8230; To me, sci-fi writers are the visionaries who define and influence our technological paths into the future. They make science and tech sexy enough to want to manifest those experiences in the real world. Clearly Bruce sees the AR industry as being sexy. I love that he called it &#8220;a techno-visionary dream come true&#8230; and super-cyberpunk.&#8221; And yes, kind of a Hollywood scene.</strong></p>
<p><strong>Hollywood creatives caught the AR bug before they knew what AR was. Look at science fiction movies and video games to see AR everywhere. Terminator, The Matrix, Minority Report, Iron Man&#8230; the list goes on. Look at any video game with an integrated heads-up display. It&#8217;s clear Hollywood loves AR. It&#8217;s only been in the past few months though that the light bulb has been lit and Hollywood is seeing that the software and hardware are here today to deliver these types of AR experiences in real life (to a lesser extent of course, but the path is getting clear). So yes, the buzz is here and it&#8217;s strong. With that, we all have to be prepared for the good, the bad and the ugly as AR goes mainstream.</strong></p>
<p><strong>It certainly goes to show how young this industry is when Ogmento and Total Immersion are currently the only AR companies based in Los Angeles. It&#8217;s very exciting to be the only company right now demonstrating a natural feature tracking (markerless) iPhone experience in Hollywood. We are in talks to bring some very big brands and properties to the mobile AR space. The goal is to deliver experiences that create added engagement and value for the consumer.</strong></p>
<p><strong>Tish Shute:</strong> Also in his landmark sermon Bruce Sterling noted that augmented reality has been around for 17 years and now at last we are seeing the dawning of an augmented reality industry. What inspired you to take up the challenge of launching an augmented reality company in Hollywood? Oh, and congrats that Bruce Sterling name-checked Ogmento in his list of companies that prove this really is the dawn of an industry!</p>
<p><strong>Brian Selzer: I&#8217;ve always been involved in emerging platforms&#8230; from launching dot com entertainment sites in the late 90&#8242;s to creating early versions of social gaming platforms, or bringing big brands like Spider-Man and X-Men into the mobile space for the first time. Last year I was focused on bringing video game characters and worlds into the online space as UGC projects (mashade.com, instafilms.com). Working with all these great CG game assets, I continued to think about what&#8217;s next, and that&#8217;s when I started to follow AR very closely and started engaging with those who were pioneering in the space.</strong></p>
<p><strong>I remember swapping instant messages with <a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> (<a href="http://twitter.com/robertrice" target="_blank">@robertrice</a>) right after the 2008 Super Bowl. We were not chatting about the football game, but rather about some of the commercials that aired during the event as a sign that AR was making its way into the mainstream. A lot of people became aware of AR for the first time when the <a href="http://ge.ecomagination.com/smartgrid/" target="_blank">GE SmartGrid commercial</a> aired. There were all these YouTube videos popping up of people blowing on holographic wind turbines.</strong></p>
<p><strong>The commercial that really got me excited though was the <a href="http://www.youtube.com/watch?v=Kwke0LNardc" target="_blank">Coke Avatar commercial</a>. In that commercial people in the city were sporadically being portrayed as their digital personas, avatars, gaming characters, etc. For me that spot did a great job showing how many of us already have these &#8216;alter egos&#8217; that live in cyberspace, and how the line between these worlds can sometimes be blurred. I remember watching that commercial and thinking that is exactly the type of experience I&#8217;d like to create with mobile AR. I want to overlap the virtual world into our every-day reality. Why can&#8217;t I bring my World of Warcraft or Second Life persona with me into the real world?</strong></p>
<p><strong>I am big on the notion of &#8220;Games and Goals.&#8221; I believe that games have the power to motivate people in a very powerful way. By challenging ourselves while playing a game we can climb mountains. Augmented Reality is the perfect platform to bring gaming into the real world. By mixing the virtual world with the physical world, this added layer of perception provides a very powerful experience for something like a role-playing game.</strong></p>
<p><strong>One of my earlier social-gaming projects was a website called Superdudes. This was a &#8220;Be Your Own Superhero&#8221; concept that celebrated and motivated kids to create superhero avatars/personas online, and we gave members all sorts of games, challenges, and rewards, some of which carried into the real world. The site recognized members for teamwork, creativity, volunteer work and things like that. So the Superdudes were often involved in charity events and benefits to help children. Everybody called each other by their superhero names, and the line between fantasy and reality was being blurred. This project really got me thinking about what happens when you take positive role-playing like this and mix it into the real world. I started to work on a plan for location-based activist missions for points and rewards, but never got to complete that. So I have some unfinished business here.</strong></p>
<p><strong>I think it would be fantastic to be able to show up to some type of fun event with friends, and everybody could see each other&#8217;s alter ego personas standing before them. When you can turn the world into a playground, and use the power of gaming to make a positive impact on the planet&#8230; well, I don&#8217;t think there is anything better than that. These are the types of projects that drive me, and I think AR is the best platform to support these types of social gaming experiences.</strong></p>
<p><strong>Tish:</strong> Does Ogmento have any RPGs under development? I noticed in the Google Wave on RPGs someone has been working on doing something with the Dungeons&amp;Dragons API. I am interested in exploring the web of protocols underlying Wave as a transport mechanism for multi-person, mobile AR experiences (not requiring downloads) on an open global outdoor AR network. If not Wave, what do you see as the potential infrastructure and protocols we could harness for an open augmented reality network?</p>
<p><strong>Brian: Ogmento has a deep background in video games and we interact regularly with most of the major game publishers. As a company we are not so much developing our own RPGs right now, but rather exploring what mobile AR extensions make sense for existing brands. There are many limitations to location-based gaming, but a global AR network is exactly along the lines we are thinking. Lots of discussions are taking place on protocols, platforms and APIs, and there are numerous ways to approach this. We need to be able to use what&#8217;s available now and continue to refine and customize for AR&#8217;s specific needs and issues as we progress.</strong></p>
<p><strong>In general though, Ogmento is focused on what types of experiences can be had today and over the next couple of years. I still think we are several years out from a truly open augmented reality network. We are certainly looking at launching our own &#8220;Ogmented Network&#8221; which would support some fun treasure hunt type experiences, or add an entertainment layer on top of traditional outdoor marketing campaigns.</strong></p>
<p><strong>Tish:</strong> I don&#8217;t know whether you have read Thomas Wrobel&#8217;s ideas for an open augmented reality network that I just <a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">published here on UgoTrade</a>. The principles he talks about are very important if augmented reality is to become a major part of our lives. Considering the difficulty open networks can pose for emerging business models, how can we fund the development of an open framework for augmented reality?</p>
<p>&#8220;<em>a future AR Network, I mean one as universal and as standard as the internet. One where people can connect from any number of devices, and without additional downloads, experience the majority of the content.<br />
Where people can just point their phone, webcam, or pair of AR glasses anywhere were a virtual object should be, and they will see it. The user experience is seamless, AR comes to them without them needing to â€œprepareâ€ their device for it.&#8221;</em></p>
<p><strong>Brian: I think funding for these types of projects will definitely come from venture capital groups in the near future. It&#8217;s early in AR, but the VCs are watching and deciding which horses to bet on. Until that time, it&#8217;s about service work, and developing AR experiences for others with what is possible today. That work will help fund internal development of original AR products, and platform development.</strong></p>
<p><strong>Tish:</strong> How did you get started with Ogmento?</p>
<p><strong>Brian: My first conversation with Ori was actually about my interest in location-based RPG concepts. We had a long conversation about the possibilities with AR, and it was clear that we shared similar interests, but were coming from different, complementary backgrounds. The idea of collaboration was exciting, so we just kept talking until the timing felt right. Now, with Ogmento we bring a unique blend of AR development experience with deep backgrounds in AR technology, animation, video games, entertainment, social media, etc. I think this is a powerful mix that will allow us to do some great things.</strong></p>
<p><strong>It&#8217;s still so early, and things are just getting started in AR. There are only so many webcam magic tricks you can enjoy before you are ready for something else. The location-based apps have the most potential in my opinion, which is why we are really focused on mobile AR. We have some board-game type projects, which do not instantly scream location-based gaming, but if you look at something like the ARhrrr board game, you can see how much more compelling it can be when the game invites the player to be actively moving around during the experience.</strong></p>
<p><strong>Tish:</strong> I am interested in your perspective on how we can create the kind of AR experiences that really embody what has always been so exciting about AR &#8211; the tight alignment of graphics and media with real world objects and ultimately a rich immersive 3D experience &#8211; so I am going to hit you with a bunch of those &#8220;Is this really eyewear or vaporware?&#8221; questions. The real deal eyewear changes everything!</p>
<p>While eyewear is a big challenge technically and aesthetically, I am pretty sure that there are several outfits out there that can pull off the optics and projection. Will the entertainment industry get excited enough to put a major push into delivering the eyewear in short order, instead of the five- to ten-year project that some people still think it is? Is the business development challenge perhaps bigger than the technical obstacles? What is your view on this?</p>
<p>And, perhaps, the eyewear is a clear example of a need for partnerships. For example, we have seen efforts from companies like <a href="http://www.vuzix.com/home/index.html" target="_blank">Vuzix</a> and <a href="http://www.lumus-optical.com/" target="_blank">Lumus</a>, and recently a Japanese Company, <a href="http://www.masunaga1905.jp/brand/teleglass/">Masunaga</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-97.png"><img class="alignnone size-medium wp-image-4386" title="Picture 97" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-97-300x80.png" alt="Picture 97" width="300" height="80" /></a></p>
<p>I have no reports from people who have tried the Masunaga eyewear yet. But current eyewear offerings available at a reasonable price point, limited by a small field of view and tethered, are not workable solutions for augmented reality experiences. The problems are not insurmountable, though. What will facilitate the real deal? It seems critical to start creating hardware relationships now. The industry is costly and slow moving and, as Robert Rice put it to me in a recent conversation, &#8220;once the software cat is out of the bag, it&#8217;s going to go wild and if the hardware isn&#8217;t there, it&#8217;s going to stutter.&#8221;</p>
<p>As Ori notes, some of the hardware companies like Intel and others don&#8217;t seem to be paying enough attention to AR. Ori points out they don&#8217;t see the demand yet. But in order to create an awesome AR experience and demand from a mass audience, don&#8217;t we need to work in conjunction with hardware designers?</p>
<p><strong>Brian: It&#8217;s fun to think about who will eventually deliver a great hardware solution for AR glasses. It will happen. It would be cool to see somebody like an Oakley or Nike partnered up with a company like Vuzix to deliver something people actually might wear in public. Perhaps a hardware manufacturer like Apple or Nokia will bring us something like the iSight or the NGaze down the line. I&#8217;d love to see a set of glasses designed by Ideo. Microsoft and Sony are already playing with technologies like Project Natal and the EyeToy, so I think it&#8217;s only a matter of time before they deliver an eyewear solution. I would even look to the toy companies to eventually make an investment here.</strong></p>
<p><strong>Gamers will be the early adopters, and in a few years we may start to see people running around in the park wearing glasses with headsets, but it will be acceptable because it&#8217;s clear they are using them for a game. It&#8217;s going to take a very sexy and stylish piece of hardware for everyday people to be willing to wear AR glasses in public while going about their everyday business. It&#8217;s like the recent cover of Wired magazine where Brad Pitt is wearing a mobile headset in his ear, and the editors point out that even he can&#8217;t pull that look off, so why do you think you can. When AR glasses come in designer frames, and you can&#8217;t tell them from non-AR glasses, to me that&#8217;s when things get really interesting from a mass-adoption perspective. Compare how many people were carrying around a mobile phone in the 80s to now. I think it will be the same thing with glasses.</strong></p>
<p><strong>I was in an AR pitch meeting the other week at a very significant media company, and brought up the point that today&#8217;s handheld Smartphones will eventually evolve into tomorrow&#8217;s Smartglasses. My comment was quickly shrugged off as sort of a sci-fi notion that was irrelevant to the business at hand. Probably true, but I think it is important to understand where digital media and entertainment are going, so you can adapt quickly and evolve into those spaces more naturally. The more we see people walking around with their Smartphones in front of their faces (like a camera), the sooner we will make the jump to eyeglasses as a key hardware device for AR experiences.</strong></p>
<p><strong>At Ogmento, we definitely are working on AR experiences with the hardware and software available today. We will get some product out this year, and 2010 will be a banner year for markerless mobile AR in general. I think the entire AR community is looking forward to bringing this technology to the mainstream in the form of games, marketing campaigns, virtual docent apps, and much more. It might not be the full experience we are all dreaming about for some time, but we can see the path and the true potential, and it&#8217;s pretty spectacular.</strong></p>
<p><strong>You mention the tight alignment of graphics and media with real world objects. That is really our focus. A lot of well-deserved attention is going to the browser overlay &#8220;post-it&#8221; approach right now, which uses compass and GPS. We are focused on markerless natural feature tracking, so once you identify something that is AR enhanced in your environment, you can interact with that integrated experience. On an iPhone that can be as simple as using your touch screen to interact. When you are wearing glasses, it becomes more about visual tracking. There are lots of smart people thinking through these issues, many of whom you have interviewed. It is my hope that there are exciting collaborative efforts to be had in the coming months to get us all there together and faster.</strong></p>
<p><strong>Tish:</strong> Bruce touched on some of the hard problems that have to be solved for augmented reality &#8211; he noted, for instance, that security needs to be tackled in the early stages. Robert made a nice list: <em>&#8220;privacy, media persistence, spam, creating UI conventions, security, tagging and annotation standards, contextual search, intelligent agents, seamless integration and access of external sensors or data sources, telecom fragmentation, privilege and trust systems, and a variety of others.&#8221;</em> Will Ogmento be leading the way in solving some of these hard problems?</p>
<p>And won&#8217;t trying to solve these hard problems for networked AR in walled garden scenarios, one company at a time, lead to a lot of wasted energy reinventing the wheel?</p>
<p><strong>Brian: These are all important issues, and again there are a lot of smart people thinking about solutions to these problems on a daily basis. Ogmento is interested in partnering with developers and supporting their efforts as a publisher of mobile AR experiences. While we intend to roll up our sleeves in these areas, we are currently more focused on taking AR mainstream with the hardware and software available today. As the industry evolves, so will Ogmento. As the opportunities evolve, our ability to make a greater impact tackling these issues will be realized.</strong></p>
<p><strong>Tish: </strong>Another area of development that could really kick AR into high gear might be creating augmented reality hotspots, where we can deliver the kind of location accuracy and instrumentation necessary to create interesting AR experiences (a partnership with Starbucks, perhaps?!). Augmented reality hotspots could deliver the kind of high quality AR experience that isn&#8217;t possible ubiquitously at the moment, and may be a real way to get people exploring the potential of AR now, rather than later.</p>
<p><strong>Brian: Agreed. I see a great opportunity here with this approach.</strong></p>
<p><strong>Tish:</strong> There are many obstacles to Green AR &#8211; the energy-hogging servers at the back end, for starters! Last week I had a conversation with Gavin Starks of <a href="http://www.amee.com/?page_id=289" target="_blank">AMEE</a>, <a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> and <a href="http://jimpurbrick.com/" target="_blank">Jim Purbrick</a> about how to work with AMEE and the technology available, and encourage Green Tech AR development (<a href="http://blog.pachube.com/2009/06/pachube-augmented-reality-demo-with.html" target="_blank">see an early exploration of green tech AR from Pachube here</a>).</p>
<p>We came up with the idea of holding a competition, perhaps centered around a targeted instrumented space. But I would really love to hear your thoughts on the topic of Green Tech AR (the energy-hogging servers at the back end being the first cloud on the horizon!). Cool Green Tech AR imaginings, social gaming ideas, RPGs, not necessarily tied to the immediately practical, would be like rain in a drought!</p>
<p><strong>Brian: I go back to &#8220;Games and Goals&#8221;&#8230; If you make environmental and other activist efforts fun and rewarding, more people are likely to be motivated and participate. Can you imagine having a personal &#8220;carbon footprint stat&#8221; floating over yourself at all times? Or over your home or factory? How would that change your behavior? We all love stats. Look at how the Nike+ campaign has used technology and gaming to motivate people to run. I think there is a lot that can be done to make being green fun. It starts with the individual, and spreads from there. Keep me posted on that one!</strong></p>
<p><strong>Tish:</strong> I would also like to explore further the <a href="http://www.readwriteweb.com/archives/augmented_reality_human_interface_for_ambient_intelligence.php" target="_blank">RRW suggestion that ambient intelligence is both the Holy Grail of AR and possibly snake oil</a>:</p>
<p><em>&#8220;The holy grail of the mobile AR industry is to find a way to deliver the right information to a user before the user needs it, and without the user having to search for it. This holy grail is likely in a ditch somewhere beside a well-traveled road in the district of the semantic Web, ambient intelligence and the Internet of things. Be wary of any hyped-up invitation to invest in a company that claims to have gotten the opportunity right. What we&#8217;ve seen in the commercial industry to date is a rather complex version of a keyboard, mouse, and monitor.&#8221;</em></p>
<p>So Holy Grail, Snake Oil, or a ditch somewhere&#8230;?</p>
<p><strong>Brian: I instantly think of Minority Report, where Tom Cruise&#8217;s character is being bombarded with holographic ads personalized with his name and tailored to his current situation. In the future, spam is a nightmare, especially when it knows who you are. I think the key thing here is delivering &#8220;the right information&#8221;, and we still don&#8217;t have that down. I do see a day where we can truly customize what comes to us, how we want it, when we want it. My future vision of ambient intelligence is the ability to &#8220;turn everything off&#8221; if I want to&#8230; block out the stimuli and replace it with images of nature, or natural surroundings, etc. Where I live in Los Angeles, we have those digital billboards everywhere, so it&#8217;s like advertising overload wherever you look (hints of Blade Runner). I personally don&#8217;t mind them, but I know there is great debate about there being simply too many billboards everywhere. So AR would only add to the noise of life by adding yet another digital overlay of information, right?</strong></p>
<p><strong>Perhaps the holy grail is to use technology to filter things out. AR might become a solution to leading a simpler life, or a perfectly customized life if you want that. Ultimately the control needs to be with the individual. I guess I am talking about something like TiVo taken to the extreme.</strong></p>
<p><strong>Tish:</strong> And then that other biggy &#8211; augmented reality search! I am asking this next question of <a href="http://www.wikitude.org/" target="_blank">Wikitude</a> and <a href="http://sekaicamera.com/" target="_blank">Sekai Camera</a> too, and now I must also ask <a href="http://www.acrossair.com/" target="_blank">Acrossair</a> and several others, I guess! Obviously a huge area of opportunity in this broader landscape that uses location-awareness, barcode scanners, image recognition and augmented reality is to harness the collective intelligence &#8211; a whole new field of search. There is the beginning of a discussion on this <a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">in the comments here</a>.</p>
<p>What will it take, in your view, to become a leader in augmented reality search?</p>
<p><strong>Brian: I&#8217;m more of a content guy, so I tend to focus on things like UI, quality of creative, etc. From that perspective, I am looking forward to evolving beyond the &#8220;post-it&#8221; text overlay user experience we see now in AR search. I was impressed with the TAT Augmented ID concept and hope we start seeing more smart design solutions like that emerging in the space. There are some great new design approaches coming out of the location-aware space that should be applied to AR search. I&#8217;ve been studying the heads-up display designs being used in video games, and re-watching movies like Iron Man for ideas. This is another example where Hollywood has painted a polished picture of what AR can and should look like, and the masses have already accepted these design approaches. So from my perspective, the leaders in search will be delivering sexy, smart and simple solutions. It&#8217;s all about the S&#8217;s.</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/08/30/games-goggles-and-going-hollywood-how-ar-is-changing-the-entertainment-landscape-talking-with-brian-selzer-ogmento/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>Augmented Reality &#8211; Bigger than the Web: Second Interview with Robert Rice from Neogence Enterprises</title>
		<link>http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/</link>
		<comments>http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/#comments</comments>
		<pubDate>Mon, 03 Aug 2009 23:24:12 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR Platform for Platforms]]></category>
		<category><![CDATA[ARConsortium]]></category>
		<category><![CDATA[ARToolkit]]></category>
		<category><![CDATA[Augmented Reality Browsers]]></category>
		<category><![CDATA[augmented reality platforms]]></category>
		<category><![CDATA[augmented reality SDKs]]></category>
		<category><![CDATA[augmented reality toolsets]]></category>
		<category><![CDATA[Dr Chevalier]]></category>
		<category><![CDATA[Gavin Starks]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Green Tech AR]]></category>
		<category><![CDATA[Imagination AR Engine]]></category>
		<category><![CDATA[iphone and augmented reality]]></category>
		<category><![CDATA[iphone augmented reality]]></category>
		<category><![CDATA[iphone Video API and augmented reality]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Lumus]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[markers and Webcam AR]]></category>
		<category><![CDATA[Mobile AR]]></category>
		<category><![CDATA[MoMo]]></category>
		<category><![CDATA[nathan freitas]]></category>
		<category><![CDATA[Neogence Enterprises]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[Unifeye Augmented Reality]]></category>
		<category><![CDATA[wearable displays for augmented reality]]></category>
		<category><![CDATA[Web Squared]]></category>
		<category><![CDATA[Wikitude]]></category>
		<category><![CDATA[World as a Platform]]></category>
		<category><![CDATA[World Browsers]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4184</guid>
		<description><![CDATA[I first started talking to Robert Rice, CEO of Neogence Enterprises, Chairman of the AR Consortium, in 2008. Robert was already actively working on creating the world&#8217;s first global augmented reality network. But it took a few months before what Robert had said to me about the impending explosion of augmented reality into our lives really [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/whowhowhere.jpg"><img class="alignnone size-medium wp-image-4186" title="Questions and Answers signpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/whowhowhere-300x199.jpg" alt="Questions and Answers signpost" width="300" height="199" /></a></p>
<p>I first started talking to <a href="http://www.curiousraven.com/about-me/" target="_blank">Robert Rice</a>, CEO of <a href="http://www.neogence.com/#/home" target="_blank">Neogence Enterprises</a>, Chairman of the <a href="http://docs.google.com/AR%20Consortium"><span>AR Consortium</span></a><span>, in 2008. Robert was already actively working on creating the world&#8217;s first global augmented reality network. But it took a few months before what Robert had said to me about the impending explosion of augmented reality into our lives really sunk in &#8211; &#8220;this is going to be much bigger than the Web</span>!&#8221; he extolled.</p>
<p>By January 2009 I was convinced, and I posted my first interview with Robert, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it OMG Finally for Augmented Reality?..&#8221;</a> As I mentioned in the intro, I had recently tried out <a href="http://www.wikitude.org/" target="_blank">Wikitude</a> and <a title="Nat Mobile Meets Social DeFreitas" href="http://openideals.com/" target="_blank">Nathan Freitas&#8217;s</a> graffiti app on the streets of New York City, and I was impressed. Now, 7 months later, Augmented Reality has not disappointed: there is an explosion of new applications, and the arrival of some of the first commercial and practical toolsets, SDKs, and APIs for aspiring developers.</p>
<p>For more on this see my previous post, <a title="Permanent Link to Augmented Reality&#8217;s Growth is Exponential: Ogmento &#8211; &#8220;Reality Reinvented,&#8221; talking with Ori Inbar" rel="bookmark" href="../../2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/">Augmented Reality&#8217;s Growth is Exponential: Ogmento &#8211; &#8220;Reality Reinvented,&#8221; talking with Ori Inbar</a>, which is an introduction to my series of interviews with the key players in augmented reality and founding members of the <a href="http://www.arconsortium.org/" target="_blank">ARConsortium</a> &#8211; <a href="http://www.int13.net/en/" target="_blank">Int13</a>, <a href="http://www.metaio.com/" target="_blank">Metaio</a>, <a href="http://www.mobilizy.com/" target="_blank">Mobilizy</a>, <a href="http://www.neogence.com/" target="_blank">Neogence Enterprises</a>, <a href="http://ogmento.com/">Ogmento</a>, <a href="http://www.sprxmobile.com/" target="_blank">SPRXmobile</a>, <a href="http://www.tonchidot.com/" target="_blank">Tonchidot</a>, and <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a>.</p>
<p>As I mentioned before<span>, </span><a href="http://www.sprxmobile.com/about-us/" target="_blank"><span>Maarten Lens-FitzGerald</span></a><span> of </span><a href="http://www.sprxmobile.com/" target="_blank"><span>SPRXmobile</span></a><span> told me the other day that my first </span><a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank"><span>interview with Robert Rice</span></a><span>, in January of this year, was a key inspiration for SPRXmobile to get started on the development of </span><a href="http://layar.eu/" target="_blank"><span>Layar &#8211; a Mobile Augmented Reality Browser</span></a><span>. Much more on Layar and </span><span>Wikitude</span><span> &#8211; world browser &#8211; in my upcoming interviews with </span><a href="http://www.sprxmobile.com/about-us/" target="_blank"><span>Maarten Lens-FitzGerald</span></a><span> and <a href="http://www.mamk.net/" target="_blank">Mark A. M. Kramer</a>, respectively</span>.</p>
<p>Recently, both Layar and Wikitude earned a mention in the white paper by Tim O&#8217;Reilly and John Battelle, <a href="http://www.web2summit.com/web2009/public/schedule/detail/10194" target="_blank">Web Squared: Web 2.0 Five Years On</a>. Web Squared is essential reading not only because it covers the underlying technological shifts of &#8220;Web Meets World,&#8221; of which augmented reality is a vital part; but, crucially, because it focuses on how there is a new opportunity for us all:</p>
<p><strong>&#8220;The new direction for the Web, its collision course with the physical world, opens enormous new possibilities for business, and enormous new possibilities to make a difference on the world&#8217;s most pressing problems.&#8221;</strong></p>
<p>I am currently working on a post on Green Tech AR, one of the areas where augmented reality can play an important role &#8220;in solving the world&#8217;s most pressing problems.&#8221; Augmented Reality has a lot to offer Green Tech development. As <a href="http://twitter.com/AgentGav" target="_blank">Gavin Starks</a> of <a href="http://www.amee.com/" target="_blank">AMEE</a> said at <a href="http://wiki.oreillynet.com/eurofoo06/index.cgi" target="_blank">Euro Foo in 2006</a>, &#8220;climate change would be much easier to solve if you could see CO2.&#8221;</p>
<p>But really useful Green Tech AR requires markerless object recognition, which is still hard to do (going beyond feature tracking and modified marker recognition), and a tight alignment of media and graphics with physical objects, in addition to quite a high level of instrumentation of the physical world. And for Green Tech AR to really shine, we are going to need innovators like Robert Rice who are working on, and solving, multiple really hard problems like:</p>
<p><strong>&#8220;privacy, media persistence, spam, creating UI conventions, security, tagging and annotation standards, contextual search, intelligent agents, seamless integration and access of external sensors or data sources, telecom fragmentation, privilege and trust systems, and a variety of others.&#8221;</strong></p>
<p>Recently Robert Rice <a id="ph56" title="presented" href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>presented</span></a><span> at </span><a href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>MoMo</span></a><span> Amsterdam. </span> Here is a drawing of him in action (<a href="http://www.flickr.com/photos/wilgengebroed/3591060729/" target="_blank">picture below</a> from <a title="Link to wilgengebroed's photostream" rel="dc:creator cc:attributionURL" href="http://www.flickr.com/photos/wilgengebroed/"><strong>wilgengebroed</strong></a>&#8216;s Flickr Stream).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRiceMoMOdrawing.jpg"><img class="alignnone size-medium wp-image-4185" title="RobertRiceMoMOdrawing" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRiceMoMOdrawing-300x184.jpg" alt="RobertRiceMoMOdrawing" width="300" height="184" /></a></p>
<p>In his Twitter feed ( <a href="http://twitter.com/robertrice" target="_blank">@RobertRice</a> ), Robert Rice reminds us: &#8220;<span><span>By the way folks, what you see out there now as &#8220;augmented reality&#8221; is not what it is going to be in two years.&#8221; Robert plans to show the first public demo of his &#8220;platform for platforms&#8221; at <a href="http://gamesalfresco.com/ismar-2009/ismar-08/" target="_blank">ISMAR 2009</a>. </span></span></p>
<p>Robert is currently writing up a series of white papers. I got a preview of the first, &#8220;The Future of Mobile &#8211; Ubiquitous Computing and Augmented Reality.&#8221; Robert points out, <strong>&#8220;AR through the lens of the mobile industry and ubiquitous computing is almost overwhelming compared to AR as a marker based marketing campaign.&#8221;</strong></p>
<p>I asked Robert, &#8220;What are the key take-aways for investors interested in the augmented reality field at the moment?&#8221;</p>
<p><strong><span>&#8220;First, Mobile AR is going to be bigger than the web. Second, it is going to affect nearly every industry and aspect of life. Third, the emerging sector needs aggressive investment with long term returns. Get-rich-quick startups in this space will blow through money and ultimately fail. We need smart VCs to jump in now and do it right. Fourth, AR has the potential to create a few hundred thousand jobs and entirely new professions. You want to kick start the economy or relive the golden days of 1990s innovation? Mobile AR is it.</span></strong></p>
<p><strong><span>Don&#8217;t be misguided by the gimmicky marketing applications now. Look ahead, and pay attention to what the visionaries are talking about right now. Find the right idea, help build the team, fund them, and then sit back and watch the world change. Also, AR has long term implications for smart cities, green tech, education, entertainment, and global industry. This is serious business, but it has to be done right. I&#8217;m more than happy to talk to any venture capitalist, angel investor, or company executive that wants to get a handle on what is out there, what is coming, and what the potential is. Understanding these is the first step to leveraging them for a competitive edge and building a new industry. Lastly, AR is not the same as last decade&#8217;s VR.&#8221;</span></strong></p>
<h3>Talking with Robert Rice</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRicepic.jpg"><img class="alignnone size-medium wp-image-4195" title="RobertRicepic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRicepic-201x300.jpg" alt="RobertRicepic" width="201" height="300" /></a></p>
<p><em><a href="http://www.flickr.com/photos/vannispen/3586765514/in/set-72157619022379089/" target="_blank">Picture of Robert Rice</a> at <a href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>MoMo</span></a> from <a href="http://www.flickr.com/photos/vannispen/"><strong>Guido van Nispen</strong></a>&#8216;s Flickr Stream</em></p>
<p><strong>Tish Shute:</strong> So perhaps we better start with an update on state of play with Neogence?</p>
<p><strong>Robert Rice:</strong> Neogence is doing well actually. We don&#8217;t talk much about the fact that we are still a small startup, and we face a lot of the usual obstacles related to that and to being a small team. Fundraising has been extra difficult, mostly because people are just now beginning to see the potential in AR, and that is still colored by perceptions based on a lot of the gimmicky AR ad campaigns out there. Still, it is better than it was two years ago, when the idea of an AR startup was a bit of a joke to a lot of the VCs we talked to. However, we do have an agreement from a new venture fund in Europe (which we can&#8217;t talk about yet) for our first round of funding, but we don&#8217;t expect to close that for several months.</p>
<p>If all goes well, we hope to debut our first public demo at ISMAR 2009 in Orlando to select individuals and a few press folks. We might release a few viral videos before then that are conceptual and about what we are building in the long run, <span>but that depends on how things go over the next several weeks</span>.</p>
<p>We are also very active in looking for and building strategic partnerships and relationships with other companies, and this is not restricted to the augmented reality or mobile sector. As I have said before, we are looking at this as a long term business venture and the industry as something that will be bigger than the web itself within ten years. We are doing typical contract work and custom AR solutions to keep the cash flow going and build up the corporate resume a bit. So, if you want something done better than the stuff you are seeing now, with all of the generic &#8220;look at our brand in AR with markers and a webcam,&#8221; you should definitely give us a call.</p>
<p style="margin-left: 0pt; margin-right: 0pt;"><strong>Tish Shute:</strong> Just to clarify, because most of the recent press has been about browser-type AR like Wikitude and Layar, which are not AR in the purist sense because they do not have graphics tightly linked to the physical world. Neogence, if I am correct, is focused on building a true AR platform in the sense I just described?</p>
<p><strong>Robert Rice: </strong>Hrm, I<span> </span><span>have argued with a few others about the actual definition of AR. Some</span> people prefer a narrow and limiting view (3D overlaid on video), but I think in terms of the market and the end-user, it is better to have a wider definition. In that sense, AR is purely the blend of real and virtual, with or without full 3D overlaid on video. If we go with that, then Wikitude, Layar, Sekai, NRU, and others all fit into the AR definition.</p>
<p>Anyway, you are correct. We are building a true <span>platform for AR, and this is quite different from what others are marketing as AR browser &#8220;platforms.&#8221;</span></p>
<p><span>There are a few problems with the &#8220;AR Browsers&#8221; approach that no one seems to be noticing. </span>One is that they are all trying to get people to build new applications for their browsers, when they should be trying to get people to create content that they can share and browse.</p>
<p>Second, someone using Layar is not going to see anything that is designed for Sekai or Wikitude.</p>
<p>Third, the experiences are generally for one user. While I love all of these guys and think each of the teams has some real talent on it, the model is flawed until someone using Wikitude can see the same thing that someone using Layar or Sekai Camera is seeing (provided they are in the same physical location).</p>
<p><span>While we are working on our own client side technologies that we hope will be useful and integrated with every mobile device and AR browser out there, our core focus is on connecting everything and everyone together, and facilitating the growth of the industry with the tools to create content, applications, and so forth. We want to solve the really difficult technical problems (some of which most people haven&#8217;t even considered yet, because of the perspective from which they are looking at the potential of AR), and make it easy for everyone else to do the cool stuff. We want to be the facilitators.</span></p>
<p>If you really want an idea of where we are going or some of what has inspired us, you have GOT to read Dream Park, Rainbows End, and The Diamond Age. If you have heard me speak anywhere or read my blog, you know that I am continually suggesting these and others.</p>
<p>Anyway, short answer, yes, we are building a true <span>platform for </span><span>ubiquitous mobile augmented reality, and we are absolutely the first to be doing so</span>.<span> I hope to demo some of this in October at ISMAR, with a full commercial launch next year (10/10/10 at 10:10am &#8211; hehe, seriously). We will probably launch a website soon for people to start signing up and building a community now (especially if you want in on the beta testing of the whole kibosh).</span></p>
<p><strong>Tish:</strong> So just to clarify, how will Neogence&#8217;s approach differ from and fit into the growing world of Augmented Reality tools that we have now, e.g., <a href="http://www.hitl.washington.edu/artoolkit/" target="_blank">ARToolkit</a>, <a href="http://www.imagination.at/en/?Projects:Scientific_Projects:MARQ_-_Mobile_Augmented_Reality_Quest" target="_blank">Imagination</a>, <a href="http://www.metaio.com/products/" target="_blank">Unifeye</a>?</p>
<p><strong>Robert:</strong> I guess you could say that we are trying to build the infrastructure for the global augmented reality network. This could be viewed as a service, or even a platform for platforms. If Neogence does its job right, anything you create using ARtoolkit, Unifeye, or Imagination would be applications you could <span>ultimately link to, integrate with, or deploy on or through</span>, what we are building, and not be tied to a specific set of hardware, browser, or walled garden.</p>
<p><strong>Tish: </strong><span>You mention Neogence is going to provide a platform for platforms. Without knowing the details, that sounds like a lot of centralization, which prompts the inevitable question: &#8220;Who owns the data?&#8221; Do you think other AR applications or provid</span>ers would resist a &#8220;Platform for Platforms?&#8221; I know the potential centralization power of Google Wave has already got people talking about these issues (one of the comments on my recent blog post was about how the Google Wave protocol may be interesting for at least some parts of augmented reality communication).</p>
<p><strong>Robert:</strong> It really depends on perception and how we end up building it. We aren&#8217;t talking about creating a closed system. As far as who owns the data, it depends on what data we are talking about. For the most part, I think that if the end-user creates something, they should own it and have control over it. They should also be able to do what they want with it, independent of everything else.</p>
<p>This is one thing that proponents of the smart cloud and the thin/dumb client don&#8217;t like to talk about. It sounds great on paper, but when you start thinking about it, all that does is strip away power from the end user. Case in point&#8230;Amazon recently wiped every copy of George Orwell&#8217;s 1984 from all Kindle devices. They claimed they didn&#8217;t have the rights to distribute/publish it and it was available by accident. The scary thing, though, is that they literally went into every Kindle out there, found the copies, and deleted them.</p>
<p>How would you like it if Microsoft suddenly decided to delete every copy of Microsoft Office? Or every file that had a .doc extension? That is a huge violation&#8230;we feel like we own what is on our computers. But with the whole cloud thing, your data is at the mercy of whoever is running the cloud servers. No privacy, no ownership, no control. And if the system breaks, all you will have is a pretty dumb device that can&#8217;t do much on its own. Now, that isn&#8217;t to say that the technical merits and benefits of a cloud model aren&#8217;t worth pursuing; they are.</p>
<p>But I think there needs to be some hybrid model. Don&#8217;t dumb down my computer or my smart phone; let&#8217;s keep pushing how much these devices can do. We should take full advantage of centralized and distributed systems, but in a hybrid mashup sense. That is what we are pursuing with our AR platform, while trying to protect the ownership and intellectual property rights of the end user.</p>
<p><strong>Tish: </strong>Earlier today I was telling you how impressed I was by Google Wave &#8211; it is quite mind-blowing to experience massively multiplayer real time interaction on what will be an open, internet-wide platform &#8211; Wave is breaking new ground here, and more than one person has mentioned its potential role in AR to me (see <a href="http://www.ugotrade.com/2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/" target="_blank">the comments to my recent post on Ogmento</a>).</p>
<p>I know you are a strong advocate of this kind of real time shared experience being part of AR. But we are only just beginning to see it emerge via Wave on the existing web &#8211; what will it take to have this kind of real time shared experience in AR? We got briefly into the thick client, thin client, cloud versus P2P discussions &#8211; what is your approach to delivering a massively shared real time experience that is like Wave, not confined to a walled garden?</p>
<p><strong>Robert:</strong> I&#8217;m not a fan of any of those models as being stand-alone or mutually exclusive. Again, the hybrid model with the best of both worlds is key. In the early stages of the emerging industry, you are likely to see some walled gardens (or perhaps a walled garden of walled gardens&#8230;).</p>
<p>No one knows how things are going to turn out in the next five to ten years, and few people are thinking about it actively. For us though, I favor Alan Kay&#8217;s quote (pardon the paraphrasing): &#8220;To accurately predict the future, invent it.&#8221; That&#8217;s what we are doing. In the short term, there will be plenty of experimentation in the industry and a lot of model testing.</p>
<p><strong>Tish: </strong>Do you think, though, Wave protocols might be useful as at least part of the picture for AR standards? As you point out, open standards and open protocols are going to be vital for shared experiences of AR. Is it important to build off existing protocols to get the ball rolling, and what do you see as being the important early protocols for AR?</p>
<p><strong>Robert:</strong> I think for now, we will use a lot of existing protocols for communications and whatnot, as well as the usual standards for things like 3D models, animation, and so forth. This is only natural. However, as the industry and technology evolve, we will need entirely new ones. As far as I know there is no existing market standard for anything like the Holographic Doctor from Star Trek Voyager, and that type of thing is definitely in the pipeline for the future (sooner than you would think).</p>
<p><strong>Tish:</strong> All the excitement at the arrival of the browser-like mobile reality developments has been really great &#8211; I feel people are getting a taste for what it means to compute with anyone/anything, anywhere, and anytime.</p>
<p>Wikitude started the ball rolling. And with Wikitude.me it is the first to support user generated content. Now there is Layar, and Sekai Camera also. But as you mentioned to me in an earlier chat, with Layar and Wikitude opening up, &#8220;there are probably a half dozen other apps coming out in short order with similar functionality (even the AR twitter thing has some similarities).&#8221;</p>
<p>What has been most exciting to you about these developments up to this point? What will these apps/platforms need to do to stand out in a crowd? Up to now, these browser-like AR experiences do nothing with close-by objects. Do you see &#8220;world browsers&#8221; with near object recognition coming out in the near future? Could Wikitude do this with an integration of SRengine or Imagination?</p>
<p><strong>Robert:</strong> Yes, Wikitude or Layar could do this (integrate with something else for &#8220;near&#8221; AR), and it would be a step in the right direction. Tagging things in the real world is the basic functionality that will grow from text tags to photos, videos, 3D objects, and all sorts of other types of data and metadata. This gets really fun when that data is generated by the object itself. First is just giving people the ability to tag something and share that tag with their friends; everything else grows from that. This sort of functionality is probably the most exciting in terms of near future advancement.</p>
<p>However, I think the idea of a stand-alone browser platform is a bit awkward&#8230;unless you also consider Firefox a website browser platform. After all, you can create widgets (applications) for it. Anyway, the point is having access to the same data&#8230;if you put three people in a room, one for each browser, they should see and experience the same content, although the interface might be different (based on which browser and of course which hardware they are using). This means there needs to be some communication between whatever servers they are storing their data on (meaning, user tags) and some standard for how those tags are created.</p>
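<p>[The interoperability described here implies some shared record format any client could fetch and render. No such standard existed at the time of this interview; as a purely hypothetical sketch, with every field name an assumption for illustration only, a geo-anchored user tag might look like this:]</p>

```python
import json

# Hypothetical schema for a browser-agnostic AR tag.
# All field names are assumptions; no such standard existed in 2009.
tag = {
    "id": "tag-0001",
    "creator": "alice",          # the end-user who owns the tag
    "anchor": {                  # where it is registered in the world
        "lat": 40.748440,
        "lon": -73.985664,
        "alt_m": 10.0,
    },
    "content": {                 # what any client should render
        "type": "text",          # could grow to photo, video, 3d-model
        "body": "Great coffee here",
    },
    "visibility": ["friends"],   # a simple privilege system
}

# Any AR browser could fetch and parse the same record over the wire;
# only the presentation layer would differ per client.
restored = json.loads(json.dumps(tag))
print(restored["content"]["body"])
```

<p>[Whether tags live on one provider&#8217;s servers or are federated across many, agreeing on fields like these is what would let a Wikitude, Layar, or Sekai Camera client all render the same sticky note.]</p>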
<p>Of course, if all they are doing is grabbing the GPS coordinates of the nearest subway station and telling you how far it is and in what direction, then they should all be able to see the same thing, regardless of the platform. But then, that isn&#8217;t really interesting, is it? I could get the same info on a laptop with Google Maps.</p>
<p>This is part of the problem right now though&#8230;no one seems to be thinking about the bigger picture much. All of the effort is either on making the next cool ad campaign for a car or a movie, or creating a tool to tell you where the nearest thingamajig is, but in a really cool fashion on a mobile device.</p>
<p>No one is talking much about filtering data, privilege systems, standards, third party tools, interoperability, and so on. There is also little conversation about where hardware is going. Right now everyone is developing software based on what hardware is available. This needs to change so that hardware is developed to take advantage of new software coming out (this happened in the PC industry a while back, and growth accelerated dramatically).</p>
<p>These are some of the reasons why I led the effort to start the AR Consortium. We brought CEOs from 8 different AR companies and startups together to start talking about these issues. We are still getting organized and have plans to expand the membership to other companies, but we want to do this right and we aren&#8217;t rushing things. The important thing is that we have started and there is at least a line of communication open now, where there wasn&#8217;t before.</p>
<p>I would expect to see the early movers expanding what they offer very soon, and they will probably lead the way in the short term. Definitely keep an eye on the companies involved in the AR Consortium. There are lots of very smart and motivated people there, and they are far ahead of all the experimental dabbling in AR we are beginning to see on YouTube, Twitter, and elsewhere.</p>
<p><strong>Tish: </strong>When we had a discussion earlier about the basics for an AR platform and an AR browser, you talked about the difference between tools, a platform, and an AR browser &#8211; like Wikitude and Layar, which should be about features/functionality, e.g., AR treasure hunts, AR geocaching, invisible AR yellow sticky notes you can leave at restaurants you don&#8217;t like, etc. Also, you noted it should let you explore (browse) multiple formats and open content for AR &#8211; any data, information, or media that is linked to something in the real world, and the visualization of/interaction with the same.</p>
<p>Wikitude is a stepping stone to a true browser by your definition. But are we also seeing what you would define as an AR platform emerging &#8211; Unifeye, Wikitude (you can recap your definition if you like too)?</p>
<p>I think Wikitude hopes to provide the Lego blocks for augmented reality readers, browsers, applications, tools, and platforms.</p>
<p><strong>Robert:</strong> I expect some segmentation among the various AR companies that are out now, as they find their individual strengths and focus on them. Some will emphasize the client software (the browser), others will develop robust tools for creating content, SDKs/APIs will advance and facilitate rapid development of applications, etc. Neogence is ultimately working on the glue in the middle that ties everything together, makes it massively multiuser, persistent, and ubiquitous. Things like Unity3D have the potential to fill a need in the middleware space.</p>
<p><strong>Tish:</strong> I know <a href="http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/" target="_blank">Blair MacIntyre</a> (see my interview with Blair here) and others are using Unity3D as an AR client. Could Unity3D become increasingly important?</p>
<p><strong>Robert:</strong> It has the potential to become a favored middleware for providing the rendering layer. It already works nicely in regular browsers, and on several mobile platforms. Why code all the graphics rendering stuff from scratch when you can just license something and extend its features with AR functionality?</p>
<p><strong>Tish:</strong> Now to ask your own question back to you! There seems to be a lot of reason to think that, eventually, there will be the kind of access to the iPhone video API that augmented reality really requires &#8211; and by that I mean more than we will get with OS 3.1, which is rumored to deliver only about half of what we really need for AR on the iPhone: &#8220;not truly useful when you want to align video with graphics.&#8221; So:</p>
<p><em>&#8220;The iPhone&#8230;future or failure? Seemingly anti-developer stance regarding augmented reality, and only a sliver of the global market share. Are we letting the short term glitz of Apple and the iPhone fad pull us in the wrong direction? Shouldn&#8217;t we be focusing on Symbian devices that have the lion&#8217;s share of the market? Or should we be looking more at other OSs (WinMobile, Android), or not at all, and trying to create a new platform that is more MID and less smart phone with a hardware partner?&#8221;</em></p>
<p><strong>Robert:</strong> Apple and the iPhone are a bit problematic right now. There is no way I can go to a venture capitalist (at least in North America) and say, &#8220;Hey, we are building awesome AR applications for WinMobile or Symbian&#8221;&#8230;they would either laugh or they simply wouldn&#8217;t get it. There is this false perception that the iPhone is the ultimate mobile device, the sexiest, and the only thing that people want. Everyone wants a demo on the iPhone, the media is mostly interested in iPhone developments, and the Apple fanatic market couldn&#8217;t give a fig about other devices. Other devices may have a larger market share or even better hardware, but we have to focus on the iPhone right now, at least in the demo stage, to get any market attention and traction worth the time and effort.</p>
<p>In the future though, unless Apple changes its stance with their SDK and APIs, and starts adding hardware that is key for mobile AR (beyond what is there now), the market will move on without them. This is a really easy decision to make given Apple&#8217;s draconian policies and the fact that their percentage of the global market is minuscule. The smart companies are looking at the whole picture and not putting all of their eggs in the Apple basket.</p>
<p>Of course, once the wearable displays are commercially viable everything changes. Wearable computers with small screens or even no screens are going to be what everyone wants. The interface will go from handheld touch screens to virtual holographic interfaces that you interact with using your bare hands.</p>
<p>So for now (the immediate short term), it&#8217;s all about the iPhone. Taking mobile ubiquitous AR to the global market and building for the future will be based on something else. Hardware risks becoming a commodity or a closed platform. Do you really want to buy the Apple iGlasses and only see AR content that is compatible, when your best friend has a pair of WinGlasses and sees something entirely different? No. The hardware and the client software (what people are calling the AR browser now) will become common, and it won&#8217;t matter what brand you use; they will all be accessing the same content.</p>
<p>But at least for the foreseeable future, we are building software for specific hardware, and the sexiest mobile on the block is the iPhone. The second someone comes out with something much better and the paradigm shifts (software driving hardware instead of vice versa), everything changes.</p>
<p><strong>Tish:</strong> How is the quest for sexy AR eyewear going? I know we were checking out <a href="http://www.masunaga1905.jp/brand/teleglass/" target="_blank">the Japanese eyewear</a> with Adam Johnson from <a href="http://genkii.com/" target="_blank">Genkii</a> just now. For the Neogence project &#8211; as you are going for a fully developed model of AR, doesn&#8217;t this necessitate going beyond the iPhone and getting the hardware companies moving on the eyewear?</p>
<p><strong>Robert:</strong> The guys making wearable displays really need to get off the pot and stop paying lip service to mobile AR. If they don&#8217;t do something quickly, I, and others, are going to be scouring the planet looking for someone capable of building the lightweight, stylish wearable displays with transparent lenses we are begging for. We aren&#8217;t going to be waiting around for hardware anymore. The AR Pandora&#8217;s box has been opened. I should note that many of us (AR Consortium members) have had less than pleasant experiences or communications with the half dozen or so companies that are making wearable displays. Either their visual design is terrible, the materials feel flimsy, the field of view is limited, or the companies are preoccupied with other business and government contracts. Any attention to the growing AR market is an afterthought and, in a few cases, condescending. AR is going to be a billion dollar industry in a very short time, and these guys are just leaving money on the table. If they were smart, they would be begging the CEOs from the AR Consortium to fly out to their offices and collaborate on building a pair of wicked sick glasses. The smart phone manufacturers should be doing the same thing, but I have to say that they at least seem to have some ambition and zeal to create better devices, so I can&#8217;t really complain too much there.</p>
<p>Anyway, to answer the rest of your question, we have to assume that the hardware guys, especially regarding the eyewear, are going to take a long time to develop and release the things we need for the ultimate AR experience. So, our goal is to start building things now for what is available. That means scaling things down and handicapping what AR can do, so it works on the &#8220;sexy&#8221; iPhone. The important thing though is to start creating applications -now- so when the glasses are commercially available, there will be a wealth of content for people to access and use on day one.</p>
<p>As long as Apple isn&#8217;t playing nice, it is going to hurt everyone. Is it any surprise that they shut down Google Voice? There is a huge opportunity for someone to step up and leapfrog the rest of the industry. Give us the hardware and we will create amazing software for it. Don&#8217;t compete with the iPhone; surpass it.</p>
<p><strong>Tish: </strong>What is the state of play of current AR technology and toolkits?</p>
<p><strong>Robert:</strong> The current crop of AR technology and toolkits is absolutely critical for this stage of the industry, and everyone should be leveraging it as much as possible. I talk down marker and image based tracking a lot, but I also like to point out that it is the necessary baseline that the industry is going to be built on. The problem is that there is only so much you can do with marker driven apps, and as creative people and marketing types start conceptualizing all sorts of cool stuff for the future, they risk setting expectations too high. It is one thing to show someone the future; it is another to say this is the future and it&#8217;s happening right now. This is why I cringe every time I see a conceptual video presented as &#8220;our product DOES this&#8221; instead of &#8220;our product WILL DO this.&#8221; Something that simple can still cause the butterfly effect of raising expectations too high and contribute to overhyping.</p>
<p><strong>Tish: </strong>One of the things that seems very exciting about the new <a href="http://ogmento.com/" target="_blank">Ogmento</a> partnership is that experienced content producers <a id="squu" title="Brad Foxhoven" href="http://www.blockade.com.nyud.net:8080/about/about-blockade" target="_blank">Brad Foxhoven</a> and <a id="odvk" title="Brian Selzer" href="http://brianselzer.com/">Brian Selzer</a> from <a id="xow_" title="Blockade" href="http://www.blockade.com/" target="_blank">Blockade</a> are now taking a leading role in AR. What are the most exciting directions for content that you see emerging for AR in the next 12 months?</p>
<p><strong>Robert:</strong> Virtual (well, augmented) pets, and multiuser mobile AR games (2-4 people) are probably going to lead in the next 12 months for content. Easy, accessible, engaging.</p>
<p><strong>Tish: </strong>And are you at Neogence also involved in content partnerships?</p>
<p><strong>Robert:</strong> Yes, we are in the process of finalizing some content partnerships with an eye for long term relationships. We are specifically looking for partners that want to find substantive ways to leverage AR technology, and not use it as a superficial gimmick or attraction that wears off after five minutes. I&#8217;m still cringing over the Procter &amp; Gamble Always campaign with AR.</p>
<p><strong>Tish:</strong> So back to your observation about some of the tricky problems in creating a true global massively multiuser, ubiquitous, mobile AR platform &#8211; what are some of the main obstacles to this mission in your view (aside from getting investment!)?</p>
<p><strong>Robert:</strong> Trying to explain it to people. The technical problems we can handle or have already solved. But trying to communicate what exactly we are doing is still tough. Not because it is overly complicated, but rather because it is so new and different. People are having a hard time grasping augmented reality beyond marker/webcam.</p>
<p><strong>Tish: </strong>Which AR tools are most important right now?</p>
<p><strong>Robert:</strong> Content is critical right now to show what the technology is capable of and to continue building the presence of augmented reality in the public mind. The big benefit of integrated/unified platforms now is speed of development for content. I think that the Flash ARToolkit + Papervision combination is rocking the planet right now. It is accessible, easy to learn, and lets people create something very quickly. More tools and middleware are coming out, and this increases options for designers and developers.</p>
<p><strong>Tish: </strong>What are your favorite Papervision apps?</p>
<p><strong>Robert: </strong>Hrm, I don&#8217;t have a favorite Papervision app just yet, although I think the tech is solid. I expect to see a lot of stuff built on that platform in the near future, especially as more ad agencies get on the bandwagon and start telling their IT guys to learn how to program Flash so they can make something. Have you seen www.ronaldchevalier.com? Not so much for the actual AR stuff, but because the whole thing is just brilliant. It&#8217;s exactly what some cult figure spiritual guru would do with AR. I wish I had thought of it first, actually. This is probably one of the best -seamless- implementations of AR in marketing, where it fits&#8230;it isn&#8217;t just jammed in there for the sake of saying they used AR.</p>
<p><strong>Tish:</strong> Do you think Apple is going to open the iPhone to the full potential of augmented reality anytime soon &#8211; a lot of expectations have been raised?</p>
<p><strong>Robert:</strong> Apple is like that guy who has a party at his house and owns a really awesome state-of-the-art home theater in his basement, but makes everyone watch a movie in the living room on a regular TV with a VCR.</p>
<p>They need to get over themselves and quit being a wet blanket. Otherwise, we are taking the beer and pizza we brought, and going to someone else&#8217;s house. <span>Sorry, the Apple thing is a bit of a sore point with me.</span></p>
<p><strong>Tish:</strong> But will people leave all that candy and soda at the App Store?</p>
<p><strong>Robert:</strong> I tell you what, though, there is an opportunity for certain mobile phone manufacturers to give me a call and start talking to Neogence and the other members of the Consortium. We have some ideas and specs that could have a radical impact on the mobile market and stuff the iPhone in a box. Hint hint.</p>
<p><strong>Tish:</strong> So what is your vision for the AR Consortium? I know it kicked off with a letter to Apple about the video API. What is the next step? There was a lot of hope that this year would be big for MIDs, but this really hasn&#8217;t happened yet &#8211; do you think there is hope for a MID takeoff despite the lousy economy?</p>
<p><strong>Robert: </strong>MIDs? No, not yet. Smart phones are too lucrative and too hot. It isn&#8217;t time yet for the MID to go mainstream. For that to happen, there needs to be a driving need (cough, ubiquitous AR, cough).</p>
<p>The AR consortium is mostly an informal affiliation. I expect that representatives from each member will probably meet at every significant conference to catch up over drinks. We are also going to be planning for our own members conference at least once a year. That will happen after we expand the membership though.</p>
<p>The main idea behind the consortium though was to open up a channel of communication between the CEOs so we could work together on standards, solving problems, collaborating, forming some partnerships, and using the collective to bang on the doors of companies like Apple and others. There is power in a group.</p>
<p><strong>Tish:</strong> You mentioned there is a whole long conversation we can have about getting the eyewear. As you point out, true AR eyewear changes everything. Can you give a little road map of where this has to go?</p>
<p><strong>Robert: </strong>There are essentially four or five main approaches, depending on whether you make the lenses special or leave them plain. You would normally want them to be plain so people with prescription lenses wouldn&#8217;t have problems and would have the option to switch them out. Some types use a more prismatic approach for top down projection, or a corner piece mounts lasers and bounces them off the lens into the eye. Another approach is embedding OLEDs or something else into the lenses themselves.</p>
<p>I really like the <a href="http://www.lumus-optical.com/" target="_blank">Lumus</a> approach, but their product design isn&#8217;t quite there yet. If the wearables don&#8217;t look cool, people won&#8217;t use them. To be honest, if I had the money, I&#8217;d probably ask the Art Lebedev guys to design them based on someone else&#8217;s optical engineering. They designed the <a href="http://www.artlebedev.com/everything/optimus/" target="_blank">Optimus Maximus</a> OLED keyboard&#8230;brilliant industrial designers, loaded with engineers too. If these guys couldn&#8217;t build the glasses and make them look damn bad ass, I&#8217;d be shocked. Heck, I bet they could build the next gen MID while they were at it.</p>
<p><strong>Tish: </strong>Getting the hardware innovation and software innovation feeding into each other would be really great.</p>
<p><strong>Robert</strong>: Absolutely.</p>
<p><strong>Tish</strong>: That would push the eyewear forward too wouldn&#8217;t it?</p>
<p><strong>Robert:</strong> All it takes is one, and then the competitive landscape would fire right up.</p>
<p><strong>Tish:</strong> What applications would accurate GPS enable?</p>
<p><strong>Robert:</strong> Everything. For example, you know exactly where the phone is and which way it is facing. That means you can put it on a table and hit a button, then move it somewhere else and do the same thing. In a few minutes, you have a nearly accurate &#8220;mental&#8221; model of the whole place. Now you go back and start dropping virtual flower pots everywhere.</p>
<p>This is one area where I think the smart phone guys are missing the boat and taking the cheap route. It is possible to have very accurate GPS (down to a six inch area) with better chips and firmware, but it is cheaper to stick in old tech. Most apps today don&#8217;t need that hyper accuracy, so they aren&#8217;t bothering. Mobile AR, though, that&#8217;s a different story.</p>
<p>With that level of accuracy, you would know exactly where the mobile device is, so all you would need to know is the direction it is facing (orientation), and you could solve one of the problems with registering exactly where 3D objects and augmented media is (it is more complicated than I am describing it, but we don&#8217;t need to get into that much detail here). You wouldn&#8217;t need markers anymore.</p>
<p><strong>Tish: </strong>Isn&#8217;t Wikitude doing this with Wikitude.me, their tagging app?</p>
<p><strong>Robert:</strong> Not really. That type of approach is on a very large scale, using the accelerometers, compass, and GPS to determine where you are and what is in the distance. They (and others like Layar) don&#8217;t handle &#8220;near&#8221; AR. They effectively poll your GPS, check a database to see what is nearby and at what degree/distance, and then draw a representation on the screen. They don&#8217;t even need a mobile device&#8217;s camera at all.</p>
<p>Even if they did things up close, it&#8217;s still based on finding landmarks or on things that are broadcasting their location. For example, if they were standing near me, they might get &#8220;robert, 37 degrees, 15 meters away&#8221; but they wouldn&#8217;t be tracking me exactly as I walk around or have the ability to overlay graphics on ME.</p>
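<p>[The poll-GPS-then-query-a-database approach described above boils down to a distance-and-bearing calculation from the device&#8217;s GPS fix to each stored point of interest. A minimal sketch, using an equirectangular approximation that is adequate at these short ranges; the coordinates are invented for illustration:]</p>

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Approximate ground distance (metres) and compass bearing (degrees,
    0 = north, clockwise) from point 1 to point 2. Equirectangular
    approximation -- fine for the tens-of-metres range "near" AR cares about."""
    R = 6371000.0  # mean Earth radius in metres
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    mean_lat = math.radians((lat1 + lat2) / 2)
    x = dlon * math.cos(mean_lat) * R   # east offset in metres
    y = dlat * R                        # north offset in metres
    return math.hypot(x, y), math.degrees(math.atan2(x, y)) % 360

# Device at one invented position; a tagged person a few metres north-east.
d, b = distance_and_bearing(51.500000, -0.120000, 51.500108, -0.119870)
print(f"robert, {b:.0f} degrees, {d:.0f} meters away")
```

<p>[A &#8220;far&#8221; AR browser just draws that label at the computed bearing on the camera view; tracking a moving person and overlaying graphics on them, as Rice notes, needs vision-based tracking rather than coordinates alone.]</p>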
<p><strong>Tish:</strong> I retweeted your <a title="#ar" href="http://twitter.com/search?q=%23ar">#ar</a> marketing using ARToolkit + flash (markers/webcams) = Photoshop pagecurl &lt; six months. Bad design kills innovation. I know you like <a href="http://ronaldchevalier.com/" target="_blank">Dr Chevalier</a> though! What are some of the other AR marketing projects that you like? What would you like to see in terms of innovation in the next 6 months?</p>
<p><strong>Robert:</strong> The marker/webcam approach is already becoming overused and cliche (tremendously fast). Older readers will remember the ubiquitous photoshop page curl that adorned nearly every website and graphic on the internet back in the day. It was horrible. Yes, the Dr. Chevalier stuff cracks me up.</p>
<p>I want to see some big companies or ad agencies really try to do something different with AR, preferably mobile. Take some risks, do something different. Don&#8217;t follow the crowd. Innovation? I want to see some wearable displays with transparent lenses, I want a mobile device specifically designed for ubiquitous AR, I want to see some experimenting with AR in the green tech sector, and I&#8217;d like to see someone get that GiFi wireless technology from that researcher in Australia and jam it into a smart mobile. I would also like my flying car and lunar vacation now, thank you. It is almost 2010 and no one has found that black obelisk yet.</p>
<p><strong>Tish:</strong> So a few closing thoughts! What do you see as the next big thing? Hopes for the AR Consortium? Biggest obstacle for commercial AR? And what is the coolest thing you have seen this year?!</p>
<p><strong>Robert:</strong> The next big thing is what I&#8217;m working on hahaha. I hope the AR Consortium will grow and be the active catalyst in making AR mainstream, practical, and world changing.</p>
<p>The biggest obstacle is making sure that the right funding finds the right developers to develop the right technology and create kick ass applications.</p>
<p>The coolest thing I&#8217;ve seen this year would probably be <a href="http://vimeo.com/5595869" target="_blank">the facade projection stuff</a> (see below). Now, imagine that, but without the projector. That&#8217;s part of what I envision for AR in the future.</p>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="225" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=5595869&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /><embed type="application/x-shockwave-flash" width="400" height="225" src="http://vimeo.com/moogaloop.swf?clip_id=5595869&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/feed/</wfw:commentRss>
		<slash:comments>20</slash:comments>
		</item>
		<item>
		<title>Composing Reality and Bringing Games into Life: Talking with Ori Inbar about Mobile Augmented Reality</title>
		<link>http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/</link>
		<comments>http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/#comments</comments>
		<pubDate>Wed, 06 May 2009 14:50:30 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[CurrentCost]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[HomeCamp]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Kids With Cameras]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Adam Greenfield]]></category>
		<category><![CDATA[Add new tag]]></category>
		<category><![CDATA[alternate reality games]]></category>
		<category><![CDATA[alternative reality gaming]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR goggles]]></category>
		<category><![CDATA[ARToolkit]]></category>
		<category><![CDATA[augmented reality games]]></category>
		<category><![CDATA[augmented times]]></category>
		<category><![CDATA[Better Place]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Caryatids]]></category>
		<category><![CDATA[Come Out and Play]]></category>
		<category><![CDATA[composing reality]]></category>
		<category><![CDATA[Cory Doctorow]]></category>
		<category><![CDATA[eyewear for augmented reality]]></category>
		<category><![CDATA[game development conference]]></category>
		<category><![CDATA[Games Alfresco]]></category>
		<category><![CDATA[games for preschoolers on the iphone]]></category>
		<category><![CDATA[games on the iphone]]></category>
		<category><![CDATA[GDC 2009]]></category>
		<category><![CDATA[GE augmented reality ad]]></category>
		<category><![CDATA[google earth]]></category>
		<category><![CDATA[green technology]]></category>
		<category><![CDATA[image recognition]]></category>
		<category><![CDATA[Immersive augmented reality]]></category>
		<category><![CDATA[Int 13]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[iphone games]]></category>
		<category><![CDATA[iPhone OS 3]]></category>
		<category><![CDATA[iphone versus the android]]></category>
		<category><![CDATA[ISMAR]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[jane mcgonigal]]></category>
		<category><![CDATA[julian Bleeker]]></category>
		<category><![CDATA[Kati London]]></category>
		<category><![CDATA[Kweekies]]></category>
		<category><![CDATA[Loopt]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[Microsoft Tag]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile gaming]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Netweaver]]></category>
		<category><![CDATA[open source augmented reality]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Pookatak]]></category>
		<category><![CDATA[Pookatak Games]]></category>
		<category><![CDATA[reality experiences]]></category>
		<category><![CDATA[RFID]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[Rouli Nir]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[Shai Agassi]]></category>
		<category><![CDATA[smart environments]]></category>
		<category><![CDATA[smart objects]]></category>
		<category><![CDATA[The End of Hardware]]></category>
		<category><![CDATA[the Pong for augmented reality]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubiquitous augmented reality]]></category>
		<category><![CDATA[ubiquitous experience]]></category>
		<category><![CDATA[virtual reality]]></category>
		<category><![CDATA[WARM 09]]></category>
		<category><![CDATA[Wattzon]]></category>
		<category><![CDATA[Where 2.0]]></category>
		<category><![CDATA[WikiMouse]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3448</guid>
		<description><![CDATA[Recently, I talked to Ori Inbar (above), formerly senior vice-president at SAP. Ori is on a mission to make augmented reality commercially successful not in 5, 10, or 15 years, but now. Ori is the founder of Pookatak Games &#8211; a video game company, &#8220;with a vision to upgrade the way people experience the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/oriinbarpost.jpg"><img class="alignnone size-medium wp-image-3449" title="oriinbarpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/oriinbarpost-300x199.jpg" alt="oriinbarpost" width="300" height="199" /></a></p>
<p>Recently, I talked to <a href="http://gamesalfresco.com/">Ori Inbar</a> (above), formerly senior vice-president at <a href="http://www.sap.com/">SAP</a>. Ori is on a mission to make augmented reality commercially successful not in 5, 10, or 15 years, but now. Ori is the founder of <a href="http://gamesalfresco.com/about/" target="_blank">Pookatak Games</a> &#8211; a video game company, <strong>&#8220;with a vision to upgrade the way people experience the world.&#8221;</strong> Ori will be participating May 20th in <a href="http://en.oreilly.com/where2009/public/schedule/detail/7197" target="_blank">O&#8217;Reilly&#8217;s Where 2.0 panel, &#8220;Mobile Reality</a>&#8221; &#8211; an event not to be missed, IMO.</p>
<p>The taste for computing anywhere, anytime has entered human culture via the iPhone and is spreading like chocolate cake and pizza at a preschool party (see <a href="http://gamesalfresco.com/2009/03/23/gdc-2009-why-the-iphone-just-changed-everything/" target="_self">why the iPhone changed everything</a>). And while the full flowering of the next step is yet to come &#8211; computing anywhere, anytime by anyone and <strong>anything</strong> (<a href="http://en.wikipedia.org/wiki/Internet_of_Things" target="_blank">&#8220;the internet of things&#8221;</a>), our love for these first devices capable of being <strong>mediating artifacts for ubiquitous computing</strong> (Adam Greenfield) is a vital first step to free us from our tethers to computer screens and fulfill the promise of augmented reality.</p>
<p>If you need more convincing on the pivotal role augmented reality will play as the web moves into the world, check out Tim O&#8217;Reilly&#8217;s recent comments in <a id="iz1_" title="this video clip on Augmented Times" href="http://artimes.rouli.net/2009/04/tim-oreilly-on-recognition-rfid-and-web.html" target="_blank">this video clip posted on Augmented Times</a> and <a id="wtf4" title="here" href="http://radar.oreilly.com/2008/02/augmented-reality-a-practical.html" target="_blank">here</a> early last year.</p>
<p>From another perspective, the gloomy specter of economic and environmental catastrophe is driving a movement to &#8220;<a id="h5pf" title="infuse intelligence into the way the world works&quot;" href="http://news.bbc.co.uk/2/hi/technology/7992480.stm" target="_blank">infuse intelligence into the way the world works.&#8221;</a> But the challenge for a smart planet is not just about making environments smart, it is about using smart environments to enable people to act smarter (<a href="http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/" target="_blank">see my interview with Adam Greenfield</a>).</p>
<p>We need a rapid upgrade in both the way the world works, and the way we experience the world.</p>
<p>((Note: It is time to read (if you haven&#8217;t already) <a href="http://search.barnesandnoble.com/The-Caryatids/Bruce-Sterling/e/9780345460622" target="_blank">Bruce Sterling&#8217;s Caryatids</a> (Cory Doctorow&#8217;s book of the year for 2009) &#8220;as a software design manual&#8221; (<a href="http://www.nearfuturelaboratory.com/2009/03/17/design-fiction-a-short-essay-on-design-science-fact-and-fiction/" target="_blank">see Julian Bleeker</a>) because Caryatids reveals the Gordian knots of human folly, greed, compassion and desire entwined in near future designs for technologies to save the world.))</p>
<p>Ori Inbar worked with Shai Agassi (Shai is now leading the world changing <a id="v5ow" title="Better Place" href="http://www.betterplace.com/" target="_blank">Better Place</a>), driving <a id="gf_5" title="Netweaver" href="http://en.wikipedia.org/wiki/NetWeaver" target="_blank">Netweaver</a> from a mere concept to a &#8220;major, major business for SAP.&#8221; So Ori has already been through the cycle of working in a very small startup and growing it into a billion dollar business. He has both the experience and the passion to realize his vision for augmented reality.</p>
<p>At Pookatak, he explains:</p>
<p><strong>&#8220;We design &#8216;reality experiences&#8217; that make users&#8217; immediate environments more significant to them. We wish to free young and old from getting lost in front of the screen. By delivering the world&#8217;s information to people&#8217;s field of view, and by weaving real world objects into interactive narratives, we help people rediscover the real world.&#8221;</strong></p>
<p>Pookatak will release their first game this summer. Currently it is under wraps. But Ori gives us some glimpses of what is to come in the interview below.</p>
<p>In addition to founding Pookatak, Ori is involved in a broader effort to move augmented reality forward. On his blog, <a id="ie5s" title="Games Alfresco" href="http://gamesalfresco.com/" target="_blank">Games Alfresco</a> &#8211; where he recently welcomed <a href="http://gamesalfresco.com/about/" target="_blank">a new partner, Rouli Nir</a> &#8211; Ori has focused his eye of wisdom on every significant recent advance in augmented reality (check out <a id="zr9y" title="this essence of Ori's thinking in a fast paced video" href="http://gamesalfresco.com/2009/03/09/augmented-reality-today-ori-inbar-speaks-at-warm-2009/" target="_blank">this essence of Ori&#8217;s thinking in a fast-paced video</a> presentation for <a href="http://gamesalfresco.com/2009/02/12/live-from-warm-09-the-worlds-best-winter-augmented-reality-event/" target="_blank">WARM &#8217;09</a>).</p>
<p>Also Ori is one of the organizers of the interactive media track at <a id="b-c6" title="ISMAR 2009" href="http://www.ismar09.org/" target="_blank">ISMAR 2009</a>. At ISMAR this year, Ori explained, <strong>&#8220;we are trying to bring in people that develop interactive experiences for consumers, beyond the traditional attendees coming from a research perspective.&#8221;</strong></p>
<p>In the interview below, Ori explains much of his thinking on how augmented reality will become commercially successful. Enjoy it, think about it, and share it. And most importantly, if you can, get involved with ISMAR 2009.</p>
<p>Ori has inspired me to participate in <a id="seky" title="ISMAR" href="http://www.ismar09.org/" target="_blank">ISMAR</a> this year. Ori pointed out:</p>
<p><strong>The</strong> <a href="http://campwww.informatik.tu-muenchen.de/ismar09/lib/exe/fetch.php?id=ismar09%253Astart&amp;cache=cache&amp;media=ismar09:ismar09-cfp_090211_final.pdf" target="_blank">call for papers</a> <strong>is on, and this year it targets not just the typical research-paper audience but also interactive media and art folks.</strong></p>
<p><strong>There are plenty of opportunities such as:</strong></p>
<p><strong>Art Gallery</strong></p>
<p><strong>Demonstrations</strong></p>
<p><strong>Tutorial</strong></p>
<p><strong>Workshops</strong></p>
<p>It&#8217;s a huge opportunity to shape the emergence of augmented reality.</p>
<h2><strong> Interview With Ori Inbar</strong></h2>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-41.png"><img class="alignnone size-full wp-image-3479" title="picture-41" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-41.png" alt="picture-41" width="107" height="146" /></a></p>
<h3>Making Augmented Reality Commercially Successful</h3>
<p><strong>Tish Shute: </strong>You are considered a key trail blazer in AR and you have the go-to blog for augmented reality! What are the most important lessons you have learned researching, writing, and developing AR in the last couple of years?</p>
<p><strong>Ori Inbar: You need to have a vision. You need to know where this is going to go in ten or fifteen or twenty years. But you&#8217;ve got to start with something really simple that makes use of the technology you have on hand. And do something that is practical, that people will like, and something they would actually want to buy. It&#8217;s as simple as that. I&#8217;m currently looking at what we could do with existing technology. First of all, you have to put it in front of people. Right now most people have never heard the term augmented reality. Go into the street and ask 100 people about it; maybe 2 would know about it. So you need to put it in front of people, because most people think it&#8217;s still science fiction or a special effect you see in movies, not something you can experience in real life.</strong></p>
<p><strong>Tish: </strong>It seems to me that for augmented reality applications to become popular with existing technology the key breakthrough would be getting people to hold up their phones. What are the obstacles to getting people to use their mobile devices like this?</p>
<p><strong>Ori: There&#8217;s a really nice cartoon by </strong><em> </em><strong><a href="http://www.tonchidot.com/">Tonchidot</a> (below) &#8211; the Japanese company behind the Sekai Camera. It&#8217;s an illustration showing the evolution of man, from ape to man (holding a cell phone looking down), to the developed man holding a device like a camera &#8211; in front of its eyes.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-37.png"><img class="alignnone size-medium wp-image-3454" title="picture-37" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-37-300x221.png" alt="picture-37" width="300" height="221" /></a><strong></strong></p>
<p><strong>Which is exactly what you&#8217;re talking about. People ask, &#8220;are people going to walk with this like that all day long?&#8221; Probably not. I mean you have to build it in a way that doesn&#8217;t require them to hold it like that all the time. People are used to this gesture with the ubiquitous digital cameras. I tested one of my prototypes on a two and a half year old girl. She had no problem holding it just like she holds a camera.</strong></p>
<p><strong>Tish:</strong> <a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> mentioned, &#8220;The problem with the mobile phone as an AR device is a problem of awareness,&#8221; i.e., you have to have a way of letting people know when there&#8217;s something interesting wherever they are. One of the issues regarding this is if you get too many alerts, then you tune them out.</p>
<p><strong>Ori: First of all, Blair is one of the people in academia who get it, because he looks at it from an experience perspective, not just as an interesting technical problem to solve. Let&#8217;s start with getting people to enjoy this new experience. The AR demos so far were mostly eye candy, and mostly for advertising &#8211; the <a href="http://ge.ecomagination.com/smartgrid/#/landing_page" target="_blank">GE AR ad</a> created a lot of buzz; but you look at it for 10 seconds and you forget about it. You need to build something that people would want to experience over time and would be willing to pay for. I think that&#8217;s the big test, right?</strong></p>
<p><strong>Now in terms of having a ubiquitous experience where you&#8217;re continuously connected, it doesn&#8217;t have to be an overwhelming experience. Just like some of the social media tools we&#8217;re using today, we decide when to connect, and we filter out the trash. You could get alerts only for things that really matter to you, not for everything that happens in your immediate environment.</strong></p>
<p><strong>There will be many layers of information, and it&#8217;ll be up to you to pick the ones you want to experience. The real benefit is that you get the information in your own field of view and in context of where you are or what you do.</strong></p>
<p><strong>Tish:</strong> So what are you working on these days?</p>
<p><strong>Ori: We are working on a little app that targets a very different audience than what you&#8217;d expect: preschoolers. We think we can encourage them to get away from a PC or TV screen and learn something while playing &#8211; in the real world. You&#8217;ll hear more about it as soon as this summer. Nuff said.</strong></p>
<p><strong>But, it is a small application that will run on the iPhone. People ask, how many preschoolers own iPhones? Well, their parents do.</strong></p>
<p><strong>Tish:</strong> Yes, there are certainly many New York kids with iPhones &#8211; my kid now has my old iPhone. He has pretty much switched from playing games on his DS to the iPhone. I noticed in your WARM video you place a big emphasis on AR as something that will get kids away from screens and engaged with reality. This is something parents will approve of!</p>
<p><strong>Ori: Yes, I saw something really interesting at my kids&#8217; party one day; they were all sitting around the room &#8211; looking down at their own DS screens. You could play the DS anywhere, but kids would usually play it on the sofa, looking at the screen, isolated from the world. With an iPhone and a camera, and the application we&#8217;re producing, reality becomes part of the game. Yes, that makes it all of a sudden much more interesting for parents. Because kids are spending so much time in front of the screen, all of a sudden there&#8217;s something that will encourage them to interact with real objects, real things. Every parent I&#8217;ve talked to loves that idea.</strong></p>
<p><strong>Tish:</strong> Yes that is what is cool about the work of <a href="http://www.katilondon.com/" target="_blank">Kati London</a> &#8211; I think I saw someone say this on Twitter, &#8220;Kati puts the computer in the game not the game in the computer.&#8221;</p>
<p><strong>Ori: Yes, kids are spending more time in front of games and the computer because it&#8217;s more interesting. It captivates them with &#8220;<a id="x_z0" title="game pleasures" href="http://8kindsoffun.com/">game pleasures</a>&#8221; that tap into their brain&#8217;s dopamine circuitry &#8211; constantly seeking reward and satisfaction. So you&#8217;re not going to be able to tell them to go back to playing in reality without these pleasures. We have to study these mechanics from games and bring them into reality. It&#8217;s about programming real life; and augmented reality helps you achieve that.</strong></p>
<p><strong>Here&#8217;s an example: cause and effect; in a game when you do something you always get an immediate effect. You&#8217;re good, you get a reward. You&#8217;re not good, you get a cue to improve. In real life you do things and you could wait 2 or 3 years until you actually get feedback (if you&#8217;re lucky). Augmented Reality allows you to bring these mechanics into the real world. I think that&#8217;s going to help kids rediscover reality, in a new sense, which is what every parent is dreaming about.</strong></p>
<p><strong>Tish:</strong> I don&#8217;t know how much you can say about your app. But in regard to doing augmented reality on the iPhone&#8230; there&#8217;s no compass. Is this a limitation?</p>
<p><strong>Ori: True, no compass yet. But the camera gives you a lot of information that you can interact with. When you run the application, you see the world in front of you, and if the app can recognize real life objects &#8211; it can put virtual elements on top of it.</strong></p>
<p><strong>Tish:</strong> But not with any accuracy unless you&#8217;re using markers. Are you using markers?</p>
<p><strong>Or</strong><strong>i: We&#8217;re using natural feature recognition. It doesn&#8217;t have to be an ugly looking marker. It can be any image.</strong></p>
<p><strong>Tish:</strong> So you&#8217;re using image recognition. Are you working with one of these image recognition startup companies (<a id="nws6" title="list here" href="http://www.educatingsilicon.com/2008/11/25/a-round-up-of-mobile-visual-search-companies/" target="_blank">list here</a>)?</p>
<p><strong>Ori: We&#8217;re working with one of those. What&#8217;s unique about it is it runs very nicely on any cell phone, and on the iPhone it works the best. For this first app, it doesn&#8217;t really matter where you are physically; the geolocation is not part of the experience.</strong></p>
<p><strong>Tish: </strong>For a truly engaging AR experience we will need more of a backend than is currently available?</p>
<p><strong>Ori: I call the backend the cloud, where you have all this information and ways to access it from anywhere. Actually I think it&#8217;s become pretty mature today. If you look at the different elements required to enable an augmented reality experience to work, you have &#8211; first &#8211; the user, who&#8217;s always at the center. Then you have the lens. The lens can be an iPhone, or glasses, even a projector. The lens allows you to watch, sense and track information in the real world: people, places, things. Then in the backend you have the cloud where you store and retrieve information.</strong></p>
<p><strong>So if you look at the maturity of these different elements, I think the cloud is in pretty good shape, because there&#8217;s so much information we&#8217;re collecting and storing. Anything from Google, Wikipedia, Facebook, all that kind of stuff &#8211; it&#8217;s a lot of useful information you can access from anywhere using APIs. And a lot of it is also starting to include geolocation information. Take <a id="zhag" title="Loopt" href="http://www.loopt.com/" target="_blank">Loopt</a> or Google&#8217;s <a href="http://www.google.com/latitude/intro.html" target="_blank">friends service</a> that allows you to see where your friends are and what they&#8217;re doing. There&#8217;s tons of information out there and it&#8217;s pretty easy to access it. Now the question is: what do you do with it?</strong></p>
<p><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> is such a simple and brilliant application and nobody thought about doing it until this guy from Salzburg did. It doesn&#8217;t have any sophisticated visual tracking. It knows your position and it&#8217;s simply looking at the angle you&#8217;re pointing to. Based on these parameters it brings information from Wikipedia that pertains to your field of view. So most of it was already there. It&#8217;s just a matter of connecting the pieces in an experience that is valuable for people.</strong></p>
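<p>The mechanism Ori describes is pure geometry: no computer vision, just the phone&#8217;s GPS position and the compass angle it is pointing at. A minimal sketch of that idea in Python (not Wikitude&#8217;s actual code; the function names and the 60&#176; field of view are illustrative assumptions):</p>

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user (lat1, lon1) to a
    point of interest (lat2, lon2), in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

def in_view(heading_deg, poi_bearing_deg, fov_deg=60.0):
    """True if the POI's bearing lies within the camera's horizontal
    field of view, centred on the compass heading (handles the
    359-degree/0-degree wrap-around)."""
    diff = (poi_bearing_deg - heading_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2
```

<p>Given a list of geotagged Wikipedia entries, you would compute each entry&#8217;s bearing from the user and overlay only those for which <code>in_view</code> returns true &#8211; which is essentially &#8220;connecting the pieces&#8221; Ori is talking about.</p>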
<p><strong>Tish: </strong>It is the uptake of even a very simple technology that puts the magic in it.</p>
<p><strong>Ori: Yes, take Twitter. If you go to its homepage it looks like a very simple, boring app, but it is something that is both enjoyable and very useful to people.</strong></p>
<h3><strong>Why you should participate in ISMAR 2009</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-40.png"><img class="alignnone size-medium wp-image-3478" title="picture-40" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-40-222x300.png" alt="picture-40" width="222" height="300" /></a><br />
<strong>Tish: </strong>I know that you are involved in organizing <a id="seky" title="ISMAR" href="http://www.ismar09.org/" target="_blank">ISMAR</a> (picture above from Ori&#8217;s post on <a href="http://gamesalfresco.com/2009/02/23/ismar-2009-the-worlds-best-augmented-reality-event-wants-you-to-contribute/" target="_blank">&#8220;ISMAR 2009: The World&#8217;s Best Augmented Reality Event&#8230;&#8221;</a>) and there is a call out for papers and for volunteers &#8211; can you tell me more about it?</p>
<p><strong>Ori: Yes, we hope to have the first ISMAR where we practice what we have just discussed: let&#8217;s build on all the research invested so far and, instead of thinking only about 5-10 years from now, let&#8217;s see what we can do today. So we are bringing people in from other disciplines &#8211; artists, interactive media developers and people from the entertainment industry. The goal is to use the technology to make something interesting for people &#8211; again, something that people would buy, and making it commercially successful. Many people don&#8217;t know about ISMAR because in the past it was a pure engineering-oriented event, and people looking at AR from a commercial perspective weren&#8217;t attracted to it. The Chair of the event this year is based in Florida and he is going to bring in a lot of people from the entertainment industry, such as Disney. I think this will transform the event into something more like SIGGRAPH &#8211; more of an industry event. As one of the organizers of the interactive media track, we are trying to bring in people that want to build applications for consumers.</strong></p>
<p><strong>Tish:</strong> In terms of AR applications what are the flagships today?</p>
<p><strong>Ori: There are very few because it&#8217;s just the beginning. There&#8217;s one tiny studio in France called <a id="z1ln" title="Int 13" href="http://www.int13.net/en/" target="_blank">Int 13</a>. They&#8217;ve created maybe the first commercial game running on a mobile device using AR technology. It&#8217;s called <a href="http://www.youtube.com/watch?v=Te9gj22M_aU" target="_blank">Kweekies</a>. It was one of the contenders for the Nokia Mobile Innovation Awards. They were one of the ten finalists, but they didn&#8217;t win it. It looks really cool. It&#8217;s something that runs on your desk, with a marker. Many AR folks say markers are the past, markers are ugly. But it&#8217;s still a cool experience. I think people will go for it.</strong></p>
<p><strong>Tish:</strong> Yes, I think we will have to look to small companies that are free to think creatively to lead the way. It seems many games companies are tied up pulling off huge big budget projects, and enterprise is still catching up on how to use social media!</p>
<p><strong>Ori: Yes, last year I was at the game development conference (GDC); there was no mention of augmented reality &#8211; not on the exhibition floor, in none of the sessions; nobody talked about it. I was stunned. Then this year, there was a little change. There were like three demos on the exhibition floor: <a href="http://www.metaio.com/" target="_blank">Metaio</a>, <a href="http://www.vuzix.com/home/index.html" target="_blank">Vuzix</a> and a Dutch company called <a href="http://www.augmented-reality-games.com/" target="_blank">Beyond Reality</a>. And then there was Blair&#8217;s talk, which was very, very cool. The room was packed with people. And after the talk there were dozens of people lining up to talk with him about the topic. There was definitely interest, but still on the very edge. The video game industry is still a hit driven business and publishers spend upward of 20-30 million dollars to create the best AAA game possible. They just can&#8217;t take the risk. So it&#8217;s going to come from smaller companies, from outsiders coming in with a vision and understanding of how to put the AR pieces together to create a totally new experience.</strong></p>
<p><strong>Tish:</strong> But the basic tool set is there isn&#8217;t it?</p>
<p><strong>Ori: I talked to some folks at the games developer conference, many folks with MMO background, and they have great ideas about AR. It&#8217;s great to see different people with different views on what&#8217;s needed first. &#8220;Joe the Programmer&#8221; had this idea of creating a small piece of hardware that you can put in every house to provide accurate geospatial information in your home. That could open up many opportunities for AR experiences in homes.</strong></p>
<p><strong>Tish:</strong> Don&#8217;t you think we have enormous resources in terms of image databases that provide a great basis for augmented reality? I was talking to Aaron Cope at ETech about <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">The Shape of Alpha</a> &#8211; Flickr&#8217;s vernacular mapping project using all the geotagged photos in Flickr. That is such a cool project. <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43824" target="_blank">Aaron will be speaking at Where 2.0</a> also.</p>
<p><strong>Ori: Think of Google Earth. Google Earth leveraged communities to basically map all the major cities around the world into 3D models. And that is an essential step to be able to do augmented reality outdoors. Because if you had to model everything from scratch, it wouldn&#8217;t be realistic.</strong></p>
<h3><strong>Augmented Reality and Becoming Greener</strong></h3>
<p><strong>Tish:</strong> I am really interested in how AR interfaces might be useful to some of the emerging energy identity/metering projects like <a href="http://www.amee.com/" target="_blank">AMEE</a> and <a href="http://www.wattzon.com/" target="_blank">WATTZON</a> because I think it is very important that people have very intuitive, immediate, and enjoyable ways to relate to energy data so they can make greener choices.</p>
<p><strong>Ori: Back in the day I had an idea to build an augmented reality application to help people become greener. You look at things around your home with the camera, and it recognizes their greenhouse gas footprint and makes recommendations to reduce it. I guess it was a bit too early to do that based on visual recognition alone&#8230; you&#8217;d need additional sensors that would provide related information about what you are looking at.</strong></p>
<p><strong>Tish:</strong> Well as there is more interest in Green technology do you think we may see VC interest in some green AR projects now?</p>
<p><strong>Ori: I talked to some of the investment folks, angels as well as VCs, about AR, and they had no clue what it is. There&#8217;s a need for a whole lot of education. And there are no proof points (as in successful investments in this domain), and counter to popular belief &#8211; they don&#8217;t like risk so much&#8230;</strong></p>
<p><strong>Tish:</strong> And consumer adoption must lead the way, right?</p>
<p><strong>Ori: Just like with every emerging technology in history, people never bought the technology; they bought the content, the apps, the benefits that came on top of the technology. Whether it was VHS winning over Betamax, or Blu-ray winning over HD DVD &#8211; it&#8217;s always because of more/better content. Look at the video game console war: Xbox and Nintendo did better than Sony just because they had more and better games. Even Windows was a success thanks to its applications. People bought it for the applications, not the OS. The content is the first to drive demand.</strong></p>
<p><strong>Tish:</strong> One of the challenges in giving people new ways to relate to their energy consumption is that you can just have them looking at graphs of how bad they have been in the past &#8211; that may make them feel bad, but it doesn&#8217;t necessarily give them ways or motivation to change. There perhaps needs to be a more immediate relationship to the data to facilitate change. I think the mantra for optimization of anything from energy usage to supply chains is timely, actionable data.</p>
<p><strong>Ori: There are a lot of ideas about measuring information and displaying it to people. For example, the Prius hybrid car, one of its interesting features &#8211; which is kind of game like &#8211; is a constant display of your current fuel consumption. That alone changes how people drive because they try to beat the &#8220;Score&#8221; and as a result conserve more fuel. That model can be applied to our homes&#8230;</strong></p>
<p><strong>Tish:</strong> Yes, that is something I am very interested in. I have been following several projects in this area &#8211; one of my favorites is the <a href="http://www.arduino.cc/" target="_blank">Arduino</a>, <a href="http://www.currentcost.com/" target="_blank">Current Cost</a>/<a href="http://www.ladyada.net/make/tweetawatt/" target="_blank">Tweetawatt</a>, <a href="http://www.pachube.com/" target="_blank">Pachube</a> integrations <a href="http://www.ugotrade.com/2009/04/24/homecamp-2-home-energy-management-and-distributed-sustainability/" target="_blank">I saw at Homecamp</a>.</p>
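<p><em>For readers curious how these integrations fit together: a Tweetawatt-style setup reads power figures from a Current Cost meter and pushes them to a Pachube feed over HTTP. The sketch below is a minimal, hypothetical illustration &#8211; the feed ID, datastream name, and API key are placeholders, and the v2 CSV endpoint and <code>X-PachubeApiKey</code> header are assumptions based on Pachube&#8217;s published API, so check the current docs before using it.</em></p>

```python
# Hedged sketch: pushing a watt reading to a Pachube feed.
# FEED_ID, "watts", and API_KEY are placeholders, not real credentials.
import urllib.request


def build_pachube_update(feed_id, datastream, value, api_key):
    """Build (url, headers, body) for a CSV datapoint update (assumed v2 API)."""
    url = "http://api.pachube.com/v2/feeds/%s.csv" % feed_id
    headers = {"X-PachubeApiKey": api_key, "Content-Type": "text/csv"}
    body = "%s,%s" % (datastream, value)  # CSV row: datastream id, value
    return url, headers, body


def send_update(feed_id, datastream, value, api_key):
    """PUT one datapoint to the feed; returns the HTTP status code."""
    url, headers, body = build_pachube_update(feed_id, datastream, value, api_key)
    req = urllib.request.Request(
        url, data=body.encode("utf-8"), headers=headers, method="PUT"
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

<p><em>A monitoring script would call <code>send_update</code> each time the meter emits a new reading, which is essentially what the Tweetawatt/Pachube bridges demonstrated at Homecamp do.</em></p>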
<p>You joined a startup with Shai Agassi which was bought by SAP, right? He has a brilliant approach with Better Place.</p>
<p><strong>Ori: I think what&#8217;s really unique about Better Place&#8217;s approach is that it doesn&#8217;t require people to change their behavior. People are still going to have their own cars. They&#8217;ll be able to drive as far as they want, and for the same (or lower) cost. It&#8217;s not necessarily about a new technology; electric cars have been around for a long time, but there was no way people were going to be limited by a 50 or 70 mile range, and Better Place is solving that problem. With its infrastructure of charging spots and battery switching stations, drivers are going to be able to drive anywhere. And it&#8217;ll be similar to having to stop once in a while to refuel your car. The price may be even lower than what you pay today for your transportation needs &#8211; and you&#8217;ll stop generating greenhouse gas. It&#8217;s a clever way of taking technology to a whole new level without changing the behavior of people.</strong></p>
<p><strong>Tish: </strong>Better Place is a classic example of things as a service, isn&#8217;t it? It is basically a utility company.</p>
<p><strong>Ori: It is similar to a phone carrier model. You pay for a membership that gives you access to the car (equivalent to the phone) and electricity (equivalent to the phone line) for the same price as fuel costs today. And as a bonus you get to save the world.</strong></p>
<h3><strong>How the iPhone Changed the Game for AR &#8211; and the iPhone versus Android</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-38.png"><img class="alignnone size-medium wp-image-3472" title="picture-38" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-38-300x198.png" alt="picture-38" width="300" height="198" /></a></p>
<p><em>Picture from Ori&#8217;s post</em><strong><em>, <a href="http://gamesalfresco.com/2009/03/23/gdc-2009-why-the-iphone-just-changed-everything/" target="_blank">&#8220;GDC 2009: Why the iPhone just changed everything&#8221;</a></em></strong></p>
<p><strong>Ori: And back to AR &#8211; you have to take the same approach, because nobody wants to don those huge head mounted displays or backpacks. You have to take advantage of people&#8217;s current behavior: they already carry their iPhones or similar devices.</strong></p>
<p><strong>Tish:</strong> As we discussed, you just have to get people raising up their phones and looking through them when that is a useful thing to do. Both Wikitude and Nathan Freitas&#8217;s graffiti app were enough to get me interested in the evolutionary step of raising my phone! Nathan&#8217;s graffiti app is nice. You leave a marker for your graffiti so other people can find it and view/add their own &#8211; a nice primal experience, like pissing on the lamp post to let your pack know where you&#8217;ve been. Also the graffiti app taps into a long history of NYC street culture around tagging and graffiti art (see my interview, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it OMG finally for Augmented Reality?&#8221;</a>).</p>
<p><strong>Ori: The App Store has fundamentally changed the mobile gaming industry. Last year it was in shambles. There was no growth. Everybody was complaining: &#8220;We can&#8217;t handle it, there&#8217;s a million phones, and you have to test it on each phone. And carriers suck, they don&#8217;t care about sharing and promoting your content.&#8221; Everything was bad. This year mobile gaming is the hottest thing. And it&#8217;s all because of the iPhone. It changed the game.</strong></p>
<p><strong>Tish: </strong>How do you think Android is going to get traction against the iPhone?</p>
<p><strong>Ori: Well, the number one thing is the form factor &#8211; the iPhone is just much cooler than the G1. The G1 is OK, but it doesn&#8217;t have the same feel. People thought it was going to be easy to clone the iPhone, but none of the attempts has succeeded so far.</strong></p>
<p><strong>Tish: </strong>How much does it matter for AR that you can&#8217;t run things persistently in the background on the iPhone?</p>
<p><strong>Ori: Actually, they have added such a capability in OS 3. You can now make use of a background service.</strong></p>
<p><strong>Tish:</strong> OS 3 will open up new possibilities for AR?</p>
<p><strong>Ori: Access to the video API is still not public. But there is a new Microsoft application &#8211; Microsoft Tag &#8211; that makes use of that API, which means it is probably OK to use it.</strong></p>
<p><strong>Tish: </strong>(I ask Ori for his card and he shows me how to read it with my iPhone.) Oh nice, you have an AR card, of course!</p>
<h3><strong>In Search of Pong for Augmented Reality</strong></h3>
<p><strong>Tish: </strong>So how will AR begin to, as Blair&#8217;s friend puts it, &#8220;facilitate a killer existence,&#8221; particularly as we are probably looking at some new and perhaps pricey hardware?</p>
<p><strong>Ori: You could take the Better Place approach: we&#8217;re going to give you a great experience, and we&#8217;ll include the devices as part of that experience for the same price. Let&#8217;s say you subscribe to an AR experience which offers multiuser support and all the information you need wherever you go &#8211; exactly according to the vision. You pay for a subscription on a monthly basis, and included in that cost we give you a better device that offers a better AR experience. It&#8217;s following the phone carrier approach, but in a good way.</strong></p>
<p><strong>But first of all we do need our Pong! I was sitting with a couple of AR game enthusiasts at the GDC and we were asking ourselves, &#8220;how do we create the first pong for AR?&#8221;</strong></p>
<p><strong>Was Pong a multiplayer game? Not necessarily! Did it connect to the network? No! We have to create the first dot in a long line of dots that will bring us to our destination.</strong></p>
<p><strong>Tish: </strong>You haven&#8217;t seen a Pong yet have you?</p>
<p><strong>Ori: Not yet. I mean there&#8217;s maybe a handful of games and apps out there, but I don&#8217;t think any of them is a Pong yet. Still, it&#8217;s getting closer.</strong></p>
<p><strong>Tish: </strong>Kati London is doing some very interesting work on bringing games into reality, isn&#8217;t she?</p>
<p><strong>Ori: Yes, she works with Frank Lantz at <a href="http://playareacode.com/" target="_blank">Area/Code</a>. He teaches at NYU and has designed games for the <a href="http://www.comeoutandplay.org/" target="_blank">&#8220;Come Out and Play&#8221;</a> festival here in Manhattan. And a lot of these games are actually low tech.</strong></p>
<p><strong>Tish:</strong> Yes I have a big alternate reality game blog brewing that I haven&#8217;t had time to write yet!</p>
<p><strong>Ori: &#8220;The city is the gameboard&#8221; is their slogan. It&#8217;s going to be a great playground for AR games. The city becomes a theme park. The city could become an even bigger tourist attraction &#8211; people will come to the city to be part of these games. So you&#8217;ll have thousands of people running around the city playing all sorts of games, from laser-tag style to history adventures to treasure hunts.</strong></p>
<h3><strong>Composing Reality</strong></h3>
<p><strong>Tish: </strong>So why haven&#8217;t you focused on one of these kinds of games with your company?</p>
<p><strong>Ori: We have a couple of scenarios along these lines that we&#8217;re planning for 2010-11. But first we&#8217;re focusing on what&#8217;s possible today.</strong></p>
<p><strong>Tish: </strong>And what&#8217;s stopping you from doing those kind of games today?</p>
<p><strong>Ori: Many things. The devices are not there yet, location services are not accurate enough, and ubiquitous sensors are not there yet.</strong></p>
<p><strong>Tish: </strong>You think alternate reality gaming needs more &#8220;ubiquity&#8221; than is currently available?</p>
<p><strong>Ori: Not necessarily. People are doing alternate reality games with no &#8220;ubiquity&#8221; at all. But my interest is to add the visual aspect. I believe humans are mostly driven visually.</strong></p>
<p><strong>Jane McGonigal said in a talk at GDC that AR would allow us to program reality, which is exactly how I look at it. You can recognize things in various ways &#8211; some of it with WiFi and RFID and all sorts of sensors &#8211; but visual sensing is always going to be the ultimate way to recognize things. And once you recognize things and know what they are, and can pull information about those things (or people and places) from the internet, you can program it (visually). You could program it to be fictional, like in a video game, or it could be programmed as non-fictional, like a documentary. And that allows you to do things that before were unimaginable.</strong></p>
<p><strong>Tish: </strong>But you can&#8217;t forget the visual &#8211; it is the primary connection to people&#8217;s sensory relationships.</p>
<p><strong>Ori: Yes, it&#8217;s like you go to a grocery store and you pick your vegetables, a lot of it is by sight and by touch. And what if you could also see just by looking at it that it&#8217;s from a local store, and that it&#8217;s organic?</strong></p>
<p><strong>Tish:</strong> It goes beyond overlays really?</p>
<p><strong>Ori: By the way, I don&#8217;t like the term &#8216;overlay&#8217;. I know that&#8217;s how it looks: you either overlay or superimpose, but I&#8217;m still searching for a better term. A term I prefer to use is &#8220;composing reality&#8221;. Just like painters, they use brushstrokes and colors and compose a painting. We need to take the real element and the virtual element and compose them into something new. It&#8217;s not just about slapping one on top of the other.</strong></p>
<p><strong>Tish: </strong>Yes, I think the idea of dashboards is not so appealing.</p>
<h3><strong>Pookatak Games</strong></h3>
<p><strong>Tish: </strong>Do you want to explain the evolution of your company? You have an interesting history of success with high end enterprise applications.</p>
<p><strong>Ori: Since I was a kid I wanted to invent and create things. When I discovered software, that was a really cool way of actually creating things from nothing. From thin air; and you can do it very quickly. That&#8217;s what brought me into software. But I was always looking for the intersection between technology and art. Looking for ways to bring these things together. In the early nineties virtual reality was doing it. It had the appeal of cutting edge technology that can be combined with art. But then, as we all know, it crashed. So I joined Shai Agassi&#8217;s startup (he is now doing Better Place) back in the early nineties. I was one of the first employees in his startup, which was developing multimedia products. I was leading the development of one of its flagship products. At some point we realized the technology could be great for an enterprise environment.</strong></p>
<p><strong>It was a really great experience &#8211; first going through this cycle from a very small startup and growing into this multi-billion-dollar business. I was responsible for defining and marketing SAP&#8217;s platform, which was called NetWeaver. It was just an idea when we joined SAP, and by the time I left it was a major, major business for SAP. I learned about the challenges of building a platform. No matter what purpose you&#8217;re building it for, it typically has similar rules. It&#8217;s definitely not just about the technology; the content that comes with it is really key to making a platform successful.</strong></p>
<p><strong>The third part of this platform trifecta is the community. If you don&#8217;t build a community, you won&#8217;t get the critical mass required for adoption. It may be your own platform, but it&#8217;s not necessarily the people&#8217;s platform. That experience is very key to what we&#8217;re doing today. Now a new industry is being born on the basis of a remarkable technology. But to drive adoption, first we&#8217;ll need good content. The content will be created using today&#8217;s technology, with internal tools developed to simplify the process. The next step would be to make the tools used internally available to other developers &#8211; help scale the industry, enable innovation on a larger scale. That way we have a chance to create a platform. So it isn&#8217;t really just about my company. I&#8217;m so passionate about augmented reality, I want it to become a healthy and successful industry for the next 5, 10, 15 years.</strong></p>
<p><strong>Tish: </strong>Yes, I am so ready to be liberated from sitting behind a computer screen! And I know that all this hardware is murdering the environment.</p>
<p><strong>Ori: There&#8217;s &#8216;s the book by Rolf Hainich which is called &#8220;<a id="ba8p" title="The End Of Hardware" href="http://www.theendofhardware.com/">The End Of Hardware.</a> &#8221; It&#8217;s about hardware for augmented-reality. Once you use goggles or other AR interfaces you eliminate the need for screens, laptops, etc. It&#8217;s going to be great for the environment. You have read Rainbow&#8217;s End, right? According to the book in few years there will barely be any (visible) hardware. At least it&#8217;ll have a much smaller footprint for the environment. And it&#8217;ll touch every aspect of life, everything you do. It&#8217;ll change the way you interact with the world.</strong></p>
<h3><strong>The Elusive Eyewear for Immersive AR</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/retroar-googlespost.jpg"><img class="alignnone size-medium wp-image-3469" title="retroar-googlespost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/retroar-googlespost-300x225.jpg" alt="retroar-googlespost" width="300" height="225" /></a><br />
<em>Friend of Ori&#8217;s in San Francisco wearing retro AR goggles (from <a href="http://gamesalfresco.com/2009/05/04/gdc-2009-roundup-a-tiny-spark-of-augmented-reality/" target="_blank">Games Alfresco, Ori&#8217;s roundup of GDC 2009</a>)</em></p>
<p><strong>Tish:</strong> OK, let&#8217;s talk about goggles.</p>
<p><strong>Ori: Goggles are going to happen; we want to be hands free.</strong></p>
<p><strong>It&#8217;s going to happen because it&#8217;s just a more intuitive way to use this technology. But above all it has to look cool. Because if it&#8217;s not, if it&#8217;s a big headset, then maybe a small percent of the population might use it, but most people won&#8217;t. It has to look like an accessory, like new cool eyeglasses that you just must wear.</strong></p>
<p><strong>I recently talked to a friend who runs an industrial design firm and has experience designing such glasses for companies like Microvision and Lumus. He says that when you try to bring the images so close to our eyes, there are some really hard problems to solve. Otherwise it can become really annoying and cause dizziness.</strong></p>
<p><strong>But I&#8217;m optimistic. I believe it&#8217;s going to happen 3 to 5 years from now. It&#8217;s already starting: Vuzix announced goggles that will be available this year, and some AR apps are going to take advantage of them next year. Initially only a fraction of the population will use it. And that&#8217;s going to help advance it and make it better and better. But it&#8217;s going to take time until it reaches the mass market.</strong></p>
<p><strong>Tish:</strong> In virtual worlds we have seen, I think, a lot of mistakes in terms of reinventing the wheel and producing too many proprietary versions of the same thing and not enough concerted effort on standards and open platforms that could create a vibrant ecosystem. How can augmented reality not make the same mistakes?</p>
<p><strong>Ori: There are some early AR open source efforts &#8211; ARToolKit, ARTag &#8211; but it is not a movement yet. One of the things we&#8217;re trying to do at ISMAR this year is to put together discussions around key industry issues, such as standards. Some people say it&#8217;s too early, that you have to have a de facto standard to start from. But pretty soon it&#8217;s going to be too late. Just like with virtual worlds, all of a sudden you have all these islands that don&#8217;t talk to each other. Why get to that point if we can plan to avoid it? Let&#8217;s start thinking about it right now. On the other front there are devices. There are pockets of people working on adapting devices for AR, second-guessing the hardware companies. Why not get them together with the Intels and Nvidias of the world and discuss what this device should be able to do &#8211; and then compete to make it happen?</strong></p>
<p><strong>Tish: </strong>How much luck are you having with this discussion part?</p>
<p><strong>Ori: People are very interested in doing this. We proposed these panels for ISMAR. And I&#8217;ve got some key people already on board. They have tons of input, they want to get involved. We&#8217;ll see how much we can actually get out of it.</strong></p>
<p><strong>Tish: </strong>In virtual worlds it was a while before vibrant open source communities developed. OpenSim has, I think, been the breakthrough community in this regard.</p>
<p><strong>Ori: You have to think about the elements up front. The dream job is to architect the industry. Say we agree on the required pieces. Then we could help the right companies succeed in delivering the pieces. Next, we have to collaborate so that these pieces talk to each other. And eventually these communication methods will become de facto standards that most developers will adopt.</strong></p>
<p><strong>Tish: </strong>So I&#8217;m going to put you in the role. You&#8217;ve got your dream job. You&#8217;re going to architect this community. So what are the key pieces and where would you like to see the open source communities take hold first?</p>
<p><strong>Ori: Open source will not be exclusive. It&#8217;s going to live side by side with proprietary technology.</strong></p>
<p><strong>The key pieces? You have the user at the center. And the user interacts with a lens. The lens includes both the hardware and the software. And then the lens senses and interacts with the world, which includes people, things and places. And these people-things-places emit information &#8211; about who they are, where they are, what they&#8217;re doing, etc. &#8211; which is then stored in the cloud.</strong></p>
<p><strong>And then you have the content providers &#8211; the people and companies, composers who weave AR experiences through the pieces we mentioned before. These composers need a platform that glues these pieces together. Pieces of the platform will be on the lens, in the world, and in the cloud. If you manage to remove the frictions and connect these pieces into an experience that people like, then you have a platform. What the platform does is reduce overhead and accelerate innovation.</strong></p>
<p><strong>Tish: </strong>Another problem virtual worlds faced in their development was their isolation from the world wide web. Will augmented reality avoid this plight?</p>
<p><strong>Ori: Yes, I believe the key, like you said before, is not to reinvent the wheel. The cloud is already there. Take Wikitude for example: all <a href="http://www.mobilizy.com/" target="_blank">Mobilizy</a> had to do was build a relatively simple client app, connected to Wikipedia, and all of a sudden it offered a wealth of information in your field of view.</strong></p>
<p><strong>I think we can learn a lot from web 2.0. For example, in order to have a ubiquitous experience like <a href="http://www.curiousraven.com/" target="_blank">Robert Rice</a> and others are striving for, you&#8217;ll need to 3D-map the world. Google Earth-like apps are going to help, but they are not going to be sufficient. So let&#8217;s leverage people. Google became successful in part by making people work with them. Each time you create a link from your blog to my blog, their search engines learn from it. So let&#8217;s find ways to make people create information that can be used for AR.</strong></p>
<p><object width="425" height="344" data="http://www.youtube.com/v/GTXtW3W8mzQ&amp;hl=en&amp;fs=1" type="application/x-shockwave-flash"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/GTXtW3W8mzQ&amp;hl=en&amp;fs=1" /><param name="allowfullscreen" value="true" /></object></p>
<p><em>Ori Inbar directed <a title="Wiki Mouse" href="http://www.youtube.com/watch?v=GTXtW3W8mzQ" target="_blank">Wiki Mouse</a> &#8211; a WIKI Film co-created by a swarm of movie makers around the world.</em></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/feed/</wfw:commentRss>
		<slash:comments>12</slash:comments>
		</item>
		<item>
		<title>Sensor Networks and Sustainability: &#8220;Connecting Real, Virtual, Mobile and Augmented Spaces&#8221;</title>
		<link>http://www.ugotrade.com/2009/04/19/sensor-networks-and-sustainability-connecting-real-virtual-mobile-and-augmented-reality/</link>
		<comments>http://www.ugotrade.com/2009/04/19/sensor-networks-and-sustainability-connecting-real-virtual-mobile-and-augmented-reality/#comments</comments>
		<pubDate>Sun, 19 Apr 2009 06:32:59 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[3D internet]]></category>
		<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[CurrentCost]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[HomeCamp]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[MQTT and RSMB]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[realXtend]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual HomeCamp]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[arduino]]></category>
		<category><![CDATA[Carbon Goggles]]></category>
		<category><![CDATA[distributed sustainability]]></category>
		<category><![CDATA[home energy management]]></category>
		<category><![CDATA[open data]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[sensor networks and sustainability]]></category>
		<category><![CDATA[SHASPA]]></category>
		<category><![CDATA[the internet of things]]></category>
		<category><![CDATA[TweetaWatt]]></category>
		<category><![CDATA[Virtual Worlds]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3381</guid>
		<description><![CDATA[Today, I did a presentation, on connecting real, virtual, mobile, and augmented spaces to support sustainability, for Earth Week SL, with Dave Pentecost and Jim Purbrick, who presented on Carbon Goggles. Dave and I focused on sensor networks, open data, Pachube, OpenSim, and sustainability from the perspective of &#8220;hack local, think global.&#8221; Dave and I will [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-21.png"><img class="alignnone size-medium wp-image-3382" title="picture-21" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-21-300x225.png" alt="picture-21" width="300" height="225" /></a></p>
<p>Today, I did a presentation, on <a href="http://docs.google.com/Presentation?id=dhj5mk2g_214g48q37hj" target="_blank">connecting real, virtual, mobile, and augmented spaces to support sustainability,</a> for <a href="http://slearthweek.wordpress.com/2009/04/10/earth-week-press-release-see-schedule-also/" target="_blank">Earth Week SL</a>, with <a href="http://www.gomaya.com/glyph/" target="_blank">Dave Pentecost</a> and <a href="http://jimpurbrick.com/" target="_blank">Jim Purbrick</a>, who presented on <a href="http://carbongoggles.org/" target="_blank">Carbon Goggles</a>.</p>
<p>Dave and I focused on sensor networks, open data, <a href="http://www.pachube.com/" target="_blank">Pachube</a>, <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a>, and sustainability from the perspective of &#8220;hack local, think global.&#8221; Dave and I will be picking up on some of these themes of sensor networks and sustainability next week in our presentation with <a href="http://www.darleon.com/" target="_blank">Dimitri Darras</a> at ITP, NYU, April 24th, 6.30 pm to 8 pm &#8211; <a href="http://itp.nyu.edu/sigs/news/special-event-open-sim/" target="_blank">details here</a>. If you are in New York City, I hope to see you there.</p>
<p>We got some interesting insights into augmented reality from <a href="http://jimpurbrick.com/" target="_blank">Jim Purbrick</a>, whose <a href="http://carbongoggles.org/" target="_blank">Carbon Goggles</a> project prototypes how we can use augmented reality to read carbon identity and to combine well organized, verified data from <a href="http://www.amee.com/" target="_blank">AMEE</a> &#8211; a neutral aggregation platform to measure the &#8220;carbon footprint&#8221; of everything on earth &#8211; with crowdsourced tagging and linking.</p>
<h3>Shaspa &#8211; &#8220;the sensor network system that has it all&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-22.png"><img class="alignnone size-medium wp-image-3391" title="picture-22" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-22-300x224.png" alt="picture-22" width="300" height="224" /></a></p>
<p>We also discussed the recently launched <a href="http://www.shaspa.com/" target="_blank">Shaspa</a>. Shaspa&#8217;s energy management packages connect spaces &#8211; real, virtual, mobile and augmented. Shaspa has been blogged by <a href="http://www.maxping.org/business/real-life/virtual-management-of-energy-consumption-in-the-home.aspx/" target="_blank">Maxping</a> and <a href="http://www.virtualworldsnews.com/2009/04/shaspa-launches-home-energy-organizer-on-opensim.html" target="_blank">Virtual World News</a>, so you can read all about it, but the Shaspa device kit won&#8217;t be available until next week. Some key features of the Home Energy package are listed on the slide above. However, this evening Dave Pentecost and I got a sneak preview of both the Shaspa community and enterprise hardware and software packages from Shaspa founder Oliver Goh. We were pretty impressed.</p>
<p><strong>Dave:</strong> &#8220;<strong>It&#8217;s the ultimate hackable device for energy management!&#8221;</strong></p>
<p><strong>Oliver:</strong> <strong>&#8220;Bring us any sensor device &#8211; with documentation, and within three days we will put a driver into Shaspa.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/daveandoliverpost.jpg"><img class="alignnone size-medium wp-image-3392" title="daveandoliverpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/daveandoliverpost-300x178.jpg" alt="daveandoliverpost" width="300" height="178" /></a></p>
<p>Oliver is on the right and Dave on the left in the picture above. The picture below shows Shaspa in OpenSim. Oliver and I will be attending the <a href="http://www.3dtlc.com/"><span style="color: #810081;">3D Training, Learning and Collaboration</span></a> Conference in Washington, DC, next week.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-23.png"><img class="alignnone size-medium wp-image-3412" title="picture-23" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-23-300x208.png" alt="picture-23" width="300" height="208" /></a></p>
<h3>Links</h3>
<p>Here are some of the links that came up in the presentation as many people asked for them to be published. Dave also has them on <a href="http://www.gomaya.com/glyph/archives/002520.html#002520" target="_blank">his blog</a>.</p>
<p>SLIDES on GOOGLE DOCS:<br />
<a title="Earth Week SL Presentation, April 18th, 2009 - Google Docs" href="http://docs.google.com/Presentation?id=dhj5mk2g_214g48q37hj">Earth Week SL Presentation, April 18th, 2009 &#8211; Google Docs</a></p>
<p><a href="http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">Pachube, sensor networks</a></p>
<p><a href="http://www.gomaya.com/glyph" target="_blank">Dave&#8217;s blog covering Maya archaeology, jungle ecology, and technology</a></p>
<p><a href="http://www.gomaya.com/glyph/archives/001914.html" target="_blank">Maya Frontier, Usumacinta River videos</a></p>
<p><a href="http://en.wikipedia.org/wiki/Collapse_(book)" target="_blank">Collapse</a></p>
<p><a href="microcontrollers http://arduino.cc/" target="_blank">Arduino</a></p>
<p><a href="http://community.pachube.com/tutorials" target="_blank">Pachube &#8211; tutorials</a></p>
<p><a href="http://apps.pachube.com/" target="_blank">Pachube Apps </a>-</p>
<p><a href="http://www.pachube.com/feeds/1284" target="_blank">Arduino-SL-Pachube data site</a></p>
<p><a href="http://www.pachube.com/feeds/1505" target="_blank">SL to Pachube site</a></p>
<p><a href="http://www.zachhoeken.com/connecting-to-the-world" target="_blank">Dave&#8217;s Danger Shield &#8211; Pachube  tutorial</a></p>
<p><a href="http://www.ladyada.net/make/tweetawatt/" target="_blank">TweetaWatt site (LadyAda)</a></p>
<p><a href="http://www.gomaya.com/glyph/archives/002505.html" target="_blank">Dave&#8217;s post on TweetaWatt to Opensim/SL</a></p>
<p><a href="http://peterquirk.wordpress.com/2008/12/22/tutorial-using-the-streamlined-tool-chain-for-importing-sketchup-models-into-realxtend-04/" target="_blank">Peter Quirk&#8217;s post on Importing Sketchup into RealXtend</a></p>
<p><a href="http://opensimulator.org/wiki/Main_Page" target="_blank">Opensim</a></p>
<p><a href="http://www.realxtend.org/" target="_blank">RealXtend</a></p>
<p><a href="http://reactiongrid.com/" target="_blank">ReactionGrid</a></p>
<p><a href="http://homecamp.pbwiki.com/" target="_blank">homecamp</a></p>
<p><a href="http://www.cminion.com/wordpress/" target="_blank">cminion -wind turbines in OpenSim</a></p>
<p><a href="http://mikethebee.mevio.com/" target="_blank">MiketheBee</a></p>
<p><a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">Is it &#8220;OMG finally&#8221; for Augmented Reality?</a></p>
<p><a href="http://www.ugotrade.com/2008/12/15/smart-planetinterview-with-andy-stanford-clark/" target="_blank">Smart Planet: Interview with Andy Stanford-Clark</a></p>
<p><a href="http://www.orangecone.com/" target="_blank">Orange Cone &#8211; Information Shadows and Things as Services</a></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/04/19/sensor-networks-and-sustainability-connecting-real-virtual-mobile-and-augmented-reality/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Dematerializing the World, Shadows, Subscriptions and Things as Services: Talking With Mike Kuniavsky at ETech 2009</title>
		<link>http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/</link>
		<comments>http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/#comments</comments>
		<pubDate>Thu, 19 Mar 2009 03:16:11 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[#etech]]></category>
		<category><![CDATA[Aaron Straup Cope]]></category>
		<category><![CDATA[Adam Greenfield]]></category>
		<category><![CDATA[Ambient Orb]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[BlinkM]]></category>
		<category><![CDATA[Bocci at ETech]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[data shadows]]></category>
		<category><![CDATA[dematerializing products]]></category>
		<category><![CDATA[dematerializing the world]]></category>
		<category><![CDATA[dressing the shadows]]></category>
		<category><![CDATA[ecology of services]]></category>
		<category><![CDATA[econolypse]]></category>
		<category><![CDATA[embodied energy data]]></category>
		<category><![CDATA[energy identity]]></category>
		<category><![CDATA[Etech 2009]]></category>
		<category><![CDATA[Gavin Starks]]></category>
		<category><![CDATA[green technology]]></category>
		<category><![CDATA[information shadows]]></category>
		<category><![CDATA[item level identification]]></category>
		<category><![CDATA[LilyPad]]></category>
		<category><![CDATA[LoveM]]></category>
		<category><![CDATA[Maker culture]]></category>
		<category><![CDATA[Makershed]]></category>
		<category><![CDATA[Mike Kuniavsky]]></category>
		<category><![CDATA[Moore's Law]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[Path Intelligence]]></category>
		<category><![CDATA[RFID tracking]]></category>
		<category><![CDATA[servicization of things]]></category>
		<category><![CDATA[smart LED]]></category>
		<category><![CDATA[spimes]]></category>
		<category><![CDATA[Stamen Design]]></category>
		<category><![CDATA[Steven Levy]]></category>
		<category><![CDATA[sustainable design]]></category>
		<category><![CDATA[the dotted line world]]></category>
		<category><![CDATA[the internet of things]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[Thinglink project]]></category>
		<category><![CDATA[ThingM]]></category>
		<category><![CDATA[things as services]]></category>
		<category><![CDATA[Tim O'Reilly]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubicomp hardware]]></category>
		<category><![CDATA[urban green space]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[Wattzon]]></category>
		<category><![CDATA[WineM]]></category>
		<category><![CDATA[wireless networks]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3191</guid>
		<description><![CDATA[ETech 2009 was all about making interesting and deeply socially effective technological interventions in the world. And dematerializing products into services seemed to be one of the most powerful concepts elaborated there to accomplish this. Mike Kuniavsky in his presentation, &#8220;The dotted-line world, shadows, services, subscriptions,&#8221; noted: &#8220;There&#8217;s great opportunity here to create an ecology [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/bicycleriderdatashadows.jpg"><img class="alignnone size-medium wp-image-3192" title="bicycleriderdatashadows" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/bicycleriderdatashadows-300x230.jpg" alt="bicycleriderdatashadows" width="300" height="230" /></a></p>
<p><a href="http://en.oreilly.com/et2009" target="_blank">ETech 2009</a> was all about making interesting and deeply socially effective technological interventions in the world. And dematerializing products into services seemed to be one of the most powerful concepts elaborated there to accomplish this.Â  Mike Kuniavsky in his presentation, <a href="http://en.oreilly.com/et2009/public/schedule/speaker/1947" target="_blank"><strong>&#8220;The dotted-line world, shadows, services, subscriptions,&#8221;</strong></a> noted:</p>
<p><strong>&#8220;There&#8217;s great opportunity here to create an ecology of services embodied as robust, valuable, exciting new tools with focused, limited functionality, tied together with item-level identification and wireless networks. Whole classes of things that can enrich our lives and bank accounts are now possible thanks to the way ubiquitous computing interweaves services and devices at an intimate, everyday level&#8230;.<br />
</strong><br />
<strong>We now have the technology to create whole new classes of tools for living in a way that is more useful and fun for individuals, more sustainable for society, and more profitable for companies. That way is to recognize the connectedness of all everyday things, and to build on it, rather than ignoring it.&#8221;</strong></p>
<p>The picture opening this post is from Mike&#8217;s presentation (see <a id="zuqd" title="Mike's blog" href="http://www.orangecone.com/archives/2009/03/etech_2009_the.html">Mike&#8217;s blog</a> for <a href="http://www.orangecone.com/tm_etech_2009_0.1.pdf">a PDF with all of the images and notes</a>, and the original presentation description).</p>
<p>An ecosystem using item-level identification, wireless networking, and data visualization is evolving that links everyday objects to information about those objects &#8211; what Kuniavsky calls their &#8220;information shadow.&#8221; Because every object can be uniquely identified and that identification can be associated with a cluster of metadata, it &#8220;exists simultaneously in the physical world and in the world of data.&#8221;</p>
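<p>The mechanics of an information shadow are easy to sketch: a resolver maps a unique identifier (an RFID EPC, an ISBN, an Amazon ASIN) to whatever cluster of metadata has accreted around the object. Here is a minimal sketch, with every identifier and field invented purely for illustration:</p>

```python
# Toy "information shadow" resolver: a unique identifier links a physical
# object to its metadata cluster. All identifiers and values are made up.

SHADOWS = {
    "urn:epc:id:sgtin:0614141.107346.2017": {   # hypothetical RFID tag ID
        "kind": "laptop",
        "embodied_energy_kwh": 1200,            # illustrative figure only
        "owner_history": ["factory", "retailer", "tish"],
    },
    "isbn:9780000000000": {                     # hypothetical book
        "kind": "book",
        "pages": 312,
    },
}

def information_shadow(identifier: str) -> dict:
    """Return the metadata cluster for an identified object (empty if unknown)."""
    return SHADOWS.get(identifier, {})

print(information_shadow("isbn:9780000000000")["kind"])
```

<p>The point of the metaphor is exactly this lookup: once the identifier travels with the object, anything that can read it can reach the shadow.</p>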
<p>Mike mentioned Tom Coates&#8217; <a href="http://www.plasticbag.org/archives/2005/04/the_age_of_pointatthings/" target="_blank">&#8220;Age of Point-At Things&#8221;</a> blog post to say that although Tom was talking about TV listings data, the same ideas can be applied to anything that&#8217;s uniquely identified. Also, Mike noted, he often references Ulla-Maaria Mutanen&#8217;s <a href="http://aula.org/people/ulla/thinglink_white_paper.pdf" target="_blank">Thinglink project</a> and her observation about Amazon ASINs to explain this concept which is, of course, closely related to <a href="http://en.wikipedia.org/wiki/Internet_of_things" target="_blank">the internet of things.</a></p>
<p>Until recently, Mike explained, accessing the information shadow was difficult. The world of objects and the world of information shadows were separated by the difficulty of getting at the information. But now, increasingly:</p>
<p><strong>&#8220;we can instantaneously see the world of information shadows as we&#8217;re interacting with the world of objects.&#8221; </strong></p>
<p>Mike is not only conceptualizing these ideas; his company with partner Tod E. Kurt, <a id="zh2z" title="Thingm" href="http://thingm.com/" target="_blank">ThingM</a>, is producing hardware that will enable this vision.</p>
<p><strong>&#8220;We&#8217;re a ubiquitous computing consumer electronics company, which sounds fancy, but we&#8217;re pretty small. We design, manufacture and sell ubicomp hardware.&#8221;</strong></p>
<p>ThingM may be small now but they are at the leading edge of a huge transformation. When asked, &#8220;How do you see the near-future city working with ubiquitous computing&#8230;&#8221; Adam Greenfield put it succinctly to Lalie Nicolas for <a href="http://www.lehub-agence.com/site.php">Le Hub</a>&#8217;s <a href="http://www.ludigo.net/index.php?rub=0">Ludigo</a> project:</p>
<p><strong>&#8220;I would go so far as to say that there will be no area or domain of urban activity that is not somehow disassembled and recomposed as a digital, networked, interactive process over the next few years. Objects, buildings and spaces will be reconceived as network resources; cars, subways and bicycles will be reimagined as on-demand mobility services; human communities are already well on the way to becoming self-conscious &#8216;social networks.&#8217;&#8221;</strong></p>
<p>For the rest of this short interview <a href="http://speedbird.wordpress.com/2009/03/16/ludigo-interview/" target="_blank">see Adam&#8217;s post</a>, and for my recent long interview with Adam <a href="http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/" target="_blank">see here</a>.</p>
<h3>&#8220;&#8216;Almost everything in this room is in a landfill, but just doesn&#8217;t know it yet.&#8217; This needs to change&#8221;</h3>
<p>(Tim O&#8217;Reilly responding on Twitter to a quote from <a href="http://twitter.com/AlexSteffen" target="_blank">@AlexSteffen</a>&#8216;s talk)</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/picture-5.png"><img class="alignnone size-medium wp-image-3194" title="picture-5" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/picture-5-300x241.png" alt="picture-5" width="300" height="241" /></a></p>
<p><em><span class="caps">Chart above from Jeremy Faludi&#8217;s presentation</span> <a class="attach" href="http://assets.en.oreilly.com/1/event/20/Priorities%20for%20a%20Greener%20World_%20If%20You%20Could%20Design%20Anything,%20What%20Should%20You%20Do_%20Presentation.pdf">Priorities for a Greener World: If You Could Design Anything, What Should You Do? Presentation</a> <span class="en_filetype">[PDF]</span></em> <span class="caps"> </span></p>
<p>Interconnecting themes at ETech, <a id="nn8n" title="Inhabitat notes" href="http://www.inhabitat.com/2009/03/13/the-best-of-green-at-etech-2009/" target="_blank">Inhabitat noted,</a> &#8220;formed bridges between luminary speakers from a variety of backgrounds, as <a href="http://www.inhabitat.com/2006/10/26/worldchanging-the-book-is-out/">Alex Steffen</a>, <a href="http://www.inhabitat.com/2008/02/20/mary-lou-jepsen-at-greener-gadgets/">Mary Lou Jepsen</a>, <a href="http://www.faludidesign.com/">Jeremy Faludi</a>, and others reinforced the need to create repairable, open-source, <a href="http://www.inhabitat.com/2009/03/02/greener-gadgets-2009/">long lasting products</a>, reveal energy usage, and pursue forward-thinking strategies for a greener tomorrow.&#8221; But <a href="http://www.faludidesign.com/" target="_blank">Jeremy Faludi</a>, a sustainable design strategist and researcher, put the design challenge most directly:</p>
<p><span class="caps"> <strong>&#8220;</strong></span><strong>If you really care you need to dematerialize, turn products into services&#8230;&#8221; </strong></p>
<p>The idea of data shadows has been a part of the conversation in ubiquitous computing for a long time (since Marshall McLuhan perhaps?). But, at ETech 2009, it seemed to have come of age.</p>
<p>It came up again and again, in the need to dematerialize stuff that seemed to be part of every conversation, from Faludi&#8217;s comments on the amount of toxic mining waste created in the manufacture of one laptop, to Raffi Krikorian&#8217;s presentation of <a href="http://www.wattzon.com/" target="_blank">Wattzon&#8217;s</a> Embodied Energy Database (<a href="http://www.slideshare.net/raffikrikorian/wattzon-etech-2009" target="_blank">see slides here</a>), and <a id="lnyt" title="AMEE" href="http://www.amee.com/" target="_blank">AMEE</a> founder Gavin Starks&#8217; presentation <a name="session7799"></a>(also see <a href="http://www.amee.com/blog/2009/03/19/energy-identity/">Gavin&#8217;s blog on Energy Identity here</a>).</p>
<p>The path to dematerializing the burdensome stuff that spells doom for our environment was not only presented conceptually and in creative solutions to specific problems (e.g. ThingM) at ETech. There were also hands-on workshops (see <a href="http://www.ugotrade.com/2009/03/10/making-a-rfid-to-web-interface-and-lilypad-electronic-fashion-at-etech-2009/" target="_blank">my post on the two I attended</a>) from Maker gurus, who were also often to be found in the <a href="http://en.oreilly.com/et2009/public/schedule/detail/7281" target="_blank">Makershed</a>, providing opportunities to experiment with and prototype your own solutions (my hat is off to <a href="http://en.oreilly.com/et2009/public/content/about" target="_blank">Brady Forrest and the ETech committee</a> for pulling all this together).</p>
<h3>Connecting the dots&#8230;</h3>
<p>In the wake of an &#8220;econolypse&#8221; (neologism pulled from Bruce Sterling&#8217;s twitter feed &#8211; @bruces) and on the eve of environmental catastrophe, we may well have, as Adam Greenfield <a href="http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/" target="_blank">said to me here</a>, &#8220;seriously screwed the pooch.&#8221;</p>
<p>But that does not mean we should not do everything we can to try to save the day.</p>
<p>And in the serendipity peculiar to a conference, I was talking in the corridor to Gavin Starks of <a id="lnyt" title="AMEE" href="http://www.amee.com/" target="_blank">AMEE</a>, who is working to create &#8220;the world&#8217;s energy meter&#8221; (on the right in the picture below), Tony Mak from <a id="hc7p" title="O'Reilly AlphaTech Ventures" href="http://www.oatv.com/" target="_blank">O&#8217;Reilly AlphaTech Ventures</a> (to Gavin&#8217;s right), and Usman Haque of <a id="vp25" title="Pachube" href="http://www.pachube.com/">Pachube</a> (on Tony&#8217;s right; <a id="ihta" title="-see my earlier interview here" href="../../2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">see my earlier interview with Usman here</a>), when Tim O&#8217;Reilly (far left) came by with Steven Levy of Wired (to Tim&#8217;s left). More on <a id="vp25" title="Pachube" href="http://www.pachube.com/">Pachube</a>, <a id="vwro" title="WattzOn" href="http://www.wattzon.com/" target="_blank">WattzOn</a>, <a id="lnyt" title="AMEE" href="http://www.amee.com/" target="_blank">AMEE</a> and <a href="http://www.pathintelligence.com/" target="_blank">Path Intelligence</a>, and how these projects may connect, in an upcoming post. Path Intelligence, like AMEE, is funded by the O&#8217;Reilly venture group.</p>
<p>And no sooner had I snapped the photo below, Mike Kuniavsky arrived.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_170dxf8g9hg_b.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/timoreillytalkingtogavinstarkspost2.jpg"><img class="alignnone size-medium wp-image-3276" title="timoreillytalkingtogavinstarkspost2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/timoreillytalkingtogavinstarkspost2-300x180.jpg" alt="timoreillytalkingtogavinstarkspost2" width="300" height="180" /></a></p>
<p>It seemed such an historic meeting, I asked everyone if I could switch my recorder on.</p>
<p>Tim had just been explaining how the concept of &#8220;data shadows&#8221; fit with something he&#8217;d learned from Gavin in a breakfast conversation. Gavin was talking about what AMEE is learning from smart meter data collected from 1.2 million homes in the UK. The energy signature from each device is so unique that you can tell not only the make and model of major appliances in each home, but also their age. Gavin is worried about the privacy implications (as we all should be), but nonetheless, you can see the implications for business. Tim framed a vital question:<strong> What new businesses are growing in the data shadows?</strong></p>
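<p>Gavin&#8217;s point about appliance signatures can be restated as a toy version of load disaggregation: match the step change the meter observes against a library of known appliance power signatures. The appliance names and wattages below are invented, and real disaggregation uses far richer features than a single power delta:</p>

```python
# Toy load disaggregation: guess which appliance switched on from the jump
# in whole-home power draw. Signature library and wattages are illustrative.
from typing import Optional

SIGNATURES = {
    "fridge (ACME CoolMax, 2005)": 120.0,   # steady-state watts, made up
    "kettle (BrewCo K2)": 2200.0,
    "plasma TV (ViewCorp P42)": 310.0,
}

def identify(delta_watts: float, tolerance: float = 25.0) -> Optional[str]:
    """Return the appliance whose signature best matches the observed step."""
    best, best_err = None, tolerance
    for name, watts in SIGNATURES.items():
        err = abs(delta_watts - watts)
        if err < best_err:
            best, best_err = name, err
    return best

print(identify(2190.0))   # a kettle-sized step in the meter reading
```

<p>With millions of metered homes, a signature library like this can grow fine-grained enough to separate models and vintages, which is exactly where the privacy worry comes in.</p>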
<p><strong>Tim O&#8217;Reilly: </strong>Here&#8217;s the other member of this conversation I was trying to broker. This is Mike Kuniavsky, Gavin Starks. I was talking in your session about the point he made in his session&#8230;Steve Levy from Wired&#8230;</p>
<p><strong>Tish Shute:</strong> Sorry, could you recap the point?</p>
<p><strong>Tim O&#8217;Reilly:</strong> &#8230;just the idea about data shadows, I just think it&#8217;s just such a powerful metaphor that every .. and you went on to explain that potential for subscriptions and so on&#8230;</p>
<p><strong>Mike Kuniavsky:</strong> Yes well what I was saying was that essentially every object that has an identifier associated with it, and there are a number of different kinds of identifiers out there, simultaneously lives in kind of the world of physical objects, and of the world of data. And the identifier links those two.</p>
<p><strong>Steven Levy:</strong> Just like Sterling&#8217;s Spimes?</p>
<p><strong>Mike Kuniavsky:</strong> A spime, it&#8217;s related obviously because we&#8217;re talking about RFIDs, but I&#8217;m really specifically talking about the fact that there is this information shadow that exists out there.</p>
<p><strong>Tim O&#8217;Reilly:</strong> I think we&#8217;ll find it lots of different ways, that was my excitement in connecting these points.</p>
<p><strong>Gavin Starks:</strong> My take on it is energy identity &#8211; that everything and everybody ends up with an energy identity that is the embodiment of their physical consumption.</p>
<p><strong>Mike Kuniavsky:</strong> And I would say, not to argue, I would say that energy comes as part of my information shadow. Like I carry this baggage of data along with me. And whatever data is potentially appropriate can be glommed on to that. And then that can then be carried to something else that can manipulate it. And also that&#8217;s true about every object. And now that we have RFID tracking of individual objects, it&#8217;s true about literally every object, not just every class of objects.</p>
<p><strong>Usman Haque:</strong> There&#8217;s a really beautiful story by Julio Cortazar where he uses the phrase &#8220;dressing the shadows&#8221; and it&#8217;s about the idea that the shadow is not this sort of flat black thing but we can sort of put things onto it and slowly sort of grow it into something. It&#8217;s actually sort of more of a love story. But it&#8217;s a really interesting idea that the shadow&#8217;s not just the absence of but that it&#8217;s kind of the important part of it [for more see Usman&#8217;s paper, <a href="http://www.haque.co.uk/papers/dressingshadowsofarch.pdf" target="_blank">Dressing the shadows of architecture</a> &#8211; which is also available in Spanish <a href="http://www.tintank.es/articulo_vestirsombras.html" target="_blank">here</a>.]</p>
<p><strong>Mike Kuniavsky:</strong> It&#8217;s the Peter Pan Barrie [JM Barrie, the author] thing. When Peter Pan&#8217;s shadow gets cut off and Wendy has to resew it back on. Potentially what all of these item level identification technologies are doing is they&#8217;re sewing the shadow back to the objects that they came from. And so you&#8217;re getting the information.</p>
<p><strong>Gavin Starks:</strong> It&#8217;s like the two and a half kilo Macbook which has a 460 kilo carbon shadow.</p>
<p><strong>Tim O&#8217;Reilly:</strong> It&#8217;s just a very powerful concept. That&#8217;s all I&#8217;m saying. I think it&#8217;s a metaphor that as soon as you have it, it makes it very easy to understand and to see a whole lot of things. So I&#8217;m very fond of it. Already it&#8217;s my new favorite toy. And it is great running into you all in the same place in the hall so I could introduce you all.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_173c5f8nvcm_b.png"><img class="alignnone size-medium wp-image-3203" title="dhj5mk2g_173c5f8nvcm_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_173c5f8nvcm_b-300x231.png" alt="dhj5mk2g_173c5f8nvcm_b" width="300" height="231" /></a></p>
<p><em>Image from Mike&#8217;s ETech presentation</em><br />
<strong><br />
&#8220;To create these new experiences we need to think about the design of both digital devices and infrastructures differently. We need to step back from standalone tools and think about what service those tools deliver, then construct new avatars that fit better into people&#8217;s everyday experiences. We also need to step back from our infrastructural products and think about what services they enable. The electrical grid did not first start out as an abstract electrical grid in South Manhattan; it started as a way to deliver electric light. The electric bulb was not a standalone device, it was an avatar of Edison&#8217;s light delivery service and it was, first and foremost, designed to solve a specific problem for a large consumer market. Only then did the infrastructure it created expand to solve other kinds of problems.&#8221; Mike Kuniavsky&#8217;s ETech presentation, 2009</strong></p>
<h3><strong>Talking With Mike Kuniavsky</strong></h3>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/elizabethgoodmanandmikekuniavskyballpost.jpg"><img class="alignnone size-medium wp-image-3280" title="elizabethgoodmanandmikekuniavskyballpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/elizabethgoodmanandmikekuniavskyballpost-300x199.jpg" alt="elizabethgoodmanandmikekuniavskyballpost" width="300" height="199" /></a><br />
</strong></p>
<p><em>Mike Kuniavsky and Elizabeth Goodman playing Bocci after ETech</em></p>
<p>The conversation with Mike began with a discussion about how to encourage participation. Usman Haque was present but was soon called away to lunch. The question of encouraging participation in deep social change was another recurring theme at ETech. And, as Mike noted in his presentation:</p>
<p><strong>&#8220;The design of these avatars [Kuniavsky's term for objects that are closely tied to services] is quite challenging. They can&#8217;t really be as personalized. You just can&#8217;t pimp your City Carshare car. You only get one kind of bike in the Call a Bike program. That&#8217;s an important problem to solve. We love to have our stuff be ours. However, the same technologies can bring that, too. Our key fob can bring our whole world with us, and whether we sit down in a minivan, on a chair or in a plane we can bring our world with us. The thing can become our preferred colors, with our favorite music, and a picture of our loved ones on the dashboard, desk, or wall. Is it the same thing as owning it and leaving your stuff in it? No, but it&#8217;s closer.&#8221;<br />
</strong></p>
<p>Moreover:</p>
<p><strong>.. objects have to change at a fundamental level. They have to be designed differently and they have to be described and discussed differently. The &#8220;owner&#8217;s&#8221; relationship to the object changes. The very idea of ownership changes. The solid object grows a dotted line that is filled-in as-needed, when-needed, and with the features that are needed. This is not the same thing as renting or co-ownership; its anytime/anywhere nature, enabled by the underlying technology, makes these new service objects fundamentally new (Kuniavsky&#8217;s presentation at ETech).<br />
</strong><br />
Elizabeth Goodman&#8217;s brilliant presentation at ETech, <a id="eag1" title="Designing for Urban Green Space" href="http://en.oreilly.com/et2009/public/schedule/detail/5562" target="_blank">Designing for Urban Green Space,</a> discussed a study of urban green space volunteership as a way &#8220;to rethink urban green space as a spectrum of places with varying types of ownership and management.&#8221; Mike began the conversation by citing Elizabeth&#8217;s work.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_178gdn22ngf_b.png"><img class="alignnone size-medium wp-image-3208" title="dhj5mk2g_178gdn22ngf_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_178gdn22ngf_b-300x219.png" alt="dhj5mk2g_178gdn22ngf_b" width="300" height="219" /></a></p>
<p><em>Picture from <a href="http://en.oreilly.com/et2009/public/schedule/detail/5562" target="_blank">Elizabeth Goodman&#8217;s presentation</a>.</em></p>
<p><strong>Mike Kuniavsky:</strong> Well what I was saying [re participation], citing my wife Elizabeth Goodman&#8217;s work &#8230;She did all this work at Intel on people&#8217;s health practices and the issues [around] instrumenting people&#8217;s lives in order to produce behavioral change and the problems with that.</p>
<p>The question is how do you sense to encourage, rather than sense to punish, when all the indicators are going down &#8211; economic indicators, ecological indicators. They&#8217;re just not going to be going up perceptibly for a very long time. You don&#8217;t want to discourage people. The way to create behavioral change is not to essentially keep punishing people for the past. And so I don&#8217;t know if I have a good answer for this, but there is this entire kind of thinking about how do you encourage people to keep doing things even when the actual easy-to-measure indicators, the first order indicators, are all pointing down. It&#8217;s the classic thing about how do you get people to stay fit even as they&#8217;re aging. They are never going to be as healthy as they were when they were 50 again.</p>
<p><strong>Usman Haque:</strong> I think you really hit on it when you said it&#8217;s not about the first order but about the second order measurements, because that is exactly the kind of thing you want to change. It&#8217;s not that you want to stop it from falling, because sometimes that&#8217;s impossible; you want to slow its rate.</p>
<p><strong>Mike Kuniavsky:</strong> Exactly. You want to slow the rate because at the bottom maybe you can start looking at the first order indicator. But you can&#8217;t look at the first order indicator while things are going to hell. And so you can just say it&#8217;s less bad than it would have been. And the challenge is figuring out how to take the first order sensory data and turn it into this kind of second order data that might be helpful for actually creating behavioral change, because ultimately that&#8217;s what all of this is talking about.</p>
<p><strong>Tish Shute: </strong>This discussion about behavioral change wasn&#8217;t elaborated in your presentation was it?</p>
<p><strong>MK:</strong> I presented on essentially the combination of being able to identify individual objects and the idea of providing services as a way of creating things&#8230; the servicization of things. Turning things into services is greatly accelerated by network technologies and the ability to track things, and this leads to the potential of having fundamentally different relationships to the devices in our lives and to things like ownership.</p>
<p>Like we now have the technology to create objects that are essentially representatives of services &#8211; things like City Car Share. What you own is not a thing but a possibility space of a thing. This fundamentally changes the design challenges. I am pretty convinced that this is how we should be using a lot of these technologies: to shift objects from ownership models to service models. We can do that, but there are significant challenges with it. What is happening is that we have had the technology to do this for a while, but we haven&#8217;t been thinking about how to design these services. We haven&#8217;t been thinking about how to design what I call the avatars of these services &#8211; the physical objects that are the manifestation of them, like an ATM is the avatar of a banking service. It is useless without the banking service it is a representative of, essentially.</p>
<p>If you imagine the service as an abstract idea, the ATM pokes out of [the service and into] a specific thing, but so do the bank tellers and so does the web site.</p>
<p><strong>TS:</strong> It seems like this is a major shift in how we conceptualize our economy, culture and even government &#8211; what are the avatars of government?</p>
<p><strong>MK:</strong> I think change in government is very hard. The example I have been using is the light bulb. Start by solving a problem. The interesting thing about the light bulb is that the key invention was not an incandescent filament that glowed in a vacuum &#8211; that had been invented long before &#8211; it was the system the bulb was part of. And it was part of a much larger design project that was created specifically for delivering the service of light to lower Manhattan in 1884.</p>
<p><strong>TS:</strong> The grid hasn&#8217;t changed since Edison, right? One of the earlier speakers mentioned this &#8211; that if Edison came back now he would say, &#8220;the grid is where I left it.&#8221;</p>
<p><strong>MK:</strong> My point is that he wasn&#8217;t creating an abstract electrical grid; he was solving a problem by creating a system that had as its avatar &#8211; as its end point &#8211; this bulb. But the bulb is actually not the system, it is merely the end point.</p>
<p>As we are thinking about the capabilities of these technologies, my argument is that we have to be designing service systems along that model.</p>
<p><strong>TS:</strong> Web services?</p>
<p><strong>MK:</strong> Not just designing Web services. I am a big fan of thinking about digital tools outside the context of general purpose computing devices. I consider laptops general purpose and I consider phones general purpose. Yes, originally the handset started out just as a phone, but now it is essentially a computer terminal. And now you have netbooks, and netbooks are essentially the halfway point between a phone and a laptop, because now you are going to get netbooks with 3G cards. Essentially a netbook is already a big phone. Those are general purpose computing platforms, and I am not very interested in those.</p>
<p><strong>TS:</strong> What motivated you to make that move in your thinking?</p>
<p><strong>MK:</strong> I thought it was a very narrow kind of thinking. I thought that the cost of computing represented by the technologies in the middle of the Moore&#8217;s Law curve &#8211; rather than on the right &#8211; had dropped so far that it seemed we could be making all kinds of devices that have information processing as part of what they are without being general purpose computing platforms.</p>
<p>The iPod is a good example. The iPod is a computer and you can run Linux on it. It has more computing power than any computer did in the seventies. But who cares? The point of it is that you are using that power to solve a problem. You are applying the capabilities of information processing to solve specific problems. I have actually worked on infrastructural stuff. Twenty years ago I was associated with some early distributed computing stuff, then I did ten years of web site design stuff, but I am essentially done with that. Because what I am really interested in is creating new kinds of tools, new classes of tools that use information processing as the core of what makes them interesting and valuable.</p>
<p><strong>TS:</strong> Do these tools have to leverage networks to be useful?</p>
<p><strong>MK: </strong>No, I think it is possible to use information processing at a small scale without having to be online all the time. That is another one of the big toolboxes. Having a network creates a deep shift in the capabilities of what you can do. But the network can be really, really low bandwidth and simple and still be useful. You get these things that wake up once a month and spit out a packet with their telemetry. They are incredibly valuable, but they are not what you would normally consider to be an always-on device. The network changes what they can do very fundamentally, but it is not something that requires there to be blanket wifi.</p>
<p>You can have devices out there &#8211; this is sort of a cliched example &#8211; like the guy riding a bicycle around with a wifi access point in a rural area where you have no infrastructure to do it otherwise. You have a little computer in every area, and as he rides by they will exchange some data.</p>
<p><strong>You don&#8217;t have to have fibre at the curb to make really, really interesting, deeply socially effective technological interventions in the world.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/aaaroncopetodekurtmikekuniavskypost.jpg"><img class="alignnone size-medium wp-image-3210" title="aaaroncopetodekurtmikekuniavskypost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/aaaroncopetodekurtmikekuniavskypost-300x199.jpg" alt="aaaroncopetodekurtmikekuniavskypost" width="300" height="199" /></a></strong></p>
<p><em><a id="d3_j" title="Aaron Straup Cope," href="http://en.oreilly.com/where2009/public/schedule/speaker/43824" target="_blank">Aaron Straup Cope,</a> Flickr, Tod E. Kurt, and Mike Kuniavsky &#8211; discussing <a id="rzgd" title="The Shape of Alpha" href="http://en.oreilly.com/where2009/public/schedule/detail/7212" target="_blank">The Shape of Alpha</a> (more on this upcoming!)<strong><br />
</strong></em><br />
<strong>MK:</strong> What we are trying to do is to do that.Â  We make a BlinkM &#8211; we make hardware &#8211; you saw my business partner Tod E. Kurt, he does all the heavy engineering and I am the guy who waves his hands around a lot and sends faxes.Â  We came out with our first product a year ago was a smart LED.Â  It is very simple RGB LED, it has a microcontroller and the microcontroller has firmware on it that kind of abstracts out the complexity of incorporating LEDs into a hobbyist product.Â  So you can do arbitrary colors, so it can do smooth fades between any two points in RGB space, you don&#8217;t need to know anything about Pulse Width Modulation or even microcontrollers.Â  You don&#8217;t have to know anything about anything except a little bit about electricity to use the thing. [In addition to <a id="hy-z" title="Blinkm" href="http://thingm.com/products/blinkm.html" target="_blank">BlinkM</a>, <a id="g8y3" title="Blinkm Maxm" href="http://thingm.com/products/blinkm-maxm.html" target="_blank">BlinkM MaxM</a> &#8211; the smart LED, Thingm has developed prototypes for other products such as the <a id="hqwc" title="Winem" href="http://thingm.com/products/winem.html" target="_blank">WineM</a> RFID wine rack and <a href="http://thingm.com/sketches/lovem.html" target="_blank">LoveM LCD chocolate box</a>.]</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_174cf26bcgn_b.png"><img class="alignnone size-medium wp-image-3211" title="dhj5mk2g_174cf26bcgn_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_174cf26bcgn_b-224x300.png" alt="dhj5mk2g_174cf26bcgn_b" width="224" height="300" /></a></p>
<p><strong>TS:</strong> I made a <a href="http://www.arduino.cc/en/Main/ArduinoBoardLilyPad" target="_blank">LilyPad</a>-enabled T-shirt yesterday. If I used your LED, what difference would that make to my T-shirt?</p>
<p><strong>MK:</strong> You could have the LED, without changing the circuit at all, blink in any pattern, be any color, fade between colors. With our new one, which is bigger than the old one, we actually have inputs. You could stick a wire on it or weave it into your shirt, and when you touch the wire it would change the behavior of the LED.</p>
<p><strong>TS:</strong> Nice, you are giving me even more incentive to finish my T-shirt. I noticed that Tim O&#8217;Reilly was connecting you to Gavin Starks, CEO of AMEE, just now, and Usman Haque of Pachube. What is the connection between your work on ThingM and these projects?</p>
<p><strong>MK:</strong> I think what Gavin&#8217;s doing, as I understand it from Tim, is essentially creating this new kind of sensor network that monitors electrical usage and allows you to feed it back. What that does is create a new kind of data in the data shadow of your house, your refrigerator or whatever. It suddenly grows this extra lobe out in the data world that then has these new capabilities that can be attached to.</p>
<p><strong>TS: </strong>In terms of what you do with ThingM, how are these ideas expressed through BlinkM?</p>
<p><strong>MK:</strong> We&#8217;re still building stuff that&#8217;s on a slightly lower level &#8211; components. Our corporate goal this year is to make our first stand-alone product &#8211; a solution to something. One of the easiest things you can do with our technology right now is replicate an Ambient Orb in about ten minutes. You could tie into their work. But you could also tie into it in a more subtle way, where you could make lights smart so that when the net electricity cost goes above a certain threshold, the lights know to dim or to turn off. And that can be dependent on how people use them. So rather than just having a light, you essentially associate a function or purpose with a light. Then the light knows, based on electricity usage, when its purpose has high enough priority for it to be on.</p>
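<p><em>[Editorial note: the purpose-aware light Mike sketches here reduces to a few lines of logic. A hypothetical sketch &#8211; the threshold, units, and decision rule are all invented for illustration and are not anything ThingM ships:]</em></p>

```python
def light_should_be_on(price_per_kwh, purpose_priority, threshold=0.15):
    """Decide whether a purpose-tagged light stays on.

    Hypothetical rule: below the price threshold, every light may run;
    above it, a light stays on only if its purpose's priority keeps pace
    with how far the price has climbed past the threshold. All numbers
    here are illustrative assumptions, not real tariffs.
    """
    if price_per_kwh <= threshold:
        return True  # cheap power: every light may run
    # Expensive power: only sufficiently high-priority purposes stay lit.
    return purpose_priority >= price_per_kwh / threshold

# A reading light (priority 2.5) survives a price spike that
# switches off a decorative light (priority 1.0):
print(light_should_be_on(0.30, 2.5))  # True
print(light_should_be_on(0.30, 1.0))  # False
```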
<p>Not all of these ideas pour into our products; we can only afford to make LEDs.</p>
<p><strong>TS:</strong> Still, it is amazing how ThingM really is a flagship for what is a big and important shift in the way we can relate to stuff. And what about Usman&#8217;s Pachube &#8211; where does ThingM fit with that?</p>
<p><strong>MK:</strong> I see Pachube less as a monolithic service than as a standard for device communication. Essentially it&#8217;s a proposal for interdevice communication, and potentially an easy way for people to define the way devices behave within their own personal ecology of smart devices. It&#8217;s something that&#8217;s in the early stages, and I think the barriers are not technological, the barriers are social. The barriers are understanding what this is for and why to use it. It&#8217;s not about will it work. It&#8217;ll work.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_177pc5g76g5_b.png"><img class="alignnone size-medium wp-image-3213" title="dhj5mk2g_177pc5g76g5_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_177pc5g76g5_b-300x230.png" alt="dhj5mk2g_177pc5g76g5_b" width="300" height="230" /></a></p>
<p><em>Image from Mike&#8217;s ETech presentation &#8211; original image source: Yottamark</em></p>
<p><strong>&#8220;You can, hypothetically, look at any object and know where it was made, what it is made of, what your friends think of it, how much it sells for on eBay, how to cook it, how to fix it, how to recycle it, whatever. Any information that&#8217;s available about an object can now be available immediately and associated with that object.&#8221; </strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_179fkxx3bg9_b.png"><img class="alignnone size-medium wp-image-3214" title="dhj5mk2g_179fkxx3bg9_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/dhj5mk2g_179fkxx3bg9_b-300x231.png" alt="dhj5mk2g_179fkxx3bg9_b" width="300" height="231" /></a></strong></p>
<p><strong>&#8220;Connect it with location information and you have Location Based Services for anything. This is Cabspotting by Stamen. As Tom Coates says, once we have a handle, you can throw the data around.&#8221; (Kuniavsky)</strong></p>
<p>More to come on Stamen Design later! <a href="http://en.oreilly.com/public/schedule/speaker/2156">Tom Carden</a> (Stamen Design) ran a workshop at ETech 2008, <a id="bcqk" title="&quot;Live, Vast and Deep: Web-native Information Visualization,&quot;" href="http://en.oreilly.com/et2008/public/schedule/detail/1585" target="_blank">&#8220;Live, Vast and Deep: Web-native Information Visualization,&#8221;</a> outlining the process of taking a real data set from an online <span class="caps">API</span> (such as <a href="http://flickr.com/services/api">Flickr</a> or <a href="http://dopplr.pbwiki.com/">Dopplr</a>) and shaping it into an informative, beautiful, and useful interactive graphic presentation. This year, <a href="http://en.oreilly.com/et2009/public/schedule/speaker/3486">Michal Migurski</a> and <a href="http://en.oreilly.com/et2009/public/schedule/speaker/40013">Shawn Allen</a> (both of Stamen Design) gave a workshop on <a id="nbzw" title="&quot;Maps from Scratch: Online Maps from the Ground Up.&quot;" href="http://en.oreilly.com/et2009/public/schedule/detail/5555" target="_blank">&#8220;Maps from Scratch: Online Maps from the Ground Up,&#8221;</a> and <a id="k6oi" title="Eric Rodenbeck" href="http://en.oreilly.com/et2009/public/schedule/speaker/2160" target="_blank">Eric Rodenbeck</a>, founder and creative director of Stamen Design, presented <a id="q4up" title="&quot;New Data Visualization: Reaching Through Maps.&quot;" href="http://en.oreilly.com/et2009/public/schedule/detail/5438" target="_blank">&#8220;New Data Visualization: Reaching Through Maps.&#8221;</a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/ercirodenbeckandshawnallenpost.jpg"><img class="alignnone size-medium wp-image-3279" title="ercirodenbeckandshawnallenpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/03/ercirodenbeckandshawnallenpost-300x199.jpg" alt="ercirodenbeckandshawnallenpost" width="300" height="199" /></a></p>
<p><em>The picture above is of Eric Rodenbeck and Shawn Allen playing bocce.</em></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/feed/</wfw:commentRss>
		<slash:comments>16</slash:comments>
		</item>
		<item>
		<title>Towards a Newer Urbanism: Talking Cities, Networks, and Publics with Adam Greenfield</title>
		<link>http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/</link>
		<comments>http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/#comments</comments>
		<pubDate>Sat, 28 Feb 2009 04:28:06 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[crossing digital divides]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[free software]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Adam Greenfield]]></category>
		<category><![CDATA[aggregating the world's energy data]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Anne Galloway's forgetting machine]]></category>
		<category><![CDATA[antisocial networking]]></category>
		<category><![CDATA[antisocial networking systems]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[cities and networks]]></category>
		<category><![CDATA[connecting environments]]></category>
		<category><![CDATA[context aware]]></category>
		<category><![CDATA[context aware applications]]></category>
		<category><![CDATA[context aware mediators]]></category>
		<category><![CDATA[data visualization]]></category>
		<category><![CDATA[deliberative democracy]]></category>
		<category><![CDATA[Eben Moglen on privacy]]></category>
		<category><![CDATA[EEML]]></category>
		<category><![CDATA[Erving Goffman]]></category>
		<category><![CDATA[everyware]]></category>
		<category><![CDATA[flexible identity]]></category>
		<category><![CDATA[information processing]]></category>
		<category><![CDATA[interaction design]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[locative is a mood]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[mobile computing]]></category>
		<category><![CDATA[mobile phones and sensors]]></category>
		<category><![CDATA[mobility]]></category>
		<category><![CDATA[next generation internet]]></category>
		<category><![CDATA[Nurri Kim]]></category>
		<category><![CDATA[onto]]></category>
		<category><![CDATA[ontome]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[privacy in networked environments]]></category>
		<category><![CDATA[RFID]]></category>
		<category><![CDATA[self-describing networked objects]]></category>
		<category><![CDATA[smart homes]]></category>
		<category><![CDATA[smart products]]></category>
		<category><![CDATA[social networking systems]]></category>
		<category><![CDATA[sousveillance]]></category>
		<category><![CDATA[speedbird]]></category>
		<category><![CDATA[spime wrangle]]></category>
		<category><![CDATA[spime wrangling]]></category>
		<category><![CDATA[spimes]]></category>
		<category><![CDATA[spimy]]></category>
		<category><![CDATA[sustainable cities]]></category>
		<category><![CDATA[the big now]]></category>
		<category><![CDATA[the city is here for you to use]]></category>
		<category><![CDATA[the future of the internet]]></category>
		<category><![CDATA[the long here]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubicomp technologies]]></category>
		<category><![CDATA[ubiquitous systems]]></category>
		<category><![CDATA[unbook]]></category>
		<category><![CDATA[uncanny valleys]]></category>
		<category><![CDATA[urban informatics]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[web of things]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=2969</guid>
		<description><![CDATA[Adam Greenfield&#8217;s new book, The City Is Here For You To Use, is coming soon (photo above by Pepe Makkonen is from Adam Greenfield&#8217;s Flickr stream). Adam told me: &#8220;I&#8217;m aiming at a free v1.0 PDF release on 05 June 2009, with the book shipping as quickly thereafter as humanly possible. There will be a [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldpost.jpg"><img class="alignnone size-full wp-image-2970" title="adamgreenfieldpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldpost.jpg" alt="adamgreenfieldpost" width="333" height="500" /></a></p>
<p>Adam Greenfield&#8217;s new book, <em><strong><a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank">The City Is Here For You To Use</a></strong></em>, is coming soon (photo above by Pepe Makkonen is from <a id="souo" title="Adam Greenfield's Flickr stream" href="http://www.flickr.com/photos/studies_and_observations/">Adam Greenfield&#8217;s Flickr stream</a>). Adam told me:</p>
<p style="text-align: left;"><strong>&#8220;I&#8217;m aiming at a free v1.0 PDF release on 05 June 2009, with the book shipping as quickly thereafter as humanly possible. There will be a version zero or public alpha in about six weeks.&#8221;</strong></p>
<p>I am not good at waiting for books I really want to read to arrive. But, on the upside, it brings out my already pretty highly developed investigative instinct. So when Adam very generously agreed to do an interview, impatience turned into delight in tasting what is to come. And Adam is encouraging this kind of engaged anticipation. He writes (<a id="v80w" title="see post" href="http://speedbird.wordpress.com/2009/02/19/of-books-and-unbooks/">see post</a>) that <em>The City Is Here For You To Use</em>, is shaping up:</p>
<p><strong>&#8220;as something of an <a id="oj:9" title="unbook" href="http://theunbook.com/2009/02/18/what-is-an-unbook/">unbook</a><em> avant la lettre.</em> It&#8217;s why we&#8217;ve [<a href="http://www.nurri.com/">Nurri Kim</a> and Adam Greenfield] always insisted on keeping you in the loop as to the book&#8217;s <a href="http://speedbird.wordpress.com/2009/01/22/bookproject-update-005-year-two/">fitful progress</a>, it&#8217;s why I take every opportunity to <a href="http://speedbird.wordpress.com/2009/02/14/the-city-is-here-table-of-contents/">test its ideas here</a>, it&#8217;s why I make explicit the fact that your response to those ideas is crucial to their evolution and expression. And it&#8217;s why, even though the process is inevitably going to result in a static, physical document as one of its manifestations &#8211; and hopefully a very nice one indeed &#8211; we&#8217;ve committed to offering a free and freely-downloadable Creative Commons-licensed PDF of every numbered version of <em>The City</em>, from zero onward.</strong></p>
<p><strong>You buy the book if you want the object. The ideas are free.&#8221;</strong></p>
<p>I found the opportunity to ask Adam questions about some of his subtle renderings of technology, culture, and being in urban environments challenging and very illuminating &#8211; although I definitely get the feeling I am asleep at the wheel on some of the critical areas he is thinking and writing on.</p>
<p>Knowing the depth and range of Adam&#8217;s thought in his seminal book, <em><a id="you9" title="Everyware" href="http://www.studies-observations.com/everyware/">Everyware</a></em>, and his blog, <a id="r22r" title="Speedbird" href="http://speedbird.wordpress.com/">Speedbird</a>, before I began the conversation I asked Adam to point me to some of his posts that reflect key ideas he is working on at the moment (Adam has recently posted <a href="http://speedbird.wordpress.com/2009/02/14/the-city-is-here-table-of-contents/" target="_blank"><em>The City Is Here</em>: Table of contents</a>). Adam directed me to these three posts.</p>
<p style="text-align: left;"><a href="http://speedbird.wordpress.com/2007/12/09/antisocial-networking/" target="_blank">Antisocial networking</a></p>
<p style="text-align: left;"><a href="http://speedbird.wordpress.com/2008/08/25/more-songs-about-context-and-mood/" target="_blank">More songs about context and mood</a></p>
<p><a href="http://speedbird.wordpress.com/2007/01/29/messenger-space-messenger-body-messenger-mesh/" target="_blank">Messenger, space, messenger body, messenger mesh</a></p>
<p>I may ramble and diverge, as is my nature, but these posts inspired many of the questions I ask.</p>
<p>Adam is currently head of design direction for service and user-interface design at Nokia and living in Helsinki, so I did not have the opportunity to do the interview in person. But I have glimpsed Adam&#8217;s world through his Flickr stream, and some of these images have found their way into this post. I suggest you browse Adam&#8217;s photography for yourself; I cannot do justice to the thousands of nuanced perceptions of cities, networks and publics you will find there. In the meantime, here are three glyphs of Adam Greenfield that I liked a lot.</p>
<p><strong><em><a id="r315" title="&quot;My favorite shoes&quot;" href="http://www.flickr.com/photos/studies_and_observations/2074835498/">&#8220;My favorite shoes,&#8221;</a> <a id="cg3n" title="&quot;My favorite chair,&quot;" href="http://www.flickr.com/photos/studies_and_observations/2074042711/">&#8220;My favorite chair,&#8221;</a></em></strong> <em>and</em> <strong><em><a id="vjz1" title="&quot;Favoriteplace&quot;" href="http://www.flickr.com/photos/studies_and_observations/1849426174/">&#8220;Favoriteplace&#8221;</a></em></strong> <em>&#8211; photos by Adam Greenfield</em></p>
<p><strong><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteshoespost.jpg"><img class="alignnone size-full wp-image-2984" title="favoriteshoespost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteshoespost.jpg" alt="favoriteshoespost" width="225" height="225" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoritechair1.gif"><img class="alignnone size-medium wp-image-2975" title="favoritechair1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoritechair1-300x225.gif" alt="favoritechair1" width="300" height="225" /></a></em></strong></p>
<p><a href="../wp-content/uploads/2009/02/favoriteplace.jpg"><br />
</a><br />
<a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteplace2.jpg"><img class="alignnone size-medium wp-image-2992" title="favoriteplace2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteplace2-300x225.jpg" alt="favoriteplace2" width="300" height="225" /></a></p>
<h3>A Conversation (in gdoc) with Adam Greenfield</h3>
<p><strong> Tish Shute:</strong> Could you explain a little about the evolution of your thoughts on urban environments, ubicomp and interaction design? What shifts in your thinking have taken place over the last few years re the dawning of the age of ubiquitous computing? It is a couple of years now since <a href="http://www.studies-observations.com/everyware/" target="_blank"><em>Everyware</em></a>; what aspects of the uptake of <em>Everyware</em> have most surprised, disappointed or inspired you? Which of the many theses you discuss in <em>Everyware</em> have become the most crucial for <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank"><em>The City Is Here For You To Use</em>?</a></p>
<p><strong>Adam Greenfield: You know, there&#8217;s a little passage in the liner notes to the second Throbbing Gristle album that I always think of when I&#8217;m asked questions along these lines. As part of their stance, they&#8217;d adopted the dry tone of a corporate annual report, and the preamble began by saying, &#8220;Since our last report to you, many things have changed. Indeed, it would be foolish to assume that it could be otherwise.&#8221; And I think that&#8217;s just exactly right: the world keeps moving, and the positions we&#8217;d staked ourselves to not so long ago may no longer be correct, or even relevant, to the one we find ourselves inhabiting now.<br />
</strong><br />
<strong>So, first, I think it&#8217;s important to cop to all the places in <em>Everyware</em> where I just outright got things wrong. There&#8217;s a passage in Thesis 50, for example, where I unaccountably mock the idea that &#8220;the mobile phone&#8230;will do splendidly as a mediating artifact for the delivery of [ubiquitous] services.&#8221; OK, this was admittedly written in a pre-iPhone world &#8211; and was correct <em>for</em> that world &#8211; but you can really see my parochialism showing here. It took the iPhone to make the proposition as blazingly self-evident to me in North America as it had been for quite some time to folks in Europe and Asia.</strong></p>
<p><strong>Having said that, though, I think I&#8217;m justified in taking a little pride in what the book got right. The broader trends the book set out to discuss &#8211; the colonization of everyday life by information processing &#8211; well, take a good look around you. And so one of the points of departure for the new book is taking everything posited in <em>Everyware</em> as a given: the urban environment, and most everything in it as well, has been provisioned with the kind of abilities you mention. So what now?</strong></p>
<p><strong>How do you go about designing informatic systems so they don&#8217;t undermine the wonderful things about cities? How do you design cities so they can incorporate networked informatics to greatest advantage? How, especially, do you accomplish these things when the disciplinary communities involved barely speak the same language? And how do you keep everyone&#8217;s eyes on the prize, which is the ordinary human being asked to make sense of these new propositions? These are the questions <em>The City Is Here For You To Use</em> sets out to address.</strong></p>
<p><a href="../wp-content/uploads/2009/02/adamgreenfieldthelonghere.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldthelonghere.jpg"><img class="alignnone size-full wp-image-2993" title="adamgreenfieldthelonghere" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldthelonghere.jpg" alt="adamgreenfieldthelonghere" width="500" height="321" /></a></p>
<p><em>Adam talking about <a href="http://www.flickr.com/photos/studies_and_observations/3181518615/" target="_blank">&#8220;Le Long Ici&#8221;</a> in Paris (also see Adam&#8217;s post, <a href="http://speedbird.wordpress.com/2008/05/04/the-long-here-and-the-big-now/" target="_blank">&#8220;The long here and the big now&#8221;</a>)</em></p>
<p><strong>TS:</strong> You mention that the hardest part of producing <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank"><em>The City Is Here For You To Use</em></a> wasn&#8217;t <em><strong>&#8220;keeping on top of all the emergent manifestations of urban informatics, or even developing a satisfying spinal argument about their significance&#8221;</strong></em> but getting the voice right. It seems that now is the perfect time for a book that would really speak to a wide audience. But it also seems that the city that is here for you to use is manifesting quite differently in different parts of the world. You seem to be somewhat of a nomad &#8211; Japan to NYC to Helsinki. Can putting together different views of urban informatics give us more depth perception on the emergence of ubiquitous computing?</p>
<p><strong>AG: There&#8217;s no question in my mind that the long-term experience of everyday life in Tokyo, New York, and now Helsinki has been an invaluable asset to me, as I imagine it would be to anybody interested in thinking or writing about the networked city. It&#8217;s given me a certain amount of parallax, you know? And that, in turn, throws a really interesting light onto how the selfsame technology can appear in substantially different guises in different social contexts.</strong></p>
<p><strong>But explaining those things &#8211; those complicated, delicate negotiations &#8211; getting them right, doing them justice, doing so in a way that doesn’t dumb anything down, and still remaining accessible? It’s a challenge, let me tell you. You want to remain approachable and humane, but you also want to explain things like different jurisprudential takes on property, or how advocates of RESTful architectures think that REST is the reason why Internet adoption spread as rapidly as it did. If you want to enjoy even one chance in a hundred of getting your message across, you’ve got to start with an understanding that those subjects are MEGO territory for most people &#8211; whether they hail from Shibuya, Shoreditch or San Pedro.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon.jpg"><img class="alignnone size-full wp-image-2996" title="everywareicon" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon.jpg" alt="everywareicon" width="136" height="135" /></a></p>
<p><em><strong><a href="http://www.flickr.com/photos/studies_and_observations/89045331/" target="_blank">Everyware icons: Information processing dissolving into behavior</a></strong></em><em><strong> </strong>(Icons inspired by <a href="http://www.elasticspace.com/" target="_blank">Timo Arnall</a>; design by Adam Greenfield and <a href="http://www.nurri.com/">Nurri Kim</a>). [Adam notes on his Flickr page that he tweaked <a href="http://www.flickr.com/search/?w=14112399%40N00&amp;q=everyware+icons&amp;m=text" target="_blank">these icons </a>as section headers for </em><em><a href="http://www.studies-observations.com/everyware/" target="_blank"><em>Everyware</em></a></em><em>]</em></p>
<p><strong>TS:</strong> Could you explain more about what you term “onto” and “ontome” and how this differs from spimes and spime wrangling?<strong><br />
</strong><strong><br />
AG: You know, I never did get to develop that idea as much as I would have liked. In my mind, at least, “ontome” referred to the totality &#8211; the global environment of addressable, queryable, scriptable objects. (An “onto,” then, would be any given such object.) I guess I was looking for words that would do two things: allow us to distinguish between the instantiation and the class, and leave us with a better word than “spime.”</strong></p>
<p><strong>TS: </strong>When you say a better word than spime, is this because…<br />
<strong><br />
AG: Euphony, primarily. : . )</strong></p>
<p><strong>TS:</strong> When I first used the Android app, <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a>, on Broadway, NYC &#8211; a street I have traveled thousands and thousands of times &#8211; and it offered up new information about itself, it was definitely an “OMG this is big!” moment for me. Like the first time I clicked on a screen and Amazon sent out a book in the early nineties (something so ordinary now it seems impossible that it was exciting, but I remember it was to me!). But if I understand <a href="http://speedbird.wordpress.com/2008/08/19/worth-a-thousand-words-etc/" target="_blank">your post here</a> correctly, isn’t Android with compass the first easy-to-use context-aware mediator for wrangling onto, ontome and spimes?<strong><br />
</strong><br />
<strong>AG: Wikitude sure looks pretty impressive, and maybe even useful. But I would never, ever call it “context-aware.”<br />
</strong><br />
<strong>To my mind, at least two more things would need to happen before we could comfortably think of it as a “context-aware spime wrangler.” First, the buildings and other public objects around you would actually have to be spimy &#8211; they’d have to report something of their past and current state to the network. And then, some application running on your phone would somehow have to cross-reference that state information with some fact about your current state of being, and deliver you relevant information.</strong></p>
<p><strong>So, let’s take your Wikitude example. You’re walking down Broadway and you pass an unfamiliar building, and for whatever reason you want to know more about it. Your phone pings the building’s dynamic self-description, and it replies to the effect that Andy Warhol had his Factory there between 1973 and 1984. If Wikitude chooses to share this particular piece of information with you, and not some other potentially germane factoid from the building’s history, on the strength of the fact that “The Velvet Underground and Nico” was in your last.fm playlist? That would constitute some small measure of context-awareness.</strong></p>
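<p><em>The cross-referencing step described here could be sketched in a few lines of code. This is a toy illustration only; the function names and the data below are hypothetical, not any real Wikitude or last.fm API:</em></p>

```python
# A toy sketch of the selection step: given several facts a building reports
# about itself, prefer the one that overlaps with something we already know
# about the user (here, artists from a last.fm-style playlist).

def pick_relevant_fact(building_facts, user_artists):
    """Return the fact whose tagged artists best match the user's playlist."""
    def score(fact):
        return len(set(fact.get("artists", [])) & set(user_artists))
    # max() returns the first fact when every score ties at zero,
    # so we fall back to the building's lead fact if nothing matches.
    return max(building_facts, key=score)

facts = [
    {"text": "Built in 1896 as a cast-iron warehouse.", "artists": []},
    {"text": "Andy Warhol had his Factory here, 1973-1984.",
     "artists": ["The Velvet Underground", "Nico"]},
]
playlist = ["The Velvet Underground", "Nico", "Television"]

print(pick_relevant_fact(facts, playlist)["text"])
```

<p><em>With those inputs the Factory factoid wins, because two of its tagged artists appear in the playlist &#8211; exactly the “small measure of context-awareness” described above.</em></p>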
<p><strong>But you see how hard we had to try just to come up with an example, how forced it is, how</strong><em><strong> so-what. </strong></em><strong>And I have to say that &#8211; short of some infinitely supple system that really could model your innermost desires ahead of real time, and present appropriate responses to them &#8211; most so-called “context-aware” applications and services are like this. They’re either trivial, or wildly overambitious.</strong></p>
<p><strong>Maybe we don’t need for things to be context-aware for them to be useful, anyway. Certainly a great many objects in the world are starting to report their own status, and many more will do so in the fullness of time. And for the most part, all you’ll need to avail yourself of them is a Web browser running on a device that knows where it is in the world. An iPhone or an Android device will work splendidly &#8211; I called the iPhone “the first real everyware device” the day it came out and I was able to play with it for the first time &#8211; and in that way, the answer to your question is “yes.” Not to be longwinded or anything. ; . )</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/objectwithimperceptibleproperties.jpg"><img class="alignnone size-medium wp-image-3000" title="objectwithimperceptibleproperties" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/objectwithimperceptibleproperties-300x212.jpg" alt="objectwithimperceptibleproperties" width="300" height="212" /></a></p>
<p><em><a href="http://www.flickr.com/photos/studies_and_observations/206984090/#DiscussPhoto" target="_blank">This Object has imperceptible properties. </a> [Adam notes on his Flickr page: &#8220;This is a custom RFID-enabled transit pass that <a href="http://www.elasticspace.com/" target="_blank">Timo Arnall </a>had made up for me here in Seoul. I&#8217;ve (clumsily) tagged it with the icon that Nurri and I developed to represent just such emergent situations as this in the everyware milieu &#8211; that there&#8217;s no way for anyone to understand that this object has puissance beyond the obvious simply by examining it.&#8221;]</em></p>
<p><strong>TS: </strong>It seems that we are just at the beginning of understanding how to create networks of spimes (e.g. <a href="http://www.pachube.com/" target="_blank">Pachube</a>). Gavin Starks of <a id="ya:2" title="AMEE" href="http://www.amee.com/">AMEE</a> (“the world’s energy meter”) once suggested to me that AMEE could be described as a facilitator of networked spimes (everything will have an energy identity). I think you may be familiar with AMEE because you keynoted next to Gavin at <a href="http://2007.xtech.org/public/schedule/grid/2007-05-16" target="_blank">Xtech 2007</a>.</p>
<p>I would be interested to hear your thoughts on AMEE.</p>
<p>When <a href="http://speedbird.wordpress.com/2008/08/19/worth-a-thousand-words-etc/" target="_blank">you discussed onto and ontome in this post</a>, you noted:</p>
<blockquote><p><em><strong>“The greater part of the places and things we find in the world will be provided with the ability to speak and account for themselves. That they’ll constitute a coherent environment, an <a href="http://www.graphpaper.com/2006/03-23_a-spime-is-a-species">ontome</a> of <a href="http://flickr.com/photos/studies_and_observations/89092744/">self-describing networked objects</a>, and that we’ll find having some means of handling <a href="http://web.archive.org/web/20050117141647/www.v-2.org/greenfieldspime.pdf">the information flowing off of them</a> very useful indeed.”</strong></em></p></blockquote>
<p>Is the idea of “energy identity” that AMEE proposes an ontome?<em><br />
<strong><br />
</strong></em><strong>AG: See below for a précis of my feelings regarding environmental/sustainability initiatives, AMEE included. Uh…is AMEE an ontome? No. There’s just one ontome, and it’s coextensive with what folks now call the Internet of Things. It sounds like individual AMEE sensors would be “ontos.”</strong></p>
<p><strong>But I think the difficulty we’re having is a pretty good indicator that the terminology is more trouble than it’s worth. Sometimes a coinage, as satisfying as it may be lexically, just doesn’t work for people. These days I’m trying to get out of the neologism trade.</strong></p>
<p><strong>TS: </strong>I know <a href="http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">when Usman Haque talks about Pachube</a> he talks about spimes and spime wrangling. I asked Usman for his thoughts on spimes and onto/ontome and he gave me some comments.</p>
<p><strong>Usman Haque:</strong> I think I had somehow missed the conversation about onto and ontome but backtracked through blog posts to piece it together (unfortunately some posts at v-2 and Studies &amp; Observations no longer exist!). There are a couple of things that have made me uncomfortable about the word ‘spime’: (a) the fact that it might be too easy to confuse with an “object”. A ‘spime’ should also encompass relationships between things, and not just the “thingness” itself. (b) the sound of it (as Adam noted above). But then I am reminded of that horrible gooey interface used to plug into people in <a href="http://www.imdb.com/title/tt0120907/">eXistenZ</a> &#8211; it somehow seems appropriate that it should be a horrible gooey word, and not something that can disappear politely… So I like onto/ontome because it speaks to my first concern about ‘spime’; but my second concern, it turns out, is not the problem I thought it was, and so onto/ontome might be… ahem… too euphonic! On the question of this thing people are calling the “Internet of Things”, I’ve tried in lectures to reframe it as the “Ecosystem of Environments”. Further, Vlad Trifa makes a delicious point that just as ‘web’ is different from ‘internet’, so too should we consider the “Web of Things” rather than the “Internet of Things”, something I agree with.</p>
<p><strong>TS: </strong>It seems like this point about the difference between “the web of things” and the “internet of things” is pretty important?<br />
<strong><br />
AG: The parallel distinction between Web and Internet sure is! They’re two completely different things, right? And HTTP is far from the only protocol that runs over the Internet. Now, as to what Vlad means by extending this particular distinction to the domain of networked objects, I don’t yet know, I haven’t had time to check it out. But sure, in principle I’d totally be willing to go along with the idea that there’s a meaningful distinction between two environments named that way.</strong></p>
<p><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon3.jpg"><img class="alignnone size-full wp-image-3010" title="everywareicon3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon3.jpg" alt="everywareicon3" width="142" height="139" /></a><br />
</em></p>
<p><em><a href="http://www.flickr.com/photos/studies_and_observations/89045326/in/photostream/" target="_blank">No information is collected here; network dead zone</a></em></p>
<p><strong>TS: </strong>I was just going over <a id="yo_s" title="Greenfield's principles of ubiquitous computing" href="http://www.we-make-money-not-art.com/archives/2006/10/adam-greenfield.php">Greenfield’s principles of ubiquitous computing</a>. I am not sure that I see any current manifestations of ubicomp that hold to these principles yet?</p>
<p><strong>AG: Oh, sure there are. Look at the work Tom Coates has done on <a href="http://fireeagle.yahoo.net/" target="_blank">Yahoo!’s Fire Eagle</a>; look at <a href="http://www.dopplr.com/" target="_blank">Dopplr</a>. And look at some of the steps other, less compassionate developers (e.g. Facebook) have been forced to take by their own users.</strong></p>
<p><strong>Look, those principles are just codifications of common sense and basic neighborly virtues, expressed in language appropriate to the domain of application. The best, smartest and most ethical developers have never needed guidelines to do the right thing. But especially inside companies and other complex organizations, people who want to implement compassion in their design of a technical system may occasionally find it useful to have some color of authority to invoke in their struggles. That’s all those five principles are there for, and I’m well satisfied that people have been able to use them that way.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smarthome.jpg"><img class="alignnone size-medium wp-image-3005" title="smarthome" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smarthome-300x225.jpg" alt="smarthome" width="300" height="225" /></a><a href="http://www.flickr.com/photos/studies_and_observations/501331002/" target="_blank"><br />
</a></p>
<p><em><a href="http://www.flickr.com/photos/studies_and_observations/501331002/" target="_blank">Boffi’s take on the smart home</a> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS:</strong> In your post, <a id="klme" title="More Songs About Context And Mood" href="http://speedbird.wordpress.com/2008/08/25/more-songs-about-context-and-mood/">More Songs About Context And Mood,</a> you suggest a direction for interaction design that you point out is not far from Yvonne Rogers’ ideas in “Moving on from Weiser” about a switch in the goal of ubicomp from Weiser’s vision of calm living (“computers appearing when needed and disappearing when not”) to engaged living &#8211; ubicomp technologies designed not to do things for people but to help people engage more actively in the things that they do (ensembles, ecologies of resources).</p>
<p>You also suggest interaction designers should be:</p>
<blockquote><p><strong><em>&#8220;parsimonious about the interaction design challenges our organizations do take on, with an eye toward reducing the complications of context (and the attendant opportunities for default, misunderstanding, misfire, time-wasting, and humiliation) to some manageable minimum.&#8221;</em></strong></p></blockquote>
<p>As you have pointed out, “we don’t do ‘smart’ very well yet.” But paradoxically smart grids, smart homes, smart products etc. are ubiquitously coming to market right now.</p>
<p>Yvonne Rogers suggests interaction designers should be:</p>
<blockquote><p><em>moving from a mindset that wants to make the environment smart and proactive to one that enables people, themselves, to be smarter and proactive in their everyday and working practices</em><em> </em></p></blockquote>
<p>What areas might interaction designers most productively direct their attention towards?</p>
<p><strong>AG: You note that things called “smart homes” and “smart products” are coming onto the market, and that sure would seem to be the case. But as to whether or not these things are genuinely smart, we don’t have anything more to go on than the marketing department’s word. I think you can already see that I tend to take language very seriously, and I really don’t like usages like the “smart” here, or the “aware” in “context-aware.” They overpromise, and they cannot help but set us up for failure and disappointment.</strong></p>
<p><strong>You know what I’d really like to see interaction design wrestle with? I would love to see a rigorous, no-holds-barred examination of the complexities of the self and its performance in everyday life, and how these condition our use of public space (and personal media in public space). I would love to see the development of ostensibly “social” platforms informed by some kind of reckoning with issues like vulnerability, dishonesty, the fact of power dynamics. In other words, before we deign to go about “helping” people, wouldn’t it be lovely if we understood what they perceived themselves as needing help with, and why?</strong></p>
<p><strong>I’d also pay good money to see talented interaction designers turn their efforts toward tools for the support of deliberative democracy, for the navigation of complex multivariate decision spaces, and for conflict resolution.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/locativeasamood.jpg"><img class="alignnone size-full wp-image-3071" title="locativeasamood" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/locativeasamood.jpg" alt="locativeasamood" width="500" height="375" /></a><a href="http://flickr.com/photos/studies_and_observations/2521894341/" target="_blank"><br />
</a></strong></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/2521894341/" target="_blank">Locative is a mood</a> &#8211; photo by Adam Greenfield</em><strong><br />
</strong></p>
<p><strong>TS:</strong> I know you said this would take too long to explain, but I couldn’t help noticing that you seem to be, perhaps, skeptical about the role everyware can play in sustainable living. And yet it seems at the moment, in the hacker and business communities at least, the role of everyware in reducing carbon footprint/energy management etc. is the great green hope?</p>
<p>Will everyware enable or hinder fundamental changes at the level of culture and identity necessary to support the urgent global need &#8211; “to consume less and redefine prosperity”?<strong><br />
</strong><br />
<strong>AG: I’m not skeptical about the potential of ubiquitous systems to meter energy use, and maybe even incentivize some reduction in that use &#8211; not at all. I’m simply not convinced that anything we do will make any difference.</strong></p>
<p><strong>Look, I think we really, seriously screwed the pooch on this. We have fouled the nest so thoroughly and in so many ways that I would be absolutely shocked if humanity comes out the other end of this century with any level of organization above that of clans and villages. It’s not just carbon emissions and global warming, it’s depleted soil fertility, it’s synthetic estrogens bioaccumulating in the aquatic food chain, it’s our inability to stop using antibiotics in a way that gives rise to multi-drug resistance in microbes.</strong></p>
<p><strong>Any one of these threats in isolation would pose a challenge to our ability to collectively identify and respond to it, as it’s clear anthropogenic global warming already does. Put all of these things together, assess the total threat they pose in the light of our societies’ willingness and/or capacity to reckon with them, and I think any moderately knowledgeable and intellectually honest person has to conclude that it’s more or less “game over, man” &#8211; that sometime in the next sixty years or so a convergence of Extremely Bad Circumstances is going to put an effective end to our ability to conduct highly ordered and highly energy-intensive civilization on this planet, for something on the order of thousands of years to come.</strong></p>
<p><strong>So (sorry <em>again</em>, Bruce) I just don’t buy the idea that we’re going to consume our way to Ecotopia. Nor is any symbolic act of abjection on my part going to postpone the inevitable by so much as a second, nor would such a sacrifice do anything meaningful to improve anybody else’s outcomes. I’d rather live comfortably &#8211; hopefully not obscenely so &#8211; in the years we have remaining to us, use my skills as they are most valuable to people, and cherish each moment for what it uniquely offers.</strong></p>
<p><strong>Maybe some people would find that prospect morbid, or nihilistic, but I find it kind of inspiring. It becomes even more crucial that we not waste the little time we do have on broken systems, broken ways of doing things. The primary question for the designers of urban informatics under such circumstances is how to design systems that underwrite autonomy, that allow people to make the best and wisest and most resonant use of whatever time they have left on the planet. And who knows? That effort may bear fruit in ways we have no way of anticipating at the moment. As it says in the Qur’an, gorgeously: “At the end of the world, plant a tree.”</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/biowall2.jpg"><img class="alignnone size-full wp-image-3008" title="biowall2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/biowall2.jpg" alt="biowall2" width="375" height="500" /></a><br />
</strong></p>
<p><em><a href="http://www.flickr.com/search/?q=biowall&amp;w=14112399%40N00" target="_blank">Biowall!</a> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS: </strong>In <a href="http://speedbird.wordpress.com/2007/12/09/antisocial-networking/" target="_blank">your post “Antisocial Networking,”</a> you make some telling comments on the sorry state of social networking systems.</p>
<div style="margin-left: 40px;"><strong><em>“All</em> <em>social-networking systems, as currently designed, demonstrably create social awkwardnesses that did not, and could not, exist before. All social-networking systems constrain, by design and intention, any expression of the full band of human relationship types to a very few crude options &#8211; and those static! A wiser response to them would be to recognize that, in the words of the old movie, ‘the only way to win is not to play.’”</em></strong></div>
<p>But you do also state:</p>
<div style="margin-left: 40px;"><strong><em>“But it’s past time for me to acknowledge that while the discourse of social networking may at first blush seem marginal to my core concerns, it’s far more central to those concerns than I might wish.”</em></strong></div>
<p>Which of your concerns is social networking more central to than you might wish and why?</p>
<p><strong>AG: Well, you know I’m interested in social interaction, interpersonal behavior, and in how these things play out in networked environments. There’s virtually no way for me to avoid dealing with Facebook, as wretched as I think it is.</strong></p>
<p><strong>Facebook is pretty hegemonic, in that its reach and influence extend further than the universe of people who use it. I bump up against it constantly, in a few different ways. People send me links I can’t access, because I’m not on Facebook. People spend time and energy trying to convince me that I’m really missing out, because I’m not on Facebook. The last few months, there have even been a few people who feel justified in expressing some kind of exasperation, who are really pissed off…because they can’t find me on Facebook. It’s become the sovereign interface to any kind of life in public, and as a result a great many people don’t question its modes, tropes and metaphors.</strong></p>
<p><strong>So when it comes time to build some kind of situated interpersonal mediation framework, some kind of intervention in the fabric of the city, those are the tropes they reach for: accounts, profiles, friend counts, friendings and unfriendings, nudges and pokes. And as a member of a team tasked with the design of such systems, as a potential user of them, and certainly as someone exposed to the social rhetoric flowing downstream from their use, you bet these tropes become central to my concerns.</strong></p>
<p><strong>But what if we admitted that Facebook and the whole paradigm it’s built on are broken? What would things look like if we started from a more sensitive understanding of the interaction between self and others? Say, the understanding Erving Goffman was offering us as far back as the late 1950s? Then you’d understand the need for provisions like a “backstage,” a place to swap out one mask for another, the ability to present oneself differently to different communities and networks. That’s what I’m interested in exploring.</strong></p>
<p><strong>TS: </strong>Social networking systems in their current form are crude and express a very narrow bandwidth of human relationship. But already people are connecting everyware’s networked social acts to existing social networking systems. At the ITP winter show there was <a id="eo:2" title="kickbee" href="http://gizmodo.com/5109297/kickbee-now-the-world-can-know-what-your-fetus-is-up-to">kickbee</a> &#8211; networked fetal communication &#8211; and <a id="kwj6" title="tweetmobile" href="http://tweetmobile.com/">tweetmobile</a>, which used Twitter as an actuator for an ambient display, and green everyware (energy monitoring) is showing up in a number of forms on existing social networks. But rather than just hooking up everyware to these existing flawed social networking systems, does everyware require a reimagining of networked social interactions and social networking systems?<strong><br />
</strong><br />
<strong>AG: That’s a great question, and I think the answer is clearly “yes.” It’s one thing to confine the consequences of that brokenness to the Web, and entirely another to let it bleed out into the world.</strong></p>
<p><strong>Does that mean any such reimagining is <em>going</em> to happen, that people will somehow refrain from plugging real-world outputs into these terribly flawed frameworks? Not a chance in hell. It’s too late to put a fence on that particular cliff. But maybe there’s still time to park an ambulance in the valley below.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/earthssurface.jpg"><img class="alignnone size-full wp-image-3074" title="earthssurface" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/earthssurface.jpg" alt="earthssurface" width="375" height="500" /></a></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/2970558731/" target="_blank">&#8220;A graphic representation of a portion of the Earth&#8217;s surface, as seen from above&#8221;</a> &#8211; photo by Adam Greenfield<br />
</em></p>
<p><strong>TS: </strong>I saw you tweet that you met Usman Haque from <a href="http://www.pachube.com/" target="_blank">Pachube</a> recently. What do you find most interesting about Pachube and <a href="http://www.eeml.org/" target="_blank">EEML</a>? Will you design a project for Pachube to push the conversation further? Did Usman ask you to take a role in the future of Pachube? How does Pachube enable the vision of <em><a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank">The City Is Here For You To Use</a></em>? I could go on forever with questions, so please do tell!</p>
<p><strong>AG: OK, I should probably reiterate that my fundamental interest is in people, and in what they choose to make and do with technology, not the technology itself. For the last few years, I’ve particularly been trying to understand how people interact with each other and with the urban environments around them when those environments have been provisioned with the ability to gather, process and take action on data. And this is how I came by my interest in what Usman is up to with Pachube, because those “gather,” “process” and “take action upon” functions are generally accomplished by different systems, designed by different groups of people, at different times and to different ends. What Pachube aims to do is make the difficult and not-particularly-glamorous work of connecting these pieces a whole lot easier.</strong></p>
<p><strong>Think of it as a step toward enabling the ontome, this so-called Internet of Things we&#8217;ve been talking about, the same way basic protocols and formats like HTTP and HTML enabled the wildfire spread of the Web we’re familiar with. What Pachube offers is a way &#8211; a relatively straightforward and self-explanatory way &#8211; to plug any given compatible input into a similarly compatible output. So if you’ve got an air-quality sensor or a soil-pH sensor or a personal biometric monitor, you can plug it into Pachube, and someone else can grab the data those things generate and use it to drive a visualization, or the state of a physical system like a window, or whatever else they can imagine. It’s as close as anyone’s yet come to providing a plug-and-play backbone for the creation of responsive environments.</strong></p>
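<p><em>The “plug a compatible input into a compatible output” idea can be sketched concretely. The XML below is a simplified, hypothetical feed modeled loosely on Pachube’s EEML format &#8211; not a live API response &#8211; and the consumer functions are illustrative inventions:</em></p>

```python
# Toy sketch: read datastreams from a shared, EEML-style feed and hand
# them to an "output" that could just as well be a visualization or a
# physical actuator. The feed and thresholds here are made up.
import xml.etree.ElementTree as ET

FEED = """
<eeml>
  <environment title="Rooftop sensors">
    <data id="0"><tag>air quality</tag><value>42.0</value></data>
    <data id="1"><tag>soil pH</tag><value>6.8</value></data>
  </environment>
</eeml>
"""

def read_streams(xml_text):
    """Return {tag: value} for every datastream in the feed."""
    root = ET.fromstring(xml_text)
    return {d.findtext("tag"): float(d.findtext("value"))
            for d in root.iter("data")}

def drive_output(streams):
    # Stand-in for a responsive-environment output: open a window vent
    # whenever the (hypothetical) air-quality reading drifts past 40.
    return "open vent" if streams["air quality"] > 40 else "hold"

streams = read_streams(FEED)
print(streams, drive_output(streams))
```

<p><em>The point of the sketch is the decoupling: any consumer that speaks the shared feed format can drive its output from any producer’s datastreams, without the two ever being designed together.</em></p>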
<p><strong>And I think it’s absolutely brilliant that it’s designed to work with Arduino and Processing, two lightweight, open-source frameworks that hobbyists and researchers (and even one or two more serious developers) around the world are already using to build things. (Arduino’s a kit of parts for doing basic physical computing &#8211; using data to drive lights, motors, and other actuators that have effect out here in the world &#8211; while Processing is a very accessible language for doing dynamic and interactive graphics for screen-based media.) Given both its openness and modularity, and its willingness to build on top of the very popular frameworks that already exist, I’m very excited to see what people make of and with Pachube.</strong></p>
<p><strong>I have to be honest and admit that personally, I couldn’t really care less about the environmental angle, for reasons that I went into at embarrassing length above. What I’m engaged by in Usman’s work is the idea that Pachube is helping to create an open platform for people to share data more readily. And while, no, he hasn’t explicitly asked me to take any particular stake in things, I’m always happy to lend a hand in whatever way would be most useful. I think it’s a project worth supporting.</strong></p>
<p><strong>As to how Pachube enables some of the ideas in</strong><em><strong> The City Is Here</strong></em><strong>, the answer has to do with the book’s call for every “public object” &#8211; every lamppost, bus shelter, commercial façade, and so forth &#8211; to support an open API. Something’s got to string all those objects together, present them to people as resources to be taken up and used, and Usman’s offered us a critical first step in that direction.</strong><em><strong><br />
</strong></em><br />
<strong>TS:</strong> Usman suggested it might be interesting to ask you about “the tension between ‘could’ and ‘should.’”</p>
<p><strong>Usman Haque: </strong>There are a whole bunch of things that we “can” do, technologically speaking; how do we decide what we “should” do, as we find ourselves in an age where we can build almost anything we can imagine…? Particularly with reference to the technology/privacy/security triumvirate. E.g., leaving aside that the majority of the world is *not* in the technology “paradise” that we’re in here in the west, only a small fraction of people are currently producing the technology that the rest of us use. One aim is to get people more engaged in the productive process, but, in a sense, that will also mean the whole wide ecosystem of technology will be even bigger, both “good” stuff and “bad” (that qualification firmly placed on how it’s used), as opposed to now, when we can focus on quite specific things that government &amp; industry are doing and saying “that shouldn’t be happening….” Part of this relates to something Adam said on his blog in the comments (see <a href="http://speedbird.wordpress.com/2007/12/02/urban-computing-pamphlet-is-go/" target="_blank">here</a>).</p>
<p><strong>AG: I think the first part of answering that question has to involve figuring out who "we" are in any given situation. A "we" composed of seven Helsinki-based Linux developers would most likely arrive at very different answers than the United States Air Force Materiel Command or Samsung's board of directors, right? So clearly, a first challenge is getting to some kind of pragmatically useful alignment of those local and occasionally even painfully parochial perspectives with what's best for the Big We. And this challenge is only going to become more vexing as the ability to imagine, design, build and deploy informatic componentry gets more and more widely distributed. In this respect the spread of simple, modular, low-barrier-to-entry tools only makes things worse!</strong></p>
<p><strong>The primary issue that I can see here is that the inherent clock speed of technical development is so very much faster than that of any meaningful deliberative process "we" might bring to bear on it. A concomitant concern is that the sources of technical innovation and production are now so widely distributed that you can be reasonably certain that somebody, somewhere will implement any given technically feasible idea, no matter how offensive, poorly thought-out, socially disruptive or frankly stupid. A public toilet you have to SMS to unlock and use? A "Friend Finder" visualization with high locational precision and no privacy features whatsoever? A first-person rape-simulation "game"? A clunky brown iPod knockoff? Somebody thought each one of these things was worth the time, expense and effort to actually go about making it. They exist.</strong></p>
<p><strong>But I'm pretty old-fashioned in some ways, in that I think the good old Habermasian idea of the public sphere still has some life left in it. And I think it should be self-evident by now that there's no necessary contradiction between even the newest (cough) "social media" and the formation of such a sphere. So you've provided a forum, and in it I get to express my belief that these things are stupid and pointless and probably should not have been built. And if somebody gets all het up about that, they can argue right back at me in comments. And eventually one or another of these positions begins to tell, in terms of regulation, legislation, and other tools of the juridical order, in terms of protest campaigns or organized boycotts or litigation… in terms of nonexistent sales!</strong></p>
<p><strong>There's nothing new in any of this, of course, though indubitably some of the dynamics are amplified or accelerated by e-mail, Twitter and YouTube. My main contention is that informatic technology now has such deeply pervasive implications, and for things like presentation of self that previous waves of technical development barely touched, that "we" as societies need to be very much more conscious of the consequences before committing to any one course of action.</strong></p>
<p><strong>I should also point out that I do not, at all, believe that we're "in an age where we can build almost anything we can imagine," though I might buy "…<em>two or three of</em> almost anything we can imagine." On the contrary, as I implied above, I think the global constraints on our ability to operate freely are already becoming quite evident, and will continue to grow teeth over the next few decades.</strong></p>
<p><strong>TS: </strong>Also, Usman added &#8230;</p>
<p><strong>Usman Haque:</strong> …where Adam said: <em>in this regard, I very much *do* have a problem with "just showing up"</em> &#8212; something I feel as well. But I always wonder: what happens when one appears to be mandating participation…?</p>
<p><strong>AG: Look, I happen to have a strong &#8211; maybe some would say obnoxious or hyperactive or overdeveloped &#8211; sense of personal responsibility and accountability. I think one is basically committed to some measure of responsibility for the commonweal simply by surviving to the age of majority. The choice of how, particularly, to discharge that responsibility can only be yours and yours alone, but it can't be ducked or gotten around without severe and entirely predictable consequences. So to Usman I'd respectfully suggest that I'm not the one mandating participation. Life is.</strong></p>
<p><strong>TS:</strong> It seems we have grown accustomed to striking a Faustian bargain on the internet today &#8211; in order to share and distribute parts of our identity we are expected to give up key information to one site to store and disperse our data. I took part in <a href="http://www.ugotrade.com/2007/12/21/a-conversation-with-eben-moglen-on-second-life/" target="_blank">a discussion with David Levine, IBM and Eben Moglen on privacy</a> last year. And Eben Moglen gave a succinct description of the elements of privacy and how they have been treated in the American Constitution that is, I think, relevant to unpacking some of the challenges of ubiquitous computing. Here are some extracts from that conversation, where Eben notes:</p>
<blockquote><p><em>there are three elements that are mixed up in privacy and we tend not to notice which one we are talking about at any given moment.</em></p>
<p><em>There is secrecy &#8211; that is, the data should not be readable by or understandable by anybody except me or people I designate. There is anonymity, which is that the data can be seen by anybody, but whom it is about should be knowable only by me or people that I designate. And there is autonomy, which isn't about either secrecy or anonymity but which is about my right to live under circumstances which reinforce my sense that I am in control of my own fate. And this form of privacy is actually the one we talk about in the constitutional structure when we talk about the right to get an abortion or use birth control.</em></p></blockquote>
<p>â€œAnonymityâ€ is a condition that is a deep structuring characteristic of the internet as you, Lessig and others have commented on.Â  And frequently we are promised (questionably) â€œsecrecyâ€ or anonymity as privacy protection by services handling our data on the internet.Â  But Eben (one of the USâ€™s great constitutional lawyers) points out that â€œautonomyâ€ is a key form of privacy in theÂ  US constitutional structure that is often compromised in situations where our digital selves may constrain our non-digital selves.</p>
<blockquote><p><em>The real issue here is about the forcing of choices on us… digital aspects of identity can quickly acquire an inflexibility that constrains our non-digital selves.</em></p>
<p><em>I see again and again the ways in which people now find themselves unable to make certain life choices easily because their digital self has acquired an inflexibility that constrains their non-digital self.</em></p></blockquote>
<p>As we go beyond the end-to-end internet and lose the structuring characteristic that has privileged anonymity: how do you see these three elements of privacy &#8211; anonymity, secrecy and, most importantly, autonomy &#8211; being worked out in a networked world beyond the end-to-end internet?</p>
<p>Are there any new structuring characteristics that could privilege autonomy (which Eben indicates is linked to having a flexible identity)?</p>
<p><strong>AG: If we accept for the moment a definition of autonomy as a feeling of being master of one's own fate, then absolutely yes. One thing I talk about a good deal is using ambient situational awareness to lower decision costs &#8211; that is, to lower the information costs associated with arriving at a choice presented to you, and at the same time mitigate the opportunity costs of having committed yourself to a course of action. When given some kind of real-time overview of all of the options available to you in a given time, place and context &#8211; and especially if that comes wrapped up in some kind of visualization that makes anomaly detection and edge-case analysis instantaneous gestalts, to be grasped in a single glance &#8211; your personal autonomy is tremendously enhanced. <em>Tremendously</em> enhanced.</strong></p>
<p><strong>But as to how this local autonomy could be deployed in Moglen's more general terms, I don't know, and I'm not sure anyone does. Because he's absolutely right: Bernard Stiegler reminds us that the network constitutes a <em>global mnemotechnics</em>, a persistent memory store for planet Earth, and yet we've structured our systems of jurisprudence and our life practices and even our psyches around the idea that information about us eventually expires and leaves the world. Its failure to do so in the context of Facebook and Flickr and Twitter is clearly one of the ways in which the elaboration of our digital selves constrains our real-world behavior. Let just one picture of you grabbing a cardboard cutout's breast or taking a bong hit leak onto the network, and see how the career options available to you shift in response.</strong></p>
<p><strong>This is what's behind Anne Galloway's calls for a "forgetting machine." An everyware that did that &#8211; that massively spoofed our traces in the world, that threw up enormous clouds of winnow and chaff to give us plausible deniability about our whereabouts and so on &#8211; might give us a fighting chance.</strong></p>
<p><strong>TS: </strong>The concept of autonomy is signaled clearly in the title you have chosen for your next book, <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank"><em>The City Is Here For You To Use</em></a>, and it is a theme of all your writing! While you talk about many of the possible constraints on presentation of self and potential threats to a flexible identity that ubicomp poses, your next book signals optimism. What are your key grounds for optimism?</p>
<p><strong>AG: It's not optimism so much as hope. Whether it's well-founded or not is not for me to decide. I guess I just trust people to make reasonably good choices, when they're both aware of the stakes and have been presented with sound, accurate decision-support material.</strong></p>
<p><strong>Putting a fine point on it: I believe that most people don't actually want to be dicks. We may have differing conceptions of the good, our choices may impinge on one another's autonomy. But I think most of us, if confronted with the humanity of the Other and offered the ability to do so, would want to find some arrangement that lets everyone find some satisfaction in the world. And in its ability to assist us in signalling our needs and desires, in its potential to mediate the mutual fulfillment of same, in its promise to reduce the fear people face when confronted with the immediate necessity to make a decision on radically imperfect information, a properly-designed networked informatics could underwrite the most transformative expansions of people's ability to determine the circumstances of their own lives.</strong></p>
<p><strong>Now that's epochal. If that isn't cause for hope, then I don't know what is.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/obamannook1.jpg"><img class="alignnone size-full wp-image-3076" title="obamannook1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/obamannook1.jpg" alt="obamannook1" width="375" height="500" /></a></strong></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/3246420459/" target="_blank">Newson Obamanook</a> &#8211; photo by Adam Greenfield, &#8220;The fact that it was one of the happiest days of my adult life may have colored my appreciation of this space. A bit, anyway.&#8221;</em></p>
<p><strong>TS:</strong> In your writing you seem to imply that we will not find answers to our new relationship with Everyware by transposing the internet onto things for convenience's sake, but rather that, like the bike messengers, we must explore the rich and complex terrain of the city that is ours to use in a give-and-take relationship. Through our own exertions we find how "anything reasonably smooth and approximately horizontal can become a thoroughfare," rather than be served up the city as something for us to consume.</p>
<p>You seem to be suggesting our city becomes ours to use because of the way we use it in our personal journeys &#8211; like "the messenger subconsciously maps the contours of an economic geography &#8211; known sources and sinks of courier assignments, or 'tags' &#8211; and a threat landscape, this latter comprised of blind corners, cable-car and metro tracks, and traffic lanes."</p>
<p>But bike messengers are the lone rangers of our big cities. Others surf the city in tribes that ride the roiling tides of highly networked information together. How are the "natural" gestures of these tribes &#8211; e.g. day traders, yoked to the tracings of a hive mind &#8211; part of the city that is here for us to use? I thought the comment <a href="http://twitter.com/ginsudo" target="_blank">@ginsudo</a> made shortly after joining Twitter and setting up TweetDeck particularly poignant:</p>
<blockquote><p><em>"watching Tweetdeck is like watching stock market of your personality ebb and flow. needs analytics to maximize inherent self-involvement."</em></p></blockquote>
<p>But for many of us our work has more in common with the day trader than the bike messenger, and we are pretty hooked on the ever-growing possibilities for "contact" and identity sharing/construction that social media has produced (with all the "Here Comes Everybody" benefits and risks, per C. Shirky). Early theorizing of a "calm," "invisible" ubicomp seems out of sync with the excitable, active, engaged, contact-driven "users" that are watching the stock market of their personality (or personal brand) ebb and flow.</p>
<p>How will these excitable/exciting processes of contact and identity sharing, which have captured a pretty large segment of the popular imagination (not confined to the West &#8211; services like <a id="f9mb" title="Gupshup" href="http://www.smsgupshup.com/">Gupshup</a> do much of the same curating, linking and distributing of identity over SMS that web-based social media does), be &#8211; or not be &#8211; part of <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank">The City Is Here For You To Use</a>?</p>
<p><strong>AG: Let's remember that ubicomp itself, as a discipline, has largely moved on from the Weiserian discourse of "calm technology"; Yvonne Rogers, for example, now speaks of "proactive systems for proactive people." You can look at this as a necessary accommodation with the reality principle, which it is, or as kind of a shame &#8211; which it also happens to be, at least in my opinion. Either way, though, I don't think anybody can credibly argue any longer that just because informatic systems pervade our lives, designers will be compelled to craft encalming interfaces to them. That notion of Mark Weiser's was never particularly convincing, and as far as I'm concerned it's been thoroughly refuted by the unfolding actuality of post-PC informatics.</strong></p>
<p><strong>All the available evidence, on the contrary, supports the idea that we will have to actively fight for moments of calm and reflection, as individuals and as collectivities. And not only that, as it happens, but for spaces in which we're able to engage with the Other on neutral turf, as it were, since the logic of "social media" seems to be producing <em>Big Sort</em>-like effects and echo chambers. We already "maximize inherent self-involvement," analytics or no, and the result is that the tools allowing us to become involved with anything but the self, or selves that strongly resemble it, are atrophying.</strong></p>
<p><strong>So when people complain about K-Mart and Starbucks and American Eagle Outfitters coming to Manhattan, and how it means the suburbanization of the city, I have to laugh. Because the real suburbanization is the smoothening-out of our social interaction until it only encompasses the congenial. A gated community where everyone looks and acts the same? <em>That's</em> the suburbs, wherever and however it instantiates, and I don't care how precious and edgy your tastes may be. Richard Sennett argued that what makes urbanity is precisely the quality of necessary, daily, cheek-by-jowl confrontation with a panoply of the different, and as far as I can tell he's spot on.</strong></p>
<p><strong>We have to devise platforms that accommodate and yet buffer that confrontation. We have to create the safe(r) spaces that allow us to negotiate that difference. The alternative to doing so is creating a world of ten million autistic, utterly atomic and mutually incomprehensible tribelets, each reinforced in the illusion of its own impeccable correctness: duller than dull, except at the flashpoints between. And those become murderous. Nope. Unacceptable outcome.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/uncannyvalleys.jpg"><img class="alignnone size-full wp-image-3075" title="uncannyvalleys" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/uncannyvalleys.jpg" alt="uncannyvalleys" width="500" height="369" /></a></strong><br />
<em><a href="http://flickr.com/photos/studies_and_observations/3119708407/" target="_blank">Uncanny Valleys</a> &#8211; Adam comments, &#8220;Our apartment in NYC as rendered in Google Earth, with realtime traffic, weather, daylight and shadow as well as geodetic, street grid and service overlays. Camera view is South; that&#8217;s First Avenue just left of center-screen.&#8221;</em></p>
<p><strong>TS:</strong> Smartphones are now drawing everyware data into the system, and the net is reaching into who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.</p>
<p><a id="u:ys" title="Nathan Freitas" href="http://openideals.com/">Nathan Freitas</a> says Android "seems to be the platform most likely to socialize the idea that sensor data could be a piece of every application." (Android has APIs for a wide range of sensor data.)</p>
<p>What, in your view, will be the platform most likely &#8211; Android or another &#8211; to socialize the idea that sensor data could be a piece of every application?</p>
<p><strong>AG: An open platform. A platform with lots of hooks and ways to plug things into it, a strong developer community, a shallow learning curve and/or an easy-to-use, high-level development environment.</strong></p>
<p><strong>I don't have a dog in this race, mind you. I couldn't care less who gets there first.</strong></p>
<p><strong>TS: </strong>New location-based services, e.g. <a id="kvue" title="Xtify" href="http://xtify.com/featured">Xtify</a> and <a id="fajp" title="ViaPlace" href="http://www.viaplace.com/">ViaPlace</a>, are offering us ways to share location data across lots of different applications (e.g. Xtify and a dating application like <a id="yixz" title="MeetMoi" href="http://www.meetmoi.com/welcome">MeetMoi</a>). In return for services that allow us to share information, we must give key information up to one site to store and disperse (although there are many differences in approach to our data, from the Twitter stance &#8211; "show but don't own" &#8211; to Facebook's &#8211; "in order to show we must have rights to it"). But the basic model of Twitter &#8211; to provide a white-noise platform for people to build services on top of &#8211; seems to be being transposed to location-based services. Obvious questions arise, like what happens to our data in a startup like MeetMoi if they go belly up? Apparently in the dot-com bust, data was the first thing to go on the auction block in bankruptcy cases.</p>
<p>Also, I suppose it is hardly surprising (if disappointing to me) that some of the early location-based services are trying to get mindshare by picking up on the glue celebrities give to mass culture. At the last New York Tech Meetup, <a href="http://m.twitter.com/omgicu" target="_blank">OMGICU</a> demoed a rather terrifying new pre-launch location-based "participatory celebrity gossip application" which seems to combine all the worst features of social media with celebrity stalking, plus a narrative to change the notion of celebrity itself by "turning D-listers into A-listers."</p>
<p>Hopefully location-based applications will not get stuck on "stalker, stalker, stalker" apps like OMGICU.</p>
<p>David Oliver, <a id="qgz3" title="Oliver Coady" href="http://olivercoady.com/">Oliver Coady</a>, gave me a good question: &#8220;How does timeliness and location-independence change our ideas of social media?&#8221;</p>
<p>And how can we design new architectures that can reinforce the sense that I am in control of my own fate?</p>
<p><strong>AG: But we've already come so far in terms of turning D-listers into A-listers! On a daily basis, I'm exposed to almost as many cues insisting I attend to nonentities and dullards like Robert Scoble as those insisting I attend to nonentities like Madonna or Thomas Friedman. It's gotten ridiculous.</strong></p>
<p><strong>Now, how does timeliness and location change our ideas of social media? It makes them dangerous!</strong></p>
<p><strong>Look, even a proud Z-lister like myself &#8211; I'm a public person only in the most debased and degraded meaning of that word &#8211; I've had experiences that shook me up, like having someone approach me while I was quietly hanging out in the back of St. Mark's Books, and wanting to strike up a conversation based on some talk they'd seen me give a year or so previously. Now part of learning to deal with this kind of thing is shrugging it off, being grateful and flattered that someone thinks you're interesting enough to single out for that kind of attention, or chalking this up to Sennett's observation about the constitution of urbanity. Or doing all three at once.</strong></p>
<p><strong>But let's remember that at the end of the day, a "social network" is nothing but a group of arbitrarily distributed human beings joined by a communications channel, and those people have eyes and ears. The degree to which they recognize some shared interest gives them significance filters. If social capital accrues to those in the network who are able to claim some connection with a "celebrity," no matter how fleeting, then such connections are going to be mobilized, made explicit. And now say the network has been provided with the tools allowing it to plot the appearances of those putative celebrities in space and time, and what do you get? You get a circumstance in which it is very, very difficult to maintain any membrane between the private self and the world, for anyone who's even remotely a public figure, whether they particularly want to be a public figure or not. You get network effects that amplify those locational traces, and further undermine any possibility of anonymity, even anonymity-by-suspension-of-interrogative-awareness (which is a clumsy way of referring to that blasé matter-of-factness around famous people that most big-city folks eventually develop).</strong></p>
<p><strong>Am I letting myself off the hook? Not in the slightest. I passed Terence Stamp on the street not so long ago, and you bet I Twittered it. My only excuse was that I Twittered it to a closed loop of no more than a few dozen people. But then, who knows what those few dozen people will turn around and do with that fact, on the open networks to which they in turn belong? And that, too, is my responsibility.</strong></p>
<p><strong>I'm not sure there's anything to be done about any of this but cultivate our own urbanity, learn to say "so what" when we happen to find ourselves next to Philip Seymour Hoffman in the line at Whole Foods.</strong></p>
<p><strong>TS: </strong>Zittrain, in <a href="http://futureoftheinternet.org/" target="_blank">The Future of the Internet: And How To Stop It</a>, foregrounds "generativity" and generative devices (as opposed to appliances) as the most fortuitous starting point for "tools to bring about social systems to match the power of the technical one."</p>
<p>Are appliances a threat to the city that is here for you to use? How can generativity ensure <em><a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank">The City Is Here For You To Use</a></em>, as Zittrain argues it has ensured, even if imperfectly, that the internet has been here for us to use?</p>
<p><strong>AG: You know, I haven't read the book, I've only heard him give the talk, so it's certainly possible there's a subtlety to the argument that I'm missing. But I'm not sure Jonathan isn't simply wrong about this notion of generativity. Not that the concern is misplaced, but that he's insufficiently trustful of human agency. Is a car "generative," by his definition? Certainly not. And yet look at all the cultural production that goes on around "the car," look at all the assemblages people make with cars, from Beach Boys songs to <a href="http://en.wikipedia.org/wiki/Ghost-riding">ghost riding the whip</a>, from J.G. Ballard novels and <em>Herbie the Love Bug</em> to <em>Tokyo Drift</em>.</strong></p>
<p><strong>Or probably more to his point: look at the Japanese mobile-phone market &#8211; seemingly one of the most locked-down and unpropitious circumstances imaginable for the production of culture, in technical terms and Zittrain's both. And yet fully 50% of the bestselling books in Japan last year were written on mobile phones. Not <em>read</em>, which would already be impressive enough (if "impressive" is indeed the word): <em><a href="http://www.nytimes.com/2008/01/20/world/asia/20japan.html">written</a></em>. What does that imply for his argument?</strong></p>
<p><strong>So, yes, I think there are grounds for concern, and we shouldn't allow technologies and frameworks to appear that unduly limit the scope of human creativity. Code is still law. But I also think people are quite amply able to reach into what would appear to be the least propitious technologies and tell their own stories with same.</strong></p>
<p><strong>TS: </strong>One aspect of Everyware that seems in need of some visionary yoga is how we will relate to pixels anywhere.</p>
<p>In <em><a href="http://www.lulu.com/content/1554599">Urban Computing and its Discontents</a></em> you mention how our technological trajectories often make it seem as if we get fixated on particular scenes in movies, e.g. <em>Minority Report</em>. You point out that so many ambient informatics projects seem simply "to expand the reach of signage and advertising in dense urban spaces… as if we've become transfixed by the scene from <em>Minority Report</em> where heterosexual cop John Anderton is on the run from his colleagues."</p>
<p>Ideas from <em>Minority Report</em> continue to hold sway in designs, as we saw in the recent MIT demo of <a href="http://ambient.media.mit.edu/projects.php?action=details&amp;id=68" target="_blank">SixthSense</a> at TED.</p>
<p>But visions of augmented reality were pretty high-profile in this year's Super Bowl commercials (including a highly anthropomorphic imagining of ubicomp that was a kind of WoW mashup with a Pixar movie).</p>
<p>What recent movies/commercials have produced scenes most likely to be new fixation fodder for ubicomp, and why?</p>
<p><strong>AG: I don't think I'm qualified to answer that, actually. We don't have a TV, so I don't see much in the way of commercials, and most of the films I wind up seeing are the kind that play at Anthology Film Archives. What I can say is that science fiction is currently suffering in toto from an inability or disinclination to posit future scenarios that are any weirder or more visionary than those emerging from other sectors of the culture. And that would be fine, except sf has traditionally been the place where we wrestled with the imaginary.</strong></p>
<p><strong>We need that set of tools, badly. If for no other reason than something I glean from personal experience: essentially my entire professional career has simply been the leveraging of ideas and concepts I originally wrestled with in the encounter with William Gibson and Bruce Sterling when I was 16. Today&#8217;s visionary sf means tomorrow&#8217;s halfway-competent generalist.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/nurrikim.jpg"><img class="alignnone size-full wp-image-3030" title="nurrikim" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/nurrikim.jpg" alt="nurrikim" width="375" height="500" /> </a></strong></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/531862201/" target="_blank">Nurri Kim in the waiting zone</a> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS: </strong>My AR friend <a href="http://curiousraven.squarespace.com/about-me/">Robert Rice</a>, who is <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">working on a markerless AR platform</a>, notes that data visualization is one of the critical elements of AR in terms of "make or break." Robert says, "even with the ultimate in ubiquitous data from everything, without good data vis it will all be useless."</p>
<p>Also something Cory Doctorow said to me last year has really stuck in my mind. When I asked him what happens when Cyberspace everts, he talked about a reverse surveillance society:</p>
<div style="margin-left: 40px;"><em>"Surveillance is all about when people in authority know a lot about you. Instrumentation is when you know a lot about the world."</em></div>
<blockquote><p><em>Cory: Well, this is like Spook Country, the new Gibson novel &#8211; what happens when cyberspace everts &#8211; hmmm? I'm not sure I have anything very pithy to say on that EXCEPT…</em><br />
<em>Apart from all the traditional kind of overlay-reality stuff, if there is one thing I am actually interested in seeing migrate from a virtual world to the real world, it's instrumentation.</em><br />
<em>I think a lot of what is characteristic of very successful internet-based businesses is that they are extremely finely instrumented &#8211; so, like, Amazon knows in aggregate, on a second-by-second basis, how their site is being used by people, and they can twiddle the dials in real time.</em></p>
<p><em>As users of the world we have very little access to that kind of instrumentation. We don't even know how the tube is running. The tube knows how the tube is running and we kind of don't. I would be really interested in seeing that. You've seen <a href="http://joi.ito.com/">Joi Ito's</a> WoW interface, right? Have you seen it…</em></p></blockquote>
<p>Joi Itoâ€™s WoW interface seems a long way from the calm, invisible imaginings for ubicomp by early ubicomp visionaries?</p>
<p><strong>AG: Well, he&#8217;s got a particular kind of neural wiring. And there&#8217;s not a thing that&#8217;s wrong with that, except that I&#8217;d never, ever want to assert that what&#8217;s appropriate for Joi Ito necessarily is or should be understood to be appropriate for anybody else. The point of calling for open systems and frameworks is to allow us maximum scope of diversity in the ways we choose to interface with the world&#8217;s richness and complexity.</strong><em><strong><br />
</strong></em> <strong><br />
TS: </strong>What new imaginings/possibilities do you see when pixels anywhere are linked to everyware?<strong><br />
</strong><br />
<strong>AG: Product placement. Commercial insertions and injections, mostly.</strong></p>
<p><strong>Beyond that: one of the places where Mark Weiser&#8217;s logic breaks down is in thinking that the platforms we use now disappear from the world just because ubiquitous computing&#8217;s arrived. We&#8217;ve still got radio, for example &#8211; OK, now it&#8217;s satellite radio and streaming Internet feeds, but the interaction metaphor isn&#8217;t any different. By the same token, we&#8217;re still going to be using reasonably conventional-looking laptops and desktop keyboard/display combos for a while yet. The form factor is pretty well optimized for the delivery of a certain class of services, it&#8217;s a convenient and well-assimilated interaction vocabulary, none of that&#8217;s going away just yet. And the same goes for billboards and &#8220;TV&#8221; screens.</strong></p>
<p><strong>But all of those things become entirely different propositions in an everyware world: more open, more modular, ever more conceived of as network resources with particular input and output affordances. We already see some signs of this with Microsoft&#8217;s recent &#8220;Social Desktop&#8221; prototype &#8211; which, mind you, is a very bad idea as it currently stands, especially as implemented on something with the kind of security record that Windows enjoys &#8211; and we&#8217;ll be seeing many more.</strong></p>
<p><strong>If every display in the world has an IP address and a self-descriptor indicating what kind of protocols it&#8217;s capable of handling, then you begin to get into some really interesting and thorny territory. The first things to go away, off the top of my head, are screens for a certain class of mobile device &#8211; why power a screen off your battery when you can push the data to a nearby display that&#8217;s much bigger, much brighter, much more social? &#8211; and conventional projectors.</strong></p>
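<p><em>[A toy sketch of the idea: assuming a purely hypothetical discovery scheme in which each networked display advertises a small self-descriptor &#8211; every field name below is invented for illustration &#8211; a battery-powered device might choose where to push its output like this:]</em></p>

```python
# Hypothetical sketch only: no real display-discovery protocol is
# implied, and all descriptor fields here are invented for illustration.

DISPLAYS = [
    {"addr": "192.0.2.10", "protocols": ["h264-stream"],
     "width": 1920, "height": 1080, "distance_m": 4.0},
    {"addr": "192.0.2.11", "protocols": ["mjpeg"],
     "width": 640, "height": 480, "distance_m": 1.5},
]

def pick_display(displays, protocol, max_distance_m=10.0):
    """Return the largest nearby display that speaks `protocol`."""
    candidates = [d for d in displays
                  if protocol in d["protocols"]
                  and d["distance_m"] <= max_distance_m]
    if not candidates:
        return None
    # Prefer the biggest usable surface -- "much bigger, much brighter."
    return max(candidates, key=lambda d: d["width"] * d["height"])

best = pick_display(DISPLAYS, "h264-stream")
print(best["addr"] if best else "no suitable display nearby")
```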
<p><strong>Then we get into some very interesting issues around large, public interactive displays &#8211; who &#8220;drives&#8221; the display, and so forth. But here again, we&#8217;ll have to fight to keep these things sane. It&#8217;s past time for a public debate around these issues, because they&#8217;re unquestionably going to condition the everyday experience of walking down the street in most of our cities. And that&#8217;s difficult to do when times are hard and people have more pressing concerns on their mind.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/citywarecrash.jpg"><img class="alignnone size-full wp-image-3045" title="citywarecrash" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/citywarecrash.jpg" alt="citywarecrash" width="500" height="375" /></a><br />
</strong></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/2786991056/" target="_blank">Citywarecrash</a> &#8211; photo by Adam Greenfield, &#8220;An occupational hazard for urban screens.&#8221;</em></p>
<p><strong>TS: </strong>I know in <em>Everyware</em> you mentioned that architects play an important visionary role in imagining ubicomp, and I know you work closely with your wife, artist <a href="http://www.nurri.com/">Nurri Kim</a>. Robert Rice asked me the following question &#8211; which I will in turn ask you: &#8220;In terms of augmented reality, do you think virtual worlds and virtual reality experts / leaders are good pioneers for thought and guidance on AR? Or should we look for new leaders, and where are new leaders emerging? Is the tech similar enough for the old crowd to be useful, or is it different enough to be a disadvantage coming from the old models?<strong>&#8221;<br />
</strong><br />
<strong>AG: I should make it clear that I have absolutely no interest in virtual worlds or virtual reality. The so-called virtual worlds Iâ€™ve experienced seem sad and really rather tatty &#8211; eversions of the most predictable adolescent fantasies of unlimited power, reinscriptions of all the usual politics &#8211; and completely lacking in just about everything that makes life resonant, meaningful and awe-inspiring. And anyway, to paraphrase J.G. Ballard, ordinary, everyday life is now far more vividly and fantastically weird than anything youâ€™ll see in Second Life. I mean, Garry Kasparov was heckled by a radio-control dildocopter, Joe the Plumberâ€™s off to Gaza as a war correspondent, a sea of dust-covered BMWs waits in the long-term parking lot at Dubai International for owners who are never, ever coming back.</strong></p>
<p><strong>Look to virtual worlds for insight into the hard work of negotiating the actual, with its physics, its entropy, its suffering, with all its constraints? Oh my goodness gracious, no.<br />
And look to leaders? Never.</strong><strong> Leaders are for followers, and who wants to be that? I don&#8217;t mean you can&#8217;t take inspiration and insight from the work of others &#8211; not at all &#8211; but use your own imagination, take some personal risk, do your own damn work.</strong></p>
<p><strong>Now, having said that. This opposition of virtual and physical worlds strikes me as increasingly a false one, as it does many people. The hard-and-fast distinction between &#8220;the real world&#8221; and virtual environments makes less and less sense, as righteously satisfying as making it can sometimes seem. There may be attributes of this physical environment that are impossible to see or make use of without access to the networked overlay, and those attributes may in time come to constitute the primary wellsprings of a given place&#8217;s meaning. And if you&#8217;re offering me some insight that I think could be of utility in resolving the challenge of making this overlay accessible to all, equally, I&#8217;ll gladly accept it, no matter what domain or disciplinary background you claim</strong><strong> as your own. </strong></p>
<p><strong>Am I aware of any such insight coming out of virtual worlds? No. As Bryan Boyer notes, &#8220;If you want to start talking about some serious cross-disciplinary pollination then you better take both sides of that disciplinary divide seriously. When your </strong><em><strong>ubi- </strong></em><strong>runs into my building with its boring HVAC, mundane load paths, typical finished floors, plain old foundations, etc., the transformative powers of </strong><em><strong>comp </strong></em><strong>are bracketed pretty seriously by the realities of the physical world.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/thecloudgate.jpg"><img class="alignnone size-full wp-image-3064" title="thecloudgate" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/thecloudgate.jpg" alt="thecloudgate" width="500" height="375" /></a><br />
<a href="http://flickr.com/photos/studies_and_observations/1904838102/" target="_blank"><em>The Cloud Gate has landed</em></a><em> &#8211; photo by Adam Greenfield, &#8220;Tell me this doesn&#8217;t look *just* like the descriptions of &#8220;stasis fields&#8221; in 70s SF. In fact, the picture looks practically CGId to me.&#8221;</em></p>
<p><strong>TS:</strong> Some people thought the whole world would have been plastered with RFID by now. But before that has happened, markerless AR seems to be in our sights.</p>
<p>If I understand it correctly, marker versus markerless AR has quite different implications for how the cyberspace of ubicomp evolves? I asked Robert Rice (he is developing a markerless AR platform) to explain some of the differences. He said:</p>
<div style="margin-left: 40px;"><em>Markers are discrete physical objects; at worst, they are passive images that are linked to some sort of static data in a database somewhere (like a 3D object). If you destroy them, that&#8217;s it. With markerless stuff, everything is persistent, dynamic, already linked in cyberspace. Marker-based stuff requires a secondary infrastructure of hardware for telecommunications.</em></div>
<p><em><br />
</em>Robert also pointed out to me that markerless AR may prove even more problematic for privacy:</p>
<div style="margin-left: 40px;"><em>Markers are easy to see, so you know where they are. RFIDs can&#8217;t really be seen, but they can be detected. With markerless AR, there is nothing obvious to the naked eye; you don&#8217;t know if someone has active AR going on or not, so you could be tracked and not know it. That&#8217;s not much more than today, with CCTVs all over the place &#8211; so it is the same [a surveillance issue] as marker-based AR, but more subtle or unobvious.</em></div>
<p>Do you have any thoughts about the different roles that markerless versus marker technologies will play in AR and ubicomp?</p>
<p><strong>AG: I need to admit that I&#8217;ve never until this moment heard the phrase &#8220;markerless AR,&#8221; although I&#8217;d think it&#8217;s more or less self-explanatory to anyone who&#8217;s been following this stuff. Let me make the distinction explicit, shall I, for anyone who hasn&#8217;t been? And you or Robert can correct me if I&#8217;ve gotten it wrong.</strong></p>
<p><strong>Augmented reality means that I have some mediating artifact that provides me with a visual overlay on the world</strong><strong>. This could be a phone, it could be a windshield, it could be a pair of glasses or contact lenses, doesn&#8217;t matter. And you&#8217;re going to use that overlay to superimpose some order of information about the world and the objects in it onto the things that enter my field of vision &#8211; onto what I see. So far, so good: that&#8217;s AR 101.</strong></p>
<p><strong>Now where does that information come from?</strong></p>
<p><strong>What you&#8217;re calling marker-based AR implies that there&#8217;s some reasonably strong relationship between the information superimposed over a given object, and the object itself. That object is an onto, a spime; it&#8217;s been provided with a passive RFID tag or an active transmitter. And it&#8217;s radiating information about itself that I&#8217;m grabbing, perhaps cross-referencing against other sources of information, and superimposing over the field of vision. Fine and dandy.</strong></p>
<p><strong>But there&#8217;s another way of achieving the same end, right? Instead of looking at a suit jacket on a rack and having its onboard tag tell you directly that it&#8217;s a Helmut Lang, style number such-and-such from men&#8217;s Spring/Summer collection 2011, Size 42 Regular in Color Gunmetal, produced at Joint Venture Factory #4 in Cholon City, Vietnam, and packed for shipment on September 3, 2010, you&#8217;re going to run some kind of pattern-matching query on it. And without the necessity of that object being tagged physically in any way, you&#8217;re going to have access to information about it. But this set of information isn&#8217;t, necessarily, what the object itself, or its creators or merchandisers, want you to know about it; it could be derived from online discussion fora or review sites, or blog posts, or whatever. All there needs to be is a lookup table, essentially, that tells you where to find information about any object in the field of vision whose identity can be established.</strong></p>
<p><strong>Do I have that right? And if I do, then as I understand it, the distinction is primarily a pragmatic one: it&#8217;s just easier to get to an augmented world, by far, if we don&#8217;t actually have to go to all the trouble of tagging everything in the world with its own dedicated RF transponder. Easier, and cheaper, and quicker, and more environmentally sound besides, because the relevant traffic is in bits, not atoms.</strong></p>
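<p><em>[The lookup table Adam describes is easy to caricature in code. In this toy sketch the recognizer is a stand-in black box, and the labels and URLs are invented for illustration:]</em></p>

```python
# Hypothetical sketch of markerless AR's "lookup table": a recognized
# object class maps to third-party information sources, rather than to
# data radiated by the object itself. Labels and URLs are invented.

LOOKUP = {
    "caixa_forum_madrid": [
        "wiki: https://example.com/wiki/Caixa_Forum",
    ],
    "helmut_lang_jacket_ss2011": [
        "reviews: https://example.com/reviews/helmut-lang",
        "forum: https://example.com/forum/menswear",
    ],
}

def recognize(frame):
    """Stand-in for a pattern-matching recognizer (a black box here)."""
    return "caixa_forum_madrid"  # pretend the classifier fired

def annotations_for(frame):
    """Return overlay annotations for whatever the recognizer identified."""
    label = recognize(frame)
    return LOOKUP.get(label, [])

print(annotations_for(b"camera frame"))
```

<p><em>[Note that the keys here are classes of objects, not instances &#8211; exactly the limitation Adam goes on to point out.]</em></p>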
<p><strong>Unless I&#8217;ve missed something, you don&#8217;t, then, get the distinction between classes of objects and instances of same. Sometimes, when there&#8217;s a 1:1 correlation between the two, that&#8217;s not going to matter: I&#8217;m walking down the street in Madrid, and my glasses or whatever can easily recognize that this building is the Caixa Forum. There&#8217;s only one of it, and I can get a positive ID via pattern recognition. Except for some edge cases &#8211; twins and lookalikes, mostly &#8211; the same thing is generally true of people.</strong></p>
<p><strong>But other times it will matter. Is <em>this specific watch</em> a real, $10,000 Panerai or a $50 Kowloon fakery? How does <em>this</em> black 1998 Honda Civic over here differ from this other one in terms of its use and maintenance history? Does <em>this</em> O-ring gasket need to be replaced? I don&#8217;t see how you extract data from specific instances of things without the necessary sensor instrumentation, transmitter, etc., being coextensive with the object in question or very closely colocated with it over time &#8211; in the terminology you&#8217;re using, a &#8220;marker.&#8221;</strong></p>
<p><strong>So using these terms, I&#8217;d say that &#8220;markerless&#8221; AR comes first, is relatively easy to deploy, and generates not-insignificant value. But &#8211; again, unless I&#8217;m missing something &#8211; there are some things that it won&#8217;t ever be able to do, and for those things you need some provision for self-identification and self-location.</strong></p>
<p><strong>Ultimately I think it&#8217;s a distinction without a difference, from the user&#8217;s point of view. People will care much more about the source of whatever information shows up on their overlay than the precise technical means used to get it there.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smileuroncctv.jpg"><img class="alignnone size-full wp-image-3042" title="smileuroncctv" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smileuroncctv.jpg" alt="smileuroncctv" width="394" height="500" /></a><br />
</strong></p>
<p><a href="http://flickr.com/photos/studies_and_observations/3274544108/" target="_blank"><em>The surrender to cynicism</em></a><em> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS:</strong> Much early thinking around ubicomp seems to have come from visionary architects and engineers, but recently I was at the <a href="http://www.toccon.com/toc2009" target="_blank">O&#8217;Reilly Tools of Change for Publishing Conference</a> (publishing in the Digital Age) and I met several book futurists. It struck me that ubicomp, seen from the perspective of the book, raises some interesting questions about how particular material cultures will shape, and be shaped by, ubicomp differently.</p>
<p>I noted that Google seemed well down the path to the holy grail of &#8220;converting images to original intent XML.&#8221; And <a id="ricl" title="Peter Brantley" href="http://radar.oreilly.com/peter/">Peter Brantley</a> talked about machine-parsed books.</p>
<p>At TOC there were many suggestions about how books might manifest as everyware. (Though it seemed that many people felt books had a special relationship to time and history, and would not soon vanish as one of the great metaphors of calm and solitary enjoyment in our culture.) Books as everyware will, it seems, include, amongst other things:</p>
<p>books that read books</p>
<p>books that read context</p>
<p>context that reads books</p>
<p>books that read me</p>
<p>books linked to mobility &#8211; timeliness and location independence</p>
<p>books that are not books</p>
<p>books becoming babble</p>
<p>books bubbling up from the babble</p>
<p>There is an Institute for the Future of the Book. Will all former material cultures require their own institutes of the future to guide their cultures into everyware? Do you think books&#8217; transition into everyware is especially significant, and why?</p>
<p><strong>AG: But all objects have a relationship to time and history, no?</strong></p>
<p><strong>TS: </strong>Yes! What I meant to convey really was the idea, expressed by many people at TOC, that books have a privileged relationship to knowledge in our culture that is valuable and related to some aspects of their current form, and that books as everyware &#8211; e.g. machine-parsed books and more socially generated forms &#8211; would not replace that entirely.<br />
<em><strong><br />
</strong></em><strong>AG: Gotcha. Well, I certainly agree that books constitute an interesting category unto themselves &#8211; I&#8217;ve held onto my physical books, and in fact still spend a fortune buying new ones, where I stopped buying music on discs a long, long time ago. But I don&#8217;t think this state of affairs can or should obtain forever.</strong></p>
<p><strong>Lately there&#8217;s been a good amount of thought around the notion of </strong><strong>&#8220;<a href="http://theunbook.com/about/">unbooks</a>,&#8221; which I regard as</strong><strong> a container for long-form ideas appropriate to an internetworked age. By building on some of the tropes of software development, mostly having to do with version control, open-endedness and an explicit role for the &#8220;user&#8221; community, unbooks can usefully harness the dynamic and responsive nature of discourse on the Web. At the same time, you preserve the things books are really good at: coherence, authorial voice and intent.</strong></p>
<p><strong>The important part is in acknowledging two points which have usually been understood as contradictory, but which are actually nothing of the sort: firstly, that the expression of ideas in written form has something to learn from the practices that have evolved around the collaborative creation of dynamic, digital documents over the half-century-long history of software; and secondly, that certain ideas require elaboration in the reasonably strongly-bounded form we know as a &#8220;book,&#8221; and cannot meaningfully be shared otherwise. A third point, concomitant to the second, is that despite recent technical advances, screen-based media still cannot, and may not ever fully be able to, deliver the extratextual cues and phenomenological traces that support, inform and extend the meaning of written documents.</strong></p>
<p><strong>The unbook lets you have your cake and eat it too. So, for example, when we publish <em>The City Is Here</em>, one of its manifestations will be a static, physical document &#8211; and hopefully, if we do our jobs well, a very nice one indeed. But even before that, you&#8217;ll be able to download a Creative Commons-licensed PDF of every numbered version of the manuscript, from zero onward. Bottom line: you buy the book if, and only if, you want the object. The ideas are free.</strong><br />
<strong><br />
TS: </strong><em><a id="ed35" title="David Brin" href="http://www.davidbrin.com/tschp1.html"> David Brin</a> sees two futures: 1) the government watches everybody, and 2) everybody watches everybody (the latter he calls &#8220;sousveillance&#8221;). My friend <a id="suag" title="Ben Goertzel" href="http://www.goertzel.org/">Ben Goertzel</a> says &#8220;hooking AI up to a massive datastore fed by ubicomp is the first step toward sousveillance.&#8221; What do you think the role of AI in ubicomp will be? Is it worth thinking about what the first important &#8220;AI meets AR&#8221; app is?</em></p>
<p><strong>AG: I don&#8217;t believe that artificial intelligence as the term is generally understood &#8211; which is to say, a self-aware, general-purpose intelligence of human capacity or greater &#8211; is likely to appear within my lifetime, or for a comfortably long time thereafter.</strong></p>
<p><strong>Having said that, your friend Ben seems to be making the titanic (and enormously difficult to justify) assumption that a self-aware artificial intelligence would share any perspectives, goals, priorities or values whatsoever with the human species, let alone with that fraction of the human species that could use a little help in countering watchfulness from above. &#8220;Hooking [an] AI up to a massive datastore fed by ubicomp&#8221; sounds to me more like the first step toward enslavement&#8230;if not outright digestion.</strong></p>
<p><em><strong>Sousveillance </strong></em><strong>&#8211; the term is Steve Mann&#8217;s, originally &#8211; doesn&#8217;t imply &#8220;everybody watching everybody&#8221; to me, anyway, so much as a consciously political act of turning infrastructures of observation and control back on those specific institutions most used to employing same toward their own prerogatives. Think Rodney King, think Oscar Grant.</strong><em><strong><a href="http://www.davidbrin.com/tschp1.html"><br />
</a></strong></em><br />
<strong>TS: </strong>I have one last question from Usman Haque.</p>
<p><strong>Usman Haque:</strong> insofar as a lot of what adam describes as desirable could be said to constitute pretty radical socio-political change (or perhaps&#8230; &#8220;adjustment&#8221;) i would be really interested to know how his current work @ nokia is or isn&#8217;t able to gel with the themes of his writing. in some senses there&#8217;s quite an undercurrent strongly challenging corporate practices, in other senses it could be seen as gentle nudges. how does adam see it? and how about the nokia behemoth? does he have success nudging nokia towards the kind of world he would like to see (i imagine the answer is &#8216;yes&#8217; otherwise he wouldn&#8217;t be doing it&#8230;) but i&#8217;d love to know more about the limits/challenges.</p>
<p><strong>AG: I am told that Henry Kissinger, on his first trip to China in 1971, asked Zhou Enlai whether he thought the French Revolution had or had not advanced the cause of human freedom.<br />
Zhou thought for a moment, pursed his lips, and replied, &#8220;It is too soon to tell.&#8221;</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/feed/</wfw:commentRss>
		<slash:comments>19</slash:comments>
		</item>
	</channel>
</rss>
