<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; open metaverse</title>
	<atom:link href="http://www.ugotrade.com/category/open-metaverse/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Augmented World Expo 2013:  It&#8217;s a wrap!</title>
		<link>http://www.ugotrade.com/2013/07/09/augmented-world-expo-2013-its-a-wrap/</link>
		<comments>http://www.ugotrade.com/2013/07/09/augmented-world-expo-2013-its-a-wrap/#comments</comments>
		<pubDate>Tue, 09 Jul 2013 03:38:56 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Ambient Findability]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Data]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[GeoFencing]]></category>
		<category><![CDATA[gestural interface]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Linden Lab]]></category>
		<category><![CDATA[Linked Data]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[nanotechnology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[Philip Rosedale]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Amber Case]]></category>
		<category><![CDATA[augmented reality eyewear]]></category>
		<category><![CDATA[Augmented World Expo]]></category>
		<category><![CDATA[AWE2013]]></category>
		<category><![CDATA[Ben Cerveny]]></category>
		<category><![CDATA[connected hardware]]></category>
		<category><![CDATA[gesture interaction]]></category>
		<category><![CDATA[Google Glass]]></category>
		<category><![CDATA[hardware startups]]></category>
		<category><![CDATA[Mike Kuniavsky]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Steve Mann]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[wearables]]></category>
		<category><![CDATA[Will Wright]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=6600</guid>
		<description><![CDATA[Augmented World Expo 2013 was really an amazing experience. I&#8217;m co-founder and co-organizer of the conference, along with Ori Inbar, so it has meant a lot to me to see our event grow over the last four years, and thrilling to make such a big splash this year. There were 1,163 attendees, and the expo [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><iframe width="560" height="315" src="//www.youtube.com/embed/4d0k_7pdPGg" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/NQ-g0Jimg7I" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/9GxVQREssdY" frameborder="0" allowfullscreen></iframe></p>
<p><a href="http://augmentedworldexpo.com/" target="_blank">Augmented World Expo 2013</a> was really an amazing experience.  I&#8217;m co-founder and co-organizer of the conference, along with Ori Inbar, so it has meant a lot to me to see our event grow over the last four years, and thrilling to make such a big splash this year.Â  There were 1,163 attendees, and the expo show cased an ecosystem of emerging technologies &#8211; augmented reality, gesture interaction, eyewear, wearables, and connected hardware ofÂ  many stripes, that mark the beginning of natural computing entering the mainstream.  It was a unique opportunity to get up close and personal with what it feels like to be an augmented human in an augmented world! </p>
<p>Videos of AWE 2013&#8217;s 35 hours of educational sessions and inspirational keynotes are now available on <strong><a href="http://www.youtube.com/user/AugmentedRealityOrg/videos?view=0&amp;shelf_index=0&amp;sort=dd&amp;tag_id=" target="_self">our YouTube channel</a></strong>.  I am sharing <a href="http://www.youtube.com/watch?v=9GxVQREssdY">my own talk</a> (my slides are also up <a href="http://www.slideshare.net/TishShute/augmented-humansaugmentedworld">on SlideShare here</a>), and a few of my favorites in this post, but there are far too many to post here, so please browse further on the Augmented World Expo YouTube channel.</p>
<p>One notable high point of AWE2013, for me, was the showcase sponsored by <a href="http://www.meta-view.com/about">Meta</a>, a startup developing the first device allowing visualization of and interaction with 3D virtual objects in the real world using your hands.  It was made possible by generous contributions from the private collections of Paul Travers, Dan Cui, Steven Feiner, Steve Mann, and Chris Grayson, and by passionate volunteers who are helping advance the industry.  Sean Hollister of The Verge did an excellent report on the eyewear showcase: <a href="http://www.theverge.com/2013/6/9/4409940/35-years-of-wearable-computing-history-at-augmented-world-expo-2013">35 years of wearable computing history at Augmented World Expo 2013</a>.  Also, for more on Meta, see <a href="http://news.cnet.com/8301-11386_3-57584739-76/meta-glasses-bring-3d-and-your-hands-into-the-picture/">this article by Dan Farber</a>.</p>
<p>My colleagues at <a href="http://www.syntertainment.com/">Syntertainment</a> &#8211; Will Wright, Avi Bar-Zeev, Jason Shankel, and Lauren Elliott &#8211; all gave great talks.  Ironically, we&#8217;re not building augmented reality apps or hardware; we all just happen to continue to be very interested in the field.</p>
<p>Thank you to everyone for supporting the event! </p>
<p>The press coverage was truly extensive:</p>
<p style="text-align: left;"><a href="http://www.theverge.com/2013/6/9/4410406/in-the-shadow-of-google-glass-at-augmented-world-expo-2013">In the shadow of Google Glass, an augmented reality industry revs its engines<br />
</a>The Verge, Sean Hollister, June 9, 2013, <a href="http://topsy.com/www.theverge.com/2013/6/9/4410406/in-the-shadow-of-google-glass-at-augmented-world-expo-2013">271 Tweets</a></p>
<p><a href="http://news.cnet.com/8301-11386_3-57588128-76/the-next-big-thing-in-tech-augmented-reality/">The next big thing in tech: Augmented reality<br />
</a>CNET, Dan Farber, June 7, 2013<br />
Picked up on <a href="http://currentnewsdaily.com/the-next-big-thing-in-tech-augmented-reality/">Current News Daily<br />
</a><a href="http://topsy.com/news.cnet.com/8301-11386_3-57588128-76/the-next-big-thing-in-tech-augmented-reality/">350 Tweets</a></p>
<p><a href="http://thepersuaders.libsyn.com/awe-2013-conference-report-augmented-reality-and-marketing">AWE 2013 Conference Report: Augmented Reality and Marketing<br />
</a>The Persuaders Marketing Podcast on Dublin City FM, June 23, 2013</p>
<p><a title="AR Dirt Podcast â€“ Episode 26 â€“ Ori Inbar AWE2013 Extravaganza Recap" rel="bookmark" href="http://www.ardirt.com/general-news/ar-dirt-podcast-episode-26-ori-inbar-awe2013-extravaganza-recap.html">AR Dirt Podcast â€“ Ori Inbar AWE2013 Extravaganza Recap<br />
</a>AR Dirt by Joseph Rampolla,Â June 18, 2013</p>
<p><a href="http://www.theverge.com/2013/6/9/4409940/35-years-of-wearable-computing-history-at-augmented-world-expo-2013">35 years of wearable computing history at Augmented World Expo 2013<br />
</a>The Verge, Sean Hollister, June 9, 2013<br />
<a href="http://topsy.com/www.theverge.com/2013/6/9/4409940/35-years-of-wearable-computing-history-at-augmented-world-expo-2013">7 Tweets</a></p>
<p><a href="http://www.wired.com/beyond_the_beyond/2013/06/augmented-reality-bruce-sterling-keynote-at-augmented-world-expo-2013/">Augmented Reality: Bruce Sterling, keynote at Augmented World Expo 2013<br />
</a>Wired, Bruce Sterling, June 9, 2013<br />
<a href="http://topsy.com/www.wired.com/beyond_the_beyond/2013/06/augmented-reality-bruce-sterling-keynote-at-augmented-world-expo-2013/">9 Tweets</a></p>
<p><a href="http://doc-ok.org/?p=598">On the road for VR: Augmented World Expo 2013<br />
</a>Doc-Ok, Staff, June 7, 2013<br />
<a href="http://topsy.com/trackback?url=http%3A%2F%2Fdoc-ok.org%2F%3Fp%3D598">3 Tweets</a></p>
<p><a href="http://www.wassom.com/my-interview-from-augmented-world-expo-2013-video.html">My Interview from Augmented World Expo 2013 [VIDEO] </a><a href="http://wassom.com/">Wassom.com</a>, Brian Wassom, June 7, 2013</p>
<p><a href="http://zenfri.com/2013/06/augmented-world-expo/">Augmented World Expo</a><br />
ZenFri, Staff, June 7, 2013</p>
<p><a href="http://www.fbnsantos.com/?p=9634">AWE2013: Hardware for an augmented world</a><br />
FBNSantos.com, Felipe Neves Dos Santos, June 6, 2013</p>
<p><a href="http://investorplace.com/2013/06/augmented-reality-will-be-the-new-reality/">Augmented Reality Will Be the New Reality</a><br />
InvestorPlace, Brad Moon, June 6, 2013</p>
<p><a href="http://www.techhive.com/article/2040837/wearable-computing-pioneer-steve-mann-who-watches-the-watchmen-.html">Wearable computing pioneer Steve Mann: Who watches the watchmen?</a><br />
TechHive, Armando Rodriguez, June 6, 2013</p>
<p><a href="http://abclocal.go.com/kgo/video?id=9127769">Expo puts augmented reality in the limelight</a><br />
ABC 7 News, Jonathan Bloom, June 5, 2013</p>
<p><a href="http://www.dvice.com/2013-6-5/these-oled-microdisplays-are-future-augmented-reality">These OLED microdisplays are the future of augmented reality</a><br />
DVICE, Evan Ackerman, June 5, 2013</p>
<p><a href="http://www.engadget.com/2013/06/05/visualized-history-of-augmented-and-virtual-reality-eyewear/?utm_medium=feed&amp;utm_source=Feed_Classic&amp;utm_campaign=Engadget">Visualized: a history of augmented and virtual reality eyewear</a><br />
Engadget, Michael Gorman, June 5, 2013</p>
<p><a href="http://www.papitv.com/wikitude-announces-wikitude-studio-and-in-house-developed-ir-tracking-engine">Wikitude announces Wikitude Studio and in-house developed IR &amp; Tracking engine</a><br />
PapiTV, KC Leung, June 5, 2013</p>
<p><a href="http://www.usatoday.com/story/tech/personal/2013/06/05/augmented-reality-expo-google-glass/2392769/">Augmented reality expo aims for sci-fi future today</a><br />
USA Today, Marco della Cava, June 5, 2013</p>
<p><a href="http://www.wired.com/beyond_the_beyond/2013/06/augmented-reality-high-dynamic-range-hdr-video-image-processing-for-digital-glass/">Augmented Reality: High Dynamic Range (HDR) Video Image Processing For Digital Glass</a><br />
Wired, Bruce Sterling, June 5, 2013</p>
<p><a href="http://allthingsd.com/20130604/will-wright-at-augmented-reality-conference-dont-augment-reality-decimate-it/">Will Wright at Augmented Reality Conference: Donâ€™t Augment Reality, Decimate It</a><br />
AllThingsD, Eric Johnson, June 4, 2013</p>
<p><a href="http://news.cnet.com/8301-11386_3-57587672-76/philip-rosedales-second-life-with-high-fidelity/">Philip Rosedaleâ€™s Second Life with High Fidelity</a><br />
CNET, Dan Farber, June 4, 2013</p>
<p><a href="http://www.pcworld.com/article/2040801/google-glass-competitors-vie-for-attention-as-industry-grows.html">Google Glass competitors vie for attention as industry grows</a><br />
PC World, Zack Miners for IDG News Service, June 4, 2013</p>
<p><a href="http://daqri.com/press_posts/press-release-4d-augmented-reality-leader-daqri-announces-15-million-financing-2/#.Ua-RjNhNuSo">4D Augmented Reality Leader Daqri Announces $15 Million Financing</a><br />
Press Release, June 4, 2013</p>
<p><a href="http://www.techzone360.com/topics/techzone/articles/2013/06/03/340432-crowdoptic-powers-lancome-virtual-gallery-app-crowd-powered.htm">CrowdOptic Powers Lancome Virtual Gallery App, Crowd-powered Heat Map</a><br />
TechZone 360, Peter Bernstein, June 3, 2013</p>
<p><a href="http://www.craveculture.net/2013/06/augmented-humans-now/">Augmented humans, enhanced happiness?</a><br />
Crave Culture, Angelica Weihs, June 2, 2013</p>
<p><a href="http://www.metaio.com/press/press-release/2013/metaio-vuzix-to-showcase-ar-ready-smart-glasses-at-the-2013-augmented-world-expo/">Metaio &amp; Vuzix to Showcase AR-Ready Smart Glasses at the 2013 Augmented World Expo</a><br />
Press Release, May 30, 2013</p>
<p><a href="http://qz.com/89467/four-ways-augmented-reality-will-invade-your-life-in-2013/">Four ways augmented reality will invade your life in 2013</a><br />
Quartz, Rachel Feltman, May 30, 2013</p>
<p><a href="http://www.wired.com/beyond_the_beyond/2013/05/augmented-reality-augmented-world-expo-is-next-week/">Augmented Reality: Augmented World Expoâ„¢ is next week</a><br />
Wired, Bruce Sterling, May 28, 2013</p>
<p><a href="http://www.prweb.com/releases/candy-lab/augmented-reality/prweb10763283.htm">Strike it Rich with Cachetown and AWE 2013 Playing the Gold Rush 49â€™er Challenge In Augmented Reality</a><br />
Press Release, May 24, 2013</p>
<p><a href="http://interact.stltoday.com/pr/lifestyle/PR052413071613074">Local Community College Student Headed to Silicon Valley to Learn More about Augmented Reality</a><br />
St. Louis Post-Dispatch, Staff, May 24, 2013</p>
<p><a href="http://www.cnet.com.au/explore-an-intricate-labyrinth-with-smartphone-ar-339344350.htm">Explore an intricate labyrinth with smartphone AR</a><br />
CNET Australia, Michelle Starr, May 21, 2013</p>
<p><a href="http://thechronicleherald.ca/business/1130672-dartmouth-firm-lands-super-app">Dartmouth firm lands super app</a><br />
Herald Business, Remo Zaccagna, May 21, 2013</p>
<p><a href="http://siliconangle.com/blog/2013/05/17/augmented-world-expo-2013-the-future-of-augmented-reality/">Augmented World Expo 2013â€“The Future of Augmented Reality</a><br />
Silicon Angle, Saroj Kar, May 17, 2013</p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/o6L3dcsLEto" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/FhLx7k07Pa4" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/ON7VUzsNcYI" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/qhVdTFcR6TA" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/REoEj-JkDww" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/ohatuq8tekk" frameborder="0" allowfullscreen></iframe></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2013/07/09/augmented-world-expo-2013-its-a-wrap/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Urban Games, Storytelling with Augmented Reality, The Big ARNY, and &#8220;Inside AR:&#8221; Talking with Thomas Alt, Metaio</title>
		<link>http://www.ugotrade.com/2010/09/27/urban-games-storytelling-with-augmented-reality-the-big-arny-and-inside-ar-talking-with-thomas-alt-metaio/</link>
		<comments>http://www.ugotrade.com/2010/09/27/urban-games-storytelling-with-augmented-reality-the-big-arny-and-inside-ar-talking-with-thomas-alt-metaio/#comments</comments>
		<pubDate>Mon, 27 Sep 2010 22:54:18 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Android]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[a collaborative AR game for New York]]></category>
		<category><![CDATA[A Swarm of Angels]]></category>
		<category><![CDATA[Area/Code]]></category>
		<category><![CDATA[ARNY Meetup]]></category>
		<category><![CDATA[Augmented Reality Event 2010]]></category>
		<category><![CDATA[augmented reality eyewear]]></category>
		<category><![CDATA[augmented reality games]]></category>
		<category><![CDATA[augmented reality goggles]]></category>
		<category><![CDATA[augmented reality HMDs]]></category>
		<category><![CDATA[big urban games]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Games Alfresco]]></category>
		<category><![CDATA[Games That Know Where You Live]]></category>
		<category><![CDATA[gestural interfaces for augmented reality]]></category>
		<category><![CDATA[Inside AR]]></category>
		<category><![CDATA[ipad for augmented reality]]></category>
		<category><![CDATA[Junaio]]></category>
		<category><![CDATA[Junaio glue]]></category>
		<category><![CDATA[Kati London]]></category>
		<category><![CDATA[Kevin Slavin]]></category>
		<category><![CDATA[Kooaba]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[Metaio]]></category>
		<category><![CDATA[Metaio's AR products]]></category>
		<category><![CDATA[mobile AR platforms]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Parrot AR Drone]]></category>
		<category><![CDATA[Peter Meier]]></category>
		<category><![CDATA[reality games]]></category>
		<category><![CDATA[storytelling with AR]]></category>
		<category><![CDATA[The Big ARNY]]></category>
		<category><![CDATA[Thomas Alt]]></category>
		<category><![CDATA[Unifeye]]></category>
		<category><![CDATA[urban augmented realities]]></category>
		<category><![CDATA[urban games]]></category>
		<category><![CDATA[Web 2.0 Expo]]></category>
		<category><![CDATA[webpads]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5749</guid>
		<description><![CDATA[Today Metaio is holding Inside AR in Munich, Germany. Metaio (the picture above shows Metaio co-founders Thomas Alt and Peter Meier) is behind some of the best known commercial and industrial AR experiences of recent years. But as important as the many AR projects they have executed are the AR tools that Metaio has made [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/GF_Terminal_2.jpg"><img class="alignnone size-medium wp-image-5750" title="GF_Terminal_2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/GF_Terminal_2-300x223.jpg" alt="GF_Terminal_2" width="300" height="223" /></a></p>
<p>Today <a href="http://www.metaio.com/" target="_blank">Metaio</a> is holding <a href="http://www.metaio.com/insidear/" target="_blank">Inside AR</a> in Munich, Germany.  Metaio (the picture above shows Metaio co-founders Thomas Alt and Peter Meier) is behind some of the best known commercial and industrial AR experiences of recent years.  But as important as the many AR projects they have executed are the AR tools that Metaio has made available to developers.  <a href="http://www.metaio.com/products/" target="_blank">Metaio&#8217;s AR products and tools</a> have played an important role in bringing AR to a wider public, and given many developers the opportunity to explore AR.</p>
<p><a href="http://www.metaio.com/insidear/" target="_blank">Inside AR</a> is a great opportunity to see what these AR pioneers will be up to in the coming months.  I could not make it to Munich this year, but, fortunately, I recently had the opportunity to talk with Thomas Alt.  In the conversation below, I got a chance to discuss what is going on inside AR with Metaio.</p>
<p>The fall season is always jam-packed with great events, and I wish I could be in two places at once.  But this week I will be in my home town, NYC, attending <a href="http://www.web2expo.com/webexny2010/" target="_blank">Web 2.0 Expo</a>, which, reflecting the heat in the NYC tech community, is a sold-out event with a very exciting schedule this year (more on some of the presentations I will be attending later in this post).  If you missed out on tickets to Web 2.0 Expo, all keynotes <a href="http://is.gd/fpnwp" target="_blank">will be streamed live, Tuesday 9/28 to Thursday 9/30</a>, and keep your eye on @<a rel="nofollow" href="http://twitter.com/w2e">w2e</a> and #w2e on Twitter.</p>
<p>Meanwhile, I am missing <a href="http://www.metaio.com/insidear/" target="_blank">Inside AR</a>, which had some great speakers lined up, including fellow New Yorker John Swords, partner and Ringleader at <a href="http://circ.us/">Circ.us</a>.  Hopefully, Swords will share his experiences at next month&#8217;s <a href="http://www.meetup.com/ARNY-Augmented-Reality-New-York/" target="_blank">ARNY Meetup</a>, which will be <a href="http://www.meetup.com/ARNY-Augmented-Reality-New-York/" target="_blank">&#8220;joining forces with another vibrant community &#8211; NY Gaming &#8211; for an unforgettable night of Augmented Reality Games&#8221;</a> on Tuesday, Oct 19th, at 6:30 PM at <a href="http://www.meetup.com/ARNY-Augmented-Reality-New-York/venue/?eventId=13799452&amp;popup=true&amp;venueId=1382669" target="_blank">AOL Ventures</a> in New York, NY.</p>
<p>At the most recent ARNY, @swords gave a brilliant talk on the possibilities for AR game development on the newly available open source <a href="http://ardrone.parrot.com/parrot-ar-drone/usa/" target="_blank">Parrot ARDrone platform</a>.  It was great to hear from social game guru @swords on his plans for Parrot ARDrone games, and more.  The picture below of an <a href="http://www.flickr.com/photos/johnswords/4982892669/" target="_blank">ARDrone camera view is from John Swords&#8217; Flickr set</a>.  Swords was flying it inside his garage because the winds outside were too strong.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/4982892669_33fc14799d_b.jpg"><img class="alignnone size-medium wp-image-5754" title="4982892669_33fc14799d_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/4982892669_33fc14799d_b-300x200.jpg" alt="4982892669_33fc14799d_b" width="300" height="200" /></a></p>
<p>Also, I kicked off what will hopefully be an ongoing discussion on <strong>&#8220;Story Telling with AR and the Big ARNY, a collaborative AR Game for NY,&#8221;</strong> with a few slides.  I have opened up the presentation document for collaboration, so please ping me if you would like to be added as a contributor/editor and are interested in getting involved.</p>
<p><iframe src="https://docs.google.com/present/embed?id=dhj5mk2g_633gbs95qgm" frameborder="0" width="410" height="342"></iframe></p>
<p><a href="http://ogmento.com/team" target="_blank">Ori Inbar</a>, CEO and co-founder of <a href="http://ogmento.com/" target="_blank">Ogmento</a>, <a href="http://gamesalfresco.com/" target="_blank">Games Alfresco</a>, <a href="http://www.meetup.com/ARNY-Augmented-Reality-New-York/" target="_blank">ARNY</a> and my co-chair on <a href="http://augmentedrealityevent.com/" target="_blank">Augmented Reality Event 2010</a>, suggested The Big ARNY &#8211; A Collaborative AR  Game Development Project modelled after A Swarm of Angels <a href="http://www.ugotrade.com/2009/12/06/augmented-reality-devcamp-nyc-the-big-arny-a-collaborative-ar-game-project-modeled-after-swarm-of-angels/" target="_blank">last year at the First ARNY Meetup</a> &#8211; so let&#8217;s make it happen!Â  I will be catching up with Ori in October about what Ogmento has been up to since they became <a href="http://techcrunch.com/2010/05/26/ogmento-first-ar-gaming-startup-to-win-vc-funding/" target="_blank">the first VC backed AR Game company</a>!</p>
<h3>&#8220;Games allow us to see each other, for a moment, in a way that living in a city prevents&#8221; &#8211; Kevin Slavin</h3>
<p>I believe that AR, to get beyond the stage of &#8220;interface du jour,&#8221; needs to offer us new ways to relate to each other and the world around us, so that we can actually improve and deepen our engagement with reality, not just create experiences that are primarily optical (see James Turner&#8217;s interview with Kevin Slavin, <a href="http://radar.oreilly.com/2010/09/drawing-the-line-between-games.html" target="_blank">&#8220;Reality has a gaming layer,&#8221;</a> on not letting &#8220;the pleasure of a game and the meaning of a game and the experience of a game rest primarily in the optics,&#8221; and see my recent post, <a title="Permanent Link to Urban Augmented Realities and Social Augmentations that Matter: Talking with Bruce Sterling, Part 2" rel="bookmark" href="../../2010/09/17/urban-augmented-realities-and-social-augmentations-that-matter-interview-with-bruce-sterling-part-2/">Urban Augmented Realities and Social Augmentations that Matter: Talking with Bruce Sterling, Part 2</a>).</p>
<p>Two of the most inspired creators of urban games, Kevin Slavin and Kati London of <a href="http://areacodeinc.com/" target="_blank">Area/Code</a>, will be speaking at <a href="http://www.web2expo.com/webexny2010/" target="_blank">Web 2.0 Expo</a> tomorrow, and you can be sure I will be at both sessions.  <a href="http://www.web2expo.com/webexny2010/public/schedule/detail/15258" target="_blank">Loitering on the Motherboard</a>, Kevin Slavin, is at <a href="http://www.web2expo.com/webexny2010/public/schedule/full#s2010-09-28-14:35">2:35pm</a> <a href="http://www.web2expo.com/webexny2010/public/schedule/grid/2010-09-28">Tuesday, 09/28/2010</a>; and <a href="http://www.web2expo.com/webexny2010/public/schedule/detail/15446" target="_blank">Games that Know Where You Live</a>, Kati London &#8211; a keynote that will also be <a href="http://www.web2expo.com/webexny2010/public/content/livestream">live streamed</a> &#8211; is at <a href="http://www.web2expo.com/webexny2010/public/schedule/full#s2010-09-28-16:55">4:55pm</a> <a href="http://www.web2expo.com/webexny2010/public/schedule/grid/2010-09-28">Tuesday, 09/28/2010</a>.</p>
<p>Recently, <a href="http://www.web2expo.com/webexny2010/public/schedule/speaker/86516/?cmp=il-radar-conf-web2expony-slavin" target="_blank">Kevin Slavin</a> was interviewed by James Turner on O&#8217;Reilly Radar.  That interview, <a href="http://radar.oreilly.com/2010/09/drawing-the-line-between-games.html" target="_blank">Reality has a gaming layer</a>, is a must-read piece about a &#8220;world where games shape life and life shapes games&#8221; (<a href="http://twitter.com/timoreilly/statuses/25413313179" target="_blank">see @timoreilly</a>).</p>
<h3>Interview with Thomas Alt</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/Thomas_Alt_1.jpg"><img class="alignnone size-medium wp-image-5751" title="Thomas_Alt_1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/09/Thomas_Alt_1-224x300.jpg" alt="Thomas_Alt_1" width="224" height="300" /></a></p>
<p><strong>Tish Shute: </strong>Perhaps you could just start with your background, Thomas, because I think there are a lot of newcomers to AR, but you are really one of the first movers in commercial AR.  How long have you been involved in this?</p>
<p><strong>Thomas Alt: Actually, I&#8217;m an ex-researcher in augmented reality.  It started for me after getting my master&#8217;s in engineering from the Technical University of Munich and working for a big company called Volkswagen.  And at that time, 1999, we got a research grant for researching how augmented reality could change manufacturing processes in the automobile industry.</strong></p>
<p><strong>And from the research work there, I basically went back to school and did my PhD on augmented reality. And while speaking at a conference, I met Peter Meier, the co-founder of the company, who was also a master&#8217;s student writing his thesis about augmented reality.  That was in 2002.</strong></p>
<p><strong>And so it really was in the very early days of augmented reality. And both Peter and myself got really excited about the technology; we saw endless possibilities.  We said, &#8216;OK, let&#8217;s just found a company.&#8217;  We actually founded the company in early 2003 with virtually no money. As a matter of fact, the founding capital of the company was 25,000 Euros, and those 25,000 Euros were won in a case competition in Germany &#8211; a business plan competition.</strong></p>
<p><strong>Tish Shute:</strong> So you won 25,000 Euros in this case competition, and that&#8217;s where Metaio started&#8230;</p>
<p><strong>Thomas Alt: Exactly.  And to legally found a company in Germany it takes exactly 25,000 Euros, so that was the founding capital.  We started pretty much like good old SAP started.  It wasn&#8217;t in a garage, though; it was a very small office, and we basically built up the business through work, so we don&#8217;t have any investors or whatever.  Right now we are 66 people, located in Munich, where our headquarters have been for five years. We have some presence in the US, and we have a venture company in Seoul, South Korea.</strong></p>
<p><strong>Tish Shute:</strong> Awesome. I just noticed how fast you&#8217;ve been growing.  So right now, I&#8217;m going to ask a couple of questions about where you see the technology and the emerging industry going.</p>
<p>First, what is the platform of choice for mobile augmented reality at the moment?</p>
<p><strong>Thomas Alt: Obviously in the cellphone hardware space there&#8217;s fierce competition going on. It&#8217;s yet to be defined what the prevailing platform will be. Right now, obviously, the iPhone is big, right? But Android is catching on very, very fast.</strong></p>
<p><strong>Tish Shute: </strong>You have pioneered bringing a cross platform SDK for vision assisted AR to a wide community of developers with Junaio and with your partnership with<a href="http://www.kooaba.com/" target="_blank"> Kooaba</a> &#8211; a visual search company from Europe.</p>
<p><strong>Thomas Alt: Yes, yes, and this is how we would also like to position ourselves with Junaio in the future.  Junaio will be a platform, a technology platform, which will allow users to do whatever they want to do in augmented reality.  The API of Junaio is huge in the sense that you can do anything from outdoor gaming, to visual search, to normal layout-style, you know, find-the-next-Burger-King-a-mile-away kinds of superimpositions.</strong></p>
<p><strong>Tish Shute: </strong>The only licensing you pay for is Unifeye, right? When you want to use your toolkit?</p>
<p><strong>Thomas Alt: Yes, and this is how we&#8217;re distinguishing it.  Junaio is our consumer brand targeting newbie AR developers with limited programming skills, while the Unifeye platform is really our B2B platform, where B2B customers can create their individual AR experiences.</strong></p>
<p><strong>Tish Shute: </strong>Yes, which is what my friend Patrick O&#8217;Shaughnessey of <a href="http://patchedreality.com/" target="_blank">Patched Reality</a> did for the Ben and Jerry&#8217;s app he created using Unifeye.</p>
<p><strong>Thomas Alt: Exactly&#8230;</strong></p>
<p><strong>Tish Shute:</strong> It is a lot of work developing for so many different mobile platforms, isn&#8217;t it?  Junaio is on Android and iPhone, but you haven&#8217;t moved Junaio to Symbian?</p>
<p><strong>Thomas Alt: To be honest with you, right now it&#8217;s a matter of priorities; we have other things we want to do first.  And from analyzing the user base, iPhone was a big step, Android was a big step, and now we are pretty much watching what happens next.  As you know, Nokia is going in different directions as far as their smartphone operating system goes, and so on and so forth.  There are also capacity constraints.  And right now, obviously &#8211; potentially not the most possible users, but &#8211; the users most inclined to do AR on a day-to-day basis are the ones using iPhone and Android devices.  But obviously there are a lot bigger cellphone manufacturers out there.  But, you know, even the mobile web users elsewhere aren&#8217;t as strong as the users on iPhone and Android devices.</strong></p>
<p><strong>Tish Shute: </strong>So what do you think the iPhone 4 has brought to the AR picture?</p>
<p><strong>Thomas Alt: Very fast camera access, very good for marker recognition.  If you go to the Metaio site you&#8217;ll find a movie where we show an iPhone 4 app for a real augmented reality Lego piece &#8211; this is something which is very nice.</strong></p>
<p><strong>Tish Shute: </strong>Yes, I saw that &#8211; very nice. The Unifeye SDK is really putting markerless AR into the mainstream.</p>
<p><strong>Thomas Alt: Yesterday we launched the first &#8211; err, a very nice shopping solution for seventeen.com, but that&#8217;s completely external.</strong></p>
<p><strong>Tish Shute:</strong> Oh yes &#8211; the augmented reality shopping for seventeen.com. I was going to ask you about that, because it is the first augmented reality online shopping using natural feature tracking.</p>
<p>Also, I am very excited to see the gestural interface &#8211; awesome!</p>
<p>The Seventeen augmented reality shopping app is a PC experience, but are you working on developing gestural interfaces for mobile AR?</p>
<p><strong>Thomas Alt: We are continually pushing the envelope of what&#8217;s possible with AR. Gestural interfaces for mobile AR are certainly the next step in taking what we&#8217;ve done on the PC and making it more portable by using the mobile platform. One thing to keep in mind here is the limitations of mobile platforms: the size of the screen needs to fit and make sense for the user experience.</strong></p>
<p><strong>Tish Shute:</strong> I know you started off as an AR researcher (although, as you mentioned earlier, you have been working in commercial AR and building Metaio for a long while now).</p>
<p>So, in addition to how we are progressing towards gestural interfaces for augmented reality, I would like to ask some questions about AR eyewear.  We won&#8217;t really have hands-free AR without eyewear.  What is your projection on when we will see consumer AR eyewear? Do you have any comments on those speculating that we will not see AR eyewear go mainstream for 20 years? And what is Metaio doing to move eyewear technology along?</p>
<p><strong>Thomas Alt: Well, as you know, it&#8217;s always, you know, on the technological roadmap, and we&#8217;re still doing research projects in our AR research department.  We have worked on things like calibrating eyewear for augmented reality.  There is some nice development there.</strong></p>
<p><strong>But really, commercially, the whole thing with eyewear has never caught on at a level which would make it a valuable avenue, a business avenue, at least for Metaio.  So, I guess as an ex-researcher, it&#8217;s still a very interesting, a very good technology.  And it would definitely change the marketplace radically when available.  But as of right now, there are very few commercial applications.</strong></p>
<p><strong>Tish Shute: </strong>Are the obstacles to AR eyewear technological, or is it just a question of a business model?  I mean, is it realistic to see eyewear in the next 3 to 5 years at a price point affordable to consumers, where you really, truly can have eye tracking? You know the whole problem there was with virtual reality and eyewear giving people a headache.  How far have we come in terms of the technology?</p>
<p><strong>Thomas Alt: Well, it&#8217;s not so much technological factors, &#8217;cause all the fundamental problems are solved. It&#8217;s more that a rather large corporation, I guess, would have to step up to the plate and say, okay, let&#8217;s take all the state of the art in electronics and develop just a perfect HMD.</strong></p>
<p><strong>Tish Shute:</strong> Something Yohan Baillot&#8217;s company <a href="http://www.simulation3d.biz/" target="_blank">Simulation 3D</a> is looking at is hooking up eyewear to smartphones.</p>
<p><strong>Thomas Alt: Yes, exactly &#8211; that would be even better. Metaio made a strategic move into this HMD space for augmented reality about a year ago by acquiring a bankrupt company.  I mean, we had considerable IP around it from our research base, but in the long term we still believe in it, and we made a move about a year ago in buying what was left over from a bankrupt company, including a lot of IP, which basically goes in the direction of mobile augmented reality, but also mobile augmented reality in connection with head-mounted displays.</strong></p>
<p><strong>There&#8217;s actually a press release about this, but that&#8217;s from about a year ago&#8230;</strong></p>
<p><strong>I know that the whole HMD thing&#8230; I mean, I&#8217;ve seen companies come and go. Metaio has previously worked very closely with Microvision of Seattle.  We have worked with a German company doing HMDs, and we have worked with Vuzix.  We are still working with Vuzix, so we still consider it very valuable.  But right now, I mean, it&#8217;s just not a big part of our commercial pipeline, to put it that way.</strong></p>
<p><strong>Tish Shute:</strong> It was interesting what Bruce Sterling said in <a href="http://augmentedrealityevent.com/2010/06/06/are-2010-keynote-by-bruce-sterling-build-a-big-pie/" target="_blank">his keynote at ARE 2010</a>.  He actually made a strong case for why smartphone augmented reality may be more interesting because it&#8217;s less immersive. I mean, he raised the question that if you really, truly had AR eyewear and HMDs, you&#8217;d re-enter the world of virtual reality &#8211; or, as he called it, AR&#8217;s Gothic stepsister VR would rise from the grave&#8230;</p>
<p><strong>Thomas Alt: Yeah, well, that&#8217;s a cultural or even a philosophical question, and we have discussed it a lot, especially in the industrial domain. Also, will the deployment of HMDs come about from end consumers using them in their spare time, or from professional users using them in their work time?</strong></p>
<p><strong>Tish Shute:</strong> Do you think it surprised people who have been working in augmented reality research how much people have engaged with the idea of smartphones as the mediating device for AR &#8211; and that, rather than having the always-on experience that eyewear would give us, we use the smartphone as a magic lens when we need to or want to?  Some people were skeptical that anyone would want to hold up a little window to look at augmentations of the world &#8211; a magic lens.  I mean, it wasn&#8217;t self-evident that that would be an experience people enjoyed, and it turns out that it was.</p>
<p><strong>Thomas Alt: That&#8217;s actually a very good analogy. And also, in my view, I mean, certain behaviors just change, right? I mean, this is exactly what Apple&#8217;s trying with the iPad, right? You&#8217;re taking the iPad, and all of a sudden you&#8217;re not constrained to a laptop or whatever. And it&#8217;s truly a companion &#8211; on the couch, the in-bed Web, in the kitchen, and so on and so forth. So digital usage with the iPad &#8211; which is a different market, and I&#8217;m aware of that, but as an example &#8211; the iPad has changed our behavior. And obviously, the augmented reality guys are hoping that something similar happens with AR.</strong></p>
<p><strong>Tish Shute:</strong> Which of course brings up the question: I&#8217;m assuming that some of the next generation of slates/iPads are going to have front and back cameras, GPS, and compass, right?</p>
<p><strong>Thomas Alt: Actually we know that.</strong></p>
<p><strong>Tish Shute: </strong>You know that, yes. I assume that you know that because you are working on some prototypes &#8211; have you got some plans?</p>
<p><strong>Thomas Alt: You have to understand that I cannot talk the way I&#8217;ve talked as a researcher. It&#8217;s the rules, so I have to be a little bit careful about what I say. We very much think that a webpad, or whatever pad you would want to call it, is on some occasions a very good device for AR.</strong></p>
<p><strong>Tish Shute:</strong> Yeah. But it is an interesting point about holding up a larger device, because your hands aren&#8217;t free.  The neat thing about the phone for augmented reality is that you really can do a lot with your thumb, as we&#8217;ve found, and just the position of the phone.  But how will this work with two hands on the larger device?</p>
<p><strong>Thomas Alt: Keep in mind, everyone&#8217;s talking about mobile augmented reality, but really, where the case for augmented reality is strongest at this point is in the installation business, it&#8217;s in the web business&#8230; Not necessarily only commercially, but also use-case-wise. There are tons of museums out there which are using our augmented reality system in an installation fashion, to communicate products better and more efficiently, and so on and so forth.</strong></p>
<p><strong>So, I know that the hype is clearly on the mobile augmented reality side, but there are many examples of augmented reality experiences where holding up a larger device is not a big obstacle.</strong></p>
<p><strong>Tish Shute: </strong>Well, this brings me to some questions about the future of mobile AR.  My interview with Jay Wright focused on how we are now in a new period for AR, bringing together computer vision and visual search into a mobile stack that is really optimized for AR.  What do you see emerging in mobile AR as we move beyond compass, GPS, camera, and accelerometer-based AR into markerless, image-based AR?  What will the new use cases be, and where will we see mainstream users getting into AR?  Will AR games be the first mainstream AR experiences?</p>
<p><strong>Thomas Alt: My partner is actually, first of all, one of my best friends, second of all, very emotional, third of all, very intelligent, and he said something the other day I think is very valuable in this area. He said, basically, think about mobile augmented reality, Thomas. There&#8217;s really a very limited number of use cases which you can do if you look at these classical point-and-find applications, ok? But there is an almost unlimited number of use cases when augmented reality becomes a day-to-day companion, ok? So what he meant is, ok, I&#8217;m looking at my normal day&#8230; I&#8217;m looking at the city, I&#8217;m walking through it, I&#8217;m coming home, I&#8217;m having dinner, basically. I can deploy augmented reality in a pure POI-search fashion perhaps not even once. Ok, when I&#8217;m travelling it&#8217;s a different story, but on an ordinary day I might not use a POI search even once.</strong></p>
<p><strong>But where this ultimately leads, you know, is that even in the 15 minutes I&#8217;m having breakfast, I&#8217;m using AR &#8211; looking at the cereal box with my cell phone, I&#8217;m taking part in a sweepstakes or whatever. So from that we draw the conclusion that, as a general strategy for Junaio, we should basically throw as much technology as possible into Junaio, make it halfway self-explanatory, and just give people the possibility to come up with ideas on how to deploy augmented reality continuously.</strong></p>
<p><strong>We have actually got a creative team from an art school working on that &#8211; just, you know, with very little programming skill, coming up with things you can do with augmented reality on a day-to-day level. And it could be a scavenger hunt game in the city with monsters flying around, it could be the normal POI routine, it could be marketing purposes, and so on and so forth. And I think that&#8217;s really the roadmap. And this is a little bit similar, on a more technical level, to what Qualcomm is doing, &#8217;cause they&#8217;re floating out possibilities, or capabilities I want to call them, and Metaio is doing that too, but on a higher level [re the tools], meaning on a Junaio level.</strong></p>
<p><strong>Junaio is a capability platform.  It is also a way for Metaio to demonstrate the capabilities of our technology.  We will offer all the possibilities for AR, and more, that we have already demonstrated in PC augmented reality.</strong></p>
<p><strong>Tish Shute:</strong> What is the business model for Junaio?  Are you encouraging developers to develop business on your platform?</p>
<p><strong>Thomas Alt: Junaio is our end-consumer platform, and our business model is similar to the way Google structures its business model. We work with OEMs, content partners, brands, and developers to offer free content to our end users. Where we do charge is on the advertising side, more specifically contextual and location-based advertising. At the current stage we are focused on building the content base and fostering our developer community, but in the near future we will be introducing advertising channels.</strong></p>
<p><strong>First of all, you have to have very good use cases in the platform, basically. And then to put a business model on top of that, from a technology standpoint, is not hard &#8211; it&#8217;s a pay channel.  It&#8217;s all prepared for this.</strong></p>
<p><strong>Tish Shute: </strong>You have quite a broad base as a company, don&#8217;t you &#8211; you do everything from industrial AR and marketing to technology licensing and more?</p>
<p><strong>Thomas Alt: Basically, there are a lot of things people don&#8217;t see from us.  There is also the Unifeye PC SDK, and we have a client base and partners who are sourcing software from us, and we are doing great pieces. I mean, the hype of augmented reality is really coming to a peak. There are lots of pieces that are not even talked about any more.  China&#8217;s GQ magazine launched with AR from Metaio, the biggest AR campaign anywhere &#8211; there are a lot of potential readers in China.  And, um, so that&#8217;s our business model&#8230; we have our IP, our patents, and so on. And on this we can move onto the mobile platform whenever it&#8217;s advisable or feasible.</strong></p>
<p><strong>Tish Shute:</strong> Right. Yeah. I mean, uh, you&#8217;re very fortunate to have this base built on, uh, years of developing IP.  What are the most important areas of AR that Metaio holds IP and patents in, in your view?</p>
<p><strong>Thomas Alt: There are sleepless nights in that too&#8230;</strong></p>
<p><strong>So far we&#8217;re extremely excited about what&#8217;s going on with Junaio; it&#8217;s one of our big, big success stories. But we are sensible and trying to experiment because, you know, analogies from the past won&#8217;t really work, in my view, for augmented reality &#8211; in the sense that, you know, for a new system to fly, for a new technology to fly, you better bring a very concrete use case to the table, ok? A well-defined use case. And we are right now, with Junaio, in a state where we are checking out what could be such a very defined use case.</strong></p>
<p><strong>Tish Shute:</strong> So how many users does Junaio have now?</p>
<p><strong>Thomas Alt: Let&#8217;s put it this way: especially over the last two months, we are very satisfied. But we are not disclosing that, because &#8220;users&#8221; &#8211; and we&#8217;re seeing this from the competitive landscape &#8211; always needs a page of description of what exactly a user is.</strong></p>
<p><strong>You understand what I&#8217;m saying? So this is why &#8211; &#8217;cause we don&#8217;t want to up- or downplay things &#8211; we are very careful with &#8220;users.&#8221; Because, I mean, we have people who are actually also commercially very interested in Junaio&#8230;  We go through with them and discuss what exactly a user is. &#8217;Cause there&#8217;s more than&#8230; a download is not a user. An app or something on your phone is not a user, basically, in my opinion.</strong></p>
<p><strong>Tish Shute:</strong> I am still waiting to see someone do something with AR and the Foursquare API, or now the Facebook Places API.  Do you see interesting potential in the marriage of the rapidly emerging location-based social networking space and augmented reality?</p>
<p><strong>Thomas Alt: Definitely. Augmented reality offers a way for users to find information around them easily. Adding in social networking components such as geo-tagging, rating, and commenting can enhance the user experience and create engagement beyond just viewing the information. For example, within Junaio, an average user can create their own personal channel and geo-tag photos or leave text messages at different locations. They can create a virtual tour of San Francisco and share it with their friends. By connecting the social side with good content, the augmented reality experience becomes more fun and interactive.</strong></p>
<p><strong>Tish Shute: </strong>And, of course, there&#8217;s the Junaio API.  Are you beginning to see developers use that?</p>
<p><strong>Thomas Alt: Yeah, exactly. I mean, if you go to Junaio.com, you can get a login, and you have an API description. And the way it works is that you basically set up a call which contains the information you would like to have in your individual channel. You submit it to us, it gets checked for profanity and other things, basically. And then we admit it to Junaio.  The API is huge.  You can also use Junaio indoors.</strong></p>
<p><strong>This is very relevant. And there&#8217;s a tool chain for that, and so on and so forth. You can do vision-based search with Junaio. It&#8217;s in there; it&#8217;s called Junaio Glue. And there will be another very interesting feature coming up in a couple of weeks. And you can just do it &#8211; you can do a scavenger hunt, a game, normal POI search, and so on and so forth. And it&#8217;s all active. And what&#8217;s sometimes difficult for us to communicate is that it&#8217;s really a capabilities platform, but on the other hand it&#8217;s obviously very good for developers. And I mean, on the developer side there&#8217;s huge interest.</strong></p>
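<p>To make the channel workflow Thomas describes a little more concrete, here is a rough sketch of what the developer&#8217;s side of such a callback could look like. To be clear, this is my own illustration: the <code>l=lat,lon</code> query parameter and the JSON response shape are assumptions made for the sake of example, not Metaio&#8217;s documented Junaio schema.</p>
<pre>
# Hypothetical sketch of a Junaio-style channel callback server.
# The query parameter ("l") and the response fields are illustrative
# assumptions, not the documented Junaio API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class ChannelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Assume the AR browser sends the user's position as ?l=lat,lon
        query = parse_qs(urlparse(self.path).query)
        parts = query.get("l", ["0,0"])[0].split(",")
        try:
            lat, lon = float(parts[0]), float(parts[1])
        except (ValueError, IndexError):
            lat, lon = 0.0, 0.0
        # Answer with one point of interest anchored near the user.
        pois = [{
            "id": 1,
            "name": "Example point of interest",
            "location": {"lat": lat, "lon": lon},
        }]
        body = json.dumps({"results": pois}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The URL of this server is what a developer would register as the
    # channel's callback; the AR browser then queries it for content.
    HTTPServer(("", 8080), ChannelHandler).serve_forever()
</pre>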
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/09/27/urban-games-storytelling-with-augmented-reality-the-big-arny-and-inside-ar-talking-with-thomas-alt-metaio/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>The AR Wave Project: An Introduction and FAQ by Thomas Wrobel</title>
		<link>http://www.ugotrade.com/2009/12/04/ar-wave-project-an-introduction-and-faq-by-thomas-wrobel/</link>
		<comments>http://www.ugotrade.com/2009/12/04/ar-wave-project-an-introduction-and-faq-by-thomas-wrobel/#comments</comments>
		<pubDate>Sat, 05 Dec 2009 02:50:18 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR Blps]]></category>
		<category><![CDATA[AR DevCamp]]></category>
		<category><![CDATA[AR Network]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[AR Wave project]]></category>
		<category><![CDATA[AR Wave Wiki]]></category>
		<category><![CDATA[ARBlip]]></category>
		<category><![CDATA[ARDevCampNYC]]></category>
		<category><![CDATA[ARN]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[augmented reality network]]></category>
		<category><![CDATA[distributed augmented reality]]></category>
		<category><![CDATA[Google Wave Federation Protocol]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Joe Lamantia]]></category>
		<category><![CDATA[layers and channels of augmented reality]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[multiuser multisource augmented reality]]></category>
		<category><![CDATA[open augmented reality network]]></category>
		<category><![CDATA[open distributed augmented reality]]></category>
		<category><![CDATA[pygowave]]></category>
		<category><![CDATA[PyGoWave Qt-Based Desktop Client]]></category>
		<category><![CDATA[shared augmented realities]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[Sophia Parafina]]></category>
		<category><![CDATA[storing geolocated data on Wave Servers]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Wave enabled augmented reality]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4960</guid>
		<description><![CDATA[Images from Mitsuo Iso&#8217;s Denno Coil (Click to enlarge), the game &#8220;Metroid Prime,&#8221; and Terminator. Thomas Wrobel, Sophia Parafina, Joe Lamantia, Matthieu Pierce, and I will lead a session tomorrow for AR DevCampNYC introducing the AR Wave Project. Thomas, Joe and Matthieu will participate via Skype (10am to 11:30am EST), and Sophia Parafina and [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/Screen-shot-2009-12-04-at-6.43.24-PM.png"><img class="alignnone size-medium wp-image-4961" title="Screen shot 2009-12-04 at 6.43.24 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/Screen-shot-2009-12-04-at-6.43.24-PM-300x181.png" alt="Screen shot 2009-12-04 at 6.43.24 PM" width="300" height="181" /></a></p>
<p><em>Images from Mitsuo Iso&#8217;s <a href="http://en.wikipedia.org/wiki/Denn%C5%8D_Coil" target="_blank">Denno Coil</a> (Click to enlarge), the game &#8220;Metroid Prime,&#8221; and Terminator.</em></p>
<p><a href="http://www.lostagain.nl/" target="_blank">Thomas Wrobel</a>, <a href="http://opengeo.org/about/team/sophia.parafina/" target="_blank">Sophia Parafina</a>, <a href="http://www.joelamantia.com/" target="_blank">Joe Lamantia, </a><a href="http://matthieupierce.com/" target="_blank">Matthieu Pierce</a>, and I will lead a Â session tomorrow for<a href="http://www.ardevcamp.org/wiki/index.php?title=Main_Page" target="_blank"> </a><a href="http://www.ardevcamp.org/wiki/index.php?title=NYC_ardevcamp" target="_blank">AR DevCampNYC</a> introducing the AR Wave Project.Â  Thomas, Joe and Matthieu will be participate via skype (10am to 11.30am EST), and Sophia Parafina and I will both be at <a href="http://www.ardevcamp.org/wiki/index.php?title=NYC_ardevcamp" target="_blank">AR DevCampNYC</a> at the <a title="http://openplans.org/contact/" rel="nofollow" href="http://openplans.org/contact/">The Open Planning Project office (TOPP)</a>.Â  The <a href="http://pygowave.net/" target="_blank">PyGoWave</a> crew will be introducing <a href="http://livestream.com/pygowave" target="_blank">PyGoWave via LiveStream</a>.</p>
<p>From 1.30pm to 2.30pm EST there will be a shared <a href="http://pygowave.net/" target="_blank">PyGoWave</a>/AR Wave session <a href="http://www.ardevcamp.org/wiki/index.php?title=Main_Page" target="_blank">with Mountain View</a> (if bandwidth permits).</p>
<p>The Skype conference will be at ardevcampnyc. To participate in Wave, please join the public Wave, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252BH83lcj6RA" target="_blank">AR Wave: AR DevCamp Session</a>. There is also an <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">AR Wave Wiki up now &#8211; see here</a>.</p>
<p><a href="tridarras.com/#http://www.dimitridarras.com/images/dd_work.jpg" target="_blank">Dimitri Darras </a>(avatar Dimitri Illios) is working on streaming the AR DevCampNYC sessions into Second Life,Â  <a href="http://slurl.com/secondlife/Ambleside/228/247/25" target="_blank">SLURL here</a>.</p>
<p>Thomas has done a very nice introduction and FAQ below. This should help people new to this project get up to speed quickly.</p>
<p>There are already several Waves that show the history of this project including: <a href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252Bhvk2Fj3wB" target="_blank">AR Wave: Augmented Reality Framework Development</a>, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252BeyLQLb4ED" target="_blank">AR Wave Use Cases</a>, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252Bok4URyFyR" target="_blank">PyGoWave AR Tech Discussion</a>, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252BJAcNzz16A" target="_blank">AR Wave Augmented Reality Wave Development</a>, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252B0VnNxxoOB.1" target="_blank">AR Wave / Muku Organization and Admin</a>.</p>
<p>Also I have several posts for people interested in more of the background, including: <a title="Permanent Link to The Next Wave of AR: Mobile Social Interaction Right Here, Right Now!" rel="bookmark" href="../../2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/">The Next Wave of AR: Mobile Social Interaction Right Here, Right Now!</a>, <a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">AR Wave: Layers and Channels of Social Augmented Experiences</a>, <a title="Permanent Link to Total Immersion and the &#8220;Transfigured City:&#8221; Shared Augmented Realities, the &#8220;Web Squared Era,&#8221; and Google Wave" rel="bookmark" href="../../2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/">Total Immersion and the &#8220;Transfigured City:&#8221; Shared Augmented Realities, the &#8220;Web Squared Era,&#8221; and Google Wave</a>.</p>
<p>Thomas uses the term Arn (augmented reality network), which is one of the candidate names for the project; Muku (crest of a wave) is another suggestion. Thomas&#8217;s intro and FAQ below can also be found <a href="http://lostagain.nl/testSite/projects/Arn/information.html" target="_blank">here</a>.</p>
<h3><strong>What is the AR Wave Project?</strong></h3>
<p>In simple terms, it&#8217;s a protocol, currently in development, for storing <a id="zblc" title="geolocated" href="http://en.wikipedia.org/wiki/Geolocation">geolocated</a> data on Wave servers.</p>
<p>We believe this will help lay the foundations for an open, universally accessible, and decentralised system for shared augmented reality overlays which various clients can connect to and use.</p>
<p>This AR Network should spark much more rapid adoption of AR technologies, give existing browsers more functionality, and provide the network infrastructure that could one day allow many of the fictional depictions of AR to become a reality.</p>
<p><strong>The AR Network.</strong></p>
<p>When we speak of a future AR Network, we mean one as universal and as standard as the internet. One where people can connect from any number of devices, and without additional downloads, experience the majority of the content.</p>
<p>Where people can just point their phone, webcam, or pair of AR glasses anywhere a virtual object should be, and they will see it. The user experience is seamless; AR comes to them without them needing to &#8220;prepare&#8221; their device for it.</p>
<p>The Arn should be an inclusive and open platform to which any number of devices can connect, and on which anyone can make and host their own location-specific models or data.</p>
<p>It should allow people to communicate both publicly and privately, and not have their vision constantly cluttered with things they don&#8217;t want to see.</p>
<p>This is our vision, and we think a Wave protocol will help it become a reality.</p>
<p><strong>Why Wave?</strong></p>
<p>Wave combines the advantages of real-time communication with those of persistent data hosting. It is both like IRC and like a wiki. It allows anyone to create a Wave and share it with anyone else. It allows Waves to be edited at the same time by many people, or used as a private reference for just one person.</p>
<p>These are all incredibly useful properties for any AR experience &#8211; more so because Wave is open. Anyone can make a server or client for Wave. Better yet, these servers will exchange data with each other, providing a seamless world for the user: a single login will let you browse the whole world of public waves, regardless of who&#8217;s providing or hosting the data. Wave is also quite scalable and secure: data is only exchanged when necessary, and will stay local to just one server if no one else needs to view it.</p>
<p>Wave allows bots to run on it, allowing blips in a wave to be automatically updated, created or destroyed based on any criteria the coders choose. Wave even allows playback of all edits since the wave was created.</p>
<p>For all these reasons and a few more, Wave makes a great platform for AR.</p>
<p><strong>How?</strong></p>
<p>In basic terms, we will devise a standard way to geolocate a bit of data and store it as a <a id="u0cd" title="Blip" href="http://google.about.com/od/b/g/google_wave_blip.htm">Blip</a> within a wave.</p>
<p>This data could be a 3d mesh, a bit of text, or even a piece of audio.</p>
<p>Then various clients on various devices could log on, locate, interpret and display this data as they see fit.</p>
<p><a href="http://lostagain.nl/tempspace/PrototypeDiagram3_wave.html" target="_blank"><img class="alignnone size-medium wp-image-4962" title="Screen shot 2009-12-04 at 7.56.58 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/Screen-shot-2009-12-04-at-7.56.58-PM-300x168.png" alt="Screen shot 2009-12-04 at 7.56.58 PM" width="300" height="168" /></a></p>
<p><em>Click on image above to enlarge.</em></p>
<p>A typical example of this might be holding up your phone and seeing messages written by your friends and family in the locations where they are relevant.</p>
<p>You could see an arrow hovering over the café where you&#8217;re meeting a friend, notes above their flat saying whether they are in or out, or messages by shops reminding you to pick up the particular brand of cereal they like.</p>
<p>This data would be personal to just yourself and whoever you invite to share that wave with.</p>
<p>Other forms of data could be public, like city-maps, online games, or historical landmarks being recreated. Custom views of the world with data for entertainment, commercial, environmental or informative purposes.</p>
<p>The possibilities with geolocated data are endless, as are the various ways to display and make use of them.</p>
<p>One of the things I&#8217;m most passionate about is people being able to see many different types of data, both public and private, at the same time and from many different sources at once.</p>
<p>For instance, if you&#8217;re playing an AR game, why shouldn&#8217;t your chat window be viewable at the same time?</p>
<p>If you have skinned your environment with a custom view of the world, why shouldn&#8217;t you also see mapping or restaurant recommendations?</p>
<p>The ways to present these layers of data and toggle them on/off in the most intuitive and flexible ways will be a task for the client makers, and I&#8217;m sure we will see many innovations in those areas.</p>
<p>But using Wave at least provides the framework for having multiple information sources controlled by many different people, yet accessible, and user-submittable, via the same protocol.</p>
<p><strong>Who?</strong></p>
<p>This idea first sprouted from a paper I wrote focusing on the potential for IRC to be used for AR:</p>
<p><a id="ig44" title="http://www.lostagain.nl/testSite/projects/Arn/AR_paper.pdf" href="http://www.lostagain.nl/testSite/projects/Arn/AR_paper.pdf">http://www.lostagain.nl/testSite/projects/Arn/AR_paper.pdf</a></p>
<p>I suggested near the end that Wave might be a better alternative (using Google Wave was an idea Tish Shute, of Ugotrade, brought up in response to the Arn prototype design on IRC), and it quickly became apparent that Wave was a very suitable medium.</p>
<p>Since then, there was a lot of interest, and numerous people have offered to help.</p>
<p>In particular, the <a id="vms1" title="PygoWave" href="http://pygowave.net/blog/">PygoWave</a> team has recently been helping us out, as they have an existing server supporting a client/server (c/s) protocol, which is under active development.</p>
<p><strong>Where?</strong></p>
<p>You can join the general discussion here:<br />
<a id="wvja" title="Augmented Reality Wave Development" href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252BJAcNzz16A">Augmented Reality Wave Development</a></p>
<p>The technical side is here:<br />
<a id="qw95" title="Augmented Reality Wave Framework Development" href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252Bhvk2Fj3wB">Augmented Reality Wave Framework Development</a></p>
<p><strong>When?</strong></p>
<p>There&#8217;s lots still to do, and we are at an early stage.</p>
<p>Our current targets: (last updated 11/12/2009)</p>
<ul>
<li>Getting reading/writing of prototype ARBlips to the PygoWave server working. (The PygoWave team have already made a standalone client and have the protocol for this sorted!)</li>
<li>Establishing a minimal spec for ARBlips to be later expanded.</li>
<li>Writing a very simple prototype online client showing how to store/retrieve the data.</li>
<li>Expanding client to work for some use-cases.</li>
<li>Establishing a logo/branding for the project.</li>
</ul>
<p><strong>Other FAQs.</strong></p>
<p><strong>Where&#8217;s the catch?</strong></p>
<p>While we believe Wave is highly suitable for development, it has the drawbacks of being a new system with just a few servers worldwide, which (at the time of writing) have not yet been federated together.</p>
<p>Naturally, as a new technology, it&#8217;s likely to have some growing pains, and building new technology on other new technology will multiply that somewhat. The first pain is the lack of a standard client/server protocol. PygoWave has stepped in to the rescue a bit here, being not just one of the most developed Wave servers other than Google&#8217;s, but also leaping ahead with support for JSON-based c/s interaction. Google has stated they want the community to take the lead on a c/s protocol, so we are hoping they will adopt a JSON variant, or an XMPP one, and add it to the spec. We hope that, much as POP3/IMAP became standards for email server interaction, a similar standard will develop for Wave.</p>
<p>In the meantime we plan to keep the code for writing ARBlips somewhat abstracted so as to make it easy to adapt in future.</p>
<p>As for the newness of Wave and the other potential problems it will bring, we aren&#8217;t that worried, as it&#8217;s built on <a id="jnw1" title="XMPP" href="http://en.wikipedia.org/wiki/XMPP">XMPP</a>, which has already proved reliable.</p>
<p>The other catch is that we are unfunded, which slows development down considerably, as we have to fit it around our other jobs.</p>
<p><strong>I&#8217;m making my own AR Browser, and am slightly interested in maybe supporting you.</strong></p>
<p>We are naturally very keen for support, particularly from those with the skills and vision to give feedback on the proposed protocol. Specifically: what do you want stored in a blip?</p>
<p>That&#8217;s what&#8217;s important at this stage.</p>
<p>We don&#8217;t see the Arn as a replacement for existing browser systems at the moment. We don&#8217;t want to restrict innovation or development in this fast-developing market, as we are very impressed by what&#8217;s been achieved so far. In many ways our task is small in comparison to what&#8217;s already been accomplished.</p>
<p>However, we do believe the Arn will make a good addition to existing browser systems. It will allow users to contribute data and have social features without having to worry about accounts or hosting.</p>
<p>It will still be quite some work to support; new GUIs will need to be developed to make it easy to submit data from the devices, as well as to log in to waves.</p>
<p>However, we hope over time to build a set of example libs to make reading/writing ARBlips as easy as possible to implement in your software.</p>
<p>Perhaps a good way to think about it is that existing AR browsers are like word processors; supporting the Arn will be like adding support for *.txt &#8211; it doesn&#8217;t limit what you can do with your own format.</p>
<p><em>Eventually</em> we do hope ARBlips hosted on Wave will become the majority of AR data, and their functionality will be analogous to what the internet is today. We truly believe in the long run a standard is essential.</p>
<p>But for now we think merely getting a baseline format established for how AR data can be communicated will increase usability and usefulness, and help the market grow.</p>
<p><strong>Can I help?</strong></p>
<p>Sure.</p>
<p>We particularly need people with technical skills in relevant fields (both GWT/JavaScript web programming and C++/Qt standalone programming help are very welcome!).</p>
<p>But we also welcome people just with vision to help focus use-cases and to conceptualise what we want to be able to do with the system.</p>
<p>Please either join the relevant AR Waves or the <a href="http://arwave.wiki.zoho.com/HomePage.html">Wiki</a>.</p>
<p>We are especially interested in those with JSON and Comet experience &#8211; specifically, those with the ability to make standalone applications that read/write to a server using these methods.</p>
<p><strong>What type of data will an AR Blip store?</strong></p>
<p>This is still actively being decided, but essentially it&#8217;s a physical hyperlink.</p>
<p>A connection between a physical location (or object, see below) and a piece of data.</p>
<p>Specifically, we are thinking about the following fields:</p>
<p>Location in X,Y,Z,<br />
Coordinate System used for the above,<br />
Orientation,<br />
MIMEType <span style="color: #666666;">[the type of data stored]</span><br />
DataItself <span style="color: #666666;">[either an http link for 3D meshes and other larger data, or an inline text string if it&#8217;s just a comment]</span><br />
DataUpdateTimestamp <span style="color: #666666;">[so clients know if it&#8217;s necessary to redownload]</span><br />
Editors <span style="color: #666666;">[the user/s that edited/created this blip]</span><br />
ReferanceLink <span style="color: #666666;">[data needed to tie the object to a non-fixed location, such as an image to align it to an object in realtime]</span>,<br />
Metatags <span style="color: #666666;">[to describe the data]</span></p>
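<p>To make this concrete, here is a minimal sketch (in C++/Qt, to match the header at the end of this post) of one hypothetical ARBlip expressed as a flat key/value map, roughly how it might be stored as blip annotations. The key names and the comma-separated encodings are illustrative assumptions, not the finalised spec.</p>
<pre>
#include &lt;QMap&gt;
#include &lt;QString&gt;
#include &lt;QDebug&gt;

int main()
{
    // One hypothetical ARBlip as flat key/value annotations. The field names
    // follow the list above; the exact keys and encodings are still undecided.
    QMap&lt;QString, QString&gt; arBlip;
    arBlip["Location"]            = "40.7431,-73.9923,30.0";       // X,Y,Z (assumed encoding)
    arBlip["CoordinateSystem"]    = "EPSG:4326";                   // an OGC-style identifier (assumption)
    arBlip["Orientation"]         = "0,0,90";                      // roll, pitch, yaw in degrees
    arBlip["MIMEType"]            = "text/plain";
    arBlip["DataItself"]          = "Pick up the cereal I like!";  // inline text; a mesh would be an http link
    arBlip["DataUpdateTimestamp"] = "2009-12-04T19:56:58Z";
    arBlip["Editors"]             = "thomas@example.org";
    arBlip["Metatags"]            = "note,shopping";

    // A client would read the annotations back and decide how to render them.
    foreach (const QString &amp;key, arBlip.keys())
        qDebug() &lt;&lt; key &lt;&lt; "=" &lt;&lt; arBlip.value(key);
    return 0;
}
</pre>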
<p><strong>Are you purely tying stuff to fixed geolocations?</strong></p>
<p>Certainly not <img src="http://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /><br />
As part of the spec, we want people to be able to link data to dynamically moving objects, trackable by image or other methods.</p>
<p>The idea being that one day someone could link a piece of text or a 3D mesh to an image on a t-shirt they are wearing, or perhaps link a dynamically updating Twitter feed, or perhaps provide information on a product (based on its logo).</p>
<p>There&#8217;s a large number of possibilities for image-based linking alone, and that&#8217;s not even considering possibilities like linking RFIDs, or other forms of less precise but invisible binding data.</p>
<p>We need a lot of feedback from those companies already doing markerless tracking. What types of images do you need, ideally, to link a mesh to an object? Is one enough?</p>
<h3><strong>Summary of AR Wave Work to Date</strong></h3>
<p><strong>Purpose:</strong> To provide an open, distributed, and universally accessible platform for augmented reality. To allow the creation of augmented reality content to be as simple as making an HTML page, or contributing to a wiki.</p>
<p><strong>Specific Goal:</strong> To establish a method for geolocating digital data in physical space (or linking it to physical objects) using Wave as a platform.</p>
<p>(For justification as to why we are using Wave see: <a href="http://lostagain.nl/testSite/projects/Arn/information.html" target="_blank">our faq</a> )</p>
<p><strong>Wave as a platform</strong></p>
<p>We are developing on the <a title="PyGoWave" href="http://code.google.com/p/pygowave-server/" target="_blank">PyGoWave</a> server at the moment, but the goal is to be compatible with all Wave servers.</p>
<p>PyGoWave has already achieved an important part of enabling the project by being a wave server with a working and well-documented server protocol. This already allows both standalone and web-based clients to interface with it. See <a href="http://github.com/p2k/pygowave-qt">The PyGoWave Qt-Based Desktop Client</a>.</p>
<p>This is one of the reasons why we have chosen to develop for the Pygo server at this stage.</p>
<p>However, the overall goal of AR Wave is to have a framework compatible with all servers using the Wave Federation Protocol. As more wave servers get c/s protocols, ARblips (the data needed to geolocate objects) could be posted to and retrieved from various servers using the same client software. For this, a standard should emerge. Just as websites don&#8217;t have to be hosted on specific servers, neither should AR data need to be hosted on specific wave servers.</p>
<p>In order to reach our goal, there are a few very achievable steps involved &#8211; see below.</p>
<p><strong>Feedback</strong></p>
<p>We are still actively seeking feedback, so feel free to join the <a href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252Bhvk2Fj3wB">Wave discussions</a>, and see the history of how the specifications of the protocol evolved. You can also read the justification for some of the choices already made. Note that a new discussion for AR DevCamp will begin at <a href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252BH83lcj6RA">AR Wave: AR DevCamp Session</a>.</p>
<p>This will, of course, only be the first draft of the specification, and it is sure to develop much further in future.<br />
The important thing now is to make working prototypes while maintaining flexibility.</p>
<p>So what do we need to do?</p>
<p><strong>Steps :</strong></p>
<p><strong>* Establish the overall method &#8211; Done.</strong></p>
<p>Each Wave will be a layer on reality which an individual or a group can create. Each Blip in this Wave refers to either a small piece of inline data (like text) or a remote piece of larger data (like a 3D mesh), as well as the data needed to pinpoint it in either relative or absolute real space.<br />
We call these blips ARblips. They are simply blips that store the data necessary to augment a single object onto a specific bit of reality.</p>
<p>It is up to the clients how they interpret and display the data. They could interpret it as a simple 2D list of nearby objects, or as an advanced 3D overlay, whereby multiple waves from different sources could be viewed at once. What&#8217;s important is that there is a standard way to link the digital data to real-world space.</p>
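<p>As a hedged sketch of that client-side freedom, the fragment below shows one way a client might triage a single ARblip using just its MIMEType and Data fields: inline text is rendered directly, anything larger is fetched from its http link. The function and its behaviour are illustrative, not part of the spec.</p>
<pre>
#include &lt;QString&gt;
#include &lt;QDebug&gt;

// Hypothetical triage of one ARblip. Whether the result becomes an entry in a
// 2D list of nearby objects or a full 3D overlay is entirely up to the client.
void handleArBlip(const QString &amp;mimeType, const QString &amp;data)
{
    if (mimeType == "text/plain") {
        qDebug() &lt;&lt; "Render inline label:" &lt;&lt; data;          // small data lives in the blip itself
    } else if (data.startsWith("http")) {
        qDebug() &lt;&lt; "Fetch" &lt;&lt; mimeType &lt;&lt; "from" &lt;&lt; data;   // meshes, audio etc. are stored as links
    } else {
        qDebug() &lt;&lt; "Unknown payload, skipping";             // forward compatibility: ignore, don't crash
    }
}
</pre>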
<p>* Establishing the specification for the ARblip &#8211; In progress.<br />
We have a good idea of what needs to be stored in an ARblip, and we have hammered out a rough format.<br />
The data might be stored as blip annotations, but this has yet to be finalised.<br />
A rough outline of the type of data stored can be seen in the C++/Qt header for ARblip data at the end of this document.</p>
<p>* Storing and retrieving these pieces of ARblip data on the PyGo server &#8211; In progress.<br />
The Pygowave team has made some excellent libraries that should make reading and writing data on the PyGoWave server trivial for those with C++ skills.<br />
This, however, is a critical step, so more developers with C++ skills are very welcome!</p>
<p>* Making the above client mobile, and using a device&#8217;s GPS to place the data &#8211; Not started.<br />
The next step would be to port the code to a mobile phone and use its GPS input to post geolocated data and view what others have posted. This would be a fairly simple and not too useful app in itself. However, it would mark the first time anyone could post AR data and anyone could view it, all using open-source infrastructure.<br />
As a bonus, because we are using Wave infrastructure, updates to any ARblip should appear in near realtime.</p>
<p>* To continue with the proof of concept, we would like to have simultaneous wave input from a PC and a mobile phone at the same time &#8211; Not started.<br />
For example, someone could post a pin via the Google Maps API and have that data posted to an ARBlip in a wave. Someone logged into that wave on their mobile device would then see the posted data appear.<br />
Moreover, we hope that when the Google Maps pin is dragged about, the mobile phone viewer will, with just a few seconds&#8217; lag, see its location updated in real time.</p>
<p>We hope to make a modest yet practical app at this stage.</p>
<p>* After all this, we can go on to the interesting things:<br />
3D data, camera overlays, data fixed to objects, and many more. There&#8217;s plenty of existing software using these features (such as Wikitude and Layar), and some of it is even open source (like Gamaray and Flashkit). The open source code can give us a leg-up. However, we prefer to establish the protocol first, so naturally these fancy features aren&#8217;t a priority for us. Rather, we think our energy is better spent establishing the protocols and infrastructure so that other people can build more advanced bits of software more easily.</p>
<p>However, once our primary goals are established, we will look to make an open source augmented reality browser ourselves, which will surely include many of these features.</p>
<p>Overall, we hope once we have a simple proof of concept, there will be many groups, both existing and new, wanting to use this Wave system for their own apps, games and data.</p>
<p><strong>Conclusion</strong>:<br />
Really it&#8217;s now all about growing the community. We hope that as soon as we show how great Wave can be for augmented reality, lots of individuals and teams will start making their own clients to read/write geolocated data.<br />
Overall, we don&#8217;t think anything we make will be that impressive in itself. That&#8217;s not our goal.<br />
We instead hope that our project will enable AR content to be made as easily as web content &#8211; that games, information and apps can be created without the creators having to worry about the infrastructure behind it.</p>
<p><strong>Technical information</strong></p>
<p><strong>Current ARBlip header file</strong></p>
<p>(Below is a C++/Qt header file for an ARBlip object that should illustrate the data being stored.)</p>
<hr />
<pre>
class arblip
{
public:
    arblip();
    ~arblip();
    arblip(QString, QString, double, double, double, int, int, int, QString);

    QString getDataAsString();
    QString getEditors();
    QString getRefID();
    QString getXAsString();
    QString getYAsString();
    QString getZAsString();
    bool isFacingSprite();

private:
    // ID reference. This would be a unique identifier for the blip.
    // Presumably the same as Wave uses itself.
    QString ReferanceID;

    // Last editor(s).
    QString Editors;

    int PermissionFlags = 68356;  // default 664 octal = rw-rw-r--

    // Location.
    double Xpos;   // left/right
    double Ypos;   // up/down
    double Zpos;   // front/back

    // Orientation.
    // Names, ranges and directions are taken from aeronautics.
    // If no orientation is specified, it's assumed to be a facing sprite.

    // Roll: rotation around the front-to-back (z) axis (lean left or right).
    // Range +/- 180 degrees, with + values moving the object's right side down.
    int Roll;

    // Pitch: rotation around the left-to-right (x) axis (tilt up or down).
    // Range +/- 90 degrees, with + values moving the object's front up (looking up).
    int Pitch;

    // Yaw: rotation around the vertical (y) axis (turn left or right).
    // Range +/- 180 degrees, with + values moving the object's face to its right.
    int Yaw;

    // If no rotation is specified, this should default to true.
    // If set to true when a rotation is set, the rotation is kept
    // relative to the viewer, not relative to the earth.
    bool FacingSprite;

    // Data format.
    QString DataMIME;

    // The coordinate system used. This should be a string representing an
    // Open Geospatial Consortium standard. It could be earth-relative for GPS
    // coordinates, or in some cases relative to the viewer, for data to be
    // displayed in a HUD-like style.
    QString CordinateSystemUsed;

    // The data itself.
    QString Data;

    // Time the Data was last changed.
    // Note: a separate timestamp should be used for updates that don't affect
    // the data itself (such as when a 3D object moves but its mesh isn't changed).
    QString DataUpdatedTimestamp;

    // Data metadata.
    QMap&lt;QString, QString&gt; Metadata;
};
</pre>
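<p>For completeness, here is a minimal usage sketch of the class above. The header only gives the constructor&#8217;s types, so the argument order shown (ID, editors, X, Y, Z, roll, pitch, yaw, data) is an assumption for illustration, and an implementation of the class is assumed to exist.</p>
<pre>
#include &lt;QDebug&gt;
// #include "arblip.h"   // the header above, plus its (not yet written) implementation

int main()
{
    // Argument order is assumed: ID, editors, X, Y, Z, roll, pitch, yaw, data.
    arblip note("blip-001", "thomas@example.org",
                40.7431, -73.9923, 30.0,    // location
                0, 0, 90,                   // orientation in degrees
                "Meet you at the cafe!");   // inline data

    qDebug() &lt;&lt; note.getXAsString() &lt;&lt; note.getYAsString()
             &lt;&lt; note.getZAsString() &lt;&lt; note.getDataAsString();
    return 0;
}
</pre>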
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/12/04/ar-wave-project-an-introduction-and-faq-by-thomas-wrobel/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>The Next Wave of AR: Mobile Social Interaction Right Here, Right Now!</title>
		<link>http://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/</link>
		<comments>http://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/#comments</comments>
		<pubDate>Fri, 20 Nov 2009 04:53:07 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[AR browsers]]></category>
		<category><![CDATA[AR Dev camp]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[calo]]></category>
		<category><![CDATA[mobile social]]></category>
		<category><![CDATA[mobile social interaction utility]]></category>
		<category><![CDATA[open distributed augmented reality]]></category>
		<category><![CDATA[pygowave]]></category>
		<category><![CDATA[real time internet]]></category>
		<category><![CDATA[siri]]></category>
		<category><![CDATA[smart things]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[social augmented reality]]></category>
		<category><![CDATA[The Copenhagen Wheel]]></category>
		<category><![CDATA[the internet of things]]></category>
		<category><![CDATA[the outernet]]></category>
		<category><![CDATA[the sentient city]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[Web Squared]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4869</guid>
		<description><![CDATA[The Next Wave of AR: Mobile Social Interaction, Right Here, Right Now! View more presentations from Tish Shute. Click on the image below or here to watch this presentation and others from Momo13]]></description>
				<content:encoded><![CDATA[<div id="__ss_2542526" style="width: 425px; text-align: left;"><a style="font:14px Helvetica,Arial,Sans-serif;display:block;margin:12px 0 3px 0;text-decoration:underline;" title="The Next Wave of AR: Mobile Social Interaction, Right Here, Right Now!" href="http://www.slideshare.net/TishShute/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now-2542526">The Next Wave of AR: Mobile Social Interaction, Right Here, Right Now!</a><object style="margin:0px" classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="425" height="355" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowScriptAccess" value="always" /><param name="src" value="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=thenextwaveofar2-091120000046-phpapp01&amp;stripped_title=the-next-wave-of-ar-mobile-social-interaction-right-here-right-now-2542526" /><param name="allowfullscreen" value="true" /><embed style="margin:0px" type="application/x-shockwave-flash" width="425" height="355" src="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=thenextwaveofar2-091120000046-phpapp01&amp;stripped_title=the-next-wave-of-ar-mobile-social-interaction-right-here-right-now-2542526" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<div style="font-size: 11px; font-family: tahoma,arial; height: 26px; padding-top: 2px;">View more <a style="text-decoration:underline;" href="http://www.slideshare.net/">presentations</a> from <a style="text-decoration:underline;" href="http://www.slideshare.net/TishShute">Tish Shute</a>.</div>
<p>Click on the image below or <a href="http://www.mobilemonday.nl/talks/tish-shute-the-next-wave-of-ar/" target="_blank">here to watch</a> this presentation and others from <a href="http://www.mobilemonday.nl/">Momo13</a>.</p>
<p><a href="http://www.mobilemonday.nl/talks/tish-shute-the-next-wave-of-ar/" target="_blank"><img class="alignnone size-medium wp-image-4876" title="Screen shot 2009-11-20 at 1.32.24 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/Screen-shot-2009-11-20-at-1.32.24-PM-300x167.png" alt="Screen shot 2009-11-20 at 1.32.24 PM" width="300" height="167" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Augmented Reality &#8211; Bigger than the Web: Second Interview with Robert Rice from Neogence Enterprises</title>
		<link>http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/</link>
		<comments>http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/#comments</comments>
		<pubDate>Mon, 03 Aug 2009 23:24:12 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR Platform for Platforms]]></category>
		<category><![CDATA[ARConsortium]]></category>
		<category><![CDATA[ARToolkit]]></category>
		<category><![CDATA[Augmented Reality Browsers]]></category>
		<category><![CDATA[augmented reality platforms]]></category>
		<category><![CDATA[augmented reality SDKs]]></category>
		<category><![CDATA[augmented reality toolsets]]></category>
		<category><![CDATA[Dr Chevalier]]></category>
		<category><![CDATA[Gavin Starks]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Green Tech AR]]></category>
		<category><![CDATA[Imagination AR Engine]]></category>
		<category><![CDATA[iphone and augmented reality]]></category>
		<category><![CDATA[iphone augmented reality]]></category>
		<category><![CDATA[iphone Video API and augmented reality]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Lumus]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[markers and Webcam AR]]></category>
		<category><![CDATA[Mobile AR]]></category>
		<category><![CDATA[MoMo]]></category>
		<category><![CDATA[nathan freitas]]></category>
		<category><![CDATA[Neogence Enterprises]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[Unifeye Augmented Reality]]></category>
		<category><![CDATA[wearable displays for augmented reality]]></category>
		<category><![CDATA[Web Squared]]></category>
		<category><![CDATA[Wikitude]]></category>
		<category><![CDATA[World as a Platform]]></category>
		<category><![CDATA[World Browsers]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4184</guid>
		<description><![CDATA[I first started talking to Robert Rice, CEO of Neogence Enterprises, Chairman of the AR Consortium, in 2008. Robert was already actively working on creating the world&#8217;s first global augmented reality network. But it took a few months before what Robert had said to me about the impending explosion of augmented reality into our lives really [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/whowhowhere.jpg"><img class="alignnone size-medium wp-image-4186" title="Questions and Answers signpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/whowhowhere-300x199.jpg" alt="Questions and Answers signpost" width="300" height="199" /></a></p>
<p>I first started talking to <a href="http://www.curiousraven.com/about-me/" target="_blank">Robert Rice</a>, CEO of <a href="http://www.neogence.com/#/home" target="_blank">Neogence Enterprises</a> and Chairman of the <a href="http://docs.google.com/AR%20Consortium">AR Consortium</a>, in 2008. Robert was already actively working on creating the world&#8217;s first global augmented reality network. But it took a few months before what Robert had said to me about the impending explosion of augmented reality into our lives really sank in &#8211; &#8220;this is going to be much bigger than the Web!&#8221; he extolled.</p>
<p>By January 2009 I was convinced, and I posted my first interview with Robert, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it OMG Finally for Augmented Reality?..&#8221;</a> As I mentioned in the intro, I had recently tried out <a href="http://www.wikitude.org/" target="_blank">Wikitude</a> and <a title="Nat Mobile Meets Social DeFreitas" href="http://openideals.com/" target="_blank">Nathan Freitas&#8217;s</a> graffiti app on the streets of New York City, and I was impressed. Now, 7 months later, Augmented Reality has not disappointed: there is an explosion of new applications, and the arrival of some of the first commercial and practical toolsets, SDKs, and APIs for aspiring developers.</p>
<p>For more on this see my previous post, <a title="Permanent Link to Augmented Reality&#8217;s Growth is Exponential: Ogmento &#8211; &#8220;Reality Reinvented,&#8221; talking with Ori Inbar" rel="bookmark" href="../../2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/">Augmented Reality&#8217;s Growth is Exponential: Ogmento &#8211; &#8220;Reality Reinvented,&#8221; talking with Ori Inbar</a>, which is an introduction to my series of interviews with the key players in augmented reality and founding members of the <a href="http://www.arconsortium.org/" target="_blank">ARConsortium</a> &#8211; <a href="http://www.int13.net/en/" target="_blank">Int13</a>, <a href="http://www.metaio.com/" target="_blank">Metaio</a>, <a href="http://www.mobilizy.com/" target="_blank">Mobilizy</a>, <a href="http://www.neogence.com/" target="_blank">Neogence Enterprises</a>, <a href="http://ogmento.com/">Ogmento</a>, <a href="http://www.sprxmobile.com/" target="_blank">SPRXmobile</a>, <a href="http://www.tonchidot.com/" target="_blank">Tonchidot</a>, and <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a>.</p>
<p>As I mentioned before, <a href="http://www.sprxmobile.com/about-us/" target="_blank">Maarten Lens-FitzGerald</a> of <a href="http://www.sprxmobile.com/" target="_blank">SPRXmobile</a> told me the other day that my first <a href="http://docs.google.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">Interview with Robert Rice</a>, in January of this year, was a key inspiration for SPRXmobile to get started on the development of <a href="http://layar.eu/" target="_blank">Layar &#8211; a Mobile Augmented Reality Browser</a>. Much more on Layar and Wikitude &#8211; world browser &#8211; in my upcoming interviews with <a href="http://www.sprxmobile.com/about-us/" target="_blank">Maarten Lens-FitzGerald</a> and <a href="http://www.mamk.net/" target="_blank">Mark A. M. Kramer</a>, respectively.</p>
<p>Recently, both Layar and Wikitude earned a mention in the white paper by Tim O&#8217;Reilly and John Battelle, <a href="http://www.web2summit.com/web2009/public/schedule/detail/10194" target="_blank">Web Squared: Web 2.0 Five Years On</a>. Web Squared is essential reading not only because it covers the underlying technological shifts of &#8220;Web Meets World,&#8221; which augmented reality is a vital part of, but, crucially, because it focuses on how there is a new opportunity for us all:</p>
<p><strong>&#8220;The new direction for the Web, its collision course with the physical world, opens enormous new possibilities for business, and enormous new possibilities to make a difference on the worldâ€™s most pressing problems.&#8221;</strong></p>
<p>I am currently working on a post on Green Tech AR, one of the areas where augmented reality can play an important role &#8220;in solving the world&#8217;s most pressing problems.&#8221; Augmented Reality has a lot to offer Green Tech development. As <a href="http://twitter.com/AgentGav" target="_blank">Gavin Starks</a> of <a href="http://www.amee.com/" target="_blank">AMEE</a> said at <a href="http://wiki.oreillynet.com/eurofoo06/index.cgi" target="_blank">Euro Foo in 2006</a>, &#8220;climate change would be much easier to solve if you could see CO2.&#8221;</p>
<p>But really useful Green Tech AR requires markerless object recognition that is still hard to do (going beyond feature tracking and modified marker recognition), and a tight alignment of media/graphics with physical objects, in addition to quite a high level of instrumentation of the physical world. And for Green Tech AR to really shine, we are going to need innovators like Robert Rice who are working on, and solving, multiple really hard problems like:</p>
<p><strong> &#8220;</strong><strong>privacy, media persistence, spam, creating UI conventions, security, tagging and annotation standards, contextual search, intelligent agents, seamless integration and access of external sensors or data sources, telecom fragmentation, privilege and trust systems, and a variety of others</strong><strong>.&#8221;</strong></p>
<p>Recently Robert Rice <a id="ph56" title="presented" href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank">presented</a> at <a href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank">MoMo</a> Amsterdam. Here is a drawing of him in action (<a href="http://www.flickr.com/photos/wilgengebroed/3591060729/" target="_blank">picture below</a> from <a title="Link to wilgengebroed's photostream" rel="dc:creator cc:attributionURL" href="http://www.flickr.com/photos/wilgengebroed/"><strong>wilgengebroed</strong></a>&#8216;s Flickr Stream).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRiceMoMOdrawing.jpg"><img class="alignnone size-medium wp-image-4185" title="RobertRiceMoMOdrawing" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRiceMoMOdrawing-300x184.jpg" alt="RobertRiceMoMOdrawing" width="300" height="184" /></a></p>
<p>In his Twitter feed (<a href="http://twitter.com/robertrice" target="_blank">@RobertRice</a>), Robert Rice reminds us: &#8220;By the way folks, what you see out there now as &#8216;augmented reality&#8217; is not what it is going to be in two years.&#8221; Robert plans to show the first public demo of his &#8220;platform for platforms&#8221; at <a href="http://gamesalfresco.com/ismar-2009/ismar-08/" target="_blank">ISMAR 2009</a>.</p>
<p>Robert is currently writing up a series of White Papers. I got a preview of the first, &#8220;The Future of Mobile &#8211; Ubiquitous Computing and Augmented Reality.&#8221; Robert points out, <strong>&#8220;AR through the lens of the mobile industry and ubiquitous computing is almost overwhelming compared to AR as marker-based marketing campaign.&#8221;</strong></p>
<p>I asked Robert what the key take-aways are for investors interested in the augmented reality field at the moment:</p>
<p><strong>&#8220;First, Mobile AR is going to be bigger than the web. Second, it is going to affect nearly every industry and aspect of life. Third, the emerging sector needs aggressive investment with long term returns. Get-rich-quick startups in this space will blow through money and ultimately fail. We need smart VCs to jump in now and do it right. Fourth, AR has the potential to create a few hundred thousand jobs and entirely new professions. You want to kick start the economy or relive the golden days of 1990s innovation? Mobile AR is it.</strong></p>
<p><strong>Don&#8217;t be misguided by the gimmicky marketing applications now. Look ahead, and pay attention to what the visionaries are talking about right now. Find the right idea, help build the team, fund them, and then sit back and watch the world change. Also, AR has long term implications for smart cities, green tech, education, entertainment, and global industry. This is serious business, but it has to be done right. I&#8217;m more than happy to talk to any venture capitalist, angel investor, or company executive that wants to get a handle on what is out there, what is coming, and what the potential is. Understanding these is the first step to leveraging them for a competitive edge and building a new industry. Lastly, AR is not the same as last decade&#8217;s VR.&#8221;</strong></p>
<h3>Talking with Robert Rice</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRicepic.jpg"><img class="alignnone size-medium wp-image-4195" title="RobertRicepic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRicepic-201x300.jpg" alt="RobertRicepic" width="201" height="300" /></a></p>
<p><em><a href="http://www.flickr.com/photos/vannispen/3586765514/in/set-72157619022379089/" target="_blank">Picture of Robert Rice</a> at <a href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>MoMo</span></a> from <a href="http://www.flickr.com/photos/vannispen/"><strong>Guido van Nispen</strong></a>&#8216;s Flickr Stream</em></p>
<p><strong>Tish Shute:</strong> So perhaps we better start with an update on state of play with Neogence?</p>
<p><strong>Robert Rice:</strong> Neogence is doing well actually. We don&#8217;t talk much about the fact that we are still a small startup and we face a lot of the usual obstacles related to that and being a small team. Fundraising has been extra difficult, mostly because people are just now beginning to see the potential in AR, but that is still colored by perceptions based on a lot of the gimmicky AR ad campaigns out there. Still, it is better than it was two years ago, when the idea of an AR startup was a bit of a joke to a lot of VCs we talked to. However, we do have an agreement from a new venture fund in Europe (which we can&#8217;t talk about yet) for our first round of funding, but we don&#8217;t expect to close that for several months.</p>
<p>If all goes well, we hope to debut our first public demo at ISMAR 2009 in Orlando to select individuals and a few press folks. We might release a few viral videos before then that are conceptual and about what we are building in the long run, <span>but that depends on how things go over the next several weeks</span>.</p>
<p>We are also very active in looking for and building strategic partnerships and relationships with other companies, and this is not restricted to the augmented reality or mobile sector. As I have said before, we are looking at this as a long term business venture and the industry as something that will be bigger than the web itself within ten years. We are doing typical contract work and custom AR solutions to keep the cash flow going and build up the corporate resume a bit. So, if you want something done, and better than the stuff you are seeing now with all of the generic &#8220;look at our brand in AR with markers and a webcam&#8221; you should definitely give us a call.</p>
<p style="margin-left: 0pt; margin-right: 0pt;"><strong>Tish Shute:</strong> Just to clarify because most of the recent press has been about browser type AR like Wikitude and Layar which are not in the purist sense AR &#8216;cos they do not have graphics tightly linked to physical world. Neogence, if I am correct, is focused on building a true AR platform in the sense I just described?</p>
<p><strong>Robert Rice: </strong>Hrm, I have argued with a few others about the actual definition of AR. Some people prefer a narrow and limiting view (3D overlaid on video), but I think in terms of the market and the end-user, it is better to have a wider definition. In that sense, AR is purely the blend of real and virtual, with or without full 3D overlaid on video. If we go with that, then Wikitude, Layar, Sekai, NRU, and others all fit into the AR definition.</p>
<p>Anyway, you are correct. We are building a true platform for AR, and this is quite different from what others are marketing as AR browser &#8220;platforms.&#8221;</p>
<p>There are a few problems with the &#8220;AR Browsers&#8221; approach that no one seems to be noticing. One is that they are all trying to get people to build new applications for their browsers, when they should be trying to get people to create content that they can share and browse.</p>
<p>Second, someone using Layar is not going to see anything that is designed for Sekai or Wikitude.</p>
<p>Third, the experiences are generally for one user. While I love all of these guys and think each of the teams has some real talent on it, the model is flawed until someone using Wikitude can see the same thing that someone using Layar or Sekai Camera is seeing (provided they are in the same physical location).</p>
<p>While we are working on our own client-side technologies that we hope will be useful and integrated with every mobile device and AR browser out there, our core focus is on connecting everything and everyone together, and facilitating the growth of the industry with the tools to create content, applications, and so forth. We want to solve the really difficult technical problems (some of which most people haven&#8217;t even considered yet, because of the perspective they are looking at the potential of AR with), and make it easy for everyone else to do the cool stuff. We want to be the facilitators.</p>
<p>If you really want an idea of where we are going or some of what has inspired us, you have GOT to read Dream Park, Rainbows End, and The Diamond Age. If you have heard me speak anywhere or read my blog, you know that I am continually suggesting these and others.</p>
<p>Anyway, short answer, yes, we are building a true platform for ubiquitous mobile augmented reality, and we are absolutely the first to be doing so. I hope to demo some of this in October at ISMAR, with a full commercial launch next year (10/10/10 at 10:10am &#8211; hehe, seriously). We will probably launch a website soon for people to start signing up and building a community now (especially if you want in on the beta testing of the whole kibosh).</p>
<p><strong>Tish:</strong> So just to clarify, how will Neogence&#8217;s approach differ from, and fit into, the growing world of Augmented Reality tools that we have now, e.g., <a href="http://www.hitl.washington.edu/artoolkit/" target="_blank">ARToolkit</a>, <a href="http://www.imagination.at/en/?Projects:Scientific_Projects:MARQ_-_Mobile_Augmented_Reality_Quest" target="_blank">Imagination</a>, <a href="http://www.metaio.com/products/" target="_blank">Unifeye</a>?</p>
<p><strong>Robert:</strong> I guess you could say that we are trying to build the infrastructure for the global augmented reality network. This could be viewed as a service, or even a platform for platforms. If Neogence does its job right, anything you create using ARtoolkit, Unifeye, or Imagination would be applications you could <span>ultimately link to, integrate with, or deploy on or through</span>, what we are building, and not be tied to a specific set of hardware, browser, or walled garden.</p>
<p><strong>Tish: </strong>You mention Neogence is going to provide a platform for platforms. Without knowing the details, that sounds like a lot of centralization, which prompts the inevitable question: &#8220;Who owns the data?&#8221; Do you think other AR applications or providers would resist a &#8220;Platform for Platforms&#8221;? I know the potential centralization power of Google Wave has already got people talking about these issues (one of the comments on my recent blog post was about how the Google Wave protocol may be interesting for at least some parts of augmented reality communication).</p>
<p><strong>Robert:</strong> It really depends on perception and how we end up building it. We aren&#8217;t talking about creating a closed system. As far as who owns the data, it depends on what data we are talking about. For the most part, I think that if the end-user creates something, they should own it and have control over it. They should also be able to do what they want with it, independent of everything else.</p>
<p>This is one thing that proponents of the smart cloud and the thin/dumb client don&#8217;t like to talk about. It sounds great on paper, but when you start thinking about it, all that does is strip away power from the end user. Case in point&#8230; Amazon recently wiped every copy of George Orwell&#8217;s 1984 from all Kindle devices. They claimed they didn&#8217;t have rights to distribute/publish it and it was available by accident. The scary thing, though, is that they literally went into every Kindle out there, found copies, and deleted them.</p>
<p>How would you like it if Microsoft suddenly decided to delete every copy of Microsoft Office? Or every file that had a .doc extension? That is a huge violation&#8230; we feel like we own what is on our computers. But with the whole cloud thing, your data is at the mercy of whoever is running the cloud servers. No privacy, no ownership, no control. And if the system breaks, all you will have is a pretty dumb device that can&#8217;t do much on its own. Now, that isn&#8217;t to say that the technical merits and benefits of a cloud model aren&#8217;t worth pursuing; they are.</p>
<p>But I think there needs to be some hybrid model. Don&#8217;t dumb down my computer or my smart phone; let&#8217;s keep pushing how much these devices can do. We should take full advantage of centralized and distributed systems, but in a hybrid mashup sense. That is what we are pursuing with our AR platform, while trying to protect ownership and intellectual property rights of the end user.</p>
<p><strong>Tish: </strong>Earlier today I was telling you how impressed I was by Google Wave &#8211; it is quite mind blowing to experience massively multiplayer real time interaction on what will be an open internet wide platform &#8211; Wave is breaking new ground here and more than one person has mentioned its potential role in AR to me (see <a href="http://www.ugotrade.com/2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/" target="_blank">the comments to my recent post on Ogmento</a>).</p>
<p>I know you are a strong advocate of this kind of real-time shared experience being part of AR. But we are only just beginning to see it emerge via Wave on the existing web &#8211; what will it take to have this kind of real-time shared experience in AR? We got briefly into the thick client, thin client, cloud versus P2P discussions &#8211; what is your approach to delivering a massively shared real-time experience that, like Wave, is not confined to a walled garden?</p>
<p><strong>Robert:</strong> I&#8217;m not a fan of any of those models as stand-alone or mutually exclusive. Again, the hybrid model with the best of both worlds is key. In the early stages of the emerging industry, you are likely to see some walled gardens (or perhaps a walled garden of walled gardens&#8230;).</p>
<p>No one knows how things are going to turn out in the next five to ten years and few people are thinking about it actively. For us though, I favor Alan Kay&#8217;s quote (pardon the paraphrasing): &#8220;To accurately predict the future, invent it&#8221;. That&#8217;s what we are doing. In the short term, there will be plenty of experimentation in the industry and a lot of model testing.</p>
<p><strong>Tish: </strong>Do you think, though, that Wave protocols might be useful as at least part of the picture for AR standards? As you point out, open standards and open protocols are going to be vital for shared experiences of AR. Is it important to build off existing protocols to get the ball rolling, and what do you see as being the important early protocols for AR?</p>
<p><strong>Robert:</strong> I think for now, we will use a lot of existing protocols for communications and whatnot, as well as the usual standards for things like 3D models, animation, and so forth. This is only natural. However, as the industry and technology evolve, we will need entirely new ones. As far as I know there is no existing market standard for anything like the Holographic Doctor from Star Trek Voyager, and that type of thing is definitely in the pipeline for the future (sooner than you would think).</p>
<p><strong>Tish:</strong> All the excitement at the arrival of the browser-like mobile reality developments has been really great &#8211; I feel people are getting a taste for what it means to compute with anyone/anything, anywhere, and anytime.</p>
<p>Wikitude started the ball rolling. And with Wikitude.me it is the first to support user-generated content. Now there are Layar and Sekai Camera also. But as you mentioned to me in an earlier chat, with Layar and Wikitude opening up, &#8220;there are probably a half dozen other apps coming out in short order with similar functionality (even the AR Twitter thing has some similarities).&#8221;</p>
<p>What has been most exciting to you about these developments up to this point? What will these apps/platforms need to do to stand out in a crowd? Up to now, these browser-like AR experiences do nothing with close-by objects. Do you see &#8220;world browsers&#8221; with near-object recognition coming out in the near future? Could Wikitude do this with an integration of SREngine or Imagination?</p>
<p><strong>Robert:</strong> Yes, Wikitude or Layar could do this (integrate with something else for &#8220;near&#8221; AR), and it would be a step in the right direction. Tagging things in the real world is the basic functionality that will grow from text tags to photos, videos, 3D objects, and all sorts of other types of data and metadata. This gets really fun when that data is generated by the object itself. First is just giving people the ability to tag something and share that tag with their friends; everything else grows from that. This sort of functionality is probably the most exciting in terms of near-future advancement.</p>
<p>However, I think the idea of a stand-alone browser platform is a bit awkward&#8230; unless you also consider Firefox a website browser platform. After all, you can create widgets (applications) for it. Anyway, the point is having access to the same data&#8230; if you put three people in a room, one for each browser, they should see and experience the same content, although the interface might be different (based on what browser and of course which hardware they are using). This means there needs to be some communication between whatever servers they are storing their data on (meaning, user tags) and some standard for how those tags are created.</p>
<p>Of course, if all they are doing is grabbing the GPS coordinates of the nearest subway station and telling you how far it is and in what direction, then they should all be able to see the same thing, regardless of the platform. But then, that isn&#8217;t really interesting, is it? I could get the same info on a laptop with Google Maps.</p>
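<p>((Note: no such cross-browser tag standard existed at the time of this interview, so purely as an illustration of what Robert is asking for, here is a minimal, vendor-neutral tag record sketched in Python &#8211; the field names are hypothetical, not drawn from Wikitude, Layar, or Neogence.))</p>
<pre>
# A hypothetical, vendor-neutral "user tag" record (illustration only).
# If the servers storing user tags agreed on fields like these, any AR
# browser could fetch and render the same content.
example_tag = {
    "id": "tag-0001",                 # globally unique identifier
    "author": "tish",                 # who created the tag
    "anchor": {                       # where the tag lives in the world
        "lat": 40.748817,             # decimal degrees, WGS84
        "lon": -73.985428,
        "alt_m": 10.0,                # metres above ground (optional)
    },
    "content": {
        "type": "text",               # could be photo, video, 3d, ...
        "value": "Great pizza downstairs",
    },
    "visibility": ["friends"],        # a crude privilege system
}
</pre>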
<p>This is part of the problem right now though&#8230;no one seems to be thinking about the bigger picture much. All of the effort is either on making the next cool ad campaign for a car or a movie, or creating a tool to tell you where the nearest thingamajig is, but in a really cool fashion on a mobile device.</p>
<p>No one is talking much about filtering data, privilege systems, standards, third-party tools, interoperability, and so on. There is also little conversation about where hardware is going. Right now everyone is developing software based on what hardware is available. This needs to change so that hardware is developed to take advantage of new software coming out (this happened in the PC industry a while back, and growth accelerated dramatically).</p>
<p>These are some of the reasons why I led the effort to start the AR Consortium. We brought CEOs from 8 different AR companies and startups together to start talking about these issues. We are still getting organized and have plans to expand the membership to other companies, but we want to do this right and we aren&#8217;t rushing things. The important thing is that we have started and there is at least a line of communication open now, where there wasn&#8217;t before.</p>
<p>I would expect to see the early movers expanding what they offer very soon, and they will probably lead the way in the short term. Definitely keep an eye on the companies involved in the AR Consortium. There are lots of very smart and motivated people there, and they are far ahead of all the experimental dabbling in AR we are beginning to see on YouTube, Twitter, and elsewhere.</p>
<p><strong>Tish: </strong>When we had a discussion earlier about the basics of an AR platform and an AR browser, you talked about the difference between tools, a platform, and an AR browser &#8211; like Wikitude and Layar &#8211; which should be about features/functionality, e.g. creating treasure hunts, AR geocaching, invisible AR yellow sticky notes you can leave at restaurants you don&#8217;t like, etc. Also, you noted it should let you explore (browse) multiple formats and open content for AR &#8211; any data, information, or media that is linked to something in the real world, and the visualization of/interaction with the same.</p>
<p>Wikitude is a stepping stone to a true browser by your definition. But are we also seeing what you would define as an AR platform emerging &#8211; Unifeye, Wikitude (you can recap your definition if you like too)?</p>
<p>I think Wikitude hopes to provide the Lego blocks for augmented reality readers, browsers, applications, tools, and platforms?</p>
<p><strong>Robert:</strong> I expect some segmentation among the various AR companies that are out now, as they find their individual strengths and focus on them. Some will emphasize the client software (the browser), others will develop robust tools for creating content, SDKs/APIs will advance and facilitate rapid development of applications, etc. Neogence is ultimately working on the glue in the middle that ties everything together, makes it massively multiuser, persistent, and ubiquitous. Things like Unity3D have the potential to fill a need in the middleware space.</p>
<p><strong>Tish:</strong> I know <a href="http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/" target="_blank">Blair MacIntyre</a> (see my interview with Blair here) and others are using Unity3D as an AR client. Could Unity3D become increasingly important?</p>
<p><strong>Robert:</strong> It has the potential to become a favored middleware for providing the rendering layer. It already works nicely in regular browsers, and on several mobile platforms. Why code all the graphics rendering stuff from scratch when you can just license something and extend its features with AR functionality?</p>
<p><strong>Tish:</strong> Now to ask your own question back to you! There seems to be a lot of reason to think that, eventually, there will be the kind of access to the iPhone video API that augmented reality really requires &#8211; and by that I mean more than we will get with OS 3.1, which is rumored to deliver only about half of what we really need for AR on the iPhone: &#8220;not truly useful when you want to align video with graphics.&#8221; So:</p>
<p><em>&#8220;The iPhone&#8230; future or failure? Seemingly anti-developer stance regarding augmented reality, and only a sliver of the global market share. Are we letting the short-term glitz of Apple and the iPhone fad pull us in the wrong direction? Shouldn&#8217;t we be focusing on Symbian devices that have the lion&#8217;s share of the market? Or should we be looking more at either other OSs (WinMobile, Android), or not at all, and trying to create a new platform that is more MID and less smart phone with a hardware partner?&#8221;</em></p>
<p><strong>Robert:</strong> Apple and the iPhone are a bit problematic right now. There is no way I can go to a venture capitalist (at least in North America) and say, hey, we are building awesome AR applications for WinMobile or Symbian&#8230; they would either laugh or they simply wouldn&#8217;t get it. There is this false perception that the iPhone is the ultimate mobile device, the sexiest, and the only thing that people want. Everyone wants a demo on the iPhone, the media is mostly interested in iPhone developments, and the Apple fanatic market couldn&#8217;t give a fig about other devices. Other devices may have a larger market share or even better hardware, but we have to focus on the iPhone, at least in the demo stage, to get any market attention and traction worth the time and effort.</p>
<p>In the future though, unless Apple changes its stance with their SDK and APIs, and starts adding hardware that is key for mobile AR (beyond what is there now), the market will move on without them. This is a really easy decision to make given Apple&#8217;s draconian policies and the fact that their percentage of the global market is minuscule. The smart companies are looking at the whole picture and not putting all of their eggs in the Apple basket.</p>
<p>Of course, once the wearable displays are commercially viable everything changes. Wearable computers with small screens or even no screens are going to be what everyone wants. The interface will go from handheld touch screens to virtual holographic interfaces that you interact with using your bare hands.</p>
<p>So for now (the immediate short term), it&#8217;s all about the iPhone. Taking mobile ubiquitous AR to the global market and building for the future will be based on something else. Hardware risks becoming a commodity or a closed platform. Do you really want to buy the Apple iGlasses and only see AR content that is compatible, while your best friend has a pair of WinGlasses and sees something entirely different? No. The hardware and the client software (what people are calling the AR browser now) will become common, and it won&#8217;t matter what brand you use; they will all be accessing the same content.</p>
<p>But at least for the foreseeable future, we are building software for specific hardware, and the sexiest mobile on the block is the iPhone. The second someone comes out with something much better and the paradigm shifts (software driving hardware instead of vice versa), everything changes.</p>
<p><strong>Tish:</strong> How is the quest for sexy AR eyewear going.Â  I know we were checking out <a href="http://www.masunaga1905.jp/brand/teleglass/" target="_blank">the Japanese eyewear</a> with Adam Johnson from <a href="http://genkii.com/" target="_blank">Genkii</a> just now.Â  For the Neogence project &#8211; as you are going for a fully developed model of AR doesn&#8217;t this necessitate going beyond the iphone and getting the hardware companies moving on the eyewear?</p>
<p><strong>Robert:</strong> The guys making wearable displays really need to get off the pot and stop paying lip service to mobile AR. If they don&#8217;t do something quick, I, and others, are going to be scouring the planet looking for someone capable of building the lightweight, stylish wearable displays with transparent lenses we are begging for. We aren&#8217;t going to be waiting around for hardware anymore. The AR Pandora&#8217;s box has been opened. I should note that many of us (AR Consortium members) have had less than pleasant experiences or communications with the half dozen or so companies that are making wearable displays. Either their visual design is terrible, the materials feel flimsy, the field of view is limited, or the companies are preoccupied with other business and government contracts. Any attention to the growing AR market is an afterthought and in a few cases condescending. AR is going to be a billion dollar industry in a very short time, and these guys are just leaving money on the table. If they were smart, they would be begging the CEOs from the AR Consortium to fly out to their offices and collaborate on building a pair of wicked sick glasses. The smart phone manufacturers should be doing the same thing, but I have to say that they at least seem to have some ambition and zeal to create better devices, so I can&#8217;t really complain too much there.</p>
<p>Anyway, to answer the rest of your question, we have to assume that the hardware guys, especially regarding the eyewear, are going to take a long time to develop and release the things we need for the ultimate AR experience. So, our goal is to start building things now for what is available. That means scaling things down and handicapping what AR can do, so it works on the &#8220;sexy&#8221; iPhone. The important thing though is to start creating applications -now- so when the glasses are commercially available, there will be a wealth of content for people to access and use on day one.</p>
<p>As long as Apple isn&#8217;t playing nice, it is going to hurt everyone. Is it any surprise that they shut down Google Voice? There is a huge opportunity for someone to step up and leapfrog the rest of the industry. Give us the hardware and we will create amazing software for it. Don&#8217;t compete with the iPhone; surpass it.</p>
<p><strong>Tish: </strong>What is the state of play of current AR technology and toolkits?</p>
<p><strong>Robert:</strong> The current crop of AR technology and toolkits is absolutely critical for this stage of the industry, and everyone should be leveraging it as much as possible. I talk down marker- and image-based tracking a lot, but I also like to point out that it is the necessary baseline that the industry is going to be built on. The problem is that there is only so much you can do with marker-driven apps, and as creative people and marketing types start conceptualizing all sorts of cool stuff for the future, they risk setting the expectations too high. It is one thing to show someone the future; it is another to say this is the future and it&#8217;s happening right now. This is why I cringe every time I see a conceptual video presented as &#8220;our product DOES this&#8221; instead of &#8220;our product WILL DO this.&#8221; Something that simple can still cause the butterfly effect of raising expectations too high and contribute to overhyping.</p>
<p><strong>Tish: </strong>One of the things that seems very exciting about the new <a href="http://ogmento.com/" target="_blank">Ogmento</a> partnership is that experienced content producers <a id="squu" title="Brad Foxhoven" href="http://www.blockade.com.nyud.net:8080/about/about-blockade" target="_blank">Brad Foxhoven</a> and <a id="odvk" title="Brian Selzer" href="http://brianselzer.com/">Brian Selzer</a> from <a id="xow_" title="Blockade" href="http://www.blockade.com/" target="_blank">Blockade</a> are now taking a leading role in AR. What are the most exciting directions for content that you see emerging for AR in the next 12 months?</p>
<p><strong>Robert:</strong> Virtual (well, augmented) pets, and multiuser mobile AR games (2-4 people) are probably going to lead in the next 12 months for content. Easy, accessible, engaging.</p>
<p><strong>Tish: </strong>And are you at Neogence also involved in content partnerships?</p>
<p><strong>Robert:</strong> Yes, we are in the process of finalizing some content partnerships with an eye for long-term relationships. We are specifically looking for partners that want to find substantive ways to leverage AR technology, and not use it as a superficial gimmick or attraction that wears off after five minutes. I&#8217;m still cringing over the Procter &amp; Gamble Always campaign with AR.</p>
<p><strong>Tish:</strong> So back to your observation about some of the tricky problems re creating a true global massively multiuser, ubiquitous, mobile AR platform &#8211; what are some of the main obstacles to this mission, in your view (aside from getting investment!)?</p>
<p><strong>Robert:</strong> Trying to explain it to people. The technical problems we can handle or have already solved. But trying to communicate what exactly we are doing is still tough. Not because it is overly complicated, but rather because it is so new and different. People are having a hard time grasping augmented reality beyond marker/webcam.</p>
<p><strong>Tish: </strong>Which AR tools are most important right now?</p>
<p><strong>Robert:</strong> Content is critical right now, to show what the technology is capable of and to continue building the presence of augmented reality in the public mind. The big benefit of integrated/unified platforms now is speed of development for content. I think that the Flash port of ARToolKit (FLARToolKit) plus Papervision3D is rocking the planet right now. It is accessible, easy to learn, and lets people create something very quickly. More tools and middleware are coming out, and this increases options for designers and developers.</p>
<p><strong>Tish: </strong>What are your favorite Papervision apps?</p>
<p><strong>Robert: </strong>Hrm, I don&#8217;t have a favorite Papervision app just yet, although I think the tech is solid. I expect to see a lot of stuff built on that platform in the near future, especially as more ad agencies get on the bandwagon and start telling their IT guys to learn how to program Flash so they can make something. Have you seen www.ronaldchevalier.com? Not so much for the actual AR stuff, but because the whole thing is just brilliant. It&#8217;s exactly what some cult-figure spiritual guru would do with AR. I wish I had thought of it first, actually. This is probably one of the best -seamless- implementations of AR in marketing, where it fits&#8230; it isn&#8217;t just jammed in there for the sake of saying they used AR.</p>
<p><strong>Tish:</strong> Do you think Apple is going to open the iPhone to the full potential of augmented reality anytime soon &#8211; a lot of expectations have been raised?</p>
<p><strong>Robert:</strong> Apple is like that guy who has a party at his house and owns this really awesome, state-of-the-art home theater in his basement, but makes everyone watch a movie in the living room on a regular TV with a VCR.</p>
<p>They need to get over themselves and quit being a wet blanket. Otherwise, we are taking the beer and pizza we brought, and going to someone else&#8217;s house. <span>Sorry, the Apple thing is a bit of a sore point with me.</span></p>
<p><strong>Tish:</strong> But will people leave all that candy and soda at the App Store?</p>
<p><strong>Robert:</strong> I tell you what though, there is an opportunity for certain mobile phone manufacturers to give me a call and start talking to Neogence and the other members of the Consortium. We have some ideas and specs that could have a radical impact on the mobile market and stuff the iPhone in a box. Hint hint.</p>
<p><strong>Tish:</strong> So what is your vision for the AR Consortium? I know it kicked off with a letter to Apple about the video API. What is the next step? (There was a lot of hope that this year would be big for MIDs, but this really hasn&#8217;t happened yet &#8211; do you think there is hope for a MID takeoff despite the lousy economy?)</p>
<p><strong>Robert: </strong>MIDs? No, not yet. Smart phones are too lucrative and too hot. It isn&#8217;t time yet for the MID to go mainstream. For that to happen, there needs to be a driving need (cough, ubiquitous AR, cough).</p>
<p>The AR Consortium is mostly an informal affiliation. I expect that representatives from each member will probably meet at every significant conference to catch up over drinks. We are also going to be planning for our own members&#8217; conference at least once a year. That will happen after we expand the membership, though.</p>
<p>The main idea behind the consortium though was to open up a channel of communication between the CEOs so we could work together on standards, solving problems, collaborating, forming some partnerships, and using the collective to bang on the doors of companies like Apple and others. There is power in a group.</p>
<p><strong>Tish:</strong> You mentioned there is a whole long conversation we can have about getting the eyewear. As you point out, true AR eyewear changes everything. Can you give a little road map of where this has to go?</p>
<p><strong>Robert: </strong>There are essentially four or five main approaches, depending on whether you make the lenses special or leave them plain. You would normally want them to be plain so people with prescription lenses wouldn&#8217;t have problems and would have the option to switch them out. Some types use a more prismatic approach for top-down projection, or a corner piece mounts lasers and bounces them off the lens into the eye. Another approach is embedding OLEDs or something else into the lenses themselves.</p>
<p>I really like the <a href="http://www.lumus-optical.com/" target="_blank">Lumus</a> approach, but their product design isn&#8217;t quite there yet. If the wearables don&#8217;t look cool, people won&#8217;t use them. To be honest, if I had the money, I&#8217;d probably ask the Art Lebedev guys to design them based on someone else&#8217;s optical engineering. They designed the <a href="http://www.artlebedev.com/everything/optimus/" target="_blank">Optimus Maximus</a> OLED keyboard&#8230; brilliant industrial designers, loaded with engineers too. If these guys couldn&#8217;t build the glasses and make them look damn bad ass, I&#8217;d be shocked. Heck, I bet they could build the next-gen MID while they were at it.</p>
<p><strong>Tish: </strong>Getting the hardware innovation and software innovation feeding into each other would be really great.</p>
<p><strong>Robert</strong>: Absolutely.</p>
<p><strong>Tish</strong>: That would push the eyewear forward too wouldn&#8217;t it?</p>
<p><strong>Robert:</strong> All it takes is one, and then the competitive landscape would fire right up.</p>
<p><strong>Tish:</strong> What applications would the accurate GPS enable?</p>
<p><strong>Robert:</strong> Everything. For example, you know exactly where the phone is and which way it is facing. That means you can put it on a table and hit a button, then move it somewhere else and do the same thing. In a few minutes, you have a nearly accurate &#8220;mental&#8221; model of the whole place. Now you go back and start dropping virtual flower pots everywhere.</p>
<p>This is one area where I think the smart phone guys are missing the boat and taking the cheap route. It is possible to have very accurate GPS (down to a six-inch area) with better chips and firmware, but it is cheaper to stick in old tech. Most apps today don&#8217;t need that hyper accuracy, so they aren&#8217;t bothering. Mobile AR though, that&#8217;s a different story.</p>
<p>With that level of accuracy, you would know exactly where the mobile device is, so all you would need to know is the direction it is facing (orientation), and you could solve one of the problems of registering exactly where 3D objects and augmented media are (it is more complicated than I am describing it, but we don&#8217;t need to get into that much detail here). You wouldn&#8217;t need markers anymore.</p>
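<p>((Note: to make the registration idea concrete, here is a deliberately simplified sketch in Python &#8211; my own illustration, not Neogence&#8217;s method. Given an accurate device position, a compass heading, and the camera&#8217;s field of view, it computes where on screen a geo-anchored object should be drawn. A real system would also need pitch, roll, altitude, and a proper camera model.))</p>
<pre>
import math

EARTH_RADIUS_M = 6371000  # mean earth radius in metres

def project_to_screen(dev_lat, dev_lon, heading_deg, fov_deg,
                      obj_lat, obj_lon, screen_w=480):
    """Horizontal pixel where a geo-anchored object should be drawn,
    or None if it falls outside the camera's field of view."""
    # Flat-earth approximation: convert the lat/lon offset into local
    # east/north metres (fine at "near AR" ranges of tens of metres).
    east = math.radians(obj_lon - dev_lon) * EARTH_RADIUS_M \
           * math.cos(math.radians(dev_lat))
    north = math.radians(obj_lat - dev_lat) * EARTH_RADIUS_M
    # Compass bearing from the device to the object (0 deg = north).
    bearing = math.degrees(math.atan2(east, north)) % 360
    # Angle of the object relative to where the camera points,
    # wrapped into [-180, 180).
    rel = (bearing - heading_deg + 180) % 360 - 180
    if abs(rel) > fov_deg / 2:
        return None  # object is off-screen
    # Linear (not true pinhole) mapping of [-fov/2, +fov/2] onto pixels.
    return int((rel / fov_deg + 0.5) * screen_w)
</pre>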
<p><strong>Tish: </strong>Isn&#8217;t Wikitude doing this with Wikitude.me, their tagging app?</p>
<p><strong>Robert:</strong> Not really. That type of approach is on a very large scale, using the accelerometers, compass, and GPS to determine where you are and what is in the distance. They (and others like Layar) don&#8217;t handle &#8220;near&#8221; AR. They effectively poll your GPS and then check a database to see what is nearby and at what bearing/distance it is, and then they draw a representation on the screen. They don&#8217;t even need a mobile device&#8217;s camera at all.</p>
<p>Even if they did things up close, it&#8217;s still based on finding landmarks or on things that are broadcasting their location. For example, if they were standing near me, they might get &#8220;robert, 37 degrees, 15 meters away,&#8221; but they wouldn&#8217;t be tracking me exactly as I walk around or have the ability to overlay graphics on ME.</p>
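<p>((Note: the &#8220;robert, 37 degrees, 15 meters away&#8221; lookup Robert describes is a standard great-circle computation. As an illustration only &#8211; this is not Wikitude&#8217;s or Layar&#8217;s code &#8211; a minimal Python version of the distance-and-bearing step looks like this.))</p>
<pre>
import math

EARTH_RADIUS_M = 6371000  # mean earth radius in metres

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial compass bearing
    (degrees clockwise from north) from point 1 to point 2 -- the two
    numbers a GPS/compass AR browser needs to label a nearby POI."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    # Haversine formula for the distance.
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    # Initial bearing, measured clockwise from true north.
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = math.degrees(math.atan2(y, x)) % 360
    return dist, bearing

# e.g. distance_and_bearing(my_lat, my_lon, robert_lat, robert_lon)
# might return (15.0, 37.0), rendered as "robert, 37 degrees, 15 m away"
</pre>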
<p><strong>Tish:</strong> I retweeted your <a title="#ar" href="http://twitter.com/search?q=%23ar">#ar</a> marketing using ARToolkit + Flash (markers/webcams) = Photoshop page curl &lt;six months. Bad design kills innovation. I know you like <a href="http://ronaldchevalier.com/" target="_blank">Dr. Chevalier</a> though! What are some of the other AR marketing projects that you like? What would you like to see in terms of innovation in the next 6 months?</p>
<p><strong>Robert:</strong> The marker/webcam approach is already becoming overused and clich&#233; (tremendously fast). Older readers will remember the ubiquitous Photoshop page curl that adorned nearly every website and graphic on the internet back in the day. It was horrible. Yes, the Dr. Chevalier stuff cracks me up.</p>
<p>I want to see some big companies or ad agencies really try to do something different with AR, preferably mobile. Take some risks, do something different. Don&#8217;t follow the crowd. Innovation? I want to see some wearable displays with transparent lenses, I want a mobile device specifically designed for ubiquitous AR, I want to see some experimenting with AR in the green tech sector, and I&#8217;d like to see someone get that GiFi wireless technology from that researcher in Australia and jam it into a smart mobile. I would also like my flying car and lunar vacation now, thank you. It is almost 2010 and no one has found that black obelisk yet.</p>
<p><strong>Tish:</strong> So a few closing thoughts! What do you see as the next big thing? Hopes for the AR Consortium? Biggest obstacle for commercial AR? And what is the coolest thing you have seen this year?!</p>
<p><strong>Robert:</strong> The next big thing is what I&#8217;m working on hahaha. I hope the AR Consortium will grow and be the active catalyst in making AR mainstream, practical, and world changing.</p>
<p>The biggest obstacle is making sure that the right funding finds the right developers to develop the right technology and create kick ass applications.</p>
<p>The coolest thing I&#8217;ve seen this year would probably be <a href="http://vimeo.com/5595869" target="_blank">the facade projection stuff</a> (see below). Now, imagine that, but without the projector. That&#8217;s part of what I envision for AR in the future.</p>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="225" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=5595869&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /><embed type="application/x-shockwave-flash" width="400" height="225" src="http://vimeo.com/moogaloop.swf?clip_id=5595869&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/feed/</wfw:commentRss>
		<slash:comments>20</slash:comments>
		</item>
		<item>
		<title>Composing Reality and Bringing Games into Life: Talking with Ori Inbar about Mobile Augmented Reality</title>
		<link>http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/</link>
		<comments>http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/#comments</comments>
		<pubDate>Wed, 06 May 2009 14:50:30 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[CurrentCost]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[HomeCamp]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Kids With Cameras]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Adam Greenfield]]></category>
		<category><![CDATA[Add new tag]]></category>
		<category><![CDATA[alternate reality games]]></category>
		<category><![CDATA[alternative reality gaming]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR goggles]]></category>
		<category><![CDATA[ARToolkit]]></category>
		<category><![CDATA[augmented reality games]]></category>
		<category><![CDATA[augmented times]]></category>
		<category><![CDATA[Better Place]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Caryatids]]></category>
		<category><![CDATA[Come Out and Play]]></category>
		<category><![CDATA[composing reality]]></category>
		<category><![CDATA[Cory Doctorow]]></category>
		<category><![CDATA[eyewear for augmented reality]]></category>
		<category><![CDATA[game development conference]]></category>
		<category><![CDATA[Games Alfresco]]></category>
		<category><![CDATA[games for preschoolers on the iphone]]></category>
		<category><![CDATA[games on the iphone]]></category>
		<category><![CDATA[GDC 2009]]></category>
		<category><![CDATA[GE augmented reality ad]]></category>
		<category><![CDATA[google earth]]></category>
		<category><![CDATA[green technology]]></category>
		<category><![CDATA[image recognition]]></category>
		<category><![CDATA[Immersive augmented reality]]></category>
		<category><![CDATA[Int 13]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[iphone games]]></category>
		<category><![CDATA[iPhone OS 3]]></category>
		<category><![CDATA[iphone versus the android]]></category>
		<category><![CDATA[ISMAR]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[jane mcgonigal]]></category>
		<category><![CDATA[julian Bleeker]]></category>
		<category><![CDATA[Kati London]]></category>
		<category><![CDATA[Kweekies]]></category>
		<category><![CDATA[Loopt]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[Microsoft Tag]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile gaming]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Netweaver]]></category>
		<category><![CDATA[open source augmented reality]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Pookatak]]></category>
		<category><![CDATA[Pookatak Games]]></category>
		<category><![CDATA[reality experiences]]></category>
		<category><![CDATA[RFID]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[Rouli Nir]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[Shai Agassi]]></category>
		<category><![CDATA[smart environments]]></category>
		<category><![CDATA[smart objects]]></category>
		<category><![CDATA[The End of Hardware]]></category>
		<category><![CDATA[the Pong for augmented reality]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubiquitous augmented reality]]></category>
		<category><![CDATA[ubiquitous experience]]></category>
		<category><![CDATA[virtual reality]]></category>
		<category><![CDATA[WARM 09]]></category>
		<category><![CDATA[Wattzon]]></category>
		<category><![CDATA[Where 2.0]]></category>
		<category><![CDATA[WikiMouse]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3448</guid>
		<description><![CDATA[Recently, I talked to Ori Inbar (above), formerly senior vice- president at SAP.Â  Ori is on a mission to make augmented reality commercially successful not in 5, 10, or 15 years, but now. Ori is the founder of Pookatak Games &#8211; a video game company, &#8220;with a vision to upgrade the way people experience the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/oriinbarpost.jpg"><img class="alignnone size-medium wp-image-3449" title="oriinbarpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/oriinbarpost-300x199.jpg" alt="oriinbarpost" width="300" height="199" /></a></p>
<p>Recently, I talked to <a href="http://gamesalfresco.com/">Ori Inbar</a> (above), formerly senior vice-president at <a href="http://www.sap.com/">SAP</a>. Ori is on a mission to make augmented reality commercially successful not in 5, 10, or 15 years, but now. Ori is the founder of <a href="http://gamesalfresco.com/about/" target="_blank">Pookatak Games</a> &#8211; a video game company <strong>&#8220;with a vision to upgrade the way people experience the world.&#8221;</strong> Ori will be participating May 20th in <a href="http://en.oreilly.com/where2009/public/schedule/detail/7197" target="_blank">O&#8217;Reilly&#8217;s Where 2.0 panel, &#8220;Mobile Reality&#8221;</a> &#8211; an event not to be missed, IMO.</p>
<p>The taste for computing anywhere, anytime has entered human culture via the iPhone and is spreading like chocolate cake and pizza at a preschool party (see <a href="http://gamesalfresco.com/2009/03/23/gdc-2009-why-the-iphone-just-changed-everything/" target="_self">why the iPhone changed everything</a>). And while the full flowering of the next step is yet to come &#8211; computing anywhere, anytime, by anyone and <strong>anything</strong> (<a href="http://en.wikipedia.org/wiki/Internet_of_Things" target="_blank">&#8220;the internet of things&#8221;</a>) &#8211; our love for these first devices capable of being <strong>mediating artifacts for ubiquitous computing</strong> (Adam Greenfield) is a vital first step to free us from our tethers to computer screens, and fulfill the promise of augmented reality.</p>
<p>If you need more convincing on the pivotal role augmented reality will play as the web moves into the world, check out Tim O&#8217;Reilly&#8217;s recent comments in <a id="iz1_" title="this video clip on Augmented Times" href="http://artimes.rouli.net/2009/04/tim-oreilly-on-recognition-rfid-and-web.html" target="_blank">this video clip posted on Augmented Times</a> and <a id="wtf4" title="here" href="http://radar.oreilly.com/2008/02/augmented-reality-a-practical.html" target="_blank">here</a> early last year.</p>
<p>From another perspective, the gloomy specter of economic and environmental catastrophe is driving a movement to &#8220;<a id="h5pf" title="infuse intelligence into the way the world works" href="http://news.bbc.co.uk/2/hi/technology/7992480.stm" target="_blank">infuse intelligence into the way the world works.&#8221;</a> But the challenge for a smart planet is not just about making environments smart; it is about using smart environments to enable people to act smarter (<a href="http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/" target="_blank">see my interview with Adam Greenfield</a>).</p>
<p>We need a rapid upgrade in both the way the world works, and the way we experience the world.</p>
<p>((Note: It is time to read (if you haven&#8217;t already) <a href="http://search.barnesandnoble.com/The-Caryatids/Bruce-Sterling/e/9780345460622" target="_blank">Bruce Sterling&#8217;s Caryatids</a> (Cory Doctorow&#8217;s book of the year for 2009) &#8220;as a software design manual&#8221; (<a href="http://www.nearfuturelaboratory.com/2009/03/17/design-fiction-a-short-essay-on-design-science-fact-and-fiction/" target="_blank">see Julian Bleeker</a>), because Caryatids reveals the Gordian knots of human folly, greed, compassion and desire entwined in near-future designs for technologies to save the world.))</p>
<p>Ori Inbar worked with Shai Agassi (Shai is now leading the world-changing <a id="v5ow" title="Better Place" href="http://www.betterplace.com/" target="_blank">Better Place</a>), driving <a id="gf_5" title="Netweaver" href="http://en.wikipedia.org/wiki/NetWeaver" target="_blank">Netweaver</a> from a mere concept to a &#8220;major, major business for SAP.&#8221; So Ori has already been through the cycle of working in a very small startup and growing it into a billion dollar business. He has both the experience and the passion to realize his vision for augmented reality.</p>
<p>At Pookatak, he explains:</p>
<p><strong>&#8220;We design &#8216;reality experiences&#8217; that make users&#8217; immediate environments more significant to them. We wish to free young and old from getting lost in front of the screen. By delivering the world&#8217;s information to people&#8217;s field of view, and by weaving real world objects into interactive narratives, we help people rediscover the real world.&#8221;</strong></p>
<p>Pookatak will release their first game this summer. Currently it is under wraps. But Ori gives us some glimpses of what is to come in the interview below.</p>
<p>In addition to founding Pookatak, Ori is involved in a broader effort to move augmented reality forward. On his blog, <a id="ie5s" title="Games Alfresco" href="http://gamesalfresco.com/" target="_blank">Games Alfresco</a> &#8211; where he recently welcomed <a href="http://gamesalfresco.com/about/" target="_blank">a new partner, Rouli Nir</a> &#8211; Ori has focused his eye of wisdom on every significant recent advance in augmented reality (check out <a id="zr9y" title="this essence of Ori's thinking in a fast paced video" href="http://gamesalfresco.com/2009/03/09/augmented-reality-today-ori-inbar-speaks-at-warm-2009/" target="_blank">this essence of Ori&#8217;s thinking in a fast-paced video</a> presentation for <a href="http://gamesalfresco.com/2009/02/12/live-from-warm-09-the-worlds-best-winter-augmented-reality-event/" target="_blank">WARM &#8217;09</a>).</p>
<p>Also, Ori is one of the organizers of the interactive media track at <a id="b-c6" title="ISMAR 2009" href="http://www.ismar09.org/" target="_blank">ISMAR 2009</a>. At ISMAR this year, Ori explained, <strong>&#8220;we are trying to bring in people that develop interactive experiences for consumers, beyond the traditional attendees coming from a research perspective.&#8221;</strong></p>
<p>In the interview below, Ori explains much of his thinking on how augmented reality will become commercially successful. Enjoy it, think about it, and share it. And most importantly, if you can, get involved with ISMAR 2009.</p>
<p>Ori has inspired me to participate in <a id="seky" title="ISMAR" href="http://www.ismar09.org/" target="_blank">ISMAR</a> this year. Ori pointed out:</p>
<p><strong>The </strong> <a href="http://campwww.informatik.tu-muenchen.de/ismar09/lib/exe/fetch.php?id=ismar09%253Astart&amp;cache=cache&amp;media=ismar09:ismar09-cfp_090211_final.pdf" target="_blank">call for papers</a> <strong>is on, and this year it targets well beyond the typical research papers audience and into interactive media and art folks. </strong></p>
<p><strong>There are plenty of opportunities such as:</strong></p>
<p><strong>Art Gallery</strong></p>
<p><strong>Demonstrations</strong></p>
<p><strong>Tutorial</strong></p>
<p><strong>Workshops</strong></p>
<p>It&#8217;s a huge opportunity to shape the emergence of augmented reality.<br />
<br /></br></p>
<h2><strong> Interview With Ori Inbar</strong></h2>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-41.png"><img class="alignnone size-full wp-image-3479" title="picture-41" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-41.png" alt="picture-41" width="107" height="146" /></a></p>
<h3>Making Augmented Reality Commercially Successful</h3>
<p><strong>Tish Shute: </strong>You are considered a key trailblazer in AR and you have the go-to blog for augmented reality! What are the most important lessons you have learned researching, writing, and developing AR in the last couple of years?</p>
<p><strong>Ori Inbar: You need to have a vision. You need to know where this is going to go in ten or fifteen or twenty years. But you&#8217;ve got to start with something really simple that makes use of the technology you have on hand. And do something that is practical, that people will like, and something they would actually want to buy. It&#8217;s as simple as that. I&#8217;m currently looking at what we could do with existing technology. First of all, you have to put it in front of people. Right now most people have never heard the term augmented reality. Go into the street and ask 100 people about it; maybe 2 would know about it. So you need to put it in front of people, because most people think it&#8217;s still science fiction or a special effect you see in movies, not something you can experience in real life.</strong></p>
<p><strong>Tish: </strong>It seems to me that for augmented reality applications to become popular with existing technology, the key breakthrough would be getting people to hold up their phones. What are the obstacles to getting people to use their mobile devices like this?</p>
<p><strong>Ori: There&#8217;s a really nice cartoon by <a href="http://www.tonchidot.com/">Tonchidot</a> (below) &#8211; the Japanese company behind the Sekai Camera. It&#8217;s an illustration showing the evolution of man, from ape, to man holding a cell phone and looking down, to the developed man holding a device like a camera in front of his eyes.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-37.png"><img class="alignnone size-medium wp-image-3454" title="picture-37" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-37-300x221.png" alt="picture-37" width="300" height="221" /></a></p>
<p><strong>Which is exactly what you&#8217;re talking about. People ask, &#8220;are people going to walk with this like that all day long?&#8221; Probably not. I mean you have to build it in a way that doesn&#8217;t require them to hold it like that all the time. People are used to this gesture with the ubiquitous digital cameras. I tested one of my prototypes on a two-and-a-half-year-old girl. She had no problem holding it just like she holds a camera.</strong></p>
<p><strong>Tish:</strong> <a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> mentioned, &#8220;The problem with the mobile phone as an AR device is a problem of awareness,&#8221; i.e., you have to have a way of letting people know when there&#8217;s something interesting wherever they are. One of the issues regarding this is if you get too many alerts, then you tune them out.</p>
<p><strong>Ori: First of all, Blair is one of the people in academia who get it, because he looks at it from an experience perspective, not just as an interesting technical problem to solve. Let&#8217;s start with getting people to enjoy this new experience. The AR demos so far were mostly eye candy, and mostly for advertising &#8211; the <a href="http://ge.ecomagination.com/smartgrid/#/landing_page" target="_blank">GE AR ad</a> created a lot of buzz, but you look at it for 10 seconds and you forget about it. You need to build something that people would want to experience over time and would be willing to pay for. I think that&#8217;s the big test, right?</strong></p>
<p><strong>Now, in terms of having a ubiquitous experience where you&#8217;re continuously connected, it doesn&#8217;t have to be an overwhelming experience. Just like some of the social media tools we&#8217;re using today, we decide when to connect, and we filter out the trash. You could get alerts only for things that really matter to you, not for everything that happens in your immediate environment.</strong></p>
<p><strong>There will be many layers of information, and it&#8217;ll be up to you to pick the ones you want to experience. The real benefit is that you get the information in your own field of view and in context of where you are or what you do.</strong></p>
<p><strong>Tish:</strong> So what are you working on these days?</p>
<p><strong>Ori: We are working on a little app that targets a very different audience than what you&#8217;d expect: preschoolers. We think we can encourage them to get away from a PC or TV screen and learn something while playing &#8211; in the real world. You&#8217;ll hear more about it as soon as this summer. Nuff said.</strong></p>
<p><strong>But it is a small application that will run on the iPhone. People ask: how many preschoolers own iPhones? Well, their parents do.</strong></p>
<p><strong>Tish:</strong> Yes, there are certainly many New York kids with iPhones &#8211; my kid now has my old iPhone. He has pretty much switched from playing games on his DS to the iPhone. I noticed in your WARM video you place a big emphasis on AR as something that will get kids away from screens and engaged with reality. This is something parents will approve of!</p>
<p><strong>Ori: Yes, I saw something really interesting at my kids&#8217; party one day; they were all sitting around the room &#8211; looking down at their own DS screens. You could play the DS anywhere, but kids would usually play it on the sofa, looking at the screen, isolated from the world. With an iPhone and a camera, and the application we&#8217;re producing, reality becomes part of the game. Yes, that makes it all of a sudden much more interesting for parents. Because kids are spending so much time in front of the screen, all of a sudden there&#8217;s something that will encourage them to interact with real objects, real things. Every parent I&#8217;ve talked to loves that idea.</strong></p>
<p><strong>Tish:</strong> Yes that is what is cool about the work of <a href="http://www.katilondon.com/" target="_blank">Kati London</a> &#8211; I think I saw someone say this on Twitter, &#8220;Kati puts the computer in the game not the game in the computer.&#8221;</p>
<p><strong>Ori: Yes, kids are spending more time in front of games and the computer because it&#8217;s more interesting. It captivates them with &#8220;<a id="x_z0" title="game pleasures" href="http://8kindsoffun.com/">game pleasures</a>&#8221; that tap into their brain&#8217;s dopamine circuitry &#8211; constantly seeking reward and satisfaction. So you&#8217;re not going to be able to tell them to go back to playing in reality without these pleasures. We have to study these mechanics from games and bring them into reality. It&#8217;s about programming real life; and augmented reality helps you achieve that.</strong></p>
<p><strong>Here&#8217;s an example: cause and effect; in a game when you do something you always get an immediate effect. You&#8217;re good, you get a reward. You&#8217;re not good, you get a cue to improve. In real life you do things and you could wait 2 or 3 years until you actually get feedback (if you&#8217;re lucky). Augmented Reality allows you to bring these mechanics into the real world. I think that&#8217;s going to help kids rediscover reality, in a new sense, which is what every parent is dreaming about.</strong></p>
<p><strong>Tish:</strong> I don&#8217;t know how much you can say about your app. But in regard to doing augmented reality on the iPhone&#8230; there&#8217;s no compass. Is this a limitation?</p>
<p><strong>Ori: True, no compass yet. But the camera gives you a lot of information that you can interact with. When you run the application, you see the world in front of you, and if the app can recognize real life objects &#8211; it can put virtual elements on top of it.</strong></p>
<p><strong>Tish:</strong> But not with any accuracy unless you&#8217;re using markers. Are you using markers?</p>
<p><strong>Ori: We&#8217;re using natural feature recognition. It doesn&#8217;t have to be an ugly looking marker. It can be any image.</strong></p>
<p><strong>Tish:</strong> So you&#8217;re using image recognition. Are you working with one of these image recognition startup companies (<a id="nws6" title="list here" href="http://www.educatingsilicon.com/2008/11/25/a-round-up-of-mobile-visual-search-companies/" target="_blank">list here</a> )?</p>
<p><strong>Ori: We&#8217;re working with one of those. What&#8217;s unique about it is it runs very nicely on any cell phone, and on the iPhone it works the best. For this first app, it doesn&#8217;t really matter where you are physically; the geolocation is not part of the experience.</strong></p>
<p><strong>Tish: </strong>For a truly engaging AR experience we will need more of a backend than is currently available?</p>
<p><strong>Ori: I call the backend the cloud, where you have all this information and ways to access it from anywhere. Actually I think it&#8217;s become pretty mature today. If you look at the different elements required to enable an augmented reality experience to work, you have &#8211; first &#8211; the user, who&#8217;s always in the center. Then you have the lens. The lens can be an iPhone, or glasses, even a projector. The lens allows you to watch, sense and track information in the real world: people, places, things. Then in the backend you have the cloud where you store and retrieve information.</strong></p>
<p><strong>So if you look at the maturity of these different elements, I think the cloud is in pretty good shape, because there&#8217;s so much information we&#8217;re collecting and storing. Anything from Google, Wikipedia, Facebook, all that kind of stuff &#8211; it&#8217;s a lot of useful information you can access from anywhere using APIs. And a lot of it is also starting to include geolocation information. Take <a id="zhag" title="Loopt" href="http://www.loopt.com/" target="_blank">Loopt</a> or Google&#8217;s <a href="http://www.google.com/latitude/intro.html" target="_blank">friends service</a> that allows you to see where your friends are and what they&#8217;re doing. There&#8217;s tons of information out there and it&#8217;s pretty easy to access. Now, what to do with it &#8211; that is the question.</strong></p>
<p><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> is such a simple and brilliant application and nobody thought about doing it until this guy from Salzburg did. It doesn&#8217;t have any sophisticated visual tracking. It knows your position and it&#8217;s simply looking at the angle you&#8217;re pointing to. Based on these parameters it brings information from Wikipedia that pertains to your field of view. So most of it was already there. It&#8217;s just a matter of connecting the pieces in an experience that is valuable for people.</strong></p>
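<p>((Note: the &#8220;connecting the pieces&#8221; Ori describes really is small. As a purely illustrative sketch &#8211; not Mobilizy&#8217;s code &#8211; here is the core of a Wikitude-style filter in Python: given your GPS position and the compass angle you are pointing at, it keeps only the geotagged entries that fall inside the camera&#8217;s field of view.))</p>
<pre>
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees from north) from the viewer
    to a point of interest."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def entries_in_view(viewer_lat, viewer_lon, heading_deg, entries,
                    fov_deg=60):
    """Filter (name, lat, lon) entries down to the ones whose bearing
    falls inside the camera's horizontal field of view."""
    visible = []
    for name, lat, lon in entries:
        rel = (bearing_deg(viewer_lat, viewer_lon, lat, lon)
               - heading_deg + 180) % 360 - 180
        if abs(rel) > fov_deg / 2:
            continue  # behind you or off to the side
        visible.append((name, rel))  # rel places it left/right on screen
    return visible
</pre>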
<p><strong>Tish: </strong>It is the uptake of even a very simple technology that puts the magic in it.</p>
<p><strong>Ori: Yes, take Twitter. If you go to its homepage it looks like a very simple, boring app, but it is something that is both enjoyable and very useful to people.</strong></p>
<h3><strong>Why you should participate in ISMAR 2009</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-40.png"><img class="alignnone size-medium wp-image-3478" title="picture-40" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-40-222x300.png" alt="picture-40" width="222" height="300" /></a><br />
<strong>Tish: </strong>I know that you are involved in organizing <a id="seky" title="ISMAR" href="http://www.ismar09.org/" target="_blank">ISMAR</a> (picture above from Ori&#8217;s post on <a href="http://gamesalfresco.com/2009/02/23/ismar-2009-the-worlds-best-augmented-reality-event-wants-you-to-contribute/" target="_blank">&#8220;ISMAR 2009: The World&#8217;s Best Augmented Reality Event&#8230;&#8221;</a>), and there is a call out for papers and for volunteers. Can you tell me more about it?</p>
<p><strong>Ori: Yes, we hope to have the first ISMAR where we practice what we have just discussed: let&#8217;s build on all the research invested so far, and instead of thinking only about 5-10 years from now, let&#8217;s see what we can do today. So we are bringing people in from other disciplines &#8211; artists, interactive media developers, and people from the entertainment industry. The goal is to use the technology to make something interesting for people &#8211; again, something that people would buy &#8211; and to make it commercially successful. Many people don&#8217;t know about ISMAR because in the past it was a pure engineering-oriented event, and people approaching AR from a commercial perspective weren&#8217;t attracted to it. The chair of the event this year is based in Florida, and he is going to bring in a lot of people from the entertainment industry, such as Disney. I think this will transform the event into something more like SIGGRAPH &#8211; more of an industry event. As one of the organizers of the interactive media track, we are trying to bring in people that want to build applications for consumers.</strong></p>
<p><strong>Tish:</strong> In terms of AR applications, what are the flagships today?</p>
<p><strong>Ori: There are very few because it&#8217;s just the beginning. There&#8217;s one tiny studio in France called <a id="z1ln" title="Int 13" href="http://www.int13.net/en/" target="_blank">Int 13</a>. They&#8217;ve created maybe the first commercial game running on a mobile device using AR technology. It&#8217;s called <a href="http://www.youtube.com/watch?v=Te9gj22M_aU" target="_blank">Kweekies</a>. It was one of the contenders for the Nokia Mobile innovation awards &#8211; they were one of the ten finalists, but they didn&#8217;t win it. It looks really cool. It&#8217;s something that runs on your desk, with a marker. Many AR folks say markers are the past, markers are ugly. But it&#8217;s still a cool experience. I think people will go for it.</strong></p>
<p><strong>Tish:</strong> Yes, I think we will have to look to small companies that are free to think creatively to lead the way. It seems many games companies are tied up pulling off huge, big-budget projects, and enterprise is still catching up on how to use social media!</p>
<p><strong>Ori: Yes, last year I was at the Game Developers Conference (GDC); there was no mention of augmented reality &#8211; not on the exhibition floor, none of the sessions, nobody talked about it. I was stunned. Then this year, there was a little change. There were like three demos on the exhibition floor: <a href="http://www.metaio.com/" target="_blank">Metaio,</a> <a href="http://www.vuzix.com/home/index.html" target="_blank">Vuzix</a> and a Dutch company called <a href="http://www.augmented-reality-games.com/" target="_blank">Beyond Reality</a>. And then there was Blair&#8217;s talk, which was very, very cool. The room was packed with people. And after the talk there were dozens of people lining up to talk with him about the topic. There was definitely interest, but still on the very edge. The video game industry is still a hit-driven business, and publishers spend upwards of 20-30 million dollars to create the best AAA game possible. They just can&#8217;t take the risk. So it&#8217;s going to come from smaller companies, from outsiders coming in with a vision and an understanding of how to put the AR pieces together to create a totally new experience.</strong></p>
<p><strong>Tish:</strong> But the basic tool set is there, isn&#8217;t it?</p>
<p><strong>Ori: I talked to some folks at the Game Developers Conference, many with MMO backgrounds, and they have great ideas about AR. It&#8217;s great to see different people with different views on what&#8217;s needed first. &#8220;Joe the Programmer&#8221; had this idea of creating a small piece of hardware that you can put in every house to provide accurate geospatial information in your home. That could open up many opportunities for AR experiences in homes.</strong></p>
<p><strong>Tish:</strong> Don&#8217;t you think we have enormous resources in terms of image databases that provide a great basis for augmented reality? I was talking to Aaron Cope at ETech about <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">The Shape of Alpha</a> &#8211; Flickr&#8217;s vernacular mapping project using all the geotagged photos in Flickr. That is such a cool project. <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43824" target="_blank">Aaron will be speaking at Where 2.0</a> also.</p>
<p><strong>Ori: Think of Google Earth. Google Earth leveraged communities to basically map all the major cities around the world into 3D models. And that is an essential step to be able to do augmented reality outdoors. Because if you had to model everything from scratch, it wouldn&#8217;t be realistic.</strong></p>
<h3><strong>Augmented Reality and Becoming Greener</strong></h3>
<p><strong>Tish:</strong> I am really interested in how AR interfaces might be useful to some of the emerging energy identity/metering projects like <a href="http://www.amee.com/" target="_blank">AMEE</a> and <a href="http://www.wattzon.com/" target="_blank">WATTZON</a> because I think it is very important that people have very intuitive, immediate, and enjoyable ways to relate to energy data so they can make greener choices.</p>
<p><strong>Ori: Back in the day I had an idea to build an augmented reality application to help you become greener. You look at things around your home with the camera, and it recognizes their greenhouse gas footprint and makes recommendations to reduce it. I guess it was a bit too early to do that based on visual recognition alone&#8230; you&#8217;d need additional sensors that would provide related information about what you are looking at.</strong></p>
<p><strong>Tish:</strong> Well, as there is more interest in green technology, do you think we may see VC interest in some green AR projects now?</p>
<p><strong>Ori: I talked to some of the investment folks, angels as well as VCs, about AR and they had no clue what it is. There&#8217;s a need for a whole lot of education. And there are no proof points (as in successful investments in this domain), and counter to popular belief &#8211; they don&#8217;t like risk so much&#8230;</strong></p>
<p><strong>Tish:</strong> And consumer adoption must lead the way, right?</p>
<p><strong>Ori: Just like with every emerging technology in history, people never bought the technology; they bought the content, the apps, the benefits that came on top of the technology. Whether it was VHS winning over Betamax, or Blu-ray winning over HD DVD &#8211; it&#8217;s always because of more/better content. Look at the video game console war: Xbox and Nintendo did better than Sony just because they had more and better games. Even Windows was a success thanks to its applications. People bought it for the applications, not the OS. The content is the first thing to drive demand.</strong></p>
<p><strong>Tish:</strong> One of the challenges in giving people new ways to relate to their energy consumption is that if you just have them looking at graphs of how bad they have been in the past, that may make them feel bad but doesn&#8217;t necessarily give them ways or motivation to change. There perhaps needs to be a more immediate relationship to the data to facilitate change. I think the mantra for optimization of anything from energy usage to supply chains is timely, actionable data?</p>
<p><strong>Ori: There are a lot of ideas about measuring information and displaying it to people. For example, the Prius hybrid car: one of its interesting features &#8211; which is kind of game-like &#8211; is a constant display of your current fuel consumption. That alone changes how people drive, because they try to beat the &#8220;score&#8221; and as a result conserve more fuel. That model can be applied to our homes&#8230;</strong></p>
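<p><em>As a toy illustration of that &#8220;beat the score&#8221; loop applied to the home, here is a hedged Python sketch that turns live meter readings into a single running score. The baseline and the sample readings are invented for illustration.</em></p>
<pre>
# Turn a live stream of power readings into a "score" people can try to beat,
# the way the Prius display turns fuel consumption into a game.
BASELINE_WATTS = 800.0  # assumed typical draw for this household

def score(watts):
    """100 means at baseline; higher means drawing less than the baseline."""
    return round(100.0 * BASELINE_WATTS / max(watts, 1.0))

readings = [950, 820, 640, 510]  # e.g. from a Current Cost-style meter
for w in readings:
    print("%4d W -> score %d" % (w, score(w)))
</pre>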
<p>Tish: Yes that is something I am very interested in. I have been following several projects in this area &#8211; one of my favorites is the <a href="http://www.arduino.cc/" target="_blank">Arduino</a>, <a href="http://www.currentcost.com/" target="_blank">Current Cost</a>/<a href="http://www.ladyada.net/make/tweetawatt/" target="_blank">Tweetawatt</a>, <a href="http://www.pachube.com/" target="_blank">Pachube</a> integrations <a href="http://www.ugotrade.com/2009/04/24/homecamp-2-home-energy-management-and-distributed-sustainability/" target="_blank">I saw at Homecamp</a>.</p>
<p>You joined a startup with Shai Agassi which was bought out by SAP, right? He has a brilliant approach with Better Place.</p>
<p><strong>Ori: I think what&#8217;s really unique about Better Place&#8217;s approach is that he doesn&#8217;t require people to change their behavior. People are still going to have their own cars. They&#8217;ll be able to drive as far as they want, and for the same (or lower) cost. It&#8217;s not necessarily about a new technology &#8211; electric cars have been around for a long time &#8211; but there was no way people were going to be limited by a 50 or 70 mile range, and Better Place is solving that problem. With its infrastructure of charging spots and battery switching stations, drivers are going to be able to drive anywhere. And it&#8217;ll be similar to having to stop once in a while to refuel your car. The price may even be lower than what you pay today for your transportation needs &#8211; and you&#8217;ll stop generating greenhouse gases. It&#8217;s a clever way of taking technology to a whole new level without changing the behavior of people.</strong></p>
<p><strong>Tish: </strong>Better Place is a classic example of things as a service, isn&#8217;t it? It is basically a utility company.</p>
<p><strong>Ori: It is similar to a phone carrier model. You pay for a membership that gives you access to the car (equivalent to the phone) and electricity (equivalent to the phone line) for the same price as fuel costs today. And as a bonus you get to save the world.</strong></p>
<h3><strong>How the iPhone changed the game for AR &#8211; and the iPhone versus Android</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-38.png"><img class="alignnone size-medium wp-image-3472" title="picture-38" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-38-300x198.png" alt="picture-38" width="300" height="198" /></a><em></em></p>
<p><em>Picture from Ori&#8217;s post</em><strong><em>, <a href="http://gamesalfresco.com/2009/03/23/gdc-2009-why-the-iphone-just-changed-everything/" target="_blank">&#8220;GDC 2009: Why the iPhone just changed everything&#8221;</a></em></strong></p>
<p><strong>Ori: And back to AR, you have to take the same approach, because nobody wants to don those huge head-mounted displays or backpacks. You have to take advantage of people&#8217;s current behavior: they already carry their iPhones or similar devices.</strong></p>
<p><strong>Tish:</strong> As we discussed, you just have to get people raising up their phones and looking through them when that is a useful thing to do. Both Wikitude and Nathan Freitas&#8217;s graffiti app were enough to get me interested in the evolutionary step of raising my phone! Nathan&#8217;s graffiti app is nice. You leave a marker for your graffiti so other people can find, view and add their own &#8211; a nice primal experience, like pissing on the lamp post to let your pack know where you&#8217;ve been. Also the graffiti app taps into a long history of NYC street culture around tagging and graffiti art (see my interview, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it OMG finally for Augmented Reality?&#8221;</a>).</p>
<p><strong>Ori: The app store has fundamentally changed the mobile gaming industry. Last year they were in shambles. There was no growth. Everybody was complaining: &#8220;we can&#8217;t handle it, there&#8217;s a million phones, and you have to test it on each phone. And carriers suck, they don&#8217;t care about sharing and promoting your content.&#8221; Everything was bad. This year mobile gaming is the hottest thing. And it&#8217;s all because of the iPhone. It changed the game.</strong></p>
<p><strong>Tish: </strong>How do you think Android is going to get traction against the iPhone?</p>
<p><strong>Ori: Well, the number one thing is the form factor &#8211; the iPhone is just much cooler than the G1. The G1 is OK but it doesn&#8217;t have the same feel. People thought it was going to be easy to clone the iPhone, but none of the attempts have succeeded so far.</strong></p>
<p><strong>Tish: </strong>How much does it matter for AR not being able to run things persistently in the background on the iPhone?</p>
<p><strong>Ori: Actually, they have added such a capability in OS 3. You can now make use of a background service.</strong></p>
<p><strong>Tish:</strong> OS 3 will open up new possibilities for AR?<strong> </strong></p>
<p><strong>Ori: Access to the video API is still not public. But there is a new Microsoft application &#8211; Microsoft Tag &#8211; that makes use of that API, which means it is probably OK to use it.</strong></p>
<p><strong>Tish: </strong>(I ask Ori for his card and he shows me how to read it with my iPhone.) Oh nice, you have an AR card, of course!</p>
<h3><strong>In Search of Pong for Augmented Reality</strong></h3>
<p><strong>Tish: </strong>So how will AR begin to, as Blair&#8217;s friend puts it, &#8220;facilitate a killer existence,&#8221; particularly as we are probably looking at some new and perhaps pricey hardware?</p>
<p><strong>Ori: You could take the Better Place approach. We&#8217;re going to give you a great experience and we&#8217;ll include the devices as part of that experience for the same price. Let&#8217;s say you subscribe to an AR experience which offers multiuser access, support, and all the information you need wherever you go &#8211; exactly according to the vision. You pay for a subscription on a monthly basis, and included in that cost we give you a better device that offers a better AR experience. It&#8217;s following the phone carrier approach, but in a good way.</strong></p>
<p><strong>But first of all we do need our Pong! I was sitting with a couple of AR game enthusiasts at the GDC and we were asking ourselves, &#8220;how do we create the first Pong for AR?&#8221;</strong></p>
<p><strong>Was Pong a multiplayer game? Not necessarily! Did it connect to the network? No! We have to create the first dot in a long line of dots that will bring us to our destination.</strong></p>
<p><strong>Tish: </strong>You haven&#8217;t seen a Pong yet have you?</p>
<p><strong>Ori: Not yet. I mean there&#8217;s maybe a handful of games and apps out there, but I don&#8217;t think any of them is a Pong yet. Still, it&#8217;s getting closer.</strong></p>
<p><strong>Tish: </strong>Kati London is doing some very interesting work on bringing games into reality, isn&#8217;t she?</p>
<p><strong>Ori: Yes, she works with Frank Lantz at <a href="http://playareacode.com/" target="_blank">Area/Code</a>. He teaches at NYU and has designed games for the <a href="http://www.comeoutandplay.org/" target="_blank">&#8220;Come Out and Play&#8221;</a> festival here in Manhattan. And a lot of these games are actually low tech.</strong></p>
<p><strong>Tish:</strong> Yes I have a big alternate reality game blog brewing that I haven&#8217;t had time to write yet!</p>
<p><strong>Ori: &#8220;The city is the gameboard&#8221; is their slogan. It&#8217;s going to be a great playground for AR games. The city becomes a theme park. The city could become an even bigger tourist attraction. People will come to the city to be part of these games. So you&#8217;re having thousands of people running around the city playing all sorts of games, from laser-tag style to history adventures to treasure hunts.</strong></p>
<h3><strong>Composing Reality</strong></h3>
<p><strong>Tish: </strong>So why haven&#8217;t you focused on one of these kinds of games with your company?</p>
<p><strong>Ori: We have a couple of scenarios along these lines that we&#8217;re planning for 2010-11. But first we&#8217;re focusing on what&#8217;s possible today.</strong></p>
<p><strong>Tish: </strong>And what&#8217;s stopping you from doing those kind of games today?</p>
<p><strong>Ori: Many things. The devices are not there yet, location services are not accurate enough, ubiquitous sensors are not there yet.</strong></p>
<p><strong>Tish: </strong>You think alternate reality gaming needs more &#8220;ubiquity&#8221; than is currently available?</p>
<p><strong>Ori: Not necessarily. People are doing alternate reality games with no &#8220;ubiquity&#8221; at all. But my interest is to add the visual aspect. I believe humans are mostly driven visually.</strong></p>
<p><strong>Jane McGonigal said in a talk at GDC that AR would allow us to program reality, which is exactly how I look at it. You can recognize things in various ways &#8211; with WiFi and RFID and all sorts of sensors &#8211; but visual sensing is always going to be the ultimate way to recognize things. And once you recognize things and know what they are, and can pull information about those things (or people and places) from the internet, you can program them (visually). You could program it to be fictional, like in a video game, or it could be programmed as non-fictional, like a documentary. And that allows you to do things that before were unimaginable.</strong></p>
<p><strong>Tish: </strong>But you can&#8217;t forget the visual; it is the primary connection to people&#8217;s sensory relationships.</p>
<p><strong>Ori: Yes, it&#8217;s like you go to a grocery store and you pick your vegetables, a lot of it is by sight and by touch. And what if you could also see just by looking at it that it&#8217;s from a local store, and that it&#8217;s organic?</strong></p>
<p><strong>Tish:</strong> It goes beyond overlays really?</p>
<p><strong>Ori: By the way, I don&#8217;t like the term &#8216;overlay&#8217;. I know that&#8217;s how it looks: you either overlay or superimpose, but I&#8217;m still searching for a better term. A term I prefer to use is &#8220;composing reality&#8221;. Just like painters, they use brushstrokes and colors and compose a painting. We need to take the real element and the virtual element and compose them into something new. It&#8217;s not just about slapping one on top of the other.</strong></p>
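<p><em>A minimal way to see Ori&#8217;s distinction in code: an overlay replaces what is really there, while composing blends the real and the virtual into something new. This is only an illustration of the idea; the pixel values and the blend weight are invented.</em></p>
<pre>
def overlay(real, virtual):
    """Slap the virtual element on top: the real pixel is discarded."""
    return virtual

def compose(real, virtual, alpha=0.6):
    """Blend the two, like brushstrokes: both contribute to the result."""
    return tuple(round(alpha * v + (1 - alpha) * r)
                 for r, v in zip(real, virtual))

real_pixel = (200, 180, 160)   # e.g. a patch of brick wall
virtual_pixel = (30, 90, 220)  # e.g. a blue annotation

print(overlay(real_pixel, virtual_pixel))  # (30, 90, 220)
print(compose(real_pixel, virtual_pixel))  # (98, 126, 196)
</pre>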
<p><strong>Tish: </strong>Yes, I think the idea of dashboards is not so appealing.</p>
<h3><strong>Pookatak Games</strong></h3>
<p><strong>Tish: </strong>Do you want to explain the evolution of your company? You have an interesting history of success with high end enterprise applications.</p>
<p><strong>Ori: Since I was a kid I wanted to invent and create things. When I discovered software, that was a really cool way of actually creating things from nothing &#8211; from thin air, and you can do it very quickly. That&#8217;s what brought me into software. But I was always looking for the intersection between technology and art, looking for ways to bring these things together. In the early nineties virtual reality was doing it. It had the appeal of cutting-edge technology that can be combined with art. But then, as we all know, it crashed. So I joined Shai Agassi&#8217;s startup (he is now doing Better Place) back in the early nineties. I was one of the first employees in his startup, which was developing multimedia products. I was leading the development of one of its flagship products. At some point we realized the technology could be great for an enterprise environment.</strong></p>
<p><strong>It was a really great experience &#8211; first going through this cycle from a very small startup and growing into this multi-billion-dollar business. I was responsible for defining and marketing SAP&#8217;s platform, which was called NetWeaver. It was just an idea when we joined SAP, and by the time I left it was a major, major business for SAP. I learned about the challenges of building a platform. No matter what purpose you&#8217;re building it for, it typically has similar rules. It&#8217;s definitely not just about the technology; the content that comes with it is really key to making a platform successful.</strong></p>
<p><strong>The third part of this platform trifecta is the community. If you don&#8217;t build a community, you won&#8217;t get the critical mass required for adoption. It may be your own platform but it&#8217;s not necessarily the people&#8217;s platform. That experience is very key to what we&#8217;re doing today. Now, a new industry is being born on the basis of a remarkable technology. But to drive adoption, first we&#8217;ll need good content. The content will be created using today&#8217;s technology, with internal tools developed to simplify the process. The next step would be to make the tools used internally available to other developers &#8211; help scale the industry, enable innovation on a larger scale. That way we have a chance to create a platform. So it isn&#8217;t really just about my company. I&#8217;m so passionate about augmented reality, I want it to become a healthy and successful industry for the next 5, 10, 15 years.</strong></p>
<p><strong>Tish: </strong>Yes, I am so ready to be liberated from sitting behind a computer screen! And I know that all this hardware is murdering the environment.</p>
<p><strong>Ori: There&#8217;s the book by Rolf Hainich called &#8220;<a id="ba8p" title="The End Of Hardware" href="http://www.theendofhardware.com/">The End Of Hardware</a>.&#8221; It&#8217;s about hardware for augmented reality. Once you use goggles or other AR interfaces you eliminate the need for screens, laptops, etc. It&#8217;s going to be great for the environment. You have read Rainbows End, right? According to the book, in a few years there will barely be any (visible) hardware. At least it&#8217;ll have a much smaller footprint for the environment. And it&#8217;ll touch every aspect of life, everything you do. It&#8217;ll change the way you interact with the world.</strong></p>
<h3><strong>The Elusive Eyewear for Immersive AR</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/retroar-googlespost.jpg"><img class="alignnone size-medium wp-image-3469" title="retroar-googlespost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/retroar-googlespost-300x225.jpg" alt="retroar-googlespost" width="300" height="225" /></a><br />
<em>Friend of Ori&#8217;s in San Francisco wearing retro AR goggles (from <a href="http://gamesalfresco.com/2009/05/04/gdc-2009-roundup-a-tiny-spark-of-augmented-reality/" target="_blank">Games Alfresco, Ori&#8217;s roundup of GDC 2009</a>)</em></p>
<p><strong>Tish: </strong>OK, let&#8217;s talk about goggles.</p>
<p><strong>Ori: Goggles are going to happen; we want to be hands-free.</strong></p>
<p><strong>It&#8217;s going to happen because it&#8217;s just a more intuitive way to use this technology. But above all it has to look cool. Because if it&#8217;s not, if it&#8217;s a big headset, then maybe a small percent of the population might use it, but most people won&#8217;t. It has to look like an accessory, like new cool eyeglasses that you just must wear.</strong></p>
<p><strong>I recently talked to a friend who runs an industrial design firm and has experience in designing such glasses for companies like Microvision and Lumus. He says that when you try to bring the images so close to our eyes, there are some really hard problems to solve. Otherwise it can become really annoying and cause dizziness.</strong></p>
<p><strong>But I&#8217;m optimistic. I believe it&#8217;s going to happen 3 to 5 years from now. It&#8217;s already starting: Vuzix announced goggles that will be available this year, and some AR apps are going to take advantage of them next year. Initially only a fraction of the population will use it. And that&#8217;s going to help advance it and make it better and better. But it&#8217;s going to take time until it reaches the mass market.</strong></p>
<p><strong>Tish:</strong> In virtual worlds we have seen, I think, a lot of mistakes in terms of reinventing the wheel and producing too many proprietary versions of the same thing, and not enough concerted effort on standards and open platforms that could create a vibrant ecosystem. How can augmented reality avoid making the same mistakes?</p>
<p><strong>Ori: There are some early AR open source efforts &#8211; ARToolKit, ARTag &#8211; but it is not a movement yet. One of the things we&#8217;re trying to do at ISMAR this year is to put together discussions around key industry issues, such as standards. Some people say it&#8217;s too early, that you have to have a de facto standard to start from. But pretty soon it&#8217;s going to be too late. Just like with virtual worlds, all of a sudden you have all these islands that don&#8217;t talk to each other. Why get to that point if we can plan to avoid it? Let&#8217;s start thinking about it right now. On the other front there are devices. There are pockets of people working on adapting devices for AR, second-guessing the hardware companies. Why not get them together with the Intels and Nvidias of the world, and discuss what this device should be able to do &#8211; and then compete to make it happen?</strong></p>
<p><strong>Tish: </strong>How much luck are you having with this discussion part?</p>
<p><strong>Ori: People are very interested in doing this. We proposed these panels for ISMAR. And I&#8217;ve got some key people already on board. They have tons of input, they want to get involved. We&#8217;ll see how much we can actually get out of it.</strong></p>
<p><strong>Tish: </strong>In virtual worlds it was a while before vibrant open source communities developed. OpenSim has, I think, been the breakthrough community in this regard.</p>
<p><strong>Ori: You have to think about the elements up front. The dream job is to architect the industry. Say we agree on the required pieces. Then we could help the right companies succeed in delivering the pieces. Next, we have to collaborate so that these pieces talk to each other. And eventually these communication methods will become de facto standards and most developers will adopt them.</strong></p>
<p><strong>Tish: </strong>So I&#8217;m going to put you in the role. You&#8217;ve got your dream job. You&#8217;re going to architect this community. So what are the key pieces and where would you like to see the open source communities take hold first?</p>
<p><strong>Ori: Open source will not be exclusive. It&#8217;s going to live side by side with proprietary technology.</strong></p>
<p><strong>The key pieces? You have the user at the center. And the user interacts with a lens. The lens includes both the hardware and the software. And then the lens senses and interacts with the world, which includes people, things and places. And these people-things-places emit information &#8211; about who they are, where they are, what they&#8217;re doing, etc. &#8211; which is then stored in the cloud.</strong></p>
<p><strong>And then you have the content providers, the people and companies &#8211; composers &#8211; who weave AR experiences through the pieces we mentioned before. These composers need a platform that glues these pieces together. Pieces of the platform will be on the lens, and in the world, and in the cloud. If you manage to remove the frictions, and connect these pieces into an experience that people like &#8211; then you have a platform. What the platform does is reduce the overhead and accelerate innovation.</strong></p>
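<p><em>Here is a minimal Python sketch of the pieces Ori names &#8211; user, lens, world, cloud, and composer &#8211; as plain data types. Every class and field name is an assumption made for illustration; this is one reading of the architecture he describes, not a real platform API.</em></p>
<pre>
from dataclasses import dataclass, field

@dataclass
class Emission:            # what people, things and places report about themselves
    who: str
    where: tuple
    doing: str

@dataclass
class Cloud:               # where those emissions are stored and queried
    store: list = field(default_factory=list)
    def publish(self, e): self.store.append(e)
    def near(self, where): return [e for e in self.store if e.where == where]

@dataclass
class Lens:                # the hardware + software the user looks through
    cloud: Cloud
    def view(self, where): return self.cloud.near(where)

def compose(lens, where, narrative):
    """A 'composer' weaves cloud data and a story into one experience."""
    return ["%s: %s is %s" % (narrative, e.who, e.doing) for e in lens.view(where)]

cloud = Cloud()
cloud.publish(Emission("cafe", (40.73, -73.99), "open until midnight"))
print(compose(Lens(cloud), (40.73, -73.99), "City tour"))
</pre>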
<p><strong>Tish: </strong>Another problem virtual worlds faced in their development was their isolation from the World Wide Web. Will augmented reality avoid this plight?</p>
<p><strong>Ori: Yes, I believe the key, like you said before, is not to reinvent the wheel. The cloud is already there. Take Wikitude for example: all <a href="http://www.mobilizy.com/" target="_blank">Mobilizy</a> had to do was build a relatively simple client app connected to Wikipedia, and all of a sudden it offered a wealth of information in your field of view.</strong></p>
<p><strong>I think we can learn a lot from web 2.0. For example, in order to have a ubiquitous experience like <a href="http://www.curiousraven.com/" target="_blank">Robert Rice</a> and others are striving for, you&#8217;ll need to 3D-map the world. Google Earth-like apps are going to help but are not going to be sufficient. So let&#8217;s leverage people. Google became successful in part by making people work with them. Each time you create a link from your blog to my blog, their search engines learn from it. So let&#8217;s find ways to make people create information that can be used for AR.</strong></p>
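<p><em>A small sketch of that &#8220;leverage people&#8221; idea: treat every geotagged contribution as a weak signal, and trust a label for a spot once enough independent users agree &#8211; much as links taught search engines. The submissions and the vote threshold below are invented.</em></p>
<pre>
from collections import Counter

submissions = [
    {"cell": (40.7484, -73.9857), "tag": "Empire State Building"},
    {"cell": (40.7484, -73.9857), "tag": "Empire State Building"},
    {"cell": (40.7484, -73.9857), "tag": "observation deck"},
    {"cell": (40.7580, -73.9855), "tag": "Times Square"},
]

def label_map(subs, min_votes=2):
    """Trust a label for a map cell once enough independent users agree."""
    votes = Counter((s["cell"], s["tag"]) for s in subs)
    return {cell: tag for (cell, tag), n in votes.items() if n >= min_votes}

print(label_map(submissions))  # {(40.7484, -73.9857): 'Empire State Building'}
</pre>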
<p><object width="425" height="344" data="http://www.youtube.com/v/GTXtW3W8mzQ&amp;hl=en&amp;fs=1" type="application/x-shockwave-flash"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/GTXtW3W8mzQ&amp;hl=en&amp;fs=1" /><param name="allowfullscreen" value="true" /></object></p>
<p><em>Ori Inbar directed <a title="Wiki Mouse" href="http://www.youtube.com/watch?v=GTXtW3W8mzQ" target="_blank">Wiki Mouse</a> &#8211; a WIKI Film co-created by a swarm of movie makers around the world.</em></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/feed/</wfw:commentRss>
		<slash:comments>12</slash:comments>
		</item>
		<item>
		<title>Towards a Newer Urbanism: Talking Cities, Networks, and Publics with Adam Greenfield</title>
		<link>http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/</link>
		<comments>http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/#comments</comments>
		<pubDate>Sat, 28 Feb 2009 04:28:06 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[crossing digital divides]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[free software]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Adam Greenfield]]></category>
		<category><![CDATA[aggregating the world's energy data]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Anne Galloway's forgetting machine]]></category>
		<category><![CDATA[antisocial networking]]></category>
		<category><![CDATA[antisocial networking systems]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[cities and networks]]></category>
		<category><![CDATA[connecting environments]]></category>
		<category><![CDATA[context aware]]></category>
		<category><![CDATA[context aware applications]]></category>
		<category><![CDATA[context aware mediators]]></category>
		<category><![CDATA[data visualization]]></category>
		<category><![CDATA[deliberative democracy]]></category>
		<category><![CDATA[Eben Moglen on privacy]]></category>
		<category><![CDATA[EEML]]></category>
		<category><![CDATA[Erving Goffman]]></category>
		<category><![CDATA[everyware]]></category>
		<category><![CDATA[flexible identity]]></category>
		<category><![CDATA[information processing]]></category>
		<category><![CDATA[interaction design]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[locative is a mood]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[mobile computing]]></category>
		<category><![CDATA[mobile phones and sensors]]></category>
		<category><![CDATA[mobility]]></category>
		<category><![CDATA[next generation internet]]></category>
		<category><![CDATA[Nurri Kim]]></category>
		<category><![CDATA[onto]]></category>
		<category><![CDATA[ontome]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[privacy in networked environments]]></category>
		<category><![CDATA[RFID]]></category>
		<category><![CDATA[self-describing networked objects]]></category>
		<category><![CDATA[smart homes]]></category>
		<category><![CDATA[smart products]]></category>
		<category><![CDATA[social networking systems]]></category>
		<category><![CDATA[sousveillance]]></category>
		<category><![CDATA[speedbird]]></category>
		<category><![CDATA[spime wrangle]]></category>
		<category><![CDATA[spime wrangling]]></category>
		<category><![CDATA[spimes]]></category>
		<category><![CDATA[spimy]]></category>
		<category><![CDATA[sustainable cities]]></category>
		<category><![CDATA[the big now]]></category>
		<category><![CDATA[the city is here for you to use]]></category>
		<category><![CDATA[the future of the internet]]></category>
		<category><![CDATA[the long here]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubicomp technologies]]></category>
		<category><![CDATA[ubiquitous systems]]></category>
		<category><![CDATA[unbook]]></category>
		<category><![CDATA[uncanny valleys]]></category>
		<category><![CDATA[urban informatics]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[web of things]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=2969</guid>
		<description><![CDATA[Adam Greenfield&#8217;s new book, The City Is Here For You To Use, is coming soon (photo above by Pepe Makkonen is from Adam Greenfield&#8217;s Flickr stream). Adam told me: &#8220;I&#8217;m aiming at a free v1.0 PDF release on 05 June 2009, with the book shipping as quickly thereafter as humanly possible. There will be a [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldpost.jpg"><img class="alignnone size-full wp-image-2970" title="adamgreenfieldpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldpost.jpg" alt="adamgreenfieldpost" width="333" height="500" /></a></p>
<p>Adam Greenfield&#8217;s new book, <em><strong><a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank">The City Is Here For You To Use</a></strong></em>, is coming soon (photo above by Pepe Makkonen is from <a id="souo" title="Adam Greenfield's Flickr stream" href="http://www.flickr.com/photos/studies_and_observations/">Adam Greenfield&#8217;s Flickr stream)</a>. Adam told me:</p>
<p style="text-align: left;"><strong>â€œIâ€™m aiming at a free v1.0 PDF release on 05 June 2009, with the book shipping as quickly thereafter as humanly possible. There will be a version zero or public alpha in about six weeks.â€</strong></p>
<p>I am not good at waiting for books I really want to read to arrive. But, on the upside, it brings out my already pretty highly developed investigative instinct. So when Adam very generously agreed to do an interview, impatience turned into delight in tasting what is to come. And Adam is encouraging this kind of engaged anticipation. He writes (<a id="v80w" title="see post" href="http://speedbird.wordpress.com/2009/02/19/of-books-and-unbooks/">see post</a>) that <em>The City Is Here For You To Use</em> is shaping up:</p>
<p><strong>&#8220;as something of an <a id="oj:9" title="unbook" href="http://theunbook.com/2009/02/18/what-is-an-unbook/">unbook</a><em> avant la lettre.</em> It&#8217;s why we&#8217;ve [<a href="http://www.nurri.com/">Nurri Kim</a> and Adam Greenfield] always insisted on keeping you in the loop as to the book&#8217;s <a href="http://speedbird.wordpress.com/2009/01/22/bookproject-update-005-year-two/">fitful progress</a>, it&#8217;s why I take every opportunity to <a href="http://speedbird.wordpress.com/2009/02/14/the-city-is-here-table-of-contents/">test its ideas here</a>, it&#8217;s why I make explicit the fact that your response to those ideas is crucial to their evolution and expression. And it&#8217;s why, even though the process is inevitably going to result in a static, physical document as one of its manifestations &#8211; and hopefully a very nice one indeed &#8211; we&#8217;ve committed to offering a free and freely-downloadable Creative Commons-licensed PDF of every numbered version of <em>The City</em>, from zero onward.</strong></p>
<p><strong>You buy the book if you want the object. The ideas are free.&#8221;</strong></p>
<p>I found the opportunity to ask Adam questions about some of his subtle renderings of technology, culture, and being in urban environments challenging and very illuminating &#8211; although I definitely get the feeling I am asleep at the wheel on some of the critical areas he is thinking and writing on.</p>
<p>Knowing the depth and range of Adam&#8217;s thought in his seminal book, <em><a id="you9" title="Everyware" href="http://www.studies-observations.com/everyware/">Everyware</a></em>, and his blog, <a id="r22r" title="Speedbird" href="http://speedbird.wordpress.com/">Speedbird</a>, before I began the conversation I asked Adam to point me to some of his posts that reflect key ideas he is working on at the moment (Adam has recently posted <a href="http://speedbird.wordpress.com/2009/02/14/the-city-is-here-table-of-contents/" target="_blank"><em>The City Is Here</em>: Table of contents</a>). Adam directed me to these three posts.</p>
<p style="text-align: left;"><a href="http://speedbird.wordpress.com/2007/12/09/antisocial-networking/" target="_blank">Antisocial networking</a></p>
<p style="text-align: left;"><a href="http://speedbird.wordpress.com/2008/08/25/more-songs-about-context-and-mood/" target="_blank">More songs about context and mood</a></p>
<p><a href="http://speedbird.wordpress.com/2007/01/29/messenger-space-messenger-body-messenger-mesh/" target="_blank">Messenger, space, messenger body, messenger mesh</a></p>
<p>I may ramble and diverge, as is my nature, but these posts inspired many of the questions I ask.</p>
<p>Adam is currently head of design direction for service and user-interface design at Nokia and living in Helsinki, so I did not have the opportunity to do the interview in person. But I have glimpsed Adam&#8217;s world through his Flickr stream and some of these images have found their way into this post. But I suggest you browse Adam&#8217;s photography for yourself. I cannot do justice to the thousands of nuanced perceptions of cities, networks and publics you will find there. In the meantime, here are three glyphs of Adam Greenfield that I liked a lot.</p>
<p><strong><em><a id="r315" title="&quot;My favorite shoes&quot;" href="http://www.flickr.com/photos/studies_and_observations/2074835498/">â€œMy favorite shoes,â€</a> <a id="cg3n" title="&quot;My favorite chair,&quot;" href="http://www.flickr.com/photos/studies_and_observations/2074042711/">â€œMy favori</a><a id="cg3n" title="&quot;My favorite chair,&quot;" href="http://www.flickr.com/photos/studies_and_observations/2074042711/">te chairâ€</a> </em></strong><em>and</em><strong><em> </em></strong>photo by Adam Greenfield, <em><strong><a id="cg3n" title="&quot;My favorite chair,&quot;" href="http://www.flickr.com/photos/studies_and_observations/2074042711/"> </a><a id="vjz1" title="&quot;Favoriteplace&quot;" href="http://www.flickr.com/photos/studies_and_observations/1849426174/">â€œFavoriteplaceâ€</a></strong></em></p>
<p><strong><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteshoespost.jpg"><img class="alignnone size-full wp-image-2984" title="favoriteshoespost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteshoespost.jpg" alt="favoriteshoespost" width="225" height="225" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoritechair1.gif"><img class="alignnone size-medium wp-image-2975" title="favoritechair1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoritechair1-300x225.gif" alt="favoritechair1" width="300" height="225" /></a></em></strong></p>
<p><a href="../wp-content/uploads/2009/02/favoriteplace.jpg"><br />
</a><br />
<a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteplace2.jpg"><img class="alignnone size-medium wp-image-2992" title="favoriteplace2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteplace2-300x225.jpg" alt="favoriteplace2" width="300" height="225" /></a></p>
<h3>A Conversation (in gdoc) with Adam Greenfield</h3>
<p><strong> Tish Shute:</strong> Could you explain a little about the evolution of your thoughts on urban environments, ubicomp and interaction design? What shifts in your thinking have taken place over the last few years re the dawning of the age of ubiquitous computing? It is a couple of years now since <a href="http://www.studies-observations.com/everyware/" target="_blank"><em>Everyware</em></a>; what aspects of the uptake of <em>Everyware</em> have most surprised, disappointed or inspired you? Which of the many theses you discuss in <em>Everyware</em> have become the most crucial for <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank"><em>The City Is Here For You To Use</em>?</a></p>
<p><strong>Adam Greenfield: You know, there&#8217;s a little passage in the liner notes to the second Throbbing Gristle album that I always think of when I&#8217;m asked questions along these lines. As part of their stance, they&#8217;d adopted the dry tone of a corporate annual report, and the preamble began by saying, &#8220;Since our last report to you, many things have changed. Indeed, it would be foolish to assume that it could be otherwise.&#8221; And I think that&#8217;s just exactly right: the world keeps moving, and the positions we&#8217;d staked ourselves to not so long ago may no longer be correct, or even relevant, to the one we find ourselves inhabiting now.<br />
</strong><br />
<strong>So, first, I think it&#8217;s important to cop to all the places in <em>Everyware</em> where I just outright got things wrong. There&#8217;s a passage in Thesis 50, for example, where I unaccountably mock the idea that &#8220;the mobile phone&#8230;will do splendidly as a mediating artifact for the delivery of [ubiquitous] services.&#8221; OK, this was admittedly written in a pre-iPhone world &#8211; and was correct <em>for</em> that world &#8211; but you can really see my parochialism showing here. It took the iPhone to make the proposition as blazingly self-evident to me in North America as it had been for quite some time to folks in Europe and Asia.</strong></p>
<p><strong>Having said that, though, I think I&#8217;m justified in taking a little pride in what the book got right. The broader trends the book set out to discuss &#8211; the colonization of everyday life by information processing &#8211; well, take a good look around you. And so one of the points of departure for the new book is taking everything posited in <em>Everyware</em> as a given: the urban environment, and most everything in it as well, has been provisioned with the kind of abilities you mention. So what now?</strong></p>
<p><strong>How do you go about designing informatic systems so they don&#8217;t undermine the wonderful things about cities? How do you design cities so they can incorporate networked informatics to greatest advantage? How, especially, do you accomplish these things when the disciplinary communities involved barely speak the same language? And how do you keep everyone&#8217;s eyes on the prize, which is the ordinary human being asked to make sense of these new propositions? These are the questions <em>The City Is Here For You To Use</em> sets out to address.</strong></p>
<p><a href="../wp-content/uploads/2009/02/adamgreenfieldthelonghere.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldthelonghere.jpg"><img class="alignnone size-full wp-image-2993" title="adamgreenfieldthelonghere" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldthelonghere.jpg" alt="adamgreenfieldthelonghere" width="500" height="321" /></a></p>
<p><em>Adam talking about the <a href="http://www.flickr.com/photos/studies_and_observations/3181518615/" target="_blank">&#8220;Le Long Ici&#8221;</a> in Paris (also see Adam&#8217;s post, <a href="http://speedbird.wordpress.com/2008/05/04/the-long-here-and-the-big-now/" target="_blank">&#8220;The long here and the big now&#8221;</a>)</em></p>
<p><strong>TS:</strong> You mention that the hardest part of producing <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank"><em>The City Is Here For You To Use</em></a> wasn&#8217;t <em><strong>&#8220;keeping on top of all the emergent manifestations of urban informatics, or even developing a satisfying spinal argument about their significance&#8221;</strong></em> but getting the voice right. It seems that now is the perfect time for a book that would really speak to a wide audience. But also it seems that the city that is here for you to use is manifesting quite differently in different parts of the world? You seem to be somewhat of a nomad &#8211; Japan to NYC to Helsinki. Can putting together different views of urban informatics give us more depth perception on the emergence of ubiquitous computing?</p>
<p><strong>AG: There&#8217;s no question in my mind that the long-term experience of everyday life in Tokyo, New York, and now Helsinki has been an invaluable asset to me, as I imagine it would be to anybody interested in thinking or writing about the networked city. It&#8217;s given me a certain amount of parallax, you know? And that, in turn, throws a really interesting light onto how the selfsame technology can appear in substantially different guises in different social contexts.</strong></p>
<p><strong>But explaining those things &#8211; those complicated, delicate negotiations &#8211; getting them right, doing them justice, doing so in a way that doesn&#8217;t dumb anything down, and still remaining accessible? It&#8217;s a challenge, let me tell you. You want to remain approachable and humane, but you also want to explain things like different jurisprudential takes on property, or how advocates of RESTful architectures think that REST is the reason why Internet adoption spread as rapidly as it did. If you want to enjoy even one chance in a hundred of getting your message across, you&#8217;ve got to start with an understanding that those subjects are MEGO territory for most people &#8211; whether they hail from Shibuya, Shoreditch or San Pedro.</strong></p>
<p><a href="../wp-content/uploads/2009/02/everywareicon.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon.jpg"><img class="alignnone size-full wp-image-2996" title="everywareicon" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon.jpg" alt="everywareicon" width="136" height="135" /></a></p>
<p><em><strong><a href="http://www.flickr.com/photos/studies_and_observations/89045331/" target="_blank">Everyware icons: Information processing dissolving into behavior</a></strong></em><em><strong> </strong>(Icons inspired by <a href="http://www.elasticspace.com/" target="_blank">Timo Arnall</a>; design by Adam Greenfield and <a href="http://www.nurri.com/">Nurri Kim</a>).Â  [Adam notes on his Flickr page that he tweaked <a href="http://www.flickr.com/search/?w=14112399%40N00&amp;q=everyware+icons&amp;m=text" target="_blank">these icons </a>as section headers for </em><em><a href="http://www.studies-observations.com/everyware/" target="_blank"><em>Everyware</em></a></em><em>]</em></p>
<p><strong>TS:</strong> Could you explain more about what you term &#8220;onto&#8221; and &#8220;ontome&#8221; and how this differs from spimes and spime wrangling?<strong><br />
</strong><strong><br />
AG: You know, I never did get to develop that idea as much as I would have liked. In my mind, at least, &#8220;ontome&#8221; referred to the totality &#8211; the global environment of addressable, queryable, scriptable objects. (An &#8220;onto,&#8221; then, would be any given such object.) I guess I was looking for words that would do two things: allow us to distinguish between the instantiation and the class, and leave us with a better word than &#8220;spime.&#8221;</strong></p>
<p><strong>TS: </strong>When you say a better word than spime, is this because&#8230;<br />
<strong><br />
AG: Euphony, primarily. : . )</strong></p>
<p><strong>TS:</strong> When I first used the Android app, <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a>, on Broadway, NYC &#8211; a street I have traveled thousands and thousands of times, and it offered up new information about itself, it was definitely an &#8220;OMG this is big!&#8221; moment for me. Like the first time I clicked on a screen and Amazon sent out a book in the early nineties (something so ordinary now it seems impossible that it was exciting but I remember it was to me!). But if I understand <a href="http://speedbird.wordpress.com/2008/08/19/worth-a-thousand-words-etc/" target="_blank">your post here</a> correctly, isn&#8217;t Android with compass the first easy-to-use context-aware mediator for wrangling onto, ontome and spimes?<strong><br />
</strong><br />
<strong>AG: Wikitude sure looks pretty impressive, and maybe even useful. But I would never, ever call it &#8220;context-aware.&#8221;<br />
</strong><br />
<strong>To my mind, at least two more things would need to happen before we could comfortably think of it as a &#8220;context-aware spime wrangler.&#8221; First, the buildings and other public objects around you would actually have to be spimy &#8211; they&#8217;d have to report something of their past and current state to the network. And then, some application running on your phone would somehow have to cross-reference that state information with some fact about your current state of being, and deliver you relevant information.</strong></p>
<p><strong>So, let&#8217;s take your Wikitude example. You&#8217;re walking down Broadway and you pass an unfamiliar building, and for whatever reason you want to know more about it. Your phone pings the building&#8217;s dynamic self-description, and it replies to the effect that Andy Warhol had his Factory there between 1973 and 1984. If Wikitude chooses to share this particular piece of information with you, and not some other potentially germane factoid from the building&#8217;s history, on the strength of the fact that &#8220;The Velvet Underground and Nico&#8221; was in your last.fm playlist? That would constitute some small measure of context-awareness.</strong></p>
<p><strong>But you see how hard we had to try just to come up with an example, how forced it is, how</strong><em><strong> so-what.</strong></em><strong> And I have to say that &#8211; short of some infinitely supple system that really could model your innermost desires ahead of real time, and present appropriate responses to them &#8211; most so-called &#8220;context-aware&#8221; applications and services are like this. They&#8217;re either trivial, or wildly overambitious.</strong></p>
<p><strong>Maybe we don&#8217;t need for things to be context-aware for them to be useful, anyway. Certainly a great many objects in the world are starting to report their own status, and many more will do so in the fullness of time. And for the most part, all you&#8217;ll need to avail yourself of them is a Web browser running on a device that knows where it is in the world. An iPhone or an Android device will work splendidly &#8211; I called the iPhone &#8220;the first real everyware device&#8221; the day it came out and I was able to play with it for the first time &#8211; and in that way, the answer to your question is &#8220;yes.&#8221; Not to be longwinded or anything. ; . )</strong></p>
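<p><em>Adam&#8217;s Factory example above reduces to a small cross-referencing step: query the object&#8217;s self-description, then surface the one fact that overlaps some fact about the user&#8217;s current state. A hedged Python sketch, using his example; the tags, the playlist, and the keyword-overlap rule are illustrative assumptions.</em></p>
<pre>
building_facts = [
    {"text": "Andy Warhol had his Factory here between 1973 and 1984",
     "tags": {"warhol", "factory", "velvet underground"}},
    {"text": "Built in 1896 as a department store",
     "tags": {"architecture", "retail"}},
]

user_playlist = {"the velvet underground and nico", "velvet underground"}

def contextual_fact(facts, playlist):
    """Return the first fact whose tags overlap the user's listening history."""
    for fact in facts:
        if any(tag in playlist for tag in fact["tags"]):
            return fact["text"]
    return None  # nothing germane: stay quiet rather than be trivial

print(contextual_fact(building_facts, user_playlist))
</pre>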
<p><a href="../wp-content/uploads/2009/02/objectwithimperceptibleproperties.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/objectwithimperceptibleproperties.jpg"><img class="alignnone size-medium wp-image-3000" title="objectwithimperceptibleproperties" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/objectwithimperceptibleproperties-300x212.jpg" alt="objectwithimperceptibleproperties" width="300" height="212" /></a></p>
<p><em><a href="http://www.flickr.com/photos/studies_and_observations/206984090/#DiscussPhoto" target="_blank">This Object has imperceptible properties. </a> [Adam notes on his Flickr page: &#8220;This is a custom RFID-enabled transit pass that <a href="http://www.elasticspace.com/" target="_blank">Timo Arnall </a>had made up for me here in Seoul. I&#8217;ve (clumsily) tagged it with the icon that Nurri and I developed to represent just such emergent situations as this in the everyware milieu &#8211; that there&#8217;s no way for anyone to understand that this object has puissance beyond the obvious simply by examining it.&#8221;]</em></p>
<p><strong>TS: </strong>It seems that we are just at the beginning of understanding how to create networks of spimes (e.g. <a href="http://www.pachube.com/" target="_blank">Pachube</a>). Gavin Starks of <a id="ya:2" title="AMEE" href="http://www.amee.com/">AMEE</a> (&#8220;the world&#8217;s energy meter&#8221;) once suggested to me that AMEE could be described as a facilitator of networked spimes (everything will have an energy identity). I think you may be familiar with AMEE because you keynoted next to Gavin at <a href="http://2007.xtech.org/public/schedule/grid/2007-05-16" target="_blank">Xtech 2007</a>.</p>
<p>I would be interested to hear your thoughts on AMEE?</p>
<p>When <a href="http://speedbird.wordpress.com/2008/08/19/worth-a-thousand-words-etc/" target="_blank">you discussed onto and ontome in this post</a>, you noted:</p>
<blockquote><p><em><strong>&#8220;The greater part of the places and things we find in the world will be provided with the ability to speak and account for themselves. That they&#8217;ll constitute a coherent environment, an <a href="http://www.graphpaper.com/2006/03-23_a-spime-is-a-species">ontome</a> of <a href="http://flickr.com/photos/studies_and_observations/89092744/">self-describing networked objects</a>, and that we&#8217;ll find having some means of handling <a href="http://web.archive.org/web/20050117141647/www.v-2.org/greenfieldspime.pdf">the information flowing off of them</a> very useful indeed.&#8221;</strong></em></p></blockquote>
<p>Is the idea of &#8220;energy identity&#8221; that AMEE proposes an ontome? <em><br />
<strong><br />
</strong></em><strong>AG: See below for a précis of my feelings regarding environmental/sustainability initiatives, AMEE included. Uh&#8230;is AMEE an ontome? No. There&#8217;s just one ontome, and it&#8217;s coextensive with what folks now call the Internet of Things. It sounds like individual AMEE sensors would be &#8220;ontos.&#8221;</strong></p>
<p><strong>But I think the difficulty we&#8217;re having is a pretty good indicator that the terminology is more trouble than it&#8217;s worth. Sometimes a coinage, as satisfying as it may be lexically, just doesn&#8217;t work for people. These days I&#8217;m trying to get out of the neologism trade.</strong></p>
<p><strong>TS: </strong>I know <a href="http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">when Usman Haque talks about Pachube</a> he talks about spimes and spime wrangling. I asked Usman for his thoughts on spimes and onto/ontome and he gave me some comments.</p>
<p><strong>Usman Haque:</strong> I think I had somehow missed the conversation about onto and ontome but backtracked through blog posts to piece it together (unfortunately some posts at v-2 and Studies &amp; Observations no longer exist!). There are a couple of things that have made me uncomfortable about the word &#8216;spime&#8217;: (a) the fact that it might be too easy to confuse with an &#8220;object&#8221;. A &#8216;spime&#8217; should also encompass relationships between things, and not just the &#8220;thingness&#8221; itself. (b) the sound of it (as Adam noted above). But then I am reminded of that horrible gooey interface used to plug into people in <a href="http://www.imdb.com/title/tt0120907/">eXistenZ</a> &#8211; it somehow seems appropriate that it should be a horrible gooey word, and not something that can disappear politely&#8230; So I like onto/ontome because it speaks to my first concern about &#8216;spime&#8217;; but my second concern, it turns out, is not the problem I thought it was, and so onto/ontome might be&#8230; ahem&#8230; too euphonic! On the question of this thing people are calling the &#8220;Internet of Things&#8221;, I&#8217;ve tried in lectures to reframe it as the &#8220;Ecosystem of Environments&#8221;. Further, Vlad Trifa makes a delicious point that just as &#8216;web&#8217; is different from &#8216;internet&#8217;, so too should we consider the &#8220;Web of Things&#8221; rather than the &#8220;Internet of Things&#8221;, something I agree with.</p>
<p><strong>TS: </strong>It seems like this point about the difference between &#8220;the web of things&#8221; and the &#8220;internet of things&#8221; is pretty important?</p>
<p><strong>AG: The parallel distinction between Web and Internet sure is! They&#8217;re two completely different things, right? And HTTP is far from the only protocol that runs over the Internet. Now, as to what Vlad means by extending this particular distinction to the domain of networked objects, I don&#8217;t yet know; I haven&#8217;t had time to check it out. But sure, in principle I&#8217;d totally be willing to go along with the idea that there&#8217;s a meaningful distinction between two environments named that way.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon3.jpg"><img class="alignnone size-full wp-image-3010" title="everywareicon3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon3.jpg" alt="everywareicon3" width="142" height="139" /></a></p>
<p><em><a href="http://www.flickr.com/photos/studies_and_observations/89045326/in/photostream/" target="_blank">No information is collected here; network dead zone</a></em></p>
<p><strong>TS: </strong>I was just going over <a id="yo_s" title="Greenfield's principles of ubiquitous computing" href="http://www.we-make-money-not-art.com/archives/2006/10/adam-greenfield.php">Greenfield&#8217;s principles of ubiquitous computing</a>. I am not sure that I see any current manifestations of ubicomp that hold to these principles yet?</p>
<p><strong>AG: Oh, sure there are. Look at the work Tom Coates has done on <a href="http://fireeagle.yahoo.net/" target="_blank">Yahoo!&#8217;s Fire Eagle</a>; look at <a href="http://www.dopplr.com/" target="_blank">Dopplr</a>. And look at some of the steps other, less compassionate developers (e.g. Facebook) have been forced to take by their own users.</strong></p>
<p><strong>Look, those principles are just codifications of common sense and basic neighborly virtues, expressed in language appropriate to the domain of application. The best, smartest and most ethical developers have never needed guidelines to do the right thing. But especially inside companies and other complex organizations, people who want to implement compassion in their design of a technical system may occasionally find it useful to have some color of authority to invoke in their struggles. That&#8217;s all those five principles are there for, and I&#8217;m well satisfied that people have been able to use them that way.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smarthome.jpg"><img class="alignnone size-medium wp-image-3005" title="smarthome" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smarthome-300x225.jpg" alt="smarthome" width="300" height="225" /></a></p>
<p><em><a href="http://www.flickr.com/photos/studies_and_observations/501331002/" target="_blank">Boffi&#8217;s take on the smart home</a> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS:</strong> In your post, <a id="klme" title="More Songs About Context And Mood" href="http://speedbird.wordpress.com/2008/08/25/more-songs-about-context-and-mood/">More Songs About Context And Mood,</a> you suggest a direction for interaction design that you point out is not far from Yvonne Rogers&#8217; ideas in &#8220;Moving on from Weiser&#8221; about a switch in the goal of ubicomp from Weiser&#8217;s vision of calm living (&#8220;computers appearing when needed and disappearing when not&#8221;) to engaged living &#8211; ubicomp technologies designed not to do things for people but to help people engage more actively in the things that they do (ensembles, ecologies of resources).</p>
<p>You also suggest interaction designers should be:</p>
<blockquote><p><strong><em>&#8220;parsimonious about the interaction design challenges our organizations do take on, with an eye toward reducing the complications of context (and the attendant opportunities for default, misunderstanding, misfire, time-wasting, and humiliation) to some manageable minimum.&#8221;</em></strong></p></blockquote>
<p>As you have pointed out, &#8220;we don&#8217;t do &#8216;smart&#8217; very well yet.&#8221; But paradoxically smart grids, smart homes, smart products etc. are ubiquitously coming to market right now.</p>
<p>Yvonne Rogers suggests interaction designers should be:</p>
<blockquote><p><em>moving from a mindset that wants to make the environment smart and proactive to one that enables people, themselves, to be smarter and proactive in their everyday and working practices</em></p></blockquote>
<p>What areas might interaction designers most productively direct their attention towards?</p>
<p><strong>AG: You note that things called &#8220;smart homes&#8221; and &#8220;smart products&#8221; are coming onto the market, and that sure would seem to be the case. But as to whether or not these things are genuinely smart, we don&#8217;t have anything more to go on than the marketing department&#8217;s word. I think you can already see that I tend to take language very seriously, and I really don&#8217;t like usages like the &#8220;smart&#8221; here, or the &#8220;aware&#8221; in &#8220;context-aware.&#8221; They overpromise, and they cannot help but set us up for failure and disappointment.</strong></p>
<p><strong>You know what I&#8217;d really like to see interaction design wrestle with? I would love to see a rigorous, no-holds-barred examination of the complexities of the self and its performance in everyday life, and how these condition our use of public space (and personal media in public space). I would love to see the development of ostensibly &#8220;social&#8221; platforms informed by some kind of reckoning with issues like vulnerability, dishonesty, the fact of power dynamics. In other words, before we deign to go about &#8220;helping&#8221; people, wouldn&#8217;t it be lovely if we understood what they perceived themselves as needing help with, and why?</strong></p>
<p><strong>I&#8217;d also pay good money to see talented interaction designers turn their efforts toward tools for the support of deliberative democracy, for the navigation of complex multivariate decision spaces, and for conflict resolution.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/locativeasamood.jpg"><img class="alignnone size-full wp-image-3071" title="locativeasamood" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/locativeasamood.jpg" alt="locativeasamood" width="500" height="375" /></a></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/2521894341/" target="_blank">Locative is a mood</a> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS:</strong> I know you said this would take too long to explain, but I couldn&#8217;t help noticing that you seem to be, perhaps, skeptical about the role everyware can play in sustainable living. And yet it seems at the moment, in the hacker and business communities at least, that the role of everyware in reducing carbon footprint, energy management etc. is the great green hope?</p>
<p>Will everyware enable or hinder the fundamental changes at the level of culture and identity necessary to support the urgent global need &#8211; &#8220;to consume less and redefine prosperity&#8221;?</p>
<p><strong>AG: I&#8217;m not skeptical about the potential of ubiquitous systems to meter energy use, and maybe even incentivize some reduction in that use &#8211; not at all. I&#8217;m simply not convinced that anything we do will make any difference.</strong></p>
<p><strong>Look, I think we really, seriously screwed the pooch on this. We have fouled the nest so thoroughly and in so many ways that I would be absolutely shocked if humanity comes out the other end of this century with any level of organization above that of clans and villages. It&#8217;s not just carbon emissions and global warming, it&#8217;s depleted soil fertility, it&#8217;s synthetic estrogens bioaccumulating in the aquatic food chain, it&#8217;s our inability to stop using antibiotics in a way that gives rise to multi-drug-resistance in microbes.</strong></p>
<p><strong>Any one of these threats in isolation would pose a challenge to our ability to collectively identify and respond to it, as it&#8217;s clear anthropogenic global warming already does. Put all of these things together, assess the total threat they pose in the light of our societies&#8217; willingness and/or capacity to reckon with them, and I think any moderately knowledgeable and intellectually honest person has to conclude that it&#8217;s more or less &#8220;game over, man&#8221; &#8211; that sometime in the next sixty years or so a convergence of Extremely Bad Circumstances is going to put an effective end to our ability to conduct highly ordered and highly energy-intensive civilization on this planet, for something on the order of thousands of years to come.</strong></p>
<p><strong>So (sorry <em>again</em>, Bruce) I just don&#8217;t buy the idea that we&#8217;re going to consume our way to Ecotopia. Nor is any symbolic act of abjection on my part going to postpone the inevitable by so much as a second, nor would such a sacrifice do anything meaningful to improve anybody else&#8217;s outcomes. I&#8217;d rather live comfortably &#8211; hopefully not obscenely so &#8211; in the years we have remaining to us, use my skills as they are most valuable to people, and cherish each moment for what it uniquely offers.</strong></p>
<p><strong>Maybe some people would find that prospect morbid, or nihilistic, but I find it kind of inspiring. It becomes even more crucial that we not waste the little time we do have on broken systems, broken ways of doing things. The primary task for the designers of urban informatics under such circumstances is to design systems that underwrite autonomy, that allow people to make the best and wisest and most resonant use of whatever time they have left on the planet. And who knows? That effort may bear fruit in ways we have no way of anticipating at the moment. As it says in the Qur&#8217;an, gorgeously: &#8220;At the end of the world, plant a tree.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/biowall2.jpg"><img class="alignnone size-full wp-image-3008" title="biowall2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/biowall2.jpg" alt="biowall2" width="375" height="500" /></a></p>
<p><em><a href="http://www.flickr.com/search/?q=biowall&amp;w=14112399%40N00" target="_blank">Biowall!</a> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS: </strong>In <a href="http://speedbird.wordpress.com/2007/12/09/antisocial-networking/" target="_blank">your post &#8220;Antisocial Networking,&#8221;</a> you make some telling comments on the sorry state of social networking systems.</p>
<div style="margin-left: 40px;"><strong><em>&#8220;All social-networking systems, as currently designed, demonstrably create social awkwardnesses that did not, and could not, exist before. All social-networking systems constrain, by design and intention, any expression of the full band of human relationship types to a very few crude options &#8211; and those static! A wiser response to them would be to recognize that, in the words of the old movie, &#8216;the only way to win is not to play.&#8217;&#8221;</em></strong></div>
<p>But you do also state:</p>
<div style="margin-left: 40px;"><strong><em>&#8220;But it&#8217;s past time for me to acknowledge that while the discourse of social networking may at first blush seem marginal to my core concerns, it&#8217;s far more central to those concerns than I might wish.&#8221;</em></strong></div>
<p>Which of your concerns is social networking more central to than you might wish and why?</p>
<p><strong>AG: Well, you know I&#8217;m interested in social interaction, interpersonal behavior, and in how these things play out in networked environments. There&#8217;s virtually no way for me to avoid dealing with Facebook, as wretched as I think it is.</strong></p>
<p><strong>Facebook is pretty hegemonic, in that its reach and influence extend further than the universe of people who use it. I bump up against it constantly, in a few different ways. People send me links I can&#8217;t access, because I&#8217;m not on Facebook. People spend time and energy trying to convince me that I&#8217;m really missing out, because I&#8217;m not on Facebook. The last few months, there have even been a few people who feel justified in expressing some kind of exasperation &#8211; who are really pissed off&#8230;because they can&#8217;t find me on Facebook. It&#8217;s become the sovereign interface to any kind of life in public, and as a result a great many people don&#8217;t question its modes, tropes and metaphors.</strong></p>
<p><strong>So when it comes time to build some kind of situated interpersonal mediation framework, some kind of intervention in the fabric of the city, those are the tropes they reach for: accounts, profiles, friend counts, friendings and unfriendings, nudges and pokes. And as a member of a team tasked with the design of such systems, as a potential user of them, and certainly as someone exposed to the social rhetoric flowing downstream from their use, you bet these tropes become central to my concerns.</strong></p>
<p><strong>But what if we admitted that Facebook and the whole paradigm it&#8217;s built on are broken? What would things look like if we started from a more sensitive understanding of the interaction between self and others? Say, the understanding Erving Goffman was offering us as far back as the late 1950s? Then you&#8217;d understand the need for provisions like a &#8220;backstage,&#8221; a place to swap out one mask for another, the ability to present oneself differently to different communities and networks. That&#8217;s what I&#8217;m interested in exploring.</strong></p>
<p><strong>TS: </strong>Social networking systems in their current form are crude and express a very narrow bandwidth of human relationship. But already people are connecting everyware&#8217;s networked social acts to existing social networking systems. At the ITP winter show there was <a id="eo:2" title="kickbee" href="http://gizmodo.com/5109297/kickbee-now-the-world-can-know-what-your-fetus-is-up-to">kickbee</a> &#8211; networked fetal communication &#8211; and <a id="kwj6" title="tweetmobile" href="http://tweetmobile.com/">tweetmobile</a>, which used Twitter as an actuator for an ambient display, and green everyware (energy monitoring) is showing up in a number of forms on existing social networks. But rather than just hooking everyware up to these existing flawed social networking systems, does everyware require a reimagining of networked social interactions and social networking systems?</p>
<p><strong>AG: That&#8217;s a great question, and I think the answer is clearly &#8220;yes.&#8221; It&#8217;s one thing to confine the consequences of that brokenness to the Web, and entirely another to let it bleed out into the world.</strong></p>
<p><strong>Does that mean any such reimagining is <em>going</em> to happen, that people will somehow refrain from plugging real-world outputs into these terribly flawed frameworks? Not a chance in hell. It&#8217;s too late to put a fence on that particular cliff. But maybe there&#8217;s still time to park an ambulance in the valley below.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/earthssurface.jpg"><img class="alignnone size-full wp-image-3074" title="earthssurface" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/earthssurface.jpg" alt="earthssurface" width="375" height="500" /></a></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/2970558731/" target="_blank">&#8220;A graphic representation of a portion of the Earth&#8217;s surface, as seen from above&#8221;</a> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS: </strong>I saw you tweet that you met Usman Haque from <a href="http://www.pachube.com/" target="_blank">Pachube</a> recently. What do you find most interesting about Pachube and <a href="http://www.eeml.org/" target="_blank">EEML</a>? Will you design a project for Pachube to push the conversation further? Did Usman ask you to take a role in the future of Pachube? How does Pachube enable the vision of <em><a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank">The City Is Here For You To Use</a></em>? I could go on forever with questions, so please do tell!</p>
<p><strong>AG: OK, I should probably reiterate that my fundamental interest is in people, and in what they choose to make and do with technology, not the technology itself. For the last few years, I&#8217;ve particularly been trying to understand how people interact with each other and with the urban environments around them when those environments have been provisioned with the ability to gather, process and take action on data. And this is how I come about my interest in what Usman is up to with Pachube, because those &#8220;gather,&#8221; &#8220;process&#8221; and &#8220;take action upon&#8221; functions are generally accomplished by different systems, designed by different groups of people, at different times and to different ends. What Pachube aims to do is make the difficult and not-particularly-glamorous work of connecting these pieces a whole lot easier.</strong></p>
<p><strong>Think of it as a step toward enabling the ontome, this so-called Internet of Things we&#8217;ve been talking about, the same way basic building blocks like HTTP and HTML enabled the wildfire spread of the Web we&#8217;re familiar with. What Pachube offers is a way &#8211; a relatively straightforward and self-explanatory way &#8211; to plug any given compatible input into a similarly compatible output. So if you&#8217;ve got an air-quality sensor or a soil-pH sensor or a personal biometric monitor, you can plug it into Pachube, and someone else can grab the data those things generate and use it to drive a visualization, or the state of a physical system like a window, or whatever else they can imagine. It&#8217;s as close as anyone&#8217;s yet come to providing a plug-and-play backbone for the creation of responsive environments.</strong></p>
<p><strong>And I think it&#8217;s absolutely brilliant that it&#8217;s designed to work with Arduino and Processing, two lightweight, open-source frameworks that hobbyists and researchers (and even one or two more serious developers) around the world are already using to build things. (Arduino&#8217;s a kit of parts for doing basic physical computing &#8211; using data to drive lights, motors, and other actuators that have effect out here in the world &#8211; while Processing is a very accessible language for doing dynamic and interactive graphics for screen-based media.) Given both its openness and modularity, and its willingness to build on top of the very popular frameworks that already exist, I&#8217;m very excited to see what people make of and with Pachube.</strong></p>
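<p><em>To make the plumbing concrete: at its simplest, updating a Pachube feed is just an authenticated HTTP PUT. The sketch below shows the general shape of such a call in Python; the CSV endpoint and the X-PachubeApiKey header are as I recall them from the current API documentation, and the feed ID and key are placeholders, so treat this as illustrative rather than definitive.</em></p>
<pre><code>
# Illustrative sketch only: push one sensor reading to a (hypothetical)
# Pachube feed over HTTP. The endpoint shape, CSV body and API-key header
# follow the contemporary docs as remembered; feed 1234 and the key are fake.
import urllib.request

FEED_URL = "http://www.pachube.com/api/feeds/1234.csv"  # placeholder feed ID
API_KEY = "YOUR_PACHUBE_API_KEY"                        # placeholder key

def push_reading(value):
    """PUT a single comma-separated datapoint; return the HTTP status code."""
    req = urllib.request.Request(
        FEED_URL,
        data=str(value).encode("ascii"),
        headers={"X-PachubeApiKey": API_KEY},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    # e.g. a temperature reading arriving from an Arduino over serial
    print(push_reading(21.5))
</code></pre>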
<p><strong>I have to be honest and admit that personally, I couldn&#8217;t really care less about the environmental angle, for reasons that I went into at embarrassing length above. What I&#8217;m engaged by in Usman&#8217;s work is the idea that Pachube is helping to create an open platform for people to share data more readily. And while, no, he hasn&#8217;t explicitly asked me to take any particular stake in things, I&#8217;m always happy to lend a hand in whatever way would be most useful. I think it&#8217;s a project worth supporting.</strong></p>
<p><strong>As to how Pachube enables some of the ideas in </strong><em><strong>The City Is Here</strong></em><strong>, the answer has to do with the book&#8217;s call for every &#8220;public object&#8221; &#8211; every lamppost, bus shelter, commercial façade, and so forth &#8211; to support an open API. Something&#8217;s got to string all those objects together, present them to people as resources to be taken up and used, and Usman&#8217;s offered us a critical first step in that direction.</strong></p>
<p><strong>TS:</strong> Usman suggested it might be interesting to ask you about &#8220;the tension between &#8216;could&#8217; and &#8216;should.&#8217;&#8221;</p>
<p><strong>Usman Haque: </strong>There are a whole bunch of things that we &#8220;can&#8221; do, technologically speaking; how do we decide what we &#8220;should&#8221; do, as we find ourselves in an age where we can build almost anything we can imagine &#8211; particularly with reference to the technology/privacy/security triumvirate? E.g., leaving aside that the majority of the world is *not* in the technology &#8216;paradise&#8217; that we&#8217;re in here in the west, only a small fraction of people are currently producing the technology that the rest of us use. One aim is to get people more engaged in the productive process, but, in a sense, that will also mean the whole wide ecosystem of technology will be even bigger, both &#8220;good&#8221; stuff and &#8220;bad&#8221; (that qualification firmly placed on how it&#8217;s used), as opposed to now, when we can focus on quite specific things that government &amp; industry are doing and say &#8220;that shouldn&#8217;t be happening&#8230;&#8221;. Part of this relates to something Adam said in the comments on his blog (see <a href="http://speedbird.wordpress.com/2007/12/02/urban-computing-pamphlet-is-go/" target="_blank">here</a>).</p>
<p><strong>AG: I think the first part of answering that question has to involve figuring out who &#8220;we&#8221; are in any given situation. A &#8220;we&#8221; composed of seven Helsinki-based Linux developers would most likely arrive at very different answers than the United States Air Force Materiel Command or Samsung&#8217;s board of directors, right? So clearly, a first challenge is getting to some kind of pragmatically useful alignment between those local and occasionally even painfully parochial perspectives with what&#8217;s best for the Big We. And this challenge is only going to become more vexing as the ability to imagine, design, build and deploy informatic componentry gets more and more widely distributed. In this respect the spread of simple, modular, low-barrier-to-entry tools only makes things worse!</strong></p>
<p><strong>The primary issue that I can see here is that the inherent clock speed of technical development is so very much faster than that of any meaningful deliberative process &#8220;we&#8221; might bring to bear on it. A concomitant concern is that the sources of technical innovation and production are now so widely distributed that you can be reasonably certain that somebody, somewhere will implement any given technically feasible idea, no matter how offensive, poorly thought-out, socially disruptive or frankly stupid. A public toilet you have to SMS to unlock and use? A &#8220;Friend Finder&#8221; visualization with high locational precision and no privacy features whatsoever? A first-person rape-simulation &#8220;game&#8221;? A clunky brown iPod knockoff? Somebody thought each one of these things was worth the time, expense and effort to actually go about making it. They exist.</strong></p>
<p><strong>But I&#8217;m pretty old-fashioned in some ways, in that I think the good old Habermasian idea of the public sphere still has some life left in it. And I think it should be self-evident by now that there&#8217;s no necessary contradiction between even the newest (cough) &#8220;social media&#8221; and the formation of such a sphere. So you&#8217;ve provided a forum, and in it I get to express my belief that these things are stupid and pointless and probably should not have been built. And if somebody gets all het up about that, they can argue right back at me in comments. And eventually one or another of these positions begins to tell, in terms of regulation, legislation, and other tools of the juridical order, in terms of protest campaigns or organized boycotts or litigation&#8230;in terms of nonexistent sales!</strong></p>
<p><strong>There&#8217;s nothing new in any of this, of course, though indubitably some of the dynamics are amplified or accelerated by e-mail, Twitter and YouTube. My main contention is that informatic technology now has such deeply pervasive implications, and for things like presentation of self that previous waves of technical development barely touched, that &#8220;we&#8221; as societies need to be very much more conscious of the consequences before committing to any one course of action.</strong></p>
<p><strong>I should also point out that I do not, at all, believe that we&#8217;re &#8220;in an age where we can build almost anything we can imagine,&#8221; though I might buy &#8220;&#8230;<em>two or three of</em> almost anything we can imagine.&#8221; On the contrary, as I implied above, I think the global constraints on our ability to operate freely are already becoming quite evident, and will continue to grow teeth over the next few decades.</strong></p>
<p><strong>TS: </strong>Also Usman added&#8230;</p>
<p><strong>Usman Haque:</strong> &#8230;where Adam said: <em>in this regard, I very much *do* have a problem with &#8220;just showing up&#8221;</em> &#8211; something I feel as well. But I always wonder: what happens when one appears to be mandating participation&#8230;?</p>
<p><strong>AG: Look, I happen to have a strong &#8211; maybe some would say obnoxious or hyperactive or overdeveloped &#8211; sense of personal responsibility and accountability. I think one is basically committed to some measure of responsibility for the commonweal simply by surviving to the age of majority. The choice of how, particularly, to discharge that responsibility can only be yours and yours alone, but it can&#8217;t be ducked or gotten around without severe and entirely predictable consequences. So to Usman I&#8217;d respectfully suggest that I&#8217;m not the one mandating participation. Life is.</strong></p>
<p><strong>TS:</strong> It seems we have grown accustomed to striking a Faustian bargain on the internet today &#8211; in order to share and distribute parts of our identity, we are expected to give up key information to one site to store and disperse our data. I took part in <a href="http://www.ugotrade.com/2007/12/21/a-conversation-with-eben-moglen-on-second-life/" target="_blank">a discussion with David Levine, IBM and Eben Moglen on privacy</a> last year. And Eben Moglen gave a succinct description of the elements of privacy and how they have been treated in the American Constitution that is, I think, relevant to unpacking some of the challenges of ubiquitous computing. Here are some extracts from that conversation, where Eben notes:</p>
<blockquote><p><em>there are three elements that are mixed up in privacy and we tend not to notice which one we are talking about at any given moment.</em></p>
<p><em>There is secrecy &#8211; that is, the data should not be readable by or understandable by anybody except me or people I designate. There is anonymity, which is that the data can be seen by anybody, but whom it is about should be knowable only by me or people that I designate. And there is autonomy, which isn&#8217;t about either secrecy or anonymity but which is about my right to live under circumstances which reinforce my sense that I am in control of my own fate. And this form of privacy is actually the one we talk about in the constitutional structure when we talk about the right to get an abortion or use birth control.</em></p></blockquote>
<p>&#8220;Anonymity&#8221; is a condition that is a deep structuring characteristic of the internet, as you, Lessig and others have commented on. And frequently we are promised (questionably) &#8220;secrecy&#8221; or anonymity as privacy protection by services handling our data on the internet. But Eben (one of the US&#8217;s great constitutional lawyers) points out that &#8220;autonomy&#8221; is a key form of privacy in the US constitutional structure that is often compromised in situations where our digital selves may constrain our non-digital selves.</p>
<blockquote><p><em>The real issue here is about the forcing of choices on us&#8230;digital aspects of identity can quickly acquire an inflexibility that constrains our non-digital selves.</em></p>
<p><em>I see again and again the ways in which people now find themselves unable to make certain life choices easily because their digital self has acquired an inflexibility that constrains their non-digital self.</em></p></blockquote>
<p>As we go beyond the end-to-end internet and we lose the structuring characteristic that has privileged anonymity: how do you see these three elements of privacy &#8211; anonymity, secrecy and, most importantly, autonomy &#8211; being worked out in a networked world beyond the end-to-end internet?</p>
<p>Are there any new structuring characteristics that could privilege autonomy? (which Eben indicates is linked to having a flexible identity).</p>
<p><strong>AG: If we accept for the moment a definition of autonomy as a feeling of being master of one&#8217;s own fate, then absolutely yes. One thing I talk about a good deal is using ambient situational awareness to lower decision costs &#8211; that is, to lower the information costs associated with arriving at a choice presented to you, and at the same time mitigate the opportunity costs of having committed yourself to a course of action. When given some kind of real-time overview of all of the options available to you in a given time, place and context &#8211; and especially if that comes wrapped up in some kind of visualization that makes anomaly detection and edge-case analysis instantaneous gestalts, to be grasped in a single glance &#8211; your personal autonomy is tremendously enhanced. <em>Tremendously</em> enhanced.</strong></p>
<p><strong>But as to how this local autonomy could be deployed in Moglen&#8217;s more general terms, I don&#8217;t know, and I&#8217;m not sure anyone does. Because he&#8217;s absolutely right: Bernard Stiegler reminds us that the network constitutes a <em>global mnemotechnics</em>, a persistent memory store for planet Earth, and yet we&#8217;ve structured our systems of jurisprudence and our life practices and even our psyches around the idea that information about us eventually expires and leaves the world. Its failure to do so in the context of Facebook and Flickr and Twitter is clearly one of the ways in which the elaboration of our digital selves constrains our real-world behavior. Let just one picture of you grabbing a cardboard cutout&#8217;s breast or taking a bong hit leak onto the network, and see how the career options available to you shift in response.</strong></p>
<p><strong>This is what&#8217;s behind Anne Galloway&#8217;s calls for a &#8220;forgetting machine.&#8221; An everyware that did that &#8211; that massively spoofed our traces in the world, that threw up enormous clouds of winnow and chaff to give us plausible deniability about our whereabouts and so on &#8211; might give us a fighting chance.</strong></p>
<p><strong>TS: </strong>The concept of autonomy is signaled clearly in the title you have chosen for your next book, <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank"><em>The City Is Here For You To Use</em>,</a> and is a theme of all your writing! While you talk about many of the possible constraints to presentation of self and potential threats to a flexible identity that ubicomp poses, your next book signals optimism. What are your key grounds for optimism?</p>
<p><strong>AG: It&#8217;s not optimism so much as hope. Whether it&#8217;s well-founded or not is not for me to decide. I guess I just trust people to make reasonably good choices, when they&#8217;re both aware of the stakes and have been presented with sound, accurate decision-support material.</strong></p>
<p><strong>Putting a fine point on it: I believe that most people don&#8217;t actually want to be dicks. We may have differing conceptions of the good, our choices may impinge on one another&#8217;s autonomy. But I think most of us, if confronted with the humanity of the Other and offered the ability to do so, would want to find some arrangement that lets everyone find some satisfaction in the world. And in its ability to assist us in signalling our needs and desires, in its potential to mediate the mutual fulfillment of same, in its promise to reduce the fear people face when confronted with the immediate necessity to make a decision on radically imperfect information, a properly-designed networked informatics could underwrite the most transformative expansions of people&#8217;s ability to determine the circumstances of their own lives.</strong></p>
<p><strong>Now that&#8217;s epochal. If that isn&#8217;t cause for hope, then I don&#8217;t know what is.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/obamannook1.jpg"><img class="alignnone size-full wp-image-3076" title="obamannook1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/obamannook1.jpg" alt="obamannook1" width="375" height="500" /></a></strong></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/3246420459/" target="_blank">Newson Obamanook</a> &#8211; photo by Adam Greenfield, &#8220;The fact that it was one of the happiest days of my adult life may have colored my appreciation of this space. A bit, anyway.&#8221;</em></p>
<p><strong>TS:</strong> In your writing you seem to imply that we will not find answers to our new relationship with Everyware by transposing the internet onto things for convenience&#8217;s sake, but rather that, like the bike messengers, we must explore the rich and complex terrain of the city that is ours to use in a give-and-take relationship. Through our own exertions we find how &#8220;anything reasonably smooth and approximately horizontal can become a thoroughfare,&#8221; rather than be served up the city as something for us to consume.</p>
<p>You seem to be suggesting our city becomes ours to use because of the way we use it in our personal journeys &#8211; like the way &#8220;the messenger subconsciously maps the contours of an economic geography &#8211; known sources and sinks of courier assignments, or &#8216;tags&#8217; &#8211; and a threat landscape, this latter comprised of blind corners, cable-car and metro tracks, and traffic lanes.&#8221;</p>
<p>But bike messengers are the lone rangers of our big cities. Others surf the city in tribes that ride the roiling tides of highly networked information together. How are the &#8220;natural&#8221; gestures of these tribes &#8211; e.g. day traders, who are yoked to the tracings of a hive mind &#8211; part of the city that is here for us to use? I thought the comment <a href="http://twitter.com/ginsudo" target="_blank">@ginsudo</a> made shortly after joining Twitter and setting up TweetDeck particularly poignant:</p>
<blockquote><p><em>&#8220;watching Tweetdeck is like watching stock market of your personality ebb and flow. needs analytics to maximize inherent self-involvement.&#8221;</em></p></blockquote>
<p>For many of us, our work has more in common with the day trader than the bike messenger, and we are pretty hooked on the ever-growing possibilities for &#8220;contact&#8221; and identity sharing/construction that social media has produced (with all the &#8220;Here Comes Everybody&#8221; benefits and risks, per C. Shirky). Early theorizing of a &#8220;calm,&#8221; &#8220;invisible&#8221; ubicomp seems out of synch with the excitable, active, engaged, contact-driven &#8220;users&#8221; who are watching the stock market of their personality (or personal brand) ebb and flow.</p>
<p>How will these excitable/exciting processes of contact and identity sharing, which have captured a pretty large segment of the popular imagination (not confined to the West &#8211; services like <a id="f9mb" title="Gupshup" href="http://www.smsgupshup.com/">Gupshup</a> do much of the same curating, linking and distributing of identity in SMS that web-based social media does), be &#8211; or not be &#8211; part of <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank">The City Is Here For You To Use</a>?</p>
<p><strong>AG: Let&#8217;s remember that ubicomp itself, as a discipline, has largely moved on from the Weiserian discourse of &#8220;calm technology&#8221;; Yvonne Rogers, for example, now speaks of &#8220;proactive systems for proactive people.&#8221; You can look at this as a necessary accommodation with the reality principle, which it is, or as kind of a shame &#8211; which it also happens to be, at least in my opinion. Either way, though, I don&#8217;t think anybody can credibly argue any longer that just because informatic systems pervade our lives, designers will be compelled to craft encalming interfaces to them. That notion of Mark Weiser&#8217;s was never particularly convincing, and as far as I&#8217;m concerned it&#8217;s been thoroughly refuted by the unfolding actuality of post-PC informatics.</strong></p>
<p><strong>All the available evidence, on the contrary, supports the idea that we will have to actively fight for moments of calm and reflection, as individuals and as collectivities. And not only that, as it happens, but for spaces in which we&#8217;re able to engage with the Other on neutral turf, as it were, since the logic of &#8220;social media&#8221; seems to be producing <em>Big Sort</em>-like effects and echo chambers. We already &#8220;maximize inherent self-involvement,&#8221; analytics or no, and the result is that the tools allowing us to become involved with anything but the self, or selves that strongly resemble it, are atrophying.</strong></p>
<p><strong>So when people complain about K-Mart and Starbucks and American Eagle Outfitters coming to Manhattan, and how it means the suburbanization of the city, I have to laugh. Because the real suburbanization is the smoothening-out of our social interaction until it only encompasses the congenial. A gated community where everyone looks and acts the same? <em>That&#8217;s</em> the suburbs, wherever and however it instantiates, and I don&#8217;t care how precious and edgy your tastes may be. Richard Sennett argued that what makes urbanity is precisely the quality of necessary, daily, cheek-by-jowl confrontation with a panoply of the different, and as far as I can tell he&#8217;s spot on.</strong></p>
<p><strong>We have to devise platforms that accommodate and yet buffer that confrontation. We have to create the safe(r) spaces that allow us to negotiate that difference. The alternative to doing so is creating a world of ten million autistic, utterly atomic and mutually incomprehensible tribelets, each reinforced in the illusion of its own impeccable correctness: duller than dull, except at the flashpoints between. And those become murderous. Nope. Unacceptable outcome.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/uncannyvalleys.jpg"><img class="alignnone size-full wp-image-3075" title="uncannyvalleys" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/uncannyvalleys.jpg" alt="uncannyvalleys" width="500" height="369" /></a></strong></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/3119708407/" target="_blank">Uncanny Valleys</a> &#8211; Adam comments, &#8220;Our apartment in NYC as rendered in Google Earth, with realtime traffic, weather, daylight and shadow as well as geodetic, street grid and service overlays. Camera view is South; that&#8217;s First Avenue just left of center-screen.&#8221;</em></p>
<p><strong>TS:</strong> Smartphones are now drawing everyware data into the system, and the net is reaching into who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.</p>
<p><a id="u:ys" title="Nathan Freitas" href="http://openideals.com/">Nathan Freitas</a> says Android &#8220;seems to be the platform most likely to socialize the idea that sensor data could be a piece of every application.&#8221; (Android has APIs for a wide range of sensor data.)</p>
<p>What, in your view, will be the platform most likely to socialize the idea that sensor data could be a piece of every application &#8211; Android, or something else?</p>
<p><strong>AG: An open platform. A platform with lots of hooks and ways to plug things into it, a strong developer community, a shallow learning curve and/or an easy-to-use, high-level development environment.</strong></p>
<p><strong>I don&#8217;t have a dog in this race, mind you. I couldn&#8217;t care less who gets there first.</strong></p>
<p><strong>TS: </strong>New location-based services, e.g. <a id="kvue" title="Xtify" href="http://xtify.com/featured">Xtify</a> and <a id="fajp" title="ViaPlace" href="http://www.viaplace.com/">ViaPlace</a>, are offering us ways to share location data across lots of different applications (e.g. Xtify and a dating application like <a id="yixz" title="MeetMoi" href="http://www.meetmoi.com/welcome">MeetMoi</a>). In return for services that allow us to share information, we must give key information up to one site to store and disperse (although there are many differences in approach to our data, from the Twitter stance &#8211; &#8220;show but don&#8217;t own&#8221; &#8211; to Facebook&#8217;s &#8211; &#8220;in order to show we must have rights to it&#8221;). But the basic model of Twitter &#8211; to provide a white-noise platform for people to build services on top of &#8211; seems to be being transposed to location-based services. Obvious questions arise, like what happens to our data in a startup like MeetMoi if they go belly up? Apparently in the dot-com bust, data was the first thing to go on the auction block in bankruptcy cases.</p>
<p>Also, I suppose it is hardly surprising (if disappointing to me) that some of the early location-based services are trying to get mindshare by picking up on the glue celebrities give to mass culture. At the last New York Tech Meetup, <a href="http://m.twitter.com/omgicu" target="_blank">OMGICU</a> demoed a rather terrifying new pre-launch location-based &#8220;participatory celebrity gossip application&#8221; which seems to combine all the worst features of social media with celebrity stalking, plus a narrative to change the notion of celebrity itself by &#8220;turning D listers into A listers.&#8221;</p>
<p>Hopefully location-based applications will not get stuck on &#8220;stalker, stalker, stalker&#8221; apps like OMGICU.</p>
<p>David Oliver, <a id="qgz3" title="Oliver Coady" href="http://olivercoady.com/">Oliver Coady</a>, gave me a good question: &#8220;How does timeliness and location-independence change our ideas of social media?&#8221;</p>
<p>And how can we design new architectures that can reinforce the sense that I am in control of my own fate?</p>
<p><strong>AG: But we&#8217;ve already come so far in terms of turning D-listers into A-listers! On a daily basis, I&#8217;m exposed to almost as many cues insisting I attend to nonentities and dullards like Robert Scoble as those insisting I attend to nonentities like Madonna or Thomas Friedman. It&#8217;s gotten ridiculous.</strong></p>
<p><strong>Now, how does timeliness and location change our ideas of social media? It makes them dangerous!</strong></p>
<p><strong>Look, even a proud Z-lister like myself &#8211; I&#8217;m a public person only in the most debased and degraded meaning of that word &#8211; I&#8217;ve had experiences that shook me up, like having someone approach me while I was quietly hanging out in the back of St. Mark&#8217;s Books, and wanting to strike up a conversation based on some talk they&#8217;d seen me give a year or so previously. Now part of learning to deal with this kind of thing is shrugging it off, being grateful and flattered that someone thinks you&#8217;re interesting enough to single out for that kind of attention, or chalking this up to Sennett&#8217;s observation about the constitution of urbanity. Or doing all three at once.</strong></p>
<p><strong>But let&#8217;s remember that at the end of the day, a &#8220;social network&#8221; is nothing but a group of arbitrarily distributed human beings joined by a communications channel, and those people have eyes and ears. The degree to which they recognize some shared interest gives them significance filters. If social capital accrues to those in the network who are able to claim some connection with a &#8220;celebrity,&#8221; no matter how fleeting, then such connections are going to be mobilized, made explicit. And now say the network has been provided with the tools allowing it to plot the appearances of those putative celebrities in space and time, and what do you get? You get a circumstance in which it is very, very difficult to maintain any membrane between the private self and the world, for anyone who&#8217;s even remotely a public figure, whether they particularly want to be a public figure or not. You get network effects that amplify those locational traces, and further undermine any possibility of anonymity, even anonymity-by-suspension-of-interrogative-awareness (which is a clumsy way of referring to that blasé matter-of-factness around famous people that most big-city folks eventually develop).</strong></p>
<p><strong>Am I letting myself off the hook? Not in the slightest. I passed Terence Stamp on the street not so long ago, and you bet I Twittered it. My only excuse was that I Twittered it to a closed loop of no more than a few dozen people. But then, who knows what those few dozen people will turn around and do with that fact, on the open networks to which they in turn belong? And that, too, is my responsibility.</strong></p>
<p><strong>I&#8217;m not sure there&#8217;s anything to be done about any of this but cultivate our own urbanity, learn to say &#8220;so what&#8221; when we happen to find ourselves next to Philip Seymour Hoffman in the line at Whole Foods.</strong></p>
<p><strong>TS: </strong>Zittrain, in <a href="http://futureoftheinternet.org/" target="_blank">The Future of the Internet: And How To Stop It</a>, foregrounds &#8220;generativity&#8221; and generative devices (as opposed to appliances) as the most fortuitous starting point for &#8220;tools to bring about social systems to match the power of the technical one.&#8221;</p>
<p>Are appliances a threat to the city that is here for you to use? How can generativity ensure <em><a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank">The City Is Here For You To Use</a></em>, as Zittrain argues it has ensured, even if imperfectly, that the internet has been here for us to use?</p>
<p><strong>AG: You know, I haven&#8217;t read the book, I&#8217;ve only heard him give the talk, so it&#8217;s certainly possible there&#8217;s a subtlety to the argument that I&#8217;m missing. But I&#8217;m not sure Jonathan isn&#8217;t simply wrong about this notion of generativity. Not that the concern is misplaced, but that he&#8217;s insufficiently trustful in human agency. Is a car &#8220;generative,&#8221; by his definition? Certainly not. And yet look at all the cultural production that goes on around &#8220;the car,&#8221; look at all the assemblages people make with cars, from Beach Boys songs to <a href="http://en.wikipedia.org/wiki/Ghost-riding">ghost riding the whip</a>, from J.G. Ballard novels and <em>Herbie the Love Bug</em> to <em>Tokyo Drift</em>.</strong></p>
<p><strong>Or probably more to his point: look at the Japanese mobile-phone market &#8211; seemingly one of the most locked-down and unpropitious circumstances imaginable for the production of culture, in technical terms and Zittrain&#8217;s both. And yet fully 50% of the bestselling books in Japan last year were written on mobile phones. Not <em>read</em>, which would already be impressive enough (if &#8220;impressive&#8221; is indeed the word): <em><a href="http://www.nytimes.com/2008/01/20/world/asia/20japan.html">written</a></em>. What does that imply for his argument?</strong></p>
<p><strong>So, yes, I think there are grounds for concern, in that we shouldn&#8217;t allow technologies and frameworks to appear that unduly limit the scope of human creativity. Code is still law. But I also think people are quite amply able to reach into what would appear to be the least propitious technologies and tell their own stories with same.</strong></p>
<p><strong>TS: </strong>One aspect of Everyware that seems in need of some visionary yoga is how we will relate to pixels anywhere.</p>
<p>In <em><a href="http://www.lulu.com/content/1554599">Urban Computing and its Discontents</a></em> you mention how our technological trajectories often make it seem as if we get fixated on particular scenes in movies, e.g. <em>Minority Report</em>. You point out that so many ambient informatics projects seem simply &#8220;to expand the reach of signage and advertising in dense urban spaces&#8230;as if we&#8217;ve become transfixed by the scene from <em>Minority Report</em> where heterosexual cop John Anderton is on the run from his colleagues.&#8221;</p>
<p>Ideas from <em>Minority Report</em> continue to hold sway in designs, as we saw in the recent MIT demo of <a href="http://ambient.media.mit.edu/projects.php?action=details&amp;id=68" target="_blank">SixthSense</a> at TED.</p>
<p>But visions of augmented reality were also pretty high profile in this year&#8217;s Super Bowl commercials (including a highly anthropomorphic imagining of ubicomp that was a kind of WoW mashup with a Pixar movie).</p>
<p>What recent movies/commercials have produced scenes most likely to be new fixation fodder for ubicomp, and why?</p>
<p><strong>AG: I don&#8217;t think I&#8217;m qualified to answer that, actually. We don&#8217;t have a TV, so I don&#8217;t see much in the way of commercials, and most of the films I wind up seeing are the kind that play at Anthology Film Archives. What I can say is that science fiction is currently suffering in toto from an inability or disinclination to posit future scenarios that are any weirder or more visionary than those emerging from other sectors of the culture. And that would be fine, except sf has traditionally been the place where we wrestled with the imaginary.</strong></p>
<p><strong>We need that set of tools, badly. If for no other reason than something I glean from personal experience: essentially my entire professional career has simply been the leveraging of ideas and concepts I originally wrestled with in the encounter with William Gibson and Bruce Sterling when I was 16. Today&#8217;s visionary sf means tomorrow&#8217;s halfway-competent generalist.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/nurrikim.jpg"><img class="alignnone size-full wp-image-3030" title="nurrikim" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/nurrikim.jpg" alt="nurrikim" width="375" height="500" /></a></strong></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/531862201/" target="_blank">Nurri Kim in the waiting zone</a> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS: </strong>My AR friend <a href="http://curiousraven.squarespace.com/about-me/">Robert Rice</a>, who is <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">working on a markerless AR platform</a>, notes that data visualization is one of the critical elements of AR in terms of &#8220;make or break.&#8221; Robert says, &#8220;even with the ultimate in ubiquitous data from everything, without good data vis it will all be useless.&#8221;</p>
<p>Also something Cory Doctorow said to me last year has really stuck in my mind. When I asked him what happens when Cyberspace everts, he talked about a reverse surveillance society:</p>
<div style="margin-left: 40px;"><em>&#8220;Surveillance is all about when people in authority know a lot about you. Instrumentation is when you know a lot about the world,&#8221;</em></div>
<blockquote><p><em>Cory: Well, this is like Spook Country, the new Gibson novel &#8211; what happens when cyberspace everts &#8211; hmmm? I&#8217;m not sure I have anything very pithy to say on that EXCEPT&#8230;</em><br />
<em>Apart from all the traditional kind of overlay reality stuff, if there is one thing I am actually interested in seeing migrate from a virtual world to the real world, it&#8217;s instrumentation.</em><br />
<em>I think a lot of things that are characteristic of very successful internet-based businesses is that they are extremely finely instrumented &#8211; so, like, Amazon knows in aggregate on a second-by-second basis how their site is being used by people, and they can twiddle the dials in real time.</em></p>
<p><em>As users of the world we have very little access to that kind of instrumentation. We don&#8217;t even know how the tube is running. The tube knows how the tube is running and we kind of don&#8217;t. I would be really interested in seeing that. You&#8217;ve seen <a href="http://joi.ito.com/">Joi Ito&#8217;s</a> WoW interface, right? Have you seen it&#8230;</em></p></blockquote>
<p>Joi Ito&#8217;s WoW interface seems a long way from the calm, invisible imaginings of the early ubicomp visionaries?</p>
<p><strong>AG: Well, he&#8217;s got a particular kind of neural wiring. And there&#8217;s not a thing that&#8217;s wrong with that, except that I&#8217;d never, ever want to assert that what&#8217;s appropriate for Joi Ito necessarily is or should be understood to be appropriate for anybody else. The point of calling for open systems and frameworks is to allow us maximum scope of diversity in the ways we choose to interface with the world&#8217;s richness and complexity.</strong></p>
<p><strong>TS: </strong>What new imaginings/possibilities do you see when pixels anywhere are linked to everyware?</p>
<p><strong>AG: Product placement. Commercial insertions and injections, mostly.</strong></p>
<p><strong>Beyond that: one of the places where Mark Weiser&#8217;s logic breaks down is in thinking that the platforms we use now disappear from the world just because ubiquitous computing&#8217;s arrived. We&#8217;ve still got radio, for example &#8211; OK, now it&#8217;s satellite radio and streaming Internet feeds, but the interaction metaphor isn&#8217;t any different. By the same token, we&#8217;re still going to be using reasonably conventional-looking laptops and desktop keyboard/display combos for awhile yet. The form factor is pretty well optimized for the delivery of a certain class of services, it&#8217;s a convenient and well-assimilated interaction vocabulary, and none of that&#8217;s going away just yet. And the same goes for billboards and &#8220;TV&#8221; screens.</strong></p>
<p><strong>But all of those things become entirely different propositions in an everyware world: more open, more modular, ever more conceived of as network resources with particular input and output affordances. We already see some signs of this with Microsoft&#8217;s recent &#8220;Social Desktop&#8221; prototype &#8211; which, mind you, is a very bad idea as it currently stands, especially as implemented on something with the kind of security record that Windows enjoys &#8211; and we&#8217;ll be seeing many more.</strong></p>
<p><strong>If every display in the world has an IP address and a self-descriptor indicating what kind of protocols it&#8217;s capable of handling, then you begin to get into some really interesting and thorny territory. The first things to go away, off the top of my head, are screens for a certain class of mobile device &#8211; why power a screen off your battery when you can push the data to a nearby display that&#8217;s much bigger, much brighter, much more social? &#8211; and conventional projectors.</strong></p>
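<p><em>As a thought experiment, the sketch below renders that idea in a few lines of Python. Everything in it &#8211; the descriptor fields, the protocol names, the addresses &#8211; is invented for illustration; no real discovery protocol is being described here.</em></p>
<pre><code>
# Hypothetical sketch of "self-describing displays": each display advertises
# a small descriptor, and a nearby device filters for one it can drive.
# All field names, protocol strings and addresses are invented for illustration.
from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

@dataclass
class DisplayDescriptor:
    address: str                # network address the display answers on
    width_px: int               # advertised resolution
    height_px: int
    protocols: Tuple[str, ...]  # content protocols it claims to accept
    public: bool                # shared urban surface, or someone's own screen?

def pick_display(nearby: Sequence[DisplayDescriptor],
                 needed: str, min_width: int) -> Optional[DisplayDescriptor]:
    """Return the first public display that speaks our protocol and is big enough."""
    for d in nearby:
        if d.public and needed in d.protocols and d.width_px >= min_width:
            return d
    return None  # nothing suitable: fall back to the device's own screen

nearby = [
    DisplayDescriptor("10.0.0.23", 320, 240, ("mjpeg-push",), public=False),
    DisplayDescriptor("10.0.0.17", 1920, 1080, ("html5-url", "mjpeg-push"), public=True),
]
print(pick_display(nearby, needed="html5-url", min_width=1280))
</code></pre>
<p><em>What a sketch like this leaves out, of course, is exactly the hard part Adam raises next: contention over who gets to drive a shared surface.</em></p>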
<p><strong>Then we get into some very interesting issues around large, public interactive displays &#8211; who &#8220;drives&#8221; the display, and so forth. But here again, we&#8217;ll have to fight to keep these things sane. It&#8217;s past time for a public debate around these issues, because they&#8217;re unquestionably going to condition the everyday experience of walking down the street in most of our cities. And that&#8217;s difficult to do when times are hard and people have more pressing concerns on their mind.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/citywarecrash.jpg"><img class="alignnone size-full wp-image-3045" title="citywarecrash" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/citywarecrash.jpg" alt="citywarecrash" width="500" height="375" /></a><br />
</strong></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/2786991056/" target="_blank">Citywarecrash</a> &#8211; photo by Adam Greenfield, &#8220;An occupational hazard for urban screens.&#8221;</em></p>
<p><strong>TS: </strong>I know in <em>Everyware</em> you mentioned that architects play an important visionary role in imagining ubicomp, and I know you work closely with your wife, artist <a href="http://www.nurri.com/">Nurri Kim</a>. Robert Rice asked me the following question, which I will in turn ask you: &#8220;In terms of augmented reality do you think virtual worlds and virtual reality experts/leaders are good pioneers for thought and guidance on AR? Or should we look for new leaders, or where are new leaders emerging? Is the tech similar enough for the old crowd to be useful, or is it different enough to be a disadvantage coming from the old models?&#8221;</p>
<p><strong>AG: I should make it clear that I have absolutely no interest in virtual worlds or virtual reality. The so-called virtual worlds I've experienced seem sad and really rather tatty &#8211; eversions of the most predictable adolescent fantasies of unlimited power, reinscriptions of all the usual politics &#8211; and completely lacking in just about everything that makes life resonant, meaningful and awe-inspiring. And anyway, to paraphrase J.G. Ballard, ordinary, everyday life is now far more vividly and fantastically weird than anything you'll see in Second Life. I mean, Garry Kasparov was heckled by a radio-control dildocopter, Joe the Plumber's off to Gaza as a war correspondent, a sea of dust-covered BMWs waits in the long-term parking lot at Dubai International for owners who are never, ever coming back.</strong></p>
<p><strong>Look to virtual worlds for insight into the hard work of negotiating the actual, with its physics, its entropy, its suffering, with all its constraints? Oh my goodness gracious, no.<br />
And look to leaders? Never. Leaders are for followers, and who wants to be that? I don't mean you can't take inspiration and insight from the work of others &#8211; not at all &#8211; but use your own imagination, take some personal risk, do your own damn work.</strong></p>
<p><strong>Now, having said that: this opposition of virtual and physical worlds strikes me as increasingly a false one, as it does many people. The hard-and-fast distinction between &#8220;the real world&#8221; and virtual environments makes less and less sense, as righteously satisfying as making it can sometimes seem. There may be attributes of this physical environment that are impossible to see or make use of without access to the networked overlay, and those attributes may in time come to constitute the primary wellsprings of a given place's meaning. And if you're offering me some insight that I think could be of utility in resolving the challenge of making this overlay accessible to all, equally, I'll gladly accept it, no matter what domain or disciplinary background you claim as your own.</strong></p>
<p><strong>Am I aware of any such insight coming out of virtual worlds? No. As Bryan Boyer notes, &#8220;If you want to start talking about some serious cross-disciplinary pollination then you better take both sides of that disciplinary divide seriously. When your <em>ubi-</em> runs into my building with its boring HVAC, mundane load paths, typical finished floors, plain old foundations, etc., the transformative powers of <em>comp</em> are bracketed pretty seriously by the realities of the physical world.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/thecloudgate.jpg"><img class="alignnone size-full wp-image-3064" title="thecloudgate" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/thecloudgate.jpg" alt="thecloudgate" width="500" height="375" /></a><br />
<a href="http://flickr.com/photos/studies_and_observations/1904838102/" target="_blank"><em>The Cloud Gate has landed</em></a><em> &#8211; photo by Adam Greenfield, &#8220;Tell me this doesn&#8217;t look *just* like the descriptions of &#8220;stasis fields&#8221; in 70s SF. In fact, the picture looks practically CGId to me.&#8221;</em></p>
<p><strong>TS:</strong> Some people thought the whole world would have been plastered with RFID by now. But before that has happened, markerless AR seems to be in our sights.</p>
<p>If I understand it correctly, marker versus markerless AR has quite different implications for how the cyberspace of ubicomp evolves? I asked Robert Rice (he is developing a markerless AR platform) to explain some of the differences. He said:</p>
<div style="margin-left: 40px;"><em>markers are discrete physical objects; at worst, they are passive images that are linked to some sort of static data in a database somewhere (like a 3D object). If you destroy them, that's it. With markerless stuff, everything is persistent, dynamic, already linked in cyberspace. Marker-based stuff requires a secondary infrastructure of hardware for telecommunications</em></div>
<p>Robert also pointed out to me that markerless AR may prove even more problematic for privacy:</p>
<div style="margin-left: 40px;"><em>Markers are easy to see, so you know where they are. RFIDs can't really be seen, but they can be detected. With markerless AR, there is nothing obvious to the naked eye; you don't know if someone has active AR going on or not, so you could be tracked and not know it. Not much more than today, with CCTVs all over the place; so it is the same [a surveillance issue] as marker-based, but more subtle or inobvious.</em></div>
<p>Do you have any thoughts about the different roles that markerless versus marker technologies will play in AR and ubicomp?</p>
<p><strong>AG: I need to admit that I've never until this moment heard the phrase &#8220;markerless AR,&#8221; although I'd think it's more or less self-explanatory to anyone who's been following this stuff. Let me make the distinction explicit, shall I, for anyone who hasn't been? And you or Robert can correct me if I've gotten it wrong.</strong></p>
<p><strong>Augmented reality means that I have some mediating artifact that provides me with a visual overlay on the world. This could be a phone, it could be a windshield, it could be a pair of glasses or contact lenses, doesn't matter. And you're going to use that overlay to superimpose some order of information about the world and the objects in it onto the things that enter my field of vision &#8211; onto what I see. So far, so good: that's AR 101.</strong></p>
<p><strong>Now where does that information come from?</strong></p>
<p><strong>What you're calling marker-based AR implies that there's some reasonably strong relationship between the information superimposed over a given object, and the object itself. That object is an onto, a spime, it's been provided with a passive RFID tag or an active transmitter. And it's radiating information about itself that I'm grabbing, perhaps cross-referencing against other sources of information, and superimposing over the field of vision. Fine and dandy.</strong></p>
<p><strong>But there's another way of achieving the same end, right? Instead of looking at a suit jacket on a rack and having its onboard tag tell you directly that it's a Helmut Lang, style number such-and-such from men's Spring/Summer collection 2011, Size 42 Regular in Color Gunmetal, produced at Joint Venture Factory #4 in Cholon City, Vietnam, and packed for shipment on September 3, 2010, you're going to run some kind of pattern-matching query on it. And without the necessity of that object being tagged physically in any way, you're going to have access to information about it. But this set of information isn't, necessarily, what the object itself, or its creators or merchandisers, want you to know about it; it could be derived from online discussion fora or review sites, or blog posts, or whatever. All there needs to be is a lookup table, essentially, that tells you where to find information about any object in the field of vision whose identity can be established.</strong></p>
<p><strong>Do I have that right? And if I do, then as I understand it, the distinction is primarily a pragmatic one: it's just easier to get to an augmented world, by far, if we don't actually have to go to all the trouble of tagging everything in the world with its own dedicated RF transponder. Easier, and cheaper, and quicker, and more environmentally sound besides, because the relevant traffic is in bits not atoms.</strong></p>
<p><strong>Unless I've missed something, you don't, then, get the distinction between classes of objects and instances of same. Sometimes, when there's a 1:1 correlation between the two, that's not going to matter: I'm walking down the street in Madrid, and my glasses or whatever can easily recognize that this building is the Caixa Forum. There's only one of it, and I can get a positive ID via pattern recognition. But for some edge cases &#8211; twins and lookalikes, mostly &#8211; the same thing is generally true of people.</strong></p>
<p><strong>But other times it will matter. Is <em>this specific watch</em> a real, $10,000 Panerai or a $50 Kowloon fakery? How does <em>this</em> black 1998 Honda Civic over here differ from this other one in terms of its use and maintenance history? Does <em>this</em> O-ring gasket need to be replaced? I don't see how you extract data from specific instances of things without the necessary sensor instrumentation, transmitter, etc., being coextensive with the object in question or very closely colocated with it over time &#8211; in the terminology you're using, a &#8220;marker.&#8221;</strong></p>
<p><strong>So using these terms, I'd say that &#8220;markerless&#8221; AR comes first, is relatively easy to deploy, and generates not-insignificant value. But &#8211; again, unless I'm missing something &#8211; there are some things that it won't ever be able to do, and for those things you need some provision for self-identification and self-location.</strong></p>
<p><strong>Ultimately I think it&#8217;s a distinction without a difference, from the user&#8217;s point of view. People will care much more about the source of whatever information shows up on their overlay than the precise technical means used to get it there.</strong></p>
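<p>To make the lookup-table reading concrete, here's a minimal sketch. The <code>recognize()</code> stub stands in for a real pattern-matching service, and the table, names and URLs are invented for illustration:</p>
<pre><code># Illustrative only: the "lookup table" route to markerless AR.
# A recognized identity is mapped to information sources that the
# object's maker does not control.
INFO_SOURCES = {
    "caixa-forum-madrid": [
        "https://example.org/wiki/Caixa_Forum",
        "https://example.org/reviews/caixa-forum",
    ],
}

def recognize(image_bytes):
    """Placeholder for a visual pattern-matching query."""
    raise NotImplementedError  # assume a vision service provides this

def annotate(image_bytes):
    identity = recognize(image_bytes)      # class-level ID, not instance-level
    return INFO_SOURCES.get(identity, [])  # whatever the network knows about it
</code></pre>
<p>Note that, as Adam says above, this route only ever establishes the class of an object; telling one black 1998 Honda Civic from another still needs a marker of some kind.</p>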
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smileuroncctv.jpg"><img class="alignnone size-full wp-image-3042" title="smileuroncctv" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smileuroncctv.jpg" alt="smileuroncctv" width="394" height="500" /></a><br />
</strong></p>
<p><a href="http://flickr.com/photos/studies_and_observations/3274544108/" target="_blank"><em>The surrender to cynicism</em></a><em> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS:</strong> Much early thinking around ubicomp seems to have come from visionary architects and engineers, but recently I was at the <a href="http://www.toccon.com/toc2009" target="_blank">O&#8217;Reilly Tools of Change for Publishing Conference</a> (publishing in the Digital Age) and I met several book futurists. It struck me how ubicomp, seen from the perspective of the book, raises some interesting questions about how particular material cultures will shape, and be shaped by, ubicomp differently.</p>
<p><span class="status-body"><span class="entry-content">I noted, Google seemed well down the path to holy grail â€œconverting images to original intent XML.â€</span></span> And <a id="ricl" title="Peter Brantley" href="http://radar.oreilly.com/peter/">Peter Brantley</a> talked about machine parsed <span class="nfakPe">books</span>.</p>
<p>At TOC there were many suggestions about how books might manifest as everyware. (Although many people did seem to feel that books have a special relationship to time and history, and will not soon vanish as one of the great metaphors of calm and solitary enjoyment in our culture.) Books as everyware will, it seems, include, amongst other things:</p>
<p><span class="nfakPe">books</span> that read <span class="nfakPe">books</span></p>
<p><span class="nfakPe">books</span> that read context</p>
<p>context that reads <span class="nfakPe">books</span></p>
<p><span class="nfakPe">books</span> that read me</p>
<p><span class="nfakPe">books</span> linked to mobility &#8211; timeliness and location independence</p>
<p><span class="nfakPe">books</span> that are not <span class="nfakPe">books</span></p>
<p><span class="nfakPe">books</span> becoming babble</p>
<p><span class="nfakPe">books</span> bubbling up from the babble</p>
<p>There is an Institute for the Future of the Book. Will all former material cultures require their own institutes of the future to guide their cultures into everyware? Do you think books' transition into everyware is especially significant, and why?</p>
<p><strong>AG: But all objects have a relationship to time and history, no?</strong></p>
<p><strong>TS: </strong>Yes! What I meant to convey really was the idea, expressed by many people at TOC, that books have a privileged relationship to knowledge in our culture that is valuable and related to some aspects of their current form, and that books as everyware, e.g. machine-parsed books and more socially generated forms, would not replace that entirely.</p>
<p><strong>AG: Gotcha. Well, I certainly agree that books constitute an interesting category unto themselves &#8211; I've held onto my physical books, and in fact still spend a fortune buying new ones, where I stopped buying music on discs a long, long time ago. But I don't think this state of affairs can or should obtain forever.</strong></p>
<p><strong>Lately there's been a good amount of thought around the notion of &#8220;<a href="http://theunbook.com/about/">unbooks</a>,&#8221; which I regard as a container for long-form ideas appropriate to an internetworked age. By building on some of the tropes of software development, mostly having to do with version control, open-endedness and an explicit role for the &#8220;user&#8221; community, unbooks can usefully harness the dynamic and responsive nature of discourse on the Web. At the same time, you preserve the things books are really good at: coherence, authorial voice and intent.</strong></p>
<p><strong>The important part is in acknowledging two points which have usually been understood as contradictory, but which are actually nothing of the sort: firstly, that the expression of ideas in written form has something to learn from the practices that have evolved around the collaborative creation of dynamic, digital documents over the half-century-long history of software; and secondly, that certain ideas require elaboration in the reasonably strongly-bounded form we know as a &#8220;book,&#8221; and cannot meaningfully be shared otherwise. A third point, concomitant to the second, is that despite recent technical advances, screen-based media still cannot, and may not ever fully be able to, deliver the extratextual cues and phenomenological traces that support, inform and extend the meaning of written documents.</strong></p>
<p><strong>The unbook lets you have your cake and eat it too. So, for example, when we publish <em>The City Is Here</em>, one of its manifestations will be a static, physical document &#8211; and hopefully, if we do our jobs well, a very nice one indeed. But even before that, you'll be able to download a Creative Commons-licensed PDF of every numbered version of the manuscript, from zero onward. Bottom line: you buy the book if, and only if, you want the object. The ideas are free.</strong></p>
<p><strong>TS: </strong><a id="ed35" title="David Brin" href="http://www.davidbrin.com/tschp1.html">David Brin</a> sees two futures: 1) the government watches everybody, and 2) everybody watches everybody (the latter he calls &#8220;sousveillance&#8221;). My friend <a id="suag" title="Ben Goertzel" href="http://www.goertzel.org/">Ben Goertzel</a> says &#8220;hooking AI up to a massive datastore fed by ubicomp is the first step toward sousveillance?&#8221; What do you think the role of AI in ubicomp will be? Is it worth thinking about what the first important &#8220;AI meets AR&#8221; app is?</p>
<p><strong>AG: I don't believe that artificial intelligence as the term is generally understood &#8211; which is to say, a self-aware, general-purpose intelligence of human capacity or greater &#8211; is likely to appear within my lifetime, or for a comfortably long time thereafter.</strong></p>
<p><strong>Having said that, your friend Ben seems to be making the titanic (and enormously difficult to justify) assumption that a self-aware artificial intelligence would share any perspectives, goals, priorities or values whatsoever with the human species, let alone with that fraction of the human species that could use a little help in countering watchfulness from above. &#8220;Hooking [an] AI up to a massive datastore fed by ubicomp&#8221; sounds to me more like the first step toward enslavement&#8230; if not outright digestion.</strong></p>
<p><em><strong>Sousveillance</strong></em><strong> &#8211; the term is Steve Mann's, originally &#8211; doesn't imply &#8220;everybody watching everybody&#8221; to me, anyway, so much as a consciously political act of turning infrastructures of observation and control back on those specific institutions most used to employing same toward their own prerogatives. Think Rodney King, think Oscar Grant.</strong></p>
<p><strong>TS: </strong>I have one last question from Usman Haque.</p>
<p><strong>Usman Haque:</strong> insofar as a lot of what adam describes as desirable could be said to constitute pretty radical socio-political change (or perhaps&#8230; &#8220;adjustment&#8221;) i would be really interested to know how his current work @ nokia is or isn't able to gel with the themes of his writing. in some senses there's quite an undercurrent strongly challenging corporate practices, in other senses it could be seen as gentle nudges. how does adam see it? and how about the nokia behemoth? does he have success nudging nokia towards the kind of world he would like to see (i imagine the answer is &#8216;yes&#8217; otherwise he wouldn't be doing it&#8230;) but i'd love to know more about the limits/challenges.</p>
<p><strong>AG: I am told that Henry Kissinger, on his first trip to China in 1971, asked Zhou Enlai whether he thought the French Revolution had or had not advanced the cause of human freedom.<br />
Zhou thought for a moment, pursed his lips, and replied, &#8220;It is too soon to tell.&#8221;</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/feed/</wfw:commentRss>
		<slash:comments>19</slash:comments>
		</item>
		<item>
		<title>People Meet People Meet Big Data: ScienceSim Explores Collaborative High Performance Computing</title>
		<link>http://www.ugotrade.com/2009/02/11/people-meet-people-meet-big-data-sciencesim-explores-collaborative-high-performance-computing/</link>
		<comments>http://www.ugotrade.com/2009/02/11/people-meet-people-meet-big-data-sciencesim-explores-collaborative-high-performance-computing/#comments</comments>
		<pubDate>Wed, 11 Feb 2009 22:40:02 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[Intel in Virtual Worlds]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[nanotechnology]]></category>
		<category><![CDATA[Open Grid]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open protocols for virtual worlds]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Open Source Virtual Worlds]]></category>
		<category><![CDATA[open standards for virtual worlds]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[science outreach in virtual worlds]]></category>
		<category><![CDATA[scientific simulation in virtual worlds]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[virtual worlds in Japan]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[big data]]></category>
		<category><![CDATA[collaboration and big data]]></category>
		<category><![CDATA[collaborative visualization]]></category>
		<category><![CDATA[haptic interfaces for virtual worlds]]></category>
		<category><![CDATA[Hypergrid]]></category>
		<category><![CDATA[linked data]]></category>
		<category><![CDATA[modelling complex systems]]></category>
		<category><![CDATA[n-body simulation]]></category>
		<category><![CDATA[Piet Hut]]></category>
		<category><![CDATA[rapid data movement in virtual worlds]]></category>
		<category><![CDATA[ScienceSim]]></category>
		<category><![CDATA[scientific simulation]]></category>
		<category><![CDATA[steering big data simulations from virtual worlds]]></category>
		<category><![CDATA[steering virtual worlds with brain waves]]></category>
		<category><![CDATA[super computing conference]]></category>
		<category><![CDATA[supercomputing]]></category>
		<category><![CDATA[Wilf Pinfold]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=2855</guid>
		<description><![CDATA[Wilfred Pinfold, Director, Extreme Scale Programs for Intel, and the Supercomputing Conference general chair, is working with some Intel colleagues to make a project called ScienceSim the centerpiece of a special workshop event at the SC09 conference (see Supercomputing Conference, an ACM and IEEE Computer society sponsored event). Recently, I interviewed Wilf Pinfold (see interview [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gwave_lg.jpg"><img class="alignnone size-full wp-image-2861" title="gwave_lg" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gwave_lg.jpg" alt="gwave_lg" width="540" height="540" /></a></p>
<p>Wilfred Pinfold, Director, Extreme Scale Programs for Intel, and the <em><a href="http://sc08.supercomputing.org/">Supercomputing Conference</a></em> general chair, is working with some Intel colleagues to make a project called <a href="http://www.sciencesim.com/">ScienceSim</a> the centerpiece of a special workshop event at the SC09 conference (<em>see the <a href="http://sc08.supercomputing.org/">Supercomputing Conference</a>, an ACM and IEEE Computer Society sponsored event</em>).</p>
<p>Recently, I interviewed Wilf Pinfold (see interview below), Mic Bowman (also <a href="../../2008/09/15/interview-with-mic-bowman-intel-the-future-of-virtual-worlds/">see my previous interview here</a>), and John A. Hengeveld (see interview below). I wanted to find out: What are the underlying goals of this SC conference program? Why are members of the SC community being encouraged to participate in the ScienceSim environment? What projects are beginning to emerge? And what are Intel&#8217;s goals in giving infrastructure support to further the conversation between high performance computing and collaborative virtual worlds?</p>
<p>The vision of creating new ways to collaborate and interact with big data does seem to be one of the more significant steps we can take at a time when we find many of our most complex systems roiling and threatening total collapse. As Tim O&#8217;Reilly has pointed out, from financial markets to the climate, the complex systems we depend on for our survival seem to be reaching their limits.</p>
<p>But how can we get from the place we are now &#8211; <a href="http://www.youtube.com/watch?gl=GB&amp;hl=en-GB&amp;v=gM4fmL6dLdY" target="_blank">see this example of an n-body simulation in OpenSim</a> &#8211; to the point where we can collaboratively steer big data simulations of climate change, financial markets, or the depths of the universe from our visualizations? The picture opening this post is a:</p>
<blockquote><p><em>Frame from a 3D simulation of gravitational waves produced by merging black holes, representing the largest astrophysical calculation ever performed on a NASA supercomputer. The honeycomb structures are the contours of the strong gravitational field near the black holes. Credit: C. Henze, NASA</em></p></blockquote>
<p>Wilf Pinfold explained to me that part of the reason to begin a dialogue on collaborative visualization at SC &#8217;09 is that supercomputing communities (which tend to be highly skilled and visionary) have played key roles in internet development in the past. Wilf pointed out that key browser technology developed out of these communities in the early days of the internet &#8211; see <a href="http://en.wikipedia.org/wiki/Mosaic_(web_browser)" target="_blank">this Wikipedia entry</a> for background on the role of NCSA (the National Center for Supercomputing Applications).</p>
<p>The hope is that, while there are many obstacles to overcome, the supercomputing community has both the skills and the motivation to find solutions for creating collaborative environments capable of the kind of rapid data movement that scientific/big data visualization needs. Solving the problems of realtime collaborative interaction with big data will have many ramifications for the way we understand virtual reality, the metaverse, and virtual worlds (all these terms are becoming increasingly inadequate for cyberspace in the age of ubiquitous computing, an argument I will make in another post!).</p>
<p>There have already been a number of blogs on ScienceSim (see <a href="http://www.virtualworldsnews.com/2008/11/intel-creating-sciencesim-on-opensim.html" target="_blank">Virtual World News</a>, <a href="http://nwn.blogs.com/nwn/2009/02/intel-outside-.html" target="_blank">New World Notes</a>, <a href="http://www.vintfalken.com/intel-using-opensim-for-immersive-science-project/" target="_blank">Vint Falken</a>, and <a href="http://daneel-ariantho.blogspot.com/2009/02/sciencesim.html" target="_blank">Daneel Ariantho</a>). There have also been Intel blogs &#8211; <a href="http://blogs.intel.com/research/2009/01/sciencesim.php" target="_blank">see this post</a> by John A. Hengeveld (a senior business strategist working with Intel planners and researchers to accelerate the adoption of Immersive Connected Experiences), and Intel CTO <a href="http://blogs.intel.com/research/2008/11/immersive_science.php" target="_blank">Justin Rattner&#8217;s post</a> announcing the project in November.</p>
<p>But to blow my own horn a little, I think I was the first to blog the encounter between <a href="http://opensimulator.org/">OpenSim</a> and supercomputing (an encounter I to some degree provoked by making the introductions) &#8211; <a href="http://www.ugotrade.com/2008/07/19/astrophysics-in-virtual-worlds-implementing-n-body-simulations-in-opensim/" target="_blank">see this post</a>. So I have been following the ScienceSim initiative with great interest.</p>
<p>Very shortly after the N-body astrophysicists Piet Hut and Jun Makino &#8211; creators of GRAPE (an acronym for &#8220;gravity pipeline&#8221; and an intended pun on the Apple line of computers), a supercomputer that will <a href="http://grape.mtk.nao.ac.jp/grape/news/ABC/ABC-cuttingedge000602.html" target="_blank">become one of the fastest supercomputers in the world (again)</a> &#8211; met <a href="http://www.genkii.com/" target="_blank">Genkii</a>, a Tokyo-based strategic company working with OpenSim, the first N-body simulation appeared in OpenSim. And in a matter of weeks <a href="http://www.youtube.com/watch?v=gM4fmL6dLdY" target="_blank">this video went up on YouTube</a> &#8211; the result of a collaboration between MICA and Genkii. But the nirvana of being able to create visualizations using realtime data from supercomputers, steered from a collaborative environment, is still a ways off.</p>
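<p>For readers wondering what an N-body code actually does, here is a toy direct-summation version of the calculation that GRAPE hardware accelerates &#8211; just a sketch, with illustrative units and softening, not MICA's actual code:</p>
<pre><code># Toy direct-summation N-body step (leapfrog integrator).
import numpy as np

G = 1.0  # gravitational constant in code units

def accelerations(pos, mass, eps=1e-3):
    """Pairwise gravitational accelerations with Plummer softening."""
    d = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]  # d[i, j] = pos[j] - pos[i]
    r2 = (d ** 2).sum(-1) + eps ** 2
    np.fill_diagonal(r2, np.inf)  # no self-force
    return G * (d * (mass[np.newaxis, :, None] / r2[..., None] ** 1.5)).sum(axis=1)

def leapfrog(pos, vel, mass, dt, steps):
    """Advance the system; every step is O(n^2), which is why GRAPE exists."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel
</code></pre>
<p>The visualization challenge is then moving those positions, for very many bodies and on every frame, into a shared world fast enough to watch and steer.</p>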
<p>Supercomputing communities tend to be geographically very dispersed, and researchers often find themselves far from simulation facilities, so there are both the motivation and the skills to pioneer new tools for collaborative visualization. I know that astrophysicists certainly see their value (Piet Hut has some profound ideas on this). Astrophysicist Piet Hut and others (<a href="http://www.ugotrade.com/2008/07/19/astrophysics-in-virtual-worlds-implementing-n-body-simulations-in-opensim/" target="_blank">see here for more</a>) have been pioneering the use of VWs for collaboration. There are two virtual world organizations, both founded by Piet Hut and collaborators, that are currently exploring the use of OpenSim for scientific visualizations. One is specifically aimed at astrophysics, MICA, the <a href="http://www.mica-vw.org/" target="_blank">Meta Institute for Computational Astrophysics</a>, and the other is aimed broadly at interdisciplinary collaborations in and beyond science, <a href="http://www.kira.org/" target="_blank">Kira</a>, a 12-year-old organization focused on &#8216;science in context&#8217;. As of last week, there are two weekly workshops sponsored jointly by Kira and MICA that explore the use of OpenSim, ScienceSim, and other virtual worlds. One of them is <a href="http://www.kira.org/index.php?option=com_content&amp;task=view&amp;id=124&amp;Itemid=154" target="_blank">&#8220;Stellar Dynamics in a Virtual Universe Workshop&#8221;</a> and the other is <a href="http://www.kira.org/index.php?option=com_content&amp;task=view&amp;id=119&amp;Itemid=149" target="_blank">&#8220;ReLaM: Relocatable Laboratories in the Metaverse.&#8221;</a></p>
<p>MICA was founded two years ago by Piet Hut within the virtual world of <a href="http://qwaq.com" target="_blank">Qwaq Forums</a> (see the paper <a href="http://arxiv.org/abs/0712.1655" target="_blank">&#8220;Virtual Laboratories and Virtual Worlds&#8221;</a>). The Kira Institute is much older: it was founded in 1997. Later this month, on February 24, Kira will celebrate its 12th anniversary with a presentation of talks, a panel discussion, and a series of workshops. See the <a href="http://www.kira.org/index.php?option=com_content&amp;task=view&amp;id=83&amp;Itemid=113" target="_blank">Kira Calendar</a> for the main event, and the Kira Japan branch for a <a href="http://www.kirajapan.org/event/" target="_blank">special mixed RL/SL</a> event in Tokyo. During both events, Junichi Ushiba will give a talk about his research in which <a href="http://nwn.blogs.com/nwn/2007/10/the-second-life.html" target="_blank">he let paralyzed patients steer avatars using only brain waves</a>.</p>
<p>Other early adopters of ScienceSim include Tom Murphy, who teaches computer science at Contra Costa College. Prior to teaching, Tom spent 35+ years working for supercomputer manufacturers. Tom said:</p>
<blockquote><p>it is very natural for me to find significantly new ways to visualize and interact with scientific mathematical models via ScienceSim and the OpenSim software behind it. ScienceSim also allows us to interact with each other and teach students in new ways.</p></blockquote>
<p>Also, Charlie Peck, chair of the SC09 Education Program (his day job is teaching computer science at Earlham College in Richmond, IN), is working with Wilf Pinfold, Tom Murphy and others &#8220;to explore how 3D Internet/metaverse technology can be used to support science education and outreach.&#8221;</p>
<p><a href="http://www.ics.uci.edu/~lopes/" target="_blank">Cristina Videira Lopes</a>, University of Irvine, is doing very interesting workÂ  on road and pedestrian traffic simulations. Crista is also the creator of <a href="http://opensimulator.org/wiki/Hypergrid" target="_blank">hypergrid in OpenSim</a>,</p>
<h3>People Meet People Meet Data: A Conversation With Mic Bowman</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/sciencesim_002_thumb1.png"><img class="alignnone size-full wp-image-2908" title="sciencesim_002_thumb1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/sciencesim_002_thumb1.png" alt="sciencesim_002_thumb1" width="404" height="239" /></a><em></em><br />
<em>Screenshot of ScienceSim from <a href="http://daneel-ariantho.blogspot.com/2009/02/sciencesim.html" target="_blank">Daneel Ariantho</a></em></p>
<p><strong>Tish:</strong> How does this work on ScienceSim fit into a wider dialogue on linked data? Where people meet people meet data, and where data meets data?</p>
<p><em><strong>Mic:</strong> Yeah&#8230; that's hard, by the way. Open integration of data (and more interestingly the functions on data) is very hard if it comes from multiple, independent sources.</em></p>
<p><em>That's the people part. For example, if Crista builds a model of the UCI campus, somebody else builds an accurate model of several cars, and another expert provides the simulation that computes the pollution generated by those cars in that environment&#8230; it's bringing people together to solve real problems, no matter how far apart physically.</em></p>
<p><strong>Tish:</strong> You mention three different simulations here. Could you explain why it is difficult to integrate data from multiple sources?</p>
<p><em><strong>Mic:</strong> Integrating data from multiple sources has always been a problem of understanding &amp; interpreting both the syntax &amp; semantics of the data. Even relatively simple things like multiple date formats require explicit translation. More complex formats, like the many formats in which data is represented for urban planning, are barely computable independently, let alone in conjunction with data from other sources (each with its own representation for data). It's often the expertise &amp; the collaboration of bringing people (and their bag of tools) together that solves these problems.</em></p>
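<p>Even Mic's &#8220;simple&#8221; example hides real work. A sketch of what explicit translation means for dates alone (the formats listed are just examples):</p>
<pre><code># Coercing date strings from independent sources into one form.
from datetime import datetime

FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%b %d, %Y"]  # formats we expect to meet

def normalize_date(text):
    """Translate a known source format to ISO 8601, or fail loudly."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(text, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError("unrecognized date format: " + repr(text))

assert normalize_date("Feb 11, 2009") == "2009-02-11"
assert normalize_date("11/02/2009") == "2009-02-11"  # ambiguous with US month-first order!
</code></pre>
<p>And dates are the easy case; urban planning formats carry semantics that no table of format strings can capture, which is Mic's point about needing the people and their tools, not just the data.</p>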
<p><strong>Tish:</strong> And in this case the bag of tools is high performance modeling&#8230;?</p>
<p><em><strong>Mic:</strong> High performance modeling, rich visualizations and data. It's the three that matter&#8230; data, function, and interface.</em></p>
<p><strong>Tish:</strong> Some people have a very hard time wrapping their head around the fact that anything that seems related to Second Life can do this. Can you explain more about the difference between SL and OpenSim?</p>
<p><em><strong>Mic:</strong> OpenSim potentially improves data &amp; function because it can be extended through region modules. Region modules hook directly into the simulator to provide additional functionality. For example, a region module could be implemented to drive the behavior of objects in a virtual world based on a protein folding model.</em></p>
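<p>OpenSim region modules are actually written in C# against the simulator's own interfaces; purely as a conceptual sketch, the shape of the idea Mic describes looks something like this (all names invented):</p>
<pre><code># Conceptual pseudocode only -- not the OpenSim C# module API.
class ProteinFoldingModule:
    """Drives in-world objects from an external science model."""

    def __init__(self, scene, model):
        self.scene = scene  # the region's object graph
        self.model = model  # external simulation feeding positions

    def on_tick(self, dt):
        self.model.advance(dt)  # step the folding model forward
        for obj_id, xyz in self.model.positions():
            self.scene.move_object(obj_id, xyz)  # mirror it in-world
</code></pre>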
<p><em>We need to work on additional viewer capabilities to address the user interface limitations.</em></p>
<p><strong>Tish:</strong> Yes, Rob Smart's (IBM) recent data integrations with OpenSim (<a href="http://robsmart.co.uk/2009/01/22/visualizing-live-shipping-data-in-opensim-isle-of-wight-ferries/" target="_blank">see here</a>) are impressive. Regarding viewers, one of the biggest objections to virtual worlds is the mouse-pushing, PC-tied interface.</p>
<p><em><strong>Mic:</strong> There are great opportunities for improving the interface.</em></p>
<p><strong>Tish:</strong> Yes, I really like where Andy Piper's experiments with haptic interfaces for OpenSim lead &#8211; <a href="http://andypiper.wordpress.com/2009/02/06/haptic-user-interfaces/" target="_blank">see Haptic Fantastic</a>! And I think that we will have cyberspace ubiquitous in our environment, not just stuck on a PC screen, sooner than we think.</p>
<p><em><strong>Mic:</strong> Mic's opinion (not Intel's): until we get souped-up sunglasses with HD screens embedded (or writing directly into the eye), there will always be a role for the PC/console/TV. But it isn't about the device&#8230; it's about the services projected through the device&#8230; sometimes you'll want a very rich experience&#8230; sometimes you'll want an experience NOW, wherever you are.</em></p>
<p><strong>Tish:</strong> I think people are only just realizing that VWs will be a &#8220;now, and wherever you are&#8221; experience very soon.</p>
<p><em><strong>Mic:</strong> That's the critical observation: the virtual world is not an application you run&#8230; it's a &#8220;place&#8221;&#8230; and you interact with it where you are, or maybe interact through it. Speaking for Intel&#8230; it is the spectrum of experiences that is critical to support.</em></p>
<h3>Interview with Wilfred Pinfold</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gustav_h.jpg"><img class="alignnone size-full wp-image-2860" title="gustav_h" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gustav_h.jpg" alt="gustav_h" width="416" height="200" /></a></p>
<p><em>Picture from National Science Foundation &#8211; <a href="http://www.nsf.gov/news/news_summ.jsp?cntn_id=112166" target="_blank">&#8220;Climate Computer Modeling Heats Up.&#8221;</a></em></p>
<p><strong>Tish Shute:</strong> I know your day job for Intel is in high performance computing. Could you explain to me a little bit more about what you are working on in this regard &#8211; a mini state of play for high performance computing from your perspective?</p>
<p><em><strong>Wilfred Pinfold:</strong> My title is Director, Extreme Scale Programs. This program drives a research agenda that will put in place the technologies required to make an exa (10^18) scale system by 2015. The current generation of high performance computers are peta (10^15) scale, so this is a 1000x increase in performance, and this increase will require significant improvements in power efficiency, reliability, and scalability, and new techniques for dealing with locality and parallelism.</em></p>
<p><strong>Tish:</strong> The nirvana in terms of linking supercomputers to the collaborative spaces of immersive virtual worlds is to be able to create visualizations using realtime data from supercomputers in collaborative VW environments, and ultimately for researchers to be able to collaborate and steer their simulations from their visualizations. Where are we at now in terms of scientific data visualization in VWs? And what are the current obstacles to using realtime data from supercomputers?</p>
<p><em><strong>Wilf: </strong>Being able to steer a simulation from a visualization requires both a visualization interface that allows interaction and a simulation that operates at a speed that is responsive in interactive timeframes. For example, a weather model that predicts the path of a hurricane would need to operate at something close to 1000x real time. This would run through a day in ~1.5 minutes, allowing an operator to run the simulation over several days multiple times with different parameters in a single sitting, to understand the likelihood of certain outcomes.</em></p>
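<p>Wilf's arithmetic checks out, for anyone following along:</p>
<pre><code># One simulated day at 1000x real time, in wall-clock terms.
wall_seconds = 24 * 3600 / 1000   # 86.4 seconds
print(wall_seconds / 60)          # ~1.4 minutes, matching Wilf's "~1.5 minutes"
</code></pre>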
<p><strong>Tish:</strong> Do you see a networked online collaborative virtual world becoming a visualization interface that allows meaningful interaction with the hurricane scenario you describe in the near future (next 6 to 18 months)?</p>
<p><em><strong>Wilf: </strong>I was using the hurricane example to explain the usage model, not an imminent capability. Hurricane simulation: accurate hurricane simulations require multiscale models able to resolve the global forces working on the storm as well as the microforces that define precipitation. We can build useful weather models today that run faster than real time (anything slower is not useful for prediction), but we are a long way from the ideal.<br />
Visualization: there are excellent visualizations of weather systems, but I have not yet seen a virtual world that can track a simulation and allow the scientist or team of scientists to see what is going on at the macro scale and also zoom in to see precipitation conditions. Today&#8217;s supercomputers are much better at this than they were a few years ago, but they are a long way from ideal.</em></p>
<p><strong>Tish:</strong> Open source virtual world technologies are pretty diverse in their approaches; Croquet, Sun&#8217;s Wonderland and OpenSim are quite different and have different strengths and weaknesses. As you have become more familiar with OpenSim, what have you found about the technology that particularly lends itself to this project, ScienceSim? (Mic mentioned Crista&#8217;s hypergrid code, for example; modularity is another feature often cited.)</p>
<p><em><strong>Wilf: </strong>We have found OpenSim&#8217;s client-server model is well suited to the visualization model, and the ability to put the server next to the supercomputer producing the visualization data is critical. We are, however, very interested in other environments and encourage papers, demonstrations and research on any of these platforms at the conference.</em></p>
<h3>Interview with John A. Hengeveld</h3>
<p><strong>Tish Shute:</strong> OpenSim's dependence on Second Life based viewers is sometimes cited as a limitation, and sometimes as a strength. What are your views on this? What would a strong open viewer project directed at science applications bring to the picture?</p>
<p><em><strong>John Hengeveld:</strong> There may be more than one strong open viewer project required for OpenSim-compatible experiences. The strength of the Hippo viewer, for example, is availability, and its weakness is the size of the client. We would love a ubiquitous client that runs on all platforms, but each hardware platform brings tradeoffs and restrictions of its own. Today, probably all of the folks innovating in the space can deal with the size of a very fat rich client app&#8230; they have big computers anyway. But as we get into more 3D entertainment and augmented reality applications&#8230; virtual malls, collaboration apps, etc.&#8230; there is a great deal of room to optimize for the specific experience. Balancing visual experience with the bandwidth and compute performance available, tying into standard browsers, etc.&#8230; people have done some of this work, and I think all of it adds to the usefulness of these worlds.</em></p>
<p><strong>Tish:</strong> Integrating high-end game engines and OpenSim opens up new possibilities, but licensing issues have been an obstacle. Could a project like ScienceSim get a non-commercial license on a high-end game engine? What would that bring to the picture?</p>
<p><em><strong>John: </strong>Anything is possible. Game engines can give a great deal of design power for high value experiences, but the programming of these experiences must be simplified. Mainstream adoption in the enterprise can't be premised on the programming model of studio games&#8230; that's a big step to get over, I think. There are very interesting possibilities when we take that step, though. Simulation, training, agents of various types (I just finished watching &#8220;The Matrix&#8221; for like the billionth time&#8230; I think agents are cool&#8230;)</em></p>
<p><strong>Tish:</strong> Where does Larrabee fit into the picture of ScienceSim and next generation virtual worlds?</p>
<p><em><strong>John:</strong> We are all very excited about the Larrabee architecture and its application to workloads like next generation virtual worlds, both in the client, delivering immersive reality, and someday potentially in a distributed architecture simulating and producing these worlds. For Intel, CVC is an all play. Atom will be used in strong mobile clients. Core will be used in enterprise PCs, laptops and desktops. Xeon will be simulating these environments and handling the data communication. And whatever we brand Larrabee&#8230; will be enabling compelling visual experiences. Oh, and our software products (Havok, tools and others) will be building blocks in knitting all this together. Larrabee is a part, but there are a lot of other pieces in our vision&#8230;</em></p>
<p><strong>Tish:</strong> If the kind of rapid data movement that scientific visualization needs is achieved in virtual worlds, this will be quite a game changer for business applications of VWs too. It will also blur the boundaries between what we call virtual worlds and mirror worlds. It seems to me this kind of rapid data movement is a vital step towards what Mic described to me as Intel's vision of CVC: &#8220;Connected Visual Computing is the union of three application domains: mmog, metaverse, and paraverse (or augmented reality).&#8221; It almost seems to me that if you achieve your goals for ScienceSim, you will change how we think about virtual worlds in general. What do you think?</p>
<p><em><strong>John:</strong> I certainly hope so&#8230; Part of our goal is to stimulate innovation in the technology and usage models that will enable broad mainstream adoption of CVC based applications (what we categorize as immersive connected experiences). By tackling the scientific visualization problem, we hope to find the key technology barriers and encourage the ecosystem to solve them.</em></p>
<p><strong>Tish: </strong>To me, virtual worlds and augmented reality should be complementary and connected experiences. How do you see this connection evolving?</p>
<p><em><strong>John:</strong> We certainly see them as related. In the long term, there are many common building blocks, but they aren't united per se. It's about the user experience, and in some usages these two are almost identical&#8230; in some, they don't look or feel at all alike&#8230; the viewer is distinct by a lot. Our approach is to enable building blocks from which people can quickly build out usages that are robust.</em></p>
<p><strong>Tish: </strong>What is Intel's vision for ubiquitous mobile computing and an internet of objects? How can high performance computing be an enabler for this vision?</p>
<p><em><strong>John: </strong>Mobile computing is a central part of our life, culture and community in economically enabled economies. It feeds the data of our decisions, it connects us to entertainment, it is the access point to our soapboxes, pulpits, economy and families. This creates a massive increase in data, a massive increase in interactions, transactions and visualizations. While many HPC applications will be behind the scenes (finance, health, energy, visual analytics and others), HPC will emerge as part of a scale solution to serving some of this increase&#8230; particularly that part where interactions and visualizations are complex or compelling, or where scale enables the usage per se. I talked about my love of agents earlier, and some of that comes in here: compute working behind the scenes to help manage the data complexity, and manage some of the base interactions between ourselves and technology. The other thing we talk about internally is the &#8220;Hannah Montana usage,&#8221; where millions of people use their mobile devices to access and participate (using the sensors in the device) in an interactive live concert. When Miley hears the applause of a virtual interactive audience&#8230; and can scream back at them&#8230; we're there. Access to ubiquitous compute will be mobile, and interactive experiences will be complex&#8230; and HPC can help make that real. Watch out for the mental trap that HPC is always high end supercompute clusters, though&#8230; &#8220;mainstream HPC&#8221; &#8211; smaller clusters, high thread counts, etc. &#8211; will play a key part in all of this as well.</em></p>
<p>Interesting that John ended on this point, as this just came in from <a href="http://blog.wired.com/gadgets/2009/02/intel-fights-re.html" target="_blank">Wired</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/02/11/people-meet-people-meet-big-data-sciencesim-explores-collaborative-high-performance-computing/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Pachube, Patching the Planet: Interview with Usman Haque</title>
		<link>http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/</link>
		<comments>http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/#comments</comments>
		<pubDate>Wed, 28 Jan 2009 16:31:41 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[CurrentCost]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[HomeCamp]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MQTT and RSMB]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[Second Life]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[Virtual HomeCamp]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[arduino]]></category>
		<category><![CDATA[connecting environments]]></category>
		<category><![CDATA[dynamic environments]]></category>
		<category><![CDATA[electronically assisted plants]]></category>
		<category><![CDATA[Extended Environment Markup Language]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[sensor technology]]></category>
		<category><![CDATA[smart buildings]]></category>
		<category><![CDATA[smart spaces]]></category>
		<category><![CDATA[social networking sensor data]]></category>
		<category><![CDATA[software of space]]></category>
		<category><![CDATA[sustainable real estate]]></category>
		<category><![CDATA[the street as a platform]]></category>
		<category><![CDATA[ubicomp]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=2686</guid>
		<description><![CDATA[Usman Haque (architect and director, Haque Design + Research) and founder of Pachube pointed me to this image from T.R. Oke&#8217;s book, &#8220;Boundary Layer Climates&#8221; (original photo source Prof. L. E. Mount&#8217;s The Climatic Physiology of the Pig) to explain his approach to the &#8220;software&#8221; of space. My focus as an architect has always been [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/pigletspachubepost.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/dcfwgkt_8g2dvxgdg_b2.jpg"><img class="alignnone size-full wp-image-2835" title="piglets" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/dcfwgkt_8g2dvxgdg_b2.jpg" alt="piglets" width="614" height="407" /></a></p>
<p>Usman Haque (architect and director, <a id="o.td" title="Haque Design + Research" href="http://www.haque.co.uk/" target="_blank">Haque Design + Research</a>) and founder of <a id="cpbp" title="Pachube" href="http://www.pachube.com/">Pachube</a> pointed me to this image from <a href="http://www.geog.ubc.ca/~toke/Profile.htm" target="_blank">T.R. Oke&#8217;s</a> book, <a href="http://www.amazon.com/Boundary-Layer-Climates-T-Oke/dp/0415043190" target="_blank">&#8220;Boundary Layer Climates&#8221;</a> (original photo source Prof. L. E. Mount&#8217;s <a href="http://www.alibris.com/booksearch?qwork=1137594&amp;matches=1&amp;author=Mount%2C+Laurence+Edward&amp;browse=1&amp;cm_sp=works*listing*title" target="_blank">The Climatic Physiology of the Pig</a>) to explain his approach to the &#8220;software&#8221; of space.</p>
<p><em>My focus as an architect has always been to consider what I&#8217;ve called the &#8220;software&#8221; of space (sounds, smell, light, temperature, electromagnetic fields, social relationships, etc.) rather than the &#8220;hardware&#8221; (floors, walls, roof, etc.) as it has traditionally been considered. The image (above) really sums up why I think this is important.</em></p>
<p><em>It&#8217;s the same piglets, in the same box, but on the right hand side the temperature has been increased. This small change in how the space is &#8220;programmed&#8221; has dramatically changed the way the &#8216;inhabitants&#8217; relate to each other and how they relate to their space. This approach to architecture became my challenge: how to translate such strategies into the general architectural discourse and how to bring into reality such possibilities for the construction industry.</em></p>
<h3>&#8220;Connecting Environments, Patching the Planet&#8221;</h3>
<p>Pachube is the culmination of 12 years of work.</p>
<p><em>&#8220;It is now occupying pretty much all my time and will do for the foreseeable future,&#8221; </em>Usman told me.</p>
<p>Haque Design + Research is not foregrounded on the <a id="q51:" title="Pachube site" href="http://www.pachube.com/" target="_blank">Pachube site</a>, and I did not make the connection at first. But when I followed a small link at the bottom, I was soon delving into the <a id="n4ku" title="work of Usman Haque" href="http://www.haque.co.uk/" target="_blank">work of Usman Haque</a>. Then the penny dropped and I realized that Pachube is not only:</p>
<p><em>A web service that enables people to tag and share real time sensor data from objects, devices and spaces around the world, facilitating interaction between remote environments, both physical and virtual.</em></p>
<p>Pachube is also a really big idea.</p>
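<p>The core interaction is disarmingly simple: a device pushes its latest readings to a feed, and anything else on the network can read them. Purely as a sketch &#8211; the endpoint, feed id, and header name below are placeholders, not Pachube's documented API:</p>
<pre><code># Hedged sketch of pushing one reading to a Pachube-style feed.
# URL and header are illustrative placeholders only.
import urllib.request

def update_feed(feed_id, value, api_key):
    req = urllib.request.Request(
        url="http://www.pachube.com/api/%s.csv" % feed_id,  # placeholder URL
        data=str(value).encode(),                           # e.g. b"23.5"
        method="PUT",
        headers={"X-PachubeApiKey": api_key},               # placeholder header
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
</code></pre>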
<h3>Ubicomp and the &#8220;Software of Space&#8221;?</h3>
<p>Usman suggested that, if I really wanted to go back to the beginning of the Pachube vision, I should check out the work of Dutch architect Constant Nieuwenhuys and his 1956 proposal for a visionary society, <a id="y-7j" style="font-weight: normal;" title="New Babylon" href="http://www.artfacts.net/index.php/pageType/exhibitionInfo/exhibition/15904" target="_blank">New Babylon</a>.</p>
<p>Usman explained:</p>
<p><em>Constant Nieuwenhuys is certainly an inspiration for Pachube. He envisages a globally connected architecture, built by its inhabitants &#8211; configured, reconfigured, reappropriated&#8230;</em></p>
<p>For a more contemporary reference, Usman noted there are lots of overlapping concepts with <a id="d21o" title="Adam Greenfield (head of design direction for service and user-interface design at Nokia)" href="http://speedbird.wordpress.com/about/" target="_blank">Adam Greenfield&#8217;s work</a>. Adam is head of design direction for service and user-interface design at Nokia. See <a id="spz5" title="Everyware: The dawning age of ubiquitous computing" href="http://www.amazon.com/exec/obidos/ASIN/0321384016/v2organisa/" target="_blank"><em>Everyware: The Dawning Age of Ubiquitous Computing</em></a> and <a href="http://www.lulu.com/content/1554599"><em>Urban Computing and its Discontents</em></a> to understand more about the vision Adam Greenfield has been developing.</p>
<p>Pachube is right in the zone with the ideas outlined in <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank">the project description</a> for Adam Greenfield&#8217;s upcoming book, <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank">The City Is Here For You To Use</a>:</p>
<p><em>The City&#8230; takes everything explored in Everyware as a given, and a point of departure.</em></p>
<p><em>It assumes that emergent technologies like RFID, mesh networking and shape-memory actuators&#8230; will simply be part of how cities will be made from now on&#8230;</em></p>
<h3 style="text-align: left;">The Pachube Team</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/pachubeteamfull.jpg"><img class="alignnone size-full wp-image-2764" title="pachubeteamfull" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/pachubeteamfull.jpg" alt="pachubeteamfull" width="480" height="344" /></a></p>
<p>The Pachube Team &#8211; Usman Haque (creative director), Chris Leung (EEML developer), on the photoshopped laptop: Chris Burman (&#8220;example-maker,&#8221; e.g. SL code and the Google SketchUp plugin), Ai Hasegawa (graphic designer), and Sam Mulube (technical producer and website development).</p>
<p>Also, with Bruce Sterling as a &#8220;visionary&#8221; adviser and other luminaries involved, Pachube has some brilliant guiding lights. Usman pointed out that many people <em>&#8220;have helped, prodded, nudged and advised along the way!&#8221;</em></p>
<div><em>Gavin Starks and also Dopplr&#8217;s Matt Biddulph have been sort of &#8220;friendly neighbours&#8221; to Pachube: they&#8217;ve made some great introductions and I turn to them often for advice on being a London start-up. What&#8217;s been really useful for me is that they are active in a related area and have directly useful advice: Gavin, of course, since he&#8217;s involved in metering the world&#8217;s energy; and Matt perhaps less tangibly in his day job as Dopplr&#8217;s CTO but more so in his active Arduino-enabled social life!</em></div>
<div><em>One very important Pachube advisor has been Dr. Paul Pangaro, who has previously been CTO at a number of technology startups, and brings vital experience from his time at Sun Microsystems as Senior Director and Distinguished Market Strategist. Oh, and he&#8217;s also a former student and collaborator of Gordon Pask&#8217;s! He has been very helpful in developing a viable business model in conjunction with my brother Yusuf Haque, who, with his experience in raising capital for startups, has led the fundraising process.</em></div>
<div><em>Of course, direct daily input from the Pachube team has been vital to the development of the project, and without Chris Leung (EEML development) and Sam Mulube (backend development) it would be a very different thing indeed!</em></div>
<h3>Pachube is not just a social networking project for sensor data.</h3>
<p>Pachube evolved out of three strands of thought:</p>
<p><em>1) the geographical non-specificity of architecture these days, as people live their lives in constant connection with people in remote spaces;</em></p>
<p><em>2) a desire to open up the production process of &#8220;smart homes,&#8221; in reaction to current trends for placing the design and construction process solely in the hands of knowledgeable others;</em></p>
<p><em>3) an emphasis on contextually specific &#8220;environments&#8221; rather than object-centric &#8220;sensors.&#8221;</em></p>
<p>Sensor/actuator integrations are a part of what Pachube is about (also see Peter Quirk&#8217;s in-depth post on <a id="ai70" title="the strong connection between virtual worlds and sensor networks" href="http://peterquirk.wordpress.com/2009/01/21/sensor-networks-and-virtual-worlds/" target="_blank">the strong connection between virtual worlds and sensor networks</a>), and an interest in home automation and energy management is giving a lot of early momentum to Pachube.</p>
<p>But Usman makes clear that Pachube is about &#8220;environments&#8221; rather than &#8220;sensors&#8221;: &#8220;An &#8216;environment&#8217; has dynamic frames of reference, all of which are excluded when simply focusing on devices, objects or mere sensors&#8221; (Usman explains this in depth in the interview below). A central part of Pachube is the development of the <a id="f0b2" title="Extended Environments Markup Language." href="http://www.eeml.org/" target="_blank">Extended Environments Markup Language</a>.</p>
<h3>Extended Environments Markup Language</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/eeml.jpg"><img class="alignnone size-full wp-image-2765" title="eeml diagram" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/eeml.jpg" alt="eeml diagram" width="520" height="159" /></a></p>
<p><em>Pachube came about as a direct attempt to enable the production of dynamic, responsive, conversant &#8216;environments&#8217;. </em></p>
<p><em>The <a id="gv6y" style="color: #551a8b;" title="Extended Environments Markup Language (EEML)" href="http://www.eeml.org/" target="_blank">Extended Environments Markup Language (EEML)</a> (which is the protocol around which much of Pachube is based) is being developed to make the idea of &#8220;dynamic, responsive and conversant environments&#8221; a reality. It worksÂ with existing construction standards like <a id="l7sl" style="color: #551a8b;" title="Industry Foundation Classes (IFC)" href="http://en.wikipedia.org/wiki/Industry_Foundation_Classes" target="_blank">Industry Foundation Classes (IFCs)</a>, but exists to extend them to account for dynamic, responsive and, dare I say it, conversant buildings. </em></p>
<p>A key member of the Pachube team doing EEML development is <a id="h3n5" title="Chris Leung" href="http://www.chrisleung.org/" target="_blank">Chris Leung</a>. Haque Design + Research is the industry sponsor of Chris&#8217;s doctorate, which:</p>
<p><em>investigates how Architectural and Engineering consultancies can use advanced imaging, sensing and visualisation technology to capture, record and playback the responsive behaviour of built Architecture in response to its environment as a decision-support tool to meet this unique challenge.</em></p>
<p><strong><a href="http://www.chrisleung.org/CaseStudy1.htm">Case-Study I â€“ Kielder Forest</a></strong></p>
<p><img class="alignnone size-medium wp-image-2707" title="kielderforest" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/kielderforest-300x225.jpg" alt="kielderforest" width="300" height="225" /></p>
<p>Usman explained to me that the full vision for Pachube is not yet fleshed out on the web site (so read the full interview!), in part because the focus has been on building a backend capable of handling millions of users.</p>
<h3>The business model for Pachube</h3>
<p>Usman explained his commitment to an ethically driven business model, one that allows a diverse group of companies and individuals to transition to the internet of things. He emphasizes that one of his chief concerns is to make sure that these technologies of &#8220;extreme connectivity,&#8221; which will soon be part of every aspect of our lives, are in the hands of all who want to use them.</p>
<p><em>Pachube is here to make it easier to participate in what I expect to be a vast &#8216;eco-system&#8217; of conversant devices, buildings &amp; environments.</em></p>
<p><em>Pachube will facilitate the development of a huge range of new products and services that will arise from extreme connectivity. It&#8217;s relatively easy for large technology companies like Nike and Apple to transition into the Internet of Things, but Pachube will be particularly helpful for that huge portion of smaller scale industry players that *want* to become part of it, but which are only now waking up to the potentials of the internet &#8212; small and medium scale designers, manufacturers and developers who are very good at developing their products but don&#8217;t have the resources to develop in-house a massive infrastructure for their newly web-enabled offerings. </em></p>
<p><em>Basically, having built a generalized data-brokering backend to connect physical (and virtual) entities to the web, others can now start to build the applications that make the connections really useful. </em></p>
<h3>An Inspired Community of Early Adopters and Business Visionaries</h3>
<p><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/monkchipsathomecamp1.jpg"><img class="alignnone size-full wp-image-2766" title="monkchipsathomecamp1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/monkchipsathomecamp1.jpg" alt="monkchipsathomecamp1" width="462" height="308" /></a><br />
</em></p>
<p>James Governor (<a id="qd8i" title="@monkchips" href="http://twitter.com/monkchips" target="_blank">@monkchips</a>) of <a href="http://redmonk.com/">Redmonk</a> has Pachube, <a href="http://currentcost.co.uk/">Current Cost</a>, <a id="g.i:" title="using MQTT" href="http://mqtt.org/" target="_blank">MQTT</a> and RSMB (<a id="h0is" title="IBM AlphaWorks" href="http://alphaworks.ibm.com/tech/rsmb" target="_blank">IBM AlphaWorks</a>), and <a href="http://www.arduino.cc/" target="_blank">Arduino</a> on the board at <a id="h4a0" title="HomeCamp '08" href="http://homecamp.pbwiki.com/homecamp08" target="_blank">HomeCamp &#8217;08</a>. Photo from the <a href="http://www.flickr.com/photos/tags/homecamp08/" target="_blank">Flickr</a> <a href="http://www.flickr.com/search/?q=homecamp&amp;w=29034542%40N00" target="_blank">stream</a> of <a href="http://benjaminellis.co.uk/" target="_blank">Benjamin Ellis</a>.</p>
<p>What attracted my attention to Pachube, at first, was the small but highly energized community of early adopters I noticed experimenting with it. <a id="x2vv" title="Nigel Crawley" href="http://www.nigelcrawley.co.uk/" target="_blank">Nigel Crawley</a> (<a id="nf4y" title="@ni" href="http://twitter.com/ni" target="_blank">@ni</a>) and <a id="zjcv" title="James Taylor" href="http://jtlog.wordpress.com/" target="_blank">James Taylor</a> (<a id="ie4m" title="@jtonline" href="http://twitter.com/jtonline" target="_blank">@jtonline</a>) were some of the first to plunge in. <a id="o0.i" title="Rick Bullotta" href="http://www.automation.com/content/wonderware-appoints-rick-bullotta-vp-and-cto" target="_blank">Rick Bullotta</a>, Usman noted, has been very active in the community forum, bringing much-needed automation expertise to the conversation. <a id="ny-t" title="Pam Broviak" href="http://www.publicworksgroup.com/" target="_blank">Pam Broviak</a> (<a id="xkmo" title="@pbroviak" href="http://twitter.com/pbroviak" target="_blank">@pbroviak</a>) is an early Second Life adopter. And <a id="ugu0" title="Matt Biddulph" href="http://www.hackdiary.com/about/" target="_blank">Matt Biddulph</a> (CTO of <a href="http://www.dopplr.com/">Dopplr</a>) was the first non-Pachube person to get a feed up!</p>
<p>Another very active early adopter, <a id="q54j" title="Carl Johan Rosen" href="http://carljohanrosen.com/" target="_blank">Carl Johan Rosen</a>, wrote an <a href="http://www.openframeworks.cc/" target="_blank">openFrameworks</a> addon (<a id="ljuh" title="for more see here" href="http://carljohanrosen.com/?p=42" target="_blank">see here</a>) for <a href="http://www.pachube.com/" target="_blank">Pachube</a>, which he presented at the <a href="http://www.aec.at/en/festival2008/program/project.asp?parent=14439&amp;iProjectID=14447" target="_blank">OFLab at Ars Electronica Festival</a>.</p>
<p>After the inaugural <a id="h4a0" title="HomeCamp '08" href="http://homecamp.pbwiki.com/homecamp08" target="_blank">HomeCamp</a>, where Usman and Chris Burman from Pachube were presenters (<a id="diae" title="see slides here" href="http://www.slideshare.net/tag/pachube" target="_blank">see slides here</a>), I began to notice that people were sending their Current Cost feeds into Pachube. And recently, it was announced that Pachube has a <a href="http://apps.pachube.com/carbon_footprint.php" target="_new">carbon footprint calculation app</a>, which:</p>
<p><em>makes it very easy to take any Pachube feed that measures electricity consumption in watts or kilowatts and convert it into a Pachube feed that shows a realtime estimated carbon footprint for the last 15 minutes, the last hour and the last 24 hours.</em></p>
<p><em>The app makes use of international data provided by <a href="http://www.amee.cc/" target="_new">&#8216;AMEE &#8211; The world&#8217;s energy meter&#8217;</a>. AMEE provides figures that are specific to electricity suppliers in UK &amp; Ireland and specific to country in the rest of the world.</em></p>
<p><em>This app, combined with the <a href="http://community.pachube.com/?q=node/100">Current Cost app</a> makes it simple to monitor your carbon footprint on a day to day basis!</em></p>
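<p>The arithmetic behind such a conversion is simple enough to sketch: average power over a window becomes kilowatt-hours, and kilowatt-hours times a grid conversion factor becomes kilograms of CO2. A minimal sketch in Python, with an invented factor standing in for AMEE&#8217;s region- and supplier-specific figures:</p>
<pre>
# Carbon-footprint arithmetic: average watts over a window -&gt; kWh -&gt; kg CO2.
# The 0.43 kg/kWh factor is invented for illustration; the real app uses
# AMEE's supplier- and country-specific conversion data.
def co2_kg(avg_watts, hours, kg_per_kwh=0.43):
    kwh = (avg_watts / 1000.0) * hours
    return kwh * kg_per_kwh

print(round(co2_kg(450, 0.25), 3))  # last 15 minutes: ~0.048 kg
print(round(co2_kg(450, 24), 2))    # last 24 hours:   ~4.64 kg
</pre>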
<p>I still haven&#8217;t found out what <a id="kmt8" title="@yellowpark" href="http://twitter.com/yellowpark" target="_blank">@yellowpark</a> was doing last Saturday to produce so much CO2&#8230; (the perils of going public with your energy consumption, as <a id="am8t" title="@pachube" href="http://twitter.com/pachube" target="_blank">@pachube</a> pointed out).</p>
<p>But perhaps Chris Dalby <a id="kmt8" title="@yellowpark" href="http://twitter.com/yellowpark" target="_blank">(@yellowpark</a>) can be excused a day of CO2 excess as he has just released <a id="qf:l" title="Pachube Air" href="http://www.yellowpark.net/cdalby/index.php/2009/01/10/pachube-air-the-first-release/" target="_blank">Pachube Air</a>.</p>
<p>While enterprise and government projects are on the near horizon, Pachube is designed to introduce a DIY approach to ubicomp. Usman said he is &#8220;concerned by developments in ubiquitous computing whereby &#8216;making technology invisible&#8217; equates to placing the design and construction process solely in the hands of knowledgeable others.&#8221;</p>
<p>DIY City (see the <a id="zwms" title="Do-It-Yourself-City Project" href="http://diycity.org/diycity-main-group/call-work-first-diycity-project" target="_blank">Do-It-Yourself-City Project</a>) is developing a similar vision here in NYC.</p>
<h3>Natural Fuse: &#8220;A city wide network of electronically-assisted plants.&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/naturalfusenetwork1.jpg"><img class="alignnone size-full wp-image-2779" title="naturalfusenetwork1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/naturalfusenetwork1.jpg" alt="naturalfusenetwork1" width="405" height="305" /></a></p>
<p><em>&#8220;I think we&#8217;ve really not even begun to imagine the kinds of applications that will be important,&#8221;</em> says Usman Haque.</p>
<p>Haque Design + Research continues with a separate team and will be involved mostly in the kinds of things it has done in the past, but also <em>&#8220;in pushing development of things that *use* Pachube,&#8221;</em> such as Natural Fuse. The project &#8211; by Usman Haque, <a id="y5x7" title="Nitipak Samsen (Designer)" href="http://www.dotmancando.info/" target="_blank">Nitipak Samsen (designer)</a>, <a id="d.p2" title="Cesar Harada (Designer)" href="http://www.cesarharada.com/" target="_blank">Cesar Harada (designer)</a> and Barbara Jasinowicz (producer) &#8211; was commissioned by <a href="http://www.archleague.org/index-dynamic.php?show=757" target="_new">the Architecture League</a> &amp; <a href="http://www.situatedtechnologies.net/?q=node/89" target="_new">Situated Technologies: Toward the Sentient City</a> and will open to the public in Autumn 2009.</p>
<p><em>Natural Fuse harnesses the carbon-sinking capabilities of plants to create a city-wide network of electronically-assisted plants that act both as energy providers and as shared &#8220;carbon sink&#8221; circuit breakers. By sharing resources and information between the plants, energy expenditure can be collectively monitored and managed.</em></p>
<p><em>The purpose is to create a collective &#8220;carbon sink&#8221; that offsets the amount of energy consumed by the plant owners &#8211; a natural &#8220;circuit breaker&#8221;. If people cooperate on their energy expenditure then the plants thrive (and they can all use more energy); but if they don&#8217;t then the network starts to kill plants, thus diminishing the network&#8217;s energy capacity.</em> (A full description of Natural Fuse appears in the interview below.)</p>
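<p>The &#8220;circuit breaker&#8221; logic is easy to caricature in code. Here is a toy simulation of the idea &#8211; every number below is invented for illustration, and the real project meters actual fused appliances rather than a list of constants:</p>
<pre>
# Toy simulation of the Natural Fuse "shared carbon sink" circuit breaker.
# All numbers are invented for illustration.
PLANT_SINK_KG_PER_DAY = 0.01  # CO2 one plant might offset per day (made up)
KG_CO2_PER_KWH = 0.43         # illustrative grid conversion factor

plants = 100
daily_kwh_drawn = [0.2, 0.3, 1.8]  # three households' fused appliances

emitted = sum(daily_kwh_drawn) * KG_CO2_PER_KWH
capacity = plants * PLANT_SINK_KG_PER_DAY

if emitted &lt;= capacity:
    print("Within capacity: plants thrive, the network can carry more load.")
else:
    # The fuse "blows": excess emissions cost the network plants,
    # shrinking tomorrow's capacity for everyone.
    killed = int((emitted - capacity) / PLANT_SINK_KG_PER_DAY) + 1
    plants -= killed
    print(f"Over capacity: {killed} plants killed, {plants} remain.")
</pre>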
<h3>The Street As Platform</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/streetasaplatform1.jpg"><img class="alignnone size-full wp-image-2780" title="streetasaplatform1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/streetasaplatform1.jpg" alt="streetasaplatform1" width="450" height="301" /></a></p>
<p>Image courtesy of <a id="k0g3" title="Timo Arnall" href="http://www.elasticspace.com/" target="_blank">Timo Arnall</a>, an awesome photographer and mover and shaker in ubicomp. <em>&#8220;The way the street feels may soon be defined by what cannot be seen with the naked eye,&#8221;</em> writes Dan Hill in his post <a href="http://www.cityofsound.com/blog/2008/02/the-street-as-p.html" target="_blank">&#8220;The Street as Platform.&#8221;</a> Usman comments on Dan Hill&#8217;s other &#8220;must read&#8221; post:</p>
<p><em><a id="doow" title="&quot;the personal well-tempered environment,&quot;" href="http://www.cityofsound.com/blog/2008/01/the-personal-we.html" target="_blank">&#8220;The Personal Well-Tempered Environment&#8221;</a> is full of &#8220;fascinating propositions&#8230; &#8230;they&#8217;re relevant to things I&#8217;m interested in&#8230;</em></p>
<p>In a summary of his ideas on personal well-tempered env., Dan Hill writes:<br />
<em></em></p>
<p><em>A real-time dashboard for buildings, neighbourhoods, and the city, focused on conveying the energy flow in and out of spaces, centred around the behaviour of individuals and groups within buildings.</em></p>
<p><em>A form of &#8216;BIM 2.0&#8242; that gives users of buildings both the real-time and longitudinal information they need to change their behaviour and thus use buildings, and energy, more effectively. An ongoing post-occupancy evaluation for the building, the neighbourhood and the city.</em></p>
<p><em>A software service layer for connecting things together within and across buildings.</em></p>
<p><em>As information increasingly becomes thought of as a material within buildings, it makes sense to consider it holistically as part of the built fabric, like glass, steel, ETFE, etc.</em></p>
<h3>Interview With Usman Haque</h3>
<p><strong>Tish Shute:</strong> You have been involved in many awesome projects, but Pachube seems to be quite a new direction. What are the key influences in your career and the development of your thinking? And could you tell me more about how your previous work brought you to creating Pachube? Is Pachube a central focus for you and Haque Design + Research now?</p>
<p><strong>Usman Haque:</strong><em> To me Pachube is the logical culmination of everything I&#8217;ve worked on for the last 12 years since finishing my post-grad architecture studies.</em></p>
<p><em>A lot of my work until now has centered around large-scale mass-collaboration interactive &#8220;spectacles&#8221; involving many thousands of members of the public at once. I found this a good medium in which (a) to explore strategies for collaboration that take account of the granularity of participation (i.e. the fact that different people have different interests, skills and intentions in any participative act); and (b) to work at an urban scale; i.e. in a way that has an effect at the scale of buildings, parks, and streetscapes etc.</em></p>
<p><em> <a id="kr8h" title="Open Burble" href="http://www.haque.co.uk/openburble.php" target="_blank">Open Burble</a> was a good example of this approach: essentially a framework, composed of 2m carbon-fibre modules, it had electronics embedded in 1000 helium balloons. Members of the public could configure and assemble these, inflate them and then unfurl the complex structure up to the scale of a 15 storey buidling. Finally, by shaking, rowing, twisting and bending a handlebar embedded with sensors (the same as in the Wii controller as it happens), dozens of people at once could have an effect on the Burble&#8217;s position and the colours streaming through it.</em></p>
<p><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/openburble2.jpg"><img class="alignnone size-full wp-image-2832" title="openburble2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/openburble2.jpg" alt="openburble2" width="509" height="338" /></a><br />
</em></p>
<p><a href="http://www.haque.co.uk/openburble.php" target="_blank">Open Burble, Singapore Biennale 2006</a></div>
<p><em>Along the way I became interested at times in what an &#8220;operating system&#8221; might mean in the context of architecture (see the paper <a id="cxpf" title="Hardspace, Softspace and the possibilities of open source architecture, 2002" href="http://www.haque.co.uk/papers/hardsp-softsp-open-so-arch.PDF" target="_blank">Hardspace, Softspace and the possibilities of open source architecture, 2002 (PDF)</a>), particularly an &#8220;open source&#8221; operating system (Urban Versioning System, <a id="yvjc" title="http://uvs.propositions.org.uk/" href="http://uvs.propositions.org.uk/" target="_blank">http://uvs.propositions.org.uk/</a>). I was also interested in developing tools for supposedly &#8220;non-technical&#8221; people to start building their own interactive systems or environments, hence the release of <a id="zv:-" title="The &quot;Low Tech Sensors &amp; Actuators for Artists and Architects&quot;" href="http://lowtech.propositions.org.uk/" target="_blank">The &#8220;Low Tech Sensors &amp; Actuators for Artists and Architects&#8221;</a> pamphlet, co-authored with an old friend, <a id="w-ad" title="Adam Somlai-Fischer" href="http://www.aether.hu/" target="_blank">Adam Somlai-Fischer</a>, back in 2005.</em></p>
<p><em>An off-shoot of this has been an obsession with <a id="ahue" title="trying to rescue the concept of &quot;interaction&quot;" href="http://mags.acm.org/interactions/20090102/?pg=71" target="_blank">trying to rescue the concept of &#8220;interaction&#8221;</a> from oblivion &#8211; I say oblivion because I think the really exciting possibilities of the concept of interaction are being lost because we&#8217;re being sold a billion so-called &#8220;interactive&#8221; devices and gadgets that are, in fact, merely &#8220;reactive&#8221;. In this, <a id="t5h7" title="I turn often to the work of cybernetician Gordon Pask" href="http://www.haque.co.uk/papers/architectural_relevance_of_gordon_pask.pdf" target="_blank">I turn often to the work of cybernetician Gordon Pask</a>, who was particularly active in the 50s, 60s and 70s in the development of truly interactive systems (and was also a collaborator with <a id="gt4p" title="Cedric Price" href="http://en.wikipedia.org/wiki/Cedric_Price" target="_blank">Cedric Price</a>, one of my favourite architects).</em></p>
<p><em>Which brings me to Pachube, which is now occupying pretty much all my time and will do for the foreseeable future. (<a id="qdfj" title="Haque Design + Research" href="http://www.haque.co.uk/" target="_blank">Haque Design + Research</a> still continues, and has a separate team &#8212; it will be involved mostly in the kinds of things it has in the past, but also in pushing development of things that *use* Pachube, such as the project <a id="h:9w" title="Natural Fuse" href="http://www.haque.co.uk/naturalfuse.php" target="_blank">Natural Fuse</a>.)</em></p>
<p><em>Pachube came about as a direct attempt to enable the production of dynamic, responsive, conversant &#8216;environments&#8217;. It basically evolved out of three strands of thought.</em></p>
<p><em>The first was the notion of the <strong>geographical non-specificity of architecture</strong> these days. By this I mean that, for many of us now, &#8220;home&#8221; is an idea constructed from several places &#8211; we live and work in environments composited by networked technology from fragments that bridge huge geographical distances. These environments are resolutely &#8220;human&#8221; (in the sense of being inhabited, designed and determined by people) yet context-free (because they do not privilege geographical location). I wanted to find a way to &#8220;connect&#8221; up remote spaces, much like <a id="ubie" title="Remote Home" href="http://www.tobi.net/remotehome/remotehome.htm" target="_blank">Remote Home</a> and a whole range of other projects had done, but in a generalized way so that it would be possible to keep adding to the ecosystem of connected environments on an ad hoc basis; a global architecture if you will.</em></p>
<p><em>The second strand of thought came from the <strong>desire to open up the production process of &#8220;smart homes.&#8221;</strong> I&#8217;m concerned by developments in ubiquitous computing whereby &#8220;making technology invisible&#8221; equates to placing the design and construction process solely in the hands of knowledgeable others. Whereas it&#8217;s still possible more or less to do DIY on your home, if many ubicomp technologists had their way it would become less and less possible simply because of the complexity of reverse-engineering such closed-systems. It&#8217;s already a problem with larger buildings: service companies go out of business, proprietary skills or tools disappear and complex lighting and sensor systems remain unused. So, with Pachube I wanted to help foster a more open way of developing the discipline: to embrace the concept of the maker, and to help people negotiate their technological future.</em></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/reconfigurablehouse.jpg"><img class="alignnone size-full wp-image-2781" title="reconfigurablehouse" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/reconfigurablehouse.jpg" alt="reconfigurablehouse" width="419" height="107" /></a></p>
<p><em><a id="ex31" title="Reconfigurable House" href="http://haque.co.uk/reconfigurablehouse.php" target="_blank">Reconfigurable House</a>,Â an environment constructed from thousands of low tech components that can be &#8220;reconfigured&#8221; by its occupants.</em></p>
<p><em>The final strand of thought relates to Pachube&#8217;s emphasis on <strong>&#8220;environments&#8221; rather than &#8220;sensors.&#8221;</strong> I believe that one of the major failings of the usual ubicomp approach is to consider the connectivity and technology at the object-level, rather than at the environment-level. It&#8217;s built into much of contemporary Western culture to be object-centric, but at the level of &#8220;environment&#8221; we talk more about context, about disposition and subjective experience. An &#8216;environment&#8217; has dynamic frames of reference, all of which are excluded when simply focusing on devices, objects or mere sensors. If one really studies deeply what an &#8216;environment&#8217; is (by this I mean more than simply saying that &#8220;it&#8217;s what things exist in&#8221;), one begins to understand that an environment is a construction process and not a medium; nor is it a state or an entity. In this I would refer to Gordon Pask&#8217;s phenomenally important text &#8220;Aspects of Machine Intelligence&#8221; in Nicholas Negroponte&#8217;s <a id="hlcg" title="Soft Architecture Machine" href="http://www.amazon.com/Soft-Architecture-Machines-Nicholas-Negroponte/dp/0262140187" target="_blank">Soft Architecture Machine</a>, though it makes for extremely tough reading (Negroponte compared it in importance to Alan Turing&#8217;s contributions to the computer science discipline).</em></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/pachube1.jpg"><img class="alignnone size-full wp-image-2782" title="pachube1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/pachube1.jpg" alt="pachube1" width="411" height="275" /></a></p>
<p><em>Ultimately, though, Pachube is here to make it easier to participate in what I expect to be a vast &#8216;eco-system&#8217; of conversant devices, buildings &amp; virtual environments. Pachube will facilitate the development of a huge range of new products and services that will arise from extreme connectivity. It&#8217;s relatively easy for large technology companies like <a id="ps11" title="Nike and Apple" href="http://www.apple.com/ipod/nike/" target="_blank">Nike and Apple</a> to transition into the Internet of Things, but Pachube will be particularly helpful for that huge portion of smaller scale industry players that *want* to become part of it, but which are only now waking up to the potentials of the internet &#8212; small and medium scale designers, manufacturers and developers who are very good at developing their products but don&#8217;t have the resources to develop in-house a massive infrastructure for their newly web-enabled offerings. Basically, having built a generalized data-brokering backend to connect physical (and virtual) entities to the web, others can now start to build the applications that make the connections really useful.</em></p>
<p><strong>Tish Shute:</strong> You mentioned that both Bruce Sterling and Gavin Starks (AMEE) have given input on Pachube. Can you describe any specific ways they (and others?) have influenced the evolution of Pachube? You mentioned the concept of &#8220;engaged responsible spime wrangling&#8221; when we talked on Skype.</p>
<p><strong>Usman Haque:</strong> <em>Yes, I am very grateful to a whole bunch of people who have helped, prodded, nudged and advised along the way!</em></p>
<p><em>I asked Bruce to be a &#8220;visionary&#8221; adviser because he was one of the people early on to envisage the concepts and ramifications of <a id="v5w3" title="&quot;spimes&quot; (his neologism for 'space-time objects')" href="http://www.boingboing.net/images/blobjects.htm" target="_blank">&#8220;spimes&#8221; (his neologism for &#8216;space-time objects&#8217;)</a>. While I agree that &#8220;spimes&#8221; are directly relevant, what I found most important from his conception was the concept of &#8220;wrangling&#8221; &#8211; being actively and productively engaged and responsible in the development of spimed environments. I think it was a crucial leap: to talk about &#8220;wranglers&#8221; rather than &#8220;end-users&#8221;. So the kinds of questions I&#8217;ve turned to him for regard how to nudge people away from being &#8220;end users&#8221; and towards being &#8220;wranglers&#8221;; and about how to transition from being a &#8220;hacker toy&#8221; to &#8220;major infrastructure&#8221;. He had some great (and invaluable) responses, of which one of the most important to me was something he said in email: &#8220;&#8230;I think total openness is fatal. It&#8217;s like lying in a blazing sun under a sky full of vultures, naked. It&#8217;s also rather rude, like babbling anything or anything that flies into your head and still expecting people to pay attention.&#8221;</em></p>
<p><em><a id="qrs7" title="Gavin Starks" href="http://www.amee.cc/" target="_blank">Gavin Starks</a> and alsoÂ <a id="bbd." title="Dopplr's" href="http://www.dopplr.com/" target="_blank">Dopplr&#8217;s</a> <a id="aqy:" title="Matt Biddulph" href="http://www.hackdiary.com/" target="_blank">Matt Biddulph</a> have been sort of &#8220;friendly neighbours&#8221; to Pachube: they&#8217;ve made some great introductions and I turn to them often for advice on being a London start-up. What&#8217;s been really useful for me is that they are active in a related area and have directly useful advice: Gavin, of course, since he&#8217;s involved inÂ <a id="lzoi" title="metering the world's energy" href="http://www.amee.cc/" target="_blank">metering the world&#8217;s energy</a>; and Matt perhaps less tangibly in his day job as Dopplr&#8217;s CTO but more so in hisÂ <a id="jav_" title="active Arduino-enabled social life" href="http://tinker.it/now/2009/01/20/toy-hacking-workshop-09/" target="_blank">active Arduino-enabled social life</a>!</em></p>
<p><em>One very important Pachube advisor has been <a id="qjz0" title="Dr. Paul Pangaro" href="http://www.pangaro.com/" target="_blank">Dr. Paul Pangaro</a>, who has previously been CTO at a number of technology startups, and brings vital experience from his time at Sun Microsystems as Senior Director and Distinguished Market Strategist. (Oh, and he&#8217;s also a former student and collaborator of Gordon Pask&#8217;s!) He has been very helpful in developing a viable business model in conjunction with my brother Yusuf Haque, who, with his experience in raising capital for startups, has led the fundraising process.</em></p>
<p><em>Of course, direct daily input from the Pachube team has been vital to the development of the project, and without <a id="nyoj" title="Chris Leung" href="http://www.chrisleung.org/" target="_blank">Chris Leung</a> (EEML development) and <a id="xr8l" title="Sam Mulube" href="http://twitter.com/smazero" target="_blank">Sam Mulube</a> (backend development) it would be a very different thing indeed!</em></p>
<p><strong>Tish Shute:</strong> Now the emerging internet is the world as a networked, enhanced virtual/real environment &#8211; sorry about the inadequate terminology, but as you said, &#8220;the distinction between real and virtual is becoming as quaint as the distinction between mind and body&#8221;. You are participating in the <a id="k7s8" title="Sentient City" href="http://www.situatedtechnologies.net/?q=node/89" target="_blank"><strong>Sentient City</strong> exhibition</a>, organized by the <a href="http://www.archleague.org/" target="_blank">Architectural League of New York</a> for September 2009.</p>
<p>Could you explain more about the Sentient City project and your contribution to it, Natural Fuse, which uses common house plants, energy-monitoring sensors, and Pachube to create &#8220;a city-wide network of electronically-assisted plants that act as carbon-cycle circuit-breakers in much the same way as conventional electrical circuit-breakers do&#8230;&#8221;?</p>
<p><strong>Usman Haque: </strong><em>Situated Technologies, founded to explore the impact of &#8220;situated&#8221; technologies (i.e. locative media, etc.) in urban spaces, kicked off with a <a id="b77z" title="symposium organised by Mark Shepard, Omar Khan and Trebor Scholz" href="http://www.situatedtechnologies.net/?q=node/1" target="_blank">symposium organised by Mark Shepard, Omar Khan and Trebor Scholz</a> and supported by the <a id="o7a4" title="Architecture League of New York" href="http://www.archleague.org/" target="_blank">Architecture League of New York</a> a couple of years ago, and continued through <a id="o5o6" title="a series of pamphlets" href="http://www.situatedtechnologies.net/?q=node/75" target="_blank">a series of pamphlets</a> (the first by Adam Greenfield &amp; Mark Shepard; the second by me and Matthew Fuller; the third and fourth by Benjamin Bratton &amp; Natalie Jeremijenko and Laura Forlano &amp; Dharma Dailey). This is now culminating in an exhibition, &#8220;Toward the Sentient City&#8221;, opening in September 2009, as a public manifestation of many of the concepts raised over the years.</em></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/plantcircuit1.jpg"><img class="alignnone size-full wp-image-2783" title="plantcircuit1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/plantcircuit1.jpg" alt="plantcircuit1" width="400" height="289" /></a></p>
<p><em><a id="k48e" title="Natural Fuse" href="http://www.haque.co.uk/naturalfuse.php" target="_blank">Natural Fuse</a>, a project funded by the Architecture League to be part of that exhibtion, is really a Haque Design + Research project rather than Pachube project alone. It came about for two reasons. The first was because we had been investigating for several months many different ways to use plants and vegetation in interactive architectural design: as living walls, as responsive systems, as visual and olfactory indicators, as passive ventilation &#8212; fantastic research undertaken predominantly by my invaluable production assistant Barbara Jasinowicz. We were particularly interested in energy creation and monitoring and had made a number of (unsuccessful) proposals to develop building systems based on plant interaction. The second was because I wanted to have a good demonstration project for Pachube: a system that was not just end-to-end single-point communication, but one in which the system increased its efficiency over time through more and more geographically-dispersed connections. So Natural Fuse developed through a series of conversations with a very intelligent and witty designerÂ <a id="ed_l" title="Nitipak (Dot) Samsen" href="http://www.dotmancando.info/" target="_blank">Nitipak (Dot) Samsen</a> who was then an intern and who will now lead design work along withÂ <a id="w9.y" title="Cesar Harada" href="http://www.cesarharada.com/" target="_blank">Cesar Harada</a> (similarly intelligent and witty!).</em></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/plantfusecare1.jpg"><img class="alignnone size-full wp-image-2784" title="plantfusecare1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/plantfusecare1.jpg" alt="plantfusecare1" width="400" height="322" /></a></p>
<p><em>Briefly, the point of Natural Fuse is to use networked plants, based on the Arduino ethernet platform, to harness the carbon-sinking capabilities of plants to create a city-wide network of electronically-assisted plants that act both as energy providers and as shared &#8220;carbon sink&#8221; circuit breakers. By sharing resources and information between the plants, energy expenditure can be collectively monitored and managed. The purpose is to create a collective &#8220;carbon sink&#8221;, that offsets the amount of energy consumed by the plant owners &#8211; a natural &#8220;circuit breaker&#8221;. If people cooperate on their energy expenditure then the plants thrive (and they can all use more energy); but if they don&#8217;t then the network starts to kill plants, thus diminishing the network&#8217;s energy capacity. Of course, the network functionality is enabled by Pachube. The plan is to distribute these to some households in New York and offer plans and downloads for people to build their own as well.</em></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/plantfusesystem1.jpg"><img class="alignnone size-full wp-image-2785" title="plantfusesystem1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/plantfusesystem1.jpg" alt="plantfusesystem1" width="432" height="214" /></a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/plantfuseunit.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/plantfuseunit1.jpg"><img class="alignnone size-full wp-image-2786" title="plantfuseunit1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/plantfuseunit1.jpg" alt="plantfuseunit1" width="443" height="197" /></a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/naturalfusenetwork2.jpg"><img class="alignnone size-full wp-image-2787" title="naturalfusenetwork2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/naturalfusenetwork2.jpg" alt="naturalfusenetwork2" width="462" height="348" /></a><br />
<strong><br />
Tish Shute:</strong> You describe Pachube as linking environments not just sensor to sensor (as sensorbase.org does) &#8211; an environment for Pachube could be a web page. An essential concept in Pachube is the concept that anything could be an environment and such environments are treated equivalently with EEML. You describe EEML as a protocol that sits comfortably with existing building protocols &#8220;what it brings to the picture is the ability to describe buildings that change.&#8221;</p>
<p>How will EEML change our understanding of architecture and enable the view of architecture that &#8220;includes smells, sounds, light, electromagnetic fields &#8211; buildings as dynamic and changing?&#8221; (Prasad Passive House?)</p>
<p>You describe EEML as straddling, and designed to work alongside, the IFC construction industry format. Who is involved in the creation of EEML? Could you explain a little bit how it is different from SensorML? You mentioned little has been done re post-construction evaluation of buildings. How will EEML enable buildings to share strategies (for example on energy consumption), as you put it?</p>
<p><strong>Usman Haque:</strong> <em>The <a id="gv6y" style="color: #551a8b;" title="Extended Environments Markup Language (EEML)" href="http://www.eeml.org/" target="_blank">Extended Environments Markup Language (EEML)</a> (which is the protocol around which much of Pachube is based) is being developed to make the idea of &#8220;dynamic, responsive and conversant environments&#8221; a reality. It works with existing construction standards like <a id="l7sl" style="color: #551a8b;" title="Industry Foundation Classes (IFC)" href="http://en.wikipedia.org/wiki/Industry_Foundation_Classes" target="_blank">Industry Foundation Classes (IFCs)</a>, but exists to extend them to account for dynamic, responsive and, dare I say it, conversant buildings. In the perhaps prosaic world of construction, this helps to facilitate a number of architectural requirements such as <a id="i2_j" style="color: #551a8b;" title="post-occupancy evaluation" href="http://www.google.com/search?hl=en&amp;client=safari&amp;rls=en&amp;defl=en&amp;q=define:post+occupancy+evaluation&amp;sa=X&amp;oi=glossary_definition&amp;ct=title" target="_blank">post-occupancy evaluation</a>, realtime site-based environmental feedback at the design phase and simulations that synchronise with realworld installation. With <a id="hxs4" style="color: #551a8b;" title="EEML" href="http://www.eeml.org/" target="_blank">EEML</a> and Pachube you&#8217;ll be able to start working with, say, an Autocad model at the design phase, and include *real time* environmental data from the site, as well as to model expected sensor and assumed energy consumption data of the design; use the same model during the construction phase (because it will translate fine to standard modelling descriptions), and keep working with the same set of information even after the building is occupied and running &#8212; making it a whole lot easier to learn from the design and maintenance processes than it is currently.</em></p>
<p><em>At the same time this does not exclude the possibility of talking about &#8220;sensors&#8221; (as <a id="swia" title="SensorML" href="http://en.wikipedia.org/wiki/SensorML" target="_blank">SensorML</a> wants to), but we are more easily able to consider, say, the dozens of different ways that different clients will want to address, access or search for those sensors; the changing contextual motivations for actually processing sensor information; and the capacity for flexible sensor ontologies &#8212; where you don&#8217;t need to know from the beginning everything you&#8217;ll be looking for once you&#8217;ve recorded mountains of data.</em></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/environmentsconnected.jpg"><img class="alignnone size-full wp-image-2792" title="environmentsconnected" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/environmentsconnected.jpg" alt="environmentsconnected" width="454" height="151" /></a></p>
<p><em>As a result of this conception of &#8220;environment&#8221; we remove the need for a distinction between &#8220;real&#8221; and &#8220;virtual&#8221;. We can consider equally as &#8216;environments&#8217; a mountainside, the interior of a building, the context of a webpage, the internal status and external context of a mobile device, the interactions within something like Second Life &#8212; all these are environments and can communicate with each other on equivalent terms. More importantly, a single &#8220;environment&#8221; can be expressed as a snapshot in time; or it can be expressed as a sequence of many snapshots over several years.</em></p>
<p><em>One very important thing we&#8217;re looking at now is how to transition the protocol from something that is status-based, to something that can express transactions, goals and processes. We&#8217;ve just started looking at how <a id="e7.0" title="RDF" href="http://en.wikipedia.org/wiki/Resource_Description_Framework" target="_blank">RDF</a> and <a id="khn." title="machine tags" href="http://en.wikipedia.org/wiki/Machine_tag" target="_blank">machine tags</a> might help in this, largely spurred on by perceptive comments from one of my favourite designers, <a id="mit9" title="Toxi, a.k.a. Karsten Schmidt" href="http://postspectacular.com/" target="_blank">Toxi, a.k.a. Karsten Schmidt</a>.</em></p>
<p><strong>Tish Shute:</strong> You mentioned that you see &#8220;smart&#8221; buildings and &#8220;smart&#8221; cities as environments, not just a collection of devices. On the Pachube web page there is a chart describing potential interactions between entities (one to one, one to many, etc.), but you do not give many pointers to how two unrelated objects that are connected would derive any value out of the connection&#8230; Could you give me some examples of the kinds of use cases (Natural Fuse is one of course!) and interesting new opportunities to create shared value that Pachube will enable?</p>
<p><strong>Usman Haque:</strong> <em>Yes, I recognize that the Pachube website information leaves a lot to be desired&#8230;! Apart from a whole lot of conceptual information that&#8217;s missing, there are a number of undocumented API features that nobody has yet uncovered!</em></p>
<p><em>Well, in answer to your question: much of it is intuition &#8211; I don&#8217;t know exactly _how_ it will be valuable but I do expect the community to find ways to make such seemingly disparate interoperability valuable.</em></p>
<p><em>To take a prosaic example: say (once privacy options are introduced) that a manufacturer creates a <a id="s53b" title="Pachube input application" href="http://community.pachube.com/?q=node/100" target="_blank">Pachube input application</a>, like an electricity meter that automatically charts on Pachube. There is a certain benefit to its customers in being able to monitor their usage over time and to compare their usage to the aggregation of others in a similar class, but anonymised. Say that someone else has produced a Pachube output application like a <a id="fhjs" title="mobile phone Pachube viewer" href="http://www.rcreations.com/freeandroidgphoneg1applications" target="_blank">mobile phone Pachube viewer</a>. Now the electricity meter users can use this new output application as an extension to be able to monitor their consumption on a mobile phone. Now, imagine if someone else develops a new product, a <a id="j.l-" title="networked lamp" href="http://www.goodnightlamp.com/" target="_blank">networked lamp</a> &#8212; it would now be very easy for that designer to write a little app to make the networked lamp switch on (or change brightness) according to the electricity consumption, even remotely. The point is that the more input and output apps are added the more valuable they each become.</em></p>
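<p>As a thought experiment, the meter-to-lamp composition Usman describes might look something like the sketch below. The feed URL, the API-key header and the CSV response format are my assumptions for the sake of illustration &#8211; consult Pachube&#8217;s own documentation for the real API:</p>
<pre>
# Hypothetical sketch of the meter -&gt; lamp composition described above.
# Endpoint, header and response format are assumptions, not Pachube's
# documented API.
import time
import urllib.request

FEED_CSV = "http://www.pachube.com/api/feeds/1234.csv"  # hypothetical feed id
API_KEY = "YOUR_PACHUBE_API_KEY"

def current_watts():
    req = urllib.request.Request(FEED_CSV, headers={"X-PachubeApiKey": API_KEY})
    with urllib.request.urlopen(req) as resp:
        return float(resp.read().decode().split(",")[0])

def set_lamp_brightness(level):
    print(f"lamp -&gt; {level:.0%}")  # stand-in for a real networked-lamp call

while True:
    # Brighter lamp = heavier consumption: an ambient "output" for the feed.
    set_lamp_brightness(min(current_watts() / 3000.0, 1.0))
    time.sleep(60)
</pre>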
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/scatteredhouse.jpg"><img class="alignnone size-full wp-image-2791" title="scatteredhouse" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/scatteredhouse.jpg" alt="scatteredhouse" width="443" height="109" /></a><br />
<a id="tzsq" title="Scattered House" href="http://www.haque.co.uk/scatteredhouse.php" target="_blank"></a></p>
<p><em><a id="tzsq" title="Scattered House" href="http://www.haque.co.uk/scatteredhouse.php" target="_blank">Scattered House</a>, like Reconfigurable House, but spread throughout various cities in the world to demonstrate the implications of designing environments and buildings in the context of family diasporas and ubiquitous ad hoc networked connectivity.</em></p>
<p><em>Part of Pachube&#8217;s emphasis, in not making specific connections more important than others, is that the community can develop new types of connection. So, of course, it makes it relatively simple to create remote control connections between seemingly unrelated entities (like mobile phones and houses; or web pages and furniture); it makes it relatively simple to connect up environmental conditions from the physical world to the seemingly distant Second Life (or, more interestingly to me, <a id="iqkx" title="OpenSim" href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a>), which can make it a more viable interactive environment; and it makes data aggregation and comparison possible between wide ranges of energy consumers to facilitate aggregation analysis. But the point really is to make it easy for people and companies to build in this kind of connectivity and invent new uses.</em></p>
<p><em>Through my close association with <a id="sin8" title="The Bartlett, University College London's architecture school" href="http://www.bartlett.ucl.ac.uk/" target="_blank">The Bartlett, University College London&#8217;s architecture school</a>, I hope to develop some particularly relevant use-case scenarios for the architectural industry. I think we&#8217;ve really not even begun to imagine the kinds of applications that will be important, though I guess Natural Fuse exemplifies the kind of approach I would like to see in Pachube-enabled applications: one in which the collective/hive experience contributes towards some end goal, to make it possible to create a &#8220;wikipedia of environments&#8221; as opposed to a web-based Wikipedia &#8211; it&#8217;s not that I necessarily want to create these things myself, but rather I want to make it possible to create such things.</em></p>
<p><strong>Tish Shute:</strong> You mentioned that you hope Pachube will be the place to connect smart products &#8211; product to product communication? Also, you mentioned that you would like to have a way for smart products to self-register with Pachube. While all feeds are public now, you are going to create groups with different levels of privacy. Both of the aforementioned features would enable more business applications for Pachube. But could you describe the business model for Pachube?</p>
<p><strong>Usman Haque:</strong> <em>Essentially, there are three facets to the business model. The first takes a cue from <a id="irzp" title="Flickr" href="http://www.flickr.com/upgrade/" target="_blank">Flickr</a> in recognising that there are those who would like a more sophisticated set of services as &#8220;professional&#8221; accounts. The second is to be able to provide a set of tools and applications for medium scale manufacturers and developers who want to web-enable their offerings, who will be able to take advantage of the growing repository of Pachube.Apps and add-ons, and who want the convenience, security and economy that Pachube will be able to offer. The third approach is to become more directly involved in large-scale urban infrastructure projects. There is a fourth facet, but we consider it the killer so I&#8217;m keeping quiet for the moment&#8230;</em></p>
<p><em>So yes, in order to make all these things more useful we&#8217;ll soon be introducing a range of privacy options on feeds, the ability to create &#8220;aggregates&#8221; from collections of feeds, and the possibility of groups, organised around feeds. Another thing we&#8217;re hoping to introduce soon is open environment-level tagging, so that anyone will be able to tag environments, though there will be a way of evaluating the importance of any given tag.</em></p>
<p><strong>Tish Shute: </strong>I know you mentioned that you are trying to find tools that allow people to contribute to their environment. There are a number of projects aimed at providing tools that will help people and businesses to reduce their carbon footprint &#8211; <a id="a2qc" title="The Carbon Account," href="http://www.thecarbonaccount.com/" target="_blank">The Carbon Account</a>, AMEE, Wattzon, <a id="f8y3" title="Onzo" href="http://www.onzo.co.uk/" target="_blank">Onzo</a>. Is Pachube working with any of these projects, and how?</p>
<p>What are the most interesting ideas in this area of changing our relationship to energy consumption emerging from Pachube?</p>
<p><strong>Usman Haque: </strong><em>The carbon footprint calculating industry is getting quite crowded&#8230;! So far I&#8217;ve particularly appreciated AMEE&#8217;s API (which is also used by the Carbon Account, I believe). So one thing we have just released is a Pachube.App &#8216;plugout&#8217; which will take a feed from an electricity meter tagged &#8220;watts&#8221; or &#8220;kilowatts&#8221; and convert it into a realtime carbon footprint calculation (driven by AMEE&#8217;s international and region- and supplier-specific carbon conversion factors). So it should be really easy to discover how many kilograms of CO2 you generated in the last 15 minutes&#8230; the last hour&#8230; the last 24 hours. Here&#8217;s a list of some of the feeds that are already making use of this: http://www.pachube.com/tag/co2_last_15_mins</em></p>
<p><strong>Tish Shute:</strong> I know the Arduino community has really taken an interest in Pachube. Who are the early adopters on Pachube? What are the most prevalent use cases you have seen so far?</p>
<p><strong>Usman Haque:</strong> <em>It has actually been more difficult than I thought it would be getting the Arduino community interested. This has partly been due to the difficulty of internet-enabling Arduino (until recently adding ethernet access has been a bit of a tough chore). Now that it&#8217;s easier to connect up Arduinos, some of the early adopters have been interfacing Arduino to Current Cost meters (alleviating the need for a computer in between); and others have been doing things like tracking temperature, humidity and light level in their homes and offices. <a id="ohbg" title="Pachube user C4C" href="http://www.gomaya.com/glyph/" target="_blank">Pachube user C4C</a> has been pretty active from early on: http://www.pachube.com/feeds/1284</em></p>
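<p>For readers curious what interfacing to a Current Cost meter involves when there <em>is</em> a computer in between: the meter streams XML over a serial cable, and a small script can lift the wattage out of it. A rough sketch using pyserial &#8211; port name, baud rate and tag layout vary by model, so treat all three as assumptions:</p>
<pre>
# Hedged sketch: read one line of Current Cost XML from a serial port and
# extract the wattage. Port, baud rate and tag names vary by meter model.
import re
import serial  # pip install pyserial

with serial.Serial("/dev/ttyUSB0", 57600, timeout=10) as port:
    line = port.readline().decode("ascii", errors="ignore")
    match = re.search(r"&lt;watts&gt;(\d+)&lt;/watts&gt;", line)
    if match:
        print("current draw:", int(match.group(1)), "W")
</pre>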
<p><strong>Tish Shute:</strong> Pachube is input-heavy at the moment &#8211; you mentioned not many actuators are plugged into Pachube yet. You said this is in part because you have focused on making the backend robust and stable before taking a lot of hits. What new directions for Pachube will emerge from enabling the dynamic relationship between sensors and actuators?</p>
<p><strong>Usman Haque:</strong> <em>This will be a crucial evolution in Pachube, when we make actuators more evident. It&#8217;s input-heavy at the moment, basically in the sense of being easy to see the inputs &#8212; you add &#8220;inputs&#8221; rather than &#8220;outputs&#8221;, so at the moment we have no idea of what&#8217;s actually plugged into the outputs unless people tell us! However, we know that there are plenty of outputs because they&#8217;re making API requests, we just don&#8217;t know what they&#8217;re being used for! Once the concepts of actuators and output environments get built into the system then I think we&#8217;ll know a lot more about how people are using the system.</em></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/currentcost.jpg"><img class="alignnone size-full wp-image-2794" title="currentcost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/currentcost.jpg" alt="currentcost" width="444" height="150" /></a></p>
<p><em>To make this easier in the meantime, we recently announced the <a id="zp60" title="Pachube.apps" href="http://apps.pachube.com/" target="_blank">Pachube.apps</a> site, where people can start contributing Pachube &#8216;plugins&#8217; and &#8216;plugouts&#8217; &#8212; things that can be used by others, without needing to code or hack, to create, generate or modulate Pachube inputs and outputs. One of these was <a id="htj9" title="Status2Pachube" href="http://apps.pachube.com/online-status.html" target="_blank">Status2Pachube</a>, which turns the online status of AIM, MSN Messenger, Skype or Yahoo! Messenger users into a Pachube input feed (to make it easy to create &#8220;remote presence&#8221; orbs and such); another was the <a id="wjey" title="CurrentCost2Pachube" href="http://community.pachube.com/?q=node/100" target="_blank">CurrentCost2Pachube</a> app to make it easy to connect up Current Cost electricity meters as input feeds; all of which can then be used by Pachube output apps, like the <a id="xki1" title="G1 Android phone Pachube viewer" href="http://www.rcreations.com/freeandroidgphoneg1applications" target="_blank">G1 Android phone Pachube viewer</a> by Pachube user N4Spd, or in the soon-to-launch <a id="pd2x" title="Pachube2SketchUp" href="http://apps.pachube.com/" target="_blank">Pachube2SketchUp</a> plugout, which will direct Pachube outputs into Google SketchUp (and by extension Google Earth) in order to generate or modulate 3D models in response to realtime environmental/sensor data. (Pachube2SketchUp is pretty much finished for Mac OS X &#8212; but we&#8217;re having difficulty getting it to work on Windows, because of its sometimes pigheaded security measures&#8230; we&#8217;ll probably release it for Mac OS X alone soon anyway.)</em></p>
<p><strong>Tish Shute:</strong> Do you and Haque Design + Research expect to go beyond just providing a platform? Will you be producing more interesting applications like Natural Fuse on Pachube? If so, can you tell me more about what you have in mind?</p>
<p><strong>Usman Haque:</strong> <em>I keep a clear distinction between my work as creative director of Pachube.com and my work as director of Haque Design + Research. Basically, while Pachube.com continue development of the platform in general, I hope that Haque Design + Research will separately continue creating pioneering interactive experiences, some using Pachube and others not. We have some things in mind, such as the idea of creating an open source building management platform, but that&#8217;s all to come later&#8230;</em></p>
<p><strong>Tish Shute:</strong> One very interesting project you have been involved in is the creation of &#8220;Urban Versioning System 1.0&#8221;, which asks &#8220;What lessons can architecture learn from software development, and more specifically, from the Free, Libre, and Open Source Software (FLOSS) movement?&#8221; Can you tell me more about this project, its goals, and its progress? How does UVS 1.0 relate to Pachube?</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/urbanvs.jpg"><img class="alignnone size-full wp-image-2795" title="urbanvs" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/urbanvs.jpg" alt="urbanvs" width="277" height="386" /></a></p>
<p><strong>Usman Haque: </strong><em>The <a id="xujn" title="Urban Versioning System" href="http://uvs.propositions.org.uk/" target="_blank">Urban Versioning System</a> was essentially an attempt to understand what lessons the &#8220;open source&#8221; approach in software might provide to the collaborative development of environments and cities. It&#8217;s a sort of quasi-license &#8212; not yet quite ready to have the status of something like Creative Commons (which nicely suits media and software based creations, but doesn&#8217;t suit hardware and physical things quite so well, beyond their design files). It&#8217;s more of a challenge, a series of constraints that might be applied. It has a link to Pachube, in the sense of encouraging conception at the environmental and systemic level &#8212; you might call it the manifesto that connects Constant&#8217;s New Babylon hypothesis to the reality of Pachube!</em></p>
<p><strong>Tish Shute:</strong> I know that you imagine Pachube scaling up to millions (billions?) of users. But scaling the realtime web has proved a challenge (e.g., the frequent surfacing of the Twitter fail whale during big events). What are the key points of Pachube&#8217;s architecture and design that will enable successful scaling?</p>
<p>How do you see Pachube itself fitting into the FLOSS movement?</p>
<p><strong>Usman Haque: </strong><em>This is a really important question. There are a couple of things we are doing. The first is constantly to assume that we have 20 to 50 times more connections than we actually have&#8230; I put a lot of pressure on Sam about this, so he&#8217;s constantly developing, thinking about and testing little things for weeks in advance, while at the same time fighting the usual daily little fires that arise <img src="http://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /> The second is that we&#8217;re trying to learn from strategies being developed by <a id="fq2y" title="Vlad Trifa" href="http://vladtrifa.com/" target="_blank">Vlad Trifa</a> and his group at the <a id="zjfb" title="Institute for Pervasive Computing at ETH Zurich" href="http://www.pc.inf.ethz.ch/" target="_blank">Institute for Pervasive Computing at ETH Zurich</a> in Switzerland regarding the development of infrastructures for millions or more entities.</em></p>
<p><em>Regarding the connection to the FLOSS movement, there is no specific technical part of Pachube that is currently open source (apart from all the example apps and tutorials, of course). However, I find the approach taken by OpenSim and Hypergrid really fascinating: I haven&#8217;t given enough thought to how it might be implemented, but I find quite appealing the idea of a multitude of open source, geographically dispersed Pachube-enabled servers with seamless transfer of data connections between them as necessary&#8230;</em></p>
<p><strong>Tish Shute: </strong>I know you have an <a id="ttbg" title="Android Viewer for Pachube" href="http://en.androidwiki.com/wiki/Pachube_Viewer" target="_blank">Android Viewer for Pachube</a>. Android is a landmark for extended/augmented reality, as <a id="x-.a" title="Wikitude" href="http://www.mobilizy.com/wikitude.php" target="_blank"><span style="color: #0000ff;"><strong>Wikitude</strong></span></a> proved, because with its compass mode Android brings together the essential ingredients for extended/augmented reality &#8211; knowing who YOU are, WHERE you are, WHAT you are doing, WHAT is around you. It seems Pachube could be a powerful backend to a number of multi-user, mobile augmented/enhanced reality Android applications. Do you have any ideas/thoughts on this?</p>
<p><strong>Usman Haque:</strong> <em>That&#8217;s right &#8212; the Android viewer was created by rcreations.com, a Pachube user &#8212; this new platform brings amazing opportunities to mobile devices. I would be really interested to see what I would consider the obvious next step: an app that becomes both a Pachube input and an output feed, one that overlays existing Pachube data with new context-based, site-specific data.</em></p>
<p><em>If I were to make a parallel to a Japanese anime, I&#8217;m fascinated by <a id="ht3b" title="Dennou Coil" href="http://en.wikipedia.org/wiki/Dennou_Coil" target="_blank">Dennou Coil</a>, an anime set 20 years in the future where children take for granted the overlay of the digital world with the physical world. BUT, I&#8217;d say that Pachube somehow relates more closely to <a id="zg78" title="Furi Kuri" href="http://www.adultswim.com/shows/flcl/index.html" target="_blank">Furi Kuri</a> in its <a id="gko_" title="pataphysical" href="http://en.wikipedia.org/wiki/%E2%80%99Pataphysics" target="_blank">pataphysical</a> stance, and because one of the main characters has a portal to another galaxy in his head&#8230;</em></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/furikuri.jpg"><img class="alignnone size-full wp-image-2793" title="furikuri" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/furikuri.jpg" alt="furikuri" width="420" height="320" /></a></p>
<p><strong>Tish Shute:</strong> Do you see Haque Design + Research picking up on the challenge of creating some cool next generation interfaces/GUIs for extended/enhanced/augmented (sorry, no perfect term) reality?</p>
<p><strong>Usman Haque:</strong> <em>Actually, no, I don&#8217;t see this as Haque Design + Research&#8217;s core focus going forward. We did some of this early on, getting involved in, for example, the development of a <a id="ty:5" title="3d smell interface" href="http://www.haque.co.uk/scentsofspace.php" target="_blank">3D smell interface</a>, and exploring the <a id="ykap" title="role of electromagnetic fields on perception of haunted spaces" href="http://www.haque.co.uk/haunt.php" target="_blank">role of electromagnetic fields in the perception of haunted spaces</a>. But these days, in the context of HDR, I&#8217;m less interested in making seamless interfaces and more interested in exploring what authentic interaction actually is (whether technologically based or not). I think it&#8217;s challenge enough for me to make a light-switch engaging, dynamic and conversant before getting to the perceptual infrastructure that goes on top of it all! HDR will also spend more time exploring <a id="p2v5" title="passive systems, phase-change materials and plants" href="http://www.haque.co.uk/climateclock.php" target="_blank">passive systems, phase-change materials and plants</a> in the context of the built environment.</em></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/scentsofspace.jpg"><img class="alignnone size-full wp-image-2796" title="scentsofspace" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/scentsofspace.jpg" alt="scentsofspace" width="550" height="197" /></a></p>
<p><strong>Tish Shute: </strong>I know there have been some interesting integrations with Pachube lately &#8211; <a href="http://www.ugotrade.com/2008/12/15/smart-planetinterview-with-andy-stanford-clark/" target="_blank">Andy Stanford-Clark mentioned using MQTT as the feed to get EML data into and out of Pachube</a> rather than HTTP. He said that&#8217;s interesting because MQTT is a much more lightweight protocol, designed for small sensors and low bandwidth / expensive (e.g. cellular) networks&#8230; and it&#8217;s also true push, i.e. data is pushed to you directly from the broker (the hub in the middle), rather than you having to ask for it constantly (polling).</p>
<p>Have you opted for MQTT over HTTP polling?</p>
<p><strong>Usman Haque:</strong> <em>We haven&#8217;t yet implemented an MQTT bridge, in part because it has proved pretty difficult. HTTP is quite important for us right now because there&#8217;s a whole universe out there using it: from your average web browser, to mobile devices, to ethernet devices, and a whole range of languages and platforms &#8212; they all work pretty much out of the box with HTTP. However, what we are exploring instead is being able to interface with <a id="a4w." title="Oliver Goh" href="http://www.eolusone.com/cms/website.php" target="_blank">Oliver Goh</a>&#8216;s Shaspa project &#8212; they&#8217;re already in the middle of solving the MQTT-Pachube bridge problem, and that should hopefully provide Pachube access to and from MQTT devices.</em></p>
<p><strong>Tish Shute:</strong> Chris Dalby just released <a id="qcm6" title="Pachube Air" href="http://www.yellowpark.net/cdalby/index.php/2009/01/10/pachube-air-the-first-release/" target="_blank">Pachube Air.</a> Have you had a chance to play with that yet?</p>
<p><strong>Usman Haque:</strong> <em>I have indeed! It&#8217;s still early days, and I know he did it partly just to test the AIR development process rather than solely to solve a desperate Pachube need, but I&#8217;m looking forward to future iterations!</em></p>
<p><strong>Tish Shute:</strong> Peter Quirk felt the Pachube web page positions Pachube as a social networking site focused on data exchange, inviting anyone with an interest in sharing environmental or other data to publish data or construct interesting uses for the data.</p>
<p>What is your response to that?</p>
<p><strong>Usman Haque:</strong> <em>Hmm&#8230; I don&#8217;t really see Pachube as a social networking site. Yes, it perhaps enables the creation of social-networking objects and environments, but in itself, in terms of the networking of people, that has barely begun yet. Certainly Pachube sits quite comfortably in facilitating mashups and visualisations and other web 2.0 based social applications, but I don&#8217;t see that as a driving force. I think it would also be a mistake to conceive of Pachube solely as storage for machine communication that then gets experienced by people; rather, it can transition quite easily to being useful solely for machine-to-machine communication.</em></p>
<p><em>In fact, with recent API releases (which, as it happens, we haven&#8217;t announced as of this writing&#8230; <img src="http://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" />), it&#8217;s now possible to use most of Pachube&#8217;s features without ever going to the website: i.e. your Arduino can create feeds, search feeds, edit feeds, delete feeds. Over time, as direct machine-to-machine communication becomes more prominent, it&#8217;s quite likely that the website itself becomes less and less important, while the backend becomes the focus of everything.</em></p>
<p><strong>Tish Shute:</strong> I am interested in some of the differences between <a href="http://sensorbase.org/" target="_blank">SensorBase.org&#8217;s project</a> and Pachube. Is SensorBase more of a data repository (environmental data in particular)?</p>
<p><strong>Usman Haque</strong>: <em>The difference I see between Pachube and SensorBase is that while (from what I know) SensorBase is mostly about &#8220;write&#8221; operations, with later &#8220;read&#8221; operations (i.e. it&#8217;s about being a data repository), Pachube is really &#8220;read-write&#8221; (i.e. it&#8217;s about being both a data repository _and_ a quasi-realtime proxy). Pachube will be able to handle potentially millions of connections, both incoming and outgoing, and as we&#8217;ll soon start storing every data point ever recorded, the data repository aspect will of course be crucial. However, the fact that it *also* facilitates one-to-many realtime broadcasts of that data (and facilitates conversion to a number of different formats: EEML, CSV and JSON now, more in the future) means that the two-way connectivity aspect is just as important.</em></p>
<p><strong>Tish Shute</strong>: I know you mentioned something that sounded a lot like Pachube facilitating buildings&#8217; and products&#8217; ability to benchmark and optimize themselves against/with each other?</p>
<p><strong>Usman Haque:</strong> <em>Further down the line, I would like to see Pachube able to help two particular processes:</em></p>
<p><em>1) to make it straightforward for developers and manufacturers to web-enable their products and services; and 2) to help building and environment designers create their buildings (by providing access to realtime site data) and also to help in the post-occupancy evaluation process &#8212; where buildings will be able to talk with each other, share information on energy consumption, resource management or occupancy rates, and even &#8220;learn&#8221; from each others&#8217; strategies. This type of approach has a parallel at the level of individuals (for example, networked electricity meter users who are able to compare and contrast their usage and strategies for conservation). I don&#8217;t want Pachube to become the application; rather, I want to make it easier for other people and companies to create such applications. So in that sense, yes, perhaps Pachube can be considered an enabler of social networking applications&#8230;!</em></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/feed/</wfw:commentRss>
		<slash:comments>64</slash:comments>
		</item>
		<item>
<title>Is it &#8220;OMG Finally&#8221; for Augmented Reality?: Interview with Robert Rice</title>
		<link>http://www.ugotrade.com/2009/01/17/is-it-%e2%80%9comg-finally%e2%80%9d-for-augmented-reality-interview-with-robert-rice/</link>
		<comments>http://www.ugotrade.com/2009/01/17/is-it-%e2%80%9comg-finally%e2%80%9d-for-augmented-reality-interview-with-robert-rice/#comments</comments>
		<pubDate>Sun, 18 Jan 2009 01:03:32 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[3D internet]]></category>
		<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[nanotechnology]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[Second Life]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[virtual economy]]></category>
		<category><![CDATA[virtual goods]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web 3D]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Web3.D]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[AR Geisha Doll]]></category>
		<category><![CDATA[compass in the android]]></category>
		<category><![CDATA[Denno Coil]]></category>
		<category><![CDATA[EEML]]></category>
		<category><![CDATA[hybrid augmented/virtual reality]]></category>
		<category><![CDATA[immersive mobile augmented reality]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[massively multiuser augmented reality]]></category>
		<category><![CDATA[minimally immersive augmented reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[Neogence]]></category>
		<category><![CDATA[next generation transparent wearable displays]]></category>
		<category><![CDATA[NYC Tech Meetup]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[socializing sensor data]]></category>
		<category><![CDATA[Unreal 3]]></category>
		<category><![CDATA[Web Alive]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=2620</guid>
<description><![CDATA[Neogence is in stealth mode with an immersive mobile augmented reality platform &#8211; &#8220;tools, sdk, and infrastructure plus some applications.&#8221; They are probably six months away from YouTubing anything, according to CEO Robert Rice. But Robert rustled up this pic for me &#8211; a Google street view of Neogence R&#38;D labs: &#8220;the patio on the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><img class="alignnone size-full wp-image-2557" title="neogencesekrithqpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/neogencesekrithqpost.jpg" alt="neogencesekrithqpost" width="450" height="412" /></p>
<p><a id="zd89" title="Neogence" href="http://www.neogence.com/sekrets.html" target="_blank">Neogence</a> is on stealth mode with an immersive mobile augmented reality platform &#8211; â€œtools, sdk, and infrastructure plus some applications.â€ They are probably six months away from YouTubing anything according to CEO, <a id="rzgp" title="Robert Rice" href="http://curiousraven.squarespace.com/about-me/" target="_blank">Robert Rice</a>.Â  But Robert rustled up this pic for me &#8211; a Google street view of Neogence R&amp;D labs: â€œthe patio on the lower left is where I do a lot of pacing and smoking my pipe and the porch and office upstairs is whereÂ  a lot ofÂ  meetings have been held.â€</p>
<p><a id="rzgp" title="Robert Rice" href="http://curiousraven.squarespace.com/about-me/" target="_blank">Robert Rice</a> (<a id="x_:i" title="@RobertRice" href="http://twitter.com/RobertRice" target="_blank">@RobertRice</a> ), CEO of <a id="zd89" title="Neogence" href="http://www.neogence.com/sekrets.html" target="_blank">Neogence</a>, recently tweeted:</p>
<p><em><strong>I&#8217;m changing my name to Robert Mobile Ubiquitous Geospatial Augmented Rice. I&#8217;m betting on radical changes in next 18 months.</strong></em></p>
<p>Although Robert&#8217;s new AR platform is still under wraps, I think you will get a good idea of what direction he is going in from this interview (full text at the end of this post). Robert is the author of &#8220;<a id="c:rr" title="MMO Evolution" href="http://books.google.com/books?id=dkZ-6C5utz8C&amp;dq=MMO+Evolution&amp;printsec=frontcover&amp;source=bn&amp;hl=en&amp;sa=X&amp;oi=book_result&amp;resnum=4&amp;ct=result" target="_blank">MMO Evolution</a>&#8221; and is a key developer and thought leader in persistent immersive environments, simulations, virtual worlds and massively multiplayer games, as well as large-scale communities and social networking.</p>
<h3>It is OMG finally, at least, for minimally immersive but truly useful AR.</h3>
<p>Since the launch of Android, a new generation of useful augmented reality applications like <strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a></strong> has been emerging.</p>
<p>After the last <a href="http://www.meetup.com/ny-tech/calendar/9466657/" target="_blank">NYC Tech Meetup</a>, my friend <a title="Nathan Freitas" href="http://openideals.com/" target="_blank">Nathan Freitas</a> (<a title="@NatDefreitas" href="http://twitter.com/natdefreitas" target="_blank">@NatDefreitas</a>), or rather Nathan Mobile Meets Social Freitas, demoed for me a cool graffiti app he has developed on Android. You leave a marker for your graffiti so other people can find/view/add their own &#8211; a nice primal experience, like pissing on the lamp post to let your pack know where you&#8217;ve been. The graffiti app also taps into a long history of NYC street culture around tagging and graffiti art. For more cool mobile projects Nathan is working on &#8211; <a href="http://blog.twittervotereport.com/" target="_blank">Vote Report</a> and data collection for mass events, a guide to pubs and nightlife in New York City, and more &#8211; see his blog, <a href="http://openideals.com/" target="_blank">Nathan&#8217;s OpenIdeals</a>. With camera, GPS, compass, and accelerometer, plus APIs on Android for temperature and light meters (no hardware yet), Nathan says Android:</p>
<p><a href="http://openideals.com/" target="_blank"><em><strong> </strong></em></a><em><strong>â€œseems to be the platform most likely to socialize the idea that sensor data could be a piece of every application.â€ </strong></em></p>
<p>As Nathan is fond of saying:</p>
<p><strong><em>The compass is a killer app enabler!</em></strong></p>
<p><a href="http://openideals.com/" target="_blank">Also see </a><a id="ixwx" title="OpenIntents" href="http://code.google.com/hosting/search?q=label:sensors" target="_blank">OpenIntents</a> for some interesting Android Sensor projects.</p>
<p><img class="alignnone size-full wp-image-2558" title="wikitudepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/wikitudepost.jpg" alt="wikitudepost" width="450" height="356" /></p>
<p><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a></strong> was one of <em><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Thomas Wrobel</a>â€™s</strong></em> two top AR milestones for 2008 (see <a id="vwuu" title="Gamesalfreso" href="http://gamesalfresco.com/" target="_blank">Gamesalfreso</a>):</p>
<p><em><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> I think. Seems the first released, useful, AR software.</strong></em></p>
<p><a href="http://gamesalfresco.com/2008/07/20/want-your-own-augmented-reality-geisha/" target="_self">AR Geisha doll</a> is also a remarkable breakout for AR &#8211; but useful, nah.</p>
<p>I asked Robert if he also thought <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> and the <a href="http://gamesalfresco.com/2008/07/20/want-your-own-augmented-reality-geisha/" target="_self">AR Geisha doll</a> were significant breakthroughs:</p>
<p><em><strong>Yes, these are among the first attempts to get away from the novelty of simply rendering a 3D object based on a marker, and making it interesting.</strong></em></p>
<p><em><strong>Remember, one of the biggest risks that AR has is being branded as &#8220;novelty&#8221;, which means &#8220;cool for five minutes but ultimately a waste of time.&#8221; I think we have a ways to go before something is truly useful, but as 2009 progresses we should start seeing some effort here. I&#8217;d guess 2010 before something really useful comes out&#8230; at least something practical.</strong></em></p>
<p><em><strong>Now, having said that, I should say that I expect entertainment and games to take the lead (as usual), although there are a few companies really trying to leverage AR and video/graphics compositing for marketing (brochures) and location based methods (kiosks, large screen projections, etc.)</strong></em></p>
<h3>So when is it &#8220;OMG finally!&#8221; for massively multiuser augmented reality?</h3>
<p><img class="alignnone size-full wp-image-2559" title="ar-guipost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/ar-guipost.jpg" alt="ar-guipost" width="450" height="360" /></p>
<p>The picture above is from <a id="kzm2" title="benjapo's portfolio" href="http://www.istockphoto.com/file_closeup/technology/computers/3919295-futuristic-computer-panel.php?id=3919295" target="_blank">benjapo&#8217;s portfolio</a> on iStockphoto &#8211; also see the <a id="cqhi" title="istock video here" href="http://www.istockphoto.com/file_closeup/technology/computers/3919295-futuristic-computer-panel.php?id=3919295" target="_blank">iStock video here</a>.</p>
<p><a id="ylpn" title="Alex Soojung-Kim Pang considers" href="http://www.endofcyberspace.com/2006/11/royal_college_o.html" target="_blank">Alex Soojung-Kim Pang</a> (who weighed in recently on the <a id="vr8o" title="twitter-baby" href="http://www.endofcyberspace.com/2008/12/twitter-baby.html" target="_blank">twitter-baby</a> debates &#8211; see my <a href="http://tishshute.com/twitter-baby-debates" target="_blank">KickBee Posterous</a> blog) challenges design assumptions for augmented reality that take as a given the userâ€™s desire for numerous private enhancements to their reality.</p>
<p>Alex points out that less will probably be more, so that enhancements do not impinge on shared experience. See his write-up of a talk he gave at the Royal College of Art, <a id="bxx1" title="&quot;and the end of my own private Shibuya.&quot;" href="http://www.endofcyberspace.com/2006/11/royal_college_o.html" target="_blank">&#8220;and the end of my own private Shibuya.&#8221;</a> Photo below by St&#233;fan, &#8220;<em><a href="http://www.flickr.com/photos/st3f4n/130889444/in/pool-84787688@N00">Karaoke in Shibuya</a></em>&#8221;</p>
<p><em><strong>Part of the pleasure of these streetscapes is precisely that they&#8217;re collectively experienced, rather than individual visions: for even a brief period, we share with other postmodern, globe-hopping flaneurs and expatriates and temporary natives the light of the ABC-Mart sign and storefront.</strong></em></p>
<p><em><strong><img class="alignnone size-full wp-image-2560" title="karaokepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/karaokepost.jpg" alt="karaokepost" width="450" height="338" /><br />
</strong></em></p>
<p>It is the collective experience &#8211; whether enhanced, augmented, virtual or real &#8211; that interests me too. This is one of the reasons I find <strong><em><a href="http://www.pachube.com/" target="_new">Pachube</a></em></strong> and the <a href="http://www.eeml.org/" target="_blank">EEML project</a> of Haque Design and Research so interesting.</p>
<p><strong><em>Extended Environments Markup Language (EEML), a protocol for sharing sensor data between remote responsive environments, both physical and virtual. It can be used to facilitate </em><em>direct connections between any two environments; it can also be used to facilitate many-to-many connections as implemented by the web service <a href="http://www.pachube.com/" target="_new">Pachube</a>, which enables people to tag and share real time sensor data from objects, devices and spaces around the world.</em></strong></p>
<h3>&#8220;Distinctions between virtual and real are as quaint and outmoded as distinctions between mind and body&#8221; (Usman Haque)</h3>
<p><img class="alignnone size-full wp-image-2603" title="chair1post1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/chair1post1.jpg" alt="chair1post1" width="150" height="150" /><img class="alignnone size-full wp-image-2602" title="remotechair-slpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/remotechair-slpost.jpg" alt="remotechair-slpost" width="150" height="150" /><img class="alignnone size-full wp-image-2604" title="chair2post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/chair2post.jpg" alt="chair2post" width="150" height="150" /></p>
<p>Usman Haque (founder of <a href="http://www.haque.co.uk/pachube.php" target="_blank">Pachube</a> and <a href="http://www.haque.co.uk/" target="_blank">Haque Design and Research</a>) points out this is an underlying premise of his work &#8211; and augmented reality (full interview coming up soon!).</p>
<p>The pictures above show the Haque Design project, <a href="http://www.haque.co.uk/remote.php" target="_blank">Remote</a>:</p>
<p><em><strong>&#8216;Remote&#8217; connects together two spaces, one in Boston the other in Second Life, and treats them as a single contiguous environment, bound together by the internet so that things that occur in one space affect things that happen in the other and vice versa &#8211; remotely controlling each other.</strong></em></p>
<p>There was a discussion on Twitter recently about how terms like Second Life, Exit Reality, and Virtual Worlds are misleading and outmoded. As Robert pointed out, we need:</p>
<p><em><strong>one word please&#8230; that sums up virtual and/or augmented reality, interactive, immersive, virtual worlds, mmorpgs, simulations, etc&#8230; also, I really don&#8217;t like the term &#8220;augmented reality&#8221; or &#8220;mixed reality&#8221;. Neither is all that great. And NO &#8220;matrix&#8221; or &#8220;metaverse&#8221; please</strong></em></p>
<p>Robert argues strongly that there is a stultification in virtual world technology &#8211; much of what we call virtual world technology was already, basically, where it is now in the mid-&#8217;90s &#8211; and that MMOGs have devolved into gameplay design &#8220;that emphasizes the single player experience and does nothing to take advantage of the potential of the massively connected internet.&#8221;</p>
<p>Robert suggested I take a cruise through a new virtual space &#8211; <a href="http://www.cooliris.com/">CoolIris</a> &#8211; to find some good pictures for this post (note the partnership between <a href="http://blog.cooliris.com/2009/01/14/cooliris-and-seesmic-streamline-video-blogging/" target="_blank">CoolIris and Seesmic to streamline video blogging</a>). I added the CoolIris plugin to Firefox, typed Augmented Reality into search, and soon I was cruising a highway of images and links. The Road Map image grabbed my attention (see below). It shows the continua that <a href="http://www.metaverseroadmap.org/" target="_blank">the Metaverse RoadMap</a> authors thought are likely to influence the ways in which the Metaverse unfolds. It is &#8220;a map of the spectrum of technologies and applications ranging from augmentation to simulation; and the spectrum ranging from intimate (identity-focused) to external (world-focused)&#8221;</p>
<p><img class="alignnone size-full wp-image-2561" title="metaverseroadmap" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/metaverseroadmap.jpg" alt="metaverseroadmap" width="452" height="427" /></p>
<p>Quite to my surprise, when I clicked out of <a href="http://www.cooliris.com/">CoolIris</a> to the source for the image, I found it had been drawn from a post I wrote in May 2007, <em><strong><a id="jv.r" title="Hybridized Digital/Physical Worlds: Where Pop and Corporate Cultures Mingle." href="../../2007/05/22/hybridized-digitalphysical-worlds-where-pop-and-corporate-cultures-mingle/" target="_blank">Hybridized Digital/Physical Worlds: Where Pop and Corporate Cultures Mingle.</a> </strong></em>My post talks about a number of hybridization experiments that were bringing together lifelogging, sensors everywhere, simulation, virtual worlds, and augmentation.</p>
<p>The striking difference from 2007 to now is that we have definitely moved on from mere experimentation. And the poles of the continua <em><strong>intimate/external, augmentation/simulation</strong></em> as expressed in the Metaverse Roadmap are now becoming entwined (note the picture above seems to be slightly different from the one used in the roadmap as <a id="vdcf" title="published here" href="http://www.metaverseroadmap.org/overview/" target="_blank">published here</a> &#8211; perhaps I had an early version?)</p>
<h3>&#8220;Augmented Reality is not just about overlaying data&#8230;&#8221; (Robert Rice)</h3>
<p><img class="alignnone size-full wp-image-2562" title="totalimmersion" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/totalimmersion.jpg" alt="totalimmersion" width="450" height="332" /></p>
<p>The screenshot above is from <a id="c7vm" title="Total Immersion video" href="http://www.t-immersion.com/en,video-gallery,36.html#">Total Immersion&#8217;s video</a> demoing augmented reality with 3D cell phones. <em>Also see the <a id="tvca" title="video of their immersive games" href="http://www.t-immersion.com/en,video-gallery,36.html#" target="_blank">video of their immersive games</a>, and FutureScope kiosks <a id="eje0" title="here" href="http://www.t-immersion.com/en,video-gallery,36.html#" target="_blank">here</a> and <a id="h-:s" title="here" href="http://www.t-immersion.com/en,video-gallery,36.html#" target="_blank">here</a>.</em></p>
<p><a id="vwuu" title="Gamesalfresco" href="http://gamesalfresco.com/">Gamesalfresco</a> noted that Will Wright delivered the best <a href="http://www.pocketgamer.co.uk/r/Various/Spore+Origins/news.asp?c=8725" target="_blank">augmented reality quote</a> of the year. When describing AR as the way of the future for games, Will Wright said:</p>
<p><em><strong>&#8220;Games could increase our awareness of our immediate environment, rather than distract us from it&#8221;.</strong></em></p>
<p>Robert points out in this interview that the term Augmented Reality itself has become associated with a very limited understanding of what &#8220;enhancing your specific reality&#8221; is really about. Robert notes:</p>
<p><em><strong>it is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.</strong></em></p>
<p><em><strong>When I talk about AR, I try to expand the definition a little bit. Usually, when you talk to someone about augmented reality, the first thing that comes to mind is overlaying 3D graphics on a video stream. I think, though, that it should more properly be any media that is specific to your location and the context of what you are doing (or want to do)&#8230; augmenting or enhancing your specific reality.</strong></em></p>
<p><strong><em>In this sense, anything that at least knows who you are (your ID, mobile phone #, etc.), where you are (GPS coord or a specific place like a cafe), and gives you relevant data, information, or media = augmented reality. Sure, you can make things more interactive or immersive, but that is the minimum.</em></strong></p>
<p><strong><em>So, in this case, yes, I think there will be networked applications in the next 18 months&#8230; mostly things that are enhanced by friends lists (you are here, your friend is over there). These will be *application specific*. My team at Neogence is already going beyond this, building a platform and infrastructure for other applications to be developed on&#8230; all networked through the same backbone. Now, in this context (the science fiction AR that we all dream about), no, I do not see anyone else trying to leap a generation or two ahead of the industry to build a massively multiuser shared AR space. Expect to see things like multi-user AR games, virtual pets, kiosk marketing, magic book, &#8220;gee whiz&#8221; presentations (tradeshow booths, entertainment parks, etc.), and so forth.</em></strong></p>
<h3>Goggles Are Not The Secret Sauce&#8230;</h3>
<p><strong><em><img class="alignnone size-full wp-image-2563" title="ar-catpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/ar-catpost.jpg" alt="ar-catpost" width="137" height="150" /><img class="alignnone size-full wp-image-2564" title="goggles-avatarpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/goggles-avatarpost.jpg" alt="goggles-avatarpost" width="150" height="150" /><br />
</em></strong></p>
<p>AR cat (left) and Robert Rice (right)</p>
<p>In the popular imagination, the term Augmented Reality has come to mean 3D graphics projected over markers, forever waiting for the advent of &#8220;wicked next generation transparent wearable displays&#8221; &#8211; nirvana for augmented reality. While such displays may be nirvana for AR (and they could be with us in less than twenty-four months), goggles are not the &#8220;secret sauce&#8221; of AR, as Robert points out.</p>
<p><em><strong>All the glasses are is another display device. At the end of the day, it doesn&#8217;t matter if you are looking at an LCD monitor, an iPhone, a head mounted display, or a pair of wicked next generation transparent wearable displays that magically draw directly on your retinas.</strong></em></p>
<p><em><strong>The real tricky stuff is what happens on the backend&#8230; making it all persistent, massively multiuser, intelligent, interoperable, realistic, etc. etc.</strong></em></p>
<p><em><strong><img class="alignnone size-full wp-image-2585" title="vuzix" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/vuzix.jpg" alt="vuzix" width="450" height="318" /><br />
</strong></em></p>
<p>There has been quite <a href="http://www.realwire.com/release_detail.asp?ReleaseID=10934" target="_blank">a buzz going around</a> about the new <a href="http://www.vuzix.com/iwear/products_wrap920av.html" target="_blank">Vuzix eyewear</a>, and recently Robert talked with Vuzix and checked the Wrap 920AV eyewear out:</p>
<p><em><strong>Vuzix is not alone in pursuing the ultimate in hardware, at least as far as wearable displays. However, I think they are much farther along than the rest of the pack in vision, roadmap, and execution. They have put together a team that has a sense of urgency and ambition that will blow the industry away. After talking to them, I got the feeling that they really know what they are doing and there is a lot of mind blowing stuff in their pipeline. I&#8217;m sure they are one of the few companies that really gets it and has a clear vision of the future. Definitely my first choice to work with.</strong></em></p>
<h3>Hybrid Augmented/Virtual Reality</h3>
<p><img class="alignnone size-full wp-image-2566" title="qa_2post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/qa_2post.jpg" alt="qa_2post" width="450" height="347" /></p>
<p><a id="va0_" title="Cory Ondrejka posted" href="http://ondrejka.blogspot.com/2009/01/anybots-telepresence-robot.html" target="_blank">Cory Ondrejka posted</a> this picture of the anybots telepresence robot and â€œcongrats to <a href="http://www.tlb.org/">Trevor Blackwell</a> and the rest of the <a href="http://anybots.com/">Anybots</a> team on the launch of <a href="http://anybots.com/abouttherobots.html">QA at CES</a>.â€Â  Cory (one of the founders and former CTO of Second Life) also made some predictions for Virtual Worlds, some optimistic and some less so, including â€œthe increasing need to be able to diversify the Second Life product offering to begin truly rebuilding the code base.â€</p>
<p>Robert is unabashedly irritated with the state of play in Virtual Worlds and MMOGs:</p>
<p><em><strong>Unless both industries (Virtual Worlds and MMOGs) have some serious upheaval or radical new approaches, they will quickly be eclipsed by AR, which will eventually evolve into something hybrid&#8230; AR/VR depending on your level of access and hardware.</strong></em></p>
<p><em><strong>I&#8217;d like to see someone grab an engine like Offset, Crytek, HERO, or Unreal 3, and smack on a fat MMO server infrastructure (Eve or Bigworld)&#8230; toss in the right tools, and you would see a revolution and renaissance occur at the same time in the virtual world space. All the puzzle pieces are there, just no one is putting them together the right way.</strong></em></p>
<p>I did just find out that Nortel&#8217;s <a id="qkxv" title="WebAlive is powered by the Unreal 3 engine" href="http://www2.nortel.com/go/news_detail.jsp?cat_id=-8055&amp;oid=100251105&amp;locale=en-US" target="_blank">WebAlive is powered by the Unreal 3 engine</a>. You <a id="xqbw" title="can try WebAlive" href="http://www.lenovo.com/elounge" target="_blank">can try WebAlive</a> out here.</p>
<p>Robert points out how rare it has become to see people really push virtual worlds technology and MMOGs in entirely new directions. Although, of course, there are exceptions. I managed to engage some interest from Robert in the possibilities the <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">open source modular architecture of OpenSim</a> opens up, and <a id="vx_i" title="the augmented reality experiments from Georgia Tech with Second Life" href="http://arsecondlife.gvu.gatech.edu/" target="_blank">the augmented reality experiments from Georgia Tech with Second Life</a> (screenshot below) got praise from Robert for trying to do something new. (Georgia Tech has also put out a <a id="kfzj" title="virtual pet app for the iPhone" href="http://uk.youtube.com/watch?v=_0bitKDKdg0" target="_blank">virtual pet app for the iPhone</a>.)</p>
<p><img class="alignnone size-full wp-image-2567" title="picture-4" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/picture-4.png" alt="picture-4" width="321" height="245" /></p>
<p>But while Robert clearly has zero patience for virtual world technology, which he sees as stuck in the mid-nineties, he notes:</p>
<p><em><strong>the innovative and wonderful stuff about SL isn&#8217;t SL, it is what people are doing and creating on their own with terrible tools *IN* SL</strong></em> [Second Life].</p>
<p>The immersive mobile augmented reality platform Robert is building, he hopes, will generate this kind of user creativity but with 21st century tools.</p>
<h3>So is it &#8220;OMG&#8221; finally for the Augmented Reality we have dreamed about?</h3>
<p>According to Robert:</p>
<p><em><strong>It really boils down to a markerless solution and a good application.</strong></em></p>
<p>In the interview below we cover a number of topics, including business models for Augmented Reality &#8211; e.g., how business models based on micro-transactions and virtual goods will translate to AR.</p>
<p>Many of the challenges virtual worlds face in becoming mainstream are similar to the challenges AR must overcome. Robert discusses these, including the interface/GUI that is a critical element for AR, solving the riddle of one world or many, patent wars in virtual worlds and augmented reality, the role of augmented reality in the future of sustainable computing, and what interoperability is about.</p>
<h3>The Back Story for AR/VR&#8230;</h3>
<p>In case you want to get up to speed on the required background reading for augmented reality, this is Robert&#8217;s required reading list, and Dennou Coil is an absolute <strong>must</strong> see (feel free to add to this list in the comments, please).</p>
<p>&#8220;If you want to see the things that have inspired our vision of what we want to build, check out:</p>
<p>* Dream Park by Larry Niven and Steven Barnes<br />
* Rainbows End by Vernor Vinge<br />
* Spook Country by William Gibson<br />
* Halting State by Charles Stross<br />
* The Diamond Age by Neal Stephenson<br />
* Donnerjack by Roger Zelazny and Jane Lindskold<br />
* Otherland by Tad Williams<br />
* Neuromancer by William Gibson<br />
* Idoru by William Gibson<br />
* Cryptonomicon by Neal Stephenson</p>
<p>and watch the whole anime of Dennou Coil (subbed NOT dubbed!)&#8221;</p>
<p><img class="alignnone size-full wp-image-2568" title="dennoucoil" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/01/dennoucoil.jpg" alt="dennoucoil" width="450" height="256" /></p>
<p>Screenshot from Dennou Coil from <a id="yic5" title="Concrete Badger" href="http://www.concretebadger.net/blog/2007/12/17/dennou-coil-full-series-2007-in-12-day-4/" target="_blank">Concrete Badger</a>.</p>
<h3>Interview With Robert Rice</h3>
<p><strong>Tish Shute:</strong> I am glad to hear that you are working on this [an immersive mobile augmented reality platform]!</p>
<p><strong>Robert Rice:</strong> We switched gears from MMO stuff about a year ago and we are finally getting some traction. It is very hard doing anything in this economy right now, but we found an opportunity to take AR to a new level beyond what you see on YouTube. AR is still too &#8220;cute&#8221; and novelty. We don&#8217;t want to play around.</p>
<p><strong>Tish Shute:</strong> I like Wikitude &#8217;cos it even manages to do something useful!</p>
<p><strong>Robert Rice:</strong> Yeah, useful = traction. Now that we are getting near a prototype, we are starting to get a lot of interest, even though we are still technically way under the radar.</p>
<p><strong>Tish Shute:</strong> r u funded?</p>
<p><strong>Robert Rice:</strong> privately funded, some revenues from an early license, and ongoing discussions with several institutional investors. So, we have some funding, but nothing spectacular just yet.</p>
<p><strong>Tish Shute:</strong> are you just developing an AR platform?</p>
<p><strong>Robert Rice:</strong> hrm, sort of, but not just that. By platform I mean tools, sdk, and infrastructure plus some applications. The idea is to build something that facilitates everyone else making cool things and useful applications for different industries/sectors.</p>
<p><strong>Tish Shute:</strong> Yes, that is the cool thing to do, but isn&#8217;t that hard to fund!</p>
<p><strong>Robert Rice:</strong> (grins) Well, that depends on the business model. We&#8217;ve got that figured out. I&#8217;d be absolutely happy if everyone and their brother were making applications on our stuff; that gives us an edge on market penetration/saturation. There are plenty of examples that prove the model. If you give people free and easy to use tools, they will run with it. ARToolKit, for example, has tons of people making nifty things and posting videos on YouTube, which has pushed it to the forefront as THE AR middleware to use right now. Or heck, look at YouTube&#8217;s free service, and how they dominate video sharing. Sure there will be a lot of &#8220;noise&#8221;, but there will also be a lot of &#8220;signal&#8221; that will rise to the top; facilitating and enabling is creating value in its own right.</p>
<p><strong>Tish Shute:</strong> But how do you expect to monetize?</p>
<p><strong>Robert Rice:</strong> There are a good half a dozen ways to monetize AR or an AR platform.</p>
<p><strong>Tish Shute:</strong> What are your top 3?</p>
<p><strong>Robert Rice:</strong> hrm, microtransactions, localized mobile advertising, and enterprise solutions (visualization)</p>
<p><strong>Tish Shute:</strong> Do you think the consumer market will give the lead?</p>
<p><strong>Robert Rice:</strong> I&#8217;m not sure. We are getting people from academia, intelligence, defense, border security, and some corporate types knocking on our door already, and pretty aggressively. It may be that those sectors push AR before consumer entertainment really kicks off.</p>
<p>But going back to a discussion we had earlier &#8211; yes, working with &#8220;no markers&#8221; is a big deal.</p>
<p><strong>Tish Shute:</strong> Can you talk about what you are doing there or is it still under wraps?</p>
<p><strong>Robert Rice:</strong> I can say that between some university tech transfer and some of our own proprietary stuff, we are using some fairly common visual tracking technology. If you are really plugged into the AR scene, you will know there are probably half a dozen visual tracking methods out there. We just looked for the best one, licensed it for commercial use, and then started working our magic. This is a very small piece of the overall effort, but worth noting.</p>
<p>The downside of working with university tech is that it is usually based on research, incomplete, and not wrapped up in a nice commercial package; on the upside, it can be a good start to build on.</p>
<p><strong>Tish Shute:</strong> As you know, I am very interested in &#8220;technology that matters&#8221; &#8211; in particular, tech that can help us accomplish the urgent goal of sustainable living.</p>
<p><strong>Robert Rice:</strong> oh, I&#8217;m pretty keen on sustainable living as well&#8230; after I sell off a few companies and have money of my own, I&#8217;m going to get into arcologies&#8230;</p>
<p>(Robert grins)</p>
<p>The interesting thing with the visual stuff combined with our other tech is that we can make things multiuser, persistent, dynamic, and mobile.<br />
The markers (fiducials) are really, really limiting outside of basic applications. You can&#8217;t really plaster everyone and everything with a marker. And they are, by nature, static (even if they are animated or whatever).</p>
<p>Also&#8230; our stuff works indoors and outdoors, even without a GPS connection.</p>
<p>(Robert grins)</p>
<p><strong>Tish Shute:</strong> Now that does sound interesting!</p>
<p><strong>Robert Rice:</strong> Yeah, with visual, you don&#8217;t need a compass or accelerometers either. Less hardware : )</p>
<p>You start with wifi triangulation or a GPS coordinate to get a &#8220;brute&#8221; location, and then you use the visual stuff for down-to-the-meter accuracy; that, by nature, gives you your orientation and positioning.</p>
<p><strong>Tish Shute: </strong>Wow this is beginning to sound very interesting!</p>
<p><strong>Robert Rice:</strong> Once you have that, it doesn&#8217;t matter where you go; it continues to track and continually refines areas you have been before. We&#8217;ve spent the last year figuring all this out. There are so many problems and obstacles ahead for anyone trying to do what we are, but we have already discovered solutions.</p>
<p>Oh, and visual tracking = gesture based interfaces too. That&#8217;s going to take some work, but it&#8217;s doable. The real pain in the ass there isn&#8217;t the actual tracking, it is the interface design.</p>
<p>That&#8217;s something that almost every AR company, venture, and research program is missing out on entirely. They are so focused on making cute things with markers that they are missing the larger problems of AR spam, interface, iconography, GUI, metaphor, interoperability, privacy, identity.</p>
<p><strong>Tish Shute:</strong> So how are you dealing with all that!!</p>
<p><strong>Robert Rice:</strong> We took the backwards approach of trying to think where we want things to be in ten years (and we read all the cool books&#8230; Vinge, Stephenson, Gibson, etc.) and then we spent time trying to think of what the potential problems are&#8230; like AR spam. It&#8217;s bad enough when a giant penis flies by in Second Life; we don&#8217;t want that to happen in a global wireless AR platform.</p>
<p><strong>Tish Shute: </strong>Do you have a prototype yet?</p>
<p><strong>Robert Rice:</strong> hrm, 6 months away from YouTubing something. The problem has been slow funding, which equals slow development. We also don&#8217;t want to show our cards too soon&#8230; too many potential competitors out there.</p>
<p>(Robert grins)</p>
<p><strong>Tish Shute:</strong> When you say microtransactions, what is the business potential there?</p>
<p><strong>Robert Rice: </strong>hrm, last year I think $1.5B was spent on virtual items. That&#8217;s games and virtual worlds. That should hit $5B in a couple of years. That&#8217;s basically people buying and selling things like WoW gold or items in SL or whatever. Microtransactions is basically the same thing, but in AR space.</p>
<p>Why couldn&#8217;t a 3D artist make a wicked animated 3D dragon, and then sell it to someone else? With AR, you could sit it on your shoulder. With a good scripting engine, you could train it to do stuff. That&#8217;s what I want to enable.</p>
<p>tools + sdk + platform = enabling people to make and create. Add in a commerce level (microtransactions) and voil&#224;.</p>
<p><strong>Tish Shute:</strong> At the moment all of these virtual goods are very platform specific; is that a problem for you?</p>
<p><strong>Robert Rice:</strong> Not at all. This is at a higher level. You have to switch mental models when you talk about what AR could or should be. For example, let&#8217;s contrast the web and virtual worlds. For every virtual world you go to, you have to download a whole new client. Imagine if that model was applied to the web&#8230; you would need a brand new browser for every website you went to. That is just so&#8230; wrong.</p>
<p>It&#8217;s the same thing for AR&#8230; people are thinking about it with the same mental and business models and development philosophies as virtual worlds or the web. There are some things and aspects that work fine, but not everything.</p>
<p>Virtual worlds are, by nature, necessarily different, and walled gardens. The idea of 100% open and interoperable virtual worlds is a red herring&#8230; it sounds good, but in practice it is a really dumb idea.</p>
<p><strong>Tish Shute: </strong>I was wondering if you had a way to leverage all the 3D content already created, &#8217;cos that would jump start things in AR, wouldn&#8217;t it?</p>
<p><strong>Robert Rice:</strong> Oh yeah, that&#8217;s easy. They all use the same polygons. Any virtual item in any game or virtual world is likely created with 3D Studio or Maya or something similar, and would be easy to convert and use.</p>
<p><strong>Tish Shute:</strong> So people could bring their WoW weapons into your system?</p>
<p><strong>Robert Rice:</strong> Not legally, but sure. It’s just a 3D model with a texture. It doesn’t matter if you use Corel Draw or Photoshop or Paint Shop Pro… or one screwdriver or another. Part of my team’s advantage is that we are all experienced in MMORPG and virtual world design and development. We know the tools, the tech, and what works and what doesn’t.</p>
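<p><em>[Editor’s aside: to make “it’s just a 3D model with a texture” concrete, here is a minimal Python sketch that reads the vertex and face lines of a Wavefront OBJ file, the plain-text interchange format most 3D tools can export. The file name is hypothetical; this is only an illustration of the idea, not anyone’s actual pipeline.]</em></p>
<pre><code># Minimal sketch: a virtual item is "just polygons with a texture".
# Parses vertex (v) and face (f) lines from a Wavefront OBJ export.
def load_obj(path):
    vertices, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":          # geometric vertex: v x y z
                vertices.append(tuple(float(x) for x in parts[1:4]))
            elif parts[0] == "f":        # face: f v1 v2 v3 (1-based indices)
                faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

# "dragon.obj" stands in for any exported model.
verts, faces = load_obj("dragon.obj")
print(len(verts), "vertices,", len(faces), "faces")
</code></pre>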
<p><strong>Tish Shute:</strong> But some of the 3D content created in the social worlds is what has the most value to people.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Right, and that can be exported out easily.</p>
<p><strong>Tish Shute:</strong> But back to “real” life applications. Is your platform really markerless?</p>
<p><strong>Robert Rice:</strong> Yes. A marker = a printed icon or glyph, also known as a fiducial.</p>
<p><strong>Tish Shute:</strong> But you must have some marker?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> hrm, more accurately, you need a point of reference.</p>
<p>Visual tracking has been around for more than a decade. Lots of work has been done for robotics and other sectors.</p>
<p><strong>Tish Shute:</strong> But isn’t the specificity of reference in terms of RL applications a vital key, for example, for a database of things?</p>
<p><strong>Robert Rice:</strong> *grins* That is a different problem… tracking, registration, mapping, positioning, etc. That question has to do with mapping, which is related to visual tracking but not the same thing. We have a rather unique approach to some of this that I can’t discuss (patent pending).</p>
<p><strong>Tish Shute:</strong> But for example, to create an augmented natural history of food &#8211; say I want to point at the slab of meat on my plate and know where that cow came from, what feedlot it was raised on, how it was treated, etc.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>That is not possible without ubiquitous nanotechnology. Shall I explain?</p>
<p><strong>Tish Shute:</strong> Yes please!</p>
<p><strong>Robert Rice:</strong> OK, let’s step back a minute and turn that burger back into a cow… The first problem (of this particular situation) is differentiating one cow from another. Since most cows look alike, you can either attempt to discriminate visually (coat patterns) or use a much simpler option, like giving each cow an RFID chip in its bell or hoof.</p>
<p>Now, most people would try to figure out how to jam all sorts of info into the RFID chip, which sounds like a good idea but isn’t. The trick would be to simply use the RFID to store a unique identifier, which is then linked to a database elsewhere.</p>
<p>That database should be continually updated with whatever relevant information you need, so as you get close with your AR laptop, wearable displays, or embedded brain chip, you get the identifier broadcast, then you get the info downloaded to you, and it “sticks” to the cow with the generic visual tracking (object following; even a simple bounding box is sufficient for a slow-moving cow).</p>
<p>So, up to that point, you can get tons of information about that specific cow, or that cow population (remember, AR is not just about overlaying data… it is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.). Tie in data visualization and some farmer tools and all sorts of other things happen. Now, let’s move the timeline ahead a bit.</p>
<p>The butcher gets the cow and does his handiwork… Because we know all the info about the cow, all of the meat can be properly labeled and marked, ideally with a UPC code or a unique glyph (somewhat problematic depending on how many unique glyphs you can create). So, while you are in the grocery store, you can access the relevant shopping data… age of cow, state of origin, type of feed, how many spots, how much body fat, which butcher, whatever; not because of what is inside the package, but because of the package itself.</p>
<p>Getting back to your hamburger, the problem is that it is a burger… there is nothing to distinguish that burger from another one at the table… unless you stuck an RFID chip in it, or splattered it with ink and a unique glyph, or maybe used a special one-of-a-kind plate.</p>
<p>However, a properly designed AR system could say, “Hey! That’s a hamburger! And I know I am at Fat Daddy’s Burger Joint in Raleigh, North Carolina on Glenwood Avenue, and I know that they cook their burgers this particular way, and their meat supplier is those guys over there, and they usually get their cow meat from a farm out in Utah.”</p>
<p>With ubiquitous nanomites or whatever, it’s not that far out to consider edible nanos that are in the meat, so a slab of meat can tell you about itself and broadcast that to the general public.</p>
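<p><em>[Editor’s aside: the core pattern in the cow example (store only an identifier on the tag, keep the live data in a database) can be sketched in a few lines of Python. All of the identifiers and fields below are invented for illustration.]</em></p>
<pre><code># The RFID chip carries only a unique ID; everything else lives in a
# continually updated record elsewhere.
CATTLE_DB = {
    "COW-0042": {
        "breed": "Angus",
        "farm": "Example Farm, Utah",      # hypothetical data
        "feed": "grass",
        "last_checkup": "2009-01-10",
    },
}

def on_rfid_read(tag_id):
    """Resolve a broadcast identifier to the record the AR layer renders."""
    return CATTLE_DB.get(tag_id, {"error": "unknown tag"})

# The AR client would then pin this record to the visually tracked cow.
print(on_rfid_read("COW-0042"))
</code></pre>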
<p><strong>Tish Shute:</strong> What useful scenarios can we create without the nanomites?</p>
<p><strong>Robert Rice:</strong> If it weren’t a burger or a consumable organic, the scenario changes.</p>
<p><strong>Tish Shute: </strong>What is the time scale on nanomites?</p>
<p><strong>Robert Rice:</strong> Ehhhhhhh, 20 years minimum if we are lucky. They sound good on paper, but there is a whole book’s worth of problems explaining why they are so far off… as consumer-grade, all-over-the-place type of stuff.</p>
<p><strong>Tish Shute:</strong> Did you see the Nokia Home Control center?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Yes, I saw the Nokia stuff.</p>
<p>As for sensors, like security systems, temperature control, etc., they all become “sources of data” that an AR system can visualize. So yes, that’s easily doable. You could do that in a short period of time with some half-decent engineers.</p>
<p>The trick of what Nokia is doing is aggregating sensor data from a building/home/facility, mashing it together, and sending the mobile device alerts and data visualizations. Conceptually it’s rather simple, but no one has done it right or well yet.</p>
<p>It wouldn’t surprise me if Nokia pulled it off.</p>
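<p><em>[Editor’s aside: the aggregate-and-alert loop described here is simple enough to sketch: pull readings from several home sensors, mash them into one snapshot, and raise an alert when a value crosses a threshold. The sensor names and limits below are invented.]</em></p>
<pre><code>from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str
    value: float

def snapshot(readings):
    """Mash per-sensor readings into one dict the mobile/AR UI can render."""
    return {r.sensor: r.value for r in readings}

def alerts(state, limits):
    """Yield (sensor, value) pairs that exceed their configured limit."""
    for sensor, limit in limits.items():
        if state.get(sensor, 0.0) > limit:
            yield sensor, state[sensor]

state = snapshot([Reading("basement_temp_c", 31.0), Reading("co_ppm", 4.0)])
for sensor, value in alerts(state, {"basement_temp_c": 28.0, "co_ppm": 35.0}):
    print("ALERT:", sensor, "=", value)   # stand-in for a push notification
</code></pre>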
<p><strong>Tish Shute:</strong> Yes, and if they do, and someone does an AR interface to it, would that be an inflection point for AR?</p>
<p><strong>Robert Rice:</strong> In a roundabout way, yes. You could get data directly from your house, or get it through your mobile device, and in either case use the AR for visualization and control.</p>
<p>The interface/GUI is a critical element for AR. That is one of the areas where it, as an industry, risks doing a bad job and turning into just a fad or another novelty like VR. Virtual worlds have been struggling with that for a while, but MMORPGs have had the effect of extending their life cycle.</p>
<p><strong>Tish Shute: </strong>Yes VWs have not solved the interface problem.</p>
<p><strong>Robert Rice:</strong> The interface is one of their problems, yes. Most virtual worlds are stuck in 1996/98.</p>
<p><strong>Tish Shute:</strong> If AR is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc., it seems that it is the ideal interface for home control?</p>
<p><strong>Robert Rice:</strong> Well, for home control, you must know:</p>
<p>1) Who am I? Am I authorized to know this information? Am I a guest?</p>
<p>2) Where am I? Is this my house? Or someone else’s?</p>
<p>3) What am I doing? Do I want to make all the doors lock? Turn on or off lights? Open the garage door? Trigger the security alarm?</p>
<p>So the same questions apply (a rough sketch of the idea follows).</p>
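<p><em>[Editor’s aside: a minimal sketch of that who/where/what gating, assuming a simple rule table. The users, places, and actions are hypothetical.]</em></p>
<pre><code># Each rule answers all three questions at once:
# 1) who am I, 2) where am I, 3) what am I trying to do.
AUTHORIZED = {
    ("alice", "home-123", "unlock_doors"),
    ("alice", "home-123", "lights_on"),
    ("guest", "home-123", "lights_on"),
}

def allowed(user, place, action):
    return (user, place, action) in AUTHORIZED

print(allowed("alice", "home-123", "unlock_doors"))  # True: the owner, at home
print(allowed("guest", "home-123", "unlock_doors"))  # False: guests can't unlock
</code></pre>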
<p>I’d say that all virtual worlds are stuck in the mid-90s. They are at least a decade behind the game worlds… in technology, design, implementation, architecture, etc., etc. In my opinion, things like Second Life are shameful in how they are presented as state of the art, innovative, groundbreaking, new, wonderful, and world-changing.</p>
<p>But that’s another topic of conversation :)</p>
<p><strong>Tish Shute:</strong> Well, for me the contribution of VWs is presence-enabled real-time interaction with the application (as a 3D info machine) and context with other people.</p>
<p><strong>Robert Rice:</strong> Oh, there is no doubt that they are greatly useful and have a phenomenal amount of potential.</p>
<p>They *could* be all those things I just said that SL isn’t… the problem is that they are either just existing, or they are meandering around without any real focus or direction. They aren’t evolving.</p>
<p>Even MMORPGs are losing their way and beginning to stagnate terribly.</p>
<p><strong>Tish Shute:</strong> Yes, I agree.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>But, AR has the potential to change a lot of things.</p>
<p>I’m sure you have seen <a id="n_22" title="the yellowbook commercials" href="http://www.youtube.com/watch?v=zdPFBTQpk-U" target="_blank">the yellowbook commercials</a>? The technologies you are seeing there are doable in, hrm, a year or less maybe. The tricky part is the interactivity and AI… that is, the content. Everything else isn’t a problem. The avatar there could be photorealistic or stylized like a WoW character.</p>
<p>You could do that to some degree with markers for registration, but dynamically changing the content linked to those markers is a little weird.</p>
<p>(By the way, for the record, I like markers just fine; I just don’t think they are useful for real-world mobile applications.)</p>
<p>I also think that the guys who want to dust the planet with miniature RFID chips are on crack and are going about it the wrong way.</p>
<p><strong>Tish Shute:</strong> A high level of interactivity is hard though, isn’t it? Even in VWs it is very limited.</p>
<p><strong>Robert Rice:</strong> It depends on whether you can track what the user is doing and interpret that properly. “Interactive” is also a very loose term.</p>
<p>Clicking a button and making a light blink could be considered interactive.</p>
<p><strong>Tish Shute:</strong> In VWs a high level of interactivity would be to wield a virtual hammer and have a real nail go in! Is physics part of the problem?</p>
<p><strong>Robert Rice:</strong> Physics isn’t difficult; there is plenty of middleware out there for it. The problem with that isn’t so much the physics as it is the scale and purpose.</p>
<p><strong>Tish Shute:</strong> Well, for robotics?</p>
<p><strong>Robert Rice:</strong> That gets into a conversation about meshes, textures, and volumetric collision detection and stuff.</p>
<p><strong>Tish Shute:</strong> virtual robotics?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> You mean teleremote/telepresence of real robots?</p>
<p><strong>Tish Shute: </strong>yes!</p>
<p><strong>Robert Rice:</strong> Ah, for that you need some tactile feedback and some other stuff &#8211; doable, but insanely difficult. That’s why you don’t see a whole lot of remote-controlled surgery robots all over the place.</p>
<p>They do exist…</p>
<p><strong>Tish Shute:</strong> Will AR contribute to sustainable living by freeing us from some of our energy-hogging devices?</p>
<p><strong>Robert Rice:</strong> AR will ultimately encourage energy saving and recycling. Where did I leave a light on? Where is the nearest trash can? What is the UV index outside today?</p>
<p>Yes, computers are energy hogs, but as we start seeing larger SSD drives, more efficient CPUs (even if the number of cores increases in multiples), and so on, the power will go down.</p>
<p>Also, think about this… wearable displays potentially use less energy than LCD monitors on your desk.</p>
<p><strong>Tish Shute:</strong> Yes, I should pick the brains of my Intel chums on energy saving!</p>
<p><strong>Robert Rice:</strong> Getting rid of the monitor and switching to solid-state drives will save an assload of power. Yes, I said assload.</p>
<p>Tell your Intel chums to quit screwing around with single-core mobile CPUs. We need multiple cores that are smaller, faster, and use less power.</p>
<p><strong>Tish Shute:</strong> Is AR the sustainable future of VWs and MMOGs?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>The fun stuff will happen when they are both integrated in some fashion.</p>
<p><strong>Tish Shute:</strong> So perhaps this is why the Georgia guys are trying to combine AR and SL (<a id="boum" title="see video here" href="http://uk.youtube.com/watch?v=O2i-W9ncV_0&amp;feature=related" target="_blank">see video here</a>).</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> That first video was pretty damn cool. It just pains me that they are using SL for it. And omg, all those markers on the table.</p>
<p>Although, I couldn’t care less about seeing my SL avatar on my coffee table. I would rather see an avatar representing ME in the real world, moving around in a virtual world that is a “to scale” replica of the real world. That is MUCH more interesting and innovative.</p>
<p>But even if I don’t like where they are going, or that they are using SL, the important thing is that they are doing something and forging ahead. I have a massive amount of respect for anyone, private, government, or academic, that is doing that.</p>
<p>And yes, the door (or window, or looking glass) has to work both ways for maximum potential; at least, that’s what I’d like to see. They don’t *have* to, but it would be rather cool.</p>
<p>And going back to sustainability, AR has the potential to make monitors generally obsolete, laptops too. That’s a lot of power-hungry devices with all sorts of metals and batteries inside.</p>
<p>But, even if the tech was absolutely crazy awesome right this minute, it would take a little while for consumer adoption.</p>
<p><strong>Tish Shute:</strong> But AR unleashes the mobile device?</p>
<p><strong>Robert Rice:</strong> Yes, AR is going to be built on powerful mobile devices for the near future, and eventually on embedded computers in clothing and whatnot. But that is a ways off.</p>
<p>Entertainment is going to be the first huge driver.</p>
<p><strong>Tish Shute:</strong> So people will get used to having a pet virtual dragon on their shoulder first?</p>
<p><strong>Robert Rice:</strong> Yes, a virtual dragon is way cool, easy tech for games, and can eventually be leveraged into a smart agent, which becomes a practical application… agent-based contextual search, etc. Yes, entertainment will also drive people to get used to the tech.</p>
<p><strong>Tish Shute:</strong> Oh, thanks for turning me on to <a id="kzbv" title="gamesalfresco" href="http://gamesalfresco.com/" target="_blank">gamesalfresco</a>!</p>
<p><strong>Robert Rice:</strong> I’ve noticed that the good stuff usually gets linked to there. They don’t list my blog, but that’s what I get for staying under the radar and not posting often. But anyway, gamesalfresco is the first place I send people who need a crash course in AR. Great site, great owner.</p>
<p><strong>Tish Shute:</strong> So are you in agreement with Thomas Wrobel’s positioning of <em><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a></strong></em> and the <em><strong><a href="http://gamesalfresco.com/2008/07/20/want-your-own-augmented-reality-geisha/" target="_self">AR Geisha doll</a></strong></em> as significant milestones for AR?</p>
<p><strong>Robert Rice:</strong> Yes, these are among the first attempts to get away from the novelty of simply rendering a 3D object based on a marker, and to make it interesting.</p>
<p>Remember, one of the biggest risks that AR has is being branded as “novelty,” which means “cool for five minutes but ultimately a waste of time.” I think we have a ways to go before something is truly useful, but as 2009 progresses we should start seeing some effort here. I’d guess 2010 before something really useful comes out… at least something practical.</p>
<p>Now, having said that, I should say that I expect entertainment and games to take the lead (as usual), although there are a few companies really trying to leverage AR and video/graphics compositing for marketing (brochures) and location-based methods (kiosks, large-screen projections, etc.).</p>
<p><strong>Tish Shute:</strong> Many people would say Snow Crash (metaverse) is now and Halting State (AR) is ten years from now. But you are seeing a development timeline for some popular AR apps in the next 18 months?</p>
<p><strong>Robert Rice:</strong> Anyone who says Snow Crash is -now- is living in a box. Virtual worlds, virtual reality, and immersive tech in general stopped innovating in the mid-90s. I’m continually flabbergasted at the number of people who think that things like Second Life are state-of-the-art or innovative. You might as well try to market a Walkman as cutting edge, even though we have iPods out there.</p>
<p>I’d like to see someone grab an engine like Offset, Crytek’s, Hero, or Unreal 3 and smack on a fat MMO server infrastructure (EVE or BigWorld)… toss in the right tools, and you would see a revolution and a renaissance occur at the same time in the virtual world space. All the puzzle pieces are there; just no one is putting them together the right way.</p>
<p><strong>Tish Shute:</strong> Why doesn’t anyone do that?</p>
<p><strong>Robert Rice:</strong> It’s not cheap; people will only fund a copy of something that already exists; people fear change and innovation; etc. The list goes on. The right money goes to the wrong people all the time.</p>
<p>Alternatively stated, there is a lot of “right idea, wrong implementation.”</p>
<p>MMORPGs carried the torch and have made huge strides on the technology front, but have devolved in design. More often than not the gameplay emphasizes the single-player experience and does nothing to take advantage of the potential of the massively connected internet.</p>
<p>Unless both industries have some serious upheaval or radical new approaches, they will quickly be eclipsed by AR, which will eventually evolve into something hybrid… AR/VR depending on your level of access and hardware.</p>
<p>But yes, I’d say that the next 18 months are going to be very interesting, with a lot of money being thrown around, new ventures, and plenty of content/applications. I expect most of this will be centered on single-user AR experienced through a mobile device with a screen (iPhone, Android, etc.). I expect that there will be a significant boost after Vuzix releases some of their wearable *transparent* displays, putting Microvision back into the “has potential but is too quiet” position.</p>
<p><strong>Tish Shute:</strong> AR conjures an image in many people’s minds of dreadful headgear!</p>
<p><strong>Robert Rice:</strong> Yes, it is either transparent wearable displays (in an eyeglass form factor) or nothing. HMDs with miniature LCD or OLED displays are good for streaming video, but for the mobile ubiquitous AR we all dream about, it has to be something that looks and feels like a pair of Oakleys.</p>
<p>I should also mention that several different types and modes of AR are going to find themselves being defined and refined over the next two years as we continue to blaze new trails, establish a lexicon (we keep borrowing terms from games, VR, virtual worlds, MMORPGs), and really work out the how as well as the why.</p>
<p>Even though the idea of AR has been around for a long time, the technology is just beginning to emerge, and very few people are even looking far enough ahead to figure out the problems and solutions that the tech creates. Really, who is thinking about how to deal with AR spam right now?</p>
<p><strong>Tish Shute: </strong>Do you see any successful networked AR applications emerging in the next 18 months?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Yes and no.</p>
<p>When I talk about AR, I try to expand the definition a little bit. Usually, when you talk to someone about augmented reality, the first thing that comes to mind is overlaying 3D graphics on a video stream. I think, though, that it should more properly be any media that is specific to your location and the context of what you are doing (or want to do)… augmenting or enhancing your specific reality.</p>
<p>In this sense, anything that at least knows who you are (your ID, mobile phone #, etc.), knows where you are (GPS coords or a specific place like a cafe), and gives you relevant data, information, or media = augmented reality. Sure, you can make things more interactive or immersive, but that is the minimum.</p>
<p>So, in this case, yes, I think there will be networked applications in the next 18 months… mostly things that are enhanced by friends lists (you are here, your friend is over there). These will be *application specific*. My team at Neogence is already going beyond this, building a platform and infrastructure for other applications to be developed on… all networked through the same backbone. Now, in this context (the science-fiction AR that we all dream about), no, I do not see anyone else trying to leap a generation or two ahead of the industry to build a massively multiuser shared AR space. Expect to see things like multi-user AR games, virtual pets, kiosk marketing, magic books, “gee whiz” presentations (tradeshow booths, entertainment parks, etc.), and so forth.</p>
<p>The big thing I’m worried about is AR becoming the next Silicon Valley trend… once they realize the potential, an enormous amount of capital will flow to a bunch of startups with half-baked ideas, weak business models, ten-year-old tech, and a lot of overhyped marketing. That is the very thing that will kill this technology as something that has true power and potential to literally change the way we interact with each other, our surroundings, information, and media.</p>
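<p><em>[Editor’s aside: Rice’s minimum bar for AR above (know who the user is, know where they are, return relevant media) fits in a toy sketch. The places, coordinates, and notes are invented, and the flat-earth distance approximation is only good over short ranges.]</em></p>
<pre><code>import math

PLACES = [
    # (name, lat, lon, note) -- hypothetical points of interest
    ("Cafe Example", 35.7796, -78.6382, "Your friend checked in here."),
    ("Bus stop 14", 35.7801, -78.6390, "Next bus in 6 minutes."),
]

def nearby(lat, lon, radius_m=150.0):
    """Yield notes for places within radius_m of the user (local approximation)."""
    for name, plat, plon, note in PLACES:
        dx = (plon - lon) * 111320.0 * math.cos(math.radians(lat))
        dy = (plat - lat) * 111320.0
        if math.hypot(dx, dy) <= radius_m:
            yield name, note

def augment(user_id, lat, lon):
    """Who you are + where you are -> relevant media. That's the minimum."""
    return {"user": user_id, "overlays": list(nearby(lat, lon))}

print(augment("tish", 35.7797, -78.6384))
</code></pre>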
<p><strong>Tish Shute:</strong> Do you think AR has value for a project like Pachube, which helps us connect data from lots of different environments, and sensor/actuator data?</p>
<p><strong>Robert Rice:</strong> I think that AR has value as an interface to this data (essentially data visualization based on information streaming from a sensor or source, interpreted in some dynamic graphical manner that has meaning). This is one of the “big areas” where ubiquitous augmented reality and wearable computing will really shine. I’ll definitely be keeping an eye on Pachube.</p>
<p><strong>Tish Shute:</strong> I can’t help it! I am really interested to hear more about the Vuzix glasses.</p>
<p><strong>Robert Rice:</strong> Yeah, everyone is getting hung up on the glasses as the end-all, be-all, and on having markers everywhere too.</p>
<p>All the glasses are is another display device. At the end of the day, it doesn’t matter if you are looking at an LCD monitor, an iPhone, a head-mounted display, or a pair of wicked next-generation transparent wearable displays that magically draw directly on your retinas.</p>
<p>The real tricky stuff is what happens on the backend… making it all persistent, massively multiuser, intelligent, interoperable, realistic, etc., etc.</p>
<p>I think that we are within 24 months of the magic wearables (these new ones by Vuzix are probably the first real-generation attempt at doing it right). They won’t be perfect, but I expect they will be functional… and once we have functional, we can start doing the good stuff.</p>
<p><strong>Tish Shute:</strong> You mentioned your disappointment with VWs and MMORPGs earlier. Could you tell me more about that?</p>
<p><strong>Robert Rice:</strong> Yeah, there was an evolutionary divergence between virtual worlds and MMORPGs a while back. One stagnated almost completely, and the other leapt ahead in one sense and devolved horribly in the other. Neither is where the state of the art should be. That is a whole other conversation, and probably a second book.</p>
<p><strong>Tish Shute:</strong> So making AR persistent, massively multiuser, intelligent, interoperable, realistic, etc., etc.: that is where your efforts are going?</p>
<p><strong>Robert Rice:</strong> Yes. I fully expect that the hardware is almost ready for it. You can cobble together some amazing things in the lab right now, and I think commercial viability is imminent. The real value (as far as I’m concerned) is in making it mobile, wireless, persistent, and massively multiuser. You could argue that augmented reality will take over where virtual reality failed and become internet three, internet one being the internet, internet two being the web…</p>
<p>MMORPGs are nothing more than single-player games in a multiuser environment these days. I’m more than a bit bitter about it. All the right money went to the wrong people, and the best games we have are barely shadows of what we could have had by now.</p>
<p><strong>Tish Shute:</strong> Are there any open source AR platform dev projects?</p>
<p><strong>Robert Rice:</strong> Open source? Hrm, I’m sure there are multiple ones out there.</p>
<p>If not entirely open source, there are plenty of things to experiment with that are generally free if you aren’t trying to sell something; DART and ARToolKit come to mind as very accessible options.</p>
<p>Marker-based AR is very important right now… it is easy, low-tech, understandable, highly customizable and, most importantly, accessible to the average joe. Ultimately, though, we need a method of pure tracking… no markers glued to everything on the planet, no “billions of RFIDs” embedded in every square inch of every object on the planet, etc.</p>
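<p><em>[Editor’s aside: ARToolKit itself is a C library; as a quick feel for what marker-based tracking does, here is a hedged sketch using OpenCV’s contributed aruco module as a stand-in, not ARToolKit’s own API. It assumes opencv-contrib-python 4.7 or newer (older versions expose cv2.aruco.detectMarkers as a free function instead of the ArucoDetector class) and a webcam.]</em></p>
<pre><code>import cv2

# A dictionary of printable fiducials; each marker encodes a small ID.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)              # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is not None:
        # Each detected ID is the "point of reference" content anchors to;
        # here we just outline the markers.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("markers", frame)
    if cv2.waitKey(1) == 27:           # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
</code></pre>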
<p><strong>Tish Shute:</strong> What do you mean by interoperability in AR? And what do you think about the development of standards?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong> Ooh, good question.</p>
<p>OK, so the internet is basically computers communicating with computers, and the web is mostly pages linking to other pages (I’m greatly oversimplifying here). Hold this thought for a minute.</p>
<p>Switch over to MMORPGs. If you want to play in one (or a virtual world), you need to download a client that is specific to that world. One client does not work with another world. There are plenty of efforts to change this, but they are all barking up the wrong tree. The specific uniqueness of each world defeats the need and purpose of true interoperability, unless you completely reinvent the whole thing with a common backbone, features, functionality, etc. The very nature of virtual worlds and MMORPGs rebels against this. You absolutely do not want an avatar from Second Life running around in World of Warcraft (for reasons that should be obvious).</p>
<p>On the other hand, with the web, you can use just about any client (browser) to access nearly any website (some requiring plugins or whatever).</p>
<p>The thing with augmented reality is, how do we go about making this? I’ve seen a few people thinking about this from the wrong perspective. There was a question at the last TechCrunch to the Sekai Camera guys (a conceptual AR application for the iPhone) where someone on the panel wanted to know how website owners would convert their content for augmented reality. BZZZZZT! That is a fundamental misunderstanding of what AR is, or could be, and it falls into the same trap I see a lot of people falling into: looking at AR through the web 2.0 lens or the virtual world lens. It is absolutely, fundamentally different at the core… sure, there are similarities: it has social networking/media applications and properties, and it has 3D graphics, but it stops there.</p>
<p>Ubiquitous augmented reality will be dramatically different depending on which standards, approaches, and philosophies get the most traction first. Will you walk down the street with your AR glasses and have a pop-up every 30 feet asking you if you want to access the AR content on another server? Will you then have to register, subscribe, or whatever?</p>
<p>Or will all AR content be mediated by one sole master control server deep in the bowels of Google? What about some other option? Will you need different sets of glasses to access different features and content from multiple sources?</p>
<p>At the end of the day, it should not matter what brand of glasses you are wearing, and you should never have to deal with AR server popups to join/subscribe, and so forth.</p>
<p>Interoperability, in the context of what I was saying earlier, is the sense of how to build the infrastructure so all of this is seamless to the end user, while still maintaining the features/functionality necessary for all of what augmented reality promises us… I don’t want to see everything in AR space; I want to be able to tune in or filter out some things, and I want to customize the snot out of what I see (perhaps changing metaphors or “holoscapes”), and so on. It all has to work together and simplify the end-user experience or it won’t get anywhere.</p>
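<p><em>[Editor’s aside: the tune-in/filter-out idea reduces to a subscription-plus-blocklist check. The channel names and overlay shape below are invented for illustration; a real system would also need trust, provenance, and spam scoring.]</em></p>
<pre><code>from dataclasses import dataclass

@dataclass
class Overlay:
    channel: str
    label: str

def visible(overlays, subscribed, blocked):
    """Keep overlays from subscribed channels; drop anything blocked (AR spam)."""
    return [o for o in overlays
            if o.channel in subscribed and o.channel not in blocked]

feed = [
    Overlay("transit", "Bus 14: 6 min"),
    Overlay("ads", "50% off burgers!"),
    Overlay("friends", "Ori is two blocks away"),
]
print(visible(feed, subscribed={"transit", "friends", "ads"}, blocked={"ads"}))
</code></pre>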
<p><strong>Tish Shute:</strong> So what caused the stagnation of new development and the devolution of MMOGs, in your opinion?</p>
<p><strong>Robert Rice:</strong> Yes, look at all the hope and hype for the MMORPGs released in the last 12 months. Really, what is different or better? Now, what is worse?</p>
<p>I bet any decent MMORPG gamer could give you a list of 2 or 3 things for the first question and 20&#8211;30 things for the second.</p>
<p>And VWs seem to be stuck in a feedback loop.</p>
<p><strong>Tish Shute:</strong> Feedback loop?</p>
<p><strong>Robert Rice:</strong> Imagine nailing one of your feet to the ground and then trying to run ’round and ’round and ’round.</p>
<p><strong>Tish Shute:</strong> Why do you think this happened to VWs?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong>Men in suits and flashy watches.</p>
<p>Actually, hang on…</p>
<p>I saw a video clip the other day from a conference about using various virtual and game technologies for simulations and other real-world applications. Several people were talking about “avatar technology” and how theirs was better than their competitors’ and whatnot.</p>
<p>Now, can you tell me what “avatar technology” is? Avatar technology is a red herring. Avatar technology is the same thing as calling a toaster a new “fire technology.”</p>
<p><strong>Robert Rice:</strong> The problem is that a lot of people who don’t have a clue about what they are doing are selling the tech to other people who have no clue what they are buying, but feel like they should for some unknown reason.</p>
<p>That is happening all over the government, academic, and industrial sectors now, with a few companies selling virtual worlds (again, mid-90s tech) as the ultimate solution to all problems.</p>
<p>Anyway, getting back to your question…</p>
<p>Once virtual reality started getting some buzz, some people got greedy and jumped into the avatar/virtual world thing and tried making it commercial too soon. Half of the 3D chat worlds were being jammed into platforms for virtual shopping malls.</p>
<p>Most of the money funding tech R&amp;D started funneling towards VRML, and doing 3D in web pages, etc.</p>
<p><strong>Tish Shute:</strong> Yes, horrible idea trying to make web pages 3D, IMHO.</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>: </strong> The money people got involved too soon, and then the greedy people jumped in and tried patenting everything possible. Take a look at the worlds.com patent for 3D worlds.</p>
<p>They filed it back in 2000 or so and it was awarded in ’07 (it shouldn’t have been, in my opinion). Now they are suing everyone they can.</p>
<p><strong>Tish Shute: </strong>Will there be patent wars in AR?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> Yes, the AR patent wars will be legendary once people start waking up to the real potential here.</p>
<p>The only solution is for everyone to band together and pre-emptively patent or make public domain every possible patentable concept, technology, or implementation for AR. Otherwise, you haven’t seen anything yet.</p>
<p><strong>Tish Shute:</strong> Is the AR community organized enough to do that yet?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> That depends on how my company fares in the next six months.</p>
<p><strong>Tish Shute:</strong> Will you patent or make your tech public domain?</p>
<p><strong>Robert</strong><strong> Rice</strong><strong>:</strong> I plan on patenting the snot out of everything we can possibly think of, and then giving away our content creation tools and SDK stuff for free. The whole goal of what we are trying to build is to empower the end user and facilitate the creation of a wonderful world of augmented reality.</p>
<p>There are some things we will make public domain for sure, on top of that.</p>
<p><strong>Tish Shute:</strong> So back to my question on networked real-time experience. Will we have networked real-time AR experiences in the next 18 months?</p>
<p><strong>Robert Rice:</strong> It is possible, yes. Other than what we are doing, I am not aware of anyone else taking the same approach we are, but the potential for an “under the radar venture” (much like my own company) is definitely there.</p>
<p><strong>Tish Shute: </strong>Will you use cloud computing?</p>
<p><strong>Robert Rice:</strong> I think that’s overrated and probably another attempt at the whole “thin client” model that some companies have been pushing for the last 20 years.</p>
<p>It sounds good on paper, but ultimately takes power and control away from the end user.</p>
<p><strong>Tish Shute:</strong> Cloud computing?</p>
<p><strong>Robert Rice:</strong> Yes. You know, we aren’t playing around. We are totally building “THE AR” that everyone keeps dreaming about. None of this cute stuff you see on YouTube. Actually, if you want to see the things that have inspired our vision of what we want to build, check out:</p>
<p>* Dream Park by Larry Niven and Steven Barnes<br />
* Rainbows End by Vernor Vinge<br />
* Spook Country by William Gibson<br />
* Halting State by Charles Stross<br />
* The Diamond Age by Neal Stephenson<br />
* Donnerjack by Roger Zelazny and Jane Lindskold<br />
* Otherland by Tad Williams<br />
* Neuromancer by William Gibson<br />
* Idoru by William Gibson<br />
* Cryptonomicon by Neal Stephenson</p>
<p>and watch the whole anime of Denno Coil (subbed NOT dubbed!).</p>
<p><strong>Tish Shute:</strong> So scaling the real-time experience won’t be a problem in your project, hehe.</p>
<p>’Cos no sharding allowed in AR, right?</p>
<p>And if you have lots of API calls?</p>
<p><strong>Robert Rice:</strong> Haha, sharding is one of the dumbest things to happen to the VW/MMO industry.</p>
<p>It is a solution to a technical problem that was relevant 15 years ago.</p>
<p><strong>Tish Shute:</strong> So why did it stick? (I know, men in suits.)</p>
<p><strong>Robert Rice:</strong> It stuck because “that’s what the other guys did,” and the MMO designers are too lazy to reconcile gameplay for PvP and RP gamers.</p>
<p>However, there is a curious problem between dealing with “one world” and “anyone can start their own custom AR server.”</p>
<p><strong>Tish Shute:</strong> Now that is a very interesting problem, the one world vs. own AR server.</p>
<p><strong>Robert Rice:</strong> It took me a few weeks of not sleeping to figure that one out. It gets back to the interoperability issue.</p>
<p><strong>Tish Shute:</strong> What did you come up with?</p>
<p><strong>Robert Rice:</strong> A solution. That’s all I can say for now on that.</p>
<p><strong>Tish Shute</strong>: eeextra seeekrit!</p>
<p>Well I will definitely have to bug you on that.</p>
<p>The problem has produced some creativity in OpenSim, with people coming up with hybrids of P2P and one-world.</p>
<p><strong>Robert Rice:</strong> As far as virtual worlds are concerned, they need to look at the problem from a different perspective. They are trying to make all existing virtual worlds interoperable instead of creating a new model for interoperable worlds that new ones will be created to adhere to.</p>
<p><strong>Tish Shute:</strong> Well, some people are. I would say most OpenSim developers see their modular approach doing this. And you choose to interoperate based on what modules you have activated, and then on social agreements…</p>
<p><strong>Robert Rice:</strong> Hrm, that’s a start, but that only works on a functional and social level &#8211; it doesn’t account for content (story, mythos, game rules), unique data (my +3 sword), or the concepts of commerce, inherent value, and intellectual property.</p>
<p>Enabling my WoW avatar to run around in SL and vice versa creates more problems than it solves.</p>
<p>It’s like two alien races working hard to make sure that their two spaceships can dock, while no one is paying any attention to the fact that race A breathes nitrogen and race B breathes sulphur.</p>
<p>It’s technically possible, but they are missing the boat on the content side of the problem.</p>
<p><strong>Tish Shute:</strong> Yes, but don’t you think that when a modular open-source tech for virtual worlds becomes pervasive, those interested in a similar genre will increasingly use the modules in ways that allow their content to interoperate, if they want it to?</p>
<p><strong>Robert Rice:</strong> Everyone has to use the same backend tech, and the front-end clients need to adhere to the same standards. But I have to admit, I haven’t been paying much attention to the VW space in the last 9 months or so.</p>
<p>Oh, I have to run now. But download and install <a id="vsnt" title="cooliris" href="http://www.cooliris.com/" target="_blank">cooliris</a>. I promise you will be blown away and will start using it to search for images and videos.</p>
<p>It’s frigging awesome.</p>
<p><strong>Tish Shute:</strong> Will do! Thanks so much; it was great talking to you. I can’t wait for your launch.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/01/17/is-it-%e2%80%9comg-finally%e2%80%9d-for-augmented-reality-interview-with-robert-rice/feed/</wfw:commentRss>
		<slash:comments>27</slash:comments>
		</item>
	</channel>
</rss>
