<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; Virtual Realities</title>
	<atom:link href="http://www.ugotrade.com/category/virtual-realities/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Augmented World Expo 2013:  It&#8217;s a wrap!</title>
		<link>http://www.ugotrade.com/2013/07/09/augmented-world-expo-2013-its-a-wrap/</link>
		<comments>http://www.ugotrade.com/2013/07/09/augmented-world-expo-2013-its-a-wrap/#comments</comments>
		<pubDate>Tue, 09 Jul 2013 03:38:56 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Ambient Findability]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Data]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[GeoFencing]]></category>
		<category><![CDATA[gestural interface]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Linden Lab]]></category>
		<category><![CDATA[Linked Data]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[nanotechnology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[Philip Rosedale]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Amber Case]]></category>
		<category><![CDATA[augmented reality eyewear]]></category>
		<category><![CDATA[Augmented World Expo]]></category>
		<category><![CDATA[AWE2013]]></category>
		<category><![CDATA[Ben Cerveny]]></category>
		<category><![CDATA[connected hardware]]></category>
		<category><![CDATA[gesture interaction]]></category>
		<category><![CDATA[Google Glass]]></category>
		<category><![CDATA[hardware startups]]></category>
		<category><![CDATA[Mike Kuniavsky]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Steve Mann]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[wearables]]></category>
		<category><![CDATA[Will Wright]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=6600</guid>
		<description><![CDATA[Augmented World Expo 2013 was really an amazing experience. I&#8217;m co-founder and co-organizer of the conference, along with Ori Inbar, so it has meant a lot to me to see our event grow over the last four years, and it was thrilling to make such a big splash this year. There were 1,163 attendees, and the expo [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><iframe width="560" height="315" src="//www.youtube.com/embed/4d0k_7pdPGg" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/NQ-g0Jimg7I" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/9GxVQREssdY" frameborder="0" allowfullscreen></iframe></p>
<p><a href="http://augmentedworldexpo.com/" target="_blank">Augmented World Expo 2013</a> was really an amazing experience.  I&#8217;m co-founder and co-organizer of the conference, along with Ori Inbar, so it has meant a lot to me to see our event grow over the last four years, and it was thrilling to make such a big splash this year.  There were 1,163 attendees, and the expo showcased an ecosystem of emerging technologies &#8211; augmented reality, gesture interaction, eyewear, wearables, and connected hardware of many stripes &#8211; that mark the beginning of natural computing entering the mainstream.  It was a unique opportunity to get up close and personal with what it feels like to be an augmented human in an augmented world!</p>
<p>Videos of AWE 2013&#8217;s 35 hours of educational sessions and inspirational keynotes are now available on <strong><a href="http://www.youtube.com/user/AugmentedRealityOrg/videos?view=0&amp;shelf_index=0&amp;sort=dd&amp;tag_id=" target="_self">our YouTube channel</a></strong>.  I am sharing <a href="http://www.youtube.com/watch?v=9GxVQREssdY">my own talk</a> (my slides are also up <a href="http://www.slideshare.net/TishShute/augmented-humansaugmentedworld">on SlideShare here</a>), and a few of my favorites in this post, but there are far too many to post here, so please browse further on the Augmented World Expo YouTube channel.</p>
<p>One notable high point of AWE2013, for me, was the showcase sponsored by <a href="http://www.meta-view.com/about">Meta</a>, a startup developing the first device allowing visualization and interaction with 3D virtual objects in the real world using your hands.  It was made possible by generous contributions from the private collections of Paul Travers, Dan Cui, Steven Feiner, Steve Mann, and Chris Grayson, and by passionate volunteers who are helping advance the industry.  Sean Hollister of The Verge did an excellent report on the eyewear showcase, <a href="http://www.theverge.com/2013/6/9/4409940/35-years-of-wearable-computing-history-at-augmented-world-expo-2013">35 years of wearable computing history at Augmented World Expo 2013</a>.  Also, for more on Meta, see <a href="http://news.cnet.com/8301-11386_3-57584739-76/meta-glasses-bring-3d-and-your-hands-into-the-picture/">this article by Dan Farber</a>.</p>
<p>My colleagues at <a href="http://www.syntertainment.com/">Syntertainment</a> &#8211; Will Wright, Avi Bar-Zeev, Jason Shankel, and Lauren Elliott &#8211; all gave great talks.  Ironically, we&#8217;re not building augmented reality apps or H/W.  We all just happen to continue to be very interested in the field.</p>
<p>Thank you to everyone for supporting the event! </p>
<p>The press coverage was truly extensive:</p>
<p style="text-align: left;"><a href="http://www.theverge.com/2013/6/9/4410406/in-the-shadow-of-google-glass-at-augmented-world-expo-2013">In the shadow of Google Glass, an augmented reality industry revs its engines<br />
</a>The Verge, Sean Hollister, June 9, 2013, <a href="http://topsy.com/www.theverge.com/2013/6/9/4410406/in-the-shadow-of-google-glass-at-augmented-world-expo-2013">271 Tweets</a></p>
<p><a href="http://news.cnet.com/8301-11386_3-57588128-76/the-next-big-thing-in-tech-augmented-reality/">The next big thing in tech: Augmented reality<br />
</a>CNET, Dan Farber, June 7, 2013<br />
Pick up on <a href="http://currentnewsdaily.com/the-next-big-thing-in-tech-augmented-reality/">Current News Daily<br />
</a><a href="http://topsy.com/news.cnet.com/8301-11386_3-57588128-76/the-next-big-thing-in-tech-augmented-reality/">350 Tweets</a></p>
<p><a href="http://thepersuaders.libsyn.com/awe-2013-conference-report-augmented-reality-and-marketing">AWE 2013 Conference Report: Augmented Reality and Marketing<br />
</a>The Persuaders Marketing Podcast on Dublin City FM, June 23, 2013</p>
<p><a title="AR Dirt Podcast &#8211; Episode 26 &#8211; Ori Inbar AWE2013 Extravaganza Recap" rel="bookmark" href="http://www.ardirt.com/general-news/ar-dirt-podcast-episode-26-ori-inbar-awe2013-extravaganza-recap.html">AR Dirt Podcast &#8211; Ori Inbar AWE2013 Extravaganza Recap<br />
</a>AR Dirt by Joseph Rampolla, June 18, 2013</p>
<p><a href="http://www.theverge.com/2013/6/9/4409940/35-years-of-wearable-computing-history-at-augmented-world-expo-2013">35 years of wearable computing history at Augmented World Expo 2013<br />
</a>The Verge, Sean Hollister, June 9, 2013<br />
<a href="http://topsy.com/www.theverge.com/2013/6/9/4409940/35-years-of-wearable-computing-history-at-augmented-world-expo-2013">7 Tweets</a></p>
<p><a href="http://www.wired.com/beyond_the_beyond/2013/06/augmented-reality-bruce-sterling-keynote-at-augmented-world-expo-2013/">Augmented Reality: Bruce Sterling, keynote at Augmented World Expo 2013<br />
</a>Wired, Bruce Sterling, June 9, 2013<br />
<a href="http://topsy.com/www.wired.com/beyond_the_beyond/2013/06/augmented-reality-bruce-sterling-keynote-at-augmented-world-expo-2013/">9 Tweets</a></p>
<p><a href="http://doc-ok.org/?p=598">On the road for VR: Augmented World Expo 2013<br />
</a>Doc-Ok, Staff, June 7, 2013<br />
<a href="http://topsy.com/trackback?url=http%3A%2F%2Fdoc-ok.org%2F%3Fp%3D598">3 Tweets</a></p>
<p><a href="http://www.wassom.com/my-interview-from-augmented-world-expo-2013-video.html">My Interview from Augmented World Expo 2013 [VIDEO] </a><a href="http://wassom.com/">Wassom.com</a>, Brian Wassom, June 7, 2013</p>
<p><a href="http://zenfri.com/2013/06/augmented-world-expo/">Augmented World Expo</a><br />
ZenFri, Staff, June 7, 2013</p>
<p><a href="http://www.fbnsantos.com/?p=9634">AWE2013: Hardware for an augmented world</a><br />
FBNSantos.com, Felipe Neves Dos Santos, June 6, 2013</p>
<p><a href="http://investorplace.com/2013/06/augmented-reality-will-be-the-new-reality/">Augmented Reality Will Be the New Reality</a><br />
InvestorPlace, Brad Moon, June 6, 2013</p>
<p><a href="http://www.techhive.com/article/2040837/wearable-computing-pioneer-steve-mann-who-watches-the-watchmen-.html">Wearable computing pioneer Steve Mann: Who watches the watchmen?</a><br />
TechHive, Armando Rodriguez, June 6, 2013</p>
<p><a href="http://abclocal.go.com/kgo/video?id=9127769">Expo puts augmented reality in the limelight</a><br />
ABC 7 News, Jonathan Bloom, June 5, 2013</p>
<p><a href="http://www.dvice.com/2013-6-5/these-oled-microdisplays-are-future-augmented-reality">These OLED microdisplays are the future of augmented reality</a><br />
DVICE, Evan Ackerman, June 5, 2013</p>
<p><a href="http://www.engadget.com/2013/06/05/visualized-history-of-augmented-and-virtual-reality-eyewear/?utm_medium=feed&amp;utm_source=Feed_Classic&amp;utm_campaign=Engadget">Visualized: a history of augmented and virtual reality eyewear</a><br />
Engadget, Michael Gorman, June 5, 2013</p>
<p><a href="http://www.papitv.com/wikitude-announces-wikitude-studio-and-in-house-developed-ir-tracking-engine">Wikitude announces Wikitude Studio and in-house developed IR &amp; Tracking engine</a><br />
PapiTV, KC Leung, June 5, 2013</p>
<p><a href="http://www.usatoday.com/story/tech/personal/2013/06/05/augmented-reality-expo-google-glass/2392769/">Augmented reality expo aims for sci-fi future today</a><br />
USA Today, Marco della Cava, June 5, 2013</p>
<p><a href="http://www.wired.com/beyond_the_beyond/2013/06/augmented-reality-high-dynamic-range-hdr-video-image-processing-for-digital-glass/">Augmented Reality: High Dynamic Range (HDR) Video Image Processing For Digital Glass</a><br />
Wired, Bruce Sterling, June 5, 2013</p>
<p><a href="http://allthingsd.com/20130604/will-wright-at-augmented-reality-conference-dont-augment-reality-decimate-it/">Will Wright at Augmented Reality Conference: Don&#8217;t Augment Reality, Decimate It</a><br />
AllThingsD, Eric Johnson, June 4, 2013</p>
<p><a href="http://news.cnet.com/8301-11386_3-57587672-76/philip-rosedales-second-life-with-high-fidelity/">Philip Rosedale&#8217;s Second Life with High Fidelity</a><br />
CNET, Dan Farber, June 4, 2013</p>
<p><a href="http://www.pcworld.com/article/2040801/google-glass-competitors-vie-for-attention-as-industry-grows.html">Google Glass competitors vie for attention as industry grows</a><br />
PC World, Zack Miners for IDG News Service, June 4, 2013</p>
<p><a href="http://daqri.com/press_posts/press-release-4d-augmented-reality-leader-daqri-announces-15-million-financing-2/#.Ua-RjNhNuSo">4D Augmented Reality Leader Daqri Announces $15 Million Financing</a><br />
Press Release, June 4, 2013</p>
<p><a href="http://www.techzone360.com/topics/techzone/articles/2013/06/03/340432-crowdoptic-powers-lancome-virtual-gallery-app-crowd-powered.htm">CrowdOptic Powers Lancome Virtual Gallery App, Crowd-powered Heat Map</a><br />
TechZone 360, Peter Bernstein, June 3, 2013</p>
<p><a href="http://www.craveculture.net/2013/06/augmented-humans-now/">Augmented humans, enhanced happiness?</a><br />
Crave Culture, Angelica Weihs, June 2, 2013</p>
<p><a href="http://www.metaio.com/press/press-release/2013/metaio-vuzix-to-showcase-ar-ready-smart-glasses-at-the-2013-augmented-world-expo/">Metaio &amp; Vuzix to Showcase AR-Ready Smart Glasses at the 2013 Augmented World Expo</a><br />
Press Release, May 30, 2013</p>
<p><a href="http://qz.com/89467/four-ways-augmented-reality-will-invade-your-life-in-2013/">Four ways augmented reality will invade your life in 2013</a><br />
Quartz, Rachel Feltman, May 30, 2013</p>
<p><a href="http://www.wired.com/beyond_the_beyond/2013/05/augmented-reality-augmented-world-expo-is-next-week/">Augmented Reality: Augmented World Expo&#8482; is next week</a><br />
Wired, Bruce Sterling, May 28, 2013</p>
<p><a href="http://www.prweb.com/releases/candy-lab/augmented-reality/prweb10763283.htm">Strike it Rich with Cachetown and AWE 2013 Playing the Gold Rush 49&#8217;er Challenge In Augmented Reality</a><br />
Press Release, May 24, 2013</p>
<p><a href="http://interact.stltoday.com/pr/lifestyle/PR052413071613074">Local Community College Student Headed to Silicon Valley to Learn More about Augmented Reality</a><br />
St. Louis Post-Dispatch, Staff, May 24, 2013</p>
<p><a href="http://www.cnet.com.au/explore-an-intricate-labyrinth-with-smartphone-ar-339344350.htm">Explore an intricate labyrinth with smartphone AR</a><br />
CNET Australia, Michelle Starr, May 21, 2013</p>
<p><a href="http://thechronicleherald.ca/business/1130672-dartmouth-firm-lands-super-app">Dartmouth firm lands super app</a><br />
Herald Business, Remo Zaccagna, May 21, 2013</p>
<p><a href="http://siliconangle.com/blog/2013/05/17/augmented-world-expo-2013-the-future-of-augmented-reality/">Augmented World Expo 2013 &#8211; The Future of Augmented Reality</a><br />
Silicon Angle, Saroj Kar, May 17, 2013</p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/o6L3dcsLEto" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/FhLx7k07Pa4" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/ON7VUzsNcYI" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/qhVdTFcR6TA" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/REoEj-JkDww" frameborder="0" allowfullscreen></iframe></p>
<p><iframe width="560" height="315" src="//www.youtube.com/embed/ohatuq8tekk" frameborder="0" allowfullscreen></iframe></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2013/07/09/augmented-world-expo-2013-its-a-wrap/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The Game is about the World not Dragons: Talking with Will Wright about Augmented Reality</title>
		<link>http://www.ugotrade.com/2010/03/03/the-game-is-about-the-world-not-dragons-talking-with-will-wright/</link>
		<comments>http://www.ugotrade.com/2010/03/03/the-game-is-about-the-world-not-dragons-talking-with-will-wright/#comments</comments>
		<pubDate>Thu, 04 Mar 2010 03:29:23 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Life]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[3D Mapping]]></category>
		<category><![CDATA[alternate reality games]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[Blaise Aguera y Arcas]]></category>
		<category><![CDATA[crowd sourced intelligence]]></category>
		<category><![CDATA[DARPA AI]]></category>
		<category><![CDATA[Engage]]></category>
		<category><![CDATA[FourSquare]]></category>
		<category><![CDATA[Games for Learning]]></category>
		<category><![CDATA[Games for Learning Institute]]></category>
		<category><![CDATA[high dynamic lighting photographs]]></category>
		<category><![CDATA[hyper-local experiences]]></category>
		<category><![CDATA[hyper-local search]]></category>
		<category><![CDATA[immersive games]]></category>
		<category><![CDATA[open augmented reality]]></category>
		<category><![CDATA[open distributed augmented reality]]></category>
		<category><![CDATA[proximity based social networks]]></category>
		<category><![CDATA[siri]]></category>
		<category><![CDATA[smart things]]></category>
		<category><![CDATA[Stupid Fun Club]]></category>
		<category><![CDATA[The Sims]]></category>
		<category><![CDATA[The Sims2]]></category>
		<category><![CDATA[Wii]]></category>
		<category><![CDATA[Will Wright]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5171</guid>
		<description><![CDATA[&#8220;The game is about the world not dragons,&#8221; Will Wright, Founder and Chief Executive, Stupid Fun Club, Creator of Spore and The Sims. I had a brief chat with Will Wright after his talk at Engage!, and I was delighted to hear that augmented reality is high on his agenda at the moment: &#8220;a lot [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><strong><a href="http://www.stupidfunclub.com" target="_blank"><img class="alignnone size-medium wp-image-5200" title="Screen shot 2010-02-22 at 12.26.12 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/Screen-shot-2010-02-22-at-12.26.12-PM-300x289.png" alt="Screen shot 2010-02-22 at 12.26.12 PM" width="300" height="289" /></a><br />
</strong></p>
<p><strong>&#8220;The game is about the world not dragons,&#8221; Will Wright, Founder and Chief Executive, <a href="http://www.stupidfunclub.com" target="_blank">Stupid Fun Club</a>, Creator of <a href="http://www.spore.com/" target="_blank">Spore</a> and <a href="http://thesims2.ea.com/" target="_blank">The Sims</a>.<br />
</strong></p>
<p>I had a brief chat with <a href="http://en.wikipedia.org/wiki/Will_Wright_%28game_designer%29" target="_blank">Will Wright</a> after his talk at <a href="http://www.engageexpo.com/ny2010/" target="_blank">Engage!</a>, and I was delighted to hear that augmented reality is high on his agenda at the moment:</p>
<p><strong>&#8220;a lot of our stuff is kind of in the experimental format right now, but definitely one of our strong interests is AR.&#8221; </strong></p>
<p>Will Wright will be coming to speak at <a href="http://augmentedrealityevent.com/speakers/" target="_blank">Augmented Reality Event</a>, Santa Clara, CA, June 2nd and 3rd, 2010. But, for now, here are a few hints at some of the directions that are intriguing him, e.g., the game potential of 3D mapping like <a href="http://www.ted.com/talks/blaise_aguera.html" target="_blank">Blaise Aguera y Arcas&#8217;s demo of augmented reality maps at TED</a> &#8211; see the full conversation below.</p>
<p>There has been a vital shift, Will Wright points out. Before the Wii, immersive meant how much we were pulled into the world of the game. Now immersive means how much the game pulls us deeper into our own world, e.g., our relationship with the people we are playing with, as in Rock Band, or engaging with other people&#8217;s crazy antics when playing Wii games.</p>
<h3><strong>&#8220;Computers are imagination amplifiers and toys are imagination constructors.&#8221;</strong></h3>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/computerareimaginatinamplifiers.jpg"><img class="alignnone size-medium wp-image-5183" title="computerareimaginatinamplifiers" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/computerareimaginatinamplifiers-300x290.jpg" alt="computerareimaginatinamplifiers" width="300" height="290" /></a><br />
</strong></p>
<p><em>The slide above is from Will Wright&#8217;s talk at <a href="http://www.engageexpo.com/ny2010/" target="_blank">Engage!</a> </em></p>
<p>Will Wright&#8217;s talk was extraordinary: dense, layered, and deeply thought-provoking.</p>
<p>I have picked out a few samples from Will Wright&#8217;s vast tome of slides here. They are just a glimpse of the many insights he offered. If you are still wondering what will transform augmented reality into a mainstream experience, I suggest studying this talk carefully (I think the audio will be posted on the <a href="http://www.engageexpo.com/ny2010/" target="_blank">Engage! web site</a>). Also watch Will Wright&#8217;s <a href="http://g4li.org/" target="_blank">Games For Learning Institute</a> talk at NYU, February 17th, 2010, <a href="http://g4li.org/archives/1986" target="_blank">archived here</a>.</p>
<p>Will Wright and <a href="http://www.stupidfunclub.com/home.html">Stupid Fun Club</a> are getting ready to take us to the next level of imagination amplification and construction.</p>
<h3><strong>&#8220;Smart&#8221; things can make us dumber by overriding our instincts<br />
</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/replacingourinstincts.jpg"><img class="alignnone size-medium wp-image-5182" title="replacingourinstincts" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/replacingourinstincts-300x199.jpg" alt="replacingourinstincts" width="300" height="199" /></a></p>
<p>Just one of the many wonderful anecdotes Will Wright told was the story of his experiences with a new &#8220;smart&#8221; car (he bought this car with the intent of exploring the pinnacle of the &#8220;smart&#8221; car experience). <em>The slide above is from Will Wright&#8217;s talk at <a href="http://www.engageexpo.com/ny2010/" target="_blank">Engage!</a></em></p>
<p>Increasingly, artifacts are being designed to send us more and more data, and this car was endowed with an array of sensors supplying data aimed at assisting parallel parking &#8211; a notoriously challenging aspect of driving. But the car failed miserably at helping. While parallel parking had been easy for him before he was deluged with all this data, Will Wright pointed out that, ironically, he had to learn to ignore this stuff in order to park the &#8220;smart&#8221; car.</p>
<p>Instinctively, we learn to filter the information involved in parking down to what is relevant. This kind of pre-conscious filtering is a key challenge for augmented reality, and one that Will Wright, as a game designer, has given a great deal of thought to.</p>
<p>As Will Wright pointed out, a lot of our ideas about augmented reality and sensor-enabled artifacts are rooted in trying to give us more data, to &#8220;take over our instincts.&#8221; Not only do these artifacts give us more data that, as in the case of the HUDs for parallel parking, can get in the way of our own highly effective intuitive instincts; as Will Wright also noted, they have data they can deploy independently to override our instincts, e.g., the car detecting that your head has turned back to talk to a passenger and applying the brakes!</p>
<h3><strong>&#8220;Toys Encourage Agency&#8221;</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/Screen-shot-2010-02-19-at-3.14.53-AM.png"><img class="alignnone size-medium wp-image-5188" title="Screen shot 2010-02-19 at 3.14.53 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/Screen-shot-2010-02-19-at-3.14.53-AM-300x200.png" alt="Screen shot 2010-02-19 at 3.14.53 AM" width="300" height="200" /></a></p>
<p>Toys can be the antidote to instinct-blocking &#8220;smart things.&#8221; In contrast to &#8220;smart,&#8221; data-spitting cars that &#8220;take over&#8221; our instincts, toys encourage agency. Will Wright gave the example of high dynamic lighting photographs that make the world &#8220;toy like&#8221; and make us want to reach in and play with it (<a href="http://hdrcreme.com/photos/36-Sunset" target="_blank"><em>photo above from HDRCreme</em></a>).</p>
<h3>&#8220;What Computers are really good at is harvesting human intelligence&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/HiveMind1.jpg"><img class="alignnone size-medium wp-image-5194" title="HiveMind" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/HiveMind1-300x199.jpg" alt="HiveMind" width="300" height="199" /></a></p>
<p>Another key insight that Will Wright explored in depth in his talk was the significance of crowd sourced intelligence (<em>the slide above is from Will Wright&#8217;s talk at <a href="http://www.engageexpo.com/ny2010/" target="_blank">Engage!</a>)</em>. If the crowd is training the filter, he suggested to me, this might build the kind of context we need to build meaningful augmented reality experiences (for more on this see the conversation below).</p>
<h3>Talking with Will Wright at Engage!, NYC, 2010</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/WillWright2.jpg"><img class="alignnone size-medium wp-image-5174" title="WillWright2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/WillWright2-277x300.jpg" alt="WillWright2" width="277" height="300" /></a></p>
<p><strong>Tish Shute:</strong> I was very interested by the idea you put out that this deluge of information gathered by sensors is not necessarily a kind of nirvana for augmented reality; in fact, it can be just the opposite. In the embryonic world of augmented reality, we have two streams, it seems, at the moment &#8211; one is the idea of a kind of hyper-local nirvana imagined for AR, in which we get information relevant to us, when and where we need it. But you talked about some of the problems in realizing this, didn&#8217;t you? The other strand is the emerging stream of play which you are exploring&#8230;</p>
<p><strong>Will Wright:</strong> Right. I think part of it is like what I was talking about &#8211; the way our senses are set up to know how to filter out 99% of what is coming into them. That is why they work, and that is what is beneficial. I think that is why AR needs to focus on&#8230;</p>
<p>You look at what I can find out on Google or whatever, the amount of information is just astronomical. The hard part, the intelligent part, is how do you figure out that one tenth of 1% that I actually care about at this given second?</p>
<p><strong>Tish Shute: </strong> Yes. Have you seen any examples of AR beginning to do that?</p>
<p><strong>Will Wright: </strong> No, not at all. I think that you have to have a contextual understanding of where I am at, where my mindset is, what my situation is, what my goal state is on a moment-by-moment basis. And then it is still a complex task. But the very first thing we need is more context for building a filter. See, that filter is changing every few minutes; what I am filtering into my senses is changing, and my context is changing moment to moment.</p>
<p><strong><br />
Tish Shute: </strong> I really liked your emphasis on crowd sourced intelligence as the key power of a networked world. Is this the seed&#8230;?</p>
<p><strong>Will Wright:</strong> Well, you can imagine crowd sourcing that filter&#8230;it would affect a million people and get a sense of what mental context they were in and what filter they turned on. And so, in a sense, the crowd is training the filter.</p>
<p><strong>Tish Shute:</strong> Yes. The problem with projects like <a href="http://siri.com/" target="_blank">SIRI</a>, which is driven by the big DARPA AI project, CALO, is that it is centralized &#8211; although I am not sure what they intend to do in terms of crowd-sourced corrections. But if it was all open and we could crowd source as well, that would be interesting. But in the end we need a framework for AR that is as open as the internet, don&#8217;t we?</p>
<p><strong><br />
Will Wright:</strong> Right. I think the technological infrastructure needs to be much lighter so that it can be grounded in something more like a Twitter feed.</p>
<p><strong>Tish Shute:</strong> Yeah. I&#8217;m actually working on a project using the Wave Federation protocol as the basis for an <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">open communications framework for augmented reality, AR Wave</a> &#8211; not the Wave user interface, just the real-time federation protocol. But, of course, for it to become an open framework that could be a vehicle for crowd-trained augmented reality, it would need good take-up!</p>
<p><strong>Will Wright: </strong> Right. You really want a million people involved.</p>
<p><strong>Tish Shute:</strong> Yes, our dream is that the creation of augmented reality content will be as open, accessible, and simple as making an HTML page, or contributing to a wiki.</p>
<p>So, in terms of AR games, what is interesting on the horizon? Presumably games also have to solve the problem of delivering a hyper-local experience. The car that you described in your talk tried hard to use augmented reality to solve the problem of parallel parking and ended up making it harder. So giving us the information we need, where we need it, when we need it, and specific to who we are is going to be a big challenge. But in terms of games, what kinds of hyper-local experiences will be most fun, and what have you seen that is interesting in terms of augmented reality games up to now?</p>
<p><strong>Will Wright: </strong> I&#8217;ve not actually seen much at all. I&#8217;ve seen people doing interesting stuff with, like, Google Maps. They aren&#8217;t really entertainment oriented, but I think you can start thinking about&#8230;</p>
<p>I mean, I think for a lot of people, Google Street View is entertainment. But I haven&#8217;t really seen something that was really leaning into an entertainment application using existing technology and data that is already out there.</p>
<p>I mean, I have seen some cool experiments &#8211; people playing Pac-Man in Washington Square and stuff like that, but nothing really serious.</p>
<p><strong>Tish Shute: </strong>Yeah. Of course I think one of the missing links is that the barrier to entry is way too high for creating social augmented experiences for smartphones, and as you point out in your talk, it is the social implications of the game that make it compelling.</p>
<p><strong>Will Wright: </strong> Also, I think using them [smart phones] as data aggregation devices rather than just data consumption devices&#8230; so that people out there are using their phones, cameras, microphones, or whatever to gather data and get an experience where they are rewarded for gathering data.</p>
<p><strong>Tish Shute: </strong> Like <a href="http://foursquare.com/" target="_blank">foursquare</a> where you get the badges, and people can become the mayor of like a cafe or something.</p>
<p><strong>Will Wright:</strong> Right. Yeah, you can imagine people using their phones to actually kind of pull information&#8230;</p>
<p><strong>Tish Shute: </strong> A Dutch developer/artist/game designer, Thomas Wrobel, <a href="http://www.lostagain.nl/" target="_blank">Lost Again</a>, came up with the original concept for the AR framework we are building on the Wave Federation protocol. Thomas and his partner Bertine van Hovell design alternate reality games, amongst other things they do&#8230; so they are deeply immersed in the potential of the world as game.</p>
<p><strong>Will Wright:</strong> Yeah, one of my programmers actually works in Amsterdam&#8230; there is a whole sub-community&#8230;<br />
Well, yeah. The possibilities are tremendous. And Wii is actually training us that way [to be as much engaged with the other players in the physical space as the virtual game], so it is going to happen.</p>
<p><strong>Tish Shute: </strong> What are the most exciting things you see at the moment, and for the next 12 months for augmented reality?</p>
<p><strong>Will Wright:</strong> Gosh. I mean I just think there is cool stuff happening in mapping, in general.</p>
<p><strong>Tish Shute:</strong> Like <a href="http://www.ted.com/talks/blaise_aguera.html" target="_blank">Blaise Aguera y Arcas&#8217;s demo of augmented reality maps at TED?</a></p>
<p><strong>Will Wright: </strong> Yeah, I thought the 3-D mapping with Microsoft&#8230; I think the next level of that is going to be really compelling.</p>
<p><strong>Tish Shute:</strong> You see game potentials in that?</p>
<p><strong>Will Wright: </strong> Yeah.Â  You start overlaying really cool game potential on top of that.</p>
<p><strong>Tish Shute:</strong> Might you get interested and do something?</p>
<p><strong>Will Wright:</strong> Oh, yeah. I mean in terms of games, AR is one of my biggest interests.</p>
<p><strong>Tish Shute: </strong>Are you allowed to talk about anything specific at all?</p>
<p><strong>Will Wright:</strong> Not yet, no. I mean a lot of our stuff is kind of in the experimental format right now, but definitely one of our strong interests is AR.</p>
<p><strong>Tish Shute: </strong> Yeah, absolutely.Â  We are over being tied to our desks to use computers -we want to be doing it anywhere, anytime, with anythingâ€¦</p>
<p><strong>Will Wright: </strong> Now the game is about the world instead of about dragons.Â  I love that.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/03/03/the-game-is-about-the-world-not-dragons-talking-with-will-wright/feed/</wfw:commentRss>
		<slash:comments>11</slash:comments>
		</item>
		<item>
		<title>Augmented Reality &#8211; Bigger than the Web: Second Interview with Robert Rice from Neogence Enterprises</title>
		<link>http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/</link>
		<comments>http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/#comments</comments>
		<pubDate>Mon, 03 Aug 2009 23:24:12 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR Platform for Platforms]]></category>
		<category><![CDATA[ARConsortium]]></category>
		<category><![CDATA[ARToolkit]]></category>
		<category><![CDATA[Augmented Reality Browsers]]></category>
		<category><![CDATA[augmented reality platforms]]></category>
		<category><![CDATA[augmented reality SDKs]]></category>
		<category><![CDATA[augmented reality toolsets]]></category>
		<category><![CDATA[Dr Chevalier]]></category>
		<category><![CDATA[Gavin Starks]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Green Tech AR]]></category>
		<category><![CDATA[Imagination AR Engine]]></category>
		<category><![CDATA[iphone and augmented reality]]></category>
		<category><![CDATA[iphone augmented reality]]></category>
		<category><![CDATA[iphone Video API and augmented reality]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Lumus]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[markers and Webcam AR]]></category>
		<category><![CDATA[Mobile AR]]></category>
		<category><![CDATA[MoMo]]></category>
		<category><![CDATA[nathan freitas]]></category>
		<category><![CDATA[Neogence Enterprises]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[Unifeye Augmented Reality]]></category>
		<category><![CDATA[wearable displays for augmented reality]]></category>
		<category><![CDATA[Web Squared]]></category>
		<category><![CDATA[Wikitude]]></category>
		<category><![CDATA[World as a Platform]]></category>
		<category><![CDATA[World Browsers]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4184</guid>
		<description><![CDATA[I first started talking to Robert Rice, CEO of Neogence Enterprises, Chairman of the AR Consortium, in 2008. Robert was already actively working on creating the world&#8217;s first global augmented reality network. But it took a few months before what Robert had said to me about the impending explosion of augmented reality into our lives really [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/whowhowhere.jpg"><img class="alignnone size-medium wp-image-4186" title="Questions and Answers signpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/whowhowhere-300x199.jpg" alt="Questions and Answers signpost" width="300" height="199" /></a></p>
<p>I first started talking to <a href="http://www.curiousraven.com/about-me/" target="_blank">Robert Rice</a>, CEO of <a href="http://www.neogence.com/#/home" target="_blank">Neogence Enterprises</a>, Chairman of the <a href="http://docs.google.com/AR%20Consortium"><span>AR Consortium</span></a><span>, in 2008. Robert was already actively working on creating the world&#8217;s first global augmented reality network. But it took a few months before what Robert had said to me about the impending explosion of augmented reality into our lives really sunk in &#8211; &#8220;this is going to be much bigger than the Web</span>!&#8221; he exclaimed.</p>
<p>By January 2009 I was convinced, and I posted my first interview with Robert, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it OMG Finally for Augmented Reality?..&#8221;</a> As I mentioned in the intro, I had recently tried out <a href="http://www.wikitude.org/" target="_blank">Wikitude</a> and <a title="Nat Mobile Meets Social DeFreitas" href="http://openideals.com/" target="_blank">Nathan Freitas&#8217;s</a> graffiti app on the streets of New York City and I was impressed. Now, 7 months later, augmented reality has not disappointed: there is an explosion of new applications, and the arrival of some of the first commercial and practical toolsets, SDKs, and APIs for aspiring developers.</p>
<p>For more on this see my previous post, <a title="Permanent Link to Augmented Reality&#8217;s Growth is Exponential: Ogmento &#8211; Reality Reinvented, talking with Ori Inbar" rel="bookmark" href="../../2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/">Augmented Reality&#8217;s Growth is Exponential: Ogmento &#8211; &#8220;Reality Reinvented,&#8221; talking with Ori Inbar,</a> which is an introduction to my series of interviews with the key players in augmented reality and founding members of the <a href="http://www.arconsortium.org/" target="_blank">ARConsortium</a> &#8211; <a href="http://www.int13.net/en/" target="_blank">Int13</a>, <a href="http://www.metaio.com/" target="_blank">Metaio</a>, <a href="http://www.mobilizy.com/" target="_blank">Mobilizy</a>, <a href="http://www.neogence.com/" target="_blank">Neogence Enterprises</a>, <a href="http://ogmento.com/">Ogmento</a>, <a href="http://www.sprxmobile.com/" target="_blank">SPRXmobile</a>, <a href="http://www.tonchidot.com/" target="_blank">Tonchidot</a>, and <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a>.</p>
<p>As I mentioned before<span>, </span><a href="http://www.sprxmobile.com/about-us/" target="_blank"><span>Maarten Lens-FitzGerald</span></a><span> of </span><a href="http://www.sprxmobile.com/" target="_blank"><span>SPRXmobile</span></a><span> told me the other day that my first </span><a href="http://docs.google.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank"><span>Interview with Robert Rice</span></a><span>, in January of this year, was a key inspiration for SPRXmobile to get started on the development of </span><a href="http://layar.eu/" target="_blank"><span>Layar &#8211; a Mobile Augmented Reality Browser</span></a><span>. Much more on Layar and </span><span>Wikitude</span><span> &#8211; world browser &#8211; in my upcoming interviews with </span><a href="http://www.sprxmobile.com/about-us/" target="_blank"><span>Maarten Lens-FitzGerald</span></a><span> and <a href="http://www.mamk.net/" target="_blank">Mark A. M. Kramer</a>, respectively</span>.</p>
<p>Recently, both Layar and Wikitude earned a mention in the white paper by Tim O&#8217;Reilly and John Battelle, <a href="http://www.web2summit.com/web2009/public/schedule/detail/10194" target="_blank">Web Squared: Web 2.0 Five Years On</a>. Web Squared is essential reading not only because it covers the underlying technological shifts of &#8220;Web Meets World,&#8221; which augmented reality is a vital part of; but, crucially, Web Squared focuses on how there is a new opportunity for us all:</p>
<p><strong>&#8220;The new direction for the Web, its collision course with the physical world, opens enormous new possibilities for business, and enormous new possibilities to make a difference on the world&#8217;s most pressing problems.&#8221;</strong></p>
<p>I am currently working on a post on Green Tech AR, one of the areas where augmented reality can play an important role &#8220;in solving the world&#8217;s most pressing problems.&#8221; Augmented reality has a lot to offer Green Tech development. As <a href="http://twitter.com/AgentGav" target="_blank">Gavin Starks</a> of <a href="http://www.amee.com/" target="_blank">AMEE</a> said at <a href="http://wiki.oreillynet.com/eurofoo06/index.cgi" target="_blank">Euro Foo in 2006</a>, &#8220;climate change would be much easier to solve if you could see CO2.&#8221;</p>
<p>But really useful Green Tech AR requires markerless object recognition (going beyond feature tracking and modified marker recognition), which is still hard to do, and a tight alignment of media/graphics with physical objects, in addition to quite a high level of instrumentation of the physical world. And for Green Tech AR to really shine, we are going to need innovators like Robert Rice who are working on, and solving, multiple really hard problems like:</p>
<p><strong> &#8220;</strong><strong>privacy, media persistence, spam, creating UI conventions, security, tagging and annotation standards, contextual search, intelligent agents, seamless integration and access of external sensors or data sources, telecom fragmentation, privilege and trust systems, and a variety of others</strong><strong>.&#8221;</strong></p>
<p>Recently Robert Rice <a id="ph56" title="presented" href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>presented</span></a><span> at </span><a href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>MoMo</span></a><span> Amsterdam. </span> Here is a drawing of him in action (<a href="http://www.flickr.com/photos/wilgengebroed/3591060729/" target="_blank">picture below</a> from <a title="Link to wilgengebroed's photostream" rel="dc:creator cc:attributionURL" href="http://www.flickr.com/photos/wilgengebroed/"><strong>wilgengebroed</strong></a>&#8216;s Flickr Stream).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRiceMoMOdrawing.jpg"><img class="alignnone size-medium wp-image-4185" title="RobertRiceMoMOdrawing" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRiceMoMOdrawing-300x184.jpg" alt="RobertRiceMoMOdrawing" width="300" height="184" /></a></p>
<p>In his Twitter feed, Robert Rice ( <a href="http://twitter.com/robertrice" target="_blank">@RobertRice</a> ) reminds us: &#8220;<span><span>By the way folks, what you see out there now as &#8220;augmented reality&#8221; is not what it is going to be in two years.&#8221; Robert plans to show the first public demo of his &#8220;platform for platforms&#8221; at <a href="http://gamesalfresco.com/ismar-2009/ismar-08/" target="_blank">ISMAR 2009</a>. </span></span></p>
<p>Robert is currently writing up a series of white papers. I got a preview of the first, &#8220;The Future of Mobile &#8211; Ubiquitous Computing and Augmented Reality.&#8221; Robert points out, <strong>&#8220;AR through the lens of the mobile industry and ubiquitous computing is almost overwhelming compared to AR as marker based marketing campaign.&#8221;</strong></p>
<p>I asked Robert, &#8220;What are the key take-aways for investors interested in the augmented reality field at the moment?&#8221;</p>
<p><strong><span>&#8220;First, Mobile AR is going to be bigger than the web. Second, it is going to affect nearly every industry and aspect of life. Third, the emerging sector needs aggressive investment with long term returns. Get-rich-quick startups in this space will blow through money and ultimately fail. We need smart VCs to jump in now and do it right. Fourth, AR has the potential to create a few hundred thousand jobs and entirely new professions. You want to kick start the economy or relive the golden days of 1990s innovation? Mobile AR is it.</span></strong></p>
<p><strong><span>Don&#8217;t be misguided by the gimmicky marketing applications now. Look ahead, and pay attention to what the visionaries are talking about right now. Find the right idea, help build the team, fund them, and then sit back and watch the world change. Also, AR has long term implications for smart cities, green tech, education, entertainment, and global industry. This is serious business, but it has to be done right. I&#8217;m more than happy to talk to any venture capitalist, angel investor, or company executive that wants to get a handle on what is out there, what is coming, and what the potential is. Understanding these is the first step to leveraging them for a competitive edge and building a new industry. Lastly, AR is not the same as last decade&#8217;s VR.&#8221;</span></strong></p>
<h3>Talking with Robert Rice</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRicepic.jpg"><img class="alignnone size-medium wp-image-4195" title="RobertRicepic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRicepic-201x300.jpg" alt="RobertRicepic" width="201" height="300" /></a></p>
<p><em><a href="http://www.flickr.com/photos/vannispen/3586765514/in/set-72157619022379089/" target="_blank">Picture of Robert Rice</a> at <a href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>MoMo</span></a> from <a href="http://www.flickr.com/photos/vannispen/"><strong>Guido van Nispen</strong></a>&#8216;s Flickr Stream</em></p>
<p><strong>Tish Shute:</strong> So perhaps we better start with an update on the state of play with Neogence?</p>
<p><strong>Robert Rice:</strong> Neogence is doing well actually. We don&#8217;t talk much about the fact that we are still a small startup and we face a lot of the usual obstacles related to that and being a small team. Fundraising has been extra difficult, mostly because people are just now beginning to see the potential in AR, but that is still colored by perceptions based on a lot of the gimmicky AR ad campaigns out there. Still, it is better than it was two years ago, when the idea of an AR startup was a bit of a joke to a lot of the VCs we talked to. However, we do have an agreement from a new venture fund in Europe (which we can&#8217;t talk about yet) for our first round of funding, but we don&#8217;t expect to close that for several months.</p>
<p>If all goes well, we hope to debut our first public demo at ISMAR 2009 in Orlando to select individuals and a few press folks. We might release a few viral videos before then that are conceptual and about what we are building in the long run, <span>but that depends on how things go over the next several weeks</span>.</p>
<p>We are also very active in looking for and building strategic partnerships and relationships with other companies, and this is not restricted to the augmented reality or mobile sector. As I have said before, we are looking at this as a long term business venture and the industry as something that will be bigger than the web itself within ten years. We are doing typical contract work and custom AR solutions to keep the cash flow going and build up the corporate resume a bit. So, if you want something done, and better than the stuff you are seeing now with all of the generic &#8220;look at our brand in AR with markers and a webcam&#8221; you should definitely give us a call.</p>
<p style="margin-left: 0pt; margin-right: 0pt;"><strong>Tish Shute:</strong> Just to clarify, because most of the recent press has been about browser-type AR like Wikitude and Layar, which are not AR in the purist sense &#8216;cos they do not have graphics tightly linked to the physical world. Neogence, if I am correct, is focused on building a true AR platform in the sense I just described?</p>
<p><strong>Robert Rice: </strong>Hrm, I<span> </span><span> have argued with a few others about the actual definition of AR. Some</span> people prefer a narrow and limiting view (3D overlaid on video), but I think in terms of the market and the end-user, it is better to have a wider definition. In that sense, AR is purely the blend of real and virtual, with or without full 3D overlaid on video. If we go with that, then Wikitude, Layar, Sekai, NRU, and others all fit into the AR definition.</p>
<p>Anyway, you are correct. We are building a true <span>platform for AR, and this is quite different from what others are marketing as AR browser &#8220;platforms.&#8221;</span></p>
<p><span>There are a few problems with the &#8220;AR Browsers&#8221; approach that no one seems to be noticing. </span>One is that they are all trying to get people to build new applications for their browsers, when they should be trying to get people to create content that they can share and browse.</p>
<p>Second, someone using Layar is not going to see anything that is designed for Sekai or Wikitude.</p>
<p>Third, the experiences are generally for one user. While I love all of these guys and think each of the teams has some real talent on it, the model is flawed until someone using Wikitude can see the same thing that someone using Layar or Sekai Camera is seeing (provided they are in the same physical location).</p>
<p><span>While we are working on our own client side technologies that we hope will be useful and integrated with every mobile device and AR browser out there, our core focus is on connecting everything and everyone together, and facilitating the growth of the industry with the tools to create content, applications, and so forth. We want to solve the really difficult technical problems (some of which most people haven&#8217;t even considered yet, because of the perspective they are looking at the potential of AR with), and make it easy for everyone else to do the cool stuff. We want to be the facilitators.</span></p>
<p>If you really want an idea of where we are going or some of what has inspired us, you have GOT to read Dream Park, Rainbows End, and The Diamond Age. If you have heard me speak anywhere or read my blog, you know that I am continually suggesting these and others.</p>
<p>Anyway, short answer, yes, we are building a true <span>platform for </span><span>ubiquitous mobile augmented reality, and we are absolutely the first to be doing so</span>.<span> I hope to demo some of this in October at ISMAR, with a full commercial launch next year (10/10/10 at 10:10am. Hehe, seriously). We will probably launch a website soon for people to start signing up and building a community now (especially if you want in on the beta testing of the whole kibosh).</span></p>
<p><strong>Tish:</strong> So just to clarify, how will Neogence&#8217;s approach differ and fit into the growing world of augmented reality tools that we have now, e.g., <a href="http://www.hitl.washington.edu/artoolkit/" target="_blank">ARToolkit</a>, <a href="http://www.imagination.at/en/?Projects:Scientific_Projects:MARQ_-_Mobile_Augmented_Reality_Quest" target="_blank">Imagination</a>, <a href="http://www.metaio.com/products/" target="_blank">Unifeye</a>?</p>
<p><strong>Robert:</strong> I guess you could say that we are trying to build the infrastructure for the global augmented reality network. This could be viewed as a service, or even a platform for platforms. If Neogence does its job right, anything you create using ARtoolkit, Unifeye, or Imagination would be applications you could <span>ultimately link to, integrate with, or deploy on or through</span>, what we are building, and not be tied to a specific set of hardware, browser, or walled garden.</p>
<p><strong>Tish: </strong><span>You mention Neogence is going to provide a platform for platforms. Without knowing the details, that sounds like a lot of centralization, which prompts the inevitable question: &#8220;Who owns the data?&#8221; Do you think other AR applications or provid</span>ers would resist a &#8220;Platform for Platforms?&#8221; I know the potential centralization power of Google Wave has already got people talking about these issues (one of the comments on my recent blog post was about how the Google Wave protocol may be interesting for at least some parts of augmented reality communication).</p>
<p><strong>Robert:</strong> It really depends on perception and how we end up <span>building it. We aren&#8217;t talking about creating a closed system. As far as who owns the data, it depends on what data we are talking about. For the most part, I think that if the end-user creates something, they should own it and have control over it. They should also be able to do what they want with it, independent of everything else. </span></p>
<p><span>This is one thing that proponents of the smart cloud and the thin/dumb client don&#8217;t like to talk about. It sounds great on paper, but when you start thinking about it, all that does is strip away power from the end user. Case in point&#8230; Amazon recently wiped every copy of George Orwell&#8217;s 1984 from all Kindle devices. They claimed they didn&#8217;t have the rights to distribute/publish it and it was available by accident. The scary thing, though, is that they literally went into every Kindle out there, found copies, and deleted them.</span></p>
<p><span>How would you like it if Microsoft suddenly decided to delete every copy of Microsoft Office? Or every file that had a .doc extension? That is a huge violation&#8230; we feel like we own what is on our computers. But with the whole cloud thing, your data is at the mercy of whoever is running the cloud servers. No privacy, no ownership, no control. And if the system breaks, all you will have is a pretty dumb device that can&#8217;t do much on its own. Now, that isn&#8217;t to say that the technical merits and benefits of a cloud model aren&#8217;t worth pursuing; they are.</span></p>
<p><span>But I think there needs to be some hybrid model. Don&#8217;t dumb down my computer or my smart phone; let&#8217;s keep pushing how much these devices can do. We should take full advantage of centralized and distributed systems, but in a hybrid mashup sense. That is what we are pursuing with our AR platform, while trying to protect ownership and intellectual property rights of the end user.</span></p>
<p><strong>Tish: </strong>Earlier today I was telling you how impressed I was by Google Wave &#8211; it is quite mind blowing to experience massively multiplayer real time interaction on what will be an open internet wide platform &#8211; Wave is breaking new ground here and more than one person has mentioned its potential role in AR to me (see <a href="http://www.ugotrade.com/2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/" target="_blank">the comments to my recent post on Ogmento</a>).</p>
<p>I know you are a strong advocate of this kind of real time shared experience being part of AR. But we are only just beginning to see it emerge via Wave on the existing web &#8211; what will it take to have this kind of real time shared experience in AR? We got briefly into the thick client, thin client, cloud versus P2P discussions &#8211; what is your approach to delivering a massively shared real time experience that, like Wave, is not confined to a walled garden?</p>
<p><strong>Robert:</strong> I&#8217;<span>m not a fan of any of those models as being stand alone or mutually exclusive. Again, the hybrid model with the best of both worlds is key. In the early stages of the emerging industry, you are likely to see some walled gardens (or perhaps a walled garden of walled gardens&#8230;). </span></p>
<p><span>No one knows how things are going to turn out in the next five to ten years and few people are thinking about it actively. For us though, I favor Alan Kay&#8217;s quote (pardon the paraphrasing): &#8220;To accurately predict the future, invent it&#8221;. That&#8217;s what we are doing. In the short term, there will be plenty of experimentation in the industry and a lot of model testing.</span></p>
<p><strong>Tish: </strong>Do you think, though, Wave protocols might be useful as at least part of the picture for AR standards? As you point out, open standards and open protocols are going to be vital for shared experiences of AR. Is it important to build off existing protocols to get the ball rolling, and what do you see as being the important early protocols for AR?</p>
<p><strong>Robert:</strong> I think for now, we will use a lot of existing protocols for communications and whatnot, as well as the usual standards for things like 3D models, animation, and so forth. This is only natural. However, as the industry and technology evolves, we will need entirely new ones. As far as I know there is no existing market standard for anything like the Holographic Doctor from Star Trek Voyager, and that type of thing is definitely in the pipeline for the future (sooner than you would think).</p>
<p><strong>Tish:</strong> All the excitement at the arrival of the browser-like mobile reality developments has been really great &#8211; I feel people are getting a taste for what it means to compute with anyone/anything, anywhere, and anytime.</p>
<p>Wikitude started the ball rolling. And with Wikitude.me it is the first to support user generated content. Now there are Layar and Sekai Camera also. But as you mentioned to me in an earlier chat, with Layar and Wikitude opening up, &#8220;there are probably a half dozen other apps coming out in short order with similar functionality (even the AR twitter thing has some similarities).&#8221;</p>
<p>What has been most exciting to you about these developments up to this point? What will these apps/platforms need to do to stand out in a crowd? Up to now, these browser-like AR experiences do nothing with close-by objects. Do you see &#8220;world browsers&#8221; with near object recognition coming out in the near future? Could Wikitude do this with an integration of SRengine or Imagination?</p>
<p><strong>Robert:</strong> Yes, Wikitude<span> or Layar could do this (integrate with something else for &#8220;near&#8221; AR) and it would be a step in the right direction. Tagging things in the real world is the basic functionality that will grow from text tags to photos, videos, 3D objects, and all sorts of other types of data and meta data. This gets really fun when that data is generated by the object itself. First is just giving people the ability to tag something and share that tag with their friends, everything else grows from that. This sort of functionality is probably the most exciting in terms of near future advancement.</span></p>
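<p>The tag-and-share model Robert describes can be made concrete with a small sketch. Assuming a hypothetical shared record format (every field name below is illustrative, not drawn from any real Wikitude, Layar, or Sekai Camera schema), a location-anchored tag might look like this, and any client that speaks the format could render it:</p>

```python
import json

# Hypothetical minimal record for a location-anchored tag -- the kind of
# shared format the AR browsers would need to agree on. All field names
# here are made up for illustration.
tag = {
    "id": "tag-0001",
    "lat": 40.7308,            # WGS84 latitude of the anchor point
    "lon": -73.9973,           # WGS84 longitude
    "type": "text",            # could grow to photo, video, 3d-model, ...
    "content": "Washington Square Arch",
    "creator": "tish",
    "visibility": "friends",   # where privilege/trust systems would hook in
}

serialized = json.dumps(tag)       # what a tag server would store and send
restored = json.loads(serialized)  # what any compliant client would parse
```

<p>The point of the sketch is only that interoperability starts with a shared record like this, plus server-to-server exchange of such records, rather than with per-browser application APIs.</p>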
<p><span>However, I think the idea of a stand-alone</span> browser platform is a bit awkward&#8230; unless you also consider Firefox a website browser platform. After all, you can create widgets (applications) for it. Anyway, the point is having access to the same data&#8230; if you put three people in a room, one for each browser, they should see and experience the same content, although the interface might be different (based on what browser and of course which hardware they are using). This means there needs to be some communication between whatever servers they are storing their data on (meaning, user tags) and some standard for how those tags are created.</p>
<p>Of course, if all they are doing is grabbing the GPS coordinates of the nearest subway station and telling you how far it is and in what direction, then they should all be able to see the same thing, regardless of the platform. But then, that isn&#8217;t really interesting is it? I could get the same info on a laptop with google maps.</p>
<p>This is part of the problem right now though&#8230;no one seems to be thinking about the bigger picture much. All of the effort is either on making the next cool ad campaign for a car or a movie, or creating a tool to tell you where the nearest thingamajig is, but in a really cool fashion on a mobile device.</p>
<p>No one is talking much about filtering data, privilege systems, standards, third-party tools, interoperability, and so on. There is also little conversation about where hardware is going. Right now everyone is developing software based on what hardware is available. This needs to change, so that hardware is being developed to take advantage of new software coming out (this happened in the PC industry a while back, and growth accelerated dramatically).</p>
<p>These are some of the reasons why I led the effort to start the AR Consortium. We brought CEOs from 8 different AR companies and startups together to start talking about these issues. We are still getting organized and have plans to expand the membership to other companies, but we want to do this right and we aren&#8217;t rushing things. The important thing is that we have started and there is at least a line of communication open now, where there wasn&#8217;t before.</p>
<p>I would expect to see the early movers expanding what they offer very soon, and they will probably lead the way in the short term. Definitely keep an eye on the companies involved in the AR Consortium. There are lots of very smart and motivated people there, and they are far ahead of all the experimental dabbling in AR we are beginning to see on youtube, twitter, and elsewhere.</p>
<p><strong>Tish: </strong>When we had a discussion earlier about the basics of an AR platform and an AR browser, you talked about the difference between tools, a platform, and an AR browser &#8211; like Wikitude and Layar &#8211; which should be about features/functionality, e.g. creating treasure hunts, AR geocaching, invisible AR yellow sticky notes you can leave at restaurants you don&#8217;t like, etc. You also noted it should let you explore (browse) multiple formats and open content for AR &#8211; any data, information, or media that is linked to something in the real world, and the visualization of/interaction with the same.</p>
<p>Wikitude<span> is a stepping stone to a true browser by your definition. But are we also seeing what you would define as an AR platform emerging &#8211; Unifeye, Wikitude (you can recap your definition if you like too)?</span></p>
<p>I think Wikitude hopes to provide the lego blocks for augmented reality readers, browsers, applications, tools, and platforms?</p>
<p><strong>Robert:</strong> I expect some segmentation among the various AR companies that are out now, as they find their individual strengths and focus on them. Some will emphasize the client software (the browser), others will develop robust tools for creating content, SDKs/APIs will advance and facilitate rapid development of applications, etc. Neogence is ultimately working on the glue in the middle that ties everything together, makes it massively multiuser, persistent, and ubiquitous. Things like Unity3D have the potential to fill a need in the middleware space.</p>
<p><strong>Tish:</strong> I know <a href="http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/" target="_blank">Blair MacIntyre</a> (see my interview with Blair here) and others are using Unity3D as an AR client. Could Unity3D become increasingly important?</p>
<p><strong>Robert:</strong> It has the potential to become a favored middleware for providing the rendering layer. It already works nicely in regular browsers, and on several mobile platforms. Why code all the graphics rendering stuff from scratch when you can just license something and extend its features with AR functionality?</p>
<p><strong>Tish:</strong> Now to ask your own question back to you! There seems to be a lot of reason to think that, eventually, there will be the kind of access to the iphone video API that augmented reality really requires &#8211; and by that I mean more than we will get with OS 3.1, which is rumored to deliver only about half of what we really need for AR on the iphone: &#8220;not truly useful when you want to align video with graphics.&#8221; So:</p>
<p><em>&#8220;The iphone&#8230;future or failure? Seemingly anti-developer stance regarding augmented reality, and only a sliver of the global market share. Are we letting the short-term glitz of Apple and the iPhone fad pull us in the wrong direction? Shouldn&#8217;t we be focusing on symbian devices that have the lion&#8217;s share of the market? Or should we be looking more at either other OSs (winmobile, android), or not at all, and trying to create a new platform that is more MID and less smart phone, with a hardware partner?&#8221;</em></p>
<p><strong>Robert:</strong> Apple and the iphone are a bit problematic right now. There is no way I can go to a venture capitalist (at least in North America) and say hey we are building awesome AR applications for winmobile or symbian&#8230;they would either laugh or they simply wouldn&#8217;t get it. There is this false perception that the iphone is the ultimate mobile device, it is the sexiest, and the only thing that people want. Everyone wants a demo on the iphone, the media is mostly interested in iphone developments, and the apple fanatic market could give a fig about other devices. Other devices may have a larger market share or even better hardware, but we have to focus on the iphone right now at least in the demo stage to get any market attention and traction worth the time and effort.</p>
<p>In the future though, unless Apple changes its stance with their SDK and APIs, and starts adding hardware that is key for mobile AR (beyond what is there now), the market will move on without them. <span>This is a really easy decision to make given Apple&#8217;s draconian policies and the fact that their percentage of the global market is minuscule. The smart companies are looking at the whole picture and not putting all of their eggs in the Apple basket.</span></p>
<p>Of course, once the wearable displays are commercially viable everything changes. Wearable computers with small screens or even no screens are going to be what everyone wants. The interface will go from handheld touch screens to virtual holographic interfaces that you interact with using your bare hands.</p>
<p>So for now, <span>(the immediate short term), </span>it&#8217;s all about the iphone. Taking mobile ubiquitous AR to the global market and building for the future will be based on something else. Hardware risks becoming a commodity or a closed platform. Do you really want to buy the Apple iGlasses and only see AR content that is compatible, while your best friend has a pair of WinGlasses and sees something entirely different? No. The hardware and the client software (what people are calling the AR browser now) will become common, and it won&#8217;t matter what brand you use; they will all be accessing the same content.</p>
<p>But at least for the foreseeable future, we are building software for specific hardware, and the sexiest mobile on the block is the iphone. The second someone comes out with something much better and the paradigm shifts (software driving hardware instead of vice versa), everything changes.</p>
<p><strong>Tish:</strong> How is the quest for sexy AR eyewear going? I know we were checking out <a href="http://www.masunaga1905.jp/brand/teleglass/" target="_blank">the Japanese eyewear</a> with Adam Johnson from <a href="http://genkii.com/" target="_blank">Genkii</a> just now. For the Neogence project &#8211; as you are going for a fully developed model of AR, doesn&#8217;t this necessitate going beyond the iphone and getting the hardware companies moving on the eyewear?</p>
<p><strong>Robert:</strong> The guys making wearable displays really need to get off the pot and stop paying lip service to mobile AR. If they don&#8217;t do something quickly, I,<span> and others, are</span> going to be scouring the planet looking for someone capable of building the lightweight, stylish wearable displays with transparent lenses we are begging for. We aren&#8217;t going to be waiting around for hardware anymore. The AR Pandora&#8217;s box has been opened. I should note that many of us (AR Consortium members) have had less than pleasant experiences or communications with the half dozen or so companies that are making wearable displays. Either their visual design is terrible, the materials feel flimsy, the field of view is limited, or the companies are preoccupied with other business and government contracts. Any attention to the growing AR market is an afterthought and, in a few cases, condescending. AR is going to be a billion-dollar industry in a very short time, and these guys are just leaving money on the table. If they were smart, they would be begging the CEOs from the AR Consortium to fly out to their offices and collaborate on building a pair of wicked sick glasses. The smart phone manufacturers should be doing the same thing, but I have to say that they at least seem to have some ambition and zeal to create better devices, so I can&#8217;t really complain too much there.</p>
<p>Anyway, to answer the rest of your question, we have to assume that the hardware guys, especially regarding the eyewear, are going to take a long time to develop and release the things we need for the ultimate AR experience. So, our goal is to start building things now for what is available. That means scaling things down and handicapping what AR can do so it works on the &#8220;sexy&#8221; iphone. The important thing, though, is to start creating applications -now- so when the glasses are commercially available, there will be a wealth of content for people to access and use on day one.</p>
<p>As long as Apple isn&#8217;t playing nice,<span> </span>it is going to hurt everyone. <span>Is it any surprise that they shut down Google Voice? </span> There is a huge opportunity for someone to step up and leapfrog the rest of the industry. Give us the hardware and we will create amazing software for it. Don&#8217;t compete with the iphone, surpass it.</p>
<p><strong>Tish: </strong>What is the state of play of current AR technology and toolkits?</p>
<p><strong>Robert:</strong> The current crop of AR technology and toolkits is absolutely critical for this stage of the industry, and everyone should be leveraging it as much as possible. I talk down marker and image based tracking a lot, but I also like to point out that it is the necessary baseline that the industry is going to be built on. The problem is that there is only so much you can do with marker-driven apps, and as creative people and marketing types start conceptualizing about all sorts of cool stuff for the future, they risk setting the expectations too high. It is one thing to show someone the future; it is another to say this is the future and it&#8217;s happening right now. This is why I cringe every time I see a conceptual video presented as &#8220;our product DOES this&#8221; instead of &#8220;our product WILL DO this.&#8221; <span>Something that simple can still cause the butterfly effect of raising expectations too high and contribute to overhyping.</span></p>
<p><strong>Tish: </strong>One of the things that seems very exciting about the new <a href="http://ogmento.com/" target="_blank">Ogmento</a> partnership is that experienced content producers <a id="squu" title="Brad Foxhoven" href="http://www.blockade.com.nyud.net:8080/about/about-blockade" target="_blank">Brad Foxhoven</a> and <a id="odvk" title="Brian Seizer" href="http://brianselzer.com/">Brian Selzer</a> from <a id="xow_" title="Blockade" href="http://www.blockade.com/" target="_blank">Blockade</a> are now taking a leading role in AR. What are the most exciting directions for content that you see emerging for AR in the next 12 months?</p>
<p><strong>Robert:</strong> Virtual (well, augmented) pets, and multiuser mobile AR games (2-4 people) are probably going to lead in the next 12 months for content. Easy, accessible, engaging.</p>
<p><strong>Tish: </strong>And are you at Neogence also involved in content partnerships?</p>
<p><strong>Robert:</strong> Yes, we are in the process of finalizing some content partnerships with an eye for long-term relationships. We are specifically looking for partners that want to find substantive ways to leverage AR technology, and not use it as a superficial gimmick or attraction that wears off after five minutes. I&#8217;m still cringing over the Procter &amp; Gamble Always campaign with AR.</p>
<p><strong>Tish:</strong> So back to your observation about some of the tricky problems regarding creating a true global massively multiuser, ubiquitous, mobile AR platform &#8211; what are some of the main obstacles to this mission, in your view? (Aside from getting investment!)</p>
<p><strong>Robert:</strong> Trying to explain it to people. The technical problems we can handle or have already solved. But trying to communicate what exactly we are doing is still tough. Not because it is overly complicated, but rather because it is so new and different. People are having a hard time grasping augmented reality beyond marker/webcam.</p>
<p><strong>Tish: </strong>Which AR tools are most important right now?</p>
<p><strong>Robert:</strong> Content is critical right now, to show what the technology is capable of and to continue building the presence of augmented reality in the public mind. The big benefit of integrated/unified platforms now is speed of development for content. I think that flash artoolkit + papervision is rocking the planet right now. It is accessible, easy to learn, and lets people create something very quickly. More tools and middleware are coming out, and this increases options for designers and developers.</p>
<p><strong>Tish: </strong>What are your favorite papervision apps?</p>
<p><strong>Robert: </strong>Hrm, I don&#8217;t have a favorite papervision app just yet, although I think the tech is solid. I expect to see a lot of stuff built on that platform in the near future, especially as more ad agencies get on the bandwagon and start telling their IT guys to learn how to program flash so they can make something. Have you seen www.ronaldchevalier.com? Not so much for the actual AR stuff, but because the whole thing is just brilliant. It&#8217;s exactly what some cult-figure spiritual guru would do with AR. I wish I had thought of it first, actually. This is probably one of the best -seamless- implementations of AR in marketing where it fits&#8230;it isn&#8217;t just jammed in there for the sake of saying they used AR.</p>
<p><strong>Tish:</strong> Do you think Apple is going to open the iphone to the full potential of augmented reality anytime soon? A lot of expectations have been raised.</p>
<p><strong>Robert:</strong> Apple is like that guy who has a party at his house and owns a really awesome, state-of-the-art home theater in his basement, but makes everyone watch a movie in the living room on a regular TV with a VCR.</p>
<p>They need to get over themselves and quit being a wet blanket. Otherwise, we are taking the beer and pizza we brought, and going to someone else&#8217;s house. <span>Sorry, the Apple thing is a bit of a sore point with me.</span></p>
<p><strong>Tish:</strong> But will people leave all that candy and soda at the appstore?</p>
<p><strong>Robert:</strong> I tell you what though, there is an opportunity for certain mobile phone manufacturers to give me a call and start talking to Neogence and the other members of the Consortium. We have some ideas and specs that could have a radical impact on the mobile market and stuff the iphone in a box. Hint hint.</p>
<p><strong>Tish:</strong> So what is your vision for the AR Consortium? I know it kicked off with a letter to Apple about the video API. What is the next step? There was a lot of hope that this year would be big for MIDs, but this really hasn&#8217;t happened yet &#8211; do you think there is hope for a MID take-off despite the lousy economy?</p>
<p><strong>Robert: </strong>MIDs? No, not yet. Smart phones are too lucrative and too hot. It isn&#8217;t time yet for the MID to go mainstream. For that to happen, there needs to be a driving need (cough, ubiquitous AR, cough).</p>
<p>The AR consortium is mostly an informal affiliation. I expect that representatives from each member will probably meet at every significant conference to catch up over drinks. We are also going to be planning for our own members conference at least once a year. That will happen after we expand the membership though.</p>
<p>The main idea behind the consortium though was to open up a channel of communication between the CEOs so we could work together on standards, solving problems, collaborating, forming some partnerships, and using the collective to bang on the doors of companies like Apple and others. There is power in a group.</p>
<p><strong>Tish:</strong> You mentioned there is a whole long conversation we can have about getting the eyewear. As you point out, true AR eyewear changes everything. Can you give a little road map of where this has to go?</p>
<p><strong>Robert: </strong>There are essentially four or five main approaches, depending on whether the lenses are special or just plain. You would normally want them to be plain, so people with prescription lenses wouldn&#8217;t have problems and would have the option to switch them out. Some types use a more prismatic approach for top-down projection, or a corner piece mounts lasers and bounces them off the lens into the eye. Another approach is embedding OLEDs or something else into the lenses themselves.</p>
<p>I really like the <a href="http://www.lumus-optical.com/" target="_blank">Lumus</a> approach, but their product design isn&#8217;t quite there yet. If the wearables don&#8217;t look cool, people won&#8217;t use them. To be honest, if I had the money, I&#8217;d probably ask the Art Lebedev guys to design them based on someone else&#8217;s optical engineering. They designed the <a href="http://www.artlebedev.com/everything/optimus/" target="_blank">optimus maximus</a> keyboard&#8230; brilliant industrial designers, loaded with engineers too. If these guys couldn&#8217;t build the glasses and make them look damn bad ass, I&#8217;d be shocked. Heck, I bet they could build the next gen MID while they were at it.</p>
<p><strong>Tish: </strong>Getting the hardware innovation and software innovation feeding into each other would be really great.</p>
<p><strong>Robert</strong>: Absolutely.</p>
<p><strong>Tish</strong>: That would push the eyewear forward too wouldn&#8217;t it?</p>
<p><strong>Robert:</strong> All it takes is one, and then the competitive landscape would fire right up.</p>
<p><strong>Tish:</strong> What applications would accurate GPS enable?</p>
<p><strong>Robert:</strong> Everything. For example, you know exactly where the phone is and which way it is facing. That means you can put it on a table and hit a button, then move it somewhere else and do the same thing. In a few minutes, you have a nearly accurate &#8220;mental&#8221; model of the whole place. Now you go back and start dropping virtual flower pots everywhere.</p>
<p>This is one area where I think the smart phone guys are missing the boat and taking the cheap route. It is possible to have very accurate GPS (down to a six-inch area) with better chips and firmware, but it is cheaper to stick in old tech. Most apps today don&#8217;t need that hyper accuracy, so they aren&#8217;t bothering. Mobile AR though, that&#8217;s a different story.</p>
<p>With that level of accuracy, you would know exactly where the mobile device is, so all you would need to know is the direction it is facing (orientation), and you could solve one of the problems with registering exactly where 3D objects and augmented media is (it is more complicated than I am describing it, but we don&#8217;t need to get into that much detail here). You wouldn&#8217;t need markers anymore.</p>
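<p>The registration idea described here &#8211; accurate position plus orientation in place of markers &#8211; can be sketched in a few lines of Python. This is an illustrative simplification only (a linear angle-to-pixel mapping; the function name and parameters are mine, not any shipping API):</p>

```python
def screen_x(device_heading_deg, object_bearing_deg, fov_deg=60.0, screen_w=640):
    """Horizontal pixel column at which a geo-registered object should be drawn.

    device_heading_deg: compass heading the camera is facing
    object_bearing_deg: bearing from the device to the tagged object
    Returns None when the object is outside the camera's field of view.
    """
    # Signed angular offset between bearing and heading, wrapped into (-180, 180]
    delta = (object_bearing_deg - device_heading_deg + 180.0) % 360.0 - 180.0
    if abs(delta) > fov_deg / 2:
        return None  # not visible through the camera
    # Linear angle-to-pixel mapping; a real pipeline would use the camera's
    # projection model, but this conveys the idea
    return screen_w / 2 + (delta / (fov_deg / 2)) * (screen_w / 2)
```

<p>With inch-accurate GPS supplying position and a compass supplying heading, a calculation like this decides where on screen a virtual object goes &#8211; no marker required.</p>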
<p><strong>Tish: </strong>Isn&#8217;t Wikitude doing this with Wikitude.me, their tagging app?</p>
<p><strong>Robert:</strong> Not really. That type of approach is on a very large scale, using the accelerometers, compass, and GPS to determine where you are and what is in the distance. They (and others like Layar) don&#8217;t handle &#8220;near&#8221; AR. They effectively poll your GPS, then check a database to see what is nearby and at what degree/distance it is, and then they draw a representation on the screen. They don&#8217;t even need the mobile device&#8217;s camera at all.</p>
<p>Even if they did things up close, it&#8217;s still based on finding landmarks or on things that are broadcasting their location. For example, if they were standing near me, they might get &#8220;robert, 37 degrees, 15 meters away,&#8221; but they wouldn&#8217;t be tracking me exactly as I walk around or have the ability to overlay graphics on ME.</p>
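<p>The poll-GPS-then-check-a-database pipeline described above boils down to two standard geodesy formulas: great-circle distance (haversine) and initial bearing. Here is a sketch in Python; the label format simply mirrors the &#8220;robert, 37 degrees, 15 meters away&#8221; example, and the function names are mine:</p>

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial bearing (degrees) from point 1 to point 2."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    # Haversine formula for the distance
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    # Initial bearing, normalised to [0, 360)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = math.degrees(math.atan2(y, x)) % 360.0
    return dist, bearing

def label(name, here, there):
    """Render the kind of degree/distance annotation a geo-AR browser draws."""
    d, b = distance_and_bearing(here[0], here[1], there[0], there[1])
    return "%s, %.0f degrees, %.0f meters away" % (name, b, d)
```

<p>This is all a large-scale geo-AR browser needs, and it is also why such an approach cannot track a moving person up close: it only relates two reported coordinates, it never sees the target.</p>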
<p><strong>Tish:</strong> I retweeted your <a title="#ar" href="http://twitter.com/search?q=%23ar">#ar</a> marketing using ARToolkit + flash (markers/webcams) = Photoshop pagecurl &lt;six months. Bad design kills innovation. I know you like <a href="http://ronaldchevalier.com/" target="_blank">Dr Chevalier </a>though! What are some of the other AR marketing projects that you like? What would you like to see in terms of innovation in the next 6 months?</p>
<p><strong>Robert:</strong> The marker/webcam approach is already becoming overused and cliche (tremendously fast). Older readers will remember the ubiquitous photoshop page curl that adorned nearly every website and graphic on the internet back in the day. It was horrible. Yes, the Dr. Chevalier stuff cracks me up.</p>
<p>I want to see some big companies or ad agencies really try to do something different with AR, preferably mobile. Take some risks, do something different. Don&#8217;t follow the crowd. Innovation? I want to see some wearable displays with transparent lenses, I want a mobile device specifically designed for ubiquitous AR, I want to see some experimenting with AR in the green tech sector, and I&#8217;d like to see someone get that GiFi wireless technology from that researcher in Australia and jam it into a smart mobile. I would also like my flying car and lunar vacation now, thank you. It is almost 2010 and no one has found that black obelisk yet.</p>
<p><strong>Tish:</strong> So a few closing thoughts! What do you see as the next big thing? Hopes for the AR Consortium? Biggest obstacle for commercial AR? And what is the coolest thing you have seen this year?!</p>
<p><strong>Robert:</strong> The next big thing is what I&#8217;m working on hahaha. I hope the AR Consortium will grow and be the active catalyst in making AR mainstream, practical, and world changing.</p>
<p>The biggest obstacle is making sure that the right funding finds the right developers to develop the right technology and create kick ass applications.</p>
<p>The coolest thing I&#8217;ve seen this year would probably be <a href="http://vimeo.com/5595869" target="_blank">the facade projection stuff</a> (see below). Now, imagine that, but without the projector. That&#8217;s part of what I envision for AR in the future.</p>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="225" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=5595869&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /><embed type="application/x-shockwave-flash" width="400" height="225" src="http://vimeo.com/moogaloop.swf?clip_id=5595869&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/feed/</wfw:commentRss>
		<slash:comments>20</slash:comments>
		</item>
		<item>
		<title>Mobile Augmented Reality and Mirror Worlds: Talking with Blair MacIntyre</title>
		<link>http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/</link>
		<comments>http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/#comments</comments>
		<pubDate>Fri, 12 Jun 2009 05:07:01 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[3D mirror world]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Android and augmented reality]]></category>
		<category><![CDATA[ARhrrrr]]></category>
		<category><![CDATA[Art of Defense]]></category>
		<category><![CDATA[augmented reality on the gphone]]></category>
		<category><![CDATA[augmented reality on the iphone]]></category>
		<category><![CDATA[augmented reality shooter games]]></category>
		<category><![CDATA[Aware Home Research]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bragfish]]></category>
		<category><![CDATA[Dark Star]]></category>
		<category><![CDATA[geolocation]]></category>
		<category><![CDATA[geotagging]]></category>
		<category><![CDATA[google earth]]></category>
		<category><![CDATA[handheld AR games]]></category>
		<category><![CDATA[handheld augmented reality]]></category>
		<category><![CDATA[Immersive augmented reality]]></category>
		<category><![CDATA[Information Landscapes]]></category>
		<category><![CDATA[instrumented homes]]></category>
		<category><![CDATA[instrumented world]]></category>
		<category><![CDATA[iphone 3Gs]]></category>
		<category><![CDATA[iphone games]]></category>
		<category><![CDATA[ISMAR]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[location aware applications]]></category>
		<category><![CDATA[minimally immersive augmented reality]]></category>
		<category><![CDATA[MMO of the real world]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[MS Virtual Earth]]></category>
		<category><![CDATA[NVidia Tegra devkits]]></category>
		<category><![CDATA[Open Sim]]></category>
		<category><![CDATA[OpenSim and Augmented Reality]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[outdoor tracking and markerless AR]]></category>
		<category><![CDATA[parallel mirror worlds]]></category>
		<category><![CDATA[persistent immersive mirror worlds]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[Sun's Wonderland]]></category>
		<category><![CDATA[Texas Instrument's OMAP3 devkits]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[Unity3D]]></category>
		<category><![CDATA[Unity3D and Augmented Reality]]></category>
		<category><![CDATA[virtual pets]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3691</guid>
		<description><![CDATA[Blair MacIntyre is one of the original pioneers of augmented reality and an extraordinary amount of creative work is coming out of his Augmented Environments Laboratory at Georgia Tech &#8211; see YouTube videos here. The screenshot below is from ARhrrrr, a very impressive augmented reality shooter game created at Georgia Tech Augmented Environments Lab and [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf.jpg"></a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf2.jpg"><img class="alignnone size-full wp-image-3732" title="arf2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf2.jpg" alt="arf2" width="259" height="239" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/droppedimage1.jpg"><img class="alignnone size-full wp-image-3725" title="droppedimage1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/droppedimage1.jpg" alt="droppedimage1" width="271" height="240" /></a></p>
<p><a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> is one of the original pioneers of augmented reality, and an extraordinary amount of creative work is coming out of his <a href="http://www.cc.gatech.edu/ael/" target="_blank">Augmented Environments Laboratory</a> at Georgia Tech &#8211; see <a href="http://www.youtube.com/user/AELatGT" target="_blank">YouTube videos here</a>. The screenshot below is from <strong>ARhrrrr</strong>, a very impressive augmented reality shooter game created at the Georgia Tech <span class="description">Augmented Environments Lab </span>and <span class="description">Savannah College of Art and Design </span>(SCAD-Atlanta), and produced on the <strong>NVidia Tegra devkits</strong> &#8211; <a href="http://www.youtube.com/watch?v=cNu4CluFOcw" target="_blank">watch the demo here</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-63.png"><img class="alignnone size-medium wp-image-3799" title="picture-63" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-63-300x169.png" alt="picture-63" width="300" height="169" /></a></p>
<p>Blair has spent much of his career working on immersive augmented reality and more recently the integration of augmented reality with mirror worlds. Blair explains:</p>
<p><strong>&#8220;</strong><strong>I am interested in the intersection of mobile devices &#8211; whether they are head mounts or handhelds &#8211; and parallel mirror worlds&#8230;I think that parallel mirror worlds are a direct manifestation of the intersection of the virtual world we now live in (the web) and geotagging. As more and more information is tied to place, and as more of our searching becomes place-based, we will want to do those searches about places we are not at. A 3D mirror world may provide one interface to that data. Want to plan your trip to London? Go there virtually and look around, see what is there (both physically and virtually), teleport between areas you want to learn about, and so on. More interestingly, talk to people who are there now, and retrieve your location-based notes when you are on your trip.&#8221;</strong></p>
<p>But, at a time when many augmented reality developers are focusing on AR apps for smart phones, including Blair (the picture on left opening this post is Blair&#8217;s augmented reality <a href="http://www.youtube.com/watch?v=_0bitKDKdg0&amp;feature=channel_page" target="_blank">iphone app ARf)</a>, I was interested in finding out from Blair what the state of play was for the real deal Rainbow&#8217;s End style AR, as well as the potential he sees in smart phones to mediate meaningful AR experiences.</p>
<p>There is an enormous amount of innovation in mapping our world &#8211; see my post, <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">&#8220;Location Becomes Oxygen at Where 2.0 and WhereCamp,</a>&#8221; and <a href="http://gamesalfresco.com/2009/05/26/where-2-0-the-world-is-mapped-now-use-it-to-augmented-our-reality/" target="_blank">Ori Inbar&#8217;s Where 2.0 conference roundup</a>. But as Ori notes, to move augmented reality forward:</p>
<p><strong>My point is not a shocker: all we need is to tap into this information and bring it, in context, into people&#8217;s field of view.</strong></p>
<p>And this is what Blair MacIntyre&#8217;s work is all about.</p>
<h3>Talking With Blair MacIntyre</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-62.png"><img class="alignnone size-medium wp-image-3728" title="picture-62" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-62-300x257.png" alt="picture-62" width="300" height="257" /></a></p>
<p><strong>Tish Shute:</strong> There do seem to be broader implications to augmented reality today than when this term was first coined. I am interested to have your perspective on how augmented reality may go beyond some of our early definitions?</p>
<p><strong>Blair MacIntyre: I still think the original definition of the term is useful: media (typically graphics) tightly registered (aligned) with the physical world, in real time. Many people talk about many things that relate virtual worlds to places, spaces, objects and people. There is room for many of them, and they don&#8217;t all have to &#8220;be&#8221; augmented reality. I like using Milgram&#8217;s definition of Mixed Reality as everything from the physical world (at one end) to the virtual world at the other; it&#8217;s a spectrum, and augmented reality just sits at one point.</strong></p>
<p><strong>The reason I like the old definition is I believe there is something special about graphics that are tightly, rigidly aligned with the physical world. When things appear to stick to the world at an obviously identifiable location, people can start leveraging their natural perceptual, physical and social abilities and interact with the mixed world as they do the physical world. We&#8217;ve found this with the two studies we&#8217;ve done of tabletop AR games (<a href="http://www.augmentedenvironments.org/lab/research/handheld-ar/artofdefense/" target="_blank">Art of Defense</a> and <a href="http://www.youtube.com/watch?v=w3iBrj_zfTM&amp;feature=channel_page" target="_blank">Bragfish</a>); one key to those games is that the graphics were tightly aligned with identifiable landmarks in the physical world (the gameboard).</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/aod-sandbox-video-15.png"><img class="alignnone size-medium wp-image-3729" title="aod-sandbox-video-15" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/aod-sandbox-video-15-300x225.png" alt="aod-sandbox-video-15" width="300" height="225" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/imgp0782-2.jpg"><img class="alignnone size-medium wp-image-3782" title="imgp0782-2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/imgp0782-2-300x225.jpg" alt="imgp0782-2" width="300" height="225" /></a></p>
<p><em><a href="http://www.augmentedenvironments.org/lab/research/handheld-ar/artofdefense/" target="_blank">Art of Defense</a> (pic on left) <a href="http://www.youtube.com/watch?v=w3iBrj_zfTM&amp;feature=channel_page" target="_blank">Bragfish</a> (pic on right)<br />
</em></p>
<p><strong>Tish:</strong> I know that you are involved with <a id="b-c6" title="ISMAR 2009" href="http://www.ismar09.org/" target="_blank">ISMAR 2009</a>, which is the key US augmented reality conference. What do you think will be the hot themes, applications, and innovations at this year&#8217;s conference? Do you think this will be the year that AR really breaks out of eye candy into truly useful and sustained experiences?</p>
<p><strong>Blair: Unfortunately, I won&#8217;t be involved this year. I was supposed to be helping run the technical program, as well as the art/media program, but sickness in my family prevented me from having the time, so I am not helping this year.</strong></p>
<p><strong>First, I would not agree with the implication of the last question &#8212; I don&#8217;t think AR has just been eye candy up to now. I do agree that the &#8220;high profile&#8221; uses of it have largely been that, which is mostly because of the limits of the technology. I don&#8217;t think we&#8217;ll see huge changes in that regard by ISMAR this year. However, we will hopefully see a mixing of communities that hasn&#8217;t happened at ISMAR before, and I do believe that this year (independent of ISMAR) we will see more and more AR apps. Whether they go beyond eye candy is still a question. I&#8217;m hoping that some folks (including myself and other ISMAR folks!) will help push AR in new directions. But I also expect many folks new to ISMAR and AR to play a big role, because it is this new blood, especially those folks with real problems to solve, new art and game ideas, and a fresh perspective, that will open new doors.</strong></p>
<p><strong>Tish:</strong> You have been working on integrating augmented reality with virtual worlds. You mentioned that the way you use <a href="https://lg3d-wonderland.dev.java.net/" target="_blank">Sun&#8217;s Wonderland</a> is really about pulling the virtual world into the real world, i.e., Wonderland &#8220;is just a place to put data.&#8221; How is your use of the persistent virtual space different from what we have become accustomed to call virtual worlds?</p>
<p><strong>Blair: The approach we are taking in our project at Georgia Tech is to use the virtual world as the central hub of the information space, and allow the virtual world to be the element that enables distributed workers to collaborate more smoothly. This is work we are doing with Sun and Steelcase (and the NSF), and is an outgrowth of a project (the InSpace project) that&#8217;s been going on for a few years.</strong></p>
<p><strong>What we are trying to do is use mixed reality and ubicomp techniques to pull as much of the physical activity as we can into the virtual world, and then reflect that activity back out to the different participants as best suits their situation. So, folks in highly instrumented team rooms will collaborate in one way, and their activity will be reflected in the virtual world; remote participants (e.g., those at home, or in a cafe or hotel) may control their virtual presence in different ways, but the presence of all participants will be reflected back out to the other sides in analogous ways. We may see ghosts of participants at the interactive displays, or hear their voices in 3D space around us; everyone will hopefully be able to manipulate content on all displays and tell who is making those changes.</strong></p>
<p><strong>A secondary benefit, I hope, is that by putting the data in the virtual world and making that the place that gives you more powerful and flexible access to the data (e.g., by leveraging space and giving access to history), distributed teams will begin to have the virtual space become a place they go to work, bump into each other and have those casual contacts co-located workers take for granted.</strong></p>
<h3><strong>Creating the Information Landscape of the Future</strong></h3>
<p><strong>Tish: </strong>At the end of <a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">my interview with Ori Inbar</a> he said, in order to have a ubiquitous experience <em>&#8220;you&#8217;ll need to 3d map the world. Google earth like apps are going to help but it is not going to be sufficient. So let&#8217;s leverage people. Google became successful in part by making people work with them. Each time you create a link from your blog to my blog their search engines learn from it. So let&#8217;s find ways to make people create information that can be used for AR.&#8221;</em> What ways do you think people can create information that can be used for AR?</p>
<p><strong>Blair: I think the big part of that is the creation of models and environments, the necessary &#8220;baseline&#8221; for specifying experiences. Google and Microsoft are clearly working toward this; recent videos from Microsoft show them starting to move the photosynth work toward Virtual Earth. Similarly, I came across a page where people are finally starting to mine geotagged Flickr images to create models [see my post, <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">&#8220;Location Becomes Oxygen,&#8221;</a> and <a href="http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/" target="_blank">here</a> for more on the <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a> project from Flickr]. It&#8217;s that kind of thing that will be useful first; using the data we all create to enable modeling and (eventually) vision-based tracking in the real world.</strong></p>
<p><strong>After that, it&#8217;s a matter of time till more of what we &#8220;create&#8221; (e.g., Tweets and blog posts and so on) is geo-referenced; these will become the information landscape of the future, the kinds of things people think about when they read &#8220;Rainbow&#8217;s End&#8221;. The big problem will be filtering, searching and sorting. And, of course, safety and security.</strong></p>
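<p><em>[The filtering problem is easy to sketch: the first cut an AR client would take at a stream of geo-referenced Tweets, posts, or photos is usually a simple proximity filter. Below is a minimal illustration in Python; the item structure, coordinates, and radius are hypothetical, not taken from any real AR client.]</em></p>

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius ~6371 km

def nearby(items, lat, lon, radius_km=0.5):
    """Keep only the geo-referenced items within radius_km of the user."""
    return [it for it in items
            if haversine_km(lat, lon, it["lat"], it["lon"]) <= radius_km]

# Hypothetical geotagged status updates; the first sits near
# Georgia Tech's campus, the second in central London.
items = [
    {"text": "coffee?", "lat": 33.7756, "lon": -84.3963},
    {"text": "London trip notes", "lat": 51.5074, "lon": -0.1278},
]
# A user standing on campus sees only the first item.
print(nearby(items, 33.7760, -84.3960))
```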
<p><strong>Tish: </strong>You are working with <a href="http://unity3d.com/" target="_blank">Unity3D</a> to research the integration of mobile location-based AR with persistent mirror world-like spaces. What has attracted you to Unity? What is the difference between this and your Wonderland project? I know you mentioned you will be using head-mounted displays as part of this Unity project. What are your goals for this project?</p>
<p><strong>Blair:</strong> <strong>We started to use <a href="http://unity3d.com/" target="_blank">Unity3D</a> because it gave us what we wanted in a game engine. Most importantly, it&#8217;s very open and let us trivially expose AR technologies in the editor. Similarly, it can target the iPhone, so we can begin to work with it on that platform, too. The biggest problem with creating compelling experiences is content, and a show stopper for creating content is not being able to get it into your engine. Unity has a nice content workflow.</strong></p>
<p><strong>Unity3D is a front-end engine for creating the game; Wonderland is both a front end and a back end. We are actually looking into using the Wonderland back end with Unity as well. Wonderland also has growing support for doing &#8220;real work&#8221; in a virtual world, which is key to our other projects.</strong></p>
<p><strong>Eventually, we&#8217;ll be using HMDs. The goal for the Unity3D project, initially, was to explore what you can do with an AR/VR mirror-world; this is a project we are working on with Alcatel-Lucent, and demo&#8217;d at CTIA this year. It&#8217;s continuing to grow, though, and now includes a number of our projects, including some work on mobile social AR and, soon, some performance and experience design projects in the area of AR ARGs. It&#8217;s really quite interesting to imagine what you can do when you have an &#8220;MMO of the real world&#8221; (which we now have for part of campus) that supports both VR-style desktop access and mobile AR access simultaneously.</strong></p>
<p><strong>Tish: </strong>Have you taken another look at <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a> as a possible backend for augmented reality? Recently I talked to David Levine of IBM, and he is thinking about some possibilities to optimize OpenSim to dynamically load a large number of objects at once (i.e., how fast OpenSim can bulk load into an existing sim) and make it better suited to augmented reality/mirror world type projects.</p>
<p><strong>Blair: I haven&#8217;t looked at OpenSim recently. We will probably look at it this summer.</strong></p>
<p><strong>Tish:</strong> Why did you select Unity as a good client for augmented reality?</p>
<p><strong>Blair: Unity is a 3D game authoring environment, so at some level it is no different from using Ogre, if all the associated stuff were just as well done. It has integrated physics, scripting, debugging, etc. &#8211; you can write code in JavaScript or C# or whatever. It has a good content pipeline, as well, and supports a range of platforms.</strong></p>
<p><strong>It has simple networking built in, so multiple unity engines can talk to each other but it is not a virtual world platform out of the box &#8211; there is no back end &#8230;</strong></p>
<p><strong>Tish: </strong>Someone described Unity to me as a great client waiting for a great backend? So what are you going to use as a back end?</p>
<p><strong>Blair: There is no real processing except in the client right now. We will eventually have to create a back end. We are thinking of using Darkstar, because someone on the Sun Wonderland community forums has already built a set of scripts connecting Unity to Darkstar.</strong></p>
<p><strong>But for us, we are not proposing right now to build a real product. This is research to demonstrate what you could do if you actually had the back end.</strong></p>
<p><strong>Tish:</strong> What are the most important aspects of the backend from your POV?</p>
<p><strong>Blair: We want to simulate a variety of the interesting aspects of the back end. So I very much care about notions of privacy and security and how these sorts of AR/VR Mirror Worlds would work in practice. But I care about those things as they impact user experience, not really about how we would actually implement them.</strong></p>
<p><strong>Tish:</strong> So looking at some of the big problems from the perspective of user experience? Are we going to go through the same growing pains that the web and VWs have seen? For example, will we have to type in passwords to get into everyone&#8217;s little worlds&#8230;</p>
<p><strong>Blair: Well you know the SciFi background to this; you&#8217;ve mentioned it in other posts on your blog. Because when you look at the Rainbow&#8217;s End model, where you have security certificates flying around, that is in effect what cookies and so on are now. You can authenticate yourself once and then have those certificates hang around. So you can easily imagine how it could be done. But the big question is how does that change user experience. There are all kinds of things that start coming into play &#8211; like what happens if nearby people see different things &#8211; it goes on and on!</strong></p>
<p><strong>Tish:</strong> Sounds like this is very valuable research. It seems to me that there will be a lot of investment soon in putting the pieces together to do location-based markerless AR, and it would be nice if we knew more about it from the user experience POV.</p>
<p>Isn&#8217;t it vital for a productive intersection between mobile AR and persistent mirror world spaces for us to have markerless AR? Aren&#8217;t we right at the beginning of people really saying, yeah, markerless AR is doable now? But it seems to me not many people are researching or working on fully immersive AR and its integration with mirror worlds?</p>
<p><strong>Blair: I think some of the AR community is thinking about this. There are probably people doing stuff in some other, non-technical communities. It wouldn&#8217;t surprise me to find people in the digital performance or Ars Electronica world who are thinking a little bit about these sorts of things &#8211; although not necessarily at the level of actually trying to build it, because they probably can&#8217;t right now, but experimenting with the precursors. My colleagues in digital media like to point out that this is often the purpose of digital art, to point out new directions and push the boundaries.</strong></p>
<p><strong>Obviously Science Fiction has explored the possibilities because that is what Rainbow&#8217;s End and the Matrix were all about.</strong></p>
<p><strong>Tish:</strong> and <a href="http://en.wikipedia.org/wiki/Denn%C5%8D_Coil" target="_blank">Denno Coil</a>&#8230;</p>
<p><strong>Blair: There has been some research &#8211; people like my adviser Steve Feiner up at Columbia, Mark Billinghurst in New Zealand, myself, and people at Graz University in Austria. But partly it has been so hard to do mobile AR up to now &#8211; so many people mock head-worn displays and can&#8217;t get past current technology &#8211; you have had to be willing to ignore the bulky backpacks and cables and batteries and so on. That is changing, which is good.</strong></p>
<p><strong>My current response to the anti-head-mounted-display people is: if 5 years ago you had told me that fabulously dressed people who care about their looks and wear stylish clothes would have big things hanging from their ears that blink bright blue light, so they could talk on the phone, many of us would have said you were crazy, because it would be ugly and so on. But there is an intersection of demonstrable need and benefit &#8211; Bluetooth headsets are really useful &#8211; and the sort of early gestalt feeling that grew up around them &#8211; that people who use them are so important that they always have to be in touch &#8211; so people accept them.</strong></p>
<p><strong>It will likely be a similar thing with head-mounted displays. And I don&#8217;t know if it will be that people wear them so that they can read their mail while driving, god forbid. But it will be something. And when we get the 2nd generation of the wrap glasses that look more like sunglasses and are not bulky and so on, we will have the potential for them catching on, because you will look at them and you will think that the person is wearing them because they are doing x&#8230;</strong></p>
<p><strong>X might be surfing a virtual world or reading their email or keeping in touch, or being aware. It will happen. But they have to get unbulky enough, and there has to be more than one important application, not just watching TV.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/karmablair-fix.jpg"><img class="alignnone size-medium wp-image-3787" title="karmablair-fix" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/karmablair-fix-300x227.jpg" alt="karmablair-fix" width="300" height="227" /></a><br />
</strong></p>
<p><em>Picture above shows an outside view of the KARMA AR system, the knowledge-based maintenance system Blair built in his first year of grad school (<strong>&#8220;first AR system Steve Feiner, Doree Seligmann, and I worked on&#8221;</strong>). Blair noted, &#8220;<strong>The Communications of the ACM paper on it (from 1993) is a pretty widely cited AR paper.&#8221;</strong></em></p>
<p><strong>Tish:</strong> I think the need for full-on transparent, immersive, wraparound, Gucci-stylish eyewear with a decent field of view is the elephant in the room in terms of realizing the full potential of augmented reality. There are a few new players in the field &#8211; <a href="http://www.sbglabs.com/" target="_blank">Digilens</a>, <a href="http://www.vuzix.com/home/index.html" target="_blank">Vuzix</a>, others? What is the progress in this area, and what do you hope for in terms of near-term solutions?</p>
<p><strong>Blair: I agree with that sentiment. I think that, in the near term, there is a lot we can do with handhelds, as we&#8217;ve been doing in the lab. However, because it&#8217;s awkward and tiring to hold up a device, even a small one, for any length of time, handhelds will only be good for &#8220;focused&#8221; uses of AR, such as the table-top games we&#8217;ve been doing, or the constellation-viewing app that I heard came out recently for the Android G1. I don&#8217;t even see something like Wikitude as that compelling (beyond the &#8220;gee whiz&#8221; factor) for a handheld form factor. Many proposed AR apps only really become compelling when users have constant awareness of them, and that requires a see-through head-worn display.</strong></p>
<p><strong>I&#8217;ve seen the mockups of the Vuzix ones; they seem pretty interesting, and are getting to where early adopters could use them (they will be cheap enough, and will hopefully be good enough). Microvision&#8217;s virtual retinal display is also promising; the contact lens displays will be the most interesting, if anyone can ever make them work. I don&#8217;t know of anything else out there.</strong></p>
<p><strong><br />
</strong></p>
<h3><strong>&#8220;it&#8217;s not really a killer app you care about, it is the killer existence that all of the technology and small applications taken together facilitate&#8221;</strong></h3>
<p><strong>Tish:</strong> While location based services are accepted now and people are understanding that it is something that opens up a new relationship to everything, we still haven&#8217;t found the experience that will get everyone holding up their mobile devices?</p>
<p><strong>Blair: Well that is actually the killer problem. Gregory Abowd is one of my colleagues who does ubiquitous computing research here at Tech. Way back when we started the Aware Home project (<a href="http://www.awarehome.gatech.edu/">Aware Home Research Institute at Georgia Tech</a>), when I first got here about ten years ago, there was always this question of what is the killer app. So Gregory commented in a meeting once that it&#8217;s not really a killer app you care about, it is the killer existence that all of the technology and small applications taken together facilitate. It is not that any one of these AR demos we see, whether it is seeing your photos in the world or whatever, is important. It&#8217;s that, taken together, there is enough of a benefit that you would use the whole environment.</strong></p>
<p><strong>In the original context we were talking about an instrumented home, but it is the same thing here with AR.</strong></p>
<p><strong>The problem with the mobile phone as an AR device is that problem of awareness. If I have a head mount on and I walk down the street and there is a bunch of probably-not-useful-but-potentially-useful information floating by me, that&#8217;s a good thing, because I may see something that is useful or makes me think of something else. But if I have to hold up my phone to see if something might be interesting nearby, I will never hold up my phone, because at any given time there is a high probability that there won&#8217;t be anything particularly important there. You might imagine you can get around this by using alerts or something like that, but then you overload whatever alert channel you use. For example, I forward maybe 5 or 6 people&#8217;s updates from Facebook to my phone &#8211; it started with my wife, a few friends, my brother &#8211; and the net result is that I never get SMSs anymore, because when my phone buzzes I usually ignore it; it is probably just somebody&#8217;s random Facebook update. So if we start overloading channels like that with &#8220;oh, there might be something useful here in the real world; if you pick up the phone and look through it you will see it &#8230; and I will buzz you,&#8221; people just start ignoring the buzzes.</strong></p>
<p><strong>So it is a very hard problem if you think about the kinds of applications that people always imagine with global AR &#8212; names over peoples heads and other random information floating in the world &#8212; until you have a head mount and all that information is around you all the time. That is when those sort of applications will actually happen.</strong></p>
<p><strong>Tish:</strong> <a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> notes: <strong>&#8220;AR is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.&#8221;</strong> (see my interview with Robert, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it &#8216;OMG Finally&#8217; for Augmented Reality?&#8221;</a>). And I think the iphone experience has laid the foundation for the increasing desire to experience the network wherever we are &#8211; and not be stuck behind a PC. We cannot perhaps do all we want to do yet. But even in the range of things we can do now, we are not even sure exactly what it is we want to do where yet, are we?</p>
<h3><strong>&#8220;imagine your iphone Facebook client supports AR and that all data on Facebook might be georeferenced &#8211; pictures, status updates etc&#8230;&#8230;.&#8221;</strong></h3>
<p><strong>Blair: Yes, that is a huge problem. I have been lucky to be able to teach two fun classes this year that let the students and me start to explore some of the potential that handheld AR might bring. Last fall I taught a handheld AR game design class &#8212; coordinated with a class at the Savannah College of Art and Design&#8217;s Atlanta campus &#8212; and we had the students build a sequence of prototype handheld AR games, which was a lot of fun. This spring I taught a mixed reality/augmented reality design class with Jay Bolter (a professor in the School of Literature, Communication, and Culture here at GT). Jay and I have been teaching this class off and on for about 9 years; this semester we decided to say to the students, &#8220;imagine your iphone Facebook client supports AR and that all data on Facebook might be georeferenced &#8211; pictures, status updates, etc.&#8221; and have them do projects aimed at such an environment.</strong></p>
<p><strong>Tish: </strong>Not many of our favorite social media today have much sense of location, do they? But Flickr is utilizing geo-referenced pictures to create vernacular maps &#8211; The Shape of Alpha&#8230;</p>
<p><strong>Blair: Yes, that is because lots of cameras put geolocation data into the EXIF data, so they can extract it&#8230;</strong></p>
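<p><em>[For context: geotagging cameras store GPS coordinates in EXIF as degree/minute/second values plus a hemisphere reference, and converting those to signed decimal degrees is the core step when mining geotagged photos. A minimal sketch in Python; the sample values below are hypothetical, not taken from any real photo.]</em></p>

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF-style degrees/minutes/seconds triple plus a
    hemisphere reference ('N'/'S'/'E'/'W') to signed decimal degrees.
    Southern and western hemispheres get a negative sign."""
    dd = degrees + minutes / 60.0 + seconds / 3600.0
    return -dd if ref in ("S", "W") else dd

# Hypothetical values as they might appear in a photo's EXIF GPS block
lat = dms_to_decimal(33, 46, 32.2, "N")   # roughly 33.7756
lon = dms_to_decimal(84, 23, 46.7, "W")   # roughly -84.3963
print(lat, lon)
```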
<p><strong>Some mobile Twitter clients, like the one I use on my iphone, will let you add your location. But in general Facebook and other sites don&#8217;t have any notion of location. But if you look at all the things people do in Facebook, such as sending gifts and playing games, it&#8217;s easy to imagine what these might look like with geo-reference data. So, the high-level project for the class is that the groups have to design experiences people might have using mobile AR Facebook. We told them to assume Facebook as it stands now, but add geolocation and AR to the client. The class boiled down to &#8220;What would you imagine people doing?&#8221; So it has been kind of fun.</strong></p>
<p><strong>And we are using Unity for the class too &#8211; the same infrastructure I am working on in my research linking mobile AR to persistent immersive mirror world type spaces &#8211; and we are having the students mock up what a mobile AR Facebook experience would be like.</strong></p>
<p><strong>Tish: </strong>Can you describe some of the ideas your class came up with that you think have potential? I know Ori mentioned that from the games class he liked <a href="http://www.youtube.com/watch?v=Rqcp8hngdBw&amp;feature=channel_page" target="_blank">Candy Wars</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/candywars-6.png"><img class="alignnone size-medium wp-image-3693" title="candywars-6" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/candywars-6-300x225.png" alt="candywars-6" width="300" height="225" /></a></p>
<p><em>Candy Wars</em></p>
<p><strong>Blair: In the end, they had a nice range of projects in the Spring class. One created tag clouds out of status messages over spaces, others looked at analogies to virtual pets and gift giving out in the world, one looked at leveraging geolocation to help with crowd-sourced cultural translation, and three groups did straight-up social games.</strong></p>
<p><strong>[See <a href="http://www.youtube.com/user/AELatGT" target="_blank">all of the projects from the handheld AR games class on YouTube here</a>]</strong></p>
<h3><strong>iphone, Android, NVidia Tegra devkits, or Texas Instruments&#8217; OMAP3 devkits?</strong></h3>
<p><strong>Tish:</strong> Is anyone in the class working on Android?</p>
<p><strong>Blair: Nobody is using Android, because no one in the class has the phones. We have AT&amp;T microcell infrastructure on campus. Some AT&amp;T people joke that we are better off than them, because with a head office on campus we can build in network applications that people even at AT&amp;T research can&#8217;t. But because we have this infrastructure on campus, and a great relationship with AT&amp;T and the other sponsors, we have the ability to provision our own phones without having to pay for long-term contracts, which is vital for research and teaching.</strong></p>
<p><strong>Tish:</strong> So does this lock you into the iphone?</p>
<p><strong>Blair: Well the G1 is of course not AT&amp;T but it is GSM so we could probably buy them unlocked and put them on our AT&amp;T network. But the students I work with are much more interested in the iphone right now.</strong></p>
<p><strong>Tish:</strong> Is that because the iphone has the market?</p>
<p><strong>Blair: For me the reason I am not interested in the G1 is because you can&#8217;t do AR on it &#8211; there is <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> and a few other apps, but it is all hideously slow. Worse, because the Java code isn&#8217;t compiled like it would be on the desktop, you can&#8217;t do computer vision with it, so you can&#8217;t do anything particularly interesting on the current commercial G1s. We could probably take the NVidia Tegra devkits or Texas Instruments&#8217; OMAP3 devkits (both are chipsets for next-gen phones &#8212; high-end graphics, fast processing) and install Android on those, and we may actually do that yet. But it seems like a lot of work right now, for not much benefit.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/pastedgraphic.jpg"><img class="alignnone size-medium wp-image-3730" title="pastedgraphic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/pastedgraphic-300x166.jpg" alt="pastedgraphic" width="300" height="166" /></a><br />
</strong></p>
<p><em>Augmented Reality shooter game <strong>ARrrrr</strong> from Georgia Tech and SCAD Atlanta on the <strong>NVidia Tegra devkits</strong> &#8211; <a href="http://www.youtube.com/watch?v=cNu4CluFOcw" target="_blank">watch the demo on YouTube here</a>.</em></p>
<p><strong>Tish: </strong>Everyone seems very excited about iPhone OS 3.0 and the addition of a compass. A compass is pretty essential for AR, right?</p>
<p><strong>Blair: It is necessary if you can&#8217;t do other forms of outdoor tracking, but the problem is that the compass on the G1 isn&#8217;t very good, relatively speaking, and the iPhone one probably won&#8217;t be much better. It does not have very high accuracy, nor is it very fast (compared to, say, the high-end 3D orientation sensors we use, from Intersense and MotionNode). As far as I can tell, it doesn&#8217;t even give full 3D orientation. I don&#8217;t have a G1 (although I have pre-ordered an iPhone 3GS), but people have told me it only has absolute 2D orientation, so you can only line things up if you are careful. You can&#8217;t look around arbitrarily&#8230;</strong></p>
<p><strong>Tish: </strong>You can&#8217;t sweep your phone?</p>
<p><strong>Blair: You can look left and right, but if it doesn&#8217;t have full 3D orientation, you can&#8217;t go up and down. You can&#8217;t tilt it in weird directions. It is not fast enough to let you look around quickly. So it is a nice demo. And it is good for what the Android people use it for, which is to let you browse Google Street View by looking around, which is actually really useful.</strong></p>
<p><strong>I think there are lots of really useful things you can do with such a compass.</strong></p>
<p><strong>And it is clear that a compass is a necessary feature if we want to do AR. It&#8217;s just not sufficient.</strong></p>
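<p>Blair&#8217;s distinction between a 2D compass and full 3D orientation can be made concrete with a little math. A bare magnetometer heading is only valid when the phone is held level; recovering a usable heading at an arbitrary tilt already requires fusing in the accelerometer, and free look-around needs more still. A minimal sketch, assuming conventional axis and sign conventions (not from any particular phone&#8217;s API):</p>

```python
import math

def tilt_compensated_heading(ax, ay, az, mx, my, mz):
    """Estimate compass heading in degrees (0 = magnetic north) from
    3-axis accelerometer (ax, ay, az) and magnetometer (mx, my, mz)
    readings.

    A bare 2D compass (atan2 over two field components) only works
    when the device is level; here we use gravity to project the
    magnetic field onto the horizontal plane first -- the extra step
    a full 3D orientation sensor handles for you.
    """
    # Pitch and roll from the gravity vector.
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)

    # Rotate the magnetic field vector into the horizontal plane.
    mxh = mx * math.cos(pitch) + mz * math.sin(pitch)
    myh = (mx * math.sin(roll) * math.sin(pitch)
           + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))

    return math.degrees(math.atan2(-myh, mxh)) % 360.0
```

<p>Even with tilt compensation this only yields heading, not the full 3D pose an AR overlay needs &#8211; which is Blair&#8217;s point about &#8220;necessary but not sufficient.&#8221;</p>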
<h3><strong>Outdoor Tracking and Markerless AR</strong></h3>
<p><strong>Tish:</strong> Isn&#8217;t it essential for markerless AR? Or I guess not &#8211; I just saw this post about <a href="http://artimes.rouli.net/2009/04/srengine-in-english.html" target="_blank">SREngine on Augmented Times</a>!</p>
<p>This wasn&#8217;t up when we spoke so perhaps you have some comments about what it brings to the table?</p>
<p><strong>Blair: Maybe. The folks at Nokia are working on outdoor tracking; they demoed some stuff at ISMAR last year on the N95 handsets that is all image-based. We are trying to do some work with them &#8211; one of my students is working on it. And probably Microsoft is going to do more on this as well; they had a video up showing that they are also working on vision-based techniques. If you give the phone the equivalent of those panoramic Google Street View images (assuming they are up to date) and you are standing at the right place, you don&#8217;t really need a compass &#8211; you can figure out which way you are looking by looking at the camera video. Ulrich Neumann (USC) did some work on tracking from panoramas years ago; I don&#8217;t know whatever became of it.</strong></p>
<p><strong>Regarding SREngine, that project appears to be a pretty simple first step, and is probably just a demo at this point; limitations like &#8220;only works on static scenes&#8221; and &#8220;doesn&#8217;t work for simple scenes&#8221; mean it&#8217;s probably extracting some simple features out of the image and then matching those to some database. The trick would be getting this to work on a large scale, where the world changes a lot. It&#8217;s not obvious how to get there.</strong></p>
<p><strong>Tish:</strong> So forget RFID for AR&#8230;</p>
<p><strong>Blair: RFID is not really useful.</strong></p>
<p><strong>Tish:</strong> not at all?</p>
<p><strong>Blair: RFID is useful for telling you what things are near you. The problem is it doesn&#8217;t give you any directional information &#8211; it just tells you you&#8217;re in range of the tag. So you can use it to tell you when you are near a certain product, for example. It is useful in terms of telling you what thing you are near, and then you can load up a vision system or something else that will recognize that thing.</strong></p>
<p><strong>In that way, it could be useful as a good starting point.</strong></p>
<p><strong>Similarly for computer vision, the compass and the GPS are very useful for giving you an initial guess at what you may be looking at, which can then speed up the rest of the process. But computer vision by itself will not be a complete solution, because if I have my panoramic Google Street View (or whatever image database I use for tracking) and you are standing between me and the building, I am not going to see what I expect to see &#8211; I am going to see you.</strong></p>
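<p>The &#8220;initial guess&#8221; idea Blair describes can be sketched as a simple pruning step: instead of matching the camera frame against every landmark ever photographed, first discard everything that isn&#8217;t nearby and roughly in front of you, and hand the vision matcher only the survivors. A toy illustration &#8211; the landmark list, thresholds, and flat-earth distance approximation are all assumptions for the sketch:</p>

```python
import math

def bearing_to(lat, lon, lat2, lon2):
    """Approximate bearing in degrees from (lat, lon) to (lat2, lon2),
    using a flat-earth approximation that is fine over a few hundred
    meters."""
    dx = (lon2 - lon) * math.cos(math.radians(lat))
    dy = lat2 - lat
    return math.degrees(math.atan2(dx, dy)) % 360.0

def candidate_landmarks(landmarks, lat, lon, heading,
                        max_dist_deg=0.002, fov=60.0):
    """Keep only landmarks that are close by AND roughly inside the
    camera's field of view, given a GPS fix and a compass heading.
    A vision system then only matches against this short list."""
    out = []
    for name, llat, llon in landmarks:
        # Rough distance check (in degrees of latitude).
        dist = math.hypot(llat - lat,
                          (llon - lon) * math.cos(math.radians(lat)))
        if dist > max_dist_deg:
            continue
        # Signed angular difference between bearing and heading.
        diff = (bearing_to(lat, lon, llat, llon)
                - heading + 180.0) % 360.0 - 180.0
        if abs(diff) <= fov / 2.0:
            out.append(name)
    return out
```

<p>For example, an observer facing north only keeps the landmark to their north; turning around swaps the candidate set.</p>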
<p><strong>So I think it is all going to be part of one big package &#8211; you are going to see accelerometers, digital compasses, and GPS combined with computer vision and other sensors, and then maybe we are going to start getting the things that we have always dreamed about. I like to show <a href="http://mi.eng.cam.ac.uk/~gr281/outdoortracking.html" target="_blank">this video</a> from the U. of Cambridge (work done by Gerhard Reitmayr and Tom Drummond) of an outdoor tracking demo, because it gives a sense of what will be possible. Techniques like this will be an ingredient in the future of things. It becomes especially interesting when you have these highly detailed mirror worlds. It is one of those chicken-and-egg problems: if I have a highly detailed model of the world, then techniques like theirs can be used to track. But that mirror world needs to be accurate or you can&#8217;t use it for tracking &#8211; and why would you create the mirror world if you couldn&#8217;t track?</strong></p>
<p><strong>Tish:</strong> I noticed in your comment on <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;my interview with Robert Rice&#8221;</a> that you said you thought it was important not to collapse AR into ubicomp &#8211; &#8220;forgetting what originally inspired us about AR&#8221; was, if I remember correctly, the phrase you used. But aren&#8217;t ubiquitous computing and AR basically coextensive?</p>
<p>The <a href="http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/" target="_blank">vision of ubicomp Mike Kuniavsky describes</a> &#8211; &#8220;sharing data through open APIs and the promise of embedded information processing and networking distributed through the environment&#8221; &#8211; demonstrates how much can be done with very little processing power. In its most immersive form, augmented reality requires a lot of processing power, and I think we have all become very conscious about trying to minimize levels of consumption. Can you explain why you think people shouldn&#8217;t see AR as the Hummer (energy-squandering indulgence) of ubiquitous computing?</p>
<p><strong>Blair: I think there will be a hierarchy of interfaces. You are going to have the rich, Rainbow&#8217;s End-like experience &#8211; totally submerged in a mixed environment &#8211; if you have a head mount on (it&#8217;s not going to be Rainbow&#8217;s End for a while), but if you don&#8217;t have the head mount on, that information might be available to you in other ways, whether it is a 3D overlay using your handheld or just a 2D mashup with Google Maps. But there will be some circumstances and people who will want the compelling experience you can only get with the head mount.</strong></p>
<p><strong>Tish:</strong> Are you doing any research on how all these hierarchies of experiences will fit together &#8211; what aspects of this are you looking at?</p>
<p><strong>Blair: The thing that really needs to happen is a backend architecture that allows you to collect your data from different sources and aggregate it, much like the web. Right now Google Earth and Microsoft&#8217;s Virtual Earth are much like the old pre-web hypertext systems that were all centralized. What we really need is the web equivalent, where Georgia Tech can publish their building models and IBM can publish their building and campus models, and your client can aggregate them &#8211; as opposed to Microsoft or IBM putting their building models into Google Earth and you somehow getting them out with Google&#8217;s Google Earth browser. That&#8217;s just not going to fly.</strong></p>
<p><strong>Tish:</strong> So what does it take, then, to get us to this backend architecture? Because I&#8217;m in total agreement.</p>
<p><strong>Blair: The nice thing about augmented reality versus virtual reality is that you don&#8217;t need everything modeled. You can do interesting AR apps like <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> with absolutely no world model.</strong></p>
<p><strong>Tish:</strong> So that means we can start with what we have &#8211; utilize cloud services without a full-blown backend architecture?</p>
<p><strong>Blair: It may very well be that Google Earth and MS Virtual Earth act as a portal, because people go and build models and link them with KML; they can see them in Google Earth, but they can also download the KML through some other channel. So it may be that those things end up feeding some of this along. Then people start seeing a benefit to having these highly accurate models, so you start integrating the Microsoft Photosynth stuff and leveraging photographs to generate models.</strong></p>
<p><strong>Keeping up with it and building it in real time is the challenge. A lot of folks think it will be tourist applications, where there are models of Times Square and Central Park, models of Notre Dame and the big square around that area in Paris and along the river, or models of Italian and Greek historical sites &#8211; the virtual Rome. As those things start happening and people start building onto the edges, and when Microsoft Photosynth and similar technologies become more pervasive, you can start building models of the world in a semi-automated way from photographs and more structured, intentional drive-bys. So I think it&#8217;ll just sort of happen &#8211; as long as there&#8217;s a way to have the equivalent of Mosaic for AR, the browser that opened up the web, that allows you to aggregate all these things. It&#8217;s not going to be a Wikitude &#8211; not a thing that lets you get a certain kind of data from a specific source &#8211; rather, it&#8217;s the browser that allows you to link through into these data sources.</strong></p>
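<p>The client-side aggregation Blair envisions &#8211; many independent publishers, one merge in the browser &#8211; is easy to prototype today, because KML is plain XML. A minimal sketch, assuming hypothetical publisher names and in-memory feed strings (a real AR browser would fetch these over HTTP):</p>

```python
import xml.etree.ElementTree as ET

# KML 2.2 namespace, needed to address elements in the document.
KML_NS = "{http://www.opengis.net/kml/2.2}"

def placemarks(kml_text):
    """Pull (name, coordinates) pairs out of one KML document."""
    root = ET.fromstring(kml_text)
    for pm in root.iter(KML_NS + "Placemark"):
        name = pm.findtext(KML_NS + "name", default="unnamed")
        coords = pm.findtext(".//" + KML_NS + "coordinates",
                             default="").strip()
        yield name, coords

def aggregate(sources):
    """Merge independently published KML feeds into one world model,
    keyed by publisher -- the 'client aggregates them' step."""
    return {publisher: list(placemarks(kml_text))
            for publisher, kml_text in sources.items()}
```

<p>The point of the sketch is that no central host is required: each institution serves its own KML, and the client does the merge.</p>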
<p><strong>So it&#8217;s that end that interests me. It&#8217;s questions like &#8220;what is the user experience?&#8221; &#8211; how do we create an interface that allows us to layer all these different kinds of information together such that I can use it for all my things? I imagine that I open up my future iPhone and I look through it. The background of the iPhone, my screen, is just the camera, and it&#8217;s always AR.</strong></p>
<p><strong>I want the camera on my phone to always be on, so it&#8217;s not just that when I hold it a certain way it switches to camera mode, but literally it&#8217;s always in video mode so whenever there&#8217;s an AR thing it&#8217;s just there in the background.</strong></p>
<p><strong>When we can do that, I can have little alerts, so when I have my phone open I can look around and see them, independent of the buttons and things that I&#8217;m tapping and pushing to use the phone. That&#8217;ll be a really different kind of experience.</strong></p>
<p><strong>Of course, it is not known yet whether the next-gen iPhone will have an open video API. And the current camera is pretty low quality, so why would they give it an open API until they put in a better camera? I am not expecting anything one way or the other until the 3GS comes out and people start using it.</strong></p>
<p><strong>But there are many things about iPhone OS 3.0 that are hugely important, like the discovery API that allows people to play games with other people nearby, that don&#8217;t have much to do with AR.</strong></p>
<p><strong>Tish:</strong> You have an iPhone AR virtual pet application, ARf.</p>
<p><a href="http://www.macrumors.com/2009/04/08/video-in-and-magnetometers-could-introduce-interesting-iphone-app-possibilites/" target="_blank">Macrumors wrote it up</a> and suggested that the next-gen iPhone will have a compass and an open video API. What are your plans for ARf?</p>
<p><strong>Blair: ARf is just a demo right now. I know what we&#8217;d like to do with it, but it would require tons of work; imagine what it would take to do a multiplayer, social version of Nintendogs. It&#8217;s not clear what we&#8217;d really learn by doing that, but there are lots of other game ideas we want to explore.</strong></p>
<p><strong>Tish:</strong> I think it was on Twitter that Tim O&#8217;Reilly said, &#8220;saying everything must have a RFID tag is like saying we can&#8217;t recognize each other unless we wear name tags. Look at what&#8217;s happening with speech recognition, image recognition et al. and tell me you really think we need embedded metadata.&#8221; What would you say to that?</p>
<p><strong>Blair: I think that whatever extra data is there will be used. So if we put machine readable labels on some objects then they&#8217;ll be used if they make the identification and tracking problem easier. But it&#8217;s pretty clear that people are already working on tracking and so on.</strong></p>
<p><strong>A lot of these mobile AR apps are clearly putting ideas in people&#8217;s minds about things that won&#8217;t really be doable in the near future &#8211; like being able to look down the aisle of a store and have it recognize all of the products. Given the distances and complexity of the scene, the number of pixels devoted to each of those objects, and so on, you just can&#8217;t recognize things in that context. But if I&#8217;m standing in front of a small set of objects, or looking at one thing, or standing in front of a building &#8211; or if I&#8217;m in the store, imagine an enhanced location API that can tell me within a few feet where I am, combined with some use of the discovery API that allows the store to tell your device you&#8217;re in the toothpaste section. Now you only have to look for different brands of toothpaste, so you can recognize the big letters &#8220;Crest&#8221; or whatever. It&#8217;s all about constraining the problem.</strong></p>
<p><strong>That&#8217;s why I like that particular piece of Drummond&#8217;s work, the tracking web site I mentioned above. The general tracking problem of looking around and recognizing objects and tracking is still impossible. But if I know roughly what direction I&#8217;m looking in and I have a good estimate of my position, and I have models of what I should be seeing when I look in that direction, then it becomes a tractable problem. And so it&#8217;s not that a compass and a GPS are 100% necessary. But if you have them it certainly makes things possible that you wouldn&#8217;t otherwise be able to do.</strong></p>
<p><strong>Imagine, for example, if there&#8217;s a new version of GPS &#8211; I just noticed that some of the new satellites going up have this new L5 channel. There are the L1 &amp; L2 signals that military and civilian receivers use, and they added this civilian L5 signal, which should make GPS more accurate. I haven&#8217;t found anything online that says how much more accurate.</strong></p>
<p><strong>But someday, hopefully, all GPS will get to be the quality of survey-grade GPS. Right now, if you get an RTK GPS from one of the companies that make survey-grade GPS systems, they give you position estimates in the range of two centimeters, updated 10 to 20 times a second. When you have that kind of positional accuracy combined with the kind of orientational accuracy you get from the orientation sensors we use in the lab from Intersense and MotionNode, everything is easier, because you&#8217;ve pretty much got absolute position. Put that into a phone, and when I look up it&#8217;s still not perfectly aligned, because there will still be errors (especially in orientation, since compasses are affected by metal and other magnetic noise). But it does mean that if you and I are standing 5 feet apart and look at each other, I can pretty much put a little smiley face above your head. Whereas now, with GPS, if I look at you and we&#8217;re 5 feet apart, our GPSes might think we&#8217;re on opposite sides of each other, because they&#8217;re only accurate to two to five meters.</strong></p>
<p><strong>And that&#8217;s depending on the time of day and the weather!</strong></p>
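<p>Blair&#8217;s smiley-face example comes down to simple trigonometry: the angular error of a label grows as the position error approaches the distance to the thing being labeled. A back-of-the-envelope sketch, using the error figures he quotes above:</p>

```python
import math

def worst_case_label_error_deg(position_error_m, distance_m):
    """Worst-case angular error in degrees when placing a label on a
    target `distance_m` away, if each position fix can be off by
    `position_error_m`. Two fixes are involved (yours and the
    target's), so the combined offset can reach twice the per-fix
    error."""
    offset = 2 * position_error_m
    if offset >= distance_m:
        # The combined error exceeds the separation: the label can
        # point anywhere, even behind you.
        return 180.0
    return math.degrees(math.asin(offset / distance_m))

# Consumer GPS (~2.5 m error), two people about 5 ft (1.5 m) apart:
# the error swamps the separation, so the label is meaningless.
# RTK GPS (~2 cm error) at the same distance: roughly 1.5 degrees --
# easily good enough for a smiley face over someone's head.
```

<p>This is why centimeter-level positioning changes what AR interfaces are even worth designing.</p>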
<p><strong>Putting RFID tags everywhere is easy; the problem is the readers &#8211; they currently require lots of power and they have a limited range. Sprinkling RFID tags everywhere is fine, but you have to be able to activate those tags and read back the signal. In certain contexts it works.</strong></p>
<p><strong>Tish:</strong> And one final question! What do you think can be done about standards for AR? Is there a meaningful discussion going on yet? Thomas Wrobel left this comment on my blog recently, and I was wondering what your position was on some of the ideas he raises.</p>
<p>Wrobel wrote, <em>&#8220;The AR has to come to the users; they can&#8217;t keep needing to download unique bits of software for every bit of content! We need an AR browsing standard that lets users log into and out of channels (like IRC) and toggle them as layers on their visual view (like Photoshop). Channels need to be public or private, hosted online (making them shared spaces) or offline (private spaces). They need to be able to be both open (chat channel) or closed (city map channel) as needed. Created by anyone anywhere. Really, IRC itself provides a great starting point. Most data doesn&#8217;t need to be persistent, after all. I look forward to seeing the world through new eyes. I only hope I will be toggling layers rather than alt-tabbing and only seeing one &#8220;reality addition&#8221; at a time.&#8221;</em></p>
<p><strong>Blair: I agree with him, in principle. But I&#8217;m not sure there&#8217;s a point yet. It can&#8217;t hurt to try, of course, from a research perspective, and I&#8217;m interested in the experience such an infrastructure would enable (as we&#8217;ve talked about already).</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>Creating the Information Landscapes of the Future: Locative Media, Loose Interaction Topologies, and The Shape of Alpha</title>
		<link>http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/</link>
		<comments>http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/#comments</comments>
		<pubDate>Sun, 17 May 2009 20:13:49 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[3D mapping for AR]]></category>
		<category><![CDATA[Aaaron Straup Cope]]></category>
		<category><![CDATA[augmented reality systems]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[body controllers]]></category>
		<category><![CDATA[community mapping]]></category>
		<category><![CDATA[Etech 2009]]></category>
		<category><![CDATA[experimental human-computer interfaces]]></category>
		<category><![CDATA[flea market mapping]]></category>
		<category><![CDATA[geotagged photos]]></category>
		<category><![CDATA[image recognition]]></category>
		<category><![CDATA[Information Landscapes]]></category>
		<category><![CDATA[information landscapes of the future]]></category>
		<category><![CDATA[information shadows]]></category>
		<category><![CDATA[internet 2.0]]></category>
		<category><![CDATA[ITP Spring Show 2009]]></category>
		<category><![CDATA[jim purbrick]]></category>
		<category><![CDATA[locative media]]></category>
		<category><![CDATA[locative media manifesto]]></category>
		<category><![CDATA[loose interaction topologies]]></category>
		<category><![CDATA[Mike Kuniavsky]]></category>
		<category><![CDATA[mining geotagged photos]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[mud pong]]></category>
		<category><![CDATA[Mud Tub]]></category>
		<category><![CDATA[multi-touch surfaces]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[S Ring]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[shapefiles]]></category>
		<category><![CDATA[smart mud]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[Where 2.0]]></category>
		<category><![CDATA[Where Week 2009]]></category>
		<category><![CDATA[WhereCamp]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3521</guid>
		<description><![CDATA[I am excited about going to Where Week 2009 &#8211; Where 2.0 and WhereCamp, this week (for more see Brady Forrest&#8217;s post).Â  Where Week will be total immersion for five days in a think tank with creators of the information landscapes of the future. As you know, if you have read my previous post &#8211; [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/looseinteractionphilosophiespost.jpg"><strong></strong></a><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/shapefiles.jpg"><img class="alignnone size-medium wp-image-3533" title="shapefiles" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/shapefiles-150x300.jpg" alt="shapefiles" width="150" height="300" /></a></strong></p>
<p>I am excited about going to <a href="http://radar.oreilly.com/2009/05/where-week-2009.html" target="_blank">Where Week</a><a href="http://radar.oreilly.com/2009/05/where-week-2009.html" target="_blank"> 2009</a> &#8211; <a href="http://en.oreilly.com/where2009/" target="_blank">Where 2.0</a> and <a href="http://wherecamp2009.eventbrite.com/" target="_blank">WhereCamp</a> &#8211; this week (for more, <a href="http://radar.oreilly.com/2009/05/where-week-2009.html" target="_blank">see Brady Forrest&#8217;s post</a>). Where Week will be total immersion for five days in a think tank with creators of the information landscapes of the future.</p>
<p>As you know, if you have read <a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">my previous post &#8211; here</a>, I think the <a href="http://en.oreilly.com/where2009/public/schedule/detail/7197" target="_blank">&#8220;Mobile Reality&#8221;</a> panel is a must. And I have been looking forward to hearing more about <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">The Shape of Alpha</a> from <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43824" target="_blank">Aaron Straup Cope</a> of Flickr, ever since <a href="http://en.oreilly.com/et2009" target="_blank">Etech 2009</a>, when I was introduced to Aaron by <a href="http://www.orangecone.com/" target="_blank">Mike Kuniavsky</a> (see <a href="http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/" target="_blank">my interview with Mike Kuniavsky at Etech here</a> and more on Mike&#8217;s concept of &#8220;information shadows&#8221; <a href="http://www.orangecone.com/archives/2009/03/etech_2009_the.html">in his Etech talk</a>).</p>
<p>Shape of Alpha is revealing some fascinating possibilities for mining geotagged Flickr images.</p>
<p>As <a href="http://twitter.com/timoreilly/statuses/1777871797" target="_blank">Tim O&#8217;Reilly noted in a tweet</a>, Aaron Straup Cope&#8217;s recent post,<strong> <a href="http://code.flickr.com/blog/2009/05/06/the-absence-and-the-anchor/" target="_blank">The Absence and the Anchor, </a></strong>describes, <strong>&#8220;some of <span class="status-body"><span class="entry-content">the surprising things Flickr is learning about people from geotagged photos.&#8221;</span></span></strong> Aaron&#8217;s post also announces that the &#8220;donut hole shapes&#8221; are available for developers to use with their developer magic via the <a href="http://www.flickr.com/services/api">Flickr API</a>.</p>
<p><strong>&#8220;If the shapefiles themselves are uncharted territory, the donut holes are the fuzzy horizon even further off in the distance. We&#8217;re not really sure where this will take us but we&#8217;re pretty sure there&#8217;s something to it all so we&#8217;re eager to share it with people and see what they can make of it too.&#8221;</strong></p>
<p>For more on shapefiles, see Aaron&#8217;s blog post about <strong>&#8220;<a href="http://code.flickr.com/blog/2009/01/12/living-in-the-donut-hole/">some experimental work that I&#8217;d been doing with the shapefile data</a> we derive from geotagged photos.&#8221;</strong></p>
<h3>Creating the Information Landscapes of the Future</h3>
<p>I have been thinking and writing a lot about augmented reality lately. And key thought leaders in this space like <a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a>, <a href="http://www.curiousraven.com/" target="_blank">Robert Rice</a> (<a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">see my interview here</a>), and <a href="http://gamesalfresco.com/about/" target="_blank">Ori Inbar</a> (<a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">see my interview here</a>) have clued me in to how vital it is, for a ubiquitous experience, that we find ways to allow people to fill in the stories that can be used for augmented reality.</p>
<p>As Ori noted in conclusion to our recent conversation:</p>
<p><strong>&#8220;In order to have a ubiquitous experience like <a href="http://www.curiousraven.com/" target="_blank">Robert Rice</a> and others are striving for, you&#8217;ll need to 3D map the world. Google Earth-like apps are going to help, but that is not going to be sufficient. So let&#8217;s leverage people. Google became successful in part by making people work with them. Each time you create a link from your blog to my blog, their search engines learn from it. So let&#8217;s find ways to make people create information that can be used for AR.&#8221;</strong></p>
<p><a href="http://jimpurbrick.com/" target="_blank">Jim Purbrick,</a> another key thinker in this area (interview upcoming), also notes:</p>
<p><strong>&#8220;You can imagine a crowd-sourced set of hints for any location, so AR knows roughly where it is and can do Photosynth-style matching to find out exactly what it&#8217;s looking at and get the extra data it needs about that thing (humans are really good image recognition systems, and are also pretty good at interfacing with networks). Instead of marking up real objects with IDs, you take pictures of real objects, tag them, and then search them based on images from your AR system.&#8221;</strong></p>
<p>Ori Inbar suggested to me an idea that I really liked &#8211; the notion of breadcrumbs, where <strong>&#8220;<span class="ru_50CCC5_tx">you don&#8217;t have a constant view of what is happening when you walk, but you get images and text and all sorts of things from people who walked there before &#8211; like breadcrumbs.</span>&#8221;</strong> And as <a href="http://www.designundersky.com/dus/2008/10/31/geotagged-photo-cartography.html" target="_blank">Design Under Sky</a> points out about The Shape of Alpha:</p>
<p><strong>&#8220;The truly amazing part of this process is how the &#8220;community&#8221; has the authority to provide areas previously unmapped. By uploading personal photos of areas not covered by mapping software, members have the power of further shrinking our world through greater visual access and understanding of locations one might be unwilling or unable to visit.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronmiketod.jpg"><img class="alignnone size-medium wp-image-3536" title="aaronmiketod" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronmiketod-300x265.jpg" alt="aaronmiketod" width="300" height="265" /></a></p>
<p><em>Aaron Straup Cope, Flickr, Todd E. Kurt, <a href="http://thingm.com/" target="_blank">ThingM</a> and Mike Kuniavsky, <a href="http://thingm.com/" target="_blank">ThingM</a></em></p>
<h3>The Locative Media Manifesto</h3>
<p><a href="http://stamen.com/" target="_blank">@stamen&#8217;s</a> tweet brought André Lemos&#8217; brilliant, thought-provoking &#8220;<a href="http://www.andrelemos.info/2009/05/locative-media-manifesto.html" target="_blank">Locative Media Manifesto</a>&#8221; to my attention. I am also looking forward to hearing about how old maps &#8220;can shed light on modern geography when placed in counterpoint to the state of art in modern maps from Google or Microsoft&#8221; from <a href="http://en.oreilly.com/where2009/public/schedule/speaker/3486">Michal Migurski</a> of Stamen Design, who will present <a href="http://en.oreilly.com/where2009/public/schedule/detail/7276" target="_blank">Flea Market Mapping</a> at Where 2.0.</p>
<p>André Lemos writes:</p>
<p><strong>&#8220;After uploading to the Matrix up there &#8211; Internet 1.0 &#8211; now is the time to &#8220;download cyberspace,&#8221; information about things down here &#8211; Internet 2.0. We are not dealing with what is virtual up there, but with what to do with all this information about things and places down here! How can we relate to things and places, now that these things and places are provided with digital information and Internet connections? Do we invoke Heidegger and Lefebvre?&#8221;</strong></p>
<p>I will leave it to people smarter than I to invoke Heidegger and Lefebvre, as André Lemos does so eloquently in the Locative Media Manifesto. But by reminding us that artists and activists created the term &#8220;locative media&#8221; to &#8220;question the mass use of LBS (location based services) and LBT (location based technologies),&#8221; the manifesto delivers 30 principles to inspire creators of locative media and explorers of the <strong>&#8220;current dimension of cyberculture, comprising the era of &#8216;cyberspace leaking into the real world&#8217; (Russel, 1999); an era of the &#8216;internet of things.&#8217;&#8221;</strong></p>
<p>I feel well primed for Where Week by my visit to the <a href="http://itp.nyu.edu/sigs/news/itp-spring-show-2009/" target="_blank">ITP Spring Show, 2009</a> last Sunday. It was an interaction riot, jam-packed with brilliance and offbeat explorations of locative media, which I experienced through the senses of my 9-year-old. His pick for best of show is below. But he had many favorites, and I have <a href="http://www.flickr.com/photos/ugotrade/sets/72157618216853047/" target="_blank">put some pictures up on my Flickr stream</a> with links to the creators&#8217; sites. One of my favorite projects, Alexander Reeder&#8217;s <a href="http://artandprogram.com/sring/" target="_blank">S Ring</a> &#8211; <a href="http://tishshute.com/seducing-people-by-talking-with-your-hands" target="_blank">&#8220;seducing people by talking with your hands,&#8221; is up on my Posterous blog</a>. You can see a list of the extensive <a href="http://itp.nyu.edu/sigs/news/itp-spring-show-2009/" target="_blank">media coverage the show got here</a>.</p>
<h3>Loose Interaction Topologies</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/mudpongpost.jpg"><img class="alignnone size-medium wp-image-3528" title="mudpongpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/mudpongpost-300x199.jpg" alt="mudpongpost" width="300" height="199" /></a></p>
<p>The picture above is of a game of mud pong in <a href="http://dirtycomputing.com/" target="_blank">Tom Gerhardt&#8217;s Mud Tub</a>. The mud interface &#8211; &#8220;a smart tub with some mud&#8221; &#8211; knows the topology of the mud and where your hand is. Mud Tub takes advantage of a complex material to explore loose interaction topologies, including, as seen above, a game of Mud Pong. Loose interaction topologies are one way we can explore meaning in &#8220;the internet of things.&#8221;</p>
<p>Tom explained his own exploration of the internet of things to me very succinctly:</p>
<p><strong>&#8220;I am not trying to make mud better. I am trying to make computers better with mud.&#8221;</strong></p>
<p>He elaborates on the value of Mud Tub in this regard on his site, <a href="http://dirtycomputing.com/" target="_blank">dirtycomputing</a>:</p>
<p><strong>&#8220;The Mud Tub occupies a space similar to other experimental human-computer interfaces, like, multi-touch surfaces, body controllers, augmented reality systems, etc, which push the boundaries of codified interaction models, and drive the development of innovative software applications. Beyond its role as a research topic, the Mud Tub also exists as an open-sourced hardware/software platform on which interactive artists and designers explore new methods for creating and displaying their work.&#8221;</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Composing Reality and Bringing Games into Life: Talking with Ori Inbar about Mobile Augmented Reality</title>
		<link>http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/</link>
		<comments>http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/#comments</comments>
		<pubDate>Wed, 06 May 2009 14:50:30 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[CurrentCost]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[HomeCamp]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Kids With Cameras]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Adam Greenfield]]></category>
		<category><![CDATA[Add new tag]]></category>
		<category><![CDATA[alternate reality games]]></category>
		<category><![CDATA[alternative reality gaming]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR goggles]]></category>
		<category><![CDATA[ARToolkit]]></category>
		<category><![CDATA[augmented reality games]]></category>
		<category><![CDATA[augmented times]]></category>
		<category><![CDATA[Better Place]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Caryatids]]></category>
		<category><![CDATA[Come Out and Play]]></category>
		<category><![CDATA[composing reality]]></category>
		<category><![CDATA[Cory Doctorow]]></category>
		<category><![CDATA[eyewear for augmented reality]]></category>
		<category><![CDATA[game development conference]]></category>
		<category><![CDATA[Games Alfresco]]></category>
		<category><![CDATA[games for preschoolers on the iphone]]></category>
		<category><![CDATA[games on the iphone]]></category>
		<category><![CDATA[GDC 2009]]></category>
		<category><![CDATA[GE augmented reality ad]]></category>
		<category><![CDATA[google earth]]></category>
		<category><![CDATA[green technology]]></category>
		<category><![CDATA[image recognition]]></category>
		<category><![CDATA[Immersive augmented reality]]></category>
		<category><![CDATA[Int 13]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[iphone games]]></category>
		<category><![CDATA[iPhone OS 3]]></category>
		<category><![CDATA[iphone versus the android]]></category>
		<category><![CDATA[ISMAR]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[jane mcgonigal]]></category>
		<category><![CDATA[julian Bleeker]]></category>
		<category><![CDATA[Kati London]]></category>
		<category><![CDATA[Kweekies]]></category>
		<category><![CDATA[Loopt]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[Microsoft Tag]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile gaming]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Netweaver]]></category>
		<category><![CDATA[open source augmented reality]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Pookatak]]></category>
		<category><![CDATA[Pookatak Games]]></category>
		<category><![CDATA[reality experiences]]></category>
		<category><![CDATA[RFID]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[Rouli Nir]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[Shai Agassi]]></category>
		<category><![CDATA[smart environments]]></category>
		<category><![CDATA[smart objects]]></category>
		<category><![CDATA[The End of Hardware]]></category>
		<category><![CDATA[the Pong for augmented reality]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubiquitous augmented reality]]></category>
		<category><![CDATA[ubiquitous experience]]></category>
		<category><![CDATA[virtual reality]]></category>
		<category><![CDATA[WARM 09]]></category>
		<category><![CDATA[Wattzon]]></category>
		<category><![CDATA[Where 2.0]]></category>
		<category><![CDATA[WikiMouse]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3448</guid>
		<description><![CDATA[Recently, I talked to Ori Inbar (above), formerly senior vice-president at SAP. Ori is on a mission to make augmented reality commercially successful not in 5, 10, or 15 years, but now. Ori is the founder of Pookatak Games &#8211; a video game company, &#8220;with a vision to upgrade the way people experience the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/oriinbarpost.jpg"><img class="alignnone size-medium wp-image-3449" title="oriinbarpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/oriinbarpost-300x199.jpg" alt="oriinbarpost" width="300" height="199" /></a></p>
<p>Recently, I talked to <a href="http://gamesalfresco.com/">Ori Inbar</a> (above), formerly senior vice-president at <a href="http://www.sap.com/">SAP</a>. Ori is on a mission to make augmented reality commercially successful not in 5, 10, or 15 years, but now. Ori is the founder of <a href="http://gamesalfresco.com/about/" target="_blank">Pookatak Games</a> &#8211; a video game company <strong>&#8220;with a vision to upgrade the way people experience the world.&#8221;</strong> Ori will be participating May 20th in <a href="http://en.oreilly.com/where2009/public/schedule/detail/7197" target="_blank">O&#8217;Reilly&#8217;s Where 2.0 panel, &#8220;Mobile Reality</a>&#8221; &#8211; an event not to be missed, IMO.</p>
<p>The taste for computing anywhere, anytime has entered human culture via the iPhone and is spreading like chocolate cake and pizza at a preschool party (see <a href="http://gamesalfresco.com/2009/03/23/gdc-2009-why-the-iphone-just-changed-everything/" target="_self">why the iPhone changed everything</a>). And while the full flowering of the next step &#8211; computing anywhere, anytime, by anyone and <strong>anything</strong> (<a href="http://en.wikipedia.org/wiki/Internet_of_Things" target="_blank">&#8220;the internet of things&#8221;</a>) &#8211; is yet to come, our love for these first devices capable of being <strong>mediating artifacts for ubiquitous computing</strong> (Adam Greenfield) is a vital first step to free us from our tethers to computer screens and fulfill the promise of augmented reality.</p>
<p>If you need more convincing on the pivotal role augmented reality will play as the web moves into the world, check out Tim O&#8217;Reilly&#8217;s recent comments in <a id="iz1_" title="this video clip on Augmented Times" href="http://artimes.rouli.net/2009/04/tim-oreilly-on-recognition-rfid-and-web.html" target="_blank">this video clip posted on Augmented Times</a> and <a id="wtf4" title="here" href="http://radar.oreilly.com/2008/02/augmented-reality-a-practical.html" target="_blank">here</a> early last year.</p>
<p>From another perspective, the gloomy specter of economic and environmental catastrophe is driving a movement to &#8220;<a id="h5pf" title="infuse intelligence into the way the world works" href="http://news.bbc.co.uk/2/hi/technology/7992480.stm" target="_blank">infuse intelligence into the way the world works.&#8221;</a> But the challenge for a smart planet is not just about making environments smart; it is about using smart environments to enable people to act smarter (<a href="http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/" target="_blank">see my interview with Adam Greenfield</a>).</p>
<p>We need a rapid upgrade in both the way the world works, and the way we experience the world.</p>
<p>(Note: It is time to read, if you haven&#8217;t already, <a href="http://search.barnesandnoble.com/The-Caryatids/Bruce-Sterling/e/9780345460622" target="_blank">Bruce Sterling&#8217;s Caryatids</a> (Cory Doctorow&#8217;s book of the year for 2009) &#8220;as a software design manual&#8221; (<a href="http://www.nearfuturelaboratory.com/2009/03/17/design-fiction-a-short-essay-on-design-science-fact-and-fiction/" target="_blank">see Julian Bleeker</a>), because Caryatids reveals the Gordian knots of human folly, greed, compassion and desire entwined in near-future designs for technologies to save the world.)</p>
<p>Ori Inbar worked with Shai Agassi (Shai is now leading the world-changing <a id="v5ow" title="Better Place" href="http://www.betterplace.com/" target="_blank">Better Place</a>), driving <a id="gf_5" title="Netweaver" href="http://en.wikipedia.org/wiki/NetWeaver" target="_blank">Netweaver</a> from a mere concept to a &#8220;major, major business for SAP.&#8221; So Ori has already been through the cycle of working in a very small startup and growing it into a billion-dollar business. He has both the experience and the passion to realize his vision for augmented reality.</p>
<p>At Pookatak, he explains:</p>
<p><strong>&#8220;We design &#8216;reality experiences&#8217; that make users&#8217; immediate environments more significant to them. We wish to free young and old from getting lost in front of the screen. By delivering the world&#8217;s information to people&#8217;s field of view, and by weaving real world objects into interactive narratives, we help people rediscover the real world.&#8221;</strong></p>
<p>Pookatak will release their first game this summer. Currently it is under wraps. But Ori gives us some glimpses of what is to come in the interview below.</p>
<p>In addition to founding Pookatak, Ori is involved in a broader effort to move augmented reality forward. On his blog, <a id="ie5s" title="Games Alfresco" href="http://gamesalfresco.com/" target="_blank">Games Alfresco</a> &#8211; where he recently welcomed <a href="http://gamesalfresco.com/about/" target="_blank">a new partner, Rouli Nir</a> &#8211; Ori has focused his eye of wisdom on every significant recent advance in augmented reality (check out <a id="zr9y" title="this essence of Ori's thinking in a fast paced video" href="http://gamesalfresco.com/2009/03/09/augmented-reality-today-ori-inbar-speaks-at-warm-2009/" target="_blank">this essence of Ori&#8217;s thinking in a fast-paced video</a> presentation for <a href="http://gamesalfresco.com/2009/02/12/live-from-warm-09-the-worlds-best-winter-augmented-reality-event/" target="_blank">WARM &#8217;09</a>).</p>
<p>Ori is also one of the organizers of the interactive media track at <a id="b-c6" title="ISMAR 2009" href="http://www.ismar09.org/" target="_blank">ISMAR 2009</a>. At ISMAR this year, Ori explained, <strong>&#8220;we are trying to bring in people that develop interactive experiences for consumers, beyond the traditional attendees coming from a research perspective.&#8221;</strong></p>
<p>In the interview below, Ori explains much of his thinking on how augmented reality will become commercially successful. Enjoy it, think about it, and share it. And most importantly, if you can, get involved with ISMAR 2009.</p>
<p>Ori has inspired me to participate in <a id="seky" title="ISMAR" href="http://www.ismar09.org/" target="_blank">ISMAR</a> this year. Ori pointed out:</p>
<p><strong>The <a href="http://campwww.informatik.tu-muenchen.de/ismar09/lib/exe/fetch.php?id=ismar09%253Astart&amp;cache=cache&amp;media=ismar09:ismar09-cfp_090211_final.pdf" target="_blank">call for papers</a> is on, and this year it targets well beyond the typical research-papers audience, reaching into interactive media and art folks.</strong></p>
<p><strong>There are plenty of opportunities, such as:</strong></p>
<ul>
<li><strong>Art Gallery</strong></li>
<li><strong>Demonstrations</strong></li>
<li><strong>Tutorials</strong></li>
<li><strong>Workshops</strong></li>
</ul>
<p>It&#8217;s a huge opportunity to shape the emergence of augmented reality.<br />
<br /></br></p>
<h2><strong> Interview With Ori Inbar</strong></h2>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-41.png"><img class="alignnone size-full wp-image-3479" title="picture-41" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-41.png" alt="picture-41" width="107" height="146" /></a></p>
<h3>Making Augmented Reality Commercially Successful</h3>
<p><strong>Tish Shute: </strong>You are considered a key trailblazer in AR, and you have the go-to blog for augmented reality! What are the most important lessons you have learned researching, writing about, and developing AR in the last couple of years?</p>
<p><strong>Ori Inbar: You need to have a vision. You need to know where this is going to go in ten or fifteen or twenty years. But you&#8217;ve got to start with something really simple that makes use of the technology you have on hand. And do something that is practical, that people will like, and something they would actually want to buy. It&#8217;s as simple as that. I&#8217;m currently looking at what we could do with existing technology. First of all, you have to put it in front of people. Right now most people have never heard of the term augmented reality. Go into the street and ask 100 people about it; maybe 2 would know about it. So you need to put it in front of people, because most people think it&#8217;s still science fiction or a special effect you see in movies, not something you can experience in real life.</strong></p>
<p><strong>Tish: </strong>It seems to me that for augmented reality applications to become popular with existing technology, the key breakthrough would be getting people to hold up their phones. What are the obstacles to getting people to use their mobile devices like this?</p>
<p><strong>Ori: There&#8217;s a really nice cartoon by <a href="http://www.tonchidot.com/">Tonchidot</a> (below) &#8211; the Japanese company behind the Sekai Camera. It&#8217;s an illustration showing the evolution of man, from ape, to man holding a cell phone looking down, to the developed man holding a device like a camera in front of his eyes.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-37.png"><img class="alignnone size-medium wp-image-3454" title="picture-37" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-37-300x221.png" alt="picture-37" width="300" height="221" /></a></p>
<p><strong>Which is exactly what you&#8217;re talking about. People ask, &#8220;are people going to walk with this like that all day long?&#8221; Probably not. I mean, you have to build it in a way that doesn&#8217;t require them to hold it like that all the time. People are used to this gesture with the ubiquitous digital cameras. I tested one of my prototypes on a two-and-a-half-year-old girl. She had no problem holding it just like she holds a camera.</strong></p>
<p><strong>Tish:</strong> <a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> mentioned, &#8220;The problem with the mobile phone as an AR device is a problem of awareness,&#8221; i.e., you have to have a way of letting people know when there&#8217;s something interesting wherever they are. One of the issues regarding this is that if you get too many alerts, then you tune them out.</p>
<p><strong>Ori: First of all, Blair is one of the people in academia who get it, because he looks at it from an experience perspective, not just as an interesting technical problem to solve. Let&#8217;s start with getting people to enjoy this new experience. The AR demos so far were mostly eye candy, and mostly for advertising &#8211; the <a href="http://ge.ecomagination.com/smartgrid/#/landing_page" target="_blank">GE AR ad</a> created a lot of buzz, but you look at it for 10 seconds and you forget about it. You need to build something that people would want to experience over time and would be willing to pay for. I think that&#8217;s the big test, right?</strong></p>
<p><strong>Now in terms of having a ubiquitous experience where you&#8217;re continuously connected, it doesn&#8217;t have to be an overwhelming experience. Just like some of the social media tools we&#8217;re using today, we decide when to connect, and we filter out the trash. You could get alerts only for things that really matter to you, not for everything that happens in your immediate environment.</strong></p>
<p><strong>There will be many layers of information, and it&#8217;ll be up to you to pick the ones you want to experience. The real benefit is that you get the information in your own field of view and in context of where you are or what you do.</strong></p>
<p><strong>Tish:</strong> So what are you working on these days?</p>
<p><strong>Ori: We are working on a little app that targets a very different audience than what you&#8217;d expect: preschoolers. We think we can encourage them to get away from a PC or TV screen and learn something while playing &#8211; in the real world. You&#8217;ll hear more about it as soon as this summer. Nuff said.</strong></p>
<p><strong>But it is a small application that will run on the iPhone. People ask: how many preschoolers own iPhones? Well, their parents do.</strong></p>
<p><strong>Tish:</strong> Yes, there are certainly many New York kids with iPhones &#8211; my kid now has my old iPhone. He has pretty much switched from playing games on his DS to the iPhone. I noticed in your WARM video you place a big emphasis on AR as something that will get kids away from screens and engaged with reality. This is something parents will approve of!</p>
<p><strong>Ori: Yes, I saw something really interesting at my kids&#8217; party one day; they were all sitting around the room, looking down at their own DS screens. You could play the DS anywhere, but kids would usually play it on the sofa, looking at the screen, isolated from the world. With an iPhone and a camera, and the application we&#8217;re producing, reality becomes part of the game. That makes it all of a sudden much more interesting for parents. Because kids are spending so much time in front of the screen, all of a sudden there&#8217;s something that will encourage them to interact with real objects, real things. Every parent I&#8217;ve talked to loves that idea.</strong></p>
<p><strong>Tish:</strong> Yes, that is what is cool about the work of <a href="http://www.katilondon.com/" target="_blank">Kati London</a> &#8211; I think I saw someone say this on Twitter: &#8220;Kati puts the computer in the game, not the game in the computer.&#8221;</p>
<p><strong>Ori: Yes, kids are spending more time in front of games and the computer because it&#8217;s more interesting. It captivates them with &#8220;<a id="x_z0" title="game pleasures" href="http://8kindsoffun.com/">game pleasures</a>&#8221; that tap into their brain&#8217;s dopamine circuitry &#8211; constantly seeking reward and satisfaction. So you&#8217;re not going to be able to tell them to go back to playing in reality without these pleasures. We have to study these mechanics from games and bring them into reality. It&#8217;s about programming real life; and augmented reality helps you achieve that.</strong></p>
<p><strong>Here&#8217;s an example: cause and effect; in a game when you do something you always get an immediate effect. You&#8217;re good, you get a reward. You&#8217;re not good, you get a cue to improve. In real life you do things and you could wait 2 or 3 years until you actually get feedback (if you&#8217;re lucky). Augmented Reality allows you to bring these mechanics into the real world. I think that&#8217;s going to help kids rediscover reality, in a new sense, which is what every parent is dreaming about.</strong></p>
<p><strong>Tish:</strong> I don&#8217;t know how much you can say about your app. But in regard to doing augmented reality on the iPhone &#8211; there&#8217;s no compass. Is this a limitation?</p>
<p><strong>Ori: True, no compass yet. But the camera gives you a lot of information that you can interact with. When you run the application, you see the world in front of you, and if the app can recognize real life objects &#8211; it can put virtual elements on top of it.</strong></p>
<p><strong>Tish:</strong> But not with any accuracy unless you&#8217;re using markers. Are you using markers?</p>
<p><strong>Ori: We&#8217;re using natural feature recognition. It doesn&#8217;t have to be an ugly-looking marker. It can be any image.</strong></p>
<p><strong>Tish:</strong> So you&#8217;re using image recognition. Are you working with one of these image recognition startup companies (<a id="nws6" title="list here" href="http://www.educatingsilicon.com/2008/11/25/a-round-up-of-mobile-visual-search-companies/" target="_blank">list here</a>)?</p>
<p><strong>Ori: We&#8217;re working with one of those. What&#8217;s unique about it is that it runs very nicely on any cell phone, and on the iPhone it works the best. For this first app, it doesn&#8217;t really matter where you are physically; the geolocation is not part of the experience.</strong></p>
<p><strong>Tish:</strong> For a truly engaging AR experience, we will need more of a backend than is currently available?</p>
<p><strong>Ori: I call the backend the cloud, where you have all this information and ways to access it from anywhere. Actually, I think it&#8217;s become pretty mature today. If you look at the different elements required to enable an augmented reality experience to work, you have &#8211; first &#8211; the user, who is always at the center. Then you have the lens. The lens can be an iPhone, or glasses, even a projector. The lens allows you to watch, sense and track information in the real world: people, places, things. Then in the backend you have the cloud, where you store and retrieve information.</strong></p>
<p><strong>So if you look at the maturity of these different elements, I think the cloud is in pretty good shape, because there&#8217;s so much information we&#8217;re collecting and storing. Anything from Google, Wikipedia, Facebook, all that kind of stuff &#8211; it&#8217;s a lot of useful information you can access from anywhere using APIs. And a lot of it is also starting to include geolocation information. Take <a id="zhag" title="Loopt" href="http://www.loopt.com/" target="_blank">Loopt</a> or Google&#8217;s <a href="http://www.google.com/latitude/intro.html" target="_blank">friends service</a>, which allows you to see where your friends are and what they&#8217;re doing. There&#8217;s tons of information out there, and it&#8217;s pretty easy to access. Now, what you do with it is the question.</strong></p>
<p><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> is such a simple and brilliant application and nobody thought about doing it until this guy from Salzburg did. It doesn&#8217;t have any sophisticated visual tracking. It knows your position and it&#8217;s simply looking at the angle you&#8217;re pointing to. Based on these parameters it brings information from Wikipedia that pertains to your field of view. So most of it was already there. It&#8217;s just a matter of connecting the pieces in an experience that is valuable for people.</strong></p>
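<p>The position-plus-angle idea Ori describes can be sketched in a few lines. The following is a minimal illustration, not Wikitude&#8217;s actual code; the function names and the 60-degree camera field of view are assumptions made for the example.</p>

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user to a point of interest, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def in_field_of_view(user_lat, user_lon, heading_deg, poi_lat, poi_lon, fov_deg=60.0):
    """True if a point of interest falls within the camera's horizontal field of view.

    heading_deg is the compass direction the device is pointing;
    fov_deg is an assumed horizontal field of view for the camera.
    """
    b = bearing_deg(user_lat, user_lon, poi_lat, poi_lon)
    # Smallest angular difference between the POI bearing and the heading,
    # handling wrap-around at 0/360 degrees.
    diff = abs((b - heading_deg + 180.0) % 360.0 - 180.0)
    return diff <= fov_deg / 2.0
```

<p>Everything else comes from the device (position from GPS, direction from a compass or user input); the Wikipedia lookup is then just a query over geotagged articles whose coordinates pass this test.</p>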
<p><strong>Tish: </strong>It is the uptake of even a very simple technology that puts the magic in it.</p>
<p><strong>Ori: Yes, take Twitter. If you go to its homepage it looks like a very simple, boring app, but it is something that is both enjoyable and very useful to people.</strong></p>
<h3><strong>Why you should participate in ISMAR 2009</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-40.png"><img class="alignnone size-medium wp-image-3478" title="picture-40" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-40-222x300.png" alt="picture-40" width="222" height="300" /></a><br />
<strong>Tish: </strong>I know that you are involved in organizing <a id="seky" title="ISMAR" href="http://www.ismar09.org/" target="_blank">ISMAR</a> (picture above from Ori&#8217;s post, <a href="http://gamesalfresco.com/2009/02/23/ismar-2009-the-worlds-best-augmented-reality-event-wants-you-to-contribute/" target="_blank">&#8220;ISMAR 2009: The World&#8217;s Best Augmented Reality Event&#8230;&#8221;</a>), and there is a call out for papers and for volunteers. Can you tell me more about it?</p>
<p><strong>Ori: Yes, we hope to have the first ISMAR where we practice what we have just discussed: let&#8217;s build on all the research invested so far, and instead of thinking only about 5-10 years from now, let&#8217;s see what we can do today. So we are bringing in people from other disciplines &#8211; artists, interactive media developers and people from the entertainment industry. The goal is to use the technology to make something interesting for people &#8211; again, something that people would buy &#8211; and to make it commercially successful. Many people don&#8217;t know about ISMAR because in the past it was a purely engineering-oriented event, and people approaching AR from a commercial perspective weren&#8217;t attracted to it. The chair of the event this year is based in Florida, and he is going to bring in a lot of people from the entertainment industry, such as Disney. I think this will transform the event into something more like SIGGRAPH &#8211; more of an industry event. As one of the organizers of the interactive media track, we are trying to bring in people that want to build applications for consumers.</strong></p>
<p><strong>Tish:</strong> In terms of AR applications what are the flagships today?</p>
<p><strong>Ori: There are very few because it&#8217;s just the beginning. There&#8217;s one tiny studio in France called <a id="z1ln" title="Int 13" href="http://www.int13.net/en/" target="_blank">Int 13</a>. They&#8217;ve created maybe the first commercial game running on a mobile device using AR technology. It&#8217;s called <a href="http://www.youtube.com/watch?v=Te9gj22M_aU" target="_blank">Kweekies</a>. It was one of the contenders for the Nokia mobile innovation awards; they were one of the ten finalists, but they didn&#8217;t win it. It looks really cool. It&#8217;s something that runs on your desk, with a marker. Many AR folks say markers are the past, markers are ugly. But it&#8217;s still a cool experience. I think people will go for it.</strong></p>
<p><strong>Tish:</strong> Yes, I think we will have to look to small companies that are free to think creatively to lead the way. It seems many games companies are tied up pulling off huge, big-budget projects, and enterprise is still catching up on how to use social media!</p>
<p><strong>Ori: Yes, last year I was at the Game Developers Conference (GDC); there was no mention of augmented reality &#8211; not on the exhibition floor, not in any of the sessions, nobody talked about it. I was stunned. Then this year, there was a little change. There were three demos on the exhibition floor: <a href="http://www.metaio.com/" target="_blank">Metaio</a>, <a href="http://www.vuzix.com/home/index.html" target="_blank">Vuzix</a> and a Dutch company called <a href="http://www.augmented-reality-games.com/" target="_blank">Beyond Reality</a>. And then there was Blair&#8217;s talk, which was very, very cool. The room was packed, and after the talk there were dozens of people lining up to talk with him about the topic. There was definitely interest, but still on the very edge. The video game industry is still a hit-driven business, and publishers spend upwards of 20-30 million dollars to create the best AAA game possible. They just can&#8217;t take the risk. So it&#8217;s going to come from smaller companies, from outsiders coming in with a vision and an understanding of how to put the AR pieces together to create a totally new experience.</strong></p>
<p><strong>Tish:</strong> But the basic tool set is there isn&#8217;t it?</p>
<p><strong>Ori: I talked to some folks at the Game Developers Conference, many with an MMO background, and they have great ideas about AR. It&#8217;s great to see different people with different views on what&#8217;s needed first. &#8220;Joe the Programmer&#8221; had this idea of creating a small piece of hardware that you can put in every house to provide accurate geospatial information in your home. That could open up many opportunities for AR experiences in homes.</strong></p>
<p><strong>Tish:</strong> Don&#8217;t you think we already have enormous resources in terms of image databases that provide a great basis for augmented reality? I was talking to Aaron Cope at ETech about <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">The Shape of Alpha</a> &#8211; Flickr&#8217;s vernacular mapping project using all the geotagged photos in Flickr. That is such a cool project. <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43824" target="_blank">Aaron will be speaking at Where 2.0</a> also.</p>
<p><strong>Ori: Think of Google Earth. Google Earth leveraged communities to basically map all the major cities around the world into 3D models. And that is an essential step to be able to do augmented reality outdoors. Because if you had to model everything from scratch, it wouldn&#8217;t be realistic.</strong></p>
<h3><strong>Augmented Reality and Becoming Greener</strong></h3>
<p><strong>Tish:</strong> I am really interested in how AR interfaces might be useful to some of the emerging energy identity/metering projects like <a href="http://www.amee.com/" target="_blank">AMEE</a> and <a href="http://www.wattzon.com/" target="_blank">WATTZON</a> because I think it is very important that people have very intuitive, immediate, and enjoyable ways to relate to energy data so they can make greener choices.</p>
<p><strong>Ori: Back in the day I had an idea to build an augmented reality application for becoming greener. You look at things around your home with the camera, and it recognizes their greenhouse gas footprint and makes recommendations to reduce it. I guess it was a bit too early to do that based on visual recognition alone&#8230;you&#8217;d need additional sensors that would provide related information about what you are looking at.</strong></p>
<p><strong>Tish:</strong> Well as there is more interest in Green technology do you think we may see VC interest in some green AR projects now?</p>
<p><strong>Ori: I talked to some of the investment folks, angels as well as VCs, about AR, and they had no clue what it is. There&#8217;s a need for a whole lot of education. And there are no proof points (as in successful investments in this domain), and &#8211; counter to popular belief &#8211; they don&#8217;t like risk so much&#8230;</strong></p>
<p><strong>Tish:</strong> And consumer adoption must lead the way, right?</p>
<p><strong>Ori: Just like with every emerging technology in history, people never bought the technology; they bought the content, the apps, the benefits that came on top of the technology. Whether it was VHS winning over Betamax, or Blu-ray winning over HD DVD &#8211; it&#8217;s always because of more and better content. Look at the video game console war: Xbox and Nintendo did better than Sony just because they had more and better games. Even Windows was a success thanks to its applications &#8211; people bought it for the applications, not the OS. The content is what drives demand first.</strong></p>
<p><strong>Tish:</strong> One of the challenges in giving people new ways to relate to their energy consumption is that you can just have them looking at graphs of how bad they have been in the past &#8211; that may make them feel bad, but it doesn&#8217;t necessarily give them ways or motivation to change. There perhaps needs to be a more immediate relationship to the data to facilitate change. I think the mantra for optimization of anything, from energy usage to supply chains, is timely, actionable data?</p>
<p><strong>Ori: There are a lot of ideas about measuring information and displaying it to people. For example, the Prius hybrid car has an interesting feature &#8211; which is kind of game-like &#8211; a constant display of your current fuel consumption. That alone changes how people drive, because they try to beat the &#8220;score&#8221; and as a result conserve more fuel. That model can be applied to our homes&#8230;</strong></p>
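<p><em>The feedback loop Ori describes &#8211; show the current reading against a personal best &#8220;score&#8221; so people try to beat it &#8211; can be sketched in a few lines. This is purely illustrative (the class and messages are mine, not from any Toyota system):</em></p>

```python
# Illustrative sketch of a Prius-style "beat your score" display:
# compare the current fuel economy to the driver's personal best
# and report the gap, nudging more economical behavior.
class FuelScoreboard:
    def __init__(self):
        self.best_mpg = 0.0  # personal best fuel economy so far

    def update(self, current_mpg: float) -> str:
        """Return the message a dashboard display might show."""
        if current_mpg > self.best_mpg:
            self.best_mpg = current_mpg
            return f"New best: {current_mpg:.1f} MPG!"
        gap = self.best_mpg - current_mpg
        return f"{current_mpg:.1f} MPG ({gap:.1f} below your best)"

board = FuelScoreboard()
print(board.update(42.0))  # first reading becomes the best
print(board.update(38.5))  # below best: show the gap to beat
```

<p><em>The same loop applies unchanged to a home energy meter: swap MPG for watts and the &#8220;score&#8221; nudge carries over.</em></p>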
<p><strong>Tish:</strong> Yes, that is something I am very interested in. I have been following several projects in this area &#8211; one of my favorites is the <a href="http://www.arduino.cc/" target="_blank">Arduino</a>, <a href="http://www.currentcost.com/" target="_blank">Current Cost</a>/<a href="http://www.ladyada.net/make/tweetawatt/" target="_blank">Tweetawatt</a>, <a href="http://www.pachube.com/" target="_blank">Pachube</a> integrations <a href="http://www.ugotrade.com/2009/04/24/homecamp-2-home-energy-management-and-distributed-sustainability/" target="_blank">I saw at Homecamp</a>.</p>
<p>You joined a startup with Shai Agassi which was bought by SAP, right? He has a brilliant approach with Better Place.</p>
<p><strong>Ori: I think what&#8217;s really unique about Better Place&#8217;s approach is that it doesn&#8217;t require people to change their behavior. People are still going to have their own cars. They&#8217;ll be able to drive as far as they want, and for the same (or lower) cost. It&#8217;s not necessarily about a new technology &#8211; electric cars have been around for a long time, but there was no way people were going to be limited by a 50- or 70-mile range, and Better Place is solving that problem. With its infrastructure of charging spots and battery switching stations, drivers are going to be able to drive anywhere, and it&#8217;ll be similar to having to stop once in a while to refuel your car. The price may be even lower than what you pay today for your transportation needs &#8211; and you&#8217;ll stop generating greenhouse gas. It&#8217;s a clever way of taking technology to a whole new level without changing people&#8217;s behavior.</strong></p>
<p><strong>Tish: </strong>Better Place is a classic example of things as a service, isn&#8217;t it? It is basically a utility company.</p>
<p><strong>Ori: It is similar to a phone carrier model. You pay for a membership that gives you access to the car (equivalent to the phone) and electricity (equivalent to the phone line) for the same price as fuel costs today. And as a bonus, you get to save the world.</strong></p>
<h3><strong>How the iPhone changed the game for AR &#8211; and the iPhone versus Android</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-38.png"><img class="alignnone size-medium wp-image-3472" title="picture-38" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-38-300x198.png" alt="picture-38" width="300" height="198" /></a></p>
<p><em>Picture from Ori&#8217;s post</em><strong><em>, <a href="http://gamesalfresco.com/2009/03/23/gdc-2009-why-the-iphone-just-changed-everything/" target="_blank">&#8220;GDC 2009: Why the iPhone just changed everything&#8221;</a></em></strong></p>
<p><strong>Ori: And back to AR &#8211; you have to take the same approach, because nobody wants to don those huge head-mounted displays or backpacks. You have to take advantage of people&#8217;s current behavior: they already carry their iPhones or similar devices.</strong></p>
<p><strong>Tish:</strong> As we discussed, you just have to get people raising their phones and looking through them when that is a useful thing to do. Both Wikitude and Nathan Freitas&#8217;s graffiti app were enough to get me interested in the evolutionary step of raising my phone! Nathan&#8217;s graffiti app is nice. You leave a marker for your graffiti so other people can find, view, and add their own &#8211; a nice primal experience, like pissing on the lamp post to let your pack know where you&#8217;ve been. Also, the graffiti app taps into a long history of NYC street culture around tagging and graffiti art (see my interview, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it OMG finally for Augmented Reality?&#8221;</a>).</p>
<p><strong>Ori: The App Store has fundamentally changed the mobile gaming industry. Last year it was in shambles. There was no growth. Everybody was complaining: &#8220;We can&#8217;t handle it, there are a million phones, and you have to test on each phone. And carriers suck; they don&#8217;t care about sharing and promoting your content.&#8221; Everything was bad. This year mobile gaming is the hottest thing. And it&#8217;s all because of the iPhone. It changed the game.</strong></p>
<p><strong>Tish: </strong>How do you think Android is going to get traction against the iPhone?</p>
<p><strong>Ori: Well, the number one thing is the form factor &#8211; the iPhone is just much cooler than the G1. The G1 is OK, but it doesn&#8217;t have the same feel. People thought it was going to be easy to clone the iPhone, but none of the attempts have succeeded so far.</strong></p>
<p><strong>Tish: </strong>How much does it matter for AR not being able to run things persistently in the background on the iPhone?</p>
<p><strong>Ori: Actually, they have added such a capability in OS 3. You can now make use of a background service.</strong></p>
<p><strong>Tish:</strong> OS 3 will open up new possibilities for AR?</p>
<p><strong>Ori: The access to the video API is still not public. But there is a new Microsoft application &#8211; Microsoft Tag &#8211; that makes use of that API, which means it is probably OK to use it.</strong></p>
<p><strong>Tish: </strong>(I ask Ori for his card and he shows me how to read it with my iPhone.) Oh nice, you have an AR card, of course!</p>
<h3><strong>In Search of Pong for Augmented Reality</strong></h3>
<p><strong>Tish: </strong>So how will AR begin to, as Blair&#8217;s friend puts it, &#8220;facilitate a killer existence,&#8221; particularly as we are probably looking at some new and perhaps pricey hardware?</p>
<p><strong>Ori: You could take the Better Place approach: we&#8217;re going to give you a great experience, and we&#8217;ll include the devices as part of that experience for the same price. Let&#8217;s say you subscribe to an AR experience which offers multiuser support and all the information you need wherever you go &#8211; exactly according to the vision. You pay for a subscription on a monthly basis, and included in that cost we give you a better device that offers a better AR experience. It&#8217;s following the phone carrier approach, but in a good way.</strong></p>
<p><strong>But first of all we do need our Pong! I was sitting with a couple of AR game enthusiasts at GDC and we were asking ourselves, &#8220;How do we create the first Pong for AR?&#8221;</strong></p>
<p><strong>Was Pong a multiplayer game? Not necessarily! Did it connect to the network? No! We have to create the first dot in a long line of dots that will bring us to our destination.</strong></p>
<p><strong>Tish: </strong>You haven&#8217;t seen a Pong yet, have you?</p>
<p><strong>Ori: Not yet. I mean there&#8217;s maybe a handful of games and apps out there, but I don&#8217;t think any of them is a Pong yet. Still, it&#8217;s getting closer.</strong></p>
<p><strong>Tish: </strong>Kati London is doing some very interesting work on bringing games into reality, isn&#8217;t she?</p>
<p><strong>Ori: Yes, she works with Frank Lantz at <a href="http://playareacode.com/" target="_blank">Area/Code</a>. He teaches at NYU and has designed games for the <a href="http://www.comeoutandplay.org/" target="_blank">&#8220;Come Out and Play&#8221;</a> festival here in Manhattan. And a lot of these games are actually low tech.</strong></p>
<p><strong>Tish:</strong> Yes I have a big alternate reality game blog brewing that I haven&#8217;t had time to write yet!</p>
<p><strong>Ori: &#8220;The city is the gameboard&#8221; is their slogan. The city is going to be a great playground for AR games. The city becomes a theme park, and could become an even bigger tourist attraction &#8211; people will come to the city to be part of these games. So you&#8217;ll have thousands of people running around the city playing all sorts of games, from laser-tag style to history adventures to treasure hunts.</strong></p>
<h3><strong>Composing Reality</strong></h3>
<p><strong>Tish: </strong>So why haven&#8217;t you focused on one of these kinds of games with your company?</p>
<p><strong>Ori: We have a couple of scenarios along these lines that we&#8217;re planning for 2010-11. But first we focus on what&#8217;s possible today.</strong></p>
<p><strong>Tish: </strong>And what&#8217;s stopping you from doing those kind of games today?</p>
<p><strong>Ori: Many things. The devices are not there yet, location services are not accurate enough, and ubiquitous sensors are not there yet.</strong></p>
<p><strong>Tish: </strong>You think alternate reality gaming needs more &#8220;ubiquity&#8221; than is currently available?</p>
<p><strong>Ori: Not necessarily. People are doing alternate reality games with no &#8220;ubiquity&#8221; at all. But my interest is to add the visual aspect. I believe humans are mostly driven visually.</strong></p>
<p><strong>Jane McGonigal said in a talk at GDC that AR would allow us to program reality, which is exactly how I look at it. You can recognize things in various ways &#8211; with WiFi, RFID, and all sorts of sensors &#8211; but visual sensing is always going to be the ultimate way to recognize things. And once you recognize things and know what they are, and can pull information about those things (or people and places) from the internet, you can program them (visually). You could program it to be fictional, like in a video game, or non-fictional, like a documentary. And that allows you to do things that before were unimaginable.</strong></p>
<p><strong>Tish: </strong>But you can&#8217;t forget the visual; it is the primary connection to people&#8217;s sensory relationships.</p>
<p><strong>Ori: Yes, it&#8217;s like you go to a grocery store and you pick your vegetables, a lot of it is by sight and by touch. And what if you could also see just by looking at it that it&#8217;s from a local store, and that it&#8217;s organic?</strong></p>
<p><strong>Tish:</strong> It goes beyond overlays really?</p>
<p><strong>Ori: By the way, I don&#8217;t like the term &#8220;overlay.&#8221; I know that&#8217;s how it looks &#8211; you either overlay or superimpose &#8211; but I&#8217;m still searching for a better term. A term I prefer is &#8220;composing reality.&#8221; Just as painters use brushstrokes and colors to compose a painting, we need to take the real element and the virtual element and compose them into something new. It&#8217;s not just about slapping one on top of the other.</strong></p>
<p><strong>Tish: </strong>Yes, I think the idea of dashboards is not so appealing.</p>
<h3><strong>Pookatak Games</strong></h3>
<p><strong>Tish: </strong>Do you want to explain the evolution of your company? You have an interesting history of success with high end enterprise applications.</p>
<p><strong>Ori: Since I was a kid I wanted to invent and create things. When I discovered software, it was a really cool way of actually creating things from nothing &#8211; from thin air &#8211; and you can do it very quickly. That&#8217;s what brought me into software. But I was always looking for the intersection between technology and art, looking for ways to bring these things together. In the early nineties virtual reality was doing it. It had the appeal of cutting-edge technology that could be combined with art. But then, as we all know, it crashed. So I joined Shai Agassi&#8217;s startup (he is now doing Better Place) back in the early nineties. I was one of the first employees in his startup, which was developing multimedia products, and I led the development of one of its flagship products. At some point we realized the technology could be great for an enterprise environment.</strong></p>
<p><strong>It was a really great experience &#8211; first going through the cycle from a very small startup and then growing into this multi-billion-dollar business. I was responsible for defining and marketing SAP&#8217;s platform, which was called NetWeaver. It was just an idea when we joined SAP, and by the time I left it was a major, major business for SAP. I learned about the challenges of building a platform. No matter what purpose you&#8217;re building it for, it typically follows similar rules. It&#8217;s definitely not just about the technology; the content that comes with it is really key to making a platform successful.</strong></p>
<p><strong>The third part of this platform trifecta is the community. If you don&#8217;t build a community, you won&#8217;t get the critical mass required for adoption. It may be your own platform, but it&#8217;s not necessarily the people&#8217;s platform. That experience is very relevant to what we&#8217;re doing today. Now a new industry is being born on the basis of a remarkable technology. But to drive adoption, first we&#8217;ll need good content. The content will be created using today&#8217;s technology, with internal tools developed to simplify the process. The next step would be to make those internal tools available to other developers &#8211; help scale the industry, enable innovation on a larger scale. That way we have a chance to create a platform. So it isn&#8217;t really just about my company. I&#8217;m so passionate about augmented reality, I want it to become a healthy and successful industry for the next 5, 10, 15 years.</strong></p>
<p><strong>Tish: </strong>Yes, I am so ready to be liberated from sitting behind a computer screen! And I know that all this hardware is murdering the environment.</p>
<p><strong>Ori: There&#8217;s a book by Rolf Hainich called &#8220;<a id="ba8p" title="The End Of Hardware" href="http://www.theendofhardware.com/">The End of Hardware</a>.&#8221; It&#8217;s about hardware for augmented reality. Once you use goggles or other AR interfaces, you eliminate the need for screens, laptops, etc. It&#8217;s going to be great for the environment. You have read Rainbow&#8217;s End, right? According to the book, in a few years there will barely be any (visible) hardware &#8211; or at least it&#8217;ll have a much smaller environmental footprint. And it&#8217;ll touch every aspect of life, everything you do. It&#8217;ll change the way you interact with the world.</strong></p>
<h3><strong>The Elusive Eyewear for Immersive AR</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/retroar-googlespost.jpg"><img class="alignnone size-medium wp-image-3469" title="retroar-googlespost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/retroar-googlespost-300x225.jpg" alt="retroar-googlespost" width="300" height="225" /></a><br />
<em>Friend of Ori&#8217;s in San Francisco wearing retro AR goggles (from <a href="http://gamesalfresco.com/2009/05/04/gdc-2009-roundup-a-tiny-spark-of-augmented-reality/" target="_blank">Games Alfresco, Ori&#8217;s roundup of GDC 2009</a>)</em></p>
<p><strong>Tish:</strong> OK, let&#8217;s talk about goggles.</p>
<p><strong>Ori: Goggles are going to happen; we want to be hands free.</strong></p>
<p><strong>It&#8217;s going to happen because it&#8217;s just a more intuitive way to use this technology. But above all it has to look cool, because if it&#8217;s not &#8211; if it&#8217;s a big headset &#8211; then maybe a small percentage of the population might use it, but most people won&#8217;t. It has to look like an accessory, like new cool eyeglasses that you just must wear.</strong></p>
<p><strong>I recently talked to a friend who runs an industrial design firm and has experience designing such glasses for companies like Microvision and Lumus. He says that when you try to bring the images so close to the eyes, there are some really hard problems to solve. Otherwise it can become really annoying and cause dizziness.</strong></p>
<p><strong>But I&#8217;m optimistic. I believe it&#8217;s going to happen 3 to 5 years from now. It&#8217;s already starting: Vuzix announced goggles that will be available this year, and some AR apps are going to take advantage of them next year. Initially only a fraction of the population will use them, and that&#8217;s going to help advance the technology and make it better and better. But it&#8217;s going to take time until it reaches the mass market.</strong></p>
<p><strong>Tish:</strong> In virtual worlds we have seen, I think, a lot of mistakes in terms of reinventing the wheel and producing too many proprietary versions of the same thing, and not enough concerted effort on standards and open platforms that could create a vibrant ecosystem. How can augmented reality avoid making the same mistakes?</p>
<p><strong>Ori: There are some early AR open source efforts &#8211; ARToolKit, ARTag &#8211; but it is not a movement yet. One of the things we&#8217;re trying to do at ISMAR this year is to put together discussions around key industry issues, such as standards. Some people say it&#8217;s too early &#8211; that you have to have a de facto standard to start from. But pretty soon it&#8217;s going to be too late. Just like with virtual worlds: all of a sudden you have all these islands that don&#8217;t talk to each other. Why get to that point if we can plan to avoid it? Let&#8217;s start thinking about it right now. On the other front there are devices. There are pockets of people working on adapting devices for AR, second-guessing the hardware companies. Why not get them together with the Intels and Nvidias of the world and discuss what this device should be able to do &#8211; and then compete to make it happen?</strong></p>
<p><strong>Tish: </strong>How much luck are you having with this discussion part?</p>
<p><strong>Ori: People are very interested in doing this. We proposed these panels for ISMAR. And I&#8217;ve got some key people already on board. They have tons of input, they want to get involved. We&#8217;ll see how much we can actually get out of it.</strong></p>
<p><strong>Tish: </strong>In virtual worlds it was a while before vibrant open source communities developed. OpenSim has, I think, been the breakthrough community in this regard.</p>
<p><strong>Ori: You have to think about the elements up front. The dream job is to architect the industry. Say we agree on the required pieces. Then we could help the right companies succeed in delivering those pieces. Next, we have to collaborate so that these pieces talk to each other. And eventually these communication methods will become de facto standards that most developers will adopt.</strong></p>
<p><strong>Tish: </strong>So I&#8217;m going to put you in the role. You&#8217;ve got your dream job. You&#8217;re going to architect this community. So what are the key pieces and where would you like to see the open source communities take hold first?</p>
<p><strong>Ori: Open source will not be exclusive. It&#8217;s going to live side by side with proprietary technology.</strong></p>
<p><strong>The key pieces? You have the user at the center. The user interacts with a lens; the lens includes both the hardware and the software. The lens senses and interacts with the world, which includes people, things, and places. And these people-things-places emit information &#8211; about who they are, where they are, what they&#8217;re doing, etc. &#8211; which is then stored in the cloud.</strong></p>
<p><strong>And then you have the content providers &#8211; the people and companies, the composers, who weave AR experiences through the pieces we mentioned before. These composers need a platform that glues the pieces together. Pieces of the platform will be on the lens, in the world, and in the cloud. If you manage to remove the friction and connect these pieces into an experience that people like, then you have a platform. What the platform does is reduce the overhead and accelerate innovation.</strong></p>
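<p><em>The architecture Ori sketches &#8211; a user behind a lens, a world of people/things/places emitting information into the cloud, and composers weaving experiences on top &#8211; can be expressed as a minimal data model. The names and shapes below are my own illustration of his description, not any real platform&#8217;s API:</em></p>

```python
# Illustrative data model of the pieces Ori names: world entities
# emit information into the cloud, and a lens pulls it back out
# for the user. Names are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class WorldEntity:
    kind: str  # "person", "thing", or "place"
    name: str
    info: dict = field(default_factory=dict)  # what the entity emits

@dataclass
class Cloud:
    entities: list = field(default_factory=list)

    def publish(self, entity: WorldEntity):
        self.entities.append(entity)

    def lookup(self, kind: str):
        return [e for e in self.entities if e.kind == kind]

@dataclass
class Lens:
    """The hardware + software the user looks through."""
    cloud: Cloud

    def sense(self, kind: str):
        # recognize entities of a kind and pull their info from the cloud
        return {e.name: e.info for e in self.cloud.lookup(kind)}

cloud = Cloud()
cloud.publish(WorldEntity("place", "Union Square", {"event": "AR game"}))
lens = Lens(cloud)
print(lens.sense("place"))  # {'Union Square': {'event': 'AR game'}}
```

<p><em>A &#8220;composer&#8221; in this model is simply code that combines what the lens senses with virtual elements &#8211; the platform&#8217;s job is to make that glue boilerplate-free.</em></p>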
<p><strong>Tish: </strong>Another problem virtual worlds faced in their development was their isolation from the world wide web.Â  Will augmented reality avoid this plight?</p>
<p><strong>Ori: Yes, I believe the key, like you said before, is not to reinvent the wheel. The cloud is already there. Take Wikitude, for example: all <a href="http://www.mobilizy.com/" target="_blank">Mobilizy</a> had to do was build a relatively simple client app connected to Wikipedia, and all of a sudden it offered a wealth of information in your field of view.</strong></p>
<p><strong>I think we can learn a lot from Web 2.0. For example, in order to have a ubiquitous experience like <a href="http://www.curiousraven.com/" target="_blank">Robert Rice</a> and others are striving for, you&#8217;ll need to 3D-map the world. Google Earth-like apps are going to help, but they are not going to be sufficient. So let&#8217;s leverage people. Google became successful in part by making people work with it: each time you create a link from your blog to my blog, its search engine learns from it. So let&#8217;s find ways to make people create information that can be used for AR.</strong></p>
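<p><em>The Wikitude model Ori praises &#8211; a thin client over a cloud source that already exists &#8211; is easy to see in miniature. Below is a hedged sketch using the public MediaWiki geosearch endpoint (the coordinates are arbitrary, and this only builds the request URL rather than calling the network):</em></p>

```python
# Illustrative sketch of a Wikitude-style lookup: build a MediaWiki
# "geosearch" query for articles near a GPS fix. Coordinates are
# arbitrary examples; no network call is made here.
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def nearby_articles_url(lat: float, lon: float, radius_m: int = 500) -> str:
    params = {
        "action": "query",
        "list": "geosearch",
        "gscoord": f"{lat}|{lon}",
        "gsradius": radius_m,  # metres around the fix
        "gslimit": 10,
        "format": "json",
    }
    return API + "?" + urlencode(params)

# e.g. a point in Manhattan
print(nearby_articles_url(40.7359, -73.9911))
```

<p><em>An AR client would fetch this URL, take the returned article titles and coordinates, and draw them in the camera&#8217;s field of view &#8211; the &#8220;wealth of information&#8221; comes entirely from a cloud that was already there.</em></p>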
<p><object width="425" height="344" data="http://www.youtube.com/v/GTXtW3W8mzQ&amp;hl=en&amp;fs=1" type="application/x-shockwave-flash"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/GTXtW3W8mzQ&amp;hl=en&amp;fs=1" /><param name="allowfullscreen" value="true" /></object></p>
<p><em>Ori Inbar directed <a title="Wiki Mouse" href="http://www.youtube.com/watch?v=GTXtW3W8mzQ" target="_blank">Wiki Mouse</a> &#8211; a WIKI Film co-created by a swarm of movie makers around the world.</em></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/feed/</wfw:commentRss>
		<slash:comments>12</slash:comments>
		</item>
		<item>
		<title>HomeCamp 2: Home Energy Management and Distributed Sustainability</title>
		<link>http://www.ugotrade.com/2009/04/24/homecamp-2-home-energy-management-and-distributed-sustainability/</link>
		<comments>http://www.ugotrade.com/2009/04/24/homecamp-2-home-energy-management-and-distributed-sustainability/#comments</comments>
		<pubDate>Fri, 24 Apr 2009 19:14:16 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Bar Camp]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[CurrentCost]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[HomeCamp]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[Virtual HomeCamp]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Add new tag]]></category>
		<category><![CDATA[distributed sustainability]]></category>
		<category><![CDATA[electricity 2.0.]]></category>
		<category><![CDATA[green technology]]></category>
		<category><![CDATA[home energy management]]></category>
		<category><![CDATA[intelligent energy management]]></category>
		<category><![CDATA[living greener]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[sustainable interaction design]]></category>
		<category><![CDATA[TweetaWatt]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3423</guid>
		<description><![CDATA[HomeCamp is a home hacking, automation and green technology community that will be gathering in London tomorrow, Saturday 25th April 2009, 10am until 6pm BST (GMT + 1), and in an OpenSim event running alongside for virtual participation, to brainstorm new possibilities for distributed sustainability, creative smart meters, monitoring, graphing and visualizing energy usage. More [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-31.png"><img class="alignnone size-medium wp-image-3424" title="picture-31" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-31-299x300.png" alt="picture-31" width="299" height="300" /></a></p>
<p><a rel="nofollow" href="http://homecamp.org.uk/">HomeCamp</a> is a home hacking, automation and green technology community that will be <a href="http://maps.google.co.uk/maps?f=q&amp;source=s_q&amp;hl=en&amp;geocode=&amp;q=65+-+71+Scrutton+Street,+London,+EC2A+4PJ&amp;sll=51.509912,-0.129361&amp;sspn=0.100214,0.30899&amp;ie=UTF8&amp;ll=51.524379,-0.080895&amp;spn=0.006582,0.019312&amp;z=16&amp;iwloc=addr" target="_blank">gathering in London</a> tomorrow, Saturday 25th April 2009, 10am until 6pm BST (GMT + 1), and in an <a href="http://homecamp.pbwiki.com/Virtual-Home-Camp">OpenSim event running alongside for virtual participation</a>, to brainstorm new possibilities for distributed sustainability, creative smart meters, monitoring, graphing and visualizing energy usage.</p>
<p class="MsoNormal">More details and videos on the <a href="http://homecamp.org.uk" target="_blank">blog</a>. <a href="http://homecamp.pbwiki.com/" target="_blank">The wiki, which includes signup</a>, is the main portal to all the online activity.</p>
<p>As James Governor notes <a href="http://www.redmonk.com/jgovernor/2009/04/24/homecamp-returns/" target="_blank">here</a>:</p>
<blockquote><p><span lang="EN-GB">there has been a huge amount of code and applications released focused purely on using technology for home energy monitoring and automation. We have an active Google group and quite a few videos and content showcasing the various applications and hardware currently being used by geeks to save money and live greener.</span></p></blockquote>
<p><span lang="EN-GB">Now the challenge is to see how this seedling home energy management movement can really grow into widely adopted distributed sustainability solutions that everyone can use and participate in.</span></p>
<p><a href="http://www.yellowpark.net/cdalby/index.php/about/" target="_blank">Chris Dalby</a> (<a href="http://www.yellowpark.net/cdalby/index.php/2009/04/23/homecamp-2-is-this-saturday/" target="_blank">see here</a>), <a href="http://andypiper.wordpress.com/2009/04/24/home-camp-mark-2/" target="_blank">Andy Piper</a>, James Governor of <a href="http://www.redmonk.com/jgovernor/" target="_blank">Monkchips</a> (<a href="http://www.redmonk.com/jgovernor/2009/04/24/homecamp-returns/" target="_blank">see here</a>), and Tom Raftery of <a href="http://greenmonk.net/" target="_blank">GreenMonk</a> (<a href="http://greenmonk.net/homecamp-ii/" target="_blank">see here</a>) have all posted on tomorrow&#8217;s <a href="http://homecamp.pbwiki.com/" target="_blank">HomeCamp</a> event. So I am just going to add some quick notes, especially to highlight some of what will be going on virtually for those of you who, like me, can&#8217;t make it to London.</p>
<p>You can tune in either on the live video ustream, or sign up on <a href="http://reactiongrid.com/">ReactionGrid</a> and join the <a href="http://homecamp.pbwiki.com/Virtual-Home-Camp">OpenSim event</a>. You can also keep up with what is happening on Twitter via #homecamp. I highly recommend that you catch Tom Raftery&#8217;s talk, which will be streamed from Spain live into the London meeting, the OpenSim event on ReactionGrid, and Ustream. Tom, a leading green technology analyst at <a href="http://redmonk.com/" target="_blank">RedMonk</a> (<a href="http://greenmonk.net/" target="_blank">see also GreenMonk</a>), will be picking up, in depth, on some themes raised in his brilliant ETech 2009 presentation, <a href="http://en.oreilly.com/et2009/public/schedule/detail/5655" target="_blank">&#8220;Electricity 2.0: Applying the Lessons of the Web to Our Energy Networks.&#8221;</a></p>
<p class="MsoNormal"><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/tweetawatt.jpg"><img class="alignnone size-medium wp-image-3425" title="tweetawatt" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/tweetawatt-300x162.jpg" alt="tweetawatt" width="300" height="162" /></a></p>
<p class="MsoNormal">There will be homecampers dropping in to virtual homecamp in ReactionGrid throughout the day, including <a href="http://blogs.ipona.com/chris/" target="_blank">Chris Hart (the awesome &#8220;girl-geek&#8221; @dstrawberrygirl)</a>, <a href="http://mikethebee.mevio.com/" target="_blank">MiketheBee</a>, and <a href="http://www.cminion.com/wordpress/" target="_blank">Cminion</a>, who has a number of cool projects to demo, including <a href="http://www.cminion.com/wordpress/?p=43" target="_blank">his energy turbines</a>. <a href="http://www.gomaya.com/glyph/" target="_blank">Dave Pentecost</a> (pictured above with his <a href="http://twitter.com/tweetawatt" target="_blank">Tweetawatt</a> and <a href="http://www.pachube.com/" target="_blank">Pachube</a> Orb) and I (<a href="http://docs.google.com/Presentation?id=dhj5mk2g_214g48q37hj" target="_blank">see our presentation for EarthWeek SL here</a>) plan to be at Virtual Homecamp on ReactionGrid between 9am and 10.30am EST. Dave has done a number of cool energy monitoring hacks, including a <a href="http://www.pachube.com/" target="_blank">Pachube</a> link to and from <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a>.</p>
<p><span class="title">Also keep your eye on Dave&#8217;s blog, <a href="http://www.gomaya.com/glyph/" target="_blank">The Daily Glyph</a>, for what&#8217;s new in distributed sustainability. Dave just posted some great links on sustainable interaction design</span> and work by ITP researchers and others on the sustainable use of technology.</p>
<p><a title="Sustainable Interaction | Main / Papers" href="http://itp.nyu.edu/sustainability/interaction/Main/Papers">Sustainable Interaction | Main / Papers</a></p>
<p><a title="Sustainable interaction design | Sustainable Minds" href="http://www.sustainableminds.com/category/categories/sustainable-interaction-design">Sustainable interaction design | Sustainable Minds</a></p>
<p><a title="Design For the Other 90% | Cooper-Hewitt, National Design Museum" href="http://other90.cooperhewitt.org/">Design For the Other 90% | Cooper-Hewitt, National Design Museum</a></p>
<p class="MsoNormal">If you are in London, look out for Oliver Goh of <a href="http://www.shaspa.com/" target="_blank">Shaspa</a>, as Oliver will be at Homecamp in London. As I mentioned in <a href="http://www.ugotrade.com/2009/04/19/sensor-networks-and-sustainability-connecting-real-virtual-mobile-and-augmented-reality/" target="_blank">my previous post</a>, Oliver will soon be launching both Shaspa community and enterprise hardware and software packages for &#8220;Intelligent Energy Management.&#8221;</p>
<p class="MsoNormal"><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-35.png"><img class="alignnone size-medium wp-image-3428" title="picture-35" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-35-300x229.png" alt="picture-35" width="300" height="229" /></a></p>
<p>For a bit of homecamp history, James Governor (picture below from <a href="http://chinposin.com/home/monkchips" target="_blank">Chinposin</a>) recaps some of the successes of the first HomeCamp <a href="http://www.redmonk.com/jgovernor/2009/04/24/homecamp-returns/" target="_blank">here</a>.</p>
<p>And last but not least, a big thanks to sponsors <a href="http://currentcost.co.uk/">CurrentCost</a>, <a href="http://greenmonk.net/">Greenmonk</a>, <a href="http://www.pachube.com/">Pachube</a>, <a href="http://www.onzo.co.uk/" target="_blank">Onzo</a>, and <a href="http://reactiongrid.com/">ReactionGrid</a>, and media partner <a href="http://theattick.tv/" target="_blank">theattick.tv</a>, who are making the London and virtual homecamp events possible.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-33.png"><img class="alignnone size-medium wp-image-3426" title="picture-33" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-33-294x300.png" alt="picture-33" width="294" height="300" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/04/24/homecamp-2-home-energy-management-and-distributed-sustainability/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Sensor Networks and Sustainability: &#8220;Connecting Real, Virtual, Mobile and Augmented Spaces&#8221;</title>
		<link>http://www.ugotrade.com/2009/04/19/sensor-networks-and-sustainability-connecting-real-virtual-mobile-and-augmented-reality/</link>
		<comments>http://www.ugotrade.com/2009/04/19/sensor-networks-and-sustainability-connecting-real-virtual-mobile-and-augmented-reality/#comments</comments>
		<pubDate>Sun, 19 Apr 2009 06:32:59 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[3D internet]]></category>
		<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[CurrentCost]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[HomeCamp]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[MQTT and RSMB]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[realXtend]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual HomeCamp]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[arduino]]></category>
		<category><![CDATA[Carbon Goggles]]></category>
		<category><![CDATA[distributed sustainability]]></category>
		<category><![CDATA[home energy management]]></category>
		<category><![CDATA[open data]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[sensor networks and sustainability]]></category>
		<category><![CDATA[SHASPA]]></category>
		<category><![CDATA[the internet of things]]></category>
		<category><![CDATA[TweetaWatt]]></category>
		<category><![CDATA[Virtual Worlds]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3381</guid>
		<description><![CDATA[Today, I did a presentation on connecting real, virtual, mobile, and augmented spaces to support sustainability for Earth Week SL, with Dave Pentecost and Jim Purbrick, who presented on Carbon Goggles. Dave and I focused on sensor networks, open data, Pachube, OpenSim, and sustainability from the perspective of &#8220;hack local, think global.&#8221; Dave and I will [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-21.png"><img class="alignnone size-medium wp-image-3382" title="picture-21" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-21-300x225.png" alt="picture-21" width="300" height="225" /></a></p>
<p>Today, I did a presentation on <a href="http://docs.google.com/Presentation?id=dhj5mk2g_214g48q37hj" target="_blank">connecting real, virtual, mobile, and augmented spaces to support sustainability</a> for <a href="http://slearthweek.wordpress.com/2009/04/10/earth-week-press-release-see-schedule-also/" target="_blank">Earth Week SL</a>, with <a href="http://www.gomaya.com/glyph/" target="_blank">Dave Pentecost</a> and <a href="http://jimpurbrick.com/" target="_blank">Jim Purbrick</a>, who presented on <a href="http://carbongoggles.org/" target="_blank">Carbon Goggles</a>.</p>
<p>Dave and I focused on sensor networks, open data, <a href="http://www.pachube.com/" target="_blank">Pachube</a>, <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a>, and sustainability from the perspective of &#8220;hack local, think global.&#8221; Dave and I will be picking up on some of these themes of sensor networks and sustainability next week in our presentation with <a href="http://www.darleon.com/" target="_blank">Dimitri Darras</a> at ITP, NYU, April 24th, 6.30 pm to 8 pm &#8211; <a href="http://itp.nyu.edu/sigs/news/special-event-open-sim/" target="_blank">details here</a>. If you are in New York City, I hope to see you there.</p>
<p>We got some interesting insights into augmented reality from <a href="http://jimpurbrick.com/" target="_blank">Jim Purbrick</a>, whose <a href="http://carbongoggles.org/" target="_blank">Carbon Goggles</a> project prototypes how we can use augmented reality to read carbon identity, combining well-organized, verified data from <a href="http://www.amee.com/" target="_blank">AMEE</a> &#8211; a neutral aggregation platform that measures the &#8220;carbon footprint&#8221; of everything on earth &#8211; with crowdsourced tagging and linking.</p>
<h3>Shaspa &#8211; &#8220;the sensor network system that has it all&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-22.png"><img class="alignnone size-medium wp-image-3391" title="picture-22" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-22-300x224.png" alt="picture-22" width="300" height="224" /></a></p>
<p>We also discussed the recently launched <a href="http://www.shaspa.com/" target="_blank">Shaspa</a>. Shaspa&#8217;s energy management packages connect spaces &#8211; real, virtual, mobile and augmented. Shaspa has been blogged by <a href="http://www.maxping.org/business/real-life/virtual-management-of-energy-consumption-in-the-home.aspx/" target="_blank">Maxping</a> and <a href="http://www.virtualworldsnews.com/2009/04/shaspa-launches-home-energy-organizer-on-opensim.html" target="_blank">Virtual World News</a>, so you can read all about it, but the Shaspa device kit won&#8217;t be available until next week. Some key features of the Home Energy package are listed on the slide above. However, this evening, Dave Pentecost and I got a sneak preview of both the Shaspa community and enterprise hardware and software packages from Shaspa founder Oliver Goh. We were pretty impressed.</p>
<p><strong>Dave:</strong> &#8220;<strong>It&#8217;s the ultimate hackable device for energy management!&#8221;</strong></p>
<p><strong>Oliver:</strong> <strong>&#8220;Bring us any sensor device &#8211; with documentation, and within three days we will put a driver into Shaspa.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/daveandoliverpost.jpg"><img class="alignnone size-medium wp-image-3392" title="daveandoliverpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/daveandoliverpost-300x178.jpg" alt="daveandoliverpost" width="300" height="178" /></a></p>
<p>Oliver is on the right and Dave on the left in the picture above. The picture below shows Shaspa in OpenSim. Oliver and I will be attending the <a href="http://www.3dtlc.com/"><span style="color: #810081;">3D Training, Learning and Collaboration</span></a> Conference in Washington, DC, next week.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-23.png"><img class="alignnone size-medium wp-image-3412" title="picture-23" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-23-300x208.png" alt="picture-23" width="300" height="208" /></a></p>
<h3>Links</h3>
<p>Here are some of the links that came up in the presentation as many people asked for them to be published. Dave also has them on <a href="http://www.gomaya.com/glyph/archives/002520.html#002520" target="_blank">his blog</a>.</p>
<p>SLIDES on GOOGLE DOCS:<br />
<a title="Earth Week SL Presentation, April 18th, 2009 - Google Docs" href="http://docs.google.com/Presentation?id=dhj5mk2g_214g48q37hj">Earth Week SL Presentation, April 18th, 2009 &#8211; Google Docs</a></p>
<p><a href="http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">Pachube, sensor networks</a></p>
<p><a href="http://www.gomaya.com/glyph" target="_blank">Dave&#8217;s blog covering Maya archaeology, jungle ecology, and technology</a></p>
<p><a href="http://www.gomaya.com/glyph/archives/001914.html" target="_blank">Maya Frontier, Usumacinta River videos</a></p>
<p><a href="http://en.wikipedia.org/wiki/Collapse_(book)" target="_blank">Collapse</a></p>
<p><a href="http://arduino.cc/" target="_blank">Arduino</a> &#8211; microcontrollers</p>
<p><a href="http://community.pachube.com/tutorials" target="_blank">Pachube &#8211; tutorials</a></p>
<p><a href="http://apps.pachube.com/" target="_blank">Pachube Apps</a></p>
<p><a href="http://www.pachube.com/feeds/1284" target="_blank">Arduino-SL-Pachube data site</a></p>
<p><a href="http://www.pachube.com/feeds/1505" target="_blank">SL to Pachube site</a></p>
<p><a href="http://www.zachhoeken.com/connecting-to-the-world" target="_blank">Dave&#8217;s Danger Shield &#8211; Pachube  tutorial</a></p>
<p><a href="http://www.ladyada.net/make/tweetawatt/" target="_blank">TweetaWatt site (LadyAda)</a></p>
<p><a href="http://www.gomaya.com/glyph/archives/002505.html" target="_blank">Dave&#8217;s post on TweetaWatt to Opensim/SL</a></p>
<p><a href="http://peterquirk.wordpress.com/2008/12/22/tutorial-using-the-streamlined-tool-chain-for-importing-sketchup-models-into-realxtend-04/" target="_blank">Peter Quirk&#8217;s post on Importing Sketchup into RealXtend</a></p>
<p><a href="http://opensimulator.org/wiki/Main_Page" target="_blank">Opensim</a></p>
<p><a href="http://www.realxtend.org/" target="_blank">RealXtend</a></p>
<p><a href="http://reactiongrid.com/" target="_blank">ReactionGrid</a></p>
<p><a href="http://homecamp.pbwiki.com/" target="_blank">homecamp</a></p>
<p><a href="http://www.cminion.com/wordpress/" target="_blank">Cminion &#8211; wind turbines in OpenSim</a></p>
<p><a href="http://mikethebee.mevio.com/" target="_blank">MiketheBee</a></p>
<p><a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">Is it &#8220;OMG finally&#8221; for Augmented Reality?</a></p>
<p><a href="http://www.ugotrade.com/2008/12/15/smart-planetinterview-with-andy-stanford-clark/" target="_blank">Smart Planet: Interview with Andy Stanford-Clark</a></p>
<p><a href="http://www.orangecone.com/" target="_blank">Orange Cone &#8211; Information Shadows and Things as Services</a></p>
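The Pachube hacks linked above (TweetaWatt, Arduino, the SL/OpenSim bridges) all boil down to pushing sensor readings into a Pachube feed over HTTP. As a minimal sketch only, assuming the early Pachube CSV update shape (a PUT to /api/feeds/&lt;id&gt;.csv carrying an X-PachubeApiKey header), with the API key a placeholder and feed 1284 borrowed from the Arduino-SL-Pachube link above:

```python
# Sketch (not the authors' actual code) of building a Pachube CSV update,
# in the spirit of the TweetaWatt-to-Pachube hacks linked above.
# Assumption: the v1-era API took a CSV body PUT to /api/feeds/<id>.csv
# with an X-PachubeApiKey header; the key below is a placeholder.

def build_pachube_update(feed_id, readings):
    """Return the (url, headers, body) triple for a Pachube CSV update.

    `readings` is a list of numeric datastream values, one per
    datastream, in feed order.
    """
    url = "http://www.pachube.com/api/feeds/%d.csv" % feed_id
    headers = {
        "X-PachubeApiKey": "YOUR_API_KEY_HERE",  # placeholder
        "Content-Type": "text/csv",
    }
    body = ",".join(str(v) for v in readings)
    return url, headers, body

# Example: one datastream carrying the current wattage from a TweetaWatt.
url, headers, body = build_pachube_update(1284, [42.5])
print(url)   # http://www.pachube.com/api/feeds/1284.csv
print(body)  # 42.5
```

Sending the request is then a standard HTTP PUT with any client library; the interesting part for the OpenSim bridges is that the same feed can be read back out and rendered in-world.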
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/04/19/sensor-networks-and-sustainability-connecting-real-virtual-mobile-and-augmented-reality/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>&#8220;Do Well By Doing Good:&#8221; Talking Experience and Design in a Mobile World with Nathan Freitas and David Oliver</title>
		<link>http://www.ugotrade.com/2009/04/04/do-well-by-doing-good-talking-experience-and-design-in-a-mobile-world-with-nathan-freitas-and-david-oliver/</link>
		<comments>http://www.ugotrade.com/2009/04/04/do-well-by-doing-good-talking-experience-and-design-in-a-mobile-world-with-nathan-freitas-and-david-oliver/#comments</comments>
		<pubDate>Sat, 04 Apr 2009 06:05:18 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Metarati]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Phones in Africa]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[albany's king geek]]></category>
		<category><![CDATA[andrew hoppin]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[android APIs]]></category>
		<category><![CDATA[android market place]]></category>
		<category><![CDATA[android on HTC]]></category>
		<category><![CDATA[Bre Pettis]]></category>
		<category><![CDATA[Coovents]]></category>
		<category><![CDATA[crowd sourced]]></category>
		<category><![CDATA[david oliver]]></category>
		<category><![CDATA[geo report android]]></category>
		<category><![CDATA[geotagging]]></category>
		<category><![CDATA[government 2.0]]></category>
		<category><![CDATA[greporter]]></category>
		<category><![CDATA[information age volunteerism]]></category>
		<category><![CDATA[inkscape]]></category>
		<category><![CDATA[julian Bleeker]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[MeetMoi]]></category>
		<category><![CDATA[Mobile design]]></category>
		<category><![CDATA[mobile user experience design]]></category>
		<category><![CDATA[mobile voter]]></category>
		<category><![CDATA[nathan freitas]]></category>
		<category><![CDATA[NYC Resistor]]></category>
		<category><![CDATA[oliver coady]]></category>
		<category><![CDATA[Oliver+Coady]]></category>
		<category><![CDATA[open intents]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Peek]]></category>
		<category><![CDATA[tech president]]></category>
		<category><![CDATA[the extraordinaries]]></category>
		<category><![CDATA[Thingiverse]]></category>
		<category><![CDATA[viaplace]]></category>
		<category><![CDATA[Volunteerism in the information age]]></category>
		<category><![CDATA[widget based commerce]]></category>
		<category><![CDATA[xtify]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3356</guid>
		<description><![CDATA[Nathan Freitas holding a Peek with Oliver+Coady partner David Oliver talking to fans at New York Tech Meetup &#8211; Mobile Meets Social Volunteerism and participation in public life seem to come naturally to Nathan Freitas. Nathan is one of the leading innovators/developers in NYC in mobile strategy/design (for more on his Android development read on). [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/nathafreitaswithpeek.jpg"><img class="alignnone size-medium wp-image-3357" title="nathafreitaswithpeek" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/nathafreitaswithpeek-300x199.jpg" alt="nathafreitaswithpeek" width="300" height="199" /></a></p>
<p><em>Nathan Freitas holding a <a href="http://www.getpeek.com/indexb.html" target="_blank">Peek</a> with <a href="http://olivercoady.com/" target="_blank">Oliver+Coady</a> partner David Oliver talking to fans at <a href="http://www.meetup.com/ny-tech/calendar/9466657/" target="_blank">New York Tech Meetup &#8211; Mobile Meets Social</a><br />
</em><br />
Volunteerism and participation in public life seem to come naturally to <a id="chzc" title="Nathan Freitas" href="http://openideals.com/" target="_blank">Nathan Freitas</a>. Nathan is one of the leading innovators and developers in NYC mobile strategy and design (for more on his Android development, read on). He is much in demand as a speaker who shows others how to realize their mobile experience and design dreams (for upcoming speaking engagements, see Nathan&#8217;s blog). Nathan has also spent much of the last ten years working on new ways for causes and non-profits to benefit from technology.</p>
<p>Most recently <a id="plcq" title="Nathan has started working part time for the NY Senate under, &quot;Albany's King Geek,&quot;" href="http://www.observer.com/2009/media/albany%E2%80%99s-king-geek" target="_blank">Nathan has started working part time for the NY Senate under, &#8220;Albany&#8217;s King Geek,&#8221;</a> the new CIO Andrew Hoppin:</p>
<p><strong>&#8220;The CIO team is organizing training sessions for senators and their staff on social networking platforms and how to pay attention to online feedback. Last week, they hired mobile specialist <span class="il">Nathan</span> <span class="il">Freitas</span> to create new phone applications that will allow citizens to get government news on the go.&#8221; </strong></p>
<p>Also, Nathan is currently a supporting engineer on <a href="http://www.theextraordinaries.org/" target="_blank">The Extraordinaries</a>, a smart phone application that explores territory &#8220;beyond the flattening tendency of online relationships&#8221; (see <a id="i6qw" title="this list from Andy Oram" href="http://www.praxagora.com/andyo/professional/government_participation_question.html" target="_blank">this list from Andy Oram</a> of questions on government participation). <a href="http://www.theextraordinaries.org/" target="_blank">The Extraordinaries</a> is Ben Rigby and Jacob Colker&#8217;s prize-winning project &#8211; &#8220;a smartphone application that delivers volunteer opportunities on-demand.&#8221;</p>
<p>Ben&#8217;s post, <a title="Information Age Volunteerism - Open Sourced! Crowdsourced!" href="http://techpresident.com/blog-entry/information-age-volunteerism-open-sourced-crowdsourced" target="_blank">Information Age Volunteerism &#8211; Open Sourced! Crowdsourced!</a>, and the extensive comments give a detailed analysis and critique of this brilliant and creative new approach to volunteerism in the information age.</p>
<p>Nathan, in my view, is a great example of how to &#8220;do well by doing good.&#8221; And, I am particularly excited by the work Nathan and his partner in <a id="nwp6" title="Oliver+Coady" href="http://olivercoady.com/">Oliver+Coady,</a> David Oliver, are doing on Android, e.g., Nathan&#8217;s new <a id="jjed" title="gReporter - opensource, geotagging, media capture report client" href="http://openideals.com/greporter/" target="_blank">gReporter &#8211; opensource, geotagging, media capture report client</a> (you can <a id="ycbi" title="download the source here" href="http://github.com/natdefreitas/georeport-android/tree/master">download the source here</a>).</p>
<p>I first met Nathan when I interviewed him about <a id="kx4_" title="Cruxy" href="http://openideals.com/2009/03/11/cruxy/">Cruxy</a> in 2007 (see my post, <a href="http://www.ugotrade.com/2007/05/24/the-mixed-reality-metarati-at-destroy-tv-merging-art-commerce-politics-and-play/" target="_blank">The Mixed Reality Metarati and &#8220;Destroy TV:&#8221; Merging Art, Technology, Politics and Play</a>). Nathan recently announced that <a id="v9nm" title="&quot;the fat lady has just uploaded her last song,&quot;" href="http://openideals.com/2009/03/11/cruxy/">&#8220;the fat lady has just uploaded her last song.&#8221;</a> Cruxy was an innovative distributed music venture Nathan started with Jon Oakes. Although, as Nathan explains, Cruxy &#8220;never really broke through in the way we hoped,&#8221; it seems to have been a fertile garden for ideas that are coming of age in Oliver+Coady&#8217;s current mobile experience endeavors. As Nathan explains, &#8220;the world, including Apple and iTunes, has shifted to embrace some of the ideals we have always had &#8211; open formats, more ways to distribute and promote online, more avenues for niche content to be discovered and heard.&#8221; Cruxy&#8217;s technology platform, built by the incomparable Will Meyer:<br />
<strong><br />
&#8220;was a great success in my mind, being one of the first to fully embrace Amazon&#8217;s cloud and provide a widget-based commerce system that actually worked!&#8221;</strong></p>
<p>Nathan has a new company, Oliver+Coady. But Nathan told me that he feels he is over his &#8220;start up phase.&#8221;</p>
<p><strong>Nathan Freitas:</strong> I am just tired of the term &#8220;startup.&#8221; I&#8217;m more interested in being defined as a person than as a member of a corporation. I am also more interested in the idea of cooperatives, and have been working on it (<a id="un1g" title="see here for more on the New York Creative Cooperative" href="http://scratch.openideals.com/index.php/New_York_Creative_Cooperative" target="_blank">see here for more on the New York Creative Cooperative</a>).</p>
<p><strong>Tish Shute:</strong> You do a high percentage of non profit work. Are you still managing to keep the home fires burning in the economic downturn?</p>
<p><strong>Nathan Freitas:</strong> There is definitely profit to be made in non-profits, because even if you only get paid half of what you get for corporate work, it is worth it in terms of fulfillment, ego, respect, and general contribution back to the planet. However, I&#8217;ve also been investing time &amp; energy w/o pay into thinking about how causes can benefit from technology for over ten years. So it&#8217;s not just something you decide to do one day and suddenly are successful at.</p>
<p><strong>Tish Shute:</strong> What are some of the highlights of your non-profit work recently?<br />
<strong><br />
Nathan</strong>: Well, <a id="nywz" title="The Extraordinaries" href="http://www.theextraordinaries.org/about.html" target="_blank">The Extraordinaries</a> project is definitely a highlight. It is focused on a whole new approach to volunteering, and winning first prize at the <a href="http://wemedia.com/miami09/" target="_blank">WeMedia Conference</a> in the non-profit tech category was a great validation of the work. I am just a supporting engineer on the effort, which was founded by my good friend Ben Rigby (a longtime non-profit tech guy as well) and Jacob Colker.</p>
<p>Ben wrote this excellent book on mobile tech and organizing, <a id="lrfb" title="Mobilizing Generation 2.0" href="http://www.amazon.com/Mobilizing-Generation-2-0-Practical-Technologies/dp/0470227443" target="_blank">Mobilizing Generation 2.0</a>. He&#8217;s done a ton of mobile work with youth voters via his non-profit, <a id="u5yr" title="Mobile Voter" href="http://mobilevoter.org/about.html" target="_blank">Mobile Voter</a>.</p>
<p>The Extraordinaries is really taking all of our joint experience and putting it into a whole new system that is meant to go beyond generic email blasts that just ask you to &#8220;send a fax&#8221; or &#8220;send a link.&#8221; It gives people specific tasks they can accomplish on their phone, or in their local area using their phone.</p>
<p><strong>Tish: </strong>Did you do Twitter Vote Report with Ben too?</p>
<p><strong>Nathan:</strong> Oh, no, <a id="rkbs" title="Twitter Vote Report" href="http://twittervotereport.com/" target="_blank">Twitter Vote Report</a> was with a different group of folks&#8230;mostly east coast-based, organized by the <a id="z91u" title="TechPresident.com blog" href="http://techpresident.com/" target="_blank">TechPresident.com blog</a>. But Ben and I worked on SMS efforts for the 2004 election. We sent 40,000 messages out to SEIU labor members and MoveOn members&#8230; really the first time SMS was used in a wide-scale manner to help get out the vote on election day.</p>
<p><strong>Tish:</strong> Do you have a new mobilization project planned?</p>
<p><strong>Nathan:</strong> It&#8217;s all about The Extraordinaries right now. We&#8217;ve got a big launch coming in June, and are working actively to add more causes that can benefit from volunteers, and organizations that have volunteers but don&#8217;t know what to do with them.</p>
<p><strong>Tish:</strong> I was just looking at <a id="mg55" title="your post on Peek" href="http://openideals.com/?s=peek&amp;x=0&amp;y=0" target="_blank">your post on Peek</a> too.</p>
<p><strong>Nathan:</strong> Yeah&#8230; fortunately that is a completely &#8220;for profit&#8221; gig. But I like the company a lot, and think their spirit of providing access to email at a very low cost plays well with the non-profit world.</p>
<p><strong>Tish:</strong> So it isn&#8217;t just iPhone apps that are paying the bills?</p>
<p><strong>Nathan:</strong> Nope. iPhone is just an aspect. Everyone is so obsessed with it and how to strike it rich quick, but in the greater scheme of things, there is a huge ecosystem of mobility out there for you to find a niche in, if you are looking.</p>
<p><strong>Tish:</strong> Are you able to monetize your work on Android yet?</p>
<p><strong>Nathan:</strong> Here and there&#8230; we&#8217;re releasing some for-pay apps soon, and also including &#8220;free&#8221; Android ports in some high-profile iPhone apps we hope to have out soon. Some successful iPhone app developers are looking for people to port their apps to Android, as well.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/georeporter.jpg"><img class="alignnone size-medium wp-image-3358" title="georeporter" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/georeporter-145x300.jpg" alt="georeporter" width="145" height="300" /></a></p>
<p><a id="jjed" title="gReporter - opensource, geotagging, media capture report client" href="http://openideals.com/greporter/" target="_blank">gReporter &#8211; opensource, geotagging, media capture report client</a></p>
<p><strong>Tish: </strong>So what are your hopes for Android development in general and your gReporter app in particular?</p>
<p><strong>Nathan:</strong> I think Android represents right now what Linux on desktops did in &#8217;99 or &#8217;00. Though, as we all know, cycles of technology seem to speed up. There is huge interest in it at the academic level, and there is also a genuine interest in its use by non-profit/development agencies working around the globe.</p>
<p>You have to jump through hoops to get an unlocked, open iPhone without a contract. Android provides an alternative that acts more like a true platform, and not just a consumer product.</p>
<p><strong>Tish:</strong> At the moment the Android market place is only for free apps right?</p>
<p><strong>Nathan:</strong> No, it now supports paid apps. I just bought one today for $2.99.</p>
<p><strong>Tish:</strong> What did you buy?</p>
<p><strong>Nathan:</strong> An app that allows me to turn my G1 phone into a WiFi hotspot sharing my 3G connection to anyone who connects.</p>
<p><strong>Tish:</strong> So what are the most important aspects of Android in your view?</p>
<p><strong>Nathan:</strong> There are two sites that help demonstrate what is really going on with Android that makes it significant.</p>
<p>1) <a id="jr_o" title="Open Intents" href="http://www.openintents.org/en/intentstable" target="_blank">Open Intents</a> &#8211; this is the ecosystem of developers, all creating services and apps that interoperate, share data, and generally build a very rich Microsoft-style platform:<br />
except that all of these are open source and built by lots of small developers, not one big corporation.</p>
<p>2) <a id="zdqw" title="Android on HTC" href="http://www.androidonhtc.com/" target="_blank">Android on HTC</a> &#8211; this is the home for all the efforts to port Android to pre-existing HTC/XDA mobile phone hardware. You can see the status of ports here: http://wiki.xda-developers.com/index.php?pagename=Android_devices Imagine&#8230; taking an old Windows Mobile HTC phone, and then popping in an SD card that reformats it into a brand-new Android phone! For much of Asia, India and Africa, there is huge interest in this.</p>
<p><strong>Tish:</strong> Nice! You mentioned earlier that you are thinking of doing an SDK for the Android sensor APIs?</p>
<p><strong>Nathan: </strong>That would be part of the geo report app&#8230; expanding it to capture all sensing data and report that when you submit your text, photo or audio report. Right now it just detects your lat and lon, but there&#8217;s no reason it couldn&#8217;t also check your compass, altitude and whatever other data the device might offer.</p>
<p><strong>Tish</strong>: So what will your geo report do now?</p>
<p><strong>Nathan:</strong> It allows you to submit a text, photo or audio report, tagged with geo coordinates, timestamp, and basic user info (name, email, home location, etc.) to whatever server it is configured to use. It is the latest release of code used for the TwitterVoteReport and InaugurationReport efforts.</p>
<p>There is also just a lot to learn or use from the code itself, which is available at: http://github.com/natdefreitas/georeport-android</p>
<p>Lots of little lessons learned, packaged up into a functioning application.</p>
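<p>A minimal sketch of the report Nathan describes &#8211; a text, photo or audio item tagged with geo coordinates, a timestamp, and basic user info &#8211; might look like the following. This is an illustrative Python example, not the actual gReporter client (which is Java, at the GitHub link above); the <code>build_report</code> helper and its field names are hypothetical.</p>

```python
import json
import time

def build_report(kind, content, lat, lon, name, email, home):
    """Assemble a geotagged report: a text, photo or audio item tagged with
    coordinates, a timestamp, and basic user info. Field names are
    illustrative, not the real gReporter schema."""
    if kind not in ("text", "photo", "audio"):
        raise ValueError("unsupported report type: %s" % kind)
    return {
        "type": kind,
        "content": content,  # the text body, or a path to the media file
        "latitude": lat,
        "longitude": lon,
        "timestamp": int(time.time()),
        "reporter": {"name": name, "email": email, "home": home},
    }

# Serialize for submission to whatever server the client is configured to use.
report = build_report("text", "Long line at the polling place", 40.7128, -74.0060,
                      "Jane Doe", "jane@example.com", "Brooklyn, NY")
payload = json.dumps(report)
```

<p>The real client would then POST a payload like this over HTTP, alongside any attached media.</p>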
<p><strong>Tish:</strong> How many sensor APIs does android have?</p>
<p><strong>Nathan</strong>: http://developer.android.com/reference/android/hardware/SensorManager.html</p>
<p>int SENSOR_ACCELEROMETER A constant describing an accelerometer.<br />
int SENSOR_ALL A constant that includes all sensors<br />
int SENSOR_DELAY_FASTEST get sensor data as fast as possible<br />
int SENSOR_DELAY_GAME rate suitable for games<br />
int SENSOR_DELAY_NORMAL rate (default) suitable for screen orientation changes<br />
int SENSOR_DELAY_UI rate suitable for the user interface<br />
int SENSOR_LIGHT A constant describing an ambient light sensor Only the first value is defined for this sensor and it contains the ambient light measure in lux.<br />
int SENSOR_MAGNETIC_FIELD A constant describing a magnetic sensor See SensorListener for more details.<br />
int SENSOR_MAX Largest sensor ID<br />
int SENSOR_MIN Smallest sensor ID<br />
int SENSOR_ORIENTATION A constant describing an orientation sensor.<br />
int SENSOR_ORIENTATION_RAW A constant describing an orientation sensor.<br />
int SENSOR_PROXIMITY A constant describing a proximity sensor Only the first value is defined for this sensor and it contains the distance between the sensor and the object in meters (m)<br />
int SENSOR_STATUS_ACCURACY_HIGH This sensor is reporting data with maximum accuracy<br />
int SENSOR_STATUS_ACCURACY_LOW This sensor is reporting data with low accuracy, calibration with the environment is needed<br />
int SENSOR_STATUS_ACCURACY_MEDIUM This sensor is reporting data with an average level of accuracy, calibration with the environment may improve the readings<br />
int SENSOR_STATUS_UNRELIABLE The values returned by this sensor cannot be trusted, calibration is needed or the environment doesn&#8217;t allow readings<br />
int SENSOR_TEMPERATURE A constant describing a temperature sensor Only the first value is defined for this sensor and it contains the ambient temperature in degree centigrade.<br />
int SENSOR_TRICORDER A constant describing a Tricorder When this sensor is available and enabled, the device can be used as a fully functional Tricorder.<br />
float STANDARD_GRAVITY<br />
with a few easter eggs as well<br />
GRAVITY_DEATH_STAR_I<br />
SENSOR_TRICORDER<br />
 <img src="http://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_wink.gif" alt=";)" class="wp-smiley" /> </p>
<p><strong>Nathan</strong>: They are all in the API; however, there isn&#8217;t hardware to support all of them yet&#8230; for instance, TEMPERATURE is not yet supported,<br />
nor is LIGHT.<br />
<strong><br />
Tish:</strong> And errr, what is GRAVITY_DEATH_STAR_I?</p>
<p><strong>Nathan: </strong>It is a value representing the fictional gravity on the Death Star from Star Wars &#8211; geek humour.<br />
<strong><br />
Tish: </strong>That makes me think of <a id="t8:v" title="this great essay by Julian Bleeker, Design Fiction: A Short Essay on Design Science, Fact and Fiction" href="http://www.nearfuturelaboratory.com/2009/03/17/design-fiction-a-short-essay-on-design-science-fact-and-fiction/" target="_blank">this great essay by Julian Bleeker, Design Fiction: A Short Essay on Design Science, Fact and Fiction</a>:</p>
<p><strong>&#8220;When you trace the knots that link science, fact and fiction you see the fascinating crosstalk between and amongst ideas and their materialization. In the tracing you see the simultaneous knowledge-making activities, speculating and pondering and realizing that things are made only by force of the imagination. In the midst of the tangle, one begins to see that fact and fiction are productively indistinguishable.<em>&#8221;</em></strong><em><br />
</em><br />
Picture below is Nathan playing his dream ukulele &#8211; designed using the free, open-source <a href="http://www.inkscape.org/">Inkscape</a> vector drawing tool (see his <a href="http://www.thingiverse.com/thing:299">open-source Ukulele plans here</a>).<br />
See <a id="dqj2" title="Nathan's blog for the whole story" href="http://openideals.com/2009/03/27/open-source-ukulele-proto-uno-lazzzzored-ftw/" target="_blank">Nathan&#8217;s blog for the whole story</a> of how the Flying V Rockin&#8217; Ukulele design he posted to <a href="http://thingiverse.com/">Thingiverse</a> a few weeks ago, after being inspired by <a href="http://twitter.com/bre">Bre Pettis&#8217;</a> talk at ROFLThing, materialized at <a href="http://nycresistor.com/">NYC Resistor</a>, an &#8220;amazing workshop laboratory in Brooklyn where they let anyone come over and hang out at, to learn how to make, build and fabricate pretty much anything. They also have a <a href="http://www.nycresistor.com/laser/">laser</a> (aka &#8220;LAAAZZZOOOR&#8221;) which you can think of as an automagic thing cutter-outer!&#8221;</p>
<p>So this&#8230;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/lazoorukele.jpg"><img class="alignnone size-medium wp-image-3359" title="lazoorukele" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/lazoorukele-300x164.jpg" alt="lazoorukele" width="300" height="164" /></a></p>
<p>became this &#8230;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/nathanfreitasplayingukele.jpg"><img class="alignnone size-full wp-image-3360" title="nathanfreitasplayingukele" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/nathanfreitasplayingukele.jpg" alt="nathanfreitasplayingukele" width="240" height="180" /></a></p>
<p>Nathan and David presented <a id="oofs" title="Coovents" href="http://www.coovents.com/" target="_blank">Coovents</a> at NYTM &#8211; Mobile Meets Social. They had a large group of questioners surrounding them (see picture below). I talked to David after the presentation.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/new-yorktechmeetup.jpg"><img class="alignnone size-medium wp-image-3361" title="new-yorktechmeetup" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/new-yorktechmeetup-300x199.jpg" alt="new-yorktechmeetup" width="300" height="199" /></a></p>
<p>David Oliver was a software architect, user experience designer and product manager in the areas of mobile/wireless and electronic payment at IBM for over a decade. Most recently, he led the effort to productize a mobile client for IBM&#8217;s Lotus Connections enterprise social networking suite. As a software architect, David was often technical lead for IBM&#8217;s business partner relationships with mobile device manufacturers. Prior to IBM, David was co-founder of the Internet&#8217;s first &#8220;micropayments&#8221; company, Clickshare.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/david-oliver.jpg"><img class="alignnone size-medium wp-image-3362" title="david-oliver" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/david-oliver-227x300.jpg" alt="david-oliver" width="227" height="300" /></a></p>
<h3>Talking with David Oliver</h3>
<p><strong>Tish Shute: </strong>How are smartphones causing us to rethink what networked online relationships are all about?</p>
<p><strong>David Oliver: </strong>You know, these [mobile] devices are&#8230; for a long time we tried to pitch that we&#8217;re going to treat them like they&#8217;re PCs, or they&#8217;re just like anything else. But they&#8217;re really not. It may be the same coding style, but the way you think about using them is entirely different. And the way you think about your program. So if you use HTML, Java and that kind of stuff, yes, it&#8217;s the same code type, but the way you think about it is entirely different. And to me these little devices make what you said [<em><strong>relationships</strong></em> <em><strong>inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.</strong></em>] a lot more possible than a PC. Because with a PC you almost have to sit in front of it, and it controls you. But the device is so little and there&#8217;s almost no user interface by comparison. You&#8217;ve got to be very smart about how you build something so that it&#8217;s almost invisible. And of course that&#8217;s the beauty of the iPhone, Apple will tell you. The idea of ubiquitous computing. Ubiquitous what? Am I really computing? I don&#8217;t feel like I&#8217;m computing. I feel like I&#8217;m interacting or something.</p>
<p>I think Twitter is very cool. The real way it&#8217;s cool is that there&#8217;s no required client. You can access Twitter any way you want. You can imagine other ways to use it. TweetDeck happens to be a nice one for now. What I like about Twitter is, if you give it a tiny bit of thought, the Twitter network&#8217;s complete white noise, just like the internet itself. If you put a probe on the internet it&#8217;s all white noise, it&#8217;s all unordered packets. It makes no sense. So it&#8217;s cool that Twitter is at the level of little bitty conversations, but collectively all white noise. Totally meaningless white noise. There are some neat things going on, but I think we&#8217;ve barely seen the first of what you can do with Twitter.</p>
<p>The way I see it, it&#8217;s like instant messaging where you don&#8217;t instant message someone, you instant message the network, and there are listeners. So normally, in the old world of IM like AOL IM, I would say, &#8220;Tish, let&#8217;s talk,&#8221; and I kind of grab you. Then it&#8217;s a narrow pipe, you to me. You can add a few people in and make a little group, and that makes a bit of a closed network. But with Twitter you just talk into the air. As if I were standing over there and you had a Twitter client here, we could have the same interview. Because I would be watching you: &#8220;Oh, I see Tish&#8217;s question.&#8221; I&#8217;d be over there talking and you&#8217;d be picking me up over here. It&#8217;s like you&#8217;re talking into white noise, like at this bar. You choose to hear me; this guy is not choosing to hear me right now.</p>
<p><strong>Tish Shute:</strong> So what does Android bring to the party?</p>
<p><strong>David Oliver:</strong> They have the notion that you have a telephone platform that&#8217;s open, and that everybody can use. And it&#8217;s got a variety of sensor data &#8211; not just location but also accelerometer and compass and more. So in theory you can almost broadcast that data. It&#8217;s connected to a network. There are easy, open APIs to get at that data. But the question is, who are you going to broadcast it to, or who are you sending it to? What are they going to do with it? How are you going to control it, and make sure people don&#8217;t misuse it? As you heard with the services tonight, there&#8217;s a central kind of service necessary to filter and rebroadcast that stuff back out to places that need it, or can use it, or you want to have use it. I think the mobile device is only one piece of this. Nat and I always talk about how we do mobile applications, but a portion of it is on the server, coordinating with the people or the group or the central resource that brings all this data together.</p>
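<p>The central service David describes &#8211; one that filters and rebroadcasts device data to the places that can use it &#8211; can be sketched as a tiny publish/subscribe hub. This is an illustrative Python sketch of the architecture only; the <code>SensorHub</code> class, the device name, and the reading fields are invented for this example, and a real service would add authentication, persistence, and the privacy controls discussed here.</p>

```python
class SensorHub:
    """Minimal in-memory sketch of a central service that filters and
    rebroadcasts device readings to interested listeners."""

    def __init__(self):
        # Each subscriber is a (filter_fn, callback) pair.
        self.subscribers = []

    def subscribe(self, filter_fn, callback):
        """Register a listener; filter_fn decides which readings it receives."""
        self.subscribers.append((filter_fn, callback))

    def publish(self, reading):
        """Rebroadcast a reading to every subscriber whose filter accepts it."""
        for filter_fn, callback in self.subscribers:
            if filter_fn(reading):
                callback(reading)


hub = SensorHub()
received = []

# A listener that only wants location readings from one (hypothetical) device.
hub.subscribe(
    lambda r: r["sensor"] == "location" and r["device"] == "g1-nathan",
    received.append,
)

hub.publish({"device": "g1-nathan", "sensor": "location", "lat": 40.7, "lon": -74.0})
hub.publish({"device": "g1-nathan", "sensor": "accelerometer", "x": 0.1})
# Only the location reading reaches the listener; the rest is filtered out.
```

<p>The design choice worth noticing is that the filtering lives in the hub, not on the phone: the device broadcasts once, and the central service decides who hears what.</p>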
<p><strong>Tish Shute: </strong>There seem to be a lot of new location-based services &#8211; platforms to aggregate location-based data &#8211; being developed (e.g. <a id="lm5o" title="xtify" href="http://www.xtify.com/" target="_blank">xtify</a> and <a id="algg" title="viaplace" href="http://www.viaplace.com/" target="_blank">viaplace</a>). What do you think about the direction this development is going in?</p>
<p><strong>David Oliver:</strong> It&#8217;s not conventional wisdom, but it&#8217;s one of these things where, when a crowd of people does something, and that means people themselves are the service providers, when they all get together the net effect is greater than the individual effect would be. Pooling together makes more sense than doing it individually. It&#8217;s a little bit like an advanced version of having to have a password for every single site and managing your passwords. Location is the same way. If you had to give every single website that you enjoyed your location data, or tell them how to get it, what a huge pain. So they&#8217;re offering a way to do that in a more general sense. There are humongous privacy issues, though. Just like passwords. Would you really trust a place that held all your passwords centrally?</p>
<p>Even at the most basic level of calling, now that you can call from anywhere, people are largely getting into a mode where their mobile phone is them. It&#8217;s always with them. That&#8217;s how you reach me. Forget the home phone, the work phone; it&#8217;s just a mobile phone. You have an address attached to you, an address I can reach you at, that&#8217;s location independent. So there&#8217;s some beauty in that, and it&#8217;s very freeing. It makes your location unimportant: you can call me anywhere. You can text me anywhere, message me anywhere. You can be anonymous. My son told me something recently. &#8220;I love going to New York City because I can just walk around and nobody knows me. I&#8217;m completely anonymous. That&#8217;s the coolest thing,&#8221; he says. At one level that is a good thing, and a lot of good things can happen that way. But this new thing is sort of the flip side, where everybody knows your location. And we haven&#8217;t figured out if that&#8217;s a good thing yet. But we&#8217;re in the throes of that whole changeover happening. And we&#8217;ll see. There&#8217;ll be some misuse. I&#8217;m not an advertising guy, so the fact that everything&#8217;s got to be ad supported makes it potentially very creepy and very dangerous. So we&#8217;ll see how that evolves.</p>
<p>Is there any model where you can go &#8220;Oh this is just like &#8216;S&#8217;&#8221;? I don&#8217;t see where that&#8217;s possible. It&#8217;s a new world. Where you&#8217;re exposed all the time, potentially. And how do you figure out either as an individual or a larger group, society or whatever, when that works and when that doesn&#8217;t. And you know there&#8217;s going to be some mis-steps probably. But the tangibility creates some of these interesting opportunities, there are just some amazing things that could happen, really, really good things. But we&#8217;re not going to get there in one step.</p>
<p>One of the things that was really a killer for privacy and a killer for in some ways the internet, was during the dot com bust. Prior to the bust, there were web sites that you&#8217;d given your name and email, and they said &#8220;we promise to preserve this privacy.&#8221; But as soon as those companies went bankrupt, their email list was gold. It was value. And a bankruptcy judge, in a court in Delaware, created a legal basis to sell that data. Those things that were formerly private were no longer private &#8211; &#8220;no no no that&#8217;s got value. I&#8217;m going to sell it so the shareholders get their money.&#8221; So all these web sites who had lists of user names that they promised were private, became public information. That was one of the biggest blows to privacy in the history of the internet. That&#8217;s going to happen again and again. Like if <a href="http://www.meetmoi.com/welcome" target="_blank">MeetMoi</a> goes out of business the likelihood is all your shit&#8217;s going to get sold. I&#8217;m sorry it&#8217;s all going to be sold. It&#8217;s all a big joke. And that&#8217;s why central services are horrid, and I don&#8217;t like anything about a central service.</p>
<p>There are some pragmatic things about the way routing on networks actually works and the fact that the internet has gotten very centralized itself. The core ideas of the early internet which were essentially a survivable telecommunications network, remember it was the defense department that did the original internet? So the original idea of the original internet was survivability. The Russians could bomb the daylights out of the United States, territorial U.S. and we would still have a survivable network. That was the idea. And therefore all the nodes were dispersed and did not count on each other, and could reroute. Well now one company UUNET or whatever they are they own the whole thing. And you can look up all their locations on some internet database. 18 well placed bombs and the whole internet goes down. That&#8217;s what happens over time.</p>
<p>Well the whole cloud thing is also kind of a myth. It&#8217;s a very neat sounding term, and some aspects of it are different and new. Nate and I do a lot of cloud computing, it&#8217;s all on Amazon.</p>
<p>But we&#8217;ve always had that. That&#8217;s called time sharing. Strictly speaking it&#8217;s a thin contract accompanied by a much, much easier application programming interface. That&#8217;s what cloud computing is. It&#8217;s a very skinny contract. Timeshare was a huge contract. Literally it&#8217;s legalese and a little bit of API ease. It&#8217;s just timesharing. But at Amazon, and the other ones too, you&#8217;re not responsible for your node going down. If it goes down, they push it somewhere else automatically. Your disk goes down? You&#8217;re not responsible for backing up your disk; it&#8217;s already on 14 copies on 8 continents. They do that. So it&#8217;s a higher level of service. Nate and I have this thing called Slicehost. And we&#8217;ll probably build some services on it, and if they get popular, it&#8217;s like a vending machine. You just drop in a dime, they give you another slice. No contract at all. It is growth and learning from old ideas. Like this whole idea of software as a service. The company called ADP, Automatic Data Processing, basically, in short, does payroll for everybody. It&#8217;s software as a service. It&#8217;s been going on since 1952 or something. It&#8217;s more like a reconception using modern tools. It&#8217;s like virtual worlds are a different thing. That&#8217;s a whole different beast.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/04/04/do-well-by-doing-good-talking-experience-and-design-in-a-mobile-world-with-nathan-freitas-and-david-oliver/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>People Meet People Meet Big Data: ScienceSim Explores Collaborative High Performance Computing</title>
		<link>http://www.ugotrade.com/2009/02/11/people-meet-people-meet-big-data-sciencesim-explores-collaborative-high-performance-computing/</link>
		<comments>http://www.ugotrade.com/2009/02/11/people-meet-people-meet-big-data-sciencesim-explores-collaborative-high-performance-computing/#comments</comments>
		<pubDate>Wed, 11 Feb 2009 22:40:02 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[Intel in Virtual Worlds]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[nanotechnology]]></category>
		<category><![CDATA[Open Grid]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open protocols for virtual worlds]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Open Source Virtual Worlds]]></category>
		<category><![CDATA[open standards for virtual worlds]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[science outreach in virtual worlds]]></category>
		<category><![CDATA[scientific simulation in virtual worlds]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[virtual worlds in Japan]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[big data]]></category>
		<category><![CDATA[collaboration and big data]]></category>
		<category><![CDATA[collaborative visualization]]></category>
		<category><![CDATA[haptic interfaces for virtual worlds]]></category>
		<category><![CDATA[Hypergrid]]></category>
		<category><![CDATA[linked data]]></category>
		<category><![CDATA[modelling complex systems]]></category>
		<category><![CDATA[n-body simulation]]></category>
		<category><![CDATA[Piet Hut]]></category>
		<category><![CDATA[rapid data movement in virtual worlds]]></category>
		<category><![CDATA[ScienceSim]]></category>
		<category><![CDATA[scientific simulation]]></category>
		<category><![CDATA[steering big data simulations from virtual worlds]]></category>
		<category><![CDATA[steering virtual worlds with brain waves]]></category>
		<category><![CDATA[super computing conference]]></category>
		<category><![CDATA[supercomputing]]></category>
		<category><![CDATA[Wilf Pinfold]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=2855</guid>
		<description><![CDATA[Wilfred Pinfold, Director, Extreme Scale Programs for Intel, and the Supercomputing Conference general chair, is working with some Intel colleagues to make a project called ScienceSim the centerpiece of a special workshop event at the SC09 conference (see Supercomputing Conference, an ACM and IEEE Computer society sponsored event). Recently, I interviewed Wilf Pinfold (see interview [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gwave_lg.jpg"><img class="alignnone size-full wp-image-2861" title="gwave_lg" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gwave_lg.jpg" alt="gwave_lg" width="540" height="540" /></a></p>
<p>Wilfred Pinfold, Director, Extreme Scale Programs for Intel, and the<em> </em><em><a href="http://sc08.supercomputing.org/">Supercomputing Conference</a></em> general chair, is working with some Intel colleagues to make a project called <a href="http://www.sciencesim.com/">ScienceSim</a> the centerpiece of a special workshop event at the SC09 conference (<em>see </em><em><a href="http://sc08.supercomputing.org/">Supercomputing Conference</a>, an ACM and IEEE Computer society sponsored event)</em>.</p>
<p>Recently, I interviewed Wilf Pinfold (see interview below), Mic Bowman (also <a href="../../2008/09/15/interview-with-mic-bowman-intel-the-future-of-virtual-worlds/">see my previous interview here</a>), and John A. Hengeveld (see interview below). I wanted to find out: What are the underlying goals of this SC conference program? Why are members of the SC community being encouraged to participate with the ScienceSim environment? What projects are beginning to emerge? And what are Intel&#8217;s goals in giving infrastructure support to further the conversation between high performance computing and collaborative virtual worlds?</p>
<p>The vision of creating new ways to collaborate and interact with big data does seem to be one of the more significant steps we can take at a time when we find many of our most complex systems roiling and threatening total collapse. As Tim O&#8217;Reilly has pointed out &#8211; from financial markets to the climate, the complex systems we depend on for our survival seem to be reaching their limits.</p>
<p>But how can we get from the place we are now &#8211; <a href="http://www.youtube.com/watch?gl=GB&amp;hl=en-GB&amp;v=gM4fmL6dLdY" target="_blank">see this example of an n-body simulation in OpenSim</a> &#8211; to the point where we can collaboratively steer big data simulations of climate change, financial markets, or the depths of the universe from our visualizations? The picture opening this post is a:</p>
<blockquote><p><em>Frame from a 3D simulation of gravitational waves produced by merging black holes, representing the largest astrophysical calculation ever performed on a NASA supercomputer. The honeycomb structures are the contours of the strong gravitational field near the black holes. Credit: C. Henze, NASA</em></p></blockquote>
<p>Wilf Pinfold explained to me that part of the reason to begin a dialogue on collaborative visualization at SC &#8217;09 is that supercomputing communities (which tend to be highly skilled and visionary) have played key roles in internet development in the past. Wilf pointed out that key browser technology developed out of these communities in the early days of the internet &#8211; see <a href="http://en.wikipedia.org/wiki/Mosaic_(web_browser)" target="_blank">this wikipedia entry</a> for background on the role of NCSA (the National Center for Supercomputing Applications).</p>
<p>The hope is that, while there are many obstacles to overcome, the supercomputing community has both the skills and the motivation to find solutions to creating collaborative environments capable of the kind of rapid data movement that scientific/big data visualization needs. Solving the problems of realtime collaborative interaction with big data will have many ramifications for the way we understand virtual reality, the metaverse, and virtual worlds (all these terms are becoming increasingly inadequate for cyberspace in the age of ubiquitous computing, an argument I will make in another post!).</p>
<p>There have already been a number of blogs on ScienceSim (see <a href="http://www.virtualworldsnews.com/2008/11/intel-creating-sciencesim-on-opensim.html" target="_blank">Virtual World News</a>, <a href="http://nwn.blogs.com/nwn/2009/02/intel-outside-.html" target="_blank">New World Notes</a>, <a href="http://www.vintfalken.com/intel-using-opensim-for-immersive-science-project/" target="_blank">Vint Falken</a>, and <a href="http://daneel-ariantho.blogspot.com/2009/02/sciencesim.html" target="_blank">Daneel Ariantho</a>). There have also been Intel blogs &#8211; <a href="http://blogs.intel.com/research/2009/01/sciencesim.php" target="_blank">see this post</a> by John A. Hengeveld (a senior business strategist working with Intel planners and researchers to accelerate the adoption of Immersive Connected Experiences). And Intel CTO <a href="http://blogs.intel.com/research/2008/11/immersive_science.php" target="_blank">Justin Rattner&#8217;s post</a> announcing the project this November.</p>
<p>But to blow my own horn a little, I think I was the first to blog the encounter between <a href="http://opensimulator.org/">OpenSim</a> and Supercomputing (an encounter I to some degree provoked by making the introductions) &#8211; <a href="http://www.ugotrade.com/2008/07/19/astrophysics-in-virtual-worlds-implementing-n-body-simulations-in-opensim/" target="_blank">see this post</a>. So I have been following the ScienceSim initiative with great interest.</p>
<p>Very shortly after N-body astrophysicists Piet Hut and Jun Makino &#8211; creators of GRAPE (an acronym for &#8220;gravity pipeline&#8221; and an intended pun on the Apple line of computers), a supercomputer that will <a href="http://grape.mtk.nao.ac.jp/grape/news/ABC/ABC-cuttingedge000602.html" target="_blank">become one of the fastest supercomputers in the world (again)</a> &#8211; met <a href="http://www.genkii.com/" target="_blank">Genkii</a>, a Tokyo-based strategic company working with OpenSim, the first N-body simulation appeared in OpenSim. And in a matter of weeks <a href="http://www.youtube.com/watch?v=gM4fmL6dLdY" target="_blank">this video went up on YouTube</a> &#8211; the result of a collaboration between MICA and Genkii. But the nirvana of being able to create visualizations using real time data from supercomputers that can be steered from a collaborative environment is still a ways off.</p>
<p>Supercomputing communities tend to be geographically very dispersed, and researchers often find themselves far from simulation facilities, so there is both the motivation and the skill to pioneer new tools for collaborative visualization. Astrophysicist Piet Hut (who has some profound ideas on this) and others (<a href="http://www.ugotrade.com/2008/07/19/astrophysics-in-virtual-worlds-implementing-n-body-simulations-in-opensim/" target="_blank">see here for more</a>) have been pioneering the use of virtual worlds for collaboration. There are two virtual world organizations, both founded by <span class="nfakPe">Piet</span> Hut and collaborators, that are currently exploring the use of OpenSim for scientific visualizations. One is aimed specifically at astrophysics: MICA, the <a href="http://www.mica-vw.org/" target="_blank">Meta Institute for Computational Astrophysics</a>. The other is aimed broadly at interdisciplinary collaborations in and beyond science: <a href="http://www.kira.org/" target="_blank">Kira</a>, a 12-year-old organization focused on &#8216;science in context&#8217;. As of last week, there are two weekly workshops sponsored jointly by Kira and MICA that explore the use of OpenSim, ScienceSim, and other virtual worlds. One of them is <a href="http://www.kira.org/index.php?option=com_content&amp;task=view&amp;id=124&amp;Itemid=154" target="_blank">&#8220;Stellar Dynamics in a Virtual Universe Workshop&#8221;</a> and the other is <a href="http://www.kira.org/index.php?option=com_content&amp;task=view&amp;id=119&amp;Itemid=149" target="_blank">&#8220;ReLaM: Relocatable Laboratories in the Metaverse.&#8221;</a></p>
<p>MICA was founded two years ago by <span class="nfakPe">Piet</span> Hut within the virtual world of <a href="http://qwaq.com" target="_blank">Qwaq Forums</a> (see the paper <a href="http://arxiv.org/abs/0712.1655" target="_blank">&#8220;Virtual Laboratories and Virtual Worlds&#8221;</a>). The Kira Institute is much older: it was founded in 1997. Later this month, on February 24, Kira will celebrate its 12th anniversary with a presentation of talks, a panel discussion, and a series of workshops. See the <a href="http://www.kira.org/index.php?option=com_content&amp;task=view&amp;id=83&amp;Itemid=113" target="_blank">Kira Calendar</a> for the main event, and the Kira Japan branch for a <a href="http://www.kirajapan.org/event/" target="_blank">special mixed RL/SL</a> event in Tokyo. During both events, Junichi Ushiba will give a talk about his research in which <a href="http://nwn.blogs.com/nwn/2007/10/the-second-life.html" target="_blank">he let paralyzed patients steer avatars using only brain waves</a>.</p>
<p>Other early adopters of ScienceSim include Tom Murphy, who teaches computer science at Contra Costa College. Prior to teaching, Tom spent 35+ years working for supercomputer manufacturers. Tom said:</p>
<blockquote><p>It is very natural for me to find significantly new ways to visualize and interact with scientific mathematical models via ScienceSim and the OpenSim software behind it. ScienceSim also allows us to interact with each other and teach students in new ways.</p></blockquote>
<p>Also Charlie Peck, chair of the SC09 Education Program (his day job is teaching computer science at Earlham College in Richmond, IN), is working with Wilf Pinfold, Tom Murphy, and others &#8220;to explore how 3D Internet/metaverse technology can be used to support science education and outreach.&#8221;</p>
<p><a href="http://www.ics.uci.edu/~lopes/" target="_blank">Cristina Videira Lopes</a>, of the University of California, Irvine, is doing very interesting work on road and pedestrian traffic simulations. Crista is also the creator of the <a href="http://opensimulator.org/wiki/Hypergrid" target="_blank">hypergrid in OpenSim</a>.</p>
<h3>People Meet People Meet Data: A Conversation With Mic Bowman</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/sciencesim_002_thumb1.png"><img class="alignnone size-full wp-image-2908" title="sciencesim_002_thumb1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/sciencesim_002_thumb1.png" alt="sciencesim_002_thumb1" width="404" height="239" /></a><em></em><br />
<em>Screenshot of ScienceSim from <a href="http://daneel-ariantho.blogspot.com/2009/02/sciencesim.html" target="_blank">Daneel Ariantho</a></em></p>
<p><strong>Tish:</strong> How does this work on ScienceSim fit into a wider dialogue on linked data? Where people meet people meet data, and where data meets data?</p>
<p><em><strong>Mic:</strong> Yeah&#8230; that&#8217;s hard, by the way. Open integration of data (and, more interestingly, the functions on data) is very hard if it comes from multiple, independent sources.</em></p>
<p><em>That&#8217;s the people part. For example, if Crista builds a model of the UCI campus, somebody else builds accurate models of several cars, and another expert provides the simulation that computes the pollution generated by those cars in that environment&#8230; it&#8217;s bringing people together to solve real problems, no matter how far apart they are physically.</em></p>
<p><strong>Tish:</strong> You mention three different simulations here. Could you explain why it is difficult to integrate data from multiple sources?</p>
<p><em><strong>Mic:</strong> Integrating data from multiple sources has always been a problem of understanding &amp; interpreting both the syntax &amp; the semantics of the data. Even relatively simple things like multiple date formats require explicit translation. More complex formats, like the many formats in which urban planning data is represented, are barely computable independently, let alone in conjunction with data from other sources (each with its own representation). It&#8217;s often the expertise &amp; the collaboration of bringing people (and their bags of tools) together that solves these problems.</em></p>
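<p>Mic&#8217;s date-format point is easy to illustrate. A small sketch in Python (the format list is invented for illustration): each source&#8217;s syntax has to be explicitly translated into one shared representation before any joint computation is possible.</p>

```python
from datetime import datetime

# Formats we might encounter when merging records from different sources
# (the list itself is illustrative, not from any particular dataset).
KNOWN_FORMATS = [
    "%Y-%m-%d",    # 2009-02-11  (ISO 8601)
    "%m/%d/%Y",    # 02/11/2009  (US convention)
    "%d.%m.%Y",    # 11.02.2009  (European convention)
    "%B %d, %Y",   # February 11, 2009
]

def normalize_date(raw: str) -> str:
    """Translate any known format into ISO 8601, the shared representation."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {raw!r}")

print(normalize_date("02/11/2009"))        # → 2009-02-11
print(normalize_date("February 11, 2009")) # → 2009-02-11
```

Note that the sketch only solves the syntax half of the problem: a string like &#8220;02/11/2009&#8221; is February 11 under the US reading but November 2 under the European one, and no parser can settle that without knowing the source&#8217;s conventions &#8211; exactly the semantics problem Mic describes.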
<p><strong>Tish:</strong> And in this case the bag of tools is high performance modeling?</p>
<p><em><strong>Mic:</strong> High performance modeling, rich visualizations, and data. It&#8217;s the three that matter&#8230; data, function, and interface.</em></p>
<p><strong>Tish:</strong> Some people have a very hard time wrapping their heads around the fact that anything that seems related to Second Life can do this. Can you explain more about the difference between SL and OpenSim?</p>
<p><em><strong>Mic:</strong> OpenSim potentially improves data &amp; function because it can be extended through region modules. Region modules hook directly into the simulator to provide additional functionality. For example, a region module could be implemented to drive the behavior of objects in a virtual world based on a protein folding model.</em></p>
<p><em>We need to work on additional viewer capabilities to address the user interface limitations.</em></p>
<p><strong>Tish:</strong> Yes, Rob Smart&#8217;s (IBM) recent data integrations with OpenSim (<a href="http://robsmart.co.uk/2009/01/22/visualizing-live-shipping-data-in-opensim-isle-of-wight-ferries/" target="_blank">see here</a>) are impressive. Re viewers, one of the biggest objections to virtual worlds is the mouse-pushing, PC-tied interface.</p>
<p><em><strong>Mic:</strong> There are great opportunities for improving the interface.</em></p>
<p><strong>Tish:</strong> Yes, I really like where Andy Piper&#8217;s experiments with haptic interfaces for OpenSim lead &#8211; <a href="http://andypiper.wordpress.com/2009/02/06/haptic-user-interfaces/" target="_blank">see Haptic Fantastic</a>! And I think that we will have cyberspace ubiquitous in our environment, not just stuck on a PC screen, sooner than we think.</p>
<p><em><strong>Mic:</strong> Mic&#8217;s opinion (not Intel&#8217;s): until we get souped-up sunglasses with HD screens embedded (or writing directly onto the eye) there will always be a role for the PC/console/TV. But it isn&#8217;t about the device&#8230; it&#8217;s about the services projected through the device&#8230; sometimes you&#8217;ll want a very rich experience&#8230; sometimes you&#8217;ll want an experience NOW, wherever you are.</em></p>
<p><strong>Tish:</strong> I think people are only just realizing that VWs will be a now and wherever you are experience very soon.</p>
<p><em><strong>Mic:</strong> That&#8217;s the critical observation: the virtual world is not an application you run&#8230; it&#8217;s a &#8220;place&#8221;&#8230; and you interact with it where you are, or maybe interact through it. Speaking for Intel&#8230; it is the spectrum of experiences that is critical to support.</em></p>
<h3>Interview with Wilfred Pinfold</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gustav_h.jpg"><img class="alignnone size-full wp-image-2860" title="gustav_h" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gustav_h.jpg" alt="gustav_h" width="416" height="200" /></a></p>
<p><em>Picture from National Science Foundation &#8211; <a href="http://www.nsf.gov/news/news_summ.jsp?cntn_id=112166" target="_blank">&#8220;Climate Computer Modeling Heats Up.&#8221;</a></em></p>
<p><strong>Tish Shute:</strong> I know your day job for Intel is in high performance computing. Could you explain a little more about what you are working on in this regard &#8211; a mini state of play for high performance computing from your perspective?</p>
<p><em><strong>Wilfred Pinfold:</strong> My title is Director, Extreme Scale Programs. This program drives a research agenda that will put in place the technologies required to build Exa (10^18) scale systems by 2015. The current generation of high performance computers is Peta (10^15) scale, so this is a 1000x increase in performance, and it will require significant improvements in power efficiency, reliability, and scalability, and new techniques for dealing with locality and parallelism.</em></p>
<p><strong>Tish:</strong> The nirvana in terms of linking supercomputers to the collaborative spaces of immersive virtual worlds is to be able to create visualizations using real-time data from supercomputers in collaborative VW environments, and ultimately for researchers to be able to collaborate and steer their simulations from their visualizations. Where are we now in terms of scientific data visualization in VWs? And what are the current obstacles to using real-time data from supercomputers?</p>
<p><em><strong>Wilf: </strong>Being able to steer a simulation from a visualization requires both a visualization interface that allows interaction and a simulation that operates at a speed that is responsive in interactive timeframes. For example, a weather model that predicts the path of a hurricane would need to operate at something close to 1000x real time. This would run through a day in ~1.5 minutes, allowing an operator to run the simulation over several days multiple times with different parameters in a single sitting, to understand the likelihood of certain outcomes.</em></p>
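<p>Wilf&#8217;s numbers check out: at 1000x real time, one simulated day takes 1440/1000, about 1.44 wall-clock minutes. A quick back-of-envelope (the five-day, ten-run scenario is my own illustrative assumption, not Wilf&#8217;s):</p>

```python
SPEEDUP = 1000                # simulation runs 1000x faster than real time
minutes_per_day = 24 * 60     # 1440 wall-clock minutes in one simulated day

# One simulated day at 1000x: about 1.44 minutes, Wilf's "~1.5".
wall_minutes_per_day = minutes_per_day / SPEEDUP

# A hypothetical five-day forecast, re-run 10 times with different
# parameters, would fit in a single sitting:
sitting_minutes = 5 * wall_minutes_per_day * 10  # 72 minutes
```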
<p><strong>Tish:</strong> Do you see a networked, online, collaborative virtual world being capable of serving as a visualization interface that allows meaningful interaction with the hurricane scenario you describe in the near future (the next 6 to 18 months)?</p>
<p><em><strong>Wilf: </strong>I was using the hurricane example to explain the usage model, not an imminent capability. Hurricane simulation: accurate hurricane simulations require multiscale models able to resolve the global forces working on the storm as well as the microforces that define precipitation. We can build useful weather models today that run faster than real time (anything slower is not useful for prediction), but we are a long way from the ideal.<br />
Visualization: there are excellent visualizations of weather systems, but I have not yet seen a virtual world that can track a simulation and allow a scientist, or team of scientists, to see what is going on at the macro scale and also zoom in to see precipitation conditions. Today&#8217;s supercomputers are much better at this than they were a few years ago, but they are a long way from ideal.</em></p>
<p><strong>Tish:</strong> Open source virtual world technologies are pretty diverse in their approaches: Croquet, Sun&#8217;s Wonderland, and OpenSim are quite different and have different strengths and weaknesses. As you have become more familiar with OpenSim, what have you found about the technology that particularly lends itself to this project &#8211; ScienceSim? (Mic mentioned Crista&#8217;s hypergrid code, for example; modularity is another feature often cited.)</p>
<p><em><strong>Wilf: </strong>We have found that OpenSim&#8217;s client-server model is well suited to the visualization model, and the ability to put the server next to the supercomputer producing the visualization data is critical. We are however very interested in other environments and encourage papers, demonstrations and research on any of these platforms at the conference.</em></p>
<h3>Interview with John A. Hengeveld</h3>
<p><strong>Tish Shute:</strong> OpenSim&#8217;s dependence on Second Life-based viewers is sometimes cited as a limitation, and sometimes as a strength. What are your views on this? What would a strong open viewer project directed at science applications bring to the picture?</p>
<p><em><strong>John Hengeveld:</strong> There may be more than one strong open viewer project required for OpenSim-compatible experiences. The strength of the Hippo viewer, for example, is availability, and its weakness is the size of the client. We would love a ubiquitous client that runs on all platforms, but each hardware platform brings tradeoffs and restrictions of its own. Today, probably all of the folks innovating in the space can deal with the size of a very fat rich client app&#8230; they have big computers anyway. But as we get into more 3D entertainment and augmented reality applications &#8211; virtual malls, collaboration apps, etc. &#8211; there is a great deal of room to optimize for the specific experience. Balancing visual experience with the bandwidth and compute performance available, tying into standard browsers, etc.&#8230; people have done some of this work, and I think all of it adds to the usefulness of these worlds.</em></p>
<p><strong>Tish:</strong> Integrating high-end game engines and OpenSim opens up new possibilities. But licensing issues have been an obstacle. Could a project like ScienceSim get a non-commercial license on a high-end game engine? What would that bring to the picture?</p>
<p><em><strong>John: </strong>Anything is possible. Game engines can give a great deal of design power for high value experiences, but the programming of these experiences must be simplified. Mainstream adoption in the enterprise can&#8217;t be premised on the programming model of studio games&#8230; that&#8217;s a big step to get over, I think. There are very interesting possibilities when we take that step, though. Simulation, training, agents of various types (I just finished watching &#8220;The Matrix&#8221; for like the billionth time&#8230; I think agents are cool&#8230;).</em></p>
<p><strong>Tish:</strong> Where does Larrabee fit into the picture of ScienceSim and next generation virtual worlds?</p>
<p><em><strong>John:</strong> We are all very excited about the Larrabee architecture and its application to workloads like next generation virtual worlds, both in the client &#8211; delivering immersive reality &#8211; and someday potentially in a distributed architecture simulating and producing these worlds. For Intel, CVC is an all play. Atom will be used in strong mobile clients. Core will be used in enterprise PCs, laptops, and desktops. Xeon will be simulating these environments and handling the data communication, and whatever we brand Larrabee will be enabling compelling visual experiences. Oh, and our software products (Havok, tools, and others) will be building blocks in knitting all this together. Larrabee is a part, but there are a lot of other pieces in our vision&#8230;</em></p>
<p><strong>Tish:</strong> If the kind of rapid data movement that scientific visualization needs is achieved in virtual worlds, this will be quite a game changer for business applications of VWs too. It will also blur the boundaries between what we call virtual worlds and mirror worlds. It seems to me this kind of rapid data movement is a vital step towards what Mic described to me as Intel&#8217;s vision of CVC: &#8220;Connected Visual Computing is the union of three application domains: MMOG, metaverse, and paraverse (or augmented reality).&#8221; It almost seems to me that if you achieve your goals for ScienceSim you will change how we think about virtual worlds in general. What do you think?</p>
<p><em><strong>John:</strong> I certainly hope so&#8230; Part of our goal is to stimulate innovation in the technology and usage models that will enable broad mainstream adoption of CVC-based applications (what we categorize as immersive connected experiences). By tackling the scientific visualization problem, we hope to find the key technology barriers and encourage the ecosystem to solve them.</em></p>
<p><strong>Tish: </strong>To me, virtual worlds and augmented reality should be complementary and connected experiences. How do you see this connection evolving?</p>
<p><em><strong>John:</strong> We certainly see them as related. In the long term, there are many common building blocks, but they aren&#8217;t united per se. It&#8217;s about the user experience, and in some usages the two are almost identical&#8230; in some, they don&#8217;t look or feel at all alike&#8230; the viewer is distinct by a lot. Our approach is to enable building blocks from which people can quickly build out usages that are robust.</em></p>
<p><strong>Tish: </strong>What is Intel&#8217;s vision for ubiquitous mobile computing and an internet of objects? How can high performance computing be an enabler for this vision?</p>
<p><em><strong>John: </strong>Mobile computing is a central part of our life, culture, and community in economically enabled economies. It feeds the data of our decisions, it connects us to entertainment, it is the access point to our soapboxes, pulpits, economy, and families. This creates a massive increase in data, a massive increase in interactions, transactions, and visualizations. While many HPC applications will be behind the scenes (finance, health, energy, visual analytics, and others), HPC will emerge as part of a scale solution to serving some of this increase&#8230; particularly that part where interactions and visualizations are complex or compelling, or where scale enables the usage per se. I talked about my love of agents earlier, and some of that comes in here: compute working behind the scenes to help manage the data complexity and manage some of the base interactions between ourselves and technology. The other thing we talk about internally is the &#8220;Hannah Montana usage,&#8221; where millions of people use their mobile devices to access and participate (using the sensors in the device) in an interactive live concert. When Miley hears the applause of a virtual interactive audience&#8230; and can scream back at them&#8230; we&#8217;re there. Access to ubiquitous compute will be mobile, and interactive experiences will be complex&#8230; and HPC can help make that real. Watch out for the mental trap that HPC is always high-end supercompute clusters, though&#8230; &#8220;mainstream HPC&#8221; &#8211; smaller clusters, high thread counts, etc. &#8211; will play a key part in all of this as well.</em></p>
<p>Interesting that John ended on this point, as this just came in from <a href="http://blog.wired.com/gadgets/2009/02/intel-fights-re.html" target="_blank">Wired</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/02/11/people-meet-people-meet-big-data-sciencesim-explores-collaborative-high-performance-computing/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
	</channel>
</rss>
