<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; ubiquitous computing</title>
	<atom:link href="https://www.ugotrade.com/tag/ubiquitous-computing/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Shaping Play with Connected Stuff: IoToaster a prize winner in the YCombinator Upverter Hackathon!</title>
		<link>https://www.ugotrade.com/2013/03/10/shaping-play-with-connected-stuff-iotoaster-a-prize-winner-in-the-ycombinator-upverter-hackathon/</link>
		<comments>https://www.ugotrade.com/2013/03/10/shaping-play-with-connected-stuff-iotoaster-a-prize-winner-in-the-ycombinator-upverter-hackathon/#comments</comments>
		<pubDate>Sun, 10 Mar 2013 01:00:29 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Ambient Findability]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Data]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[GeoFencing]]></category>
		<category><![CDATA[GeoMessaging]]></category>
		<category><![CDATA[Hadoop]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[New Interfaces]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Adam Wilson]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[augmented reality eyewear]]></category>
		<category><![CDATA[Connected Stuff]]></category>
		<category><![CDATA[Dave Bisceglia]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[hardware startups]]></category>
		<category><![CDATA[Parsing Reality]]></category>
		<category><![CDATA[Parsing Reality Shaping Play with Connected Stuff]]></category>
		<category><![CDATA[Phu Nguyen]]></category>
		<category><![CDATA[robotic gaming systems]]></category>
		<category><![CDATA[robots]]></category>
		<category><![CDATA[robots and play]]></category>
		<category><![CDATA[romotive. orbotix]]></category>
		<category><![CDATA[smart phones and robots]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[social games]]></category>
		<category><![CDATA[SXSW]]></category>
		<category><![CDATA[SXSW interactive]]></category>
		<category><![CDATA[the future of play]]></category>
		<category><![CDATA[The Tap Lab]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[YCombinator]]></category>
		<category><![CDATA[YCombinator Upverter Hackathon]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=6580</guid>
		<description><![CDATA[We had so much fun at the YCombinator Upverter Hackathon. I was honored to be part of &#8220;the beatles&#8221; team (Sam Cuttriss, Josh Cardenas, Jason Appelbaum, Lauren Elliott, Tish Shute, Otto Leichliter III &#38; IV) that produced the prize-winning IoToaster. Rick Merritt did an awesome write-up in EE Times, Slideshow: Y Combinator hackathon&#8217;s [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>We had so much fun at the <a href="http://upverter.com/hackathons/yc-hackathon-2013/" target="_blank">YCombinator Upverter Hackathon</a>. I was honored to be part of &#8220;the beatles&#8221; team (Sam Cuttriss, Josh Cardenas, Jason Appelbaum, Lauren Elliott, Tish Shute, Otto Leichliter III &amp; IV) that produced the prize-winning IoToaster. Rick Merritt did an awesome write-up in EE Times, <a href="http://www.eetimes.com/electronics-news/4408238/Slideshow--Toaster-burns-in-Instagrams-at-hackathon?pageNumber=0" target="_blank">Slideshow: Y Combinator hackathon&#8217;s prize-winning designs</a>. If you want to hear more about hardware startups shaping play with connected stuff, I hope you will stop by <a href="http://schedule.sxsw.com/2013/events/event_IAP5412" target="_blank">Parsing Reality: Shaping Play with Connected Stuff</a>, Tuesday March 12th, 12:30pm&#8211;1:30pm, Radisson Town Lake Ballroom, Austin, SXSW 2013. I&#8217;m delighted to join Adam Wilson, Founder and Chief Software Architect of <a href="https://www.gosphero.com/company/" target="_blank">Orbotix</a>; Dave Bisceglia, Co-Founder &amp; CEO of <a href="http://thetaplab.com/" target="_blank">The Tap Lab</a>; and Phu Nguyen, Founder of <a href="http://romotive.com/" target="_blank">Romotive Inc</a>, to talk about shaping play with connected stuff &#8211; <a href="http://schedule.sxsw.com/2013/events/event_IAP5412" target="_blank">more details here.</a></p>
<p>Meanwhile enjoy Rick Merritt&#8217;s great write up of IoToaster (<a href="http://www.eetimes.com/electronics-news/4408238/Slideshow--Toaster-burns-in-Instagrams-at-hackathon?pageNumber=0" target="_blank">reprinted from EE Times</a>).</p>
<blockquote>
<h2><span style="font-weight: normal;">&#8220;Y Combinator hackathon&#8217;s prize-winning designs&#8221;</span></h2>
</blockquote>
<p>&#8220;An Internet Toaster, two pairs of faux Google glasses and two novel electronic gloves emerged from a hackathon organized by Upverter and hosted by Y Combinator. <span style="font-family: Arial;">SAN JOSE, Calif. &#8211; Imagine sending an Instagram to your Internet toaster and printing it&#8212;on whole wheat or white bread. Imagine creating your own vision for a variant of Google&#8217;s Project Glass.</span></p>
<p>Those were among the 32 projects from more than 130 designers at a recent all-day event organized by Upverter.com and hosted by Y Combinator, a startup incubator in Mountain View, Calif.</p>
<p>Winners took home iPads, Pebble watches, Arduino kits and Raspberry Pi boards after dedicating about 10 hours of their Saturday to hacking on their best ideas. Some took with them hopes of products that could make it to the market or new-formed teams that could be the heart of a new startup. Others just had a good time.</p>
<p>Here&#8217;s a look at some of the winners.</p>
<div><img src="http://m.eet.com/media/1179469/1%20glasses%20with%20woman.jpg" alt="" /></div>
<div>
<p><strong><span style="font-family: Arial;">Two teams worked on variants of Google&#8217;s $1,500 glasses-mounted computer. One team (above) used laser-cut medium-density fibreboard and embedded LEDs that could indicate when the wearer faced north. Another team (below) created Prism, a more thorough knock-off of Google&#8217;s concept complete with an embedded display and gesture recognition.</span></strong></p>
<p><strong> </strong><strong><img src="http://m.eet.com/media/1179470/1%20thanh%20with%20glasses%20x%20420.jpg" alt="" /><br />
</strong></p>
</div>
<div><span style="font-family: Arial;"><strong>Photos courtesy of Kuy Mainwaring and Sam Wurzel of Octopart.</strong></span></div>
<p><strong>Printing on whole wheat or white</strong></p>
<div><img src="http://m.eet.com/media/1179471/1%20toast.jpg" alt="" /></div>
<p>The IO Toaster (above) is sort of the Reese&#8217;s Peanut Butter Cup of social electronics. It&#8217;s an Internet-connected combo toaster/printer that creators say can &#8220;bring the cloud to your breakfast.&#8221;</p>
<p>The team adapted code from an LED matrix to control heat transmission down to the pixel level. They hope to present the device at the Augmented World Expo at SXSW as well as at other hackathons and hardware meetups.</p>
<p>The team included Sam Cuttriss, Josh Cardenas, Tish Shute, Lauren Elliott, Jason Appelbaum and both Otto Leichliter III and IV.</p>
<div><img src="http://m.eet.com/media/1179472/1%20toaster%20engineer.jpg" alt="" /></div>
<p><strong>Peripherals and apps for the IO Toaster</strong></p>
<div><img src="http://m.eet.com/media/1179473/1%20toast%20face%20x%20420.jpg" alt="" /></div>
<p>The potential for the IO Toaster is great, said team members who brainstormed spin off products including:</p>
<ul>
<li>FaceToast: Your friends&#8217; Facebook status messages pop up automatically at breakfast.</li>
<li>Instagram Toast: Patented sepia tone filters add artistic textures to photos (above). Too grainy?</li>
<li>Toasted, Augmented Reality: Toast revitalizes boring QR codes (below).</li>
<li>Pop Tweets: Twitter toaster pastries. Follow your favorite fruit flavor.</li>
<li>FlipToast: Create an edible FlipBook with a carb-hinge technology in development.</li>
<li>Angry Toast: A hyper sling and gimbal add-on hurls slices at kids trying to leave for school without breakfast.</li>
</ul>
<p><img src="http://m.eet.com/media/1179474/1%20toast%20q%20code%20x%20420.jpg" alt="" /><br />
<strong>Touch screen toaster displays</strong><br />
<iframe width="640" height="360" src="http://www.youtube.com/embed/OOSM8y7vuvA?feature=player_embedded" frameborder="0" allowfullscreen></iframe><br />
Designers of the IO Toaster created this animation to show the romantic possibilities of their product.</p>
<p><strong>Grand prize was a real grabber</strong></p>
<div><img src="http://m.eet.com/media/1179475/1%20hand%20thing.jpg" alt="" /></div>
<div><strong>The Tactilus is a haptic feedback glove for interacting with 3-D environments. A series of cables applies pressure to the wearer&#8217;s fingers to resist their motion in response to pushing against a virtual object.</strong></div>
<div><img src="http://m.eet.com/media/1179476/1%20hand%20thing%202.jpg" alt="" /></div>
<p><strong>Meet the Tactilus team</strong></p>
<div><img src="http://m.eet.com/media/1179477/1%20tactilous%20team.jpg" alt="" /></div>
<div><strong>Jack Minardy had the idea to create a haptic glove. Five strangers who stopped by his table and liked the idea became a virtual team for the day, bringing Tactilus to life. They are (from left) Matt Bigarani, Nick Bergseng, Jack Minardy, Neal Mueller and Tom Sherlock. Not pictured: Oren Bennett.</strong></div>
<p><strong>Fitness glove has something up its sleeve</strong></p>
<div><img src="http://m.eet.com/media/1179478/1%20glove.jpg" alt="" /></div>
<div><strong>The Body API is a comprehensive metric-gathering device that gives the sports enthusiast a big data boost.</strong></div>
<p><strong>Baby gets a robo rocker</strong></p>
<div><img src="http://m.eet.com/media/1179479/1%20rocker.jpg" alt="" /></div>
<div><strong>One team prototyped its invention for an automatic baby rocker using an electric can opener. Parents can control it via a mobile app.<br />
</strong></div>
<p><strong>And other winners were&#8230; </strong><br />
At the end of the day, 30 groups took two minutes each to pitch their hack (below), some of which judges pitched into the circular file. A handful of others got various levels of recognition.</p>
<p>The winner in the most marketable category was the DIYNot, a plug that fits between your recharging device and the socket to turn off the two-amp energy flow anytime you want. The Window Blind Controller, a clip-on device that keeps streetlight out at night and lets sunlight in during the day, got a nod from judges.</p>
<p>Judges also liked the Walkmen, an ultrasound virtual walking stick with haptic feedback for guiding disabled people. A team from Electric Imp got the Corporate Shill Award for a networked dispenser that spits out M&amp;Ms in response to tweets. Another group added Wi-Fi links to home switches, opening a circuit for new kinds of remote controls&#8212;and pranks.</p>
<div><img src="http://m.eet.com/media/1179480/1%20presentations.jpg" alt="" /></div>
<p><strong>From here to China and back</strong></p>
<div><img src="http://m.eet.com/media/1179481/1%20zak%20and%20matt.jpg" alt="" /></div>
<div><strong>Zack Hormuth of Upverter.com (left), organizer for the event, helps hacker Matt Sarnoff. Upverter <a href="http://www.eetimes.com/electronics-news/4405202/Slideshow--Hangin--at-a-hardware-hackathon">led a hackathon</a> at Facebook&#8217;s Open Compute Summit. It also has hackathons in the works for New York City and Shenzhen.&#8221;</strong></div>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2013/03/10/shaping-play-with-connected-stuff-iotoaster-a-prize-winner-in-the-ycombinator-upverter-hackathon/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Interview with Vernor Vinge: Smart phones and Empowering Aspects of Social Networks &amp; Augmented Reality Still Massively Underhyped</title>
		<link>https://www.ugotrade.com/2011/05/10/interview-with-vernor-vinge-smart-phones-and-the-empowering-aspects-of-social-networks-augmented-reality-are-still-massively-underhyped/</link>
		<comments>https://www.ugotrade.com/2011/05/10/interview-with-vernor-vinge-smart-phones-and-the-empowering-aspects-of-social-networks-augmented-reality-are-still-massively-underhyped/#comments</comments>
		<pubDate>Tue, 10 May 2011 18:21:06 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Ambient Findability]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Artificial Life]]></category>
		<category><![CDATA[Augmented Data]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[evolutionary technologies]]></category>
		<category><![CDATA[GeoFencing]]></category>
		<category><![CDATA[GeoMessaging]]></category>
		<category><![CDATA[gestural interface]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[ipad]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[New Interfaces]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Open Data]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[technological singularity]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[A Fire Upon the Deep]]></category>
		<category><![CDATA[AR Vision]]></category>
		<category><![CDATA[augmented cognition]]></category>
		<category><![CDATA[Augmented Reality Contact Lenses]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality eyewear]]></category>
		<category><![CDATA[augmented reality social networks]]></category>
		<category><![CDATA[augmented social experiences]]></category>
		<category><![CDATA[augmented vision]]></category>
		<category><![CDATA[bottom up social networking]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Daemon]]></category>
		<category><![CDATA[Daniel Suarez]]></category>
		<category><![CDATA[digital gaia]]></category>
		<category><![CDATA[Fast Times at Fairmont High]]></category>
		<category><![CDATA[Freedom (TM)]]></category>
		<category><![CDATA[HUDs]]></category>
		<category><![CDATA[intelligence amplification]]></category>
		<category><![CDATA[Maneki Neko]]></category>
		<category><![CDATA[Rainbows End]]></category>
		<category><![CDATA[smart phones]]></category>
		<category><![CDATA[The Singularity]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Vernor Vinge]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[wearable computing]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=6277</guid>
		<description><![CDATA[Interview with Vernor Vinge Tish Shute: Many of the pioneers of the emerging AR industry who will be speaking at, and attending, Augmented Reality Event consider &#8220;Rainbows End&#8221; one of their key inspirations. [Note: If you want to attend ARE2011, readers of this post can use my discount code TISH295 ($295 for two days, or [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/04/Screen-shot-2011-04-13-at-12.51.38-PM.png"><img class="alignnone size-medium wp-image-6200" title="Screen shot 2011-04-13 at 12.51.38 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/04/Screen-shot-2011-04-13-at-12.51.38-PM-200x300.png" alt="" width="200" height="300" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/05/VernorVinge_RainbowsEnd.jpg"><img class="alignnone size-medium wp-image-6314" title="VernorVinge_RainbowsEnd" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/05/VernorVinge_RainbowsEnd-196x300.jpg" alt="" width="196" height="300" /></a></p>
<h3>Interview with Vernor Vinge</h3>
<p><strong>Tish Shute: </strong> Many of the pioneers of the emerging AR industry who will be speaking at, and attending, <a href="http://augmentedrealityevent.com/" target="_blank">Augmented Reality Event</a> consider <a href="http://www.amazon.com/Rainbows-End-Novel-Foot-Future/dp/0312856849" target="_blank">&#8220;Rainbows End&#8221;</a> one of their key inspirations. [Note: If you want to attend ARE2011, readers of this post can use my discount code <a href="http://augmentedrealityevent.com/register/" target="_blank">TISH295</a> ($295 for two days, or for one day only <a href="http://augmentedrealityevent.com/register/" target="_blank">TISH1DAY11</a> for $149).]</p>
<p>What is the best and worst, in your view, about the way Augmented Reality is emerging from science fiction into science fact?</p>
<p><strong>Vernor Vinge:</strong> <strong>Progress that sets the stage:<br />
The worldwide market penetration of cellphones in the era 2000-2010 was of a size and speed that would have counted as foolish implausibility even in science-fiction of earlier times. More than half the human race suddenly had access to knowledge and comms. Being in the middle of this firestorm of progress, we can&#8217;t really judge ultimate effects, but I expect that smart phones and the empowering aspects of social networks and AR are still massively underhyped. (This is not to say that individual innovation enterprises can&#8217;t fail; the treasure is there for those who dare, and ultimately the whole human race can benefit.)</strong></p>
<p><strong>But I can still whine:<br />
Some &#8212; mostly political/legal &#8212; issues are disappointing. These affect AR but also the broad range of our progress with technology:<br />
o Software patents and some styles of cloud computing are blunting the ability of average people to innovate. In the 2010-2020 era, average people should have the building blocks to empower them to create (and throw away at the end of the workday) tools that in olden times would have been the whole purpose of a business startup.<br />
Unfortunately, some companies restrict and compartmentalize their releases like we&#8217;re still living in the twentieth century.<br />
There are also some mostly tech issues that I&#8217;m impatient with (speaking as a never-satisfied consumer and fan:)<br />
o The low pixel counts in contemporary head up displays.<br />
o The poor position coordination in current HUDs.<br />
o The lack of mass market acceptance of HUDs.<br />
o The lack of progress in distributed store-and-forward between mobile devices (sub-femtocell, ad hoc and transitory forwarding).<br />
o The lack of progress in uniform solutions to centimeter-scale localization.</strong></p>
<p><strong>Tish Shute:</strong> What do you feel will be the most impactful application of AR in people&#8217;s everyday lives?</p>
<p><strong>Vernor Vinge: There are nebulous and fairly high likelihood answers: AR apps that let each person/team see those aspects of physical reality that are important for their current activity. Pointing technologies that coordinate with that AR vision. The combination is a revolution of interfaces, and the probable physical disappearance of more and more of the gadgets that twentieth century people associated with high tech.</strong></p>
<p><strong>There are also more specific, spectacular, and necessarily uncertain impacts (that depend on social acceptance and the development of network infrastructure for consensual sharing of local imagery).<br />
o Economic disruption of the trend toward huge, expensive display devices.<br />
o Bottom-up social networking, arising from GPL&#8217;d tools. I see this as very disruptive, in good, bad and arguable ways, as illustrated by descriptive terms such as &#8220;consumer protection clubs&#8221;, &#8220;belief circles&#8221; and &#8220;lifestyle cults&#8221;. Some of these could be as public as our top-down social networks. Some might be quiet and widespread, perhaps growing out of pre-existing groups that already have a lot of intermember trust. (See: <a href="http://www-rohan.sdsu.edu/faculty/vinge/C5/index.htm" target="_blank">http://www-rohan.sdsu.edu/faculty/vinge/C5/index.htm</a>)<br />
o More farfetched, but in the tradition of the last 50 years: the digitization of external visual design: building architecture could give less priority to physical appearance and more to cheap physical strength, network access support, and physical modifiability.</strong></p>
<p><strong>Tish Shute: </strong>I interviewed Bruce Sterling earlier this week &#8211; <a href="http://www.ugotrade.com/2011/05/06/augmented-reality-transitioning-out-of-the-old-fashioned-legacy-internet-interview-with-bruce-sterling/" target="_blank">http://www.ugotrade.com/2011/05/06/augmented-reality-transitioning-out-of-the-old-fashioned-legacy-internet-interview-with-bruce-sterling/</a>. And I&#8217;m really looking forward to your &#8220;fireside chat&#8221; with Bruce at the end of Augmented Reality Event to sum up the event [<a href="http://augmentedrealityevent.com/schedule/" target="_blank">see the full schedule for ARE2011 here</a>]. But was there anything that particularly rang a bell for you in my conversation with Bruce?</p>
<p><strong>Vernor Vinge:</strong> <strong>Bruce says: <em>&#8220;&#8230; it&#8217;s pretty clear that the people who would weep for joy to have Augmented Reality are people whose reality is already damaged. People who need reality augmented as a prosthetic &#8230;&#8221;</em> This really rings a bell with me. And social networks with AR may have a special impact at small sizes, even just _two_ players. At such a scale, they might be better called &#8220;joint entities&#8221; than &#8220;social networks&#8221;. For example, two differently disabled persons, where one is mobile. There&#8217;s a lot more that could be said about this, including applications that could be done (maybe are being done) already.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/05/ar-contact1.jpg"><img class="alignnone size-medium wp-image-6319" title="ar-contact1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/05/ar-contact1-300x279.jpg" alt="" width="300" height="279" /></a><br />
</strong></p>
<p><em><a href="http://spectrum.ieee.org/biomedical/bionics/augmented-reality-in-a-contact-lens/0">Picture via IEEE Spectrum: Augmented Reality in a Contact Lens</a></em></p>
<p><strong>Tish Shute: </strong>As <a href="http://augmentedrealityevent.com/2010/08/25/are2010-keynote-by-jesse-schell-augmented-reality-will-define-the-21st-century/" target="_blank">Jesse Schell pointed out last year at ARE2010</a>, &#8220;The whole point of AR is to see things from a different point of view &#8230; How can there be a more powerful art form than one that actually changes what you see?&#8221;</p>
<p>The magic lens of the smart phone, screens large and small, projection, audio and sensory devices are mediating our AR experiences today.  Bruce pointed out last year in his opening keynote that these less immersive forms of AR have their own merits.</p>
<p>But eyewear has always been integral to the big vision of AR.  Do you see some interesting futures for AR without eyewear?  And how long before AR eyewear is part of our everyday lives?<br />
<strong>Vernor Vinge: This importance of vision is a visionist claim :-), but for the majority of us who have sight, binocular vision is by far the highest bitrate input we have, and we have enormously sophisticated wetware for analyzing what we see. Current display tech is far short of fully exploiting this input channel.</strong></p>
<p><strong>Along the way to this goal, I expect we&#8217;ll pass through mini-eras of exploiting the best-available tech. Right now, that is the tablet and the smartphone. Sometimes I almost wish for slower progress: in the nineteenth century, you could profitably spend your tech lifetime mastering one mechanism (for instance, black-and-white silver halide photography). The whole world would benefit from your career. Now, we rattle through the mini-eras so fast that we never fully exploit what&#8217;s zooming past before we&#8217;re on to the next stage.</strong></p>
<p><strong>How fast (or if) HUDs like in Rainbows End show up will probably depend on network and localizer tech as much as the HUDs themselves, with clear generational differences within such eyeware. In fact, it&#8217;s fun to imagine the mini-eras you could get with different combinations of HUDs tech, localization, and networking.</strong></p>
<p><strong>(Aside, a quibble: I think AR should not be restricted to visual only. There are tactile and kinesthetic possibilities, at least.)</strong></p>
<p><strong>(Aside, a whine: If only we had an output channel with the bitrate and flexibility of vision! Wearables plus voice and gesture could do some of that. Going further might involve scary human re-engineering. In <a href="http://www.fictionwise.com/ebooks/eBook4380.htm" target="_blank">Fast Times at Fairmont High</a>, I speculated that a small re-engineering (eidetic memory) could give a form of high-rate output, simply by allowing selection from very large menus.)</strong></p>
<p><strong>Tish Shute:</strong> Augmented Reality and Ubiquitous Computing are intimately connected. Is a distinction between AR and Ubicomp still useful? (This recent PARC blog post: <a href="http://blogs.parc.com/blog/2010/03/defining-ubiquitous-computing-vs-augmented-reality/" target="_blank">http://blogs.parc.com/blog/2010/03/defining-ubiquitous-computing-vs-augmented-reality/</a> takes a look at the definitions.)</p>
<p><strong>Vernor Vinge: In a literal sense there is a distinction, and there is enough technical challenge in AR to justify specialists spending all their time with AR. But Augmented Reality&#8217;s importance to humanity is in its role as a portal to the power of ubicomp and human cooperation.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/05/TechnologicalSingularity.jpg"><img class="alignnone size-medium wp-image-6317" title="TechnologicalSingularity" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/05/TechnologicalSingularity-200x300.jpg" alt="" width="200" height="300" /></a><br />
</strong></p>
<p><strong>Tish Shute:</strong> Augmented Reality, as we understand it now, is a human centered experience.  But even now some of the most important aspects of our lives are governed by machine to machine intelligences that operate for the most part beyond the reach of human perception, e.g., the trading bots of Wall Street.  What role can augmented reality play in better mediating between human intelligence and machine to machine intelligence?  Does AR hasten the arrival of the technological singularity?</p>
<p><strong>Vernor Vinge: I see four or five concurrently active paths to the Singularity:<br />
a) Artificial Intelligence: We create superhuman artificial intelligence in computers.<br />
b) Digital Gaia: The worldwide network of embedded microprocessors, sensors, effectors, and localizers becomes a superhumanly intelligent entity.<br />
c) Internet Scenario: Humanity with its networks, computers, and databases becomes a superhuman being. (Bruce&#8217;s story <a href="http://www.amazon.com/Good-Old-Fashioned-Future-Bruce-Sterling/dp/0553576429" target="_blank">&#8220;Maneki Neko&#8221;</a> is a beautiful and subtle illustration of this possibility.)<br />
d) Intelligence Amplification: We enhance individual human intelligence through human-to-computer interfaces.<br />
e) Biomedical: We directly increase our intelligence by improving the neurological function of our brains. (I regard this last item to be the weakest of the possibilities.)</strong></p>
<p><strong>AR is central to progress with possibilities (c) and (d).<br />
If we humans want to keep our hand in the game, AR is an important thing to pursue.</strong></p>
<p><strong>Tish Shute: </strong>Powerful computer vision apps are emerging for smart phones and face recognition technologies are beginning to appear in consumer apps.  Do you think we need a major shift in the way we handle data ownership?  And, is there &#8220;a real risk of our augmented reality world being owned by interests which are not our own&#8221;? (See my conversation with Anselm Hook last year: <a href="http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook" target="_blank">http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook</a>)</p>
<p><strong>Vernor Vinge: Yes, there is such a risk. (See also my political/legal comments in response to your question (1).)<br />
More broadly, I see DRM and the Law being used to reify our intellectual heritage as permanent private property. If this could work, it would be the biggest grab in history &#8212; and a major roadblock on human progress.</strong></p>
<p><strong>But even setting aside all the open/closed/free ideological questions, there is another important issue here: anytime laws are passed making popular and easily accomplished behavior illegal, things get very ugly. It may seem frivolous to compare this to the first stages of the War on Drugs, but that&#8217;s where serious enforcement would lead.</strong></p>
<p><strong>Tish Shute:</strong> We have seen gestural interfaces go mainstream in the last year.  What are the most interesting innovations with gestural interfaces that you have seen in recent months? What sessions will you go to at ARE this year?</p>
<p><strong>Vernor Vinge: I&#8217;m way behind the curve as to what is happening right now. Collecting data points on real hardware and applications is a high priority for me in attending ARE 2011.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/05/the-children-of-the-sky.jpg"><img class="alignnone size-medium wp-image-6322" title="the-children-of-the-sky" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/05/the-children-of-the-sky-196x300.jpg" alt="" width="196" height="300" /></a><br />
</strong></p>
<p><strong>Tish Shute:</strong> Are you reading/writing any new fictional literature about AR?  And/or, what design fictions for AR are most interesting to you at the moment?</p>
<p><strong>Vernor Vinge: As to writing: My novel The Children of the Sky should come out this October from Tor Books. It&#8217;s set in the far future and is the sequel to <a href="http://www.amazon.com/Fire-Upon-Deep-Vernor-Vinge/dp/0812515285" target="_blank">A Fire Upon the Deep</a>. Alas, the story has only indirect connections to our present technological interests.</strong></p>
<p><strong>As to reading: I got a big kick out of Daniel Suarez&#8217;s duology <a href="http://www.goodreads.com/book/show/4699575-daemon" target="_blank">Daemon</a> and <a href="http://search.barnesandnoble.com/Freedom/Daniel-Suarez/e/9780525951575" target="_blank">Freedom(TM)</a>.</strong></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2011/05/10/interview-with-vernor-vinge-smart-phones-and-the-empowering-aspects-of-social-networks-augmented-reality-are-still-massively-underhyped/feed/</wfw:commentRss>
		<slash:comments>9</slash:comments>
		</item>
		<item>
		<title>Platforms for Growth and Points of Control for Augmented Reality: Talking with Chris Arkenberg</title>
		<link>https://www.ugotrade.com/2010/10/27/platforms-for-growth-and-points-of-control-for-augmented-reality-talking-with-chris-arkenberg/</link>
		<comments>https://www.ugotrade.com/2010/10/27/platforms-for-growth-and-points-of-control-for-augmented-reality-talking-with-chris-arkenberg/#comments</comments>
		<pubDate>Wed, 27 Oct 2010 09:14:49 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[AR and html 5]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR eyewear for smart phones]]></category>
		<category><![CDATA[ardevcamp]]></category>
		<category><![CDATA[arduino]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[augmented foraging]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality eyewear]]></category>
		<category><![CDATA[augmented reality on tablets]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[cloud computing and AR]]></category>
		<category><![CDATA[EarthMine]]></category>
		<category><![CDATA[gartner hype cycle]]></category>
		<category><![CDATA[Gary Hayes]]></category>
		<category><![CDATA[John Battelle]]></category>
		<category><![CDATA[Kevin Slavin]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[Metaio]]></category>
		<category><![CDATA[Mobile AR]]></category>
		<category><![CDATA[mobile social augmented reality]]></category>
		<category><![CDATA[MUVEdesign]]></category>
		<category><![CDATA[NVidia augmented reality demo]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[Platforms for Growth]]></category>
		<category><![CDATA[Points of Control Map]]></category>
		<category><![CDATA[Porthole]]></category>
		<category><![CDATA[QR codes]]></category>
		<category><![CDATA[Qualcomm SDK for AR]]></category>
		<category><![CDATA[real time analytics and AR]]></category>
		<category><![CDATA[RFID]]></category>
		<category><![CDATA[Simple Geo]]></category>
		<category><![CDATA[The Battle for the Internet Economy]]></category>
		<category><![CDATA[Tim O'Reilly]]></category>
		<category><![CDATA[Total Immersion]]></category>
		<category><![CDATA[transmedia story telling]]></category>
		<category><![CDATA[trasmedia]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[Ushahidi]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[vision based AR]]></category>
		<category><![CDATA[W3C group on augmented reality]]></category>
		<category><![CDATA[Wave in a Box]]></category>
		<category><![CDATA[Web 2.0 Expo]]></category>
		<category><![CDATA[web standards based browser for AR]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5924</guid>
		<description><![CDATA[The Points of Control map is interactive, so please click here or on the image above for the full experience. Today at 4pm EST, 1pm PDT John Battelle and Tim O&#8217;Reilly will discuss the Points of Control map and The Battle for the Internet Economy in a Free Webcast: &#8220;More than any time in the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://map.web2summit.com/"><img class="alignnone size-medium wp-image-5931" title="Screen shot 2010-10-27 at 1.56.15 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-1.56.15-AM-300x181.png" alt="Screen shot 2010-10-27 at 1.56.15 AM" width="300" height="181" /></a></p>
<p><em>The Points of Control map is interactive, so please <a href="http://map.web2summit.com/" target="_blank">click here </a>or on the image above for the full experience.</em></p>
<p><em> </em>Today at 4pm EST, 1pm PDT John Battelle and Tim O&#8217;Reilly will discuss the <a href="http://map.web2summit.com/" target="_blank">Points of Control</a> map and The Battle for the Internet Economy <a href="http://oreilly.com/emails/poc_web2summit-webcast-prg.html" target="_blank">in a Free Webcast</a>:</p>
<p><strong>&#8220;More than any time in the history of the Web, incumbents in the network  economy are consolidating their power and staking new claims to key  points of control. It&#8217;s clear that the internet industry has moved into a  battle to dominate the Internet Economy.</strong></p>
<p><strong>John Battelle and Tim O&#8217;Reilly will debate and discuss these shifting  points of control as the board becomes increasingly crowded. They&#8217;ll map  critical inflection points and identify key players who are clashing to  control services and infrastructure as they attempt to expand their  territories. They&#8217;ll also explore the effect these chokepoints could  have on people, government, and the future of technology innovation.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-2.01.38-AM.png"><img class="alignnone size-medium wp-image-5932" title="Screen shot 2010-10-27 at 2.01.38 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-2.01.38-AM-300x124.png" alt="Screen shot 2010-10-27 at 2.01.38 AM" width="300" height="124" /></a></p>
<p>I&#8217;ve been wanting to start a discussion on the <a href="http://map.web2summit.com/">Points of Control map</a> in the Augmented Reality community for a while now, and Chris&#8217; recent post on <a href="http://www.gartner.com/it/page.jsp?id=1447613" target="_blank">the latest edition of the Gartner Hype Cycle</a>, <a href="http://www.urbeingrecorded.com/news/2010/10/13/is-ar-ready-for-the-trough-of-disillusionment/" target="_blank">&#8220;Is AR Ready for the Trough of Disillusionment?&#8221;</a> and this post by Mac Slocum, <a href="http://radar.oreilly.com/2010/10/two-ways-augmented-reality-app.html" target="_blank">&#8220;How Augmented Reality Apps Can Catch On,&#8221;</a> and the conversation in the comments between Mac, Raimo (one of the founders of <a href="http://www.layar.com/" target="_blank">Layar</a>), and Chris, all prompted me to get a conversation started&#8230; (see below for all that followed!). Chris put me on the hot seat back in June when he did <a href="http://www.boingboing.net/2010/06/17/tish-shute---augment.html" target="_blank">this very generous interview with me on Boing Boing</a>, so it was time to turn the tables.</p>
<p>Tim O&#8217;Reilly, in his <a href="http://www.youtube.com/watch?v=3637xFBvkYg&amp;p=6F97A6F4BA797FB3" target="_blank">keynote for Web 2.0 Expo,</a> pointed out there is both a fun and a dark side to the Points of Control map. There are companies on this map, he noted, that rather than &#8220;growing the pie,&#8221; are trying to divide up the pie, and they are forgetting to think about creating a sustainable ecosystem. I expect the conversation between Tim O&#8217;Reilly and John Battelle to dig deep into this Battle for the Internet Economy. If, like me, you have another engagement at the time of the webcast, you can register on the site to receive the recording.</p>
<p>AR is still too young to figure in the battles of the giants, but there will be a lot to be learned from this conversation. And the Points of Control map is good to think with from the POV of AR in many ways. As Chris Arkenberg observed:</p>
<p><strong>&#8220;When I look at this map, the points of control map, it&#8217;s really interesting to me, because what it says to me with respect to AR is each of these little regions that they have drawn out would be a great research project. So every single one of these should be instructive to AR.</strong></p>
<p><strong>In other words, we should be able to look at social networks, the land of search, or kingdom of ecommerce, and apply some very rigorous critical thinking to say, &#8220;How would AR add to this engagement, this experience of gaming, or ecommerce, or content?&#8221;</strong></p>
<p><strong>Looking at each of these individually and really meticulously saying, &#8220;OK, well yes, it can do this, but how is that different from the current screen media experience, the current web experience that we have of all these types of things?&#8221; You know, how can augmented reality really add a new layer of value and experience to these? And I think that process would really trim a lot of the fat from the hopes and dreams of AR and anchor it down into some very pragmatic avenues for development. And then you could start looking at, &#8220;Well, OK, what happens when we start combining these?&#8221; When we take gaming levels and plug that into the location basin, as you suggested.&#8221;</strong></p>
<p>Chris Arkenberg is a technology professional with a focus on product strategy &amp; development, specializing in 3D, augmented reality, ubicomp and the social web. He uses research, scenario planning, and foresight methodologies to help organizations anticipate change and adopt a resilient and forward-looking posture in the face of unprecedented uncertainty. His personal work is collected at <a href="http://urbeingrecorded.com" target="_blank">urbeingrecorded</a>, and his <a href="http://www.linkedin.com/in/chrisarkenberg" target="_blank">professional profile is here.</a></p>
<p>He is also one of the founder/organizers of <a href="http://ardevcamp.org" target="_blank">AR DevCamp</a>, which is currently scheduled for Dec. 4th (somewhere in SF or The Valley!). Chris said, &#8220;No further details atm (still trying to find a venue and get sponsors) but please direct people to http://ardevcamp.org for upcoming information.&#8221;</p>
<h3>Talking with Chris Arkenberg</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/ChrisArkenberg.jpg"><img class="alignnone size-medium wp-image-5929" title="ChrisArkenberg" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/ChrisArkenberg-300x199.jpg" alt="ChrisArkenberg" width="300" height="199" /></a></p>
<p><strong>Tish Shute:</strong> I know some people thought <a href="http://www.gartner.com/it/page.jsp?id=1447613" target="_blank">the positioning of AR by Gartner near the peak of the hype cycle </a>was misguided, and based on a very narrow understanding of AR as used in marketing apps. But reading your post I thought you made a lot of good points.</p>
<p><strong>Chris Arkenberg: It&#8217;s tracking hype, right? It&#8217;s not necessarily tracking the growth of the technologies or their maturation so much as it&#8217;s tracking the general attention level. And what&#8217;s interesting to me is that tends to affect the amount of money that goes into those technologies.</strong></p>
<p><strong>Tish Shute:</strong> I was particularly interested in your post because I have been writing a post about two recent O&#8217;Reilly events in NYC, <a href="http://makerfaire.com/newyork/2010/" target="_blank">Maker Faire</a>, <a href="http://www.web2expo.com/">Web 2.0 Expo</a>, and then <a href="http://www.cloudera.com/company/press-center/hadoop-world-nyc/" target="_blank">Hadoop World</a>, where Tim gave a very interesting 45 minute keynote. AR was pretty low profile at all three events. <a href="http://www.flickr.com/photos/bdave2007/5036397168/in/photostream/" target="_blank">But the NVidia augmented reality demo attracted a lot of attention at the sponsors expo,</a> and Usman Haque, Founder of <a href="http://www.pachube.com/" target="_blank">Pachube</a>, announced in <a href="http://www.web2expo.com/webexny2010/public/schedule/speaker/43845" target="_blank">his presentation</a> that they are working on an augmented reality interface for Pachube called Porthole; it&#8217;s designed for facilities management and, &#8220;as a consumer-oriented application that extends the universe of Pachube data into the context of AR &#8211; a &#8216;porthole&#8217; into Pachube&#8217;s data environments.&#8221; Usman also mentioned, when I talked to him, that he is contributing to the AR standards discussion and is on the program committee now <a href="http://www.w3.org/2010/06/16-w3car-minutes.html#item02" target="_blank">for the W3C group on augmented reality</a>. For more on this standards discussion and the Pachube AR interface, see Chris Burman&#8217;s paper for the W3C, <a href="http://www.w3.org/2010/06/w3car/portholes_and_plumbing.pdf" target="_blank">Portholes and Plumbing: how AR erases boundaries between &#8220;physical&#8221; and &#8220;virtual.&#8221;</a></p>
<p>I think pioneers in the augmented reality community should pay attention to these wider conversations about the Battle for the Internet Economy, and the exploration of the &#8220;Platforms for Growth&#8221; theme at <a href="http://www.web2expo.com/">Web 2.0 Expo</a> is very important &#8211; this is of course also a nudge to read my upcoming post on these O&#8217;Reilly events!</p>
<p>Also, I have another project I have been chewing on that I would like to talk to you about. I want to start an AR conversation about the wonderful <a href="http://map.web2summit.com/">Points of Control map</a> produced for Web 2.0 Summit by <a href="http://battellemedia.com/" target="_blank">John Battelle</a>. [Note there will be a &#8220;Battle for the Internet Economy&#8221; free Web2Summit webcast w/ @johnbattelle &amp; @timoreilly Wed 10/27 at 1pm PT http://bit.ly/b46cmb #w2s]</p>
<p>Up to this point, understandably given the immaturity of the technology, AR has had little role in the &#8220;Battle for the Internet Economy.&#8221; But this doesn&#8217;t mean that the map isn&#8217;t good for AR visionaries, enthusiasts, entrepreneurs, and developers to think with. And both you and Tim have pointed out the potential for AR to leverage the giant data subsystems in the sky. I have to say the positioning of Cloud Computing on the brink of heading down into the trough of disillusionment in this recent rendition of the Gartner Hype Cycle seems ridiculous!</p>
<p>Cloud Computing is already ubiquitous; it hardly seems credible that it is headed for a trough of disillusionment!</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-2.48.30-AM.png"><img class="alignnone size-medium wp-image-5940" title="Screen shot 2010-10-27 at 2.48.30 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-2.48.30-AM-300x199.png" alt="Screen shot 2010-10-27 at 2.48.30 AM" width="300" height="199" /></a></p>
<p><strong>Chris Arkenberg: Yeah, it&#8217;s ubiquitous, so why even talk about it when it&#8217;s your fundamental infrastructure?</strong></p>
<p><strong>Tish Shute:</strong> Yeah, and I seriously doubt it is imminently headed for a trough of disillusionment&#8230; and this brings me back to the Points of Control map which, as John Battelle points out, &#8220;aims to identify key players who are battling to control the services and infrastructure of a websquared world&#8221; in which the &#8220;Web and the world intertwine through mobile and sensor platforms.&#8221; This instrumented world, of course, creates a great deal of opportunity for augmented reality. Have you seen that, that points of control map?</p>
<p><strong>Chris Arkenberg:  I think I have, actually.</strong></p>
<p><strong>Tish Shute: </strong>There has been much debate about how this intertwining of the web and the world will play out in augmented reality. Chris Burman points out in his position paper for the W3C, <a href="http://www.w3.org/2010/06/w3car/portholes_and_plumbing.pdf" target="_blank">Portholes and Plumbing: how AR erases boundaries between &#8220;physical&#8221; and &#8220;virtual&#8221;</a>, that &#8220;trying to draw parallels between a browser based web and the possibilities of AR may solve issues of information distribution in the short-term,&#8221; but it must not have a limiting effect in the long-term. But now we at least have one <a href="https://research.cc.gatech.edu/polaris/" target="_blank">web standards-based browser for AR</a> thanks to the work of Blair MacIntyre and the Georgia Tech team. But I think the discussion in the comments of Mac Slocum&#8217;s recent post, <a href="http://radar.oreilly.com/2010/10/two-ways-augmented-reality-app.html" target="_blank">&#8220;How Augmented Reality Apps Can Catch On&#8221;</a> is an interesting starting point from which to think about platforms of growth for AR. I am not sure if I am stretching his meaning, but I think Raimo, of <a href="http://www.layar.com/" target="_blank">Layar</a>, is suggesting that what the Points of Control map calls the Plains of Media Content is very important to the growth of the fledgling AR industry right now. And I would agree with this, and add that the neighboring terrain of gaming levels will be pretty key, as one of my other favorite AR start-ups, <a href="http://ogmento.com/" target="_blank">Ogmento</a>, hopes to reveal in the near future! But what do you think was most important in this brief but pithy dialogue between you, Raimo, and Mac?</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-2.56.02-AM.png"><img class="alignnone size-medium wp-image-5941" title="Screen shot 2010-10-27 at 2.56.02 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-2.56.02-AM-300x179.png" alt="Screen shot 2010-10-27 at 2.56.02 AM" width="300" height="179" /></a></p>
<p>[The screenshot above is from a teaser video by <a title="Gary Hayes" href="http://www.personalizemedia.com/future-of-location-based-augmented-reality-story-games/?utm_source=feedburner&amp;utm_medium=twitter&amp;utm_campaign=Feed:+PersonalizeMedia+%28PERSONALIZE+MEDIA%29" target="_blank">Gary Hayes</a> of <a title="MuveDesign" href="http://www.muvedesign.com/">MUVEdesign</a> for his upcoming (2011 release date) game called Time Treasure. See Gary&#8217;s <a title="Gary Hayes" href="http://www.personalizemedia.com/future-of-location-based-augmented-reality-story-games/?utm_source=feedburner&amp;utm_medium=twitter&amp;utm_campaign=Feed:+PersonalizeMedia+%28PERSONALIZE+MEDIA%29" target="_blank">blog</a> for more, and Gary&#8217;s <a href="http://www.personalizemedia.com/16-top-augmented-reality-business-models/" target="_blank">post from over a year ago</a> on AR business models. Thomas K. Carpenter, <a href="http://gamesalfresco.com/2010/10/25/time-treasure-future-tablet-game/" target="_blank">on Games Alfresco, notes</a>, &#8220;I think this is a terrific idea and I find it interesting he&#8217;s planning this on a tablet rather than a smartphone.&#8221;]</p>
<p><strong>Chris Arkenberg: The way I took it&#8230; And to give a little bit of context, I came from sort of this apprehension of augmented reality as an expression of the existing Internet. So as sort of a visualization layer that allows you to kind of draw out data, and then, with all the affordances of being able to anchor it to real world things.</strong></p>
<p><strong>And my own sort of path has led me to want to really try to understand that and refine it, particularly with respect to the sort of Internet of things and the smarter planet idea of just having embedded systems everywhere. And specifically, what is the value-add for augmented reality as a visualization layer of an instrumented world?</strong></p>
<p><strong>And so that&#8217;s caused me to be a bit biased towards that side of AR. And the way I took Raimo&#8217;s comment was that he was saying that, &#8220;You know, really what we&#8217;re interested in is media.&#8221; That he was effectively saying that AR for them is really just about that space between the screen and the world, or between your eyes and the world, and what you can do there.</strong></p>
<p><strong>Certainly I had considered it in the past, but I hadn&#8217;t really focused on it or assumed that it was a priority as a business model. And so he kind of reminded me that, actually, there&#8217;s a lot of entertainment applications. There&#8217;s a lot of, obviously, advertising and marketing applications.<br />
And so I felt that I was being a little narrow in my focus&#8230;</strong></p>
<p><strong>Tish Shute: </strong>Yes, this comes to the heart of what I am interested in about the role AR can play in opening up new relationships to the world of data that we live in &#8211; not just making it more accessible and useful to us when and where we need it, but AR as a road to reimagining it.</p>
<p>Have you seen any interesting work yet that explores these great data economies in the cloud through AR? I mean, can you think of any others? There is <em><a href="http://www.planefinder.net/" target="_blank">planefinder.net</a></em>, but others?</p>
<p><strong>Chris Arkenberg: I&#8217;ve seen a few just sort of skunk works type applications that people have been playing around with, again, to try and reveal things. One of them was similar to the aircraft, but it was more for military use and being able to identify things of interest in the sky. I&#8217;ve seen a couple of others for navigation, so being able to identify mountain peaks on a visual plane, for example, but this isn&#8217;t so much about revealing an instrumented world.</strong></p>
<p><strong>Tish Shute:</strong> Yeah, I think that was from the Imagination, right? I know that&#8217;s an interesting one. Usman at Web 2.0 Expo, <a href="http://www.web2expo.com/webexny2010/public/schedule/speaker/43845" target="_blank">in his presentation</a>, mentioned the work Pachube is doing on an Augmented Reality interface. I interviewed Usman again, as my last long interview with him was nearly 18 months ago now, and Pachube is well on the way to becoming the Facebook of Data or &#8211; the analogy that Usman prefers &#8211; the Twitter of sensors!</p>
<p><strong>Chris Arkenberg:  Hmm, interesting.</strong></p>
<p><strong>Tish Shute:</strong> And to go back to your comments on Augmented Reality not getting caught in some of the traps that have made virtual worlds lose relevancy: I think it is vital that AR developers understand the strategic possibilities of key points of control in the internet economy, because the isolation and Balkanization of virtual worlds were certainly a factor in their rapid slide into the trough of disillusionment &#8211; although many would argue that a fundamental flaw in the kind of virtual experience that Second Life and other virtual worlds constructed was really the fatal flaw (see James Turner&#8217;s interview with Kevin Slavin, <a href="http://radar.oreilly.com/2010/09/drawing-the-line-between-games.html" target="_self">Reality has a gaming layer</a>).</p>
<p>But Second Life&#8217;s isolation from the other great network economies of the internet was certainly a limiting factor.</p>
<p><strong>Chris Arkenberg: And that&#8217;s been exactly my sense, and I&#8217;ve, over the years, tried to encourage development in that direction for virtual worlds. I did work, through Adobe, to help develop Atmosphere 3D back in the early 2000s. And we did a lot of work to try and understand the marketplace and the specific value-add of doing things in 3D over 2D.</strong></p>
<p><strong>And this is kind of why I keep referring back to VR and VWs with respect to augmented reality: with immersive worlds, there was this idea&#8230; there was this big rush. Everybody was so excited about it. It was obviously the next cool thing. And everybody wanted to try to do everything in it. You could do your shopping in virtual worlds. You could have meetings in virtual worlds.</strong></p>
<p><strong>Tish Shute:</strong> And shopping, yes&#8230; that didn&#8217;t work out so well!</p>
<p><strong>Chris Arkenberg: And everybody was very excited in developing these things. And what it really came down to is, &#8220;Yeah, you can, but it&#8217;s actually a lot better to do those things on a flat plane or in person.&#8221; Meeting Place, WebEx, TelePresence &#8211; those tools generally do a much better job at facilitating telepresence meetings than a virtual world does. The same with telepresent education. There are only very specific things that both VR and AR are really good at.</strong></p>
<p><strong>And that&#8217;s where I find myself with augmented reality right now, trying to really pick through that and critically look at which uses are really appropriate for an AR overlay. And again, I think that&#8217;s why the hype cycle is important, because it reflects back this desire that AR is going to be the next big thing &#8211; the be-all, end-all of interacting with data in the cloud &#8211; and forces us all to take a critical look at why we should do things in AR instead of on a screen.</strong></p>
<p><strong>AR is not going to work well for most things, but it&#8217;s going to be very good for certain uses. Right now I&#8217;m very keen on trying to understand what those things might be.</strong></p>
<p><strong>Tish Shute:</strong> I had this wonderful conversation (more in an upcoming post) with Kevin Slavin, one of the founders of <a href="http://areacodeinc.com/" target="_blank">Area/Code</a>, at Web 2.0 Expo, and I think some of what he describes about the data brokerages of high frequency trading has some interesting implications for AR&#8217;s role, say, in ubiquitous computing. The trading markets are now pretty much dominated by machine to machine intelligence; machine to machine brokerages. They are basically game economies on a scale that we can barely wrap our heads around, where the speed at which bots and algo traders can access the network is the key. We really have no clue what is going on until we lose our house&#8230;</p>
<p>Kevin was also <a href="http://radar.oreilly.com/2010/09/drawing-the-line-between-games.html" target="_blank">interviewed by James Turner on O&#8217;Reilly Radar.</a> He talked about how much of the interesting work in location based mobile social apps is defined in opposition to the model of Second Life. He also talked to me about how we are seeing &#8220;first life&#8221; take on the qualities of &#8220;second life.&#8221; What goes on on the trading floor is largely a performance, secondary to a more important world of machine intelligence with giant co-located servers and bots fighting for trading advantages measured in fractions of seconds.</p>
<p>He pointed out how we draw on all these tropes from sci-fi movies, these HUDs based on ideas of machine intelligence where the robot talks to the other robot in English through an English HUD! Many of our current visual tropes for AR are perhaps just as inadequate for the kind of data driven world we live in.</p>
<p>Of course, when you are thinking of having fun with dinosaurs, or illustrated books, or whatever, this is not, perhaps, an issue. But if you are thinking of augmented reality interfaces as being important in a battle for the network economy, and platforms for growth, then how this new interface helps us live better in a world of data is an important issue.</p>
<p><strong>Chris Arkenberg: Now, does that indicate that the UI just needs more overhaul and innovation, or more that the visual interface for those experiences shouldn&#8217;t really leave the screen? It shouldn&#8217;t move on to the view plane?</strong></p>
<p><strong>Tish Shute: </strong>Yes, we have a few concept videos that try to explore this&#8230;</p>
<p><strong>Chris Arkenberg: Well, and I think this will happen at the level of the human-computer interface. I mean, that&#8217;s always been its role: making the sort of machine mind, for lack of a better term, coherent to the human mind. So there is a lot of this sort of machine intelligence, the semantic Web 3.0 revolution, where it really is about enabling machines, and agents, and bots to understand the content that we&#8217;re feeding them.</strong></p>
<p><strong>But at the end of the day, they, for now, need to be providing value to us human operators. So there&#8217;s always going to be a role for human-computer interface and user experience design to make this stuff meaningful.</strong></p>
<p><strong>I mean, if you look at the revolution in visualization &amp; data viz, this is of incredible value because it takes a tremendous amount of data and collates it into a glanceable graphic that you can look at and immediately comprehend massive amounts of data, because it&#8217;s delivered in a handy, visual way.</strong></p>
<p><strong>So I see that as a fascinating design challenge, how the user experience of the data world can be translated into meaningful human interaction.</strong></p>
<p><strong>Tish Shute:</strong> Yeah. And when we see <a href="http://stamen.com/" target="_blank">Stamen Design</a> pursuing a big idea in AR, that&#8217;s when we might start to rock and roll, right?</p>
<p><strong>Chris Arkenberg:  Yeah. In my article, I sort of jokingly suggested that Apple will create the iShades.  But, theyâ€™ve got the track record of being way ahead of the curve and delivering the future in very bold forms.</strong></p>
<p><strong>Tish Shute:</strong> A key part of the battle for the network economy is to bring the complexity of data into the human realm in a way that increases human agency.  Kevin suggests that the giant robot casinos of the markets should actually lift off into total abstractions, as these machine-driven trades get back into the human realm in ways that are so damaging to our lives &#8211; a lost house or job!  The notion of a counterveillance society where people have more agency over the important aspects of their lives &#8211; health, housing, job (which I discussed with Kevin &#8211; interview upcoming) &#8211; has gotten pretty tricky!</p>
<p>But I think we will begin to see AR eyewear for specific applications (gaming and industrial) get more common fairly soon &#8211; possibly as smart phone accessories.</p>
<p>And it is clear that AR is going to be, increasingly, a part of our entertainment smorgasbord in coming months. The iPod touch has a camera (although lower resolution), Nintendo&#8217;s handhelds are AR-ready, and many aspects of the AR vision of hands-free spatial interfaces will go mainstream through Natal.</p>
<p>But we have yet to see an app/platform emerge for mobile social AR games that turn every bar and cafe and ultimately the whole city into a gaming venue &#8211; although I think Ogmento and MUVE aim to lead the way here!  Will an AR company achieve Zynga-level success by building on Foursquare, for example?</p>
<p>My feeling is that the lesson of Zynga is pretty important for mobile social AR games.  Could Flash social gaming have taken off without Facebook?</p>
<p><strong>Chris Arkenberg:  And that’s the real driver.  And again, as you mentioned with Second Life, and this was exactly my own sense, is that they stuck to the closed garden model and didn’t get the power of social and collaboration.  They attempted to add some of those affordances within the world, but, you know, ultimately most people aren’t in virtual worlds, and most people aren’t using augmented reality.  So leveraging the really predominant platforms like Twitter and Facebook and Foursquare, being able to leverage those affordances, that connectivity, into a platform like augmented reality, I think, is really critical. Because again, you get nothing unless you have the masses, unless you have people present.</strong></p>
<p><strong>Tish Shute:</strong> In AR research there is a long history of the notion of powerful AR-dedicated devices, but smart phones and tablets are good enough, and can launch augmented reality into the heart of the internet economy.  I think the elusive AR eyewear will come to us initially as a smart phone accessory for specific apps.  But, for the moment, most AR apps make little attempt to play in the wider internet economy.</p>
<p><strong>Chris Arkenberg:  And I think it’s actually much lower hanging fruit, really, to do gaming, marketing, transmedia.  Because then you don’t really care about the cloud, or maybe you only really care about a little part of it that your gaming property is addressing. Then it becomes much more about entertainment, and much more about persuasion, and sensationalism.  And if you’ve got dancing dinosaurs on your street, great!  It’s entertaining, it’s cool, it’s new. That stuff is fairly straightforward.</strong></p>
<p><strong>I keep coming back to this idea of, you know, the instrumented city.  What sort of data trails do you get out of a fully instrumented city?  So maybe you get traffic patterns, maybe you get geo-local movements of masses, maybe you get energy usage, that sort of thing &#8211; all the sort of heat maps you can generate from a city. But then what good does it do to be able to have that on an augmented reality layer versus just looking at it on a mobile device or looking at it on your laptop?</strong></p>
<p><strong>Tish Shute:</strong> Of course the use cases for “magic lens” AR are different from the kind of hands-free, 360 view with tightly registered media that a full vision of AR has always promised.  The 360 view is quite a different metaphor from the web and mobile rectangular screens.</p>
<p><strong>Chris Arkenberg:  Yes, yes.</strong></p>
<p><strong>Tish Shute:</strong> Did you see that <a href="http://laughingsquid.com/tweet-it-ipads-vs-iphones-a-parody-of-michael-jacksons-beat-it/" target="_blank">great parody of Michael Jackson&#8217;s</a> “Beat It” with the iPads versus the iPhones, right?</p>
<p><strong>Chris Arkenberg:  Oh, really?</strong></p>
<p><strong>Tish Shute:</strong> I tweeted it because I thought it was quite funny and a little close to the bone!<br />
[laughter]</p>
<p>&#8220;ur wanna an ipatch 2 b the new fad?&#8221; #AR gets cameo in Twitter, iPads &amp; iPhone&#8217;s Michael Jackson-Inspired Parody via @mashable</p>
<p>It is hard to get away from the importance of eyewear when discussing AR!</p>
<p><strong>Chris Arkenberg: Yes, so the hardware, to me, is a big stumbling point right now, or it’s a large gating factor, I think, for realizing what an augmented reality vision could really be like.  That it really does need to be heads up.  This holding the phone up in front of you is fun to demonstrate that it’s possible, and it’s valuable in some ways…</strong></p>
<p><strong>Tish Shute:</strong> And it’s particularly nice in some applications like the planes app, the Acrossair subway app where you hold the phone down and get the arrow, right?</p>
<p><strong>Chris Arkenberg:  Yeah, the way-finding stuff I think is really valuable&#8230;</strong></p>
<p><strong>Tish Shute:</strong> Sixth Sense really caught people’s imagination because it managed to deliver the gesture interface with cheap hardware, even if projection has limited uses (no brightly lit spaces or privacy, for example!).</p>
<p>The other important and as yet unrealized part of the AR dream is real-time communications.  Many interesting use cases would require this. As you know, that is my chief excitement, along with federation, in the Google Wave servers (which should soon be released as <a href="http://googlewavedev.blogspot.com/2010/09/wave-open-source-next-steps-wave-in-box.html" target="_blank">Wave in a Box</a>) for <a href="http://www.arwave.org/" target="_blank">ARWave</a>.</p>
<p><strong>Chris Arkenberg:  Well my sense of Wave is that it was a ChromeOS protocol that they instantiated, or that they exhibited in the public deployment of Google Wave.  That was a proof of their sort of low-level architectural solution.  Because, you know, they’ve been rumored to be working on this cloud OS for some time. And so my sense is that Wave is actually one of the core components of that cloud OS, and that it just happened to incarnate for the public in a test run as Google Wave.</strong></p>
<p><strong>Tish Shute:</strong> I do hope that Wave in a Box will lower the barriers to entry for people experimenting with this technology.  The FedOne server was just way too hard for most people to take the time to set up.  Of course, it is the brilliance of the Wave operational transform work that also poses problems in terms of ease of use. But the Wave Federation Protocol is pretty innovative, and could even play an important role in real-time communications for AR eyewear connected to smartphones. The challenges that Wave takes on re real-time communications, federation, permissions and filters are pretty important ones for AR…</p>
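<p>[To make the operational transform idea concrete, here is a toy sketch of how two concurrent inserts get reconciled so every participant converges on the same text. This is an illustration only &#8211; Wave&#8217;s real OT operates on structured documents, not plain strings.]</p>

```python
# Toy operational-transform sketch in the spirit of Wave's concurrency
# model -- NOT the real Wave OT, which handles rich documents, not strings.

def transform(ins_a, ins_b):
    """Transform insert op A against concurrent insert op B.

    Each op is (position, text). Returns A adjusted so that applying
    B and then the adjusted A yields the same document as the other order.
    """
    pos_a, text_a = ins_a
    pos_b, text_b = ins_b
    if pos_a > pos_b or (pos_a == pos_b and text_a > text_b):
        # B's insertion shifted A's target position to the right.
        return (pos_a + len(text_b), text_a)
    return ins_a

def apply_op(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

# Two clients edit "hello world" concurrently:
doc = "hello world"
a = (5, ",")    # client A inserts a comma after "hello"
b = (11, "!")   # client B appends "!"

# Server applies A first, then B transformed against A:
state1 = apply_op(apply_op(doc, a), transform(b, a))
# Or B first, then A transformed against B:
state2 = apply_op(apply_op(doc, b), transform(a, b))

assert state1 == state2 == "hello, world!"  # both orders converge
```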
<p><strong>Chris Arkenberg:  Especially when you’re trying to federate a lot of permissions and filter a lot of data, which all of that gets even more important when you have a visual layer between you and the real world.</strong></p>
<p><strong>Tish Shute:</strong> You got it.  Yeah!</p>
<p><strong>Chris Arkenberg:  I think that’s really valuable real estate, both for third parties that want to get access to your eyes, as well as for you, as the user, who still needs to navigate through the phenomenal world and not be occluded by massive amounts of overhead data.</strong></p>
<p><strong>Tish Shute:</strong> Yes, I am sure Google has big plans for the next level of cloud computing and Wave looks at some key challenges.  I suppose federation poses some key business problems.  I think it was Michael Jones who said to me that it was a bit like socialism in that you have to be willing to give something up for the greater good.</p>
<p>Perhaps federation does not hold enough appeal because of the business-model challenges it poses?</p>
<p><strong>Chris Arkenberg:  Well, I wonder.  I mean there’s got to be some value for their ad platform as ads are moving more towards this personalized experience.  Advertising is becoming less of a shotgun blast and more of a very precise, surgical strike. So being able to track user data to such a fine degree to mobilize the appropriate ads around them wherever they are, on any platform, is certainly very valuable to Google and their ad ecology.</strong></p>
<p><strong>Tish Shute:</strong> Many people have high hopes that HTML5, by lowering the barrier to entry for browser-style AR, could also pave the way for some interesting AR work…</p>
<p><strong>Chris Arkenberg:  Well, as much as I would hope that all the different players are going to come together and establish some shared set of standards, really, what’s happening is it’s a rush to the finish line to be the first…to get the most penetration in the marketplace so that Layar, for example, can say, “It’s official.  We’re the platform.”  And then the consolidation that will follow, where the Googles and the other big players like Qualcomm say, “OK, it’s mature enough.  We’ll start buying up all the smaller companies.”</strong></p>
<p><strong>And that’s where the real challenge is right now is that there are no standards.  It’s such an immature technology that you have a lot of different players trying to establish the ground rules.  And again, this is one of the challenges that faced public virtual worlds, is that you had a lot of different virtual worlds that weren’t talking to each other in any particular way, and that they each had their own development platform. And so you end up with a very fractured ecosystem or set of competing ecosystems, which is kind of what’s happening with AR right now, where a developer has to choose between a number of different new platforms or hedge by deploying across multiple platforms. Basically, the web browser wars are set to be recapitulated by the AR browsers.</strong></p>
<p><strong>Among them, Layar and Metaio seem to be getting the most traction.  But there’s still not a really strong case for a unified development ecosystem to emerge.</strong></p>
<p><strong>Tish Shute:</strong> So a discussion of ecosystem development brings us back to the Points of Control Map I think. So what do you see as key points of interest for AR developers to watch in the  Points of Control Map? And where do you want to sort of put your bets, right?  We are still really waiting for mobile social AR to emerge into the mainstream.</p>
<p><strong>Chris Arkenberg:  Yes.  And that’s primarily the shortcoming of the hardware itself, but also of the accuracy of current GPS technology.  That’s another kind of gating factor, because again, AR wants to be able to express the data within a distinct place or object.</strong></p>
<p><strong>So in a lot of ways, other than kind of what we’ve allowed for the broader entertainment purposes, for AR to really work, there needs to be more resolution in GPS location.  So for it to be truly locative…because it’s OK to tell Foursquare that you’re in Bar X.  But if you want to be able to draw data directly on a wall within that bar, or do advertising over the marquee on the front, you need more factors to accurately register those images on a discrete location. So that’s another sort of aspect of the immaturity of AR, is that it’s still very hard to register things on discrete locations without employing a number of diverse triangulation methods.</strong></p>
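<p>[A rough back-of-the-envelope sketch of the registration problem, with made-up coordinates: a typical consumer-GPS error on the order of 0.00005 degrees of latitude already works out to several meters on the ground &#8211; enough to miss a wall or marquee entirely.]</p>

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# A bar's marquee vs. a fix offset by a small GPS error
# (0.00005 degrees of latitude; coordinates are illustrative):
err = haversine_m(37.7749, -122.4194, 37.77495, -122.4194)
print(f"{err:.1f} m")   # ~5.6 m -- several storefronts away, not a wall
```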
<p><strong>Tish Shute:</strong> Right.  The mobile AR games we see at the moment are really just faking a relationship to the physical world unless they rely on markers or some limited form of natural feature recognition, which is really just a more sophisticated form of markers.  But the Qualcomm SDK does offer some opportunities to tie AR media to the world more tightly, as does the Metaio SDK. But in terms of a mobile social AR game that could be like the Cape of Zynga to Foursquare in the Location Basin [see the <a href="http://map.web2summit.com/">Points of Control map</a>]&#8230; We haven&#8217;t seen anything close yet.</p>
<p>AR should be able to bring the check-in mode to any object in our environment.</p>
<p><strong>Chris Arkenberg:  Yes, yes.  And that’s actually one of the early interests I had in the notion of social augmented reality. I wanted a way to tag my community with invisible annotations that only certain people could read, and found pretty quickly that that’s very difficult to do.  I mean you can kind of do some regional tagging, like on a beach, for example, but if you wanted to tag the bench that was on the cliff above the beach, it’s very difficult to do that using strictly locative reckoning.</strong></p>
<p><strong>There’s all sorts of really cool social engagement that can be revealed when people are allowed to attach things to the world around them, to the streets they normally pass through, or the points of interest that they normally engage in. To be able to author on the fly on the streets and attach it discretely to an object, effectively.</strong></p>
<p><strong>Tish Shute:</strong> And yes, we do have all kinds of markers and QR codes.  But Erick Schonfeld of TechCrunch <a href="http://techcrunch.com/2010/10/18/likify-qr-code/" target="_blank">made a good point about QR codes</a>: &#8220;Until QR code scanners become a default feature of most smartphones and they start to become actually useful enough for people to go through the trouble to scan them, they will remain a gee-whiz feature nobody uses.&#8221;</p>
<p><strong>Chris Arkenberg:  So again, this gets back to competing standards and who gets access to the phone stack, the bundle. Who gets the OEM deal…?</strong></p>
<p><strong>Tish Shute:</strong> Yes, the battles for the networks on the Handset Plains are pretty important for AR!<br />
[laughter] I think Layar has made some smart moves on the Handset Plains.</p>
<p>And there are a lot of acquisitions of nearfield technology to look at.  If I remember rightly, eBay bought the RedLaser tech from Occipital &#8211; now there&#8217;s an interesting company. Their panorama stuff rocks!</p>
<p><strong>Chris Arkenberg:  Right. There’s a lot of nearfield stuff that’s supposed to hit all of the major mobile platforms in the next year or so.</strong></p>
<p><strong>I mean I think where this is heading, in my mind, is basically smart motes.  You know, little wide-range nearfield RFIDs that are the size of a small, tiny square that you could attach to just about anything and then program to be a representative of your establishment or of an object, so that you can start to tag just about anything. I mean you can’t rely on geo to do it, but if you have a nearfield chip there that costs maybe like two cents to buy in bulk, and you can flash program it, then you can start to attach data to just about anything.</strong></p>
<p><strong>Tish Shute:</strong> Yes, &#8216;cos some things still remain very difficult for image recognition technologies like Google Goggles.</p>
<p><strong>Chris Arkenberg:  Well, if your phone can interrogate for nearfield devices, and it detects a chip in its near field, it can then interrogate that chip.  The chip may contain flash data on itself, or it may point to the local server in the establishment, or it may go to the cloud and get that data back.</strong></p>
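<p>[The three-way resolution described here &#8211; payload flashed onto the chip itself, a pointer to a server inside the establishment, or a cloud lookup &#8211; can be sketched as a simple dispatcher. The record fields and URLs below are invented for illustration, not any real tag format.]</p>

```python
# Sketch of the three-way resolution described above for a near-field tag:
# the chip may carry its payload inline, point at a server on the local
# network, or hand back a key into a cloud registry.

def fetch(url):
    # Stand-in for an HTTP round trip; returns a dict for illustration.
    return {"source": url}

def resolve_tag(record):
    """Turn a raw tag record into displayable data."""
    kind = record["type"]
    if kind == "inline":
        # Flash memory on the chip holds the payload itself.
        return record["payload"]
    if kind == "local":
        # Chip names a server inside the establishment's own network.
        return fetch(f"http://{record['host']}/tag/{record['id']}")
    if kind == "cloud":
        # Chip is just a key into a hosted registry (hypothetical URL).
        return fetch(f"https://tags.example.com/{record['id']}")
    raise ValueError(f"unknown tag type: {kind}")

menu = resolve_tag({"type": "inline", "payload": {"name": "Bar X happy hour"}})
print(menu["name"])   # Bar X happy hour
```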
<p><strong>Tish Shute:</strong> Yes, there is movement from the top, and open source hardware like Arduino has created an opportunity for all sorts of creativity with instrumented environments.  And the handheld sensors in our pockets &#8211; our smart phones &#8211; create a lot of opportunity for bottom-up innovation too.</p>
<p><strong>Chris Arkenberg:  I mean that’s my guess.  If you look at what IBM is doing with their Smarter Planet initiative, they’re partnering with a lot of municipalities, and obviously with a lot of businesses and their global supply chains.</strong></p>
<p><strong>But they’re basically working with municipalities and all these stakeholders to instrument their territory, their business, or their city, as it were. So they’re working to provide embedded sensors and the software necessary to read them out and run reports &amp; viz.  And presumably that software can extend to include some sort of mobile device to interrogate the sensors and read the data.</strong></p>
<p><strong>That’s kind of a top-down approach of a very large global company working with top-down governance bodies to do this. Simultaneously you have the maker crowd experimenting with Arduino and such to build from the grassroots, the bottom-up approach.</strong></p>
<p><strong>And that’s primarily gated by the amount of learning it takes to be able to program these devices, to be able to hack them.  Typically, the grassroots creators who make these devices don’t have the luxury of very large budgets to make things highly usable and WYSIWYG.</strong></p>
<p><strong>So the bottom-up community is a sandbox to create tremendous amounts of innovation, because they are unconstrained by the very real financial needs of the top-down innovators.  And so you get a lot of fascinating innovation, a very rich ecology from the bottom-up approach, but you don’t get a lot of wide distribution.  But that does filter up to and inform the top-down approach that has a lot more money to put into this stuff.  And it ultimately has to respond to the needs of the marketplace.</strong></p>
<p><strong>I mean if there’s an answer to the question of whether something like AR will succeed through the bottom-up grassroots approach or the top-down industry approach, I would say it would be both.  That handsets will be hacked to read the bottom-up innovations of the maker community, and handsets will be preprogrammed to read the top-down efforts of the IBMs of the world.</strong></p>
<p><strong>Tish Shute:</strong> Yes, but I have to say it is very time-consuming hacking phones (I have just seen a few days sucked up by this myself so that I could upgrade my G1 to try out the new ARWave client!).  I mean Android has obviously been the platform of choice because of its openness, but the business model of the iPhone and its market share in the US sure make it important for developers.  It’s like you don’t exist if you don’t have an iPhone app for what you are doing.</p>
<p><strong>Chris Arkenberg:  Yeah, and that’s the challenge, because at the end of the day developers prefer not to work for free and a solid, reliable mechanism to monetize their efforts becomes very appealing.</strong></p>
<p><strong>When I look at this map, the points of control map, it’s really interesting to me, because what it says to me with respect to AR is each of these little regions that they have drawn out would be a great research project. So every single one of these should be instructive to AR.</strong></p>
<p><strong>In other words, we should be able to look at social networks, the land of search, or kingdom of ecommerce, and apply some very rigorous critical thinking to say, “How would AR add to this engagement, this experience of gaming, or ecommerce, or content?”</strong></p>
<p><strong>Looking at each of these individually and really meticulously saying, “OK, well yes, it can do this but how is that different from the current screen media experience, the current web experience that we have of all these types of things?”  You know, how can augmented reality really add a new layer of value and experience to these? And I think that process would really trim a lot of the fat from the hopes and dreams of AR and anchor it down into some very pragmatic avenues for development.  And then you could start looking at, “Well, OK, what happens when we start combining these?” When we take gaming levels and plug that into the location basin, as you suggested.</strong></p>
<p><strong>Tish Shute: </strong> Some of the important platforms for AR don’t appear to have spots on the map, like Google Street View and other mapping technologies that hold out so much hope for AR, or am I missing something?</p>
<p><strong>Chris Arkenberg:  You mean on the map?</strong></p>
<p><strong>Tish Shute:</strong> Yes for the full vision of AR we need sensor integration, computer vision and cool mapping technologies to come together. Do you see where Google Maps and Google Street View&#8230; Where would they be?</p>
<p><strong>Chris Arkenberg:  Yeah, I mean it’s certainly content, it’s location…</strong></p>
<p><strong>Are you familiar with Earthmine?</strong></p>
<p><strong>Tish Shute:</strong> Yes, yes I am, definitely. <a href="http://www.earthmine.com/index" target="_blank">Earthmine</a>, <a href="http://simplegeo.com/" target="_blank">SimpleGeo</a>, Google Street View, user-generated internet photo sets like Flickr &#8211; all of these could be very important to AR, potentially.</p>
<p><strong>Chris Arkenberg:  Well, and the interesting thing about Earthmine is that they’re effectively trying to do an extremely precise pixel-to-pixel location mapping.  So they’re taking pictures of cities just like Street View, except they’re using the Z axis to interrogate depth and then using very precise geolocation to attach a GPS signature to each pixel that they’re registering in their images. Effectively, you get a one-to-one data set between pixels and locations.  And so you can look at something like Google Street View, and if you point to the side of a building, in theory, it should know exactly where that is.</strong></p>
<p><strong>They’re rolling this out with the idea of being able to tag augmented reality objects in layers directly to surfaces in the real world.  So that’s another approach to trying to get accurate registration and to try and create what are essentially mirror worlds. Then your Google Street View becomes a canvas for authoring the blended world, because if you plop a 3D object into Street View on your desktop, and then you go out to that location with your AR headset, you’ll see that 3D object on the actual street.</strong></p>
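<p>[The per-pixel geolocation idea amounts to back-projecting each pixel through the camera model using its measured depth. A minimal pinhole-camera sketch gives the flavor &#8211; the intrinsics and pose below are invented numbers for illustration, not Earthmine&#8217;s actual pipeline.]</p>

```python
import math

def pixel_to_world(u, v, depth, fx, fy, cx, cy, cam_pos, yaw):
    """Back-project pixel (u, v) with a depth reading into world coords.

    Pinhole camera model: form the ray through the pixel, scale it by the
    measured depth, then rotate by the camera's heading (yaw, radians) and
    offset by its position. All distances in meters.
    """
    # Camera-frame ray (x right, y down, z forward), scaled to the surface.
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    z = depth
    # Rotate into the world frame (flat ground, yaw only, for simplicity).
    wx = math.cos(yaw) * x + math.sin(yaw) * z
    wz = -math.sin(yaw) * x + math.cos(yaw) * z
    return (cam_pos[0] + wx, cam_pos[1] - y, cam_pos[2] + wz)

# Illustrative numbers: 640x480 image, ~60 degree FOV, camera 1.5 m up.
fx = fy = 525.0
cx, cy = 320.0, 240.0
point = pixel_to_world(u=320, v=240, depth=8.0,
                       fx=fx, fy=fy, cx=cx, cy=cy,
                       cam_pos=(0.0, 1.5, 0.0), yaw=0.0)
print(point)   # (0.0, 1.5, 8.0): straight ahead, 8 m out
```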
<p><strong>Tish Shute:</strong> There was some experimental work with Google Earth as a platform for a kind of simulated AR, but I suppose Google Earth doesn’t figure in the battle for the network economy as it never got developed as a platform.</p>
<p><strong>Chris Arkenberg:  It hasn’t tried to become a platform, to my knowledge.  I mean I know some people are doing stuff with it, but as far as I know, Google owns it, they did it the best because they have the best maps, and there’s not a huge ecosystem of development that’s based around it other than content layers.</strong></p>
<p><strong>And my sense of everything else on the Points of Control map is they’re looking more at these sort of platform technologies that…</strong></p>
<p><strong>Tish Shute:</strong> Yes, re platforms for growth for AR. Gaming consoles will probably emerge as a significant platform for AR this year.</p>
<p><strong>Chris Arkenberg:  There will be much more of a blended reality experience in the living room for sure, and with interactive billboards. Digital mirrors are another area.  So I mean if we kind of extend AR to include just blended reality in general, you know, this is moving into our culture through a number of different points. As you mentioned, it will be in the living room, it will be in our department stores where you can preview different outfits in their mirror. We’re already seeing these giant interactive digital billboards in Times Square and other areas.</strong></p>
<p><strong>It’s funny.  I mean for me, the sort of blended reality aside, the augmented reality, to me, is actually a very simple proposition in some respects.  When I look at this map, augmented reality is just an interface layer to this map in my mind, just as it’s an interface layer to the cloud and it’s an interface layer to the instrumented world. It’s a way to get information out of our devices and onto the world.</strong></p>
<p><strong>Tish Shute:</strong> The importance of leveraging existing platforms has become pretty clear, but it is interesting &#8211; Facebook definitely gave Zynga the opportunity, but would Facebook be so big without Zynga’s social gaming boost?</p>
<p><strong>Chris Arkenberg:  I feel that Zynga has definitely helped its growth…But I think Zynga has benefited a lot more from Facebook than Facebook has from Zynga.</strong></p>
<p><strong>Tish Shute:</strong> Zynga certainly proved you could build a profitable business on Facebook’s API!</p>
<p><strong>Chris Arkenberg:  They did.  And they also really validated the Facebook ecosystem and the platform.  They really extended it… Zynga benefited from the massive social affordances that Facebook had already architected and developed. They brought gaming directly into Facebook, and particularly, this emerging brand of lightweight social gaming that, when you sit it on top of a massive global social network like Facebook, suddenly lights up.</strong></p>
<p><strong>Tish Shute: </strong>AR pioneers should quite carefully go through this map. There is so much to think about here. I’m a kind of fanatic about Streams of Activity in AR.  Real-time brokerages and their potential for AR are something I am fascinated by.  That is one reason I love the ARWave project.</p>
<p>Anselm Hook, to me, is one of the great thinkers in this area of real time brokerages &#8211; with his project Angel, and the work of <a href="http://www.ushahidi.com/" target="_blank">Ushahidi,</a> which is now the platform <a href="http://www.ugotrade.com/2010/09/17/urban-augmented-realities-and-social-augmentations-that-matter-interview-with-bruce-sterling-part-2/" target="_blank">for augmented foraging (see here)</a>.  Anselm is now working on AR at PARC which is exciting.</p>
<p><strong>Chris Arkenberg:  Well, there are some challenges working with data streams. Presentation and filtering, I think, is a big challenge with any sort of stream.  Because obviously, you have a lot of potential data to manage, to parse, and to make valuable and comprehensible. So I think this is bound very closely to being able to personalize experiences, or having very discrete, valuable experiences.  Disaster relief, for example, I think is an interesting idea that ties into the Pachube type of work. Where, if you had the headset and you were a relief worker, and you had an immediate lightweight, non-intrusive, heads-up alpha-channel overlay &#8211; waypoint markers showing you all of the disaster locations or points of need &#8211; AR becomes extremely valuable, because it’s a primarily hands-free environment.  This is why the military stuff is so interesting.</strong></p>
<p><strong>Tish Shute:</strong> Ha!  We are running into the eye patch/shades/goggles/sexy specs thing again.  But filtering and making streams of activity relevant will be very interesting for AR.  Again, that is why I love the Wave Federation Protocol work, because of what they have built into their XMPP extensions.  You can have your real-time personal data streams, or community streams, or broadcast publicly &#8211; the permissions are built in.</p>
<p>And Thomas Wrobel’s original vision of these layers and channels is only fully expressed if you have the eyewear.</p>
<p><strong>Chris Arkenberg:  Well, and it becomes redundant if it’s on a mobile. To use a very basic example, Twitter &#8211; obviously there’s an app where you can view those streams of activity on the camera stream. But you can view that real time data on the screen.  Why do you need to see it heads up?</strong></p>
<p><strong>The reason I really pay attention to what the military is investing in is, one, because they have a ton of money, but also because they tend to represent the core bio-survival needs of the species… So, when I look at computing, I see this very obvious trend of computers getting smaller and smaller and closer and closer to us because they’re so valuable to our success.  They give us so much valuable information for engaging our world on a moment-by-moment basis.  So, of course now we have these tiny little handheld devices that give us access to the global knowledge depositories of human history, because it’s so useful to have that stuff right at hand.</strong></p>
<p><strong>The only impediment now is that it takes one of our hands, if not both of them, to access it.  So if you are in the natural world &#8211; and we are always in the natural world, ultimately &#8211; you want your hands free in order to engage with the world on a physical level.</strong></p>
<p><strong>I see computation, or rather, our access to computation, is just going to get thinner and thinner, and we’ll very soon move into eyewear, and inevitably, we’ll move into brain-computer interface in some capacity.</strong></p>
<p><strong>So when you’re the disaster worker, or a deployed soldier, or the extreme mountain biker, or the heli-skier, or just an adventurer, there are a lot of very practical reasons to have access to information on a heads-up plane. I see AR as being so profound and so valuable, but we’re getting a glimpse of it in its infancy, and it’s got a ways to go to be able to really contain what it is we’re reaching for.</strong></p>
<p><strong>Tish Shute:</strong> I agree.</p>
<p><strong>Chris Arkenberg:  And that’s been a big criticism I’ve had with all the existing AR implementations that I’ve seen, is that the UI really needs a revolution.  It’s very heavy-handed.  It is not dynamic, even though it’s supposed to be.  It does not take advantage of transparencies.  It treats the screen like a screen.  It doesn’t treat the screen like a window onto the real world. When you’re looking at the real world, you don’t want a lot of occlusion.  You want very soft-touch indicators of a data shadow behind something that you can then address and then have it call out the information that’s important to you.</strong></p>
<p><strong>Tish Shute:</strong>  Now, that’s a very nice kind of image you’ve conjured for me there.  Do you see that more could be done on the smartphone than is being done within that?  Or are we, like, waiting for the old iShades?</p>
<p><strong>Chris Arkenberg:  I think there’s definitely a lot of room for improvement on the smartphone UI.  Nobody’s really played around with it much. And again, I think that’s in part because there hasn’t been a really established platform with enough money to fund interesting UI work. We see it in some of the concept demos that float around every now and then.</strong></p>
<p><strong>I guess it’s both a blessing and a curse that I’m always five steps ahead of where I’m trying to get to.</strong></p>
<p><strong>Tish Shute:</strong> Yeah, I am familiar with that feeling!</p>
<p><strong>Chris Arkenberg:  So Iâ€™m always trying to reach for the vision even though itâ€™s a bit distant. I think thereâ€™s going to be a lot of development on the handsets.  But again, I think we need a lot of refinement.  We need a lot of real critical analysis of why this is a good thing.</strong></p>
<p><strong>To get back to the original point of Raimoâ€™s comment, it struck me.  And I knew it, but I just had set it aside as gimmickry. But heâ€™s right.  Content is a huge driver for this.  Just stuff thatâ€™s engaging, and fun, and cool, and shows off the technology so they can get enough money to make it through whatever Trough of Disappointment may be waiting.</strong></p>
<p><strong>Tish Shute:</strong> Yeah, don’t underestimate the Planes of Content!  They are a great place to get interest and money to keep AR technology moving on, right?</p>
<p><strong>Chris Arkenberg:  Yeah, yeah.  Because, you know, there’s a lot of freedom there.  And you can piggyback on all the rest of the content that’s out there and jump on memes and marketing objectives, etc&#8230;</strong></p>
<p><strong>And there’s a lot of stuff…I’m blanking on some of the names, but some of these historical recreations of city streets.  There’s a street in London where they overlaid historical photos in a really compelling experience. [Museum of London - http://www.museumoflondon.org.uk/] Again, I’m completely forgetting the attributions, but those are the type of things that can really be pursued on the existing platforms.  There is stuff that’s really compelling and really cool.</strong></p>
<p><strong>I heard of another interesting use case &#8211; and I should say that I can’t find attributions for this anywhere on the web and I may be paraphrasing or misrepresenting the actual work, but I think the concept is worth exploring anyway. The idea was that you could take the locations of border checkpoints and conflict sites in Palestine and Israel and visually overlay them on an AR layer in San Francisco.  It would do some sort of transposition where you could virtually view these things in San Francisco with the same locational mapping superimposed. So you could see where the checkpoints were.  You could see where the wall was.  You could see where suicide bombings were and where there had been conflicts.</strong> <strong>[I cannot find any citations for this!]</strong></p>
<p><strong>Tish Shute: </strong> But with an AR view?  But why would you use an AR view if you are in San Francisco, then?</p>
<p><strong>Chris Arkenberg:  Because it superimposes two realities, translating the Gaza conflict into San Francisco as you are walking around. You can interrogate the world. There’s a discoverability aspect where you’re using the headset &#8211; or the handset, rather &#8211; to reveal things that you could not see otherwise in your city. It was done as an art piece, but a provocative, obviously political art piece.</strong></p>
<p><strong>Tish Shute: </strong>Very interesting.  I’d love to see that. Getting away from this idea that you have to have a one-to-one &#8211; or rather, a very literal &#8211; relationship between the data and the world is kind of nice, isn’t it?</p>
<p><strong>Chris Arkenberg:  And that’s a possibility of virtual reality and augmented reality merging &#8211; that maybe virtual reality is actually going to do best by coming out of the box and writing itself over our reality, so that as you are walking around, you are no longer seeing San Francisco, but you are seeing part of Everquest or World of Warcraft.</strong></p>
<p><strong>Tish Shute: </strong> Well this is where Bruce Sterling gets to that point he made in <a href="http://augmentedrealityevent.com/2010/06/06/are-2010-keynote-by-bruce-sterling-build-a-big-pie/" target="_blank">his keynote for ARE2010</a>, that if we actually have viable AR eyewear, then you get the gothic stepsister of AR &#8211; VR rising from the grave!  He asks whether the very charm of augmented reality is, in fact, that it adds to rather than subtracts from your engagement with the world, and whether getting sucked back into the black hole of VR might not be so great.</p>
<p><strong>Chris Arkenberg:  And then you get all sorts of interesting challenges to social cohesion if you have a lot of different people experiencing very different worlds, effectively.  If there is no real consensual reality and a majority of your local populace is, in fact, experiencing very different and unique versions of the world, what does that do to social cohesion?  How does that reinforce tribalism, for example, when only you and certain others get to opt in to a particular layer view of the world?</strong></p>
<p><strong>Tish Shute:</strong> Yes, Jamais Cascio wrote an interesting piece on that issue of AR and social cohesion a while back.</p>
<p>An eye patch is a more logical vision than goggles in many ways, but I suppose the loss is stereo vision?</p>
<p><strong>Chris Arkenberg:  And actually, there were developments in military helicopter technology many years ago that used a single square pane of glass over the eye, mounted to the helmets of pilots.  And then they drew various bits of heads-up information on it. That ensures that you’re having a real strong engagement with the real world &#8211; which, obviously, when you’re a helicopter pilot is quite important &#8211; but you still have access to the data layer of the invisible world.</strong></p>
<p><strong>Tish Shute:</strong> I just went to <a href="http://www.cloudera.com/company/press-center/hadoop-world-nyc/" target="_blank">Hadoop World</a> and I have to say, I was awestruck by how big that’s gotten.  I mean <a href="http://hadoop.apache.org/" target="_blank">Hadoop</a> has gone from zero to huge in just a few years.  It’s like everyone now has the power of Google’s BigTable at their fingertips.</p>
<p>What’s the play for AR in the land of search?</p>
<p>I could imagine Hadoop being a very powerful tool for AR analytics?</p>
<p>Have you got any thoughts on the land of search and AR? Of course, visual search is proceeding at a fast pace and there is a lot of promise for integrations with AR in the future, but the latency for visual search is still pretty high?</p>
<p><strong>Chris Arkenberg:  In the near term, not a lot.  In the medium term, there’s a larger trend towards virtual agents that you can program or teach to keep watch over things for you, as an effort to scale down the data overload.  So search is something that’s going to become more personalized and more active.  There’s a movement to make it so people can essentially deputize these agents to be always searching for them; to be out there looking for the things that they have told these agents are important to them.</strong></p>
<p><strong>So active search for AR I think presents some challenges, obviously, because you typically need to do text input or voice input.  Voice input, I think, is much more achievable than text input for AR.  But I can certainly imagine an AR layer that is being serviced by these agents that we have roaming around the web for us, reconciling their visual view of the world with our personalizations. An AR app is contextually aware, so it knows that if you’re downtown, it’s not going to be giving you a ton of information about Software as a Service infrastructure, or what have you.  Instead, it’s going to be handing you little tidbits about a particular clothing brand you’ve opted in to follow and information about music venues &amp; schedules, for example.  Or perhaps you’ll be on the lookout for other users that have opted in to publicly tag themselves as a member of this or that affinity.</strong></p>
<p><strong>I keep coming back to this idea of AR as really just a simple visualization layer that all of these other technologies can potentially feed into.  So in that sense, search becomes a passive thing that AR is simply presenting to you in a heads-up, potentially hands-free environment.</strong></p>
<p><strong>Tish Shute:</strong> Yes, the big challenge is the stepping stones to that point! Small steps that keep interest going into developing the underlying technology (and not just in research labs!) that will bring us that interface.  We have seen some movement already with Qualcomm.</p>
<p><strong>Chris Arkenberg:</strong> And there are bandwidth issues as well, as we can see with Google Goggles, which is a great idea for visual search.  But you have to take a picture, send it to the cloud, and wait for your results.  It’s not a real-time dynamic interrogation of the world.</p>
<p><strong>Tish Shute:</strong> Yes, we are really only at the very beginning of AR being ready for prime time&#8230; it would be interesting to ask AR developers how many of them use AR on a daily basis.</p>
<p><strong>Chris Arkenberg:  I think a lot of us are just informed by the sci-fi myths and fascinated with the potential now that it’s starting to become real. But I think we all kinda get that it’s still extraordinarily young.  I mean the web is extraordinarily young. And AR is itself far younger in a lot of ways in its implementations.</strong></p>
<p><strong>Everybody has a lot of excitement about all of the great potentials that are being unleashed by this great wave of the Internet and the web and ubiquitous mobile computing.  So that’s why, you know, you look at that map and we talk about AR and you can’t talk about any of this stuff without talking about all of it, in a lot of ways &#8211; particularly with something like AR, where it’s so ultimately agnostic and could be completely pervasive across all of these layers.</strong></p>
<p><strong>So my fascination is with the future, and I measure our progress towards it by the young, nascent offerings from the platform players and the developers. And yeah, a lot of it is akin to getting that first triangle on the screen in 3D.  You know, when the renderer finally works and you get a triangle on the screen, and you go, &#8220;Oh my God, it renders.&#8221;  And then you can start to really build polygons and build objects, and start doing boolean operations, and get light and rendering in there, and textures, and on, and on, and on.<br />
So I’m fascinated by the Layars and the Metaios&#8230;<br />
[laughter]</strong></p>
<p><strong>Tish Shute:</strong> Yes, and hats off to all the players in the emerging industry &#8211; Layar, Metaio, Ogmento, Total Immersion, and all the others who are finding clever ways to bring fun aspects of AR into the mainstream, and fuel interest to take the technology to the next level.</p>
<p><strong>Chris Arkenberg:  Absolutely.  And the hype cycle is very valuable.  It has really helped launch the AR industry.  It’s brought a lot of eyes, and it’s brought a lot of money into the industry.  And it’s forcing people like us to have these conversations to understand how to refine its growth and really focus on the potential in all these different venues, whether it’s trying to save lives, or better understand your city, or have really compelling entertainment experiences.</strong></p>
<p><strong>Everybody’s excited, and everybody’s sharing, and everybody’s trying to move it forward in a way that’s the most productive.</strong></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2010/10/27/platforms-for-growth-and-points-of-control-for-augmented-reality-talking-with-chris-arkenberg/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>The Next Wave of AR: Mobile Social Interaction Right Here, Right Now!</title>
		<link>https://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/</link>
		<comments>https://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/#comments</comments>
		<pubDate>Fri, 20 Nov 2009 04:53:07 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[AR browsers]]></category>
		<category><![CDATA[AR Dev camp]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[calo]]></category>
		<category><![CDATA[mobile social]]></category>
		<category><![CDATA[mobile social interaction utility]]></category>
		<category><![CDATA[open distributed augmented reality]]></category>
		<category><![CDATA[pygowave]]></category>
		<category><![CDATA[real time internet]]></category>
		<category><![CDATA[siri]]></category>
		<category><![CDATA[smart things]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[social augmented reality]]></category>
		<category><![CDATA[The Copenhagen Wheel]]></category>
		<category><![CDATA[the internet of things]]></category>
		<category><![CDATA[the outernet]]></category>
		<category><![CDATA[the sentient city]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[Web Squared]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4869</guid>
		<description><![CDATA[The Next Wave of AR: Mobile Social Interaction, Right Here, Right Now! View more presentations from Tish Shute. Click on the image below or here to watch this presentation and others from Momo13]]></description>
				<content:encoded><![CDATA[<div id="__ss_2542526" style="width: 425px; text-align: left;"><a style="font:14px Helvetica,Arial,Sans-serif;display:block;margin:12px 0 3px 0;text-decoration:underline;" title="The Next Wave of AR: Mobile Social Interaction, Right Here, Right Now!" href="http://www.slideshare.net/TishShute/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now-2542526">The Next Wave of AR: Mobile Social Interaction, Right Here, Right Now!</a><object style="margin:0px" classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="425" height="355" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowScriptAccess" value="always" /><param name="src" value="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=thenextwaveofar2-091120000046-phpapp01&amp;stripped_title=the-next-wave-of-ar-mobile-social-interaction-right-here-right-now-2542526" /><param name="allowfullscreen" value="true" /><embed style="margin:0px" type="application/x-shockwave-flash" width="425" height="355" src="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=thenextwaveofar2-091120000046-phpapp01&amp;stripped_title=the-next-wave-of-ar-mobile-social-interaction-right-here-right-now-2542526" allowscriptaccess="always" allowfullscreen="true"></embed></object>
<div style="font-size: 11px; font-family: tahoma,arial; height: 26px; padding-top: 2px;">View more <a style="text-decoration:underline;" href="http://www.slideshare.net/">presentations</a> from <a style="text-decoration:underline;" href="http://www.slideshare.net/TishShute">Tish Shute</a>.</div>
</div>
<p>Click on the image below or <a href="http://www.mobilemonday.nl/talks/tish-shute-the-next-wave-of-ar/" target="_blank">here to watch</a> this presentation and others from <a href="http://www.mobilemonday.nl/">Momo13</a></p>
<p><a href="http://www.mobilemonday.nl/talks/tish-shute-the-next-wave-of-ar/" target="_blank"><img class="alignnone size-medium wp-image-4876" title="Screen shot 2009-11-20 at 1.32.24 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/Screen-shot-2009-11-20-at-1.32.24-PM-300x167.png" alt="Screen shot 2009-11-20 at 1.32.24 PM" width="300" height="167" /></a></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Toward the Sentient City: The Future of the Outernet and How to Imagine it?</title>
		<link>https://www.ugotrade.com/2009/11/09/toward-the-sentient-city-the-future-of-the-outernet-and-how-to-imagine-it/</link>
		<comments>https://www.ugotrade.com/2009/11/09/toward-the-sentient-city-the-future-of-the-outernet-and-how-to-imagine-it/#comments</comments>
		<pubDate>Mon, 09 Nov 2009 21:09:00 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[3rd cloud]]></category>
		<category><![CDATA[Adam Greenfield]]></category>
		<category><![CDATA[aesthetics of distributed participation]]></category>
		<category><![CDATA[Amphibious Architecture]]></category>
		<category><![CDATA[architectures of participation]]></category>
		<category><![CDATA[asynchronous city]]></category>
		<category><![CDATA[Benjamin H. Bratton]]></category>
		<category><![CDATA[Breakout!]]></category>
		<category><![CDATA[Conflux 2009]]></category>
		<category><![CDATA[Dan Hill]]></category>
		<category><![CDATA[Dharma Dailey]]></category>
		<category><![CDATA[distributed open AR]]></category>
		<category><![CDATA[Enrique Ramirez]]></category>
		<category><![CDATA[everyware]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[human electric hybrid]]></category>
		<category><![CDATA[hybrid social networks]]></category>
		<category><![CDATA[Julian Bleeker]]></category>
		<category><![CDATA[Laura Forlano]]></category>
		<category><![CDATA[location aware applications]]></category>
		<category><![CDATA[Mark Shepard]]></category>
		<category><![CDATA[Martijn de Waal]]></category>
		<category><![CDATA[Matthew Fuller]]></category>
		<category><![CDATA[Mimi Zeiger]]></category>
		<category><![CDATA[Natalie Jeremijenko]]></category>
		<category><![CDATA[Natural Fuse]]></category>
		<category><![CDATA[new architectures of participation]]></category>
		<category><![CDATA[Nicolas Nova]]></category>
		<category><![CDATA[Omar Khan]]></category>
		<category><![CDATA[Open AR]]></category>
		<category><![CDATA[outernet]]></category>
		<category><![CDATA[Philip Beesley]]></category>
		<category><![CDATA[real time communication]]></category>
		<category><![CDATA[real time web]]></category>
		<category><![CDATA[real-time database enable city]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[Sentient City Survival Kit]]></category>
		<category><![CDATA[Situated Technologies]]></category>
		<category><![CDATA[smart things]]></category>
		<category><![CDATA[social mobility]]></category>
		<category><![CDATA[social mobility and the 3rd cloud]]></category>
		<category><![CDATA[synchronous internet of things]]></category>
		<category><![CDATA[The Copenhagen Wheel]]></category>
		<category><![CDATA[The Living Architecture Lab]]></category>
		<category><![CDATA[the social negotiation of Technology]]></category>
		<category><![CDATA[Too Smart City]]></category>
		<category><![CDATA[Toward the Sentient City]]></category>
		<category><![CDATA[Trash Track]]></category>
		<category><![CDATA[urban sustainability]]></category>
		<category><![CDATA[urbanware]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[Web Squared]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4758</guid>
		<description><![CDATA[Amphibious Architecture &#8211; &#8220;submerges ubiquitous computing into the waterâ€”that 90% of the Earthâ€™s inhabitable volume that envelops New York City but remains under-explored and under-engaged.&#8221; Toward the Sentient City, brought &#8220;architects and urban designers into a conversation that until now has been limited largely to technologists,â€ and created an extraordinary opportunity to investigate distributed architectures [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/Screen-shot-2009-11-06-at-12.03.40-AM.png"><img class="alignnone size-medium wp-image-4783" title="Screen shot 2009-11-06 at 12.03.40 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/Screen-shot-2009-11-06-at-12.03.40-AM-300x200.png" alt="Screen shot 2009-11-06 at 12.03.40 AM" width="300" height="200" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/dhj5mk2g_404g3prc6dc_b.jpg"><img class="alignnone size-medium wp-image-4759" title="dhj5mk2g_404g3prc6dc_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/dhj5mk2g_404g3prc6dc_b-300x199.jpg" alt="dhj5mk2g_404g3prc6dc_b" width="300" height="199" /></a></p>
<p><em><a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">Amphibious Architecture</a> &#8211; &#8220;submerges ubiquitous computing into the water—that 90% of the Earth’s inhabitable volume that envelops New York City but remains under-explored and under-engaged.&#8221;</em></p>
<p><a href="http://www.sentientcity.net/exhibit/">Toward the Sentient City</a> brought &#8220;architects and urban designers into a conversation that until now has been limited largely to technologists,&#8221; and created an extraordinary opportunity to investigate distributed architectures of participation of what we might call the &#8220;outernet.&#8221;  This is a timely conversation as &#8220;web squared,&#8221; &#8220;smart things,&#8221; the &#8220;internet of things,&#8221; or the &#8220;outernet,&#8221; and their popular &#8220;ambassador&#8221; augmented reality are rapidly becoming everyone&#8217;s &#8220;business.&#8221; From &#8220;evil&#8221; marketers to global corporations, environmentalists, artists and community activists &#8211; everyone, it seems, is interested in the possibilities of this new frontier.</p>
<p>It is a challenging task to respond to <a href="http://www.sentientcity.net/exhibit/">Toward the Sentient City</a>, an exhibition whose backdrop includes a series of conversations on Situated Technologies &#8211; published by the Architectural League &#8211; from a circle of people who have been thinking, writing, and speaking on networked urbanism for many years now, including: Adam Greenfield, Mark Shepard, Matthew Fuller, Usman Haque, Benjamin H. Bratton, Natalie Jeremijenko, Laura Forlano, Dharma Dailey, Philip Beesley, Omar Khan, Julian Bleeker, and Nicolas Nova.  And the exhibition itself has a very thoughtful group of respondents; see posts from: <a href="http://www.sentientcity.net/exhibit/?p=595" target="_blank">Dan Hill</a>, <a href="http://www.sentientcity.net/exhibit/?p=659" target="_blank">Martijn de Waal,</a> <a href="http://www.sentientcity.net/exhibit/?p=622" target="_blank">Enrique Ramirez</a>, and <a href="http://www.sentientcity.net/exhibit/?p=603" target="_blank">Mimi Zeiger</a>.</p>
<p>But one of Toward the Sentient City&#8217;s key accomplishments was to go beyond the rhetorical and to put practical examples out into the world to organize a discussion of some of the ideas and possibilities of ubiquitous computing that have barely begun to emerge from academic research and entrepreneurial blue-skying.  As curator <a href="http://www.andinc.org/v3/" target="_blank">Mark Shepard</a> explained:</p>
<p><strong>&#8220;The aim is to provide concrete examples in the present around which to organize a discussion about just what kind of future we might want. Whether they&#8217;re prototypes or not, these commissions are concrete examples. They&#8217;re not abstract ideas. And we can go stand next to each other and look at and interact with something which is out there in the world behaving in the way it behaves, performing as it does, and we can then begin to have a discussion about it that is less dependent upon powers of rhetoric. So it&#8217;s not about me persuading you about an idea but it&#8217;s about us evaluating something that&#8217;s living and existing in this world. And that was really the intention of the show.&#8221;</strong></p>
<p>The commissioned works &#8211; <a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">Amphibious Architecture</a>, <a href="http://www.sentientcity.net/exhibit/?p=53" target="_blank">Breakout!</a>, <a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a>, <a href="http://www.sentientcity.net/exhibit/?p=59" target="_blank">Too Smart City</a>, and <a href="http://www.sentientcity.net/exhibit/?p=31" target="_blank">TrashTrack</a> &#8211; that were the hub of Toward the Sentient City&#8217;s events, themes and texts, provided a unique glimpse at some of the possible dystopian and utopian futures of a &#8220;smart&#8221; city.  But, most importantly, all the works questioned what might be new architectures of participation for a sentient city.</p>
<h3>New Architectures of Participation: Hybrid Social Networks with Human and Non-human Participants</h3>
<p>Of the five works, Amphibious Architecture and Natural Fuse were particularly fascinating to me because they explored the possibilities of sensor networks to create new forms of distributed participation in networked ecosystems that connected the experiences and trajectories of human and non-human actors &#8211; fish, plants, and people.</p>
<p>Both Amphibious Architecture and &#8220;Natural Fuse&#8221; &#8211; the latter from Usman Haque and <a href="http://www.haque.co.uk/" target="_blank">Haque Design + Research</a> &#8211; gave exhibition attendees the chance to experience at a personal level our relationships with our non-human neighbors.</p>
<p><a href="http://www.sentientcity.net/exhibit/?p=5" target="_blank">Amphibious Architecture</a> &#8211; from The Living Architecture Lab at Columbia University Graduate School of Architecture, Planning and Preservation (Directors David Benjamin and Soo-in Yang) and Natalie Jeremijenko, Environmental Health Clinic at New York University &#8211; used a sensor array to &#8220;pierce the reflective surface of the water&#8221; that separates us from the underwater ecosystem below.  The sensor arrays just below the surface of the East River and a floating light array (see picture on left opening this post) create a new interface between people and fish, whose movements and water quality are transmitted in light.</p>
<p>One could also SMS the fish and the single beaver that lives in the rivers surrounding NYC to find out the conditions they were experiencing. But turning the city&#8217;s &#8220;back stories,&#8221; like the movements of &#8220;Yo beaver&#8221; and the oxygen levels and water quality of the rivers, into &#8220;fore stories&#8221; is only one of the many ways Natalie Jeremijenko explores how we can engender the empathy necessary for humans and non-humans to live in harmony and mutual benefit.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/nataliefishandmicrochips.jpg"><img class="alignnone size-medium wp-image-4802" title="nataliefishandmicrochips" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/nataliefishandmicrochips-300x199.jpg" alt="nataliefishandmicrochips" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/fishfoodpost.jpg"><img class="alignnone size-medium wp-image-4803" title="fishfoodpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/fishfoodpost-300x199.jpg" alt="fishfoodpost" width="300" height="199" /></a></p>
<p>Toward the Sentient City also held workshops and presentations in conjunction with <a href="http://confluxfestival.org/2009/" target="_blank">Conflux 2009</a>. After her Conflux presentation, Natalie Jeremijenko of Amphibious Architecture (which is also a collaborative project between <a href="http://www.environmentalhealthclinic.net/">xClinic</a>, <a href="http://www.thelivingnewyork.com/">The Living</a>, &#8220;and other intelligent creatures on the East River&#8221;) invited participants to enjoy a lunch of cross-species foods at the East River site.</p>
<p>The cross-species lunch takes an existing interaction pattern through which people and fish are already communicating &#8211; people going to the riverfront and feeding the fish Wonder Bread (which is bad for both humans and fish) &#8211; and transforms this desire to feed the fish into something that can actually remove mercury from the food chain, and thus from the fish and from our own bodies. A previously inharmonious connection between people and fish is redirected into a productive interaction benefiting both species. As it turns out, food that is good for fish (see pictures above), and removes mercury from their bodies, can also be nutritious and tasty for humans.</p>
<p><a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a>, from team members Usman Haque, creative director; Nitipak &#8216;Dot&#8217; Samsen, designer; Ai Hasegawa, designer; Cesar Harada, designer; and Barbara Jasinowicz, producer, used sensors to link humans and plants in a network where we are accountable for how our behavior affects others in our ecosystem.</p>
<p>If you brought an ordinary plant to the exhibition, you could take home an electronically assisted plant and become part of a social network of humans and plants. This network of humans and electronically assisted plants is also a carbon sink: if more energy is consumed than the total number of plants in the social network can offset, plants begin to die, giving immediate feedback and consequences for being greedy about energy consumption. For more about joining the Natural Fuse network see <a href="http://www.naturalfuse.org" target="_blank">here</a>.</p>
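<p>The feedback loop described above &#8211; a shared pool of plant "offset" capacity, with overdraws costing plants their lives &#8211; can be sketched as a toy model. This is purely illustrative: the class names, wattage numbers, and victim-selection rule are my assumptions, not the project&#8217;s actual implementation.</p>

```python
# Toy sketch of the Natural Fuse feedback loop. All names, numbers,
# and rules are illustrative assumptions, not the real system.

class Plant:
    def __init__(self, owner, offset_watts=5.0, lives=3):
        self.owner = owner
        self.offset_watts = offset_watts  # energy this plant can offset
        self.lives = lives                # a plant "dies" after three strikes

class NaturalFuseNetwork:
    def __init__(self):
        self.plants = []

    def join(self, plant):
        self.plants.append(plant)

    def capacity(self):
        # total consumption the living plants can offset
        return sum(p.offset_watts for p in self.plants if p.lives > 0)

    def consume(self, owner, watts, selfish=False):
        """Draw power; in selfish mode an overdraw costs someone else's plant a life."""
        if watts <= self.capacity():
            return "ok"
        if selfish:
            victim = next((p for p in self.plants
                           if p.owner != owner and p.lives > 0), None)
            if victim is not None:
                victim.lives -= 1
                return "overdraw: " + victim.owner + "'s plant loses a life"
        return "denied"
```

<p>Even this sketch shows the risk-pooling property discussed below: as more members (and plants) join, total capacity rises, so any one member&#8217;s selfishness is less likely to kill a particular plant.</p>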
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/naturalfusepres.jpg"><img title="naturalfusepres" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/naturalfusepres-300x199.jpg" alt="naturalfusepres" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/naturalfusetakehome.jpg"><img title="naturalfusetakehome" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/naturalfusetakehome-300x199.jpg" alt="naturalfusetakehome" width="300" height="199" /></a></p>
<p>We are in the pre-dawn of sensor networks like those Natural Fuse and Amphibious Architecture created &#8211; social networks that link human and non-human participants in entirely new ways are still largely uncharted territory. (Note: the upcoming <a href="http://www.situatedtechnologies.net/" target="_blank">Situated Technologies</a> Pamphlet 6 &#8211; <strong>&#8220;Micro Public Places,&#8221;</strong> by Marc Bohlen and Hans Frei &#8211; indicates it will continue the journey with an investigation of &#8220;transparent and distributed participation.&#8221;)</p>
<h3>Where Does the Social Negotiation of Technology Happen?</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/markshepardpost.jpg"><img class="alignnone size-medium wp-image-4825" title="markshepardpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/markshepardpost-199x300.jpg" alt="markshepardpost" width="199" height="300" /></a></p>
<p>Frequent questions that came up at the presentations given by the teams that produced the commissioned works were: Does this idea scale? Does it close the loop, in that you get answers to the questions asked? How does the conversation gain agency? And where does the social negotiation of technology happen? (These last two questions were asked by <a href="http://www.orangecone.com/" target="_blank">Mike Kuniavsky</a> at Mark Shepard&#8217;s presentation at Conflux: &#8220;<a id="ktb-" title="Sentient City Survival Kit" href="http://survival.sentientcity.net/" target="_blank">Sentient City Survival Kit</a>&#8221; &#8211; see picture above.) I think it is fair to say that these questions for the most part remain unanswered. But Toward the Sentient City was alive with ideas and practical examples about ways we can explore these questions more deeply.</p>
<p>Usman Haque, in response to the question, &#8220;Does this experiment scale?,&#8221; replied:</p>
<p><strong>&#8220;It would, but at an individual level &#8211; it has to remain at the individual level because it is about the individual in relationship to the wider social context. As opposed to building a forest to offset a city, it is about each individual making choices of their own about what they do, and having some kind of knowledge about the effect they are having on other people. Because most of the time we are quite complacent &#8211; we are able to do whatever we want because we are not necessarily aware how our intrusions affect both human and non-human neighbors&#8230;&#8221;</strong></p>
<p>So how does this close the loop? Usman explains that one of the key aspects for him is that if you do take home a plant you become part of a system in which you are no longer anonymous, and if a plant is threatened (plants get three lives) you have the opportunity to email the person in the system who has threatened your plant. Usman noted that one of the interesting things that happened in the context of the exhibition, where there was a single unit, was that 90% of the time people switched it on to selfish mode &#8211; presumably because they were anonymous. Another aspect of Natural Fuse that raises interesting questions is that as more people decide to join the network, the risk of a plant being harmed by any particular individual&#8217;s selfishness lessens. As <a href="http://www.sentientcity.net/exhibit/?p=659" target="_blank">Martijn de Waal</a> observes, in a response that unpacks some of the deeper philosophical, epistemological, and ethical questions Natural Fuse addresses:</p>
<p><strong>&#8220;The concept of a commons thus assumes cooperation and mutual accommodation. Could Sentient Technology play a role in the allocation of limited resources between citizens? Could it lead to the emergence of some sort of peer-to-peer governance model, that could prevent overusage of scarce resources?&#8221;</strong></p>
<h3><strong>New Aesthetics of Distributed Participation</strong></h3>
<p>The works of &#8220;Toward the Sentient City&#8221; point to possibilities for a new aesthetics of distributed participation in which users and system are no longer separated but instead &#8220;develop joint forms of observing and knowing that neither [...] is capable on its own&#8221; (quote from the upcoming <a href="http://www.situatedtechnologies.net/" target="_blank">Situated Technologies Pamphlets</a> 6: Micro Public Places, Marc Bohlen and Hans Frei). Natural Fuse and Amphibious Architecture examine the new transactional realities of the Sentient City.</p>
<p>But there are many questions left unanswered. We know a lot about the power of generativity from the internet (see Zittrain) &#8211; the ur-<strong>&#8220;architecture of participation.&#8221;</strong> As Zittrain points out, the &#8220;generativity&#8221; of the internet is &#8220;the engine that has catapulted the internet from backwater to ubiquity.&#8221; Tim O&#8217;Reilly coined the phrase &#8220;architecture of participation&#8221; to &#8220;describe the nature of systems that are designed for user contribution,&#8221; such that &#8220;participants extend the reach/increase the value of the system.&#8221; But as Tim O&#8217;Reilly put it in his recent talk, &#8220;<a href="http://www.slideshare.net/timoreilly/state-of-the-internet-operating-system" target="_blank">State of the Internet Operating System</a>&#8221;:</p>
<p><strong>&#8220;Web 2.0 is about finding meaning in user-generated data, and turning that meaning into real-time user facing services. &#8216;Web Squared&#8217; takes that same concept to real-time sensor data.&#8221;</strong></p>
<p>We know little yet about what constitutes generativity for the &#8220;outernet,&#8221; particularly for the kind of hybrid social networks that Natural Fuse and Amphibious Architecture present. Social networks that connect people and place, humans and non-humans, challenge dichotomies of man and nature, and machine and user, in new and unexpected ways.</p>
<p>At the moment, the internet is going through a metamorphosis with the emergence of real-time technologies like XMPP, PubSubHubbub, and Google Wave, and the coming of age of mobile computing. While these shifts were not investigated specifically in any of the commissioned works, I think all the works raised the question: what is a common platform for social interaction in the &#8220;outernet,&#8221; or sentient city? I was not entirely satisfied, from this point of view, with a web interface for Natural Fuse or SMS as a mobile interface for Amphibious Architecture.</p>
<p><a href="http://www.media.mit.edu/people/dpreed" target="_blank">David P. Reed</a> points to the relationship between social mobility, what he describes as the 3rd cloud, and the need for a common platform (see <a href="http://www.slideshare.net/venicesessions/david-reed-social-mobility-and-the-3rd-cloud" target="_blank">David Reed &#8211; Social Mobility and the 3rd Cloud</a>; hat tip to <a href="http://twitter.com/srenan" target="_blank">@srenan</a> for pointing me to David&#8217;s presentation).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/Screen-shot-2009-11-06-at-11.11.25-PM.png"><img class="alignnone size-medium wp-image-4826" title="Screen shot 2009-11-06 at 11.11.25 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/Screen-shot-2009-11-06-at-11.11.25-PM-300x222.png" alt="Screen shot 2009-11-06 at 11.11.25 PM" width="300" height="222" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/Screen-shot-2009-11-06-at-11.16.59-PM1.png"><img class="alignnone size-medium wp-image-4828" title="Screen shot 2009-11-06 at 11.16.59 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/Screen-shot-2009-11-06-at-11.16.59-PM1-300x222.png" alt="Screen shot 2009-11-06 at 11.16.59 PM" width="300" height="222" /></a></p>
<p><em>Slides above are from David P. Reed&#8217;s presentation, <a href="http://www.slideshare.net/venicesessions/david-reed-social-mobility-and-the-3rd-cloud" target="_blank">Social Mobility and the 3rd Cloud</a></em></p>
<p>What is an architecture of participation for mobile, social interaction? This is something I am very interested in.</p>
<p>Recently I began a project with a small group of augmented reality developers and enthusiasts to use the Google Wave Federation Protocol as a transport system for open, distributed social augmented experiences (lots more to come on this soon &#8211; you can see the back story in my posts <a href="http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/" target="_blank">here</a> and <a href="http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/" target="_blank">here</a>). Wave has introduced an open, federated architecture of participation that <strong style="font-weight: normal;">combines asynchronous &amp; synchronous data, bringing together the advantages of real-time communication with the persistent hosting of collaborative data (like wikis).</strong></p>
<p>Augmented Reality puts who you are, where you are, and what you are doing center stage, and is an interface for &#8220;communications embedded in context&#8221; and &#8220;enabled by identity&#8221; &#8211; two key qualities of what David P. Reed calls the 3rd cloud. An open, distributed framework for augmented reality could create an interconnected sense of AR, one that fuses augmentation, data overlays, and varied media with location/time/place and, crucially, social networking. Such an interface would open up many possibilities for new transactional realities that integrate real-time cloud-based data with a human perspective and social networking. I am using the term transactional realities to suggest an extension into social augmented experiences of what Di-Ann Eisnor of <a id="s050" title="Platial" href="http://www.platial.com/">Platial</a> describes as &#8220;transactional cartography&#8221; &#8211; &#8220;the movement from map providing entertainment/information to map as enabling action&#8221; (see <a id="h6.r" title="Human as Sensors" href="http://www.youtube.com/watch?v=Di285pgcZRE&amp;feature=PlayList&amp;p=F664D8C553A57C93&amp;index=3">Human as Sensors</a>).</p>
<p>We have only just gotten a glimpse of how real-time technologies and &#8220;communications embedded in context&#8221; will transform social interaction and our cities. This post on <a id="r3ow" title="Writing as Real-Time Performance" href="http://snarkmarket.com/2009/3605">Writing as Real-Time Performance</a>, which looks at the Google Wave playback feature, is a brilliant example of how real-time technology turns familiar practices like writing inside out, and catapults us into new time trajectories. And if you haven&#8217;t already seen Matt Jones of BERG&#8217;s brilliant look at <a href="http://berglondon.com/blog/2009/10/26/all-the-time-in-the-world-talk-at-design-by-fire-2009-utrecht/" target="_blank">&#8220;All the time in the world&#8221;</a> &#8211; from the &#8220;soft time&#8221; and &#8220;squishy time&#8221; of cell phone culture to their antecedents in real-time computing &#8211; go now! Also see Dan Hill&#8217;s work on <a href="http://cityofsound.com" target="_blank">&#8220;time based notation,&#8221;</a> and Tom Carden&#8217;s work for mysociety.org.</p>
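<p>The combination Wave points at &#8211; real-time edits that are also a persistent, replayable history &#8211; can be reduced to a toy model: store every edit as an operation, and &#8220;playback&#8221; becomes replaying a prefix of the log. This is my own illustrative sketch of the idea, not the actual Wave Federation Protocol or its data model.</p>

```python
# Toy model of a Wave-style persistent real-time document: edits are
# appended as operations, so the document supports both live updates
# and playback to any point in its history. Illustrative only.

class WaveletDoc:
    def __init__(self):
        self.ops = []  # persistent history of (author, text) append operations

    def apply(self, author, text):
        self.ops.append((author, text))

    def playback(self, upto=None):
        """Reconstruct the document as of operation number `upto` (None = latest)."""
        history = self.ops if upto is None else self.ops[:upto]
        return "".join(text for _author, text in history)

doc = WaveletDoc()
doc.apply("tish", "Hello")
doc.apply("dirk", ", world")
```

<p>The design point is that synchrony (live edits) and asynchrony (the durable log) share one representation, which is what makes writing replayable as a performance.</p>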
<h3>Transactional Realities Between the &#8220;Asynchronous City&#8221; and the &#8220;Synchronous Internet of Things&#8221;</h3>
<p>Out of Toward the Sentient City&#8217;s five commissioned works, only <a href="http://www.sentientcity.net/exhibit/?p=31" target="_blank">Trash Track</a> focused on the &#8220;synchronized Internet of Things.&#8221; Trash Track asks what we can learn from the aggregated data streams of &#8220;smart&#8221; trash about the infamous path of trash from cities of privilege to rivers of want, rather than exploring the particular transactional realities of a social network that links people with their trash.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/TrashTrack2.jpg"><img title="TrashTrack2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/TrashTrack2-300x199.jpg" alt="TrashTrack2" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/trashtrack4.jpg"><img title="trashtrack4" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/trashtrack4-300x199.jpg" alt="trashtrack4" width="300" height="199" /></a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/trashtrack3.jpg"><img class="alignnone size-medium wp-image-4768" title="trashtrack3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/trashtrack3-300x199.jpg" alt="trashtrack3" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/trashtrackpost.jpg"><img class="alignnone size-medium wp-image-4782" title="trashtrackpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/trashtrackpost-300x199.jpg" alt="trashtrackpost" width="300" height="199" /></a></p>
<p>The goals of Trash Track were, Assaf Biderman explained during his presentation:</p>
<p><strong>&#8220;to learn about the removal chain, to see if knowing more could promote behavioral change, and investigate if smart tagging could one day lead to 100% recycling.&#8221;</strong></p>
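<p>Learning about the removal chain from tagged trash is, at bottom, an aggregation problem: group each tag&#8217;s location pings and order them in time to recover the item&#8217;s path. The sketch below shows that shape; the tag ids, fields, and coordinates are invented for illustration and are not Trash Track&#8217;s actual data format.</p>

```python
# Hypothetical sketch: turn raw "smart trash" location pings into
# per-item, time-ordered removal-chain paths. Data format is invented.

from collections import defaultdict

def build_paths(pings):
    """Group (tag_id, timestamp, lat, lon) pings into time-ordered paths."""
    by_tag = defaultdict(list)
    for tag_id, ts, lat, lon in pings:
        by_tag[tag_id].append((ts, lat, lon))
    # sort each tag's pings by timestamp, then drop the timestamp
    return {tag: [(lat, lon) for _ts, lat, lon in sorted(path)]
            for tag, path in by_tag.items()}

pings = [
    ("cup-01", 2, 40.71, -74.00),  # later ping, listed out of order
    ("cup-01", 1, 40.72, -73.99),
    ("tv-07", 1, 40.73, -73.98),
]
paths = build_paths(pings)
```

<p>From such per-item paths one could then aggregate across thousands of tags to characterize the removal chain as a whole, which is the scale at which the project&#8217;s questions about behavioral change operate.</p>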
<p>The team from SENSEable City Laboratory, MIT included: Carlo Ratti, Director; Assaf Biderman, Associate Director; Rex Britter, Advisor; Stephen Miles, Advisor; Kristian Kloeckl, Project Leader; Musstanser Tinauli, E Roon Kang, Alan Anderson, Avid Boustani, Natalia Duque Ciceri, Lorenzo Davolli, Samantha Earl, Lewis Girod, Sarabjit Kaur, Armin Linke, Eugenio Morello, Sarah Neilson, Giovanni de Niederhausern, Jill Passano, Renato Rinaldi, Francisca Rojas, Louis Sirota, Malima Wolf.</p>
<p>However, Assaf, in his presentation, also presented another project from SENSEable City Laboratory, in partnership with the City of Copenhagen: <a href="http://senseable.mit.edu/copenhagenwheel/" target="_blank">The Copenhagen Wheel</a>. This project seems to work brilliantly at the intersection of the &#8220;asynchronous city&#8221; (Bleecker and Nova) and the &#8220;synchronized internet of things.&#8221; The &#8220;smart&#8221; wheel &#8211; a low-cost, open-source, human-electric hybrid &#8211; is:</p>
<p><strong>&#8220;an electric bicycle wheel that can be easily retrofitted into any regular bicycle and location and environmental sensors which are powered by the bike wheel and in turn provide data for a variety of applications.&#8221;</strong></p>
<p>This project, which aims to promote urban sustainability through smart biking, opens up many possibilities for a bottom-up architecture of participation for the sentient city (<a href="http://senseable.mit.edu/copenhagenwheel/">see video here</a>).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/Screen-shot-2009-11-08-at-7.18.45-PM.png"><img class="alignnone size-medium wp-image-4838" title="Screen shot 2009-11-08 at 7.18.45 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/Screen-shot-2009-11-08-at-7.18.45-PM-300x218.png" alt="Screen shot 2009-11-08 at 7.18.45 PM" width="300" height="218" /></a></p>
<p><a href="http://www.andinc.org/v3/" target="_blank">Mark Shepard</a> describes something he calls &#8220;propagative urbanism&#8221;:</p>
<p><strong>&#8220;a way of thinking about shaping the experience of urban space in terms of a bottom-up, participatory approach to the evolution of cities.&#8221; </strong></p>
<p>And, in the most recent pamphlet in the <a href="http://www.situatedtechnologies.net/" target="_blank">Situated Technologies pamphlets series, #5, &#8220;Asynchronicity: Design Fictions for Asynchronous Urban Computing,&#8221;</a> Julian Bleecker and Nicolas Nova invert an emphasis in the so-called &#8220;real-time database enabled city&#8221; with its synchronized Internet of Things&#8230; and speculate on the existence of an &#8220;asynchronous city.&#8221; They &#8220;forecast situated technologies based on weak signals that show the importance of time on human perspectives.&#8221; They ask:</p>
<p><strong>&#8220;why, besides &#8216;operational efficiency,&#8217; would we want a ubiquitously computed environment? What are the measures of &#8216;better&#8217; that we want to count as meaningful?&#8221;</strong></p>
<p><span>They explain:</span></p>
<p><strong>&#8220;&#8230;we are trying to think through what &#8216;urbanwares&#8217; might be &#8211; urban operating systems &#8211; if they were less about synchronization, top-down construction and connected channels of information and databases and so forth, and more about asynchronized, decentralized things. Software, data, time out of alignment, incongruities, tiles and imbrications of the geographic, spatial parameters into a delicious kind of lively peasant&#8217;s stew.&#8221;</strong></p>
<p>One takeaway, perhaps, from Toward the Sentient City is that it&#8217;s at the intersection of the &#8220;asynchronous city&#8221; and the &#8220;real-time database enabled city&#8221; where many new transactional realities of the sentient city will arise.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2009/11/09/toward-the-sentient-city-the-future-of-the-outernet-and-how-to-imagine-it/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>ISMAR 2009: An Augmented Reality &#8220;Top Chef&#8221; Coopetition</title>
		<link>https://www.ugotrade.com/2009/10/24/ismar-2009-an-augmented-reality-top-chef-coopetition/</link>
		<comments>https://www.ugotrade.com/2009/10/24/ismar-2009-an-augmented-reality-top-chef-coopetition/#comments</comments>
		<pubDate>Sat, 24 Oct 2009 22:26:42 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Acrossair]]></category>
		<category><![CDATA[AR Sketch]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[arduino]]></category>
		<category><![CDATA[ARhrrr]]></category>
		<category><![CDATA[augmented reality at VW]]></category>
		<category><![CDATA[avatars and people together in physical spaces]]></category>
		<category><![CDATA[Avilus]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Chetan Damani]]></category>
		<category><![CDATA[Christine Perey]]></category>
		<category><![CDATA[cloud computing]]></category>
		<category><![CDATA[Dirk Groten]]></category>
		<category><![CDATA[distributed computing]]></category>
		<category><![CDATA[eyewear for augmented reality]]></category>
		<category><![CDATA[geoAR]]></category>
		<category><![CDATA[Georg Klein]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Green Tech AR Competition]]></category>
		<category><![CDATA[HMDs]]></category>
		<category><![CDATA[Humans as Sensors]]></category>
		<category><![CDATA[industrial augmented reality]]></category>
		<category><![CDATA[Institut Graphische Datenverarbeitung]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[ISMAR 2010]]></category>
		<category><![CDATA[ISMAR09]]></category>
		<category><![CDATA[Jay Wright]]></category>
		<category><![CDATA[Joe Ludwig]]></category>
		<category><![CDATA[Junaio]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Mark Billinghurst]]></category>
		<category><![CDATA[Markus Tripp]]></category>
		<category><![CDATA[Metaio]]></category>
		<category><![CDATA[Michael Goesele]]></category>
		<category><![CDATA[Microsoft and augmented reality]]></category>
		<category><![CDATA[Mobile Monday]]></category>
		<category><![CDATA[Mobilizy]]></category>
		<category><![CDATA[MoMo]]></category>
		<category><![CDATA[Noah Zerking]]></category>
		<category><![CDATA[Noora Guldemond]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[open distributed AR]]></category>
		<category><![CDATA[open hardware]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[participatory sensing]]></category>
		<category><![CDATA[Pattie Maes]]></category>
		<category><![CDATA[Peter Meier]]></category>
		<category><![CDATA[Platial]]></category>
		<category><![CDATA[PTAM on an iphone]]></category>
		<category><![CDATA[Put a Spell. Thomas Carpenter]]></category>
		<category><![CDATA[RoomWare]]></category>
		<category><![CDATA[Sean White]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[smart phones]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[social augmented realities]]></category>
		<category><![CDATA[standards for augmented reality]]></category>
		<category><![CDATA[Steven Feiner]]></category>
		<category><![CDATA[Technische Universitat Munchen]]></category>
		<category><![CDATA[The RoomWare Project]]></category>
		<category><![CDATA[The Zerkin Glove]]></category>
		<category><![CDATA[tracking and mapping in mobile augmented reality]]></category>
		<category><![CDATA[transactional cartography]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[Vernor Vinge]]></category>
		<category><![CDATA[virtual pets]]></category>
		<category><![CDATA[Volkswagen augmented reality group]]></category>
		<category><![CDATA[Vuzix]]></category>
		<category><![CDATA[Wave]]></category>
		<category><![CDATA[Wave enabled augmented reality]]></category>
		<category><![CDATA[Web 2.0 Summit]]></category>
		<category><![CDATA[Yuri van Geest]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4670</guid>
		<description><![CDATA[ISMAR 2009 -Â  was an extraordinary mix ofÂ  high geek, academic eminence, gungho Dutch Cowboy entrepreneurial spirit, German engineering and industry, brilliant artistry, and invention, all fueled by a sense, and a very active presence in the case of Diamond Sponsor &#8211; Qualcomm, that the big technology players are waking up to augmented reality. In [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/MetaioLayarpost.jpg"><img class="alignnone size-medium wp-image-4674" title="Metaio&amp;Layarpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/MetaioLayarpost-300x199.jpg" alt="Metaio&amp;Layarpost" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/DirkseesDirkonJunaiopost.jpg"><img class="alignnone size-medium wp-image-4676" title="DirkseesDirkonJunaiopost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/DirkseesDirkonJunaiopost-300x199.jpg" alt="DirkseesDirkonJunaiopost" width="300" height="199" /></a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dirkwatchesdirkvcupost.jpg"><img class="alignnone size-medium wp-image-4675" title="dirkwatchesdirkvcupost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dirkwatchesdirkvcupost-300x199.jpg" alt="dirkwatchesdirkvcupost" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/metaiodinasaurpost.jpg"><img class="alignnone size-medium wp-image-4678" title="metaiodinasaurpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/metaiodinasaurpost-299x201.jpg" alt="metaiodinasaurpost" width="299" height="201" /></a></p>
<p><a href="http://www.ismar09.org/" target="_blank">ISMAR 2009</a> was an extraordinary mix of high geekery, academic eminence, gung-ho Dutch cowboy entrepreneurial spirit, German engineering and industry, brilliant artistry, and invention, all fueled by a sense &#8211; and a very active presence, in the case of Diamond Sponsor Qualcomm &#8211; that the big technology players are waking up to augmented reality.</p>
<p>In the picture sequence above (click on photos to enlarge), <a href="http://twitter.com/metaioUS" target="_blank">Noora Guldemond</a>, <a href="http://www.metaio.com/" target="_blank">Metaio</a>, demonstrates <a href="http://www.junaio.com/" target="_blank">Junaio</a> (coming to an iPhone near you Nov 2nd) to <a href="http://twitter.com/dirkgroten" target="_blank">Dirk Groten</a>, CTO of <a href="http://layar.com/" target="_blank">Layar</a> (top left photo). One of the nice social features of Junaio is that users can share the 3D augmented scenes they have created. Noora is demoing this capability to Dirk, and as you can see he cracks up when he sees the scene Noora has stored on her phone. Dirk and I both recognize that this cute little dinosaur augmentation (close-up above on bottom left) must have been created by <a href="http://www.metaio.com/company/" target="_blank">Peter Meier, CTO of Metaio</a>, during the Interoperability and Standards workshop earlier that day. Metaio, it seems, were discussing standards while enjoying some 3D augmented back chat.</p>
<p><span><span>Both Dirk and I were active participants in the workshop too. But little did we know that Peter Meier had introduced his little 3D dinosaur into our discussion while we diligently, and sometimes heatedly, debated the merits of XMPP, Wave Federation Protocol, KML, ARML, VRML, X3D, and more! The photo I took is on the bottom right of the four pics above. It was probably taken very shortly after Peter&#8217;s augmented Junaio scene. Of course there is no little dinosaur in my pic of Dirk Groten with <a href="http://twitter.com/JoeLudwig" target="_blank">Joe Ludwig</a> and <a href="http://twitter.com/markustripp" target="_blank">Markus Tripp of Mobilizy</a>, who were discussing AR standards oblivious to Peter&#8217;s virtual pet in our midst.<br />
</span></span></p>
<p><span><span><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/MarkusTrippPeterMeier.jpg"><img class="alignnone size-medium wp-image-4685" title="MarkusTrippPeterMeier" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/MarkusTrippPeterMeier-300x199.jpg" alt="MarkusTrippPeterMeier" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Thereisawillingnesstostandardizepost.jpg"><img class="alignnone size-medium wp-image-4686" title="Thereisawillingnesstostandardizepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Thereisawillingnesstostandardizepost-300x199.jpg" alt="Thereisawillingnesstostandardizepost" width="300" height="199" /></a><br />
</span></span></p>
<p><span><span>I must say I had noticed an impish look on Peter Meier&#8217;s face (see photo above on the left &#8211; Peter is wearing glasses and holding a phone). And Markus Tripp of Mobilizy revealed a little bit of gaming of his own when he let out that, in part, ARML is a provocation. But Peter was clearly unfazed and enjoying himself. Dirk, tasked to summarize our discussion, stalwartly maintained an optimistic but serious tone fitting for a standards discussion: &#8220;There is a willingness to standardize&#8230;,&#8221; he began (pic above on right &#8211; click to enlarge and read text).</span></span></p>
<p><span><span>But it was a little 3D dinosaur that, perhaps appropriately, had the last laugh. Fitting, as I am not sure whether anything anyone says about AR standards at the moment will hold up. But, as Ori commented in <a href="http://gamesalfresco.com/2009/10/23/ismar-2009-epilogue-a-new-augmented-reality-world-order/" target="_blank">his great post &#8211; an epilogue for ISMAR 2009</a>, the vibe was &#8220;Peace and Love&#8221; in AR browser land (</span></span>although Chetan Damani of <a href="http://gamesalfresco.com/?s=%22acrossair%22" target="_blank">Across Air</a> was not in the standards discussion because he attended the UX/content workshop instead)<span><span>. But as they say, &#8220;all&#8217;s fair in love and war.&#8221; And it is my feeling the games have barely begun! There are many players (<a href="http://www.youtube.com/watch?v=KI4lB00Ht9o&amp;feature=player_embedded#" target="_blank">virtual pets</a> included) waiting in the wings. I met some at ISMAR, and they are just itching to join the fray.<br />
</span></span></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/coopetitionpost.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/ARConsortiumpost2.jpg"><img class="alignnone size-medium wp-image-4701" title="ARConsortiumpost2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/ARConsortiumpost2-300x188.jpg" alt="ARConsortiumpost2" width="300" height="188" /></a><img class="alignnone size-medium wp-image-4690" title="coopetitionpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/coopetitionpost-300x185.jpg" alt="coopetitionpost" width="300" height="185" /></p>
<p><span><span>Ori Inbar, <a href="http://ogmento.com/" target="_blank">Ogmento</a>, and Robert Rice, <a href="http://www.neogence.com/#/home" target="_blank">Neogence Enterprises</a>, both founders of the <a href="http://www.arconsortium.org/" target="_blank">AR Consortium</a>, made great efforts to set our young industry off on the right foot &#8211; in the spirit of <a href="http://en.wikipedia.org/wiki/Coopetition" target="_blank">coopetition</a> (</span></span>a <a title="Neologism" href="http://en.wikipedia.org/wiki/Neologism">neologism</a> coined to describe <a title="Co-operation" href="http://en.wikipedia.org/wiki/Co-operation">cooperative</a> <a title="Competition" href="http://en.wikipedia.org/wiki/Competition">competition</a>)<span><span>. See </span></span><a href="http://curiousraven.squarespace.com/home/2009/10/23/ismar-09-observations-and-comments.html" target="_blank">Curious Raven for Robert&#8217;s conference observations</a>, and <span><span><a href="http://gamesalfresco.com/2009/10/23/ismar-2009-epilogue-a-new-augmented-reality-world-order/" target="_blank">Ori&#8217;s post on Games Alfresco</a> for more about </span></span>Mobile Augmented Reality at ISMAR 2009. The Mobile Augmented Reality Workshops were driven by an indomitable spokesperson for the new AR industry, <a href="http://www.perey.com/" target="_blank">Christine Perey</a>. Christine not only helped motivate discussion on the issue of oxygen to the system, i.e. business value, but was also a very generous connector at the conference.</p>
<h3>What&#8217;s Next From Augmented Reality&#8217;s Top Chefs?</h3>
<p><span><span><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-24-at-7.15.58-PM.png"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-24-at-7.12.35-PM.png"><img class="alignnone size-medium wp-image-4692" title="Screen shot 2009-10-24 at 7.12.35 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-24-at-7.12.35-PM-300x196.png" alt="Screen shot 2009-10-24 at 7.12.35 PM" width="300" height="196" /></a><br />
</span></span></p>
<p>As Ori pointed out, <a href="http://www.imdb.com/name/nm0218033/" target="_blank">Kent Demaine</a> of <a href="http://www.ooo-ii.com/" target="_blank">oooii</a> (pic above is from the oooii web site), the Minority Report VFX designer, was hanging out at ISMAR 2009, and he came to the panel I was on: &#8220;Augmented Reality in Sports, Entertainment and Advertising.&#8221; We chatted afterwards about instrumented environments and how they are such a key to developing interesting augmented experiences. I also mentioned how back in the day I was involved in some of the early development of motion control software. And it was great to hear Kent say they were still finding motion control cool at <a href="http://www.ooo-ii.com/" target="_blank">oooii</a>. As Ori notes, he is the &#8220;guy with the most enviable AR credentials in the world (the guy who designed VFX for Minority Report),&#8221; and <a href="http://www.ooo-ii.com/" target="_blank">oooii</a> is busy and hiring.</p>
<p>One of the highlights of the Arts, Media and Humanities track for me was meeting <a href="http://jarrellpair.com/" target="_blank">Jarrell Pair</a>. He really brought the best out in panelists with his well-tuned questions. The recording of ISMAR was comprehensive, and videos should be up next week. I will post the slides of my presentation, &#8220;The Next Wave of AR: Shared Augmented Realities and Remix Culture,&#8221; on UgoTrade.</p>
<h3>&#8220;Mixed and Augmented Reality: &#8216;Scary and Wondrous&#8217;&#8221; &#8211; <a href="http://en.wikipedia.org/wiki/Vernor_Vinge" target="_blank">Vernor Vinge</a></h3>
<p><strong>&#8220;Imagine an environment where most physical objects know where they are, what they are, and can (in principle) network with any other object. With this infrastructure, reality becomes its own database. Multiple consensual virtual environments are possible, each oriented to the needs of its constituency. If we also have open standards, then bottom-up social networks and even bottom-up advertising become possible. Now imagine that in addition to sensors, many of these itsy-bitsy processors are equipped with effectors. Then the physical world becomes much more like a software construct. The possibilities are both scary and wondrous.&#8221;</strong> (<a href="http://en.wikipedia.org/wiki/Vernor_Vinge" target="_blank">Vernor Vinge</a> &#8211; intro to ISMAR 2009)</p>
<p>Vernor Vinge&#8217;s short intro to ISMAR 2009 (which can be downloaded with the <a href="http://www.ismar09.org/" target="_blank">ISMAR 2009 schedule here</a>) captures the essence of the &#8220;scary and wondrous&#8221; dawn of the age of ubiquitous computing and mixed and augmented reality. It is definitely worth a moment to download. The future of augmented and mixed realities, as Vernor Vinge points out, is tied up in a &#8220;tension between centralized and distributed computing&#8221; that &#8220;will continue long into the future.&#8221; One of my fascinations with Wave is that it offers a tantalizing opportunity to explore augmented reality in an open distributed architecture.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-12-at-2.40.39-PM.png"><img class="alignnone size-medium wp-image-4586" title="Screen shot 2009-10-12 at 2.40.39 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-12-at-2.40.39-PM-300x154.png" alt="Screen shot 2009-10-12 at 2.40.39 PM" width="300" height="154" /></a></p>
<p>At ISMAR, I talked with as many people as possible about the AR Wave project &#8211; <a href="../../2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/" target="_blank">see my post here for more about Wave-enabled AR</a>. Many people were very enthusiastic about joining the AR Wave, and the only thing I really lacked was about 100 invites to hand out!</p>
<h3>&#8220;Everything, Everywhere &#8211; making visible the invisible&#8221;</h3>
<p>Some of the areas that I would have liked to see given more attention at ISMAR were sensor networks, data curation, and user experience. Not that these areas were entirely neglected: Pattie Maes of MIT was a keynote speaker, and Mark Billinghurst presented some fascinating work on social augmented experiences and user experience. I highly recommend catching up on these and other ISMAR presentations when the videos go up.</p>
<p><a href="http://www1.cs.columbia.edu/~swhite/" target="_blank"><img class="alignnone size-medium wp-image-4716" title="Screen shot 2009-10-25 at 12.28.25 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-25-at-12.28.25-PM-300x57.png" alt="Screen shot 2009-10-25 at 12.28.25 PM" width="300" height="57" /></a></p>
<p>And, I was very happy to meet and talk to <a href="http://www1.cs.columbia.edu/~swhite/" target="_blank">Sean White</a> whose work at Columbia University is one of my inspirations (for more <a href="http://www1.cs.columbia.edu/~swhite/" target="_blank">about Sean&#8217;s work see here</a> or click image above):</p>
<p><strong>&#8220;the confluence of powerful connected mobile devices, advances in computer vision and sensing, and techniques such as augmented reality (AR) enables exciting new opportunities for interacting with this hidden network of dynamic information and shifts the locus of interaction from the desktop computer to the world around us&#8221;</strong></p>
<p>And I had several very interesting conversations at ISMAR about developing social augmented experiences that connect us to a physical world that is becoming &#8220;much more like a software construct&#8221; (Vernor Vinge). Dirk Groten, CTO of Layar, mentioned a few interesting projects Layar has up its sleeve, including something Layar may be cooking up with <a href="http://www.roomwareproject.org/" target="_blank">The RoomWare Project</a>.</p>
<p><span><span><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-24-at-10.03.00-PM.png"><img class="alignnone size-medium wp-image-4697" title="Screen shot 2009-10-24 at 10.03.00 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-24-at-10.03.00-PM-300x231.png" alt="Screen shot 2009-10-24 at 10.03.00 PM" width="300" height="231" /></a><br />
</span></span><br />
The picture above is of RoomWare&#8217;s Social RFID Installation for Media Plaza in Utrecht (<a href="http://blog.roomwareproject.org/2008/10/06/social-rfid-installation-for-media-plaza/">read more here</a>).</p>
<h3>Demos Galore!</h3>
<p>In the demo rooms, <a rel="cc:attributionURL" href="http://augmentation.wordpress.com/2009/10/24/ismar-ismar-ismar-where-to-start/">Noah Zerkin</a> (pic below left) pretty much single-handedly carried the AR flag for a growing community of augmented reality makers and hackers. His presence was much appreciated, and he tirelessly demoed <a href="http://zerkinglove.com/" target="_blank">The Zerkin Glove</a>. See <a href="http://augmentation.wordpress.com/2009/10/24/ismar-ismar-ismar-where-to-start/" target="_blank">the first of what may be several posts from Noah on ISMAR here</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/noah2post.jpg"><img class="alignnone size-medium wp-image-4700" title="noah2post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/noah2post-300x199.jpg" alt="noah2post" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/TishVuzixgogglespost.jpg"><img class="alignnone size-medium wp-image-4704" title="Tish&amp;Vuzixgogglespost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/TishVuzixgogglespost-300x199.jpg" alt="Tish&amp;Vuzixgogglespost" width="300" height="199" /></a></p>
<p>And I got to try out the Vuzix goggles (picture above on right). This was my first experience playing an AR game that was smart about real-world gravity. It&#8217;s &#8220;an <span>augmented reality</span> marble game that uses gravity as a <span>game controller</span>&#8221; &#8211; see <a href="http://gamesalfresco.com/2009/08/09/augmented-reality-has-gained-gravity/" target="_blank">Ori Inbar&#8217;s write-up here</a>. It was a very compelling experience, and I have to say I didn&#8217;t really notice the shortcomings of the Vuzix goggles while I was absorbed in the game. I turned out to be quite good at the game too; it is intuitive, unlike the kind of rule-based games I never have time to learn properly. But what is so special about this project is that the tools it is built with are open, available to all, and affordable (see this <a href="http://gamesalfresco.com/2009/08/09/augmented-reality-has-gained-gravity/" target="_blank">list on Games Alfresco</a>).</p>
<p>It was a great pleasure to meet <a href="http://www1.cs.columbia.edu/~feiner/" target="_blank">Prof. Steven Feiner</a> (pictured below left), who heads Columbia University&#8217;s brilliant AR research team at <a href="http://graphics.cs.columbia.edu/top.html" target="_blank">The Columbia University Graphics and User Interfaces Lab</a>.</p>
<p>Ori Inbar (pic below on right) also spent a lot of time in the demo room showing off Ogmento&#8217;s lovely AR learning game that delighted attendees, <a href="http://ogmento.com/"><strong>&#8220;Put a Spell: Learn to Spell with Augmented Reality.&#8221;</strong></a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/TishVuzixpost.jpg"><img class="alignnone size-medium wp-image-4703" title="TishVuzixpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/TishVuzixpost-199x300.jpg" alt="TishVuzixpost" width="199" height="300" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Ogmentopost.jpg"><img class="alignnone size-medium wp-image-4702" title="Ogmentopost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Ogmentopost-199x300.jpg" alt="Ogmentopost" width="199" height="300" /></a></p>
<p>For a round-up of what&#8217;s next for augmented reality head-mounted displays, check out <a href="http://gamesalfresco.com/2009/10/23/ismar-2009-epilogue-a-new-augmented-reality-world-order/" target="_blank">Games Alfresco here</a>, and Thomas Carpenter&#8217;s excellent review of the <a href="http://thomaskcarpenter.com/2009/10/21/ismar09-hmd-review/">head-mounted displays</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/GeorgandBlairpost.jpg"><img class="alignnone size-medium wp-image-4712" title="GeorgandBlairpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/GeorgandBlairpost-300x199.jpg" alt="GeorgandBlairpost" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/cypherpost.jpg"><img class="alignnone size-medium wp-image-4713" title="cypherpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/cypherpost-300x199.jpg" alt="cypherpost" width="300" height="199" /></a></p>
<p><strong>Ori Inbar on Games Alfresco asks: is Microsoft &#8220;the new big player to watch&#8221;?</strong> &#8220;<a href="http://www.robots.ox.ac.uk/%7Egk/" target="_blank">Georg Klein</a>, inventor of <a href="http://www.youtube.com/watch?v=pBI5HwitBX4" target="_blank">PTAM-on-an-iPhone</a> (and the smartest Computer Vision guy on the block)&#8221; has joined Microsoft to make Mobile AR.</p>
<p>The picture on the left above shows Georg trying out <a href="http://www.youtube.com/watch?v=Cix3Ws2sOsU&amp;feature=player_embedded" target="_blank">ARhrrr</a> with Blair MacIntyre. And on the right, Blair is demoing his marker card pack to Senior Vice President of Cypher Entertainment, David Elmekies. Yes, ISMAR was abuzz with demos. See <a href="http://compscigail.blogspot.com/2009/10/ismar09-few-demos.html" target="_blank">this post</a> from Gail Carmichael for more video demos.</p>
<h3>Next Year ISMAR 2010 in Korea!</h3>
<p><span><span><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/ISMARBanquet.jpg"><img class="alignnone size-medium wp-image-4693" title="ISMARBanquet" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/ISMARBanquet-300x199.jpg" alt="ISMARBanquet" width="300" height="199" /></a></span></span></p>
<p>At the banquet, I managed to find a seat at a table with Sean White (at left in photo above, with Christine Perey to his right) and the Columbia University team. The banquet culminated with the &#8220;Past and Future of ISMAR&#8221; panel, chaired valiantly by Jay Wright of Qualcomm. We were asked to offer our input for ISMAR 2010. I offered up an idea that I have been nurturing for a while now: to stage a &#8220;Green Tech AR Competition.&#8221; Perhaps, I suggested, we could base the competition around a conference (ISMAR 2010 in Korea?) and set up a target-rich, instrumented environment for the occasion. I think the Arduino open hardware community and AR developers have a synergy that is just waiting to be explored! And, if we add the innovators of data curation to the mix, e.g., Pachube, AMEE, and Path Intelligence&#8230; (Markus Tripp left ISMAR to speak on a <a href="http://www.web2summit.com/web2009" target="_blank">Web 2.0 Summit</a> panel, <a href="http://www.readwriteweb.com/archives/humans_as_sensors.php" target="_blank">&#8220;Humans as Sensors,&#8221;</a> which also included Path Intelligence, Deborah Estrin on <a href="http://research.cens.ucla.edu/people/estrin/" target="_blank">&#8220;participatory sensing,&#8221;</a> and the brilliant work of <a href="http://twitter.com/dianneisnor" target="_blank">Di-Ann Eisnor</a>, <a href="http://platial.com/" target="_blank">Platial</a>, on &#8220;Transactional Cartography&#8221;). Anyway, a big Green Tech AR competition could get people working together across the broad spread of AR terrain on some of the sticky problems of user experience. And, with a high level of support from smartphone companies, HMD manufacturers, and the chip makers, we just might come up with some extraordinary magic.</p>
<p>The devil of course will be in the details. But a competition like this could not only motivate key players to come together in the spirit of coopetition but also be an opportunity to show the world the power of AR to make visible the invisible ecosystems that are so important to the health of our planet.</p>
<p>One of the notable presences at ISMAR 2009 was the Qualcomm team. Jay Wright&#8217;s presentation (an exclusive for ISMAR) not only outlined AR for 2012, but Jay also talked about some &#8220;close to the metal&#8221; innovation that we will see from Qualcomm very, very soon! I had some time in the press room with Jay and his team, prompted by <a href="http://www.mobilemonday.nl/" target="_blank">MoMo&#8217;s</a> Yuri van Geest. When I twittered about Qualcomm&#8217;s presentation at ISMAR, Yuri replied:</p>
<p><a href="http://twitter.com/vanGeest" target="_blank">vangeest</a> <a href="http://twitter.com/TishShute" target="_blank">&#8220;@tishshute</a>: good stuff, hopefully you will integrate the neat new solutions and ideas in your talk in November ;)&#8221;</p>
<p>I will be presenting at <a href="http://www.mobilemonday.nl/" target="_blank">MoMo #13</a> on AR, open AR, and the future of AR and the GeoWeb, and hopefully will bring some good news from Qualcomm too. Anyway, Jay seemed to like the idea of a Green Tech AR Competition, even though I did stress that I thought it needed some serious sponsorship and BIG prizes.</p>
<h3>Where&#8217;s the beef? Tracking and Mapping at ISMAR 2009</h3>
<p>On the flight from NYC to Orlando and ISMAR &#8217;09, I dozed (I had been up late preparing my presentation) and watched the Dew Tour pro skateboard competition and Top Chef on the Food Channel. In this particular episode of Top Chef, the aspiring chefs were all given a brown bag of ingredients by an already famous chef, who then judged whether the contenders managed to make a delicious meal with their allotment, which was notably lacking in key ingredients of haute cuisine.</p>
<p>This metaphor of trying to cook up a great meal while perhaps missing the staples is apt for the current early stage of commercial augmented reality. And when I arrived in Orlando, not only were the Dew Tour pro skateboarders staying at the same hotel as ISMAR, but ISMAR itself felt remarkably like an Augmented Reality Top Chef Coopetition.</p>
<p>Much of ISMAR was dedicated to the task of providing the meat and potatoes of augmented reality &#8211; solutions to mobile tracking, mapping and registration &#8211; particularly in the Science and Technology track.</p>
<p>Industrial and military augmented reality solutions, I found out, typically solve the tracking problem by using fixed mounts, which clearly wouldn&#8217;t translate well to the mobile, AR-everywhere-with-everything experience consumer culture expects.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/DanielPustkapost.jpg"><img class="alignnone size-medium wp-image-4679" title="DanielPustkapost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/DanielPustkapost-300x199.jpg" alt="DanielPustkapost" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-25-at-2.41.56-PM.png"><img class="alignnone size-medium wp-image-4726" title="Screen shot 2009-10-25 at 2.41.56 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-25-at-2.41.56-PM-300x208.png" alt="Screen shot 2009-10-25 at 2.41.56 PM" width="300" height="208" /></a></p>
<p><em>In the picture on the left, Fabian Doil stands by the VW engine that provided some of the outdoor targets for the ISMAR tracking competition. On the right is a picture from VW&#8217;s presentation on their research and development of AR.</em></p>
<p>I followed the tracking contest, organized by Daniel Pustka and Fabian Doil of Volkswagen, quite closely. And I learned a lot in the process. While it is clear there has been progress in AR mapping and tracking, we still have a ways to go.</p>
<p>But hanging around the Tracking Competition was a good way to find out the state of play of this crucial part of the AR dream. For example, a little tidbit I learned is that <a href="http://www.gris.informatik.tu-darmstadt.de/~mgoesele/" target="_blank">Michael Goesele</a>, who has been reconstructing &#8220;high-quality geometry models from images collected from the internet (so called community photo collections, CPC)&#8221;, is soon to be at the <a href="http://www.ini-graphics.net/ini-graphicsnet/members/fraunhofer-institut-fuer-graphische-datenverarbeitung-igd.html" target="_blank">Institut Graphische Datenverarbeitung</a>, where top contenders in the tracking contest &#8211; Harald Wuest and Folker Weintipper (in the foreground of the photo at the left and right respectively) &#8211; are also to be found. [Update: Harald and Folker were the winning team &#8211; <a href="http://docs.google.com/gview?a=v&amp;pid=gmail&amp;attid=0.1&amp;thid=1248dd2927becb21&amp;mt=application%2Fpdf&amp;url=http%3A%2F%2Fmail.google.com%2Fmail%2F%3Fui%3D2%26ik%3De77cfddae9%26view%3Datt%26th%3D1248dd2927becb21%26attid%3D0.1%26disp%3Dattd%26zw&amp;sig=AHBy-hbcqUsaRNjbqpHO8vAF_vJqfDrMig" target="_blank">see here for details of scoring and results</a>!] Otto Korkalo and Tuomas Kantonen of the VTT Finland augmented reality team are in the background. They have been working on the joint IBM, Nokia and VTT project that brings <a href="http://www.marketwatch.com/story/researchers-from-ibm-nokia-and-vtt-bring-avatars-and-people-together-for-virtual-meetings-in-physical-spaces-2009-10-19" target="_blank">Avatars and People Together for Virtual Meetings in Physical Spaces</a>.</p>
<p>The picture on the right is of another team that was doing very well. If my notes serve me well (and please forgive me if they don&#8217;t &#8211; I came back with my card wallet overflowing!), the photo on the right shows Christian Waechter (on the left) and Peter Keitler (on the right) of the <a href="http://portal.mytum.de/welcome" target="_blank">Technische Universitat Munchen</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/trackingcompetitionpost.jpg"><img class="alignnone size-medium wp-image-4672" title="trackingcompetitionpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/trackingcompetitionpost-300x199.jpg" alt="trackingcompetitionpost" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Trackingcompetition2post.jpg"><img class="alignnone size-medium wp-image-4681" title="Trackingcompetition2post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Trackingcompetition2post-300x199.jpg" alt="Trackingcompetition2post" width="300" height="199" /></a></p>
<p>Germany is certainly leading the way in industrial AR. And I learned how small businesses like Metaio get to work with top research institutions and big companies like VW, thanks to a very strong German funding program for AR and VR. The current iteration of a series of funding programs is called <a href="http://www.avilus.de/" target="_blank">Avilus</a>. Avilus is putting 42 million euros into AR and VR this year alone (click on the slide below to see more about Avilus).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-24-at-1.08.48-AM.png"><img title="Screen shot 2009-10-24 at 1.08.48 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-24-at-1.08.48-AM-300x212.png" alt="Screen shot 2009-10-24 at 1.08.48 AM" width="300" height="212" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-24-at-2.04.50-AM.png"><img class="alignnone size-medium wp-image-4673" title="Screen shot 2009-10-24 at 2.04.50 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-24-at-2.04.50-AM-300x202.png" alt="Screen shot 2009-10-24 at 2.04.50 AM" width="300" height="202" /></a></p>
<p>I wish we had the equivalent of Avilus here in the US. But there is no such program for AR here, and it seems no AR is being developed by the US car industry either. Look at the slide above to get a taste of some of the cool stuff Metaio and other small AR and VR businesses do for VW through the Avilus project.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/VWtrackinggudrunpost.jpg"><img class="alignnone size-medium wp-image-4682" title="VWtrackinggudrunpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/VWtrackinggudrunpost-300x199.jpg" alt="VWtrackinggudrunpost" width="300" height="199" /></a></p>
<p>I also got to meet many people from one of the world&#8217;s most important AR hubs &#8211; the Department of Informatics, <a href="http://portal.mytum.de/welcome" target="_blank">Technische Universitat Munchen</a> &#8211; including Prof. Gudrun Klinker, on the far right in the pic above, and, from left to right, Fabian Doil (VW, co-organizer of the contest), Sebastian Lieberknecht, Selim Ben Himane (Metaio), and Tobias Eble (Metaio). Prof. Klinker is the engine behind much of German innovation in AR.</p>
<p>Metaio was one of the few teams to rely mainly on markerless tracking, which in this contest was very challenging because of the very different light conditions (see pics below) between the windowless interior and the dazzling Florida sunshine outside (pic on the right shows targets under ideal lighting conditions). Many people in the US may be familiar with Metaio&#8217;s consumer applications, like Junaio, but thanks to Germany&#8217;s efforts to nurture augmented and virtual reality they are also respected software developers in industrial AR. And I suspect that Metaio will spearhead markerless tracking in consumer AR too.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Trackingcompetition5post.jpg"><img class="alignnone size-medium wp-image-4740" title="Trackingcompetition5post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Trackingcompetition5post-300x199.jpg" alt="Trackingcompetition5post" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-25-at-7.47.44-PM.png"><img class="alignnone size-medium wp-image-4745" title="Screen shot 2009-10-25 at 7.47.44 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-25-at-7.47.44-PM-300x229.png" alt="Screen shot 2009-10-25 at 7.47.44 PM" width="300" height="229" /></a></p>
<p>This post, as usual, has already expanded into something much longer than I originally intended &#8211; pretty typical for me! There is much I have not been able to cover, including some of the interesting contributions by augmented reality artists at ISMAR &#8211; again, I recommend the upcoming videos.</p>
<p>But I cannot end without a hat tip to Oriel, Nate, et al., who won the best student paper award for AR Sketch &#8211; again, please <a href="http://gamesalfresco.com/2009/10/23/ismar-2009-epilogue-a-new-augmented-reality-world-order/" target="_blank">see Games Alfresco for more on this</a> (pic below from Games Alfresco). AR Sketch, Ori notes, is featured &#8220;in our <a href="http://gamesalfresco.com/2009/10/16/ismar-2009-sketch-and-shape-recognition-preview-from-ben-gurion-university/" target="_self">top post</a> and popular <a href="http://www.youtube.com/watch?v=M4qZ0GLO5_A" target="_blank">video</a>.&#8221; And:</p>
<p><strong>&#8220;Their work is revolutionizing the AR world by avoiding the need to print markers &#8211; or any images whatsoever.&#8221;</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-25-at-1.58.35-PM1.png"><img class="alignnone size-medium wp-image-4719" title="Screen shot 2009-10-25 at 1.58.35 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-25-at-1.58.35-PM1-300x223.png" alt="Screen shot 2009-10-25 at 1.58.35 PM" width="300" height="223" /></a><br />
</strong></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2009/10/24/ismar-2009-an-augmented-reality-top-chef-coopetition/feed/</wfw:commentRss>
		<slash:comments>9</slash:comments>
		</item>
		<item>
		<title>AR Wave: Layers and Channels of Social Augmented Experiences</title>
		<link>https://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/</link>
		<comments>https://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/#comments</comments>
		<pubDate>Tue, 13 Oct 2009 18:52:42 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Amphibious Architecture]]></category>
		<category><![CDATA[AR Blip]]></category>
		<category><![CDATA[AR Browser]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[augmentaion]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Channels and Social Augmented Realities]]></category>
		<category><![CDATA[citi sensing]]></category>
		<category><![CDATA[citizen sensing]]></category>
		<category><![CDATA[Clayton Lilly]]></category>
		<category><![CDATA[cybernetics vs ecology and human waste]]></category>
		<category><![CDATA[distributed]]></category>
		<category><![CDATA[eco mapping]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[geoAR]]></category>
		<category><![CDATA[geospatial web]]></category>
		<category><![CDATA[geospatial web and augmented reality]]></category>
		<category><![CDATA[Goggle Wave Federation Protocol]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Google Wave as an AR enabler]]></category>
		<category><![CDATA[Google Wave enable augmented reality]]></category>
		<category><![CDATA[Google Wave Protocols]]></category>
		<category><![CDATA[green tech augmented reality]]></category>
		<category><![CDATA[immersive sight]]></category>
		<category><![CDATA[Jeremy Hight]]></category>
		<category><![CDATA[Joe Lamantia]]></category>
		<category><![CDATA[Layers]]></category>
		<category><![CDATA[layers and channels of augmented reality]]></category>
		<category><![CDATA[Life Clipper]]></category>
		<category><![CDATA[life streaming]]></category>
		<category><![CDATA[location based media]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[locative media]]></category>
		<category><![CDATA[locative narratives]]></category>
		<category><![CDATA[Mannahatta]]></category>
		<category><![CDATA[map based augmentation]]></category>
		<category><![CDATA[mapping]]></category>
		<category><![CDATA[modulated mapping]]></category>
		<category><![CDATA[modulated napping]]></category>
		<category><![CDATA[multi-user]]></category>
		<category><![CDATA[narrative archaeology]]></category>
		<category><![CDATA[Natural Fuse]]></category>
		<category><![CDATA[neogeography]]></category>
		<category><![CDATA[networked urbanism]]></category>
		<category><![CDATA[non euclidian geometry]]></category>
		<category><![CDATA[open augmented reality framework]]></category>
		<category><![CDATA[Seanseable Labs]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[shared augmented realities]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[social augmented reality experiences]]></category>
		<category><![CDATA[sound augmentation]]></category>
		<category><![CDATA[Thomas K. Carpenter]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Trash Track]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[virtual reality]]></category>
		<category><![CDATA[Wave as a platform for augmented reality]]></category>
		<category><![CDATA[Wave Blip]]></category>
		<category><![CDATA[Wave Bots]]></category>
		<category><![CDATA[Wave playback]]></category>
		<category><![CDATA[Wave playback feature]]></category>
		<category><![CDATA[Wave Robots]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4585</guid>
<description><![CDATA[It is now nearly two weeks since the Google Wave preview launch and I am happy to say we have some AR Wave news. The diagram above shows Thomas Wrobel&#8217;s basic concept for a distributed, multi-user, open augmented reality framework based on the Google Wave Federation Protocol and servers (click on the image to see [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://lostagain.nl/tempspace/PrototypeDiagram3_wave.html" target="_blank"><img class="alignnone size-medium wp-image-4586" title="Screen shot 2009-10-12 at 2.40.39 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-12-at-2.40.39-PM-300x154.png" alt="Screen shot 2009-10-12 at 2.40.39 PM" width="300" height="154" /></a></p>
<p>It is now nearly two weeks since the <a href="http://wave.google.com/" target="_blank">Google Wave </a>preview launch and I am happy to say we have some AR Wave news. The diagram above shows Thomas Wrobel&#8217;s basic concept for a distributed, multi-user, open augmented reality framework based on the <a href="http://www.waveprotocol.org/" target="_blank">Google Wave Federation Protocol</a> and servers (click on the image to see the dynamic annotated sketch <a href="http://lostagain.nl/tempspace/PrototypeDiagram3_wave.html" target="_blank">or here</a>).</p>
<p>Even in the short time we have had to explore Wave, some very exciting possibilities are becoming clear. Thomas puts some of the virtues of Wave as an AR enabler succinctly when he writes:</p>
<p><strong>&#8220;Wave allows the advantages of both real-time communication, as well as the advantages of persistent hosting of data. It is both like IRC, and like a Wiki. It allows anyone to create a Wave, and share it with anyone else. It allows Waves to be edited at the same time by many people, or used as a private reference for just one person.</strong></p>
<p><strong>These are all incredibly useful properties for any AR experience; more so because Wave is open. Anyone can make a server or client for Wave. Better yet, these servers will exchange data with each other, providing a seamless world for the user&#8230; a single login will let you browse the whole world of public waves, regardless of who&#8217;s providing or hosting the data. Wave is also quite scalable and secure&#8230; data is only exchanged when necessary, and will stay local if no one else needs to view it.</strong></p>
<p><strong>Wave allows bots to run on it&#8230; allowing blips in a wave to be automatically updated, created or destroyed based on any criteria the coders choose. Wave even allows the playback of all edits since the wave was created.</strong></p>
<p><strong>For all these reasons and more, Wave makes a great platform for AR.&#8221;</strong></p>
<p>There will be much more <span>coming soon on Wave-enabled AR, because the Google Wave invites have begun to flow out to a wider community now. This week, many of our small ad-</span>hoc group looking at the development challenges and implications of Google Wave for AR actually got into Wave for the first time.</p>
<p>Many thanks to all the people who have contributed to this discussion so far including: Thomas Wrobel, Thomas K. Carpenter, Jeremy Hight, Joe Lamantia, Clayton Lilly, Gene Becker and many others.</p>
<p>We will be setting up some public AR Framework Development Waves this week. If you have any trouble finding them, or adding yourself to them, please add Thomas and me to your contact list. I am tishshute@googlewave.com and Thomas is darkflame@googlewave.com. The first two are currently called:</p>
<p><strong>AR Wave: Augmented Reality Wave Framework Development</strong> (developer forum)</p>
<p><strong>AR Wave: Augmented Reality Wave Development</strong> (for general discussion)</p>
<p>The discussion so far has been in two areas. On the one hand, it is gear-heady and focused on the <a href="http://www.waveprotocol.org/" target="_blank">Google Wave Federation Protocol</a>, code, development challenges, and interfacing to mobile, while on the other hand people have been looking at use cases and questions of user experience.</p>
<p>Distributed &#8220;shared augmented realities,&#8221; or &#8220;social augmented experiences&#8221; &#8211; which not only allow mashups &amp; multisource data flows, but also dynamic overlays (not limited to 3d), created by users, linked to location/place/time, and distributed to other users who wish to engage with the experience by viewing and co-creating elements for their own goals and benefit &#8211; are something very new for us to think about.</p>
<p>As Joe Lamantia now puts it:</p>
<p><strong>&#8220;there&#8217;s a feedback loop between which interactions are made easy by any given combo of device / hardware / software / connectivity, and the ways that people really work in real life (without any mediation / permeation by tech).&#8221;</strong></p>
<p>Joe Lamantia, whose term <strong>&#8220;social augmented experiences&#8221;</strong> I borrow for this post title, has done some thinking about <strong>&#8220;concepts and models for understanding and contributing to shared augmented experiences, such as the social scales for interaction, and the challenges attendant to designing such interactions.&#8221;</strong> Check out <a href="http://www.joelamantia.com/" target="_blank">Joe Lamantia&#8217;s blog</a> for more on this later this week.</p>
<p>It is very helpful, as Joe points out, to shift the focus back and forth between the experience and the medium.</p>
<p>It is super exciting to have clear evidence that shared augmented realities are no longer merely possible, but highly probable and actually do-able now.</p>
<p>I should be absolutely clear about what Google Wave does to enable AR, because obviously Wave plays no role in solving image recognition and tracking/registration issues. But, for example, Wave protocols and servers do provide a means to exchange, edit, and read data, and that enables distributed, social augmented realities.</p>
<p>Thomas explains how the newly named &#8220;AR Blip&#8221; works:</p>
<p><strong>&#8220;An AR Blip is simply a Blip in a wave containing AR data. Typically this would be the positional and URL data telling an AR browser to position a 3d object at a location in space.</strong></p>
<p><strong>In more generic terms, an AR Blip allows data of various forms (meshes,text,sound) to be given a real-world position.&#8221;</strong></p>
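<p>As a rough sketch of what such an AR Blip payload could look like &#8211; the field names and structure below are purely my own illustration, not part of any Wave or AR Wave specification:</p>

```python
import json

# Hypothetical AR Blip payload: a geo-position plus a URL pointing at the
# media to render there. Field names are illustrative only, not part of any
# Wave or AR Wave specification.
ar_blip = {
    "type": "mesh",  # per the post, could also be "text" or "sound"
    "url": "http://example.com/model.obj",
    "position": {"lat": 40.7128, "lon": -74.0060, "alt": 10.0},
    "orientation": {"heading": 90.0, "pitch": 0.0, "roll": 0.0},
}

# A blip travels inside a wave as structured content, so an AR browser
# would serialize and deserialize it, e.g. as JSON.
encoded = json.dumps(ar_blip)
decoded = json.loads(encoded)
```

<p>The point is only that a blip is small, self-describing, and editable in place, so any client that can read the wave can reposition or restyle the media it references.</p>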
<p>I have mentioned in other posts (<a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">here</a> and <a href="http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/" target="_blank">here</a>) that Wave can be used for AR with registration as precise or as loose as the current generation of devices can handle. And as the hardware and software arrive for the kind of AR that can put media out in the world and truly immerse you in a mixed space, the framework should be able to handle this too.</p>
<p>(a note on the Wave playback feature &#8211; this opens up a whole new world of possibilities. Check out <a href="http://snarkmarket.com/2009/3605" target="_blank">this post</a> on some of the implications of playback for writing!)</p>
<p>The use cases we have been coming up with are too numerous to go into in detail in this post<span>. The open nature of an AR framework/Wave standard will lead to many new applications we have barely begun to imagine. As Thomas points out, different client software can be made for browsing, potentially allowing for various specialist browsers, as well as more generic ones for typical use. T</span>he multitudes of different kinds of data in/output that could be integrated into an open AR framework as it evolves are mind-boggling.</p>
<p>But, for now, some obvious use cases do come to mind, e.g.:</p>
<p>- Historical environmental overlays showing how a city used to be, and how this vision may be constructed differently by different communities</p>
<p>- Proposed building work showing future changes to a structure, and the negotiation of that future (both the public and professionals could submit comments on the plans in context), as well as seeing the pipes, cables, and other invisible elements that can help builders and engineers collaborate and do their work</p>
<p>- Skinning the world with interactive fantasies</p>
<p>I asked Thomas to help people understand how Wave enables new kinds of interaction with data by explaining how Wave could enable city sensing and citizen sensing projects (e.g. <a href="http://tinyurl.com/y97d5zr" target="_blank">this one being pioneered by Griswold</a>):</p>
<p><strong><strong>&#8220;Sensors, both mobile and static could contribute environmental data into city overlays;</strong></strong></p>
<div><strong><strong>&#8212;temperature, windspeed, air quality (amounts of certain particles), water quality, amount of sunlight, CO2 emissions could all be fed into different waves. The AR Wave Framework makes it easy to see any combination of these at the same time.&#8221;</strong></strong></div>
<p>Having these invisible aspects of the world made visible would create ways to improve sustainability, social equity, urban management, energy efficiency, and public health, and would allow communities to understand and become active participants in the ecosystems and infrastructure of their neighborhoods.</p>
<p>The key is reflecting this kind of data back to people, &#8220;making it not back story but fore story,&#8221; right where we are, right where it happens, as well as having it available for analysis.</p>
<p>As well as creating new opportunities to interact with, respond to, and enhance data, making visible the invisible &#8211; as <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko&#8217;s</a> work on <a href="http://www.amphibiousarchitecture.net/" target="_blank">Amphibious Architecture</a> and <a href="http://www.haque.co.uk/" target="_blank">Usman Haque&#8217;s</a> project <a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a> show &#8211; can also create new connections and understandings between humans and the non-humans that share our world, e.g. fish, plants, waterways.</p>
<p>At a more prosaic level, potential buyers of property could see more clearly what they are buying, city planners could see better what needs to be worked on, and environmental researchers could see more clearly the impact people are having on an area.</p>
<p>Also, Wave can provide some of the framework necessary to begin to address tricky problems of privacy. Sensitive data can be stored on private waves, e.g. medical data for doctors and researchers, but analysis of the data could still benefit everyone, e.g. if it tied disease occurrences to locations and the relationships between environmental data and health were &#8211; quite literally &#8211; made visible.</p>
<p><strong>&#8220;The publication of energy consumption, and making it visible as overlays, could help influence the public into supporting more energy-efficient companies and businesses. It could also help citizens to try to keep their own energy usage down, to try to keep their street in &#8216;the green.&#8217;&#8221;</strong></p>
<p>Thomas notes:</p>
<p><strong>&#8220;With all of the above, it becomes fairly trivial to write persistent Wave bots that automatically send notice when certain criteria are met (pollutants over a certain level, for example). On publicly readable waves, anyone can use the data on their local computers, process it, and contribute results back on a new wave. Alternatively, persistent remote servers could run cron jobs, or other automated processing, using services such as App Engine to run wave robots.</strong></p>
<p><strong>All these possibilities become &#8220;free&#8221; when using Wave as a platform for geographically tied data.&#8221;</strong></p>
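<p>To make the kind of bot Thomas describes concrete, here is a minimal sketch of the threshold check at its core. This is illustrative only &#8211; the data model and function name are my own invention, not the actual Google Wave Robots API:</p>

```python
# Hypothetical sketch of a pollutant-alert wave bot's core logic. The data
# model and function name are illustrative only, not the real Wave Robots API.

POLLUTANT_LIMIT = 50.0  # assumed threshold; units are illustrative

def blips_over_limit(sensor_blips, limit=POLLUTANT_LIMIT):
    """Return the sensor blips whose pollutant reading exceeds the limit.

    A real bot would run a check like this on each wave-update event and
    append a notice blip to the wave for every hit.
    """
    return [b for b in sensor_blips if b["pollutant"] > limit]

readings = [
    {"location": "41st St", "pollutant": 12.0},
    {"location": "Canal St", "pollutant": 73.5},
]
alerts = blips_over_limit(readings)
```

<p>Because the sensor waves are public, anyone could run a variant of this check &#8211; on their own machine or as a hosted robot &#8211; and contribute the alerts back as a new wave.</p>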
<p>But of course this is just the beginning!</p>
<p><em>Recently, I talked at length with Jeremy Hight, who has for quite some time been thinking about, designing, and creating shared augmented realities that anticipate the kind of dynamic, real-time, large-scale architecture we now have available through Wave. This is exciting stuff.</em></p>
<h3><strong>Modulated Mapping:</strong> Talking with Jeremy Hight about Layers, Channels and Social Augmented Experiences</h3>
<p><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping5.jpg"><img class="alignnone size-medium wp-image-4611" title="modulatedmapping5" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping5-230x300.jpg" alt="modulatedmapping5" width="230" height="300" /></a><br />
</strong></strong></p>
<p><strong><strong><em><span>image from Volume Magazine (Hight/Wehby)</span></em></strong></strong></p>
<p><strong><strong>Tish Shute:</strong></strong> I know you have been involved in locative media from its early days. Perhaps we can talk about how AR continues the locative media journey?</p>
<p><a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> gave me this distinction, recently: <em>&#8220;AR is about systems that put media out in the world, and immerse you in a mixed space. Even the current &#8220;not really registered&#8221; mobile phone AR systems are still &#8220;sort of&#8221; AR (e.g., Layar, etc).</em></p>
<p><em>Locative media/ubicomp/etc are very different, in that they tend to display media on a device (phone screen) that is relevant to your context, but does not attempt to merge it with the world.<br />
The difference is significant, and making it clear helps people think about what they do and what they want to do, with their work. The locative media space though points toward future AR systems (when the technology catches up!).&#8221;</em></p>
<p><strong><strong>Jeremy Hight: The need is to finish the arc that locative media and early AR have started and to now truly return to the map itself, but as an internet of data, interactivity, channels of data, end-user options like analog machines once had but in high-end tools, a smart AI-ish ability for it to cull data for the user, and to allow social networking to be in real-world places on the map, both in building augmentation and in using and appreciating it&#8230; not hacks, which have their place&#8230; but a rhizome, a branched system with a shared root, end-user adjustable and variable&#8230; this is the key.</strong></strong></p>
<p><strong><strong>This takes AR and mapping and makes a possible world of channels in space, and this eventually can be a kind of net we see in our field of vision, with a selected percentage of visual field and placement &#8211; so a geo-spatial net, a local to world-wide fusion of lm [locative media] into a tool and educational tool</strong></strong></p>
<p><strong><strong><span>VR [virtual reality] has greatly advanced, but in nodes, as it has limitations&#8230; LM [locative media] is the same&#8230; AR [augmented reality] is the way&#8230;</span></strong><strong> it now has locative elements and aspects of VR integrated into its functionality and nodes&#8230; it is the best option with all of these elements, greater hybridity and data-level potential as well as end-user and community sourcing potential</strong></strong></p>
<p><strong><strong>I wrote an essay for Archis&#8217; Volume, the architecture magazine, on a near-future sense of some of this&#8230; a visual net on the lens like ar but with smart objects and social networking and dissent.</strong></strong></p>
<p><strong><strong>I also wrote of these things for immersive graphic design, spatially aware museum augmentation, education through ar and lm, and a nod to the base interface of eye to cerebral cortex in layered and malleable augmentation in my essay <a href="http://www.neme.org/main/645/immersive-sight" target="_blank">&#8220;Immersive Sight&#8221;</a> a few years back</strong></strong></p>
<div id="gqg9" style="text-align: left;"><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_3dj7g8zf7_b.jpg"><img class="alignnone size-medium wp-image-4601" title="dgznj3hp_3dj7g8zf7_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_3dj7g8zf7_b-300x225.jpg" alt="dgznj3hp_3dj7g8zf7_b" width="300" height="225" /></a></strong></div>
<p><strong><strong>image [above] is a simple illustration of a possible example, on a screen or in front of the eye, where in a Mondrian show the graphic design of information actually builds as one moves</strong></strong></p>
<p><strong><strong>(key is calibrated spatial intervals and related layers of further augmentation which is logical due to location and proximity)</strong></strong></p>
<p><strong><strong>from immersive sight on immersive graphic design:</strong> <em>&#8220;The design can work with this in a way that creates an interactive supplemental set of information that is malleable, shifts based on location, builds and peels away as one moves closer to a work and plays with the forms of the works and the elements of the space itself. The sequence can contain many different elements and their interplay (both in the field of vision and in terms of context and layers of information). This is the model of sections of augmentation turning on and off at key points as individual spatial and concepts moments and nodes.</em></strong></p>
<p><strong><em>Another interesting possibility is that individual points of augmentation don&#8217;t turn off, but instead are designed to build as one moves in a direction toward a specific part of the exhibit. The design can work in a sequence both content-wise and visually, in terms of a delay-powered compositional development and style in which each discrete layer of text and image does not fade out, but builds on the others into a final composition. This can form paintings similar to Mondrian, perhaps, if it is a show of similar works of that era, or it can form something much more metaphorical and open an interpretation of the space and content while utilizing a sense of emergence spatially in terms of the composition (pieces laid bare until final approach for effect). </em></strong></p>
<p><strong><em>Each section will be well designed, but they build in layers as one moves until finally forming the final composition, both visually and in terms of scope of information or building immediacy. The effect can be akin to taking a painting and slicing it into onion-skin layers laid out in the air at intervals, each the same dimensions, but each only one section compositionally of the greater whole. This has many semiotic applications beyond its potential aesthetically and as spatialized information possessing a sense of inter-relationship as one moves.</em>&#8221;</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>One of the things I found very inspiring when I read your papers was that your ideas are not all dependent on a model of AR that would necessarily require goggles, backpacks, and lots of CPU/GPU &#8211; not that that wouldn&#8217;t be nice &#8211; but that even using the &#8220;magic lens&#8221; AR of the kind smartphones have enabled, in an open distributed framework, would open up a lot of new possibilities for what you call modulated mapping, wouldn&#8217;t it? What kind of social augmented realities might be enabled by a distributed infrastructure like this [AR Wave]?</p>
<p><strong><strong>Jeremy Hight: right&#8230; I see that as wayyy down the road&#8230; most important is the one you talk about, as it is more immediate and thus more essential and needed. Eventually the goggles will be like a contact lens and a deep immersive ar version of this will come; that to me is certain, but a ways down the road. An incredible amount is possible now, and this is a more pragmatic move as opposed to the more theoretical of what is a few steps from here. Thus it is more important and essential now. Tools like Google Wave are taking what even 2 years ago were more theoretical discussions of what may be, and instead introducing key elements of a more immediate, powerful, flexible level of augmentation. What have been hacks and isolated elements are to be integrated: social networking, task completion, shared tools and graphics building and geo-location.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>I think some people question what augmented reality has to bring to the continuum of location based experiences that other forms of interface/mapping do not?</p>
<p><strong><strong><span>Jeremy Hight: right&#8230; and the schism between its commercial </span></strong><strong>flat self and tests with physics etc., and in between&#8230; there are a lot of unfortunate assumptions, it seems, as to where ar and lm cross, and how ar can be many things beyond deep immersion or the opposite pole of a hockey puck having a magic purple line etc.&#8230; like lm is seen as either car directions or situationist experiments with deep data&#8230; the progression to me is deeply organic&#8230; and now augmentation can be more malleable, variable and end-user controlled.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yes, it is a really exciting time for AR. Historically, AR research has gone after the hard problems of image recognition, tracking and registration because we have not had available to us dynamic, real-time, large-scale architectures like Wave (until now!), so less work has been done on fully exploring the possibilities for distributed AR integrated with the internet and WWW, hasn&#8217;t it?</p>
<p>A distributed augmented reality framework such as we have envisaged on Wave would allow people to see many layers from many different people at the same time. And this kind of model has been part of your thinking and fundamental to your work for a while, hasn&#8217;t it? But it is a very new idea to most people to think about collaboratively editing layers on the world, and to be able to view augmented space through channels and networked communities. Could you explain some of the ways you have explored these ideas, and how they could be explored further now to create meaningful experiences for people?</p>
<p><strong><strong><span>Jeremy Hight: right, exactly&#8230; modulated mapping to me can be an amazing tool for students&#8230; back-end searching, data visualizations and augmentations based on their needs&#8230; while they do something else on their computer or iphone&#8230; that can be amazing, and not deeply </span></strong><strong>immersive. The map can be active, malleable, open-source fed, and even, in a sense, intelligent and able to adapt. The possibility also exists for this map to have a function that, based on key words, will search databases on-line to find maps, animations, histories and stories etc. to place within it for your study and engagement. The map is thus a platform and yet is active. Community is possible as people can communicate graphically in works placed on the map and in building mode in the tool. All the tropes of locative media are to be in a mapping system of channels of augmentation and a spatial net. The software by design will allow development on the map and communication, like programs such as Second Life, but in mapping itself.</strong></strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modultedmapping1.jpg"><img class="alignnone size-medium wp-image-4607" title="interactive 3d map copy" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modultedmapping1-246x300.jpg" alt="interactive 3d map copy" width="246" height="300" /></a></strong></p>
<p><strong><strong><em><strong><span>image from Parsons Journal of Information Mapping Volume 2 (Hight/Wehby)</span></strong></em></strong></strong></p>
<p><strong><strong><span>I wrote an essay a few years ago for the Sarai reader questioning the traditional map and its semiotics and the need to reconsider it &#8211; then did work looking into it and what those dynamics were, and they got into 2 group shows in museums in Russia&#8230; so it actually was my arc toward modulated mapping&#8230; an interesting way to it! But yes, the map itself&#8230; this is a huge area of potential, and of non-screen-based navigation etc. I see now that my 2 dozen or so essays in lm, ar, interface design and augmentation have all also been leading in this direction for about 10 years now</span></strong></strong></p>
<p><strong><strong>Tish Shute: </strong>I love immersive visualization, but can we &#8220;return to the map &#8211; the internet of data,&#8221; as you mentioned earlier, and produce interesting augmentation experiences that go beyond locative media&#8217;s device display mode without the goggles &#8211; for example, through the magic lens of our smartphones?</strong></p>
<p><strong><strong>Jeremy Hight: yes, absolutely. the map in the older paradigm is an artifice, born often of war and border dispute and not of the earth itself and its processes&#8230; the new mapping, like google maps, is malleable, can be open source, can read spaces, and can be layers of info in the related space, not plucked from it as in the past&#8230; this is amazing. the old map also was born of false semiotics/semantics like &#8220;discovery of new lands&#8221; or &#8220;pioneer,&#8221; while the places were there already and the names often were of empire&#8230; now this is no longer the case</strong></strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping2.jpg"><img class="alignnone size-medium wp-image-4608" title="jeremy map small2 copy" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping2-300x233.jpg" alt="jeremy map small2 copy" width="300" height="233" /></a></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>So is geoAR a better way to express a new social relationship to mapping? And how does this fit into the evolving arc of locative media that evolves into augmented reality?</p>
<p><strong><strong>Jeremy Hight:&#8230;early lm was mostly geocaching and drawing with gps..it took new paradigms to invigorate the field. a lot of folks focus on tools and what already is, cross pollination can ground ideas that are more radical&#8230;a metaphor in a sense to place what can be in a familiar context.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>one of the great disappointments in VR has been its isolation from networked computing and also, up to now, augmented reality &#8211; to achieve an immersive experience with tight registration of media/graphics one has to create a separate system isolated from the internet and the power of the web.</p>
<p><strong><strong>Jeremy Hight: yes&#8230;.this will change. vr is to me an island but ar takes a part of it and shifts the paradigm and new things open this way. Do you know the project <a href="http://www.lifeclipper.net/EN/process.html" target="_blank">&#8220;life clipper&#8221;</a>? friends of mine..doing interesting things..they are a clear bridge between lm and ar&#8230;.and from vr</strong></strong></p>
<p><strong><strong>in ar augmentation and what is being augmented become fused or in collision or in complex interactions as a means to a larger contextualization and exploration of what is being augmented..this is true in immersive or non ar&#8230;.huge potential</strong></strong></p>
<p><strong><strong>vr is a space, now can be surgery which is amazing. but not layered interaction, thus an island and graphic iconography on a location can use symbolic icons which opens up even more layers (graphic designer/information designer in me talking there I suppose..)</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yes! Talk to me more about layers and channels. I think this is one of the most interesting questions for me in augmented reality at the moment &#8211; what can we do with layers and channels and the new possibilities for connections between people and environments that these can create?</p>
<p>The ability for anyone to post something is critical to the distributed idea but one of the reasons I am so excited by Google Wave is I am fascinated by the playback function. How do you think this will enable new forms of collaborative locative narratives (<a href="http://snarkmarket.com/2009/3605" target="_blank">nice post on Wave playback here </a>).</p>
<p><strong><strong>Jeremy Hight: We are in an age of cartographic awareness unseen in hundreds of years. When was the last time that new mapping tools were sold in chain stores and installed in most vehicles? When was the last time that also the augmentation of maps was done by millions (Google map hacks, etc)? The ubiquitous gps maps run in automobiles while people post pictures and graphic pins to denote specific places on on-line maps.</strong></strong></p>
<p><strong><strong>The need is for a tool that combines all of these new elements into an open source, intuitive, layered and rhizomatic map that is porous (like pumice, organic in form yet with &#8220;breathing room&#8221;), ventilated (i.e: adjustable, a flow in and out), and open (open source, open access, open spatialized dialog).</strong></strong></p>
<p><strong><strong><span>I wrote of this in my essay &#8220;Revising the Map: Modulated Mapping and the Spatial Interface&#8221; (</span></strong><a id="h0qr" title="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" href="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf"><span>http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf</span></a><span>)</span></strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping3.jpg"><img class="alignnone size-medium wp-image-4609" title="jeremy map small2 copy" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping3-300x206.jpg" alt="jeremy map small2 copy" width="300" height="206" /></a></strong></p>
<p><strong><em><strong><span>image from Parsons Journal of Information Mapping (Hight/Wehby)</span></strong></em></strong></p>
<p><strong><strong>Tish Shute:</strong></strong> One mapping project I really like is <a href="http://themannahattaproject.org/" target="_blank">Mannahatta</a>.Â  How could distributed AR contribute to a project like <a href="http://themannahattaproject.org/" target="_blank">Mannahatta</a>?</p>
<p><strong><strong>Jeremy Hight: that is a good example..imagine taking manhattan and having channels of options to overlay, that being an excellent option, and imagine being able to even run a few at once with delineating icons..you can augment a space with history, data, erasure, narrative, scientific analysis, time line of architecture, infrastructure, archaeological record etc&#8230;.endless possibilities, and this agitates place and place on a map into an active field of information with end user control&#8230;and open options for new layers</strong></strong></p>
<p><strong><strong>Tish Shute: </strong></strong>and do you think we could do interesting things with AR on a project like Mannahatta even with the current mediating devices we have available &#8211; i.e. our smart phones, as obviously the rich pc experience Mannahatta has built for its web interface would not be available as AR at this point?</p>
<p><strong><strong>Jeremy Hight: yes&#8230;.k.i.s.s right? these projects do not have to only be immersive and graphic intensive&#8230;&#8230;take how people upload photos onto google maps&#8230;.just make that on a menu of options, there are some pretty cool hacks already..<br />
&#8230;options is key, a space can have a community as well, building on it in software, and others navigating it, i see it near future and down the road..always have with ar really</strong></strong></p>
<p><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/locativenarratives1.jpg"><img class="alignnone size-medium wp-image-4596" title="locativenarratives1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/locativenarratives1-230x300.jpg" alt="locativenarratives1" width="230" height="300" /></a><br />
</strong></strong></p>
<p><strong><em><strong><span>image from Volume Magazine (Hight/Wehby)</span></strong></em></strong></p>
<p><strong><strong>Jeremy Hight: and yes, a lot of people focus on ar&#8217;s limitations and processing power needs as a major road block</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>so do you see AR on smart phones adding any value to a project like Mannahatta?</p>
<p><strong><strong>Jeremy Hight: yes&#8230;that it can be integrated into other similar works and even disparate but cloud linked ones&#8230;so a place can be &#8220;read&#8221; in diff ways on the iphone&#8230;.beyond its map location, and more can be possible if you are there&#8230;others away, so it becomes channels of augmentation</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>AR like locative media puts who you are, where you are, what you are doing, what is around you center stage in online experience but it also &#8220;puts media out in the world&#8221; &#8211; people I think understand this well as a single user experience but we are only just beginning to think about how this will manifest as a social experience &#8211; could you explain more about modulated mapping as an experience of social augmentation?</p>
<p><strong><strong>Jeremy Hight: Modulated Mapping is a tool that will allow channels to be run along the map itself. This will allow one to view different icons and augmentations both as systems on the map and in deeper layers of information (photos, videos, animations, visualizations, etc) that can be turned on and off as desired. The different layers of icons and data may be history, dissent, artworks, spatialized narratives, and annotations developed that are communally based on shared interests, placed spatially and far beyond. The use of chat functionality in text or audio will be open in building mode and in mapping navigation/usage as desired. This also allows a community to develop or augment in the spaces on the earth. These nodes can be larger and open or small and set by groups in their channel. The end result is an open source sense of mapping that will also have a needed sense of user control as one can select which layers of augmentation they wish to see and interact with at any time. It also will incorporate all the functionality of locative media in mapping software and mapping. In building mode and in map mode, icons will be coded to represent within channels (remember that the person using it has selected channels of augmentation from many based on their current interests and needs). Icons will be coded as active to show work in progress in cities and the globe to both invite participation and to further agitate the map from the sense of the static, as action is visible even with its icons as people are working and community is formed in common interest/need.</strong></strong></p>
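<p><em>The toggleable channels described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not code from any project mentioned here; every name in it is hypothetical:</em></p>

```python
# Minimal sketch of "channels of augmentation" on a map node: layered
# augmentations that the end user can switch on and off per location.
# All names (MapNode, channel names) are illustrative only.
from dataclasses import dataclass, field

@dataclass
class MapNode:
    lat: float
    lon: float
    channels: dict = field(default_factory=dict)  # name -> list of augmentations
    active: set = field(default_factory=set)      # channels the user switched on

    def add(self, channel, item):
        # anyone can contribute an augmentation to a named channel
        self.channels.setdefault(channel, []).append(item)

    def toggle(self, channel, on=True):
        # end-user control: select which layers to see at any time
        (self.active.add if on else self.active.discard)(channel)

    def visible(self):
        # only augmentations in switched-on channels are rendered
        return [item for name in self.active
                for item in self.channels.get(name, [])]

node = MapNode(34.05, -118.24)
node.add("history", "1904 rail depot stood on this corner")
node.add("artworks", "sound piece placed at this intersection")
node.toggle("history")
node.visible()  # only the history layer is returned
```

<p><em>The point of the sketch is the separation between what a place carries (all channels) and what a given user sees (active channels), which is the &#8220;user control&#8221; Hight emphasizes.</em></p>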
<p><strong><strong>locative media got a buzz for &#8220;reading&#8221; places&#8230;when I helped create locative narrative that was what blew me away back in 2001&#8230;that we could give places a voice by placing data from research and icons on a map&#8230;&#8230;this meant lost history or augmentation was possible as kind of voices of a place and its layers&#8230;&#8230;.I called it &#8220;narrative archaeology.&#8221; We now have tools that can push these ideas and concepts farther..much farther&#8230;and with a range beyond what was before, and then the map was just a tool&#8230;.but now we are returning to the map itself&#8230;..and this as place as much as marker..this is where ar takes the ball to use a bad metaphor</strong></strong></p>
<p><strong><strong>also that project could only work if you came to our spot of a 4 block augmentation and with us there to lend you our gear&#8230;we are far beyond that now but it had its place</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>How do you see &#8220;in context&#8221; AR and something we might call &#8220;context aware&#8221; cloud computing models interacting?</p>
<p><strong><strong>Jeremy Hight: sure&#8230;and I must add that I have issues with cloud computing as much as it is a good idea..</strong>.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>because of loss of autonomy?</p>
<p><strong><strong>Jeremy Hight: tivo is simply a hard drive&#8230;but it keyword reads and gives suggestions..that is the cro-magnon link to what can be</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>The nice thing about Wave is because of the Federation model, the cloud model and local &#8220;store your own data&#8221; models should work together.</p>
<p><strong><strong><span>Jeremy Hight: yes..that is better&#8230;..loss of autonomy also opens up the arbitrary which is the flaw of search engines as we know it&#8230;even Bing fails to me in that sense</span></strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>how do you mean, could you explain?</p>
<p><span> </span><strong><strong><span>Jeremy Hight: spidersÂ  cull from wordsÂ  but cull like trawlers at sea â€¦. tested Bing with very specific requests.. it spat out the same mass of mostly off topic resultsâ€¦.</span><br />
<span> I wonder if there is a way to cull from key words and topics from a userâ€¦not O</span>rwellian back end of courseâ€¦but from their preferences, their searches etc..</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>did you see the discussion on search in the AR Framework doc? AR search will be a massively important thing that will take a lot of intelligence and all sorts of algorithm development won&#8217;t it?</p>
<p><strong><strong>Jeremy Hight: It also has one area of key functionality that moves into more intuitive software. Upon continued usage, the mapping software will &#8220;learn&#8221; and search based on key words used and spheres of interest the user is mapping or observing as mapped and will integrate deeper data and types of animations, etc. into the map or will have them waiting to be integrated upon user approval as desired. Over time the level of sophistication of additions and of search intuition will increase dramatically. The search can also, if the user wishes, run in the back end while working in the mapping program, or in off time as selected while doing other tasks. It also can never be used if one is not interested. One of the key elements of this mapping is that it is not composed of a closed set or needs user hacks to augment, but instead is to evolve and deepen by user controls as desired and designed. Pre-existing data, visualizations and augmentations can be integrated with relative ease.</strong></strong></p>
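<p><em>The &#8220;learning&#8221; search behavior described above could be prototyped with a simple running term-weight profile. This is an assumption-laden sketch, not a description of any existing mapping software; all names are made up:</em></p>

```python
# Rough sketch of the interest-learning idea: the tool keeps a running
# weight for terms the user searches on, then ranks candidate map
# augmentations by overlap with that evolving profile.
from collections import Counter

class InterestProfile:
    def __init__(self):
        self.weights = Counter()  # term -> how often the user has sought it

    def record_search(self, query):
        # each search deepens the profile; no server-side tracking needed
        self.weights.update(query.lower().split())

    def score(self, tags):
        # Counter returns 0 for unseen terms, so unknown tags add nothing
        return sum(self.weights[t.lower()] for t in tags)

    def rank(self, candidates):
        # candidates: {augmentation_name: [tags]}; most relevant first
        return sorted(candidates, key=lambda name: -self.score(candidates[name]))

profile = InterestProfile()
profile.record_search("los angeles rail history")
profile.record_search("union station history")
layers = {
    "1904 depot photo set": ["history", "rail"],
    "mural walking tour": ["art", "murals"],
}
profile.rank(layers)  # -> ["1904 depot photo set", "mural walking tour"]
```

<p><em>Running the profile locally, as here, matches the &#8220;not Orwellian back end&#8221; preference voiced earlier: the interest data never has to leave the user&#8217;s device.</em></p>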
<p><strong><strong>Tish Shute: </strong></strong>One of the things that Joe Lamantia points out about social augmented experiences is that they will operate across a number of different scales &#8211; conversation &gt; product design &amp; build team &gt; neighborhood / town fixing potholes &gt; global community for causes. How do designs for channels and layers change across these different social scales?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> to quote myself&#8230;&#8220;The &#8220;frontier&#8221; is often defined as the space just ahead of the known edge and limit, and where it may be pushed out deeper into the previously unknown. The frontier in the world of ideas is not the warm comfort of what has been long assimilated; and the frontier in the landscape is not of maps, but of places beyond and before them.</strong></strong></p>
<p><strong><strong>The border along what has been claimed is not only that of maps &#8211; it is of concepts, functions, inventions and related emergent industries. Ideas and innovations are like the cloud shape that briefly forms around a jet breaking the sound barrier, tangible yet not fully mapped into measure. It is when things are nailed down into specific entities, calibrated and assessed, that the dangers may inflict themselves &#8211; greed, competition, imitation, anger, jealousy, a provincial sense of ownership either possessed or demanded&#8221;. (from essay in Sarai reader). Otherwise channels and augmentation do not have to be socio-economically stratifying or defined by them. We built 34n for almost nothing on older tools.</strong></strong></p>
<div id="yqjj" style="text-align: left;"><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_1g3svj8fq_b.jpg"><img class="alignnone size-medium wp-image-4599" title="dgznj3hp_1g3svj8fq_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_1g3svj8fq_b-300x225.jpg" alt="dgznj3hp_1g3svj8fq_b" width="300" height="225" /></a></strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_1g3svj8fq_b.jpg"><span> </span></a></strong></div>
<p><strong><em><strong><span>image from 34north 118westÂ  (Spellman/Hight/Knowlton)</span></strong></em></strong></p>
<p><strong><strong>The ar that is not deep immersion can be more readily available and channels can be what end users need like the diversity of chat rooms or range of Facebook users among us.</strong></strong></p>
<p><strong><strong>I had two moments yesterday that totally fit what we talked about. I went to west hollywood book fair and traditional directions off of mapping for driving directions were wrong and we got lost&#8230;our friend could only get a wireless signal to map on itouch and we had to roam neighborhoods then we called a friend who google mapped it and we found we were a block away&#8230;.so a fast geomapping overlay with an icon for the book fair on some optional grid service or community would have made it immediate. Then at the book fair talked to a small press publisher who is trying to map works about los angeles by los angeles authors on a map..she was stunned when I told her it could be a kind of google map feature option</strong></strong></p>
<p><strong><strong>it also has great potential to publish and place writing and art in places..both for commentary and access. imagine reading joyce in chapters where it was written about and then another similar experience but with writers who published on a service into their city.</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong></strong> The challenge of shared augmented realities is not just a matter of shipping bits around, but also of how we will use channels and layers &#8211; to create and negotiate different, distributed perspectives, and understand a shared common core and/or expressions of dissent (this came up in an email conversation with <a href="http://www.oreillynet.com/pub/au/166" target="_blank">Simon St Laurent</a>).</p>
<p><strong><strong><strong>Jeremy Hight:</strong> well my example earlier could have been communal in a way too..a tribe sort of augmentation channeling &#8230;.like subscribing to list servs back in the day but of augmentation communities/channels, and for folks to build and use in shared live form, coordinating too</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong> </strong>one good thing though about building an open AR Framework is that as bandwidth/CPU/hardware gets better shared high def immersive experiences could be supported by the same framework..</p>
<p><strong><strong>Jeremy Hight: excellent</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>were you thinking of the image recognition and tracking with this example?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> yeah&#8230;.like scanning across a multi channeled google map augmentation with diff icons and their connected data&#8230;and poss social networking and file sharing even in that mode&#8230;and rastering etc&#8230;.could be cool with google wave </strong><strong><span>- on the map..then zooming in a la powers of ten..(eames film).</span></strong></strong></p>
<p><strong><strong>-</strong><strong><span>I have pictured variations of this for a few years now in my head like the example of my friends and I yesterday&#8230;we could have correlated a destination by icons in diff channels..one being lit events within lit channel in l.a map&#8230;maybe things streaming on it too&#8230;remote info and video etc&#8230; that would be awesome</span></strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong></strong> So many of the ideas in your paper on modulated mapping (see <a href="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" target="_blank">here</a>) are brilliant use cases for shared augmented realities. Perhaps you could talk more about your ideas about locative narrative because this is something I think is at the core of the kinds of experiences that a distributed AR Framework would make possible?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> on the project &#8220;34 north 118 west&#8221; we mapped out a 4 block area for augmentation of sound files triggered by latitude and longitude on the gps grid and map and the map on the screen had pink rectangles that were the &#8220;hot spots&#8221; where the augmentation had been placed.</strong></strong></p>
<div id="nwc6" style="text-align: left;"><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_0gg994bf9_b.jpg"><img class="alignnone size-medium wp-image-4600" title="dgznj3hp_0gg994bf9_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_0gg994bf9_b-300x225.jpg" alt="dgznj3hp_0gg994bf9_b" width="300" height="225" /></a></strong></strong></div>
<p><strong><em><strong><span>image of interactive map with map based augmentation connected to audio augmentation on site for 34north 118west (Spellman/Hight/Knowlton)</span></strong></em></strong></p>
<p><strong><strong>We researched the history of the area and placed moments in time of what had been there at specific locations&#8230;.I called this <a href="http://www.xcp.bfn.org/hight.html" target="_blank">&#8220;narrative archaeology&#8221;</a> as it allowed places to be &#8220;read&#8221; by their augmentations&#8230;info that was of the place beyond the immediate experience (diff types of info) that otherwise would be lost or only found in books or web sites elsewhere. there now are locative narratives around the world but they need to be linked. from humble origins &#8220;narrative archaeology&#8221; went on to recently be named one of the 4 primary texts in locative media which is pretty amazing to me&#8230;but it is growing</strong></strong></p>
<p><strong><strong>- the limitations then were what I called the &#8220;bowling alley conundrum&#8221; &#8211; the specific data had to reset like pins&#8230;..and was isolated&#8230;.this led me to think about ar back then and up to now. How these could lead to much more from that point, data that would be more layered, variable, fluid..yet still augmented place and sense of place and social networking within data and software</strong></strong></p>
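<p><em>The &#8220;hot spot&#8221; mechanism described here, audio triggered when a GPS fix enters a latitude/longitude rectangle, might be sketched like this. It is a hypothetical reconstruction, not the project&#8217;s actual code, and the coordinates and file names are invented:</em></p>

```python
# Sketch of a lat/long rectangle trigger: when a GPS reading falls inside
# a "hot spot", the sound file placed there is returned for playback.
from dataclasses import dataclass

@dataclass
class HotSpot:
    south: float
    west: float
    north: float
    east: float
    audio: str  # sound file placed at this spot

    def contains(self, lat, lon):
        # simple bounding-box test against the GPS fix
        return self.south <= lat <= self.north and self.west <= lon <= self.east

def triggered(hotspots, lat, lon):
    # return every audio augmentation whose rectangle contains the fix
    return [h.audio for h in hotspots if h.contains(lat, lon)]

spots = [
    HotSpot(34.047, -118.238, 34.049, -118.236, "freight_yard_1915.mp3"),
    HotSpot(34.050, -118.240, 34.052, -118.238, "hotel_lobby_1928.mp3"),
]
triggered(spots, 34.048, -118.237)  # -> ["freight_yard_1915.mp3"]
```

<p><em>Note that overlapping rectangles simply return multiple files, which is one way the &#8220;layered, variable, fluid&#8221; data Hight asks for could grow out of the original single-trigger design.</em></p>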
<p><strong><strong><a href="http://www.lifeclipper.net/EN/process.html" target="_blank">lifeclipper</a> to me is a bridge</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>But Life Clipper is isolated from the internet currently is it?</p>
<p><strong><strong><span>Jeremy Hight: yes&#8230;ours was too.. that is what google wave makes possible.. our project only ran on our gear..in 4 blocks&#8230;with additional auxiliary info online, and not malleable..but hey 2001 and all..</span></strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>so the sites for 34 north 118 west are still active though?</p>
<p><strong>Jeremy Hight: oh yeah!</strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>nice I really like sound augmentation &#8211; have you seen <a href="http://www.soundwalk.com/blog/tag/augmented-reality/" target="_blank">Soundwalk</a>?</p>
<p><strong><strong><span>Jeremy Hight: yes, very cool..</span> </strong><strong>we chose sound only as it fought the power of image..instead caused a person to be in a sense of two places and times at once</strong></strong></p>
<p><strong><strong>Tish Shute:</strong></strong> and in 2001 that was definitely a visionary project!</p>
<p>You must be very excited that finally the pieces are coming together to make this stuff scale!</p>
<p><strong><strong><strong>Jeremy Hight:</strong> I can&#8217;t even tell you!! it is funny..i have known that this would come..just waited and waited&#8230;</strong></strong></p>
<p><strong><strong>..knew it needed the right people and tools..</strong></strong></p>
<p><strong><strong><span>..so the bowling alley conundrum led me to develop my project shortlisted for the iss (international space station) as I thought a lot about how points and works are not to be isolated&#8230;but connected and should be flowing in diff parts of a map&#8230;.to open up perspective and connected augmentations, but also to think about the map again&#8230;not as a base only. then moved into my work with new ways to visualize time and it all really began to gel. The ideas first were published as an essay</span></strong><span> </span><a id="qw.2" title="http://www.fylkingen.se/hz/n8/hight.html" href="http://www.fylkingen.se/hz/n8/hight.html"><span>(http://www.fylkingen.se/hz/n8/hight.html)</span></a><span> </span><strong><span>and later my project blog</span></strong><span> (</span><a id="bp.b" title="http://floatingpointsspace.blogspot.com/" href="http://floatingpointsspace.blogspot.com/"><span>http://floatingpointsspace.blogspot.com/)</span></a></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>One thing I noticed when I was reading your paper is how you have been exploring non-euclidian geometries.Â  Could you explain how this is part of your idea of modulated mapping?</p>
<p><strong><strong><span>Jeremy Hight: Yes, this first came to me when my wife was reading to me from a book on the Poincare Conjecture and I was hit with a new way to measure events in time and after months of sketches, schematics and research came to see how it could also be connected to a geo-spatial web of projects and augmentations. It was published in the inaugural issue of Parsons School of Design&#8217;s Journal of Information Mapping which was an exciting fit.</span></strong><span><strong> I call it &#8220;Immersive Event Time&#8221;</strong> (</span><a id="o3rt" title="http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" href="http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf"><span>http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf</span></a><span>)</span></strong></p>
<p><span><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_4cxz57xgv_b.jpg"><img class="alignnone size-medium wp-image-4634" title="dgznj3hp_4cxz57xgv_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_4cxz57xgv_b-195x300.jpg" alt="dgznj3hp_4cxz57xgv_b" width="195" height="300" /></a></strong></span></p>
<p><span><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_5g68k9ggh_b.jpg"><img class="alignnone size-medium wp-image-4635" title="dgznj3hp_5g68k9ggh_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_5g68k9ggh_b-300x225.jpg" alt="dgznj3hp_5g68k9ggh_b" width="300" height="225" /></a><br />
</strong></span></p>
<p><strong><strong>so the last 3 years I have been working on how it could all work as channels of augmentation, and building and navigation as open and community in a sense as well as ai capability that was the time work especially. how time as experienced within an event is not a time &#8220;line&#8221; but points on and within a form&#8230;.and how this model is better for visualizing events in time and documenting them. it actually sprang from reading a book on the poincare conjecture, popped a bunch of other stuff together so one could visualize an event in time as like being in the belly of a whale..with time as the ribs..and our measure of time as the skin&#8230;and moving within it&#8230;.hoping this will be used as educational tool</strong></strong></p>
<p><strong><strong>and this also can be tied to ar and map again&#8230;how documentation of important events can be kept within icons on a google map..then download varying visualizations based on bandwidth and desired format</strong></strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>I have been thinking about the new forms of social interaction/agency that these kinds of augmentations of space/place/time will create. It seems there are two poles &#8211; one is the area Natalie Jeremijenko explores of shifting social relations from institutions/statistics to real time/location based/interactions and new forms of social agency. The other pole completely is more like the cloud based AI and perhaps crowd sourced machine learning.</p>
<p>Your ideas explore the possibilities of both these poles. And certainly one of the big deals of distributed AR integrated with the web would be the possibilities it opened up both for new forms of networked social relationships and for new ways to draw on network effects.</p>
<p><strong><strong><strong>Jeremy Hight:</strong> and cross pollinations within &#8230;that is what my mind goes to</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>The other night I met Assaf Biderman, MIT, from the <a href="http://senseable.mit.edu/trashtrack/" target="_blank">Trash Track</a> team. Trash Track doesn&#8217;t utilize AR but I could see that there are possibilities there.<br />
What do you think?</p>
<p><strong><strong><span>Jeremy Hight: yes, absolutely,</span> </strong><strong>there can be sort of skins on locations that user end selection can yield&#8230;like channels of place&#8230;.and can range from pragmatic core to art and play and places between&#8230;.how this recalibrates the semiotics of map&#8230;more than just augmentation as seen as a kind of piggy back on map..map becomes interface and defanged platform if you will, interestingly my more poetic/philosophic writing led me here too</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong></strong> I know they are at very different poles of the system but I do wonder how AR can bring some of the level of social agency/interaction that <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko</a> works on into a productive interaction with the kind of innovations in machine learning that Dolores Labs and others are pioneering?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> Natalie&#8217;s genius to me is in practical functional tech that also opens deeper questions and even new openings of what is needed..amazing layers in her work that way.. succinct yet deep..very deep</strong></strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>Yes &#8211; I am just writing a post about her work &#8211; I find it deeply moving the way she has delved into the possibilities of using technology to open us up to our world. One of the reasons I find distributed AR so interesting is because it will make it possible for all kinds of people to create and use augmentation in their lives and communities.</p>
<p>So to return to how a distributed AR framework could contribute to a project like Trash Track?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> what about using it for community, dissent and awareness raising then? like Natalie&#8217;s work but building like a communal work of multiple points, like the old adage of the elephant and the blind men&#8230; sorry..metaphor &#8211; like one of my points in immersive sight was how one could take augmentation as multiple works sort of turning the faces of a thing or place&#8230;and how this would make a larger work even in such a flow so people moving in a space could also build..</strong></strong></p>
<p><strong><strong>what of ar traces left as people move calibrated to user traffic and trash as estimated in an urban space&#8230;like it goes back to chris burden in the 70&#8242;s making you know that as you turn the turnstile you are drilling into the foundation and may be the one that collapses the building?</strong></strong></p>
<p><strong><strong>so their movements leave trash. Natalie is all about raising awareness to cause and effect and data, space and ecology. love that. so maybe &#8230;<br />
a feedback loop, artifact and user end responsibility can leave traces &#8230;trash&#8230;</strong></strong></p>
<p><strong><strong>.. cybernetics vs ecology and human waste</strong></strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>could you elaborate?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> brain fart&#8230;that the mass of trash people leave is a piece at a time&#8230;.and how like the space shuttle mission when it was argued the first true cybernaut occurred&#8230;.one cord to air for astronaut..one for computer on their back to fix broken bay arm&#8230;if there is a way to build on that and in relation to the topic&#8230;..how this can go further, that machines do not waste as much&#8230;as ar is a means to cybernetic raise awareness..eh..</strong><strong> sensors etc&#8230;wearables too &#8211; could be eco awareness with data and machine and human</strong></strong></p>
<p><strong><strong>what about a cloud computing system with a slight ai in the sense of intuitive word cloud and interest scans&#8230;..so as one moves through say new york they can be offered new ai data and services as they move? could also be of eco interests? concerns about urban farming, eco waste, air pollution etc&#8230;.perhaps with (jeremijenko element here) sensors placed in locations and these also giving data reads in public areas with no input but hard data itself&#8230;&#8230;hmm..could be interesting</strong></strong></p>
<p><strong><strong>it can also give info of the carbon footprints (estimated prob unless data is public record somehow) of chain businesses and data on which are more eco friendly as well as an iconography color coded and icon coded to the best places to go to support greening and eco friendly business? and the companies could promote themselves on this service to attract eco aware customers who would be seeing them as kindred spirits and helping the<br />
larger effort?</strong></strong></p>
<p><strong><strong>kind of eco mapping..and ar on mobile app</strong></strong></p>
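<p><em>As a purely hypothetical sketch of the color-coded iconography idea above &#8211; bucketing an estimated carbon footprint into a traffic-light icon an AR layer could draw over a business &#8211; the function name and thresholds below are invented for illustration, not real certification bands:</em></p>

```python
# Hypothetical sketch: map an estimated annual carbon footprint
# (kg CO2) to a traffic-light icon color for an eco-mapping AR layer.
# Thresholds are illustrative only, not real certification bands.

def eco_icon(footprint_kg_co2_per_year, green_max=50_000, amber_max=250_000):
    """Return 'green', 'amber', or 'red' for a footprint estimate."""
    if footprint_kg_co2_per_year <= green_max:
        return "green"
    if footprint_kg_co2_per_year <= amber_max:
        return "amber"
    return "red"

print(eco_icon(30_000))   # green
print(eco_icon(400_000))  # red
```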
<p><strong><strong>what about sensors that read air pollution levels, levels of solar radiation (to aid with skin protection in shifting light values in a city space..ie put on some skin cream now&#8230;), light sensors that detect density and over density in public spaces&#8230;to use the old trope in art of reading crowds in a space..but instead could indicate overcrowding, failing infrastructure in public spaces (which is a congestion that leads to greater pollution levels as well as flaws in city planning over time..), and perhaps a tie in to wearables&#8230;&#8230;worn sensors Â on smart clothes&#8230;.this could form a node network of people in the crowds &#8230;.and also send data within moving in a space&#8230;</strong></strong></p>
<p><strong><strong>here is a kooky thought&#8230; what of taking the computing power and data of people moving in a space..and not only get eco data and make available to them levels of<br />
data..but make possibly a roving super computer&#8230;crunching the deeper data of people open to this&#8230;&#8230;a hive crunching deeper analysis of the space, scan properties from sensors, and even a game theory esque algorithm of meta data if say 40 people out of 50 hit on a certain spike or reading&#8230;and even their input&#8230;..I worked in game theory for paleontology in this manner for a time as a teen&#8230;.a private project&#8230;&#8230; the reading can lead to a sort of meta read by what hits most consistently..as well as in their input..text of what they experienced, observed, postulated, analyzed even&#8230;. this could be really interesting&#8230;even if just the last part from collected data and not from any complex branching of servers..</strong></strong></p>
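<p><em>Jeremy&#8217;s &#8220;40 people out of 50&#8221; idea is essentially threshold consensus over crowd-sourced sensor readings. A minimal sketch of that aggregation &#8211; names and thresholds are hypothetical, not part of any system discussed here:</em></p>

```python
# Hypothetical sketch of the "40 out of 50" consensus idea:
# flag a sensor reading as significant only when a threshold
# fraction of participants in a space report the same spike.

def consensus_spikes(readings, threshold=0.8):
    """readings: dict mapping participant id -> set of observed spike labels.
    Returns the labels reported by at least `threshold` of participants."""
    if not readings:
        return set()
    n = len(readings)
    counts = {}
    for spikes in readings.values():
        for label in spikes:
            counts[label] = counts.get(label, 0) + 1
    return {label for label, c in counts.items() if c / n >= threshold}

# Example: 40 of 50 participants report a CO2 spike, 10 report noise.
readings = {i: {"co2"} for i in range(40)}
readings.update({i: {"noise"} for i in range(40, 50)})
print(consensus_spikes(readings))  # {'co2'}
```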
<p><strong><strong>I thought at 19 or so that the flaw in paleontology was in how so many larger theories were shifting exhibitions and larger senses of things like were there pre-historic birds that were mistaken for amphibian and then back again&#8230;.so why not make a computer program and feed all the papers published into it and see what hits were counted in terms of an emerging meta theory&#8230;and landscape of key points being agreed upon&#8230;this data would be in a sense both algorithmic and a sort of unspoken dialogue &#8230;came from a lot of study of game theory one summer&#8230;</strong></strong></p>
<p><strong><strong>hope this makes some sense&#8230;I forgot to mention that I originally planned to be a research meteorologist and my plan in middle school or so was to get a phd and develop new software to have a global map and then run models of hypothetical storms across it in real time animations of cloud forms, radar and wind analysis/fields, barometric pressure spaghetti charts etc&#8230;.and to also do 3d cut away models of storm architectures&#8230;so been into visualizations of complex data and mapping for a long time!</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>Wow let me think about this one!</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/feed/</wfw:commentRss>
		<slash:comments>18</slash:comments>
		</item>
		<item>
		<title>Augmented Reality &#8211; Bigger than the Web: Second Interview with Robert Rice from Neogence Enterprises</title>
		<link>https://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/</link>
		<comments>https://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/#comments</comments>
		<pubDate>Mon, 03 Aug 2009 23:24:12 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR Platform for Platforms]]></category>
		<category><![CDATA[ARConsortium]]></category>
		<category><![CDATA[ARToolkit]]></category>
		<category><![CDATA[Augmented Reality Browsers]]></category>
		<category><![CDATA[augmented reality platforms]]></category>
		<category><![CDATA[augmented reality SDKs]]></category>
		<category><![CDATA[augmented reality toolsets]]></category>
		<category><![CDATA[Dr Chevalier]]></category>
		<category><![CDATA[Gavin Starks]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Green Tech AR]]></category>
		<category><![CDATA[Imagination AR Engine]]></category>
		<category><![CDATA[iphone and augmented reality]]></category>
		<category><![CDATA[iphone augmented reality]]></category>
		<category><![CDATA[iphone Video API and augmented reality]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Lumus]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[markers and Webcam AR]]></category>
		<category><![CDATA[Mobile AR]]></category>
		<category><![CDATA[MoMo]]></category>
		<category><![CDATA[nathan freitas]]></category>
		<category><![CDATA[Neogence Enterprises]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[Unifeye Augmented Reality]]></category>
		<category><![CDATA[wearable displays for augmented reality]]></category>
		<category><![CDATA[Web Squared]]></category>
		<category><![CDATA[Wikitude]]></category>
		<category><![CDATA[World as a Platform]]></category>
		<category><![CDATA[World Browsers]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4184</guid>
		<description><![CDATA[I first started talking to Robert Rice, CEO of Neogence Enterprises, Chairman of the AR Consortium, in 2008. Robert was already actively working on creating the world&#8217;s first global augmented reality network. But it took a few months before what Robert had said to me about impending explosion of augmented reality into our lives really [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/whowhowhere.jpg"><img class="alignnone size-medium wp-image-4186" title="Questions and Answers signpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/whowhowhere-300x199.jpg" alt="Questions and Answers signpost" width="300" height="199" /></a></p>
<p>I first started talking to <a href="http://www.curiousraven.com/about-me/" target="_blank">Robert Rice</a>, CEO of <a href="http://www.neogence.com/#/home" target="_blank">Neogence Enterprises</a>, Chairman of the <a href="http://docs.google.com/AR%20Consortium"><span>AR Consortium</span></a><span>, in 2008. Robert was already actively working on creating the world&#8217;s first global augmented reality network. But it took a few months before what Robert had said to me about the impending explosion of augmented reality into our lives really sunk in &#8211; &#8220;this is going to be much bigger than the Web</span>!&#8221; he extolled.</p>
<p>By January, 2009 I was convinced and I posted my first interview with Robert, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it OMG Finally for Augmented Reality?..&#8221;</a> As I mentioned in the intro, I had recently tried out <a href="http://www.wikitude.org/" target="_blank">Wikitude</a> and <a title="Nat Mobile Meets Social DeFreitas" href="http://openideals.com/" target="_blank">Nathan Freitas&#8217;s</a> graffiti app on the streets of New York City and I was impressed. Now, 7 months later, Augmented Reality has not disappointed and there is an explosion of new applications, and the arrival of some of the first commercial and practical toolsets, SDKs, and APIs for aspiring developers.</p>
<p>For more on this see my previous post, <a title="Permanent Link to Augmented Reality&#8217;s Growth is Exponential: Ogmento &#8211; &#8220;Reality Reinvented,&#8221; talking with Ori Inbar" rel="bookmark" href="../../2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/">Augmented Reality&#8217;s Growth is Exponential: Ogmento &#8211; &#8220;Reality Reinvented,&#8221; talking with Ori Inbar,</a> which is an introduction to my series of interviews with the key players in augmented reality and founding members of the <a href="http://www.arconsortium.org/" target="_blank">ARConsortium</a> &#8211; <a href="http://www.int13.net/en/" target="_blank">Int13</a>, <a href="http://www.metaio.com/" target="_blank">Metaio</a>, <a href="http://www.mobilizy.com/" target="_blank">Mobilizy</a>, <a href="http://www.neogence.com/" target="_blank">Neogence Enterprises</a>, <a href="http://ogmento.com/">Ogmento</a>, <a href="http://www.sprxmobile.com/" target="_blank">SPRXmobile</a>, <a href="http://www.tonchidot.com/" target="_blank">Tonchidot</a>, and <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a>.</p>
<p>As I mentioned before<span>, </span><a href="http://www.sprxmobile.com/about-us/" target="_blank"><span>Maarten Lens-FitzGerald</span></a><span> of </span><a href="http://www.sprxmobile.com/" target="_blank"><span>SPRXmobile</span></a><span> told me the other day that my first </span><a href="http://docs.google.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank"><span>Interview with Robert Rice</span></a><span>, in January of this year, was a key inspiration for SPRXmobile to get started on the development of </span><a href="http://layar.eu/" target="_blank"><span>Layar &#8211; a Mobile Augmented Reality Browser</span></a><span>. Much more on Layar and </span><span>Wikitude</span><span> &#8211; world browser in my upcoming interviews with </span><a href="http://www.sprxmobile.com/about-us/" target="_blank"><span>Maarten Lens-FitzGerald</span></a><span> and <a href="http://www.mamk.net/" target="_blank">Mark A. M. Kramer</a>, respectively</span>.</p>
<p>Recently, both Layar and Wikitude earned a mention in the white paper by Tim O&#8217;Reilly and John Battelle, <a href="http://www.web2summit.com/web2009/public/schedule/detail/10194" target="_blank">Web Squared: Web 2.0 Five Years On</a>. Web Squared is essential reading not only because it covers the underlying technological shifts of &#8220;Web Meets World,&#8221; which augmented reality is a vital part of; but, crucially, Web Squared focuses on how there is a new opportunity for us all:</p>
<p><strong>&#8220;The new direction for the Web, its collision course with the physical world, opens enormous new possibilities for business, and enormous new possibilities to make a difference on the world&#8217;s most pressing problems.&#8221;</strong></p>
<p>I am currently working on a post on Green Tech AR which is one of the areas augmented reality can play an important role &#8220;in solving the world&#8217;s most pressing problems.&#8221; Augmented Reality has a lot to offer Green Tech development. As <a href="http://twitter.com/AgentGav" target="_blank">Gavin Starks</a> of <a href="http://www.amee.com/" target="_blank">AMEE</a> said at <a href="http://wiki.oreillynet.com/eurofoo06/index.cgi" target="_blank">Euro Foo in 2006</a>, &#8220;climate change would be much easier to solve if you could see CO2.&#8221;</p>
<p>But really useful Green Tech AR still requires hard-to-do markerless object recognition (going beyond feature tracking and modified marker recognition), and a tight alignment of media/graphics with physical objects, in addition to quite a high level of instrumentation of the physical world. And for Green Tech AR to really shine, we are going to need innovators like Robert Rice who are working on, and solving, multiple really hard problems like:</p>
<p><strong> &#8220;</strong><strong>privacy, media persistence, spam, creating UI conventions, security, tagging and annotation standards, contextual search, intelligent agents, seamless integration and access of external sensors or data sources, telecom fragmentation, privilege and trust systems, and a variety of others</strong><strong>.&#8221;</strong></p>
<p>Recently Robert Rice <a id="ph56" title="presented" href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>presented</span></a><span> at </span><a href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>MoMo</span></a><span> Amsterdam. </span> Here is a drawing of him in action (<a href="http://www.flickr.com/photos/wilgengebroed/3591060729/" target="_blank">picture below</a> from <a title="Link to wilgengebroed's photostream" rel="dc:creator cc:attributionURL" href="http://www.flickr.com/photos/wilgengebroed/"><strong>wilgengebroed</strong></a>&#8216;s Flickr Stream).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRiceMoMOdrawing.jpg"><img class="alignnone size-medium wp-image-4185" title="RobertRiceMoMOdrawing" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRiceMoMOdrawing-300x184.jpg" alt="RobertRiceMoMOdrawing" width="300" height="184" /></a></p>
<p>In his Twitter feed ( <a href="http://twitter.com/robertrice" target="_blank">@RobertRice</a> ) Robert Rice reminds us: &#8220;<span><span>By the way folks, what you see out there now as &#8220;augmented reality&#8221; is not what it is going to be in two years.&#8221; Robert plans to show the first public demo of his &#8220;platform for platforms&#8221; at <a href="http://gamesalfresco.com/ismar-2009/ismar-08/" target="_blank">ISMAR 2009</a>. </span></span></p>
<p>Robert is writing up a series of White Papers currently. I got a preview of the first, &#8220;The Future of Mobile &#8211; Ubiquitous Computing and Augmented Reality.&#8221; Robert points out, <strong>&#8220;AR through the lens of the mobile industry and ubiquitous computing is almost overwhelming compared to AR as a marker based marketing campaign.&#8221;</strong></p>
<p>I asked Robert, &#8220;What are the key take-aways for investors interested in the augmented reality field at the moment?&#8221;</p>
<p><strong><span>&#8220;First, Mobile AR is going to be bigger than the web. Second, it is going to affect nearly every industry and aspect of life. Third, the emerging sector needs aggressive investment with long term returns. Get rich quick start ups in this space will blow through money and ultimately fail. We need smart VCs to jump in now and do it right. Fourth, AR has the potential to create a few hundred thousand jobs and entirely new professions. You want to kick start the economy or relive the golden days of 1990s innovation? Mobile AR is it.</span></strong></p>
<p><strong><span> Don&#8217;t be misguided by the gimmicky marketing applications now. Look ahead, and pay attention to what the visionaries are talking about right now. Find the right idea, help build the team, fund them, and then sit back and watch the world change. Also, AR has long term implications for smart cities, green tech, education, entertainment, and global industry. This is serious business, but it has to be done right. I&#8217;m more than happy to talk to any venture capitalist, angel investor, or company executive that wants to get a handle on what is out there, what is coming, and what the potential is. Understanding these is the first step to leveraging them for a competitive edge and building a new industry. Lastly, AR is not the same as last decade&#8217;s VR.&#8221;</span></strong></p>
<h3>Talking with Robert Rice</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRicepic.jpg"><img class="alignnone size-medium wp-image-4195" title="RobertRicepic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRicepic-201x300.jpg" alt="RobertRicepic" width="201" height="300" /></a></p>
<p><em><a href="http://www.flickr.com/photos/vannispen/3586765514/in/set-72157619022379089/" target="_blank">Picture of Robert Rice</a> at <a href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>MoMo</span></a> from <a href="http://www.flickr.com/photos/vannispen/"><strong>Guido van Nispen</strong></a>&#8216;s Flickr Stream</em></p>
<p><strong>Tish Shute:</strong> So perhaps we better start with an update on state of play with Neogence?</p>
<p><strong>Robert Rice:</strong> Neogence is doing well actually. We don&#8217;t talk much about the fact that we are still a small startup and we face a lot of the usual obstacles related to that and being a small team. Fundraising has been extra difficult, mostly because people are just now beginning to see the potential in AR, but that is still colored by perceptions based on a lot of the gimmicky AR ad campaigns out there. Still, it is better than it was two years ago, when the idea of an AR startup was a bit of a joke to a lot of VCs we talked to. However, we do have an agreement from a new venture fund in Europe (which we can&#8217;t talk about yet) for our first round of funding, but we don&#8217;t expect to close that for several months.</p>
<p>If all goes well, we hope to debut our first public demo at ISMAR 2009 in Orlando to select individuals and a few press folks. We might release a few viral videos before then that are conceptual and about what we are building in the long run, <span>but that depends on how things go over the next several weeks</span>.</p>
<p>We are also very active in looking for and building strategic partnerships and relationships with other companies, and this is not restricted to the augmented reality or mobile sector. As I have said before, we are looking at this as a long term business venture and the industry as something that will be bigger than the web itself within ten years. We are doing typical contract work and custom AR solutions to keep the cash flow going and build up the corporate resume a bit. So, if you want something done, and better than the stuff you are seeing now with all of the generic &#8220;look at our brand in AR with markers and a webcam&#8221; you should definitely give us a call.</p>
<p style="margin-left: 0pt; margin-right: 0pt;"><strong>Tish Shute:</strong> Just to clarify because most of the recent press has been about browser type AR like Wikitude and Layar which are not in the purist sense AR &#8216;cos they do not have graphics tightly linked to the physical world. Neogence, if I am correct, is focused on building a true AR platform in the sense I just described?</p>
<p><strong>Robert Rice: </strong>Hrm, I<span> </span><span> have argued with a few others about the actual definition of AR. Some</span> people prefer a narrow and limiting view (3D overlaid on video), but I think in terms of the market and the end-user, it is better to have a wider definition. In that sense, AR is purely the blend of real and virtual, with or without full 3D overlaid on video. If we go with that, then Wikitude, Layar, Sekai, NRU, and others all fit into the AR definition.</p>
<p>Anyway, you are correct. We are building a true <span>platform for AR, and this is quite different from what others are marketing as AR browser &#8220;platforms.&#8221;</span></p>
<p><span>There are a few problems with the &#8220;AR Browsers&#8221; approach that no one seems to be noticing. </span>One is that they are all trying to get people to build new applications for their browsers, when they should be trying to get people to create content that they can share and browse.</p>
<p>Second, someone using Layar is not going to see anything that is designed for Sekai or Wikitude.</p>
<p>Third, the experiences are generally for one user. While I love all of these guys and think each of the teams has some real talent on it, the model is flawed until someone using Wikitude can see the same thing that someone using Layar or Sekai camera is seeing (provided they are in the same physical location).</p>
<p><span>While we are working on our own client side technologies that we hope will be useful and integrated with every mobile device and AR browser out there, our core focus is on connecting everything and everyone together, and facilitating the growth of the industry with the tools to create content, applications, and so forth. We want to solve the really difficult technical problems (some of which most people haven&#8217;t even considered yet, because of the perspective they are looking at the potential of AR with), and make it easy for everyone else to do the cool stuff. We want to be the facilitators.</span></p>
<p>If you really want an idea of where we are going or some of what has inspired us, you have GOT to read Dream Park, Rainbows End, and The Diamond Age. If you have heard me speak anywhere or read my blog, you know that I am continually suggesting these and others.</p>
<p>Anyway, short answer, yes, we are building a true <span>platform for </span><span>ubiquitous mobile augmented reality, and we are absolutely the first to be doing so</span>.<span> I hope to demo some of this in October at ISMAR, with a full commercial launch next year (10/10/10 at 1010am Hehe, seriously). We will probably launch a website soon for people to start signing up and building a community now (especially if you want in on the beta testing of the whole kibosh).</span></p>
<p><strong>Tish:</strong> So just to clarify, how will Neogence&#8217;s approach differ and fit into the growing world of Augmented Reality tools that we have now, e.g., <a href="http://www.hitl.washington.edu/artoolkit/" target="_blank">ARToolkit</a>, <a href="http://www.imagination.at/en/?Projects:Scientific_Projects:MARQ_-_Mobile_Augmented_Reality_Quest" target="_blank">Imagination</a>, <a href="http://www.metaio.com/products/" target="_blank">Unifeye</a>?</p>
<p><strong>Robert:</strong> I guess you could say that we are trying to build the infrastructure for the global augmented reality network. This could be viewed as a service, or even a platform for platforms. If Neogence does its job right, anything you create using ARtoolkit, Unifeye, or Imagination would be applications you could <span>ultimately link to, integrate with, or deploy on or through</span>, what we are building, and not be tied to a specific set of hardware, browser, or walled garden.</p>
<p><strong>Tish: </strong><span>You mention Neogence is going to provide a platform for platforms. Without knowing the details that sounds like a lot of centralization which prompts the inevitable question: &#8220;Who owns the data?&#8221; Do you think other AR applications or provid</span>ers would resist a &#8220;Platform for Platforms&#8221;? I know the potential centralization power of Google Wave has already got people talking about these issues (one of the comments in my recent blog post was about how Google Wave protocol may be interesting for at least some parts of augmented reality communication).</p>
<p><strong>Robert:</strong> It really depends on perception and how we end up <span>building it. We aren&#8217;t talking about creating a closed system. As far as who owns the data, it depends on what data we are talking about. For the most part, I think that if the end-user creates something, they should own it and have control over it. They should also be able to do what they want with it, independent of everything else. </span></p>
<p><span>This is one thing that proponents of the smart cloud and the thin/dumb client don&#8217;t like to talk about. It sounds great on paper, but when you start thinking about it, all that does is strip away power from the end user. Case in point&#8230; Amazon recently wiped every copy of George Orwell&#8217;s 1984 from all Kindle devices. They claimed they didn&#8217;t have rights to distribute/publish it and it was available by accident. The scary thing though, is that they literally went into every kindle out there, found copies, and deleted them.</span></p>
<p><span> How would you like it if Microsoft suddenly decided to delete every copy of Microsoft Office? Or every file that had a .doc extension? That is a huge violation&#8230; we feel like we own what is on our computers. But with the whole cloud thing, your data is at the mercy of whoever is running the cloud servers. No privacy, no ownership, no control. And if the system breaks, all you will have is a pretty dumb device that can&#8217;t do much on its own. Now, that isn&#8217;t to say that the technical merits and benefits of a cloud model aren&#8217;t worth pursuing, they are.</span></p>
<p><span> But I think there needs to be some hybrid model. Don&#8217;t dumb down my computer or my smart phone, let&#8217;s keep pushing how much these devices can do. We should take full advantage of centralized and distributed systems, but in a hybrid mashup sense. That is what we are pursuing with our AR platform, while trying to protect ownership and intellectual property rights of the end user.</span></p>
<p><strong>Tish: </strong>Earlier today I was telling you how impressed I was by Google Wave &#8211; it is quite mind blowing to experience massively multiplayer real time interaction on what will be an open internet wide platform &#8211; Wave is breaking new ground here and more than one person has mentioned its potential role in AR to me (see <a href="http://www.ugotrade.com/2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/" target="_blank">the comments to my recent post on Ogmento</a>).</p>
<p>I know you are a strong advocate of this kind of real time shared experience being part of AR. But we are only just beginning to see it emerge via Wave on the existing web &#8211; what will it take to have this kind of real time shared experience in AR! We got briefly into the thick client, thin client, cloud versus P2P discussions &#8211; what is your approach to delivering a massively shared real time experience that is, like Wave, not confined to a walled garden?</p>
<p><strong>Robert:</strong> I&#8217;<span>m not a fan of any of those models as being stand alone or mutually exclusive. Again, the hybrid model with the best of both worlds is key. In the early stages of the emerging industry, you are likely to see some walled gardens (or perhaps a walled garden of walled gardens&#8230;). </span></p>
<p><span>No one knows how things are going to turn out in the next five to ten years and few people are thinking about it actively. For us though, I favor Alan Kay&#8217;s quote (pardon the paraphrasing): &#8220;To accurately predict the future, invent it&#8221;. That&#8217;s what we are doing. In the short term, there will be plenty of experimentation in the industry and a lot of model testing.</span></p>
<p><strong>Tish: </strong>Do you think though Wave protocols might be useful as at least part of the picture for AR standards?Â  As you point out open standards and open protocols are going to be vital for shared experiences of AR.Â  Is it important to build off existing protocols to get the ball rolling and what do you see as being the important early protocols for AR?</p>
<p><strong>Robert:</strong> I think for now, we will use a lot of existing protocols for communications and whatnot, as well as the usual standards for things like 3D models, animation, and so forth. This is only natural. However, as the industry and technology evolves, we will need entirely new ones. As far as I know there is no existing market standard for anything like the Holographic Doctor from Star Trek Voyager, and that type of thing is definitely in the pipeline for the future (sooner than you would think).</p>
<p><strong>Tish:</strong> All the excitement at the arrival of the browser-like mobile reality developments has been really great &#8211; I feel people are getting a taste for what it means to compute with anyone/anything, anywhere, and anytime.</p>
<p>Wikitude started the ball rolling. And with Wikitude.me it is the first to support user generated content. Now there are Layar and Sekai Camera also. But as you mentioned to me in an earlier chat, with Layar and Wikitude opening up, &#8220;there are probably a half dozen other apps coming out in short order with similar functionality (even the AR twitter thing has some similarities).&#8221;</p>
<p>What has been most exciting to you about these developments up to this point? What will these apps/platforms need to do to stand out in a crowd? Up to now, these browser-like AR experiences do nothing with close-by objects. Do you see &#8220;world browsers&#8221; with near object recognition coming out in the near future? Could Wikitude do this with an integration of SRengine or Imagination?</p>
<p><strong>Robert:</strong> Yes, Wikitude<span> or Layar could do this (integrate with something else for &#8220;near&#8221; AR) and it would be a step in the right direction. Tagging things in the real world is the basic functionality that will grow from text tags to photos, videos, 3D objects, and all sorts of other types of data and meta data. This gets really fun when that data is generated by the object itself. First is just giving people the ability to tag something and share that tag with their friends, everything else grows from that. This sort of functionality is probably the most exciting in terms of near future advancement.</span></p>
<p><span>However, I think the idea of a stand-alone</span> browser platform is a bit awkward&#8230;unless you also consider Firefox a website browser platform. After all, you can create widgets (applications) for it. Anyway, the point is having access to the same data&#8230;if you put three people in a room, one for each browser, they should see and experience the same content, although the interface might be different (based on which browser and of course which hardware they are using). This means there needs to be some communication between whatever servers they are storing their data on (meaning, user tags) and some standard for how those tags are created.</p>
<p>Of course, if all they are doing is grabbing the GPS coordinates of the nearest subway station and telling you how far it is and in what direction, then they should all be able to see the same thing, regardless of the platform. But then, that isn&#8217;t really interesting is it? I could get the same info on a laptop with Google Maps.</p>
<p>This is part of the problem right now though&#8230;no one seems to be thinking about the bigger picture much. All of the effort is either on making the next cool ad campaign for a car or a movie, or creating a tool to tell you where the nearest thingamajig is, but in a really cool fashion on a mobile device.</p>
<p>No one is talking much about filtering data, privilege systems, standards, third party tools, interoperability, and so on. There is also little conversation about where hardware is going. Right now everyone is developing software based on what hardware is available. This needs to change where hardware is being developed to take advantage of new software coming out (this happened in the PC industry a while back and growth accelerated dramatically).</p>
<p>These are some of the reasons why I led the effort to start the AR Consortium. We brought CEOs from 8 different AR companies and startups together to start talking about these issues. We are still getting organized and have plans to expand the membership to other companies, but we want to do this right and we aren&#8217;t rushing things. The important thing is that we have started and there is at least a line of communication open now, where there wasn&#8217;t before.</p>
<p>I would expect to see the early movers expanding what they offer very soon, and they will probably lead the way in the short term. Definitely keep an eye on the companies involved in the AR Consortium. There are lots of very smart and motivated people there, and they are far ahead of all the experimental dabbling in AR we are beginning to see on youtube, twitter, and elsewhere.</p>
<p><strong>Tish: </strong>When we had a discussion about what were the basics for an AR platform and an AR browser earlier, you talked about the difference between tools, a platform, and an AR browser &#8211; like Wikitude and Layar &#8211; which should be about features/functionality, e.g. creating treasure hunts, AR geocaching, invisible AR yellow sticky notes you can leave at restaurants you don&#8217;t like, etc. Also you noted it should let you explore (browse) multiple formats and open content for AR &#8211; any data, information, or media that is linked to something in the real world, and the visualization/interaction with the same.</p>
<p>Wikitude<span> is a stepping stone to a true browser by your definition. But are we also seeing what you would define as an AR platform emerging &#8211; Unifeye, Wikitude (you can recap your definition if you like too)?</span></p>
<p>I think Wikitude hopes to provide the Lego blocks for augmented reality readers, browsers, applications, tools, and platforms?</p>
<p><strong>Robert:</strong> I expect some segmentation among the various AR companies that are out now, as they find their individual strengths and focus on them. Some will emphasize the client software (the browser), others will develop robust tools for creating content, SDKs/APIs will advance and facilitate rapid development of applications, etc. Neogence is ultimately working on the glue in the middle that ties everything together, makes it massively multiuser, persistent, and ubiquitous. Things like Unity3D have the potential to fill a need in the middleware space.</p>
<p><strong>Tish:</strong> I know <a href="http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/" target="_blank">Blair MacIntyre</a> (see my interview with Blair here) and others are using Unity3D as an AR client. Could Unity3D become increasingly important?</p>
<p><strong>Robert:</strong> It has the potential to become a favored middleware for providing the rendering layer. It already works nicely in regular browsers, and on several mobile platforms. Why code all the graphics rendering stuff from scratch when you can just license something and extend its features with AR functionality?</p>
<p><strong>Tish:</strong> Now to ask your own question back to you! There seems to be a lot of reason to think that, eventually, there will be the kind of access to the iphone video API that augmented reality really requires &#8211; and by that I mean more than we will get with OS 3.1, which is rumored to deliver only about half of what we really need for AR on the iphone: &#8220;not truly useful when you want to align video with graphics.&#8221; So:</p>
<p><em>&#8220;The iphone&#8230;future or failure? Seemingly anti-developer stance regarding augmented reality, and only a sliver of the global market share. Are we letting the short term glitz of Apple and the iPhone fad pull us in the wrong direction? Shouldn&#8217;t we be focusing on Symbian devices that have the lion&#8217;s share of the market? Or should we be looking more at either other OSs (winmobile, android), or not at all, and trying to create a new platform that is more MID and less smart phone with a hardware partner?&#8221;</em></p>
<p><strong>Robert:</strong> Apple and the iphone are a bit problematic right now. There is no way I can go to a venture capitalist (at least in North America) and say hey we are building awesome AR applications for winmobile or symbian&#8230;they would either laugh or they simply wouldn&#8217;t get it. There is this false perception that the iphone is the ultimate mobile device, it is the sexiest, and the only thing that people want. Everyone wants a demo on the iphone, the media is mostly interested in iphone developments, and the apple fanatic market could give a fig about other devices. Other devices may have a larger market share or even better hardware, but we have to focus on the iphone right now at least in the demo stage to get any market attention and traction worth the time and effort.</p>
<p>In the future though, unless Apple changes its stance with their SDK and APIs, and starts adding hardware that is key for mobile AR (beyond what is there now), the market will move on without them. <span>This is a really easy decision to make given Apple&#8217;s draconian policies and the fact that their percentage of the global market is miniscule. The smart companies are looking at the whole picture and not putting all of their eggs in the Apple basket.</span></p>
<p>Of course, once the wearable displays are commercially viable everything changes. Wearable computers with small screens or even no screens are going to be what everyone wants. The interface will go from handheld touch screens to virtual holographic interfaces that you interact with using your bare hands.</p>
<p>So for now, <span>(the immediate short term), </span>it&#8217;s all about the iphone. Taking mobile ubiquitous AR to the global market and building for the future will be based on something else. Hardware risks becoming a commodity or a closed platform. Do you really want to buy the Apple iGlasses and only see AR content that is compatible, where your best friend has a pair of WinGlasses and sees something entirely different? No. The hardware, and the client software (what people are calling the AR browser now), will become common and it won&#8217;t matter what brand you use; they will all be accessing the same content.</p>
<p>But at least for the foreseeable future, we are building software for specific hardware, and the sexiest mobile on the block is the iphone. The second someone comes out with something much better and the paradigm shifts (software driving hardware instead of vice versa), everything changes.</p>
<p><strong>Tish:</strong> How is the quest for sexy AR eyewear going? I know we were checking out <a href="http://www.masunaga1905.jp/brand/teleglass/" target="_blank">the Japanese eyewear</a> with Adam Johnson from <a href="http://genkii.com/" target="_blank">Genkii</a> just now. For the Neogence project &#8211; as you are going for a fully developed model of AR, doesn&#8217;t this necessitate going beyond the iphone and getting the hardware companies moving on the eyewear?</p>
<p><strong>Robert:</strong> The guys making wearable displays really need to get off the pot and stop paying lip service to mobile AR. If they don&#8217;t do something quick, I,<span> and others, are</span> going to be scouring the planet looking for someone capable of building the lightweight stylish wearable displays with transparent lenses we are begging for. We aren&#8217;t going to be waiting around for hardware anymore. The AR Pandora&#8217;s box has been opened. I should note that many of us (AR Consortium members) have had less than pleasant experiences or communications with the half dozen companies or so that are making wearable displays. Either their visual design is terrible, the materials feel flimsy, the field of view is limited, or the companies are preoccupied with other business and government contracts. Any attention to the growing AR market is an afterthought and in a few cases condescending. AR is going to be a billion dollar industry in a very short time, and these guys are just leaving money on the table. If they were smart, they would be begging the CEOs from the AR Consortium to fly out to their offices and collaborate on building a pair of wicked sick glasses. The smart phone manufacturers should be doing the same thing, but I have to say that they at least seem to have some ambition and zeal to create better devices, so I can&#8217;t really complain too much there.</p>
<p>Anyway, to answer the rest of your question, we have to assume that the hardware guys, especially regarding the eyewear, are going to take a long time to develop and release the things we need for the ultimate AR experience. So, our goal is to start building things now for what is available. That means scaling things down and handicapping what AR can do, so it works on the &#8220;sexy&#8221; iphone. The important thing though is to start creating applications -now- so when the glasses are commercially available, there will be a wealth of content for people to access and use on day one.</p>
<p>As long as Apple isn&#8217;t playing nice,<span> </span>it is going to hurt everyone. <span>Is it any surprise that they shut down Google Voice? </span> There is a huge opportunity for someone to step up and leapfrog the rest of the industry. Give us the hardware and we will create amazing software for it. Don&#8217;t compete with the iphone, surpass it.</p>
<p><strong>Tish: </strong>What is the state of play of current AR technology and toolkits?</p>
<p><strong>Robert:</strong> The current crop of AR technology and toolkits is absolutely critical for this stage of the industry, and everyone should be leveraging it as much as possible. I talk down marker and image based tracking a lot, but I also like to point out that it is the necessary baseline that the industry is going to be built on. The problem is that there is only so much you can do with marker driven apps, and as creative people and marketing types start conceptualizing about all sorts of cool stuff for the future, they risk setting the expectations too high. It is one thing to show someone the future, it is another to say this is the future and it&#8217;s happening right now. This is why I cringe every time I see a conceptual video presented as &#8220;our product DOES this&#8221; instead of &#8220;our product WILL DO this.&#8221; <span>Something that simple can still cause the butterfly effect of raising expectations too high and contribute to overhyping.</span></p>
<p><strong>Tish: </strong>One of the things that seems very exciting about the new <a href="http://ogmento.com/" target="_blank">Ogmento</a> partnership is that experienced content producers <a id="squu" title="Brad Foxhoven" href="http://www.blockade.com.nyud.net:8080/about/about-blockade" target="_blank">Brad Foxhoven</a> and <a id="odvk" title="Brian Selzer" href="http://brianselzer.com/">Brian Selzer</a> from <a id="xow_" title="Blockade" href="http://www.blockade.com/" target="_blank">Blockade</a> are now taking a leading role in AR. What are the most exciting directions for content that you see emerging for AR in the next 12 months?</p>
<p><strong>Robert:</strong> Virtual (well, augmented) pets, and multiuser mobile AR games (2-4 people) are probably going to lead in the next 12 months for content. Easy, accessible, engaging.</p>
<p><strong>Tish: </strong>And are you at Neogence also involved in content partnerships?</p>
<p><strong>Robert:</strong> Yes, we are in the process of finalizing some content partnerships with an eye for long term relationships. We are specifically looking for partners that want to find substantive ways to leverage AR technology, and not use it as a superficial gimmick or attraction that wears off after five minutes. I&#8217;m still cringing over the Proctor &amp; Gamble Always campaign with AR.</p>
<p><strong>Tish:</strong> So back to your observation about some of the tricky problems regarding creating a true global massively multiuser, ubiquitous, mobile AR platform &#8211; what are some of the main obstacles to this mission in your view (aside from getting investment!)?</p>
<p><strong>Robert:</strong> Trying to explain it to people. The technical problems we can handle or have already solved. But trying to communicate what exactly we are doing is still tough. Not because it is overly complicated, but rather because it is so new and different. People are having a hard time grasping augmented reality beyond marker/webcam.</p>
<p><strong>Tish: </strong>Which AR tools are most important right now?</p>
<p><strong>Robert:</strong> Content is critical right now to show what the technology is capable of and to continue building the presence of augmented reality in the public mind. The big benefit of integrated/unified platforms now is speed of development for content. I think that the Flash ARToolKit + Papervision combination is rocking the planet right now. It is accessible, easy to learn, and lets people create something very quickly. More tools and middleware are coming out and this increases options for designers and developers.</p>
<p><strong>Tish: </strong>What are your favorite papervision apps?</p>
<p><strong>Robert: </strong>Hrm, I don&#8217;t have a favorite papervision app just yet, although I think the tech is solid. I expect to see a lot of stuff built on that platform in the near future, especially as more ad agencies get on the bandwagon and start telling their IT guys to learn how to program flash so they can make something. Have you seen www.ronaldchevalier.com? Not so much for the actual AR stuff, but because the whole thing is just brilliant. It&#8217;s exactly like something a cult figure spiritual guru would do with AR. I wish I had thought of it first actually. This is probably one of the best -seamless- implementations of AR in marketing where it fits&#8230;it isn&#8217;t just jammed in there for the sake of saying they used AR.</p>
<p><strong>Tish:</strong> Do you think Apple is going to open the iphone to the full potential of augmented reality anytime soon &#8211; a lot of expectations have been raised?</p>
<p><strong>Robert:</strong> Apple is like that guy who has a party at his house and owns this really awesome state of the art home theater in his basement, but makes everyone watch a movie in the living room on a regular TV with a VCR.</p>
<p>They need to get over themselves and quit being a wet blanket. Otherwise, we are taking the beer and pizza we brought, and going to someone else&#8217;s house. <span>Sorry, the Apple thing is a bit of a sore point with me.</span></p>
<p><strong>Tish:</strong> But will people leave all that candy and soda at the appstore?</p>
<p><strong>Robert:</strong> I tell you what though, there is an opportunity for certain mobile phone manufacturers to give me a call and start talking to Neogence and the other members of the Consortium. We have some ideas and specs that could have a radical impact on the mobile market and stuff the iPhone in a box. Hint hint.</p>
<p><strong>Tish:</strong> So what is your vision for the AR Consortium? I know it kicked off with a letter to Apple about the video API. What is the next step? There was a lot of hope that this year would be big for MIDs but this really hasn&#8217;t happened yet &#8211; do you think there is hope for a MID take off despite the lousy economy?</p>
<p><strong>Robert: </strong>MIDs? No, not yet. Smart phones are too lucrative and too hot. It isn&#8217;t time yet for the MID to go mainstream. For that to happen, there needs to be a driving need (cough, ubiquitous AR, cough).</p>
<p>The AR consortium is mostly an informal affiliation. I expect that representatives from each member will probably meet at every significant conference to catch up over drinks. We are also going to be planning for our own members conference at least once a year. That will happen after we expand the membership though.</p>
<p>The main idea behind the consortium though was to open up a channel of communication between the CEOs so we could work together on standards, solving problems, collaborating, forming some partnerships, and using the collective to bang on the doors of companies like Apple and others. There is power in a group.</p>
<p><strong>Tish:</strong> You mentioned there is a whole long conversation we can have about getting the eyewear. As you point out, true AR eyewear changes everything. Can you give a little road map of where this has to go?</p>
<p><strong>Robert: </strong>There are essentially four or five main approaches, depending on whether you make the lenses special or leave them plain. You would normally want them to be plain so people with prescription lenses wouldn&#8217;t have problems and would have the option to switch them out. Some types use a more prismatic approach for top down projection, or a corner piece mounts lasers and bounces them off the lens into the eye. Another approach is embedding OLEDs or something else into the lenses themselves.</p>
<p>I really like the <a href="http://www.lumus-optical.com/" target="_blank">Lumus</a> approach, but their product design isn&#8217;t quite there yet. If the wearables don&#8217;t look cool, people won&#8217;t use them. To be honest, if I had the money, I&#8217;d probably ask the Art Lebedev guys to design them based on someone else&#8217;s optical engineering. They designed the <a href="http://www.artlebedev.com/everything/optimus/" target="_blank">Optimus Maximus</a> OLED keyboard&#8230; brilliant industrial designers, loaded with engineers too. If these guys couldn&#8217;t build the glasses and make them look damn bad ass, I&#8217;d be shocked. Heck, I bet they could build the next gen MID while they were at it.</p>
<p><strong>Tish: </strong>Getting the hardware innovation and software innovation feeding into each other would be really great.</p>
<p><strong>Robert</strong>: Absolutely.</p>
<p><strong>Tish</strong>: That would push the eyewear forward too wouldn&#8217;t it?</p>
<p><strong>Robert:</strong> All it takes is one, and then the competitive landscape would fire right up.</p>
<p><strong>Tish:</strong> What applications would accurate GPS enable?</p>
<p><strong>Robert:</strong> Everything. For example, you know exactly where the phone is and which way it is facing. That means you can put it on a table and hit a button, then move it somewhere else and do the same thing. In a few minutes you have a nearly accurate &#8220;mental&#8221; model of the whole place. Now you go back and start dropping virtual flower pots everywhere.</p>
<p>This is one area where I think the smart phone guys are missing the boat and taking the cheap route. It is possible to have very accurate GPS (down to a six inch area) with better chips and firmware, but it is cheaper to stick in old tech. Most apps today don&#8217;t need that hyper accuracy, so they aren&#8217;t bothering. Mobile AR though, that&#8217;s a different story.</p>
<p>With that level of accuracy, you would know exactly where the mobile device is, so all you would need to know is the direction it is facing (orientation), and you could solve one of the problems with registering exactly where 3D objects and augmented media is (it is more complicated than I am describing it, but we don&#8217;t need to get into that much detail here). You wouldn&#8217;t need markers anymore.</p>
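<p>Robert&#8217;s point &#8211; that accurate position plus orientation is enough to register content without markers &#8211; can be sketched in a few lines. This is only an illustrative approximation (a linear angle-to-pixel mapping, not a true perspective projection), and the function name and parameters are hypothetical, not from any AR browser&#8217;s actual code:</p>

```python
def screen_x(device_heading_deg, bearing_to_poi_deg, fov_deg=60.0, screen_w=480):
    """Map the horizontal angle between where the camera points and where a
    geo-anchored object lies to a pixel column; None if it is out of view."""
    # Signed angular offset in (-180, 180]
    off = (bearing_to_poi_deg - device_heading_deg + 180.0) % 360.0 - 180.0
    if abs(off) > fov_deg / 2:
        return None  # object is outside the camera's field of view
    # Simple linear mapping across the field of view
    return (off / fov_deg + 0.5) * screen_w
```

<p>An object dead ahead lands in the middle of the screen; one at the edge of the field of view lands at the screen border. The better the GPS fix and compass heading, the more precisely the overlay sticks to the real object.</p>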
<p><strong>Tish: </strong>Isn&#8217;t Wikitude doing this with Wikitude.me, their tagging app?</p>
<p><strong>Robert:</strong> Not really. That type of approach is on a very large scale, using the accelerometers, compass, and GPS to determine where you are and what is in the distance. They (and others like Layar) don&#8217;t handle &#8220;near&#8221; AR. They effectively poll your GPS and then check a database to see what is nearby and at what degree/distance it is, and then they draw a representation on the screen. They don&#8217;t even need a mobile device&#8217;s camera at all.</p>
<p>Even if they did things up close, it&#8217;s still based on finding landmarks or on things that are broadcasting their location. For example, if they were standing near me, they might get &#8220;robert, 37 degrees, 15 meters away&#8221; but they wouldn&#8217;t be tracking me exactly as I walk around or have the ability to overlay graphics on ME.</p>
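<p>The &#8220;degree/distance&#8221; figures Robert mentions are just great-circle math between two GPS fixes. A minimal sketch, using the standard haversine and initial-bearing formulas (the function name is illustrative, not from Wikitude or Layar):</p>

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial bearing (degrees clockwise
    from true north) from the user's GPS fix to a point of interest."""
    R = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)

    # Haversine formula for the distance
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))

    # Initial bearing toward the point of interest
    y = math.sin(dlam) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing
```

<p>That pair of numbers, compared against the compass heading, is all a &#8220;far&#8221; AR browser needs to draw a label at the right spot on screen &#8211; which is exactly why this approach cannot track a moving person or overlay graphics on them.</p>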
<p><strong>Tish:</strong> I retweeted your <a title="#ar" href="http://twitter.com/search?q=%23ar">#ar</a> marketing using ARToolkit + flash (markers/webcams) = Photoshop pagecurl &lt;six months. Bad design kills innovation. I know you like <a href="http://ronaldchevalier.com/" target="_blank">Dr Chevalier</a> though! What are some of the other AR marketing projects that you like? What would you like to see in terms of innovation in the next 6 months?</p>
<p><strong>Robert:</strong> The marker/webcam approach is already becoming overused and clich&#233; (tremendously fast). Older readers will remember the ubiquitous photoshop page curl that adorned nearly every website and graphic on the internet back in the day. It was horrible. Yes, the Dr. Chevalier stuff cracks me up.</p>
<p>I want to see some big companies or ad agencies really try to do something different with AR, preferably mobile. Take some risks, do something different. Don&#8217;t follow the crowd. Innovation? I want to see some wearable displays with transparent lenses, I want a mobile device specifically designed for ubiquitous AR, I want to see some experimenting with AR in the green tech sector, and I&#8217;d like to see someone get that GiFi wireless technology from that researcher in Australia and jam it into a smart mobile. I would also like my flying car and lunar vacation now, thank you. It is almost 2010 and no one has found that black obelisk yet.</p>
<p><strong>Tish:</strong> So a few closing thoughts! What do you see as the next big thing? Hopes for the AR Consortium? Biggest obstacle for commercial AR? And what is the coolest thing you have seen this year?!</p>
<p><strong>Robert:</strong> The next big thing is what I&#8217;m working on hahaha. I hope the AR Consortium will grow and be the active catalyst in making AR mainstream, practical, and world changing.</p>
<p>The biggest obstacle is making sure that the right funding finds the right developers to develop the right technology and create kick ass applications.</p>
<p>The coolest thing I&#8217;ve seen this year would probably be <a href="http://vimeo.com/5595869" target="_blank">the facade projection stuff</a> (see below). Now, imagine that, but without the projector. That&#8217;s part of what I envision for AR in the future.</p>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="225" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=5595869&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /><embed type="application/x-shockwave-flash" width="400" height="225" src="http://vimeo.com/moogaloop.swf?clip_id=5595869&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/feed/</wfw:commentRss>
		<slash:comments>20</slash:comments>
		</item>
		<item>
		<title>Twitter and The Web of Flow: Talking with Stowe Boyd &amp; Bruce Sterling about Microsyntax, Squelettes, Favela Chic and the State of Now</title>
		<link>https://www.ugotrade.com/2009/06/28/twitter-and-the-web-of-flow-talking-with-stowe-boyd-bruce-sterling-about-microsyntax-squelettes-favela-chic-and-the-state-of-now/</link>
		<comments>https://www.ugotrade.com/2009/06/28/twitter-and-the-web-of-flow-talking-with-stowe-boyd-bruce-sterling-about-microsyntax-squelettes-favela-chic-and-the-state-of-now/#comments</comments>
		<pubDate>Sun, 28 Jun 2009 18:23:28 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[#140conf]]></category>
		<category><![CDATA[Aaron Straup Cope]]></category>
		<category><![CDATA[aesthetics of streaming]]></category>
		<category><![CDATA[asymmetric follow]]></category>
		<category><![CDATA[asynchronous web versus synchronous web]]></category>
		<category><![CDATA[being a character]]></category>
		<category><![CDATA[bottom up informatics]]></category>
		<category><![CDATA[brian solis]]></category>
		<category><![CDATA[brightkite]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Bruce Sterling on Twitter]]></category>
		<category><![CDATA[Clay Shirky]]></category>
		<category><![CDATA[CNN and Twitter]]></category>
		<category><![CDATA[cross-links keywords and networks]]></category>
		<category><![CDATA[data shadows]]></category>
		<category><![CDATA[evolution of microsyntax]]></category>
		<category><![CDATA[Favela Chic]]></category>
		<category><![CDATA[favela chic and bottom up informatics]]></category>
		<category><![CDATA[geoslashes]]></category>
		<category><![CDATA[Google and Twitter]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[googlewave]]></category>
		<category><![CDATA[Gothic High Tech]]></category>
		<category><![CDATA[hash tags]]></category>
		<category><![CDATA[hash tags on Twitter]]></category>
		<category><![CDATA[high rise favelas]]></category>
		<category><![CDATA[hybrid vigor]]></category>
		<category><![CDATA[information shadows]]></category>
		<category><![CDATA[Interactions Magazine]]></category>
		<category><![CDATA[Iran and Twitter]]></category>
		<category><![CDATA[iran election and Twitter]]></category>
		<category><![CDATA[Iranian Twitters]]></category>
		<category><![CDATA[Jack Dorsey]]></category>
		<category><![CDATA[Jeff Pulver]]></category>
		<category><![CDATA[Kevin Slavin]]></category>
		<category><![CDATA[Lars and Jens Rasmussen]]></category>
		<category><![CDATA[LIFT]]></category>
		<category><![CDATA[Lift Conference 2009]]></category>
		<category><![CDATA[magic words]]></category>
		<category><![CDATA[Mark Vanderbeeken]]></category>
		<category><![CDATA[Michael Jackson and Twitter]]></category>
		<category><![CDATA[Microsyntax]]></category>
		<category><![CDATA[Microsyntax and Twitter]]></category>
		<category><![CDATA[Microsyntax.org]]></category>
		<category><![CDATA[New Depression]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[pachube google wave and microsyntax]]></category>
		<category><![CDATA[Prada Goth]]></category>
		<category><![CDATA[real time search]]></category>
		<category><![CDATA[reboot11]]></category>
		<category><![CDATA[semantic web]]></category>
		<category><![CDATA[semweb]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[SMS messages in Iran]]></category>
		<category><![CDATA[social web]]></category>
		<category><![CDATA[Squelettes]]></category>
		<category><![CDATA[Stowe Boyd]]></category>
		<category><![CDATA[streamy aesthetics of sensors]]></category>
		<category><![CDATA[stuffed animals]]></category>
		<category><![CDATA[stuffed animals and failed states]]></category>
		<category><![CDATA[stuffed animals and regulatory capture]]></category>
		<category><![CDATA[The 140 Characters Conference]]></category>
		<category><![CDATA[the internet of things]]></category>
		<category><![CDATA[The Now Web]]></category>
		<category><![CDATA[The State of Now]]></category>
		<category><![CDATA[The Web of Flow]]></category>
		<category><![CDATA[Things That Twitter]]></category>
		<category><![CDATA[Tim O'Reilly on Google Wave]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[Tweet Deck]]></category>
		<category><![CDATA[twitter]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[webthropology]]></category>
		<category><![CDATA[Wyclef Sean and Twitter]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3835</guid>
		<description><![CDATA[I met Stowe Boyd, of Microsyntax.org at Jeff Pulverâ€™s 140 Characters Conference which convened in the middle of a perfect storm for the State of NOW (more mundanely known as the real time web) as thousands of tiny Twitter pipes became a vital conduit for the historic events occurring in Iran (picture on left, Stowe [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/stoweboyd2.jpg"><img class="alignnone size-medium wp-image-3851" title="stoweboyd2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/stoweboyd2-296x300.jpg" alt="stoweboyd2" width="296" height="300" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/BruceSterlingAtReboot.jpg"><img class="alignnone size-medium wp-image-3971" title="BruceSterlingAtReboot" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/BruceSterlingAtReboot-297x300.jpg" alt="BruceSterlingAtReboot" width="297" height="300" /></a></p>
<p>I met <a href="http://www.stoweboyd.com/" target="_blank">Stowe Boyd</a>, of <a href="http://www.microsyntax.org/" target="_blank">Microsyntax.org</a> at Jeff Pulver&#8217;s <a href="http://www.140conf.com/" target="_blank">140 Characters Conference</a> which convened in the middle of a perfect storm for <a href="http://pulverblog.pulver.com/archives/008934.html" target="_blank">the State of NOW</a> (more mundanely known as the real time web) as thousands of tiny Twitter pipes became a vital conduit for the historic events occurring in Iran (picture on left, Stowe Boyd, from <a href="http://www.briansolis.com/" target="_blank">Brian Solis</a>&#8217; Flickr <a href="http://www.flickr.com/photos/briansolis/3569544825/" target="_blank">here</a>, and on the right, Bruce Sterling, presenting at <a href="http://www.reboot.dk/" target="_blank">reboot11</a> from <a title="Link to scriptingnews' photostream" rel="dc:creator cc:attributionURL" href="http://www.flickr.com/photos/scriptingnews/">scriptingnews</a>&#8217; Flickr <a href="http://www.flickr.com/photos/scriptingnews/3662894176/" target="_blank">here</a>).</p>
<p>But, <a href="http://blog.ted.com/2009/06/qa_with_clay_sh.php" target="_blank">as Clay Shirky pointed out,</a> re Twitter and Iran:</p>
<p><strong>&#8220;It&#8217;s incredibly messy, and the definitive rules of the game have yet to be written. So yes, we&#8217;re seeing the medium invent itself in real time.&#8221;</strong></p>
<p>Stowe Boyd is managing director of <a href="http://www.microsyntax.org/">Microsyntax.org</a>, a non-profit investigating the embedding of structured information within microstreaming applications, particularly Twitter. It is a communitarian project, so if you are interested you should get involved &#8211; see Stowe&#8217;s #140conf presentation, <a href="http://blip.tv/file/2267166" target="_blank">&#8220;The evolution of Microsyntax.&#8221;</a> Stowe is an architect of &#8220;flow&#8221; and a webthropologist of the State of NOW. I had the opportunity to talk with him at the conference (<a href="#StoweInterview">see the full conversation below</a>). We talked not only about some of the practicalities of implementing microsyntax but also about how &#8220;the web of flow&#8221; produces a fundamental shift in how we communicate, and who we are. As Stowe Boyd put it:</p>
<p><strong>&#8220;You use these tools, and you are changed. And it&#8217;s just a question of how long you use them and the longer you use them, the more you use them, the more changed you are. When people shift to a basis of sociality around connection with other people as opposed to mass affiliation, it&#8217;s different. It&#8217;s completely different. Your whole system of ethics, the way you judge the world and decide what&#8217;s important is different. And not only different it&#8217;s better. It&#8217;s a better way to deal with the world.&#8221;</strong></p>
<p>As Wyclef Jean (@<a href="http://twitter.com/wyclef" target="_blank">wyclef</a>) remarked at #140conf, <strong>&#8220;Twitter just cuts the middle man in everything.&#8221;</strong></p>
<p>At the 140 Characters Conference it was hard not to be captivated by the energy and optimism arising from the successful use of Twitter by Iranians to communicate in the aftermath of the election. But the subsequent repression in Iran, in which the regime took advantage of central infrastructure controls to silence Iranian twittering (we have similar network technologies in place here in the US), leaves a big question that came to the fore after the conference:</p>
<p>While these real time applications give us the ability to leverage network effects in totally new ways, and they have enormous potential to make our lives better, do we need to give more thought to the infrastructure they rely on?</p>
<p><a href="http://pulverblog.pulver.com/archives/008957.html" target="_blank">The videos for the 140Conf</a> are up now. If you haven&#8217;t already seen them, after watching Jeff Pulver&#8217;s intro to <a href="http://pulverblog.pulver.com/archives/008950.html" target="_blank">The State of NOW</a>, a great place to start is <a href="http://blip.tv/file/2260001" target="_blank">&#8220;Twitter as a News Gathering Tool&#8221;</a> (Part 2). Also see <a href="http://www.observer.com/2009/media/cnns-rick-sanchez-todays-ann-curry-stand-their-twitter-iran-coverage" target="_blank">Ann Curry Defends Foreign Correspondents, Twitter; Rick Sanchez Defends CNN</a> and Brian Solis&#8217; <a href="http://www.techcrunch.com/2009/06/17/is-twitter-the-cnn-of-the-new-media-generation/">post on TechCrunch</a>. Christopher R. Weingarten&#8217;s (<a href="http://twitter.com/1000timesyes" target="_blank">@1000TimesYes</a>) <a href="http://pulverblog.pulver.com/archives/008954.html" target="_blank">&#8220;Twitter and the End Of Music Criticism&#8221;</a> and <a href="http://www.moeed.com/" target="_blank">Moeed Ahmad&#8217;s</a> (<a href="http://twitter.com/moeed" target="_blank">@moeed</a>) <a href="http://www.moeed.com/blog/2009/05/20/gaza-focus-media-140-conference-london" target="_blank">Gaza in Focus</a> are two of several must-see presentations. The #140Conf was an extraordinary event. Jeff Pulver orchestrated a brilliant cast of characters and a manifestation of social media &#8220;hybrid vigor&#8221; that was exhilarating to be part of.</p>
<p>A &#8220;Director&#8217;s Cut&#8221; of #140conf will be re-broadcast (Monday, June 29th and Tuesday, June 30th) at 11AM EST / 8AM PST &#8211; <a rel="nofollow" href="http://140conf.com/watchit" target="_blank">http://140conf.com/watchit</a>. Some of the speakers will be tweeting while their session is being re-broadcast (<a href="http://pulverblog.pulver.com/archives/008960.html" target="_blank">see The Jeff Pulver Blog for more</a>).</p>
<p><span><span><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/3635038955_2998f2a9e1_b.jpg"><img class="alignnone size-medium wp-image-3886" title="3635038955_2998f2a9e1_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/3635038955_2998f2a9e1_b-300x200.jpg" alt="3635038955_2998f2a9e1_b" width="300" height="200" /></a></span></span></p>
<p>(picture above from <a href="http://www.briansolis.com/" target="_blank">Brian Solis&#8217;</a> Flickr<a href="http://www.flickr.com/photos/briansolis/3635038955/sizes/l/in/set-72157619870975030/" target="_blank"> here</a>)</p>
<p>In a serendipitous convergence of events I found myself in the front row taking photos <a href="http://www.flickr.com/photos/briansolis/sets/72157619870975030/" target="_blank">for Brian Solis</a> (@briansolis); see Brian&#8217;s post, <a href="http://www.briansolis.com/2009/06/is-twitter-the-cnn-of-the-new-media-generation/" target="_blank">&#8220;Is Twitter the CNN of the New Media Generation.&#8221;</a> I like <a href="http://www.flickr.com/photos/briansolis/3635866464/in/set-72157619870975030/" target="_blank">my photo of Jack Dorsey</a> (@jack), the Twitter founder &#8211; the lens of my own camera would never have allowed for this one!</p>
<p>I was also sitting close to Stowe Boyd (@stoweboyd), who out of all the attendees at this jam-packed event was one of the people I had most hoped to connect with.</p>
<p><span style="font-size: medium;"><strong>Talking with Bruce Sterling</strong></span><span style="font-size: medium;"><strong> about Squelettes, Twitter, Favela Chic, and Gothic High Tech<br />
</strong></span></p>
<p>I have been following the <a href="http://microsyntax.org/" target="_blank">microsyntax.org</a> effort that Stowe has been leading since <a href="http://www.wired.com/beyond_the_beyond/2009/05/spime-watch-pachube-feeds/" target="_blank">this post by Bruce Sterling  (@bruces) on Pachube Feeds</a> which contained this challenge:</p>
<p><strong>&#8220;(((Extra credit for eager ubicomp hackers: combine this [<a href="http://www.pachube.com/" target="_blank">pachube</a> feeds] with Googlewave, then describe it in microsyntax. Hello, 2015!)))&#8221;</strong></p>
<p>Stowe pointed out in our conversation at #140conf that Microsyntax.org is in one sense a very narrow project, but on the other hand it&#8217;s very broad, because every sort of information that you can imagine is going to be streaming through Twitter and related [real time] applications.</p>
<p>Or as <a href="http://www.aaronland.net/" target="_blank">Aaron Straup Cope</a> put it to me: <strong>&#8220;This is ultimately the &#8216;magic word&#8217; problem, which is essentially the semweb vs. google-is-smarter-than-you problem.&#8221;</strong></p>
<p>There are a bunch of crystal ball posts up at the moment looking into the future of the real time web&#8230; for example, <a href="http://threeminds.organic.com/2009/06/docs_are_old-school_we_need_pa.html?utm_source=twitter&amp;utm_medium=threeminds&amp;utm_campaign=praise" target="_blank">this post on threeminds.organic</a> (via @timoreilly and @<a href="http://twitter.com/buckybit" target="_blank">buckybit</a>) asking whether we need page rank for people and not just sites&#8230; and <a href="http://www.readwriteweb.com/archives/as_the_sun_sets_on_myspace_-_what_will_beat_facebo.php#more" target="_blank">this post on readwriteweb</a> asking whether the State of Now is the harbinger of doom for walled gardens like Facebook. And there seems to be an arms race starting around real time search.</p>
<p>But Bruce Sterling (<a href="http://twitter.com/bruces" target="_blank">@bruces</a>) in <a href="http://interactions.acm.org/content/?p=1244" target="_blank">his cover story</a> for <a href="http://interactions.acm.org/" target="_blank">Interactions Magazine</a> examines some of the blinkering on &#8220;two inherently forward looking schools of thought and action [design and science fiction].&#8221; He writes:</p>
<p><strong>&#8220;We have entered an unimagined culture. In this world of search engines and cross-links, of keywords and networks, the solid smokestacks of yesterday&#8217;s disciplines have blown out.&#8221;</strong></p>
<p>While I was writing up this post, I found myself up at the crack of doom (4 am EST) with insomnia I attribute to a tweet from <a href="http://www.experientia.com/en/who-we-are/mark-vanderbeeken/" target="_blank">Mark Vanderbeeken</a> (<a href="http://twitter.com/Vanderbeeken" target="_blank">@vanderbeeken</a>) which I (<a href="http://twitter.com/tishshute">@tishshute</a>) retweeted:</p>
<p><strong>&#8220;Internet of Things &#8211; An action plan for Europe,&#8221; (This EU doc. cites @<a href="http://twitter.com/agpublic" target="_blank">agpublic</a>&#8217;s Everyware) <a rel="nofollow" href="http://bit.ly/16uiu3" target="_blank">http://bit.ly/16uiu3</a> via @<a href="http://twitter.com/vanderbeeken" target="_blank">vanderbeeken</a>&#8221;</strong></p>
<p>(I wish I had used the new RE microsyntax in TweetDeck (for more on RE, <a href="http://www.stoweboyd.com/message/2009/06/a-useful-bit-of-microsyntax-re.html" target="_blank">see Stowe Boyd&#8217;s post here</a>); then I would have been able to find @vanderbeeken&#8217;s original tweet just now.)</p>
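<p>To make &#8220;microsyntax&#8221; concrete: a consumer of the stream just pattern-matches these conventions out of the 140 characters. The sketch below is purely illustrative &#8211; the regexes and the <code>parse_microsyntax</code> helper are my own shorthand, not anything specified by Microsyntax.org.</p>

```python
import re

# Illustrative patterns for a few Twitter microsyntax conventions:
# hashtags, @mentions, and Stowe Boyd's proposed "RE @user" marker
# for crediting the source of a referenced tweet or link.
HASHTAG = re.compile(r"#(\w+)")
MENTION = re.compile(r"@(\w+)")
PROVENANCE = re.compile(r"\bRE\s+@(\w+)")

def parse_microsyntax(tweet):
    """Pull the structured tokens out of a 140-character message."""
    source = PROVENANCE.search(tweet)
    return {
        "hashtags": HASHTAG.findall(tweet),
        "mentions": MENTION.findall(tweet),
        # "RE @user" is the bit that would have made the original
        # tweet findable again later.
        "re": source.group(1) if source else None,
    }

print(parse_microsyntax("An action plan for Europe #IoT RE @vanderbeeken"))
```

A reader of the stream can then index tweets by these tokens instead of scrolling back through the raw flow.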
<p>So after a quick scan of the EU paper on the internet of things, and in a &#8220;here comes everybody&#8221; pre-dawn state of mind, craving oracular pronouncement, I impulsively shot an email to Bruce Sterling.</p>
<p>[<strong>Note:</strong> the following is an asynchronous exchange &#8211; not synchronous as a <a href="http://wave.google.com/">Google Wave</a> would have made possible. Also I have pulled the conversation out of the original email format. Lars and Jens Rasmussen ofÂ  <a href="http://wave.google.com/">Google Wave</a> seem to have hit the nail on the head when they &#8220;set out to answer the question: What would email look like if we set out to invent it today?&#8221; (see <a href="http://radar.oreilly.com/2009/05/google-wave-what-might-email-l.html" target="_blank">this excellent post by Tim O&#8217;Reilly on Google Wave</a>)]</p>
<p><strong>Tish Shute: </strong>I shouldn&#8217;t be up at 4am EST sending you more questions, but I began reading &#8220;The Internet of Things &#8211; An action plan for Europe,&#8221; <a href="http://bit.ly/16uiu3" target="_blank">http://bit.ly/16uiu3</a> before I went to sleep and woke up thinking: &#8220;How can we work on an action plan for everybody?&#8221; ((Another highlight of 140Conf was <a href="http://www.areacodeinc.com/" target="_blank">Kevin Slavin&#8217;s talk on &#8220;Things that Twitter&#8221;</a> &#8211; &#8220;sensor aesthetics are streamy&#8221;))</p>
<p><strong>Bruce Sterling: *Everybody? What, all <span style="font-family: arial;"><span style="font-size: small;">6,706,993,152 of us?</span></span></strong></p>
<p><strong>Tish Shute:</strong> How does &#8220;it&#8217;s all about the data&#8221; and &#8220;google&#8217;s smarter than you&#8221; thinking versus &#8220;bottom up&#8221;/&#8220;personal informatics&#8221;/&#8220;semweb&#8221; get worked out in the internet of things?</p>
<p><strong>Bruce Sterling:</strong> *<strong>I&#8217;d be guessing via mergers, acquisitions, lawsuits and police crackdowns, but you never know. You might have a massive financial collapse where innovations like this start coming out of slums and favelas. I heard such a great term at LIFT last week: &#8220;Favela Chic.&#8221; That&#8217;s when you are totally penniless and without commercial prospects of any kind but still wired to the gills and big on Facebook.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/3653530586_eb90ef0241_o.jpg"><img class="alignnone size-medium wp-image-3852" title="3653530586_eb90ef0241_o" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/3653530586_eb90ef0241_o-300x207.jpg" alt="3653530586_eb90ef0241_o" width="300" height="207" /></a><br />
</strong></p>
<p>Photo of Bruce Sterling at Lift 2009 by <a href="http://www.flickr.com/photos/centralasian/" target="_blank">Centralasian</a></p>
<p><strong>Tish Shute:</strong> Could you elaborate on your comment:</p>
<blockquote><p><em><strong>&#8220;Also, this stuff they&#8217;re discussing: this is like all kindsa trouble ten years from now.&#8221; (from your post <a href="http://www.wired.com/beyond_the_beyond/2009/03/spime-watch-data-shadows/" target="_blank">http://www.wired.com/beyond_the_beyond/2009/03/spime-watch-data-shadows/</a>)</strong></em></p></blockquote>
<p><strong>Bruce Sterling:</strong> <strong>*Okay: you know how much trouble SMS messages are in Iran right now, even though ten years ago, cellphones were only for foreigners and rich guys in Iran? Kinda like that.</strong></p>
<p><strong>Tish Shute</strong>: <a href="http://www.wired.com/beyond_the_beyond/2009/06/ruins-of-the-present/" target="_blank">You wrote here</a>: <em>&#8220;<strong>The idea of living in *abandoned prototypes* or giant failed larval husks is very contemporary, very New Depression. Very &#8220;Favela Chic&#8230;&#8221;</strong></em></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/squelette-300x2211.jpg"><img class="alignnone size-full wp-image-3855" title="squelette-300x221" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/squelette-300x2211.jpg" alt="squelette-300x221" width="300" height="221" /></a></p>
<p>And:</p>
<p><em><strong>&#8220;Occasionally squatters move into &#8216;squelettes&#8217; and bring in some breeze-block, corrugated tin and plastic hoses, transforming squelettes into high-rise favelas. This doesn&#8217;t work very well because it&#8217;s tough to manage the utilities, especially the water.&#8221;</strong></em></p>
<p><strong>Tish Shute:</strong> So what happens when we rely on Google &amp; Twitter, repurposed, as our main means to access our government? It is not only repressive regimes that can cut these utilities off: even though Twitter was asked to delay maintenance so that the Iranian Twitters could keep flowing, Michael Jackson brought Twitter down.</p>
<p><strong>Bruce Sterling: *Google and Twitter aren&#8217;t going to last long enough to become main means of access to government. It&#8217;s not that Google and Twitter go away and we return to a previous status quo, however. It&#8217;s that they are ramshackle digital expedients that get replaced by even more ramshackle digital expedients.</strong></p>
<p><strong>In the meantime the stuff we used to call &#8220;government&#8221; gets similarly destabilized. It&#8217;s been privatized, or offshored, or turned into a hollow shell.</strong></p>
<p><strong>Tish Shute:</strong> So is Twitter a squelette (like all our favorite internet platforms, including Google Wave, which we haven&#8217;t even had a chance to squat yet)? And is microsyntax our breeze-block, plastic hose and corrugated tin &#8211; very Favela Chic but vulnerable to the vagaries of Michael Jackson&#8217;s life and death, and to deadly shutdowns and snooping by repressive regimes that control the underlying utilities? (Squelettes, as Bruce Sterling points out, are <strong><em>&#8220;one of those coinages like &#8216;Prada Goth&#8217; that spring out everywhere once they are pointed out.&#8221;</em></strong>)</p>
<p><strong>Bruce Sterling: *We can draw a distinction here: &#8220;Gothic High Tech&#8221; is the top-end version, while &#8220;Favela Chic&#8221; is the low-end. &#8220;Gothic High Tech&#8221; would be the likes of a &#8220;repressive regime&#8221; which finds itself forced to conduct cruel, secret, spooky, Guantanamo cyberwars&#8230; it&#8217;s pretending to transparency, accountability and open elections, while below that surface is a weird, torchlit, Gothic hall of mirrors where invisible hands wreck banks, impoverish the civil population and kidnap people.</strong></p>
<p><strong>It&#8217;s &#8220;Gothic&#8221; because of its magnificent, elaborate appearance &#8212; very &#8220;Castle of Dracula&#8221; &#8212; but that no longer maps onto its panicky, extremist, transgressive behavior.</strong></p>
<p><strong>Gothic High Tech doesn&#8217;t live in &#8220;squelettes.&#8221; Gothic High Tech lives in fancier, more respectable structures called &#8220;stuffed animals.&#8221; A stuffed animal used to be a functional building. From the outside it looks pretty much like it always did, maybe even &#8220;conservative.&#8221; Inside it&#8217;s half-retrofitted with aging, Frankenstein machineries, already outmoded, rapidly decaying.</strong></p>
<p><strong>A &#8220;stuffed animal&#8221; might, for instance, be a &#8220;savings and loan&#8221; where the behavior of the present-day inhabitants involves no actual saving and no actual loaning. Instead the inhabitants are on television negotiating a position in a crisis narrative and living on bailouts, while, every day, the cobwebs get a little thicker. &#8220;Regulatory capture&#8221; is stuffed-animal activity. &#8220;Failed states&#8221; and &#8220;hollow states&#8221; are stuffed animals.</strong></p>
<p><strong>&#8220;Favela Chic&#8221; is the same basic activity, but with much less money and institutional clout. In &#8220;Favela Chic&#8221; nobody bothers to ask for bailouts. They know the state has failed, or they themselves are engaged in weird activities they prefer to hide from the authorities. &#8220;Favela Chic&#8221; lives within openly failed structures, or else in half-structures that are in &#8220;permanent beta&#8221; and falling down as rapidly as they can be erected. Favela Chic is bottom-up, open-sourced, heavily networked, subversive and piratical.</strong></p>
<p><strong>There&#8217;s a certain amount of class-transition between Gothic High Tech and Favela Chic &#8212; like, Twitter was Favela Chic and is heading straight for Gothic High Tech. But there&#8217;s much less transition than there used to be, because of income differentiation &#8212; the tiny faction of Gothic moguls &#8220;own&#8221; what&#8217;s left of most of the wealth, which they themselves are rapidly destroying. The general trend is not toward increasing global prosperity. The precarity is becoming general. The Favela beckons for everybody. That&#8217;s where most of the planet&#8217;s population lives already, and it&#8217;s certainly where most of the young people live. The idea of a &#8220;developing world&#8221; needs to be reversed; the end game is in the &#8220;developing world&#8221; and the rich nations are heading there.</strong></p>
<p><strong>Tish Shute:</strong> It seems to me that Twitter and the real time web of flow is a revolution in our means of communication, presenting awesome opportunities. But are we squatters in an infrastructure that is hard to manage?</p>
<p><strong>Bruce Sterling: *Yes. I&#8217;d go farther and say that we are squatters in an infrastructure that methodically destroys previous systems of management. Especially itself: the closer you are to a revolutionary real-time web flow, the faster you have to reboot.</strong></p>
<p><strong>Tish Shute:</strong> And what is the answer to the question at the end of <a href="http://interactions.acm.org/content/?p=1244" target="_blank">your cover story for Interactions</a>:</p>
<p><strong><em>&#8220;The winds of the Net are full of straws. Who will make the bricks?&#8221;</em></strong></p>
<p><strong>Bruce Sterling: *I frankly have no idea. The storm-gusts are rising in a hurry and we are in for a whole lot of straws.</strong></p>
<p><strong>*I would point out that, if we could make up our minds about what kind of bricks we wanted, we could make them at tremendous speed. We&#8217;re not helpless: our productive capacity is frankly fantastic. Clearly we&#8217;ve lost the thread and can no longer explain what we&#8217;ve done to ourselves or how we get out of our fix. But we might surprise ourselves. 21st century Favela Chic is no mere favela, and Gothic High Tech isn&#8217;t just Gothic, it&#8217;s also very high tech. We&#8217;re in a Depression and it&#8217;s gonna last, but this is no 1930s Depression.</strong></p>
<h3><strong><a name="StoweInterview">Talking with Stowe Boyd</a></strong></h3>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/3629162035_a9332a67e1_o.jpg"><img class="alignnone size-medium wp-image-3862" title="3629162035_a9332a67e1_o" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/3629162035_a9332a67e1_o-300x247.jpg" alt="3629162035_a9332a67e1_o" width="300" height="247" /></a><br />
</strong></p>
<p>Photo <a href="http://www.flickr.com/photos/stoweboyd/3629162035/" target="_blank">from Stowe Boyd&#8217;s Flickr stream,</a> &#8220;Little&#8221; Tower Of Babel, Pieter Bruegel the Younger. It is also a slide from his presentation, <a href="http://blip.tv/file/2267166" target="_blank">â€œThe evolution of Microsyntax.&#8221;</a></p>
<p><strong>[Note:</strong> Most of this conversation took place in a busy foyer at #140conf, and various people joined in at different points. I have cut out these other conversations and tried to maintain the thread of my own questions in the transcription, but this may have resulted in a sense of choppiness and discontinuity in places.]</p>
<p><strong><strong>Tish Shute:</strong></strong> You have been on the front-line of so much web innovation, but, perhaps, you could give me a little back story on how you came to take the lead with microsyntax.org.</p>
<p><strong><strong>Stowe Boyd: </strong>Well, I&#8217;ve been on Twitter 990 days or something. But long before Twitter became a commonplace household word, I&#8217;ve been advocating what I&#8217;ve been calling flow applications, based on the streaming metaphor &#8211; the notion that you&#8217;d have a stream of updates coming from people that you chose to follow, which is now being called the asymmetric follow model. Years and years ago I postulated that that model was going to come along and completely change all future significant social applications. Back in the late nineties, I introduced the term &#8220;social tools&#8221; and said social tools were going to come along and change the way the web worked. So I have a history of being 4 or 5 years ahead of what actually happens.</strong></p>
<p><strong>Microsyntax is sort of an interesting outgrowth of that. In a way it&#8217;s a very narrow area, in the sense that it&#8217;s focusing on these information patterns, the way that people want to encode information in the twitter stream or in the realtime stream of other apps. So it&#8217;s very narrow in the sense that it doesn&#8217;t immediately include all sorts of other things like these sports figures talking about how to market their services or whatever. But on the other hand it&#8217;s very broad, because every sort of information that you can imagine is going to be streaming through twitter and related applications.</strong></p>
<p><strong>We saw examples today of plants demanding water, or DJs posting their set lists as they&#8217;re playing them, devices or equipment talking about their status, video streams from surveillance cameras. Everything you can possibly imagine will find its way into that stream. It&#8217;s all going to be encoded in different ways, and grappling with that is actually an interesting problem. But more importantly, it&#8217;s better for us as a community of users if we try to approach it in some systematic fashion. That&#8217;s the purpose of Microsyntax.org &#8211; this nonprofit. The concept of microsyntax is immediately evident to people who use Twitter, and that is we have a whole bunch of conventions that have emerged, and we have some places where it would be nice if conventions did emerge, but we don&#8217;t have them yet. And the idea of creating a nonprofit to do it is a sensible thing to do. So I decided I&#8217;ll go along with the request that others have made, because other people asked me to do this. So that&#8217;s a little unusual for me.</strong></p>
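<p>The encoding problem Stowe describes is easy to see with a toy example: a &#8220;thing that twitters&#8221; has to pack its readings into 140 characters, and a stream consumer has to parse them back out. The <code>#device key=value</code> convention below is invented purely for illustration &#8211; it is not a Microsyntax.org standard.</p>

```python
# Toy sketch: a sensor packs readings into a tweet-sized status line,
# and a stream consumer decodes them. The "#device key=value" syntax
# is an invented convention for illustration only.

def encode_status(device, readings):
    """Pack key=value readings into a message of at most 140 characters."""
    body = " ".join(f"{k}={v}" for k, v in readings.items())
    msg = f"#{device} {body}"
    if len(msg) > 140:
        raise ValueError("status does not fit in 140 characters")
    return msg

def decode_status(msg):
    """Recover the device tag and readings from a status line."""
    device, _, body = msg.partition(" ")
    readings = dict(pair.split("=", 1) for pair in body.split())
    return device.lstrip("#"), readings

msg = encode_status("plant42", {"soil_moisture": "0.18", "needs_water": "yes"})
print(msg)
print(decode_status(msg))
```

The point of agreeing on such conventions up front is exactly Stowe&#8217;s: once everything from plants to DJs is pushing into the same stream, ad hoc encodings become everyone else&#8217;s parsing problem.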
<p><strong>The Web of Flow<br />
</strong></p>
<p><strong>Tish: </strong>What first attracted my attention to Microsyntax.org was Bruce Sterling&#8217;s post <strong>suggesting combining pachube feeds with Googlewave and then describing this in microsyntax</strong>. Why do you think Bruce Sterling posed this particular challenge?</p>
<p><strong>Stowe: Well, because he sees that everything is moving into the web of flow. Everything is moving out of the web of pages. In the next ten years we&#8217;re going to cease to experience the web as we do now, which is as a bunch of pages and we move around from link to link. And that&#8217;s what browsers are about. They help us move from page to page on the web. But Twitter, and before it the minifeed and instant messaging and a handful of other really interesting applications, have suggested a completely different web where information flows from other people to you through streaming mechanisms.</strong></p>
<p><strong> And the really interesting stuff that comes to me now on a daily basis is streaming to me through Twitter, not through my RSS reader, not me wandering around figuring out what to google, news or something. And that&#8217;s an indicator of the fact that that&#8217;s the hottest, coolest way to do it now, and means that in the future it will be &#8220;the way&#8221; that it&#8217;s done. So there will still be a web of pages out there, but it&#8217;ll exist like an archive. And we won&#8217;t experience the web that way in general because, &#8220;why would I go to the web page and see the guy&#8217;s blog post on his page, when it&#8217;s been served up to me 16 other ways?&#8221; And most importantly I&#8217;ve found it initially in some client, because somebody recommended it to me, and I resolved it in a hover window in my Twitter client. I&#8217;d never go to the page. I comment on it here&#8230;</strong><strong><br />
</strong></p>
<p><strong>Tish:</strong> I like your framing,Â  &#8220;the web of flow&#8230;&#8221;</p>
<p><strong>Stowe: Well it&#8217;s also that one of the characteristics is the tempo is different. I actually wrote a post about this, that I think it&#8217;s fundamentally important. It&#8217;s not really gotten much drift yet. I think it&#8217;s too hard for people to think this way. They just can&#8217;t get it. </strong></p>
<p><strong>The dimension that&#8217;s really most interesting is the transition from secret to private to public. The fact that Twitter is inherently public as a default is a breakthrough. I mean there&#8217;s nothing else like this. The first time that the idea, except for the blogosphere itself which is the concept it&#8217;s built on,Â  the inherent notion is that you&#8217;re publishing stuff and anyone can get access to it. But the tempo thing really matters, the fact that it&#8217;s near synchronous so your perception of what you feel like you&#8217;re doing is you feel like you&#8217;re in a stream of updates from friends. We know that. But the sensation is dramatically different than your close personal relationship with your inbox, which is email. Email is secret, closed, and the sense is the context is that it&#8217;s an inbox, like the one on your desk. And you are boxed in by that, and you&#8217;re not actually feeling like you&#8217;re dealing with people. You feel like you&#8217;re dealing with the inbox.</strong></p>
<p><strong>Tish:</strong> This was only present in boxes, as you say &#8211; chat rooms, IM, IRC, MUDs, Virtual Worlds &#8211; but they all had that realtime experience going on.</p>
<p><strong>Stowe: Yes, instant messaging, chat rooms, etc. &#8211; they were private. You had to invite people. The update paradigm on instant messaging was backwards. It said I want to follow this guy&#8217;s updates, but you had to get his permission to do it. That seemed like a sensible thing in the mid &#8217;90s when people worried about privacy, and so they made it private. And private is not good, actually.</strong></p>
<p><strong>Tish: </strong>IRC is exactly like Twitter, but it&#8217;s off in closed worlds&#8230;</p>
<p><strong>Stowe: Yes, you have to know about them. You can&#8217;t just stumble across them; you have to be invited or give the password. It&#8217;s another closed model. But instant messaging is the father of all this, or the mother, depending on which way you look at it. But that fundamental last thing, it&#8217;s based on a quote by Gabriel Garc&#237;a M&#225;rquez</strong> <strong>which is, &#8220;All people have three lives. They have a public life, a private life and a secret life.&#8221; And we are philosophically moving from a time where things were primarily secret (pre internet) to a time where things were primarily private, which is web 1.0, into this new web where things are going to be primarily public and open and immediate. So we are building the scaffolding real fast to allow that to happen. And it&#8217;ll take us away from the old web. The old web will go down there. Everything&#8217;s built on dirt, right? Do you see very much dirt in cities? No. No. The dirt is all concealed. It&#8217;s down there. If you want to go find it you can dig underneath the floor, and there&#8217;s dirt under there. But most people don&#8217;t spend very much time down there; we send professionals down there to put plumbing and pipes underneath, and we experience the world like this.</strong></p>
<p><strong>Tish: </strong>I met Eric Horvitz (Microsoft Researcher) at <a href="http://en.oreilly.com/where2009/" target="_blank">Where 2.0</a>. He is interested in community sensing and ideas about how people can share data in a win-win way (<a href="http://en.oreilly.com/where2009/public/schedule/speaker/49828" target="_blank">see here</a>). Do we need to work out ways to make sure people&#8217;s relationship to their data is not just to have it harvested by others for profit or repression?</p>
<p><strong>Stowe: I&#8217;m interested in this actually. I recently wrote a piece about the governance of Twitter and for the purpose of your question let&#8217;s just go along with the premise that Twitter&#8217;s going to continue to be benevolent, and everything will be open, and everything will be public and everyone can do whatever they want with it. Well there&#8217;s a tremendous amount of things that people will want to do, but most of the things that they will set about doing to begin with will turn out to be irrelevant. </strong></p>
<p><strong>People will want to measure sentiment and all this other stuff, for example. And they&#8217;ll do that and they&#8217;ll coerce a lot of big brands and so on to pay money for these services. But the thing that&#8217;s going on with the now web, my web of flow is that people are disconnecting from self identity based on mass affiliations. So ultimately the more you spend your time doing this, you don&#8217;t give a s**t about brands. Nike &#8211; I could care less. </strong></p>
<p><strong>So there is defection from the mass media. We heard it today. There&#8217;s people here who were like booing these media guys, who think they should be held up as gods because, &#8220;Oh I&#8217;m one of the first to use Twitter on TV.&#8221; Well F*** you, I don&#8217;t give a s***. I don&#8217;t watch television. Every hour that people spend on the internet is an hour they do not spend watching television. It&#8217;s a direct and one to one correlation. Sure people still want to get their fill of whatever, the NBA playoffs, but significantly less than ever before. Which is why they&#8217;re increasingly irrelevant. </strong></p>
<p><strong>So the idea that some magicians are going to come along, figure out how to mine this data to find out how I feel about my automobile? I do not have a close personal relationship with an automobile. I don&#8217;t. And increasingly people won&#8217;t affiliate that way. They won&#8217;t bond with their stuff like that. That&#8217;s why I say most of this information won&#8217;t be helpful. It&#8217;ll be interesting sociologically. Webthropologists will be able to make it interesting &#8211; and marketing people, who are trying to figure what&#8217;s going on, might be able to do the right thing. But if they&#8217;re trying to take it and make it do something for them&#8230; They&#8217;re going to try to take it and use it to change us? To control us? It&#8217;s like that line in The Labyrinth, &#8220;you don&#8217;t have any power over me anymore.&#8221;</strong></p>
<p><strong>Tish: </strong>You are actually saying something much more radical than say community sensing or that we need to store our own data. You seem to be saying that in some ways it doesn&#8217;t matter whether you store your own data or your data&#8217;s in the cloud (although Iran seems to be showing how centralized network control can be a powerful tool of repression).</p>
<p><strong>Stowe: Most of the things that they&#8217;re going to try to use it to do won&#8217;t work because we&#8217;re not the same anymore. It&#8217;s inevitable. </strong><strong>You use these tools, and you are changed. And it&#8217;s just a question of how long you use them and the longer you use them, the more you use them, the more changed you are. When people shift to a basis of sociality around connection with other people as opposed to mass affiliation, it&#8217;s different. It&#8217;s completely different. Your whole system of ethics, the way you judge the world and decide what&#8217;s important, is different. And not only different it&#8217;s better. It&#8217;s a better way to deal with the world.</strong><strong> And these guys are still hoping that the old rules hold, but they don&#8217;t. They just won&#8217;t.</strong></p>
<p><strong>Tish:</strong> This is rather a broad question. But one of the things that Kevin Slavin brought up in his talk is about things that tweet &#8211; your plant is tweeting, your shoes are tweeting, your house is tweeting. Twitter is a natural medium for the internet of things and what Kevin Slavin calls the &#8220;streamy aesthetics of sensors.&#8221; But with all these things that are tweeting people have had a lot of problems with filtering that kind of flood of tweets. For example, I may want to listen to a tweet from my plant telling me it needs water when I am actually at home and can do something about it. But I may not want to listen to my plant whining about being thirsty all the time. Can microsyntax help? Or is this a place for those appliances you mentioned earlier?</p>
<p><strong>Stowe: There&#8217;s a whole other category of stuff having to do with priorities &#8211; this isn&#8217;t really a microsyntax &#8211; of different times of day when you&#8217;re involved in different activities. You may be more or less interested in different collections of Twitter streams. And the notion of how you go about dealing with that is &#8211; it could be semi-microsyntactical, but maybe it isn&#8217;t at all. Maybe it&#8217;s all just having to do with the way that clever client apps work. So maybe if you have a super duper Tweet Deck, and you say it&#8217;s evening time and I&#8217;m in my evening mode, so a whole bunch get blocked and a different group of people, for example, your Parcheesi evening friends get enabled, and at the weekend when you have time to do house care you listen to your house.</strong></p>
<p><strong>I don&#8217;t think this is a microsyntactical issue. I don&#8217;t think this is an issue of what&#8217;s embedded in the stream except as a notion of priorities. There&#8217;s a lot of people who would like to have a mechanism to indicate priority. But I can&#8217;t think of any effective way to do it that wouldn&#8217;t immediately be abused. Of course anything can be abused. This guy thinks that this is high priority, but maybe once again it&#8217;s one of these sort of mutual dimensions where they want to indicate it&#8217;s high priority but I say I only believe in priorities from certain people.</strong></p>
<p><strong> But still there might be a case to be made for allowing people to put some kind of indication of priority in a tweet, so that there is a hope that it could rise out of the clutter. I talked about some things that I&#8217;m interested in that are just purely operational. One of these things I want to get people to build, in Tweet Deck, but it could be in any kind of a client, I want to be able to say don&#8217;t let this tweet go away. So I&#8217;m getting them to build the pushpin. So I can put a pushpin in the thing and it&#8217;ll stay at the top, or stay at the bottom, wherever I put it. And then I can respond to it later, because if I don&#8217;t respond to it right now, in most places it goes bye, and then you&#8217;ve got to go search for it &#8211; a pain in the ass. </strong></p>
<p><strong>Then I say if I&#8217;m going to have pushpins I want to have a record of all the things that I&#8217;ve push pinned &#8211; a history of pushpins. But it&#8217;s all client based. It&#8217;s got nothing to do with what&#8217;s in the text. </strong></p>
<p><strong>Tish:</strong> And knowing how many of your followers had already got a particular tweet from somewhere else &#8211; which would be very useful &#8211; has to be done as an appliance&#8230;</p>
<p><strong>Stowe: Yes that&#8217;s sort of a downstream metrics kind of thing.</strong></p>
<p><strong>Microsyntax is not the answer to every kind of thing. Like, appropriately dealing with hash tags in a sensible fashion is not purely a function of how we use them. But some of it is the structure itself. That&#8217;s why I came up with the subtags model. So everybody at <a href="http://en.oreilly.com/where2009/public/schedule/speaker/49828" target="_blank">South by Southwest</a> tagged everything southbysouthwest, so if you searched for it there were 150,000 hits a day. So it was useless. But if people had used the subtags model, or something else like that, you could have searched for the subtag. So you could have searched for south-by-southwest.parties or south-by-southwest.thirtytwo-bit which was a particular party.</strong></p>
<p><strong> And so if you have sensible tools that are doing a better job of aggregating information around more complicated ways of structuring hash information, then we can get past the fact that brute force search just isn&#8217;t going to work. It just won&#8217;t work. For example, somebody going through all the stuff from today that says #140conf, trying to find just the stuff that had to do with media &#8211; they won&#8217;t be able to do it. They&#8217;ll have to do it manually. So some of that is better syntax. But some of it is better tools. I mean somebody should go build a better hashtags.org. </strong></p>
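[To make the subtags idea concrete, here is a minimal sketch of how a client might filter a stream on dot-delimited subtags. The tweet text, the `subtag_matches` helper, and the tag names are hypothetical illustrations, not part of any published spec.]

```python
import re

def subtag_matches(tweet, base, subtag=None):
    # Match a dot-delimited subtag hashtag, e.g. #southbysouthwest.parties.
    # With no subtag given, match the base tag (which, like brute-force
    # search, also hits every subtagged variant).
    pattern = re.escape(base)
    if subtag is not None:
        pattern += r"\." + re.escape(subtag)
    return re.search(r"#" + pattern + r"\b", tweet, re.IGNORECASE) is not None

tweets = [
    "Great show tonight #southbysouthwest.parties",
    "Keynote starting now #southbysouthwest.keynote",
    "Just landed in Austin #southbysouthwest",
]
# Base-tag search hits everything; the subtag narrows it to one.
everything = [t for t in tweets if subtag_matches(t, "southbysouthwest")]
parties = [t for t in tweets if subtag_matches(t, "southbysouthwest", "parties")]
```

The point of the sketch is that the narrowing happens in the tag structure itself, so any client or aggregator can filter without a central service.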
<p><strong>Tish: </strong>And in terms of creating a web of flow not all of what we need can be done within the Twitter messages &#8211; it has to be done in the client and external applications<strong>&#8230;<br />
</strong></p>
<p><strong>Stowe: Yes, there&#8217;s this class of applications that listen very diligently to what you&#8217;re doing in Twitter. The primary mechanism of how you influence the app is doing stuff in Twitter. You can always go to the app and look at it and fool with it. But, if in fact, the preponderance of your interaction is, it&#8217;s listening or talking to you in Twitter &#8211; I call that an appliance, to distinguish it from these other apps. Any external application might provide you with the mechanism to dump information into Twitter, but you have to go to the app to do the primary kinds of interaction. In fact major functionality may not be available at all in Twitter or maybe no functionality, except for like <a href="http://brightkite.com/" target="_blank">Brightkite</a> allows you to dump stuff into Twitter. But the idea is that primarily you do it there. Or there&#8217;s a very limited thing like you get with Brightkite, you can send a message saying, &#8220;I&#8217;m somewhere.&#8221;</strong></p>
<p><strong>Tish: </strong>Should location be put into tags?</p>
<p><strong>Stowe: I don&#8217;t think that location should be put into tags. In other words, if I talk about Paris, then using hashtags is sensible. Or I&#8217;m talking about Sherlock Holmes and his relationship to London. It&#8217;s a conceptual thing &#8211; like talking about Heaven. It doesn&#8217;t actually have to exist on the planet somewhere. But it&#8217;s really different if you say I am in New York City right now or the more interesting case I think really is, &#8220;I am going to be in Boston colon next week&#8221; or June 15 dash 17. And I want that information to be available to everybody or a select group of my friends, or just to myself and have it find its way into my calendar. But that&#8217;s really different than saying &#8220;I&#8217;ve always enjoyed it when I visit HASH New York.&#8221; </strong></p>
<p><strong>Tish:</strong> I liked Kevin Slavin&#8217;s phrase &#8220;the streamy aesthetics of sensors.&#8221; I guess streamy aesthetics is something you have given a lot of thought to?<strong><br />
</strong></p>
<p><strong>Stowe: First of all I read a lot of poetry, so I believe in poetics in reading and writing. But I don&#8217;t think punctuation marks really degrade that dramatically. I mean it&#8217;s OK to have periods and exclamation marks and commas, and things can still be poetic. I think it&#8217;s important to try to dream up microsyntax that doesn&#8217;t take your eyes off the content, the stuff that people are really trying to say. So that&#8217;s why, for example, I hate L: as a location cue, because anything that has letters in it &#8211; letters you&#8217;re not supposed to say mentally, or aloud when you read the tweet &#8211; causes you to do a stutter step when you&#8217;re reading it. </strong></p>
<p><strong>But if you use punctuation marks, special characters at various points or placement conventions, like where do things appear in order in a tweet, those things don&#8217;t have the same toe stub, that I think really ugly syntactic conventions would. So it&#8217;s possible to make these things pretty. For example I&#8217;m testing out trying on various conventions for what do you do with a re-tweet. If you want to re-tweet it, if you actually want to have people see it, and then you want to make your own comment. So the question is how do you separate the two? So, RT &#8211; guy&#8217;s name and then text. Well then how do you know where his text ends and my text begins. So certain things don&#8217;t work for me. I mean like a comma is not enough because there might be a comma in the text. And a period doesn&#8217;t work because there might be multiple sentences. So it has to be something else.<br />
</strong></p>
<p><strong>Tish:</strong> And aren&#8217;t there confusions that arise because there are already conventions of usage&#8230;</p>
<p><strong>Stowe: Yes, I have problems with angle brackets, for example. Sometimes when the tweets wind up in not particularly smart rendering systems, it gets confused because it thinks they&#8217;re html. For example, somebody was using the open angle bracket, and even though it&#8217;s just text, and it&#8217;s not html, when I took that tweet and put it in a blog post, it thought it was the start of an html tag, and so it disappeared. You could use an html escape character but that&#8217;s the kind of thing that causes problems. The other problem is there are other ways that it&#8217;s been used a lot. People have used this as the thing to introduce the comment that they&#8217;re making after a re-tweet.</strong></p>
<p><strong>Tish: </strong>There must be very few characters not being used for other things?<strong><br />
</strong></p>
<p><strong>Stowe: Yes, but for example, when we use geoslashes there&#8217;s a blank in front of it, or it&#8217;s the first character in the tweet &#8211; so in that particular example it is similar because slash is used for other things. But, in all the places where it is used, generally there&#8217;s a character that precedes it &#8211; like &#8220;w/o&#8221; for without, or a fraction, or a long list of these options. </strong>[Geoslash is microsyntax for user location using slash (&#8216;/&#8217;) &#8212; as in &#8216;just arrived /SFO&#8217; or &#8216;heading to /New York: tomorrow/&#8217; for more see <a href="http://microsyntax.pbworks.com/Geoslash" target="_blank">Stowe&#8217;s post here</a>.]</p>
<p><strong>When I was rooting around for a character I looked for a long time. And also I wanted to make sure that the slash was easily reachable on cell phones, which, for example, angle bracket isn&#8217;t. So if you&#8217;re on a phone and you want to say I&#8217;m here &#8211; I don&#8217;t know how far you have to go on your phone, but it isn&#8217;t in the first eight characters of Symbian. I looked carefully to make sure it wasn&#8217;t a common character that people use widely in everyday speech like commas and semicolons and exclamation marks, but was still easily used. There are still other alternatives. It&#8217;s not the only one. There are cases to be made for all of these things &#8211; pros and cons for all of them.</strong></p>
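[A rough sketch of what geoslash extraction could look like in a client, based only on the two example forms quoted in the note above (&#8216;just arrived /SFO&#8217; and &#8216;heading to /New York: tomorrow/&#8217;); the actual proposal in Stowe&#8217;s post may differ. The pattern and function name are hypothetical.]

```python
import re

# A bare "/SFO" token, or a bracketed "/New York: tomorrow/" span.
# The slash must start the tweet or follow a blank, so uses like
# "w/o" or fractions are left alone, per the rule described above.
GEOSLASH = re.compile(r"(?:^|\s)/([^/]+)/|(?:^|\s)/(\S+)")

def extract_geoslash(tweet):
    # Return the location text of the first geoslash in the tweet, or None.
    m = GEOSLASH.search(tweet)
    if m is None:
        return None
    return (m.group(1) or m.group(2)).strip()
```

For example, `extract_geoslash("done w/o fuss")` returns `None`, because the slash is preceded by a letter rather than a blank.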
<p><strong><br />
Anyway I was making the case of experimenting with different things for this re-tweet, &#8220;Here&#8217;s my comment.&#8221; And I was trying all sorts of stuff like double colon, I tried all kinds of things I wanted to see what it looked like. So starting this week I used the solid bar, the upright bar. It sets it off. It really feels like there&#8217;s a divide. There&#8217;s a cleavage point, and that&#8217;s that guy and this is this guy. So I&#8217;m going to write it up as one of the candidates. Some people use square brackets and many other things. There are many personal conventions but nothing has become a real convention, accepted as the norm.</strong></p>
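[The upright-bar experiment above amounts to a one-line parsing rule; a minimal sketch of how a client might split on it. The bar was one candidate convention among several (square brackets, double colons), not an accepted standard, and this helper is a hypothetical illustration.]

```python
def split_retweet(tweet):
    # Split "RT @name original text | my comment" on the first upright bar.
    # Unlike a comma or period, the bar is unlikely to occur in the quoted
    # text itself, which is why it works as a cleavage point.
    if "|" in tweet:
        original, comment = tweet.split("|", 1)
        return original.strip(), comment.strip()
    return tweet.strip(), None

original, comment = split_retweet(
    "RT @stoweboyd microsyntax shouldn't take your eyes off the content | agreed"
)
```

A tweet with no bar comes back unchanged with a `None` comment, so the rule degrades gracefully for plain retweets.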
<p><strong>[ </strong>Note: Our conversation ended here as the presentations had resumed at <a href="http://www.140conf.com/" target="_blank">140 Characters Conference</a> ]</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2009/06/28/twitter-and-the-web-of-flow-talking-with-stowe-boyd-bruce-sterling-about-microsyntax-squelettes-favela-chic-and-the-state-of-now/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Mobile Augmented Reality and Mirror Worlds: Talking with Blair MacIntyre</title>
		<link>https://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/</link>
		<comments>https://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/#comments</comments>
		<pubDate>Fri, 12 Jun 2009 05:07:01 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[3D mirror world]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Android and augmented reality]]></category>
		<category><![CDATA[ARhrrrr]]></category>
		<category><![CDATA[Art of Defense]]></category>
		<category><![CDATA[augmented reality on the gphone]]></category>
		<category><![CDATA[augmented reality on the iphone]]></category>
		<category><![CDATA[augmented reality shooter games]]></category>
		<category><![CDATA[Aware Home Research]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bragfish]]></category>
		<category><![CDATA[Dark Star]]></category>
		<category><![CDATA[geolocation]]></category>
		<category><![CDATA[geotagging]]></category>
		<category><![CDATA[google earth]]></category>
		<category><![CDATA[handheld AR games]]></category>
		<category><![CDATA[handheld augmented reality]]></category>
		<category><![CDATA[Immersive augmented reality]]></category>
		<category><![CDATA[Information Landscapes]]></category>
		<category><![CDATA[instrumented homes]]></category>
		<category><![CDATA[instrumented world]]></category>
		<category><![CDATA[iphone 3Gs]]></category>
		<category><![CDATA[iphone games]]></category>
		<category><![CDATA[ISMAR]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[location aware applications]]></category>
		<category><![CDATA[minimally immersive augmented reality]]></category>
		<category><![CDATA[MMO of the real world]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[MS Virtual Earth]]></category>
		<category><![CDATA[NVidia Tegra devkits]]></category>
		<category><![CDATA[Open Sim]]></category>
		<category><![CDATA[OpenSim and Augmented Reality]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[outdoor tracking and markerless AR]]></category>
		<category><![CDATA[parallel mirror worlds]]></category>
		<category><![CDATA[persistent immersive mirror worlds]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[Sun's Wonderland]]></category>
		<category><![CDATA[Texas Instrument's OMAP3 devkits]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[Unity3D]]></category>
		<category><![CDATA[Unity3D and Augmented Reality]]></category>
		<category><![CDATA[virtual pets]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3691</guid>
		<description><![CDATA[Blair MacIntyre is one of the original pioneers of augmented reality and an extraordinary amount of creative work is coming out of his Augmented Environments Laboratory at Georgia Tech &#8211; see YouTube videos here. The screenshot below is from, ARhrrrr, a very impressive augmented reality shooter game created at Georgia Tech Augmented Environments Lab and [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf.jpg"></a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf2.jpg"><img class="alignnone size-full wp-image-3732" title="arf2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf2.jpg" alt="arf2" width="259" height="239" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/droppedimage1.jpg"><img class="alignnone size-full wp-image-3725" title="droppedimage1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/droppedimage1.jpg" alt="droppedimage1" width="271" height="240" /></a></p>
<p><a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> is one of the original pioneers of augmented reality and an extraordinary amount of creative work is coming out of his <a href="http://www.cc.gatech.edu/ael/" target="_blank">Augmented Environments Laboratory</a> at Georgia Tech &#8211; see <a href="http://www.youtube.com/user/AELatGT" target="_blank">YouTube videos here</a>. The screenshot below is from <strong>ARhrrrr</strong>, a very impressive augmented reality shooter game created at Georgia Tech <span class="description">Augmented Environments Lab </span>and <span class="description">Savannah College of Art and Design </span>(SCAD-Atlanta), and produced on the <strong>NVidia Tegra devkits</strong> &#8211; <a href="http://www.youtube.com/watch?v=cNu4CluFOcw" target="_blank">watch the demo here</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-63.png"><img class="alignnone size-medium wp-image-3799" title="picture-63" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-63-300x169.png" alt="picture-63" width="300" height="169" /></a></p>
<p>Blair has spent much of his career working on immersive augmented reality and more recently the integration of augmented reality with mirror worlds. Blair explains:</p>
<p><strong>&#8220;I am interested in the intersection of mobile devices &#8211; whether they are head mounts or handhelds &#8211; and parallel mirror worlds&#8230;I think that parallel mirror worlds are a direct manifestation of the intersection of the virtual world we now live in (the web) and geotagging. As more and more information is tied to place, and as more of our searching becomes place-based, we will want to do those searches about places we are not at. A 3D mirror world may provide one interface to that data. Want to plan your trip to London; go there virtually and look around, see what is there (both physically and virtually), teleport between areas you want to learn about, and so on. More interestingly, talk to people who are there now, and retrieve your location-based notes when you are on your trip.&#8221;</strong></p>
<p>But, at a time when many augmented reality developers are focusing on AR apps for smart phones, including Blair (the picture on left opening this post is Blair&#8217;s augmented reality <a href="http://www.youtube.com/watch?v=_0bitKDKdg0&amp;feature=channel_page" target="_blank">iphone app ARf)</a>, I was interested in finding out from Blair what the state of play was for the real deal Rainbow&#8217;s End style AR, as well as the potential he sees in smart phones to mediate meaningful AR experiences.</p>
<p>There is an enormous amount of innovation in mapping our world &#8211; see my post, <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">&#8220;Location Becomes Oxygen at Where 2.0 and WhereCamp,&#8221;</a> and <a href="http://gamesalfresco.com/2009/05/26/where-2-0-the-world-is-mapped-now-use-it-to-augmented-our-reality/" target="_blank">Ori Inbar&#8217;s Where 2.0 conference roundup.</a> But as Ori notes, to move augmented reality forward:</p>
<p><strong>My point is not a shocker: all we need is to tap into this information and bring it, in context, into people&#8217;s field of view.</strong></p>
<p>And this is what Blair MacIntyre&#8217;s work is all about.</p>
<h3>Talking With Blair MacIntyre</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-62.png"><img class="alignnone size-medium wp-image-3728" title="picture-62" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-62-300x257.png" alt="picture-62" width="300" height="257" /></a></p>
<p><strong>Tish Shute:</strong> There do seem to be broader implications to augmented reality today than when this term was first coined. I am interested to have your perspective on how augmented reality may go beyond some of our early definitions?</p>
<p><strong>Blair MacIntyre: I still think the original definition of the term is useful: media (typically graphics) tightly registered (aligned) with the physical world, in real time. Many people talk about many things that relate virtual worlds to places, spaces, objects and people. There is room for many of them, and they don&#8217;t all have to &#8220;be&#8221; augmented reality. I like using Milgram&#8217;s definition of Mixed Reality as everything from the physical world (at one end) to the virtual world at the other; it&#8217;s a spectrum, and augmented reality just sits at one point.</strong></p>
<p><strong>The reason I like the old definition is I believe there is something special about graphics that are tightly, rigidly aligned with the physical world. When things appear to stick to the world, and an obviously identifiable location, people can start leveraging their natural perceptual, physical and social abilities and interact with the mixed world as they do the physical world. We&#8217;ve found this with the two studies we&#8217;ve done of tabletop AR games (<a href="http://www.augmentedenvironments.org/lab/research/handheld-ar/artofdefense/" target="_blank">Art of Defense</a> and <a href="http://www.youtube.com/watch?v=w3iBrj_zfTM&amp;feature=channel_page" target="_blank">Bragfish</a>); one key to those games is that the graphics were tightly aligned with identifiable landmarks in the physical world (gameboard).</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/aod-sandbox-video-15.png"><img class="alignnone size-medium wp-image-3729" title="aod-sandbox-video-15" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/aod-sandbox-video-15-300x225.png" alt="aod-sandbox-video-15" width="300" height="225" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/imgp0782-2.jpg"><img class="alignnone size-medium wp-image-3782" title="imgp0782-2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/imgp0782-2-300x225.jpg" alt="imgp0782-2" width="300" height="225" /></a></p>
<p><em><a href="http://www.augmentedenvironments.org/lab/research/handheld-ar/artofdefense/" target="_blank">Art of Defense</a> (pic on left) <a href="http://www.youtube.com/watch?v=w3iBrj_zfTM&amp;feature=channel_page" target="_blank">Bragfish</a> (pic on right)<br />
</em></p>
<p><strong>Tish:</strong> I know that you are involved with <a id="b-c6" title="ISMAR 2009" href="http://www.ismar09.org/" target="_blank">ISMAR 2009</a> which is the key US augmented reality conference. What do you think will be the hot themes, applications, innovations at this year&#8217;s conference? Do you think this will be the year that AR really breaks out of eye candy into truly useful and sustained experiences?</p>
<p><strong>Blair: Unfortunately, I won&#8217;t be involved this year. I was supposed to be helping run the technical program, as well as the art/media program, but sickness in my family prevented me from having the time, so I am not helping this year.</strong></p>
<p><strong>First, I would not agree with the implication of the last question &#8212; I don&#8217;t think AR has just been eye candy up to now. I do agree that the &#8220;high profile&#8221; uses of it have largely been that, which is mostly because of the limits of the technology. I don&#8217;t think we&#8217;ll see huge changes in that regard by ISMAR this year. However, we will hopefully see a mixing of communities that hasn&#8217;t happened at ISMAR before, and I do believe that this year (independent of ISMAR) we will see more and more AR apps. Whether they go beyond eye candy is still a question. I&#8217;m hoping that some folks (including myself and other ISMAR folks!) will help push AR in new directions. But I also expect many folks new to ISMAR and AR to play a big role, because it is this new blood, especially those folks with real problems to solve, new art and game ideas, and a fresh perspective, that will open new doors.</strong></p>
<p><strong>Tish:</strong> You have been working on integrating augmented reality with virtual worlds. You mentioned that the way you use <a href="https://lg3d-wonderland.dev.java.net/" target="_blank">Sun&#8217;s Wonderland</a> is really about pulling the virtual world into the real world, i.e., Wonderland, &#8220;is just a place to put data.&#8221; How is your use of the persistent virtual space different from what we have become accustomed to call virtual worlds?</p>
<p><strong>Blair: The approach we are taking in our project at Georgia Tech is to use the virtual world as the central hub of the information space, and allow the virtual world to be the element that enables distributed workers to collaborate more smoothly. This is work we are doing with Sun and Steelcase (and the NSF), and is an outgrowth of a project (the InSpace project) that&#8217;s been going on for a few years.</strong></p>
<p><strong>What we are trying to do is use mixed reality and ubicomp techniques to pull as much of the physical activity into the virtual world, and then reflect that activity back out to the different participants as best suits their situation. So, folks in highly instrumented team rooms will collaborate in one way, and their activity will be reflected in the virtual world; remote participants (e.g., those at home, or in a cafe or hotel) may control their virtual presence in different ways, but the presence of all participants will be reflected back out to the other sides in analogous ways. We may see ghosts of participants at the interactive displays, or hear their voices in 3D space around us; everyone will hopefully be able to manipulate content on all displays and tell who is making those changes.</strong></p>
<p><strong>A secondary benefit, I hope, is that by putting the data in the virtual world and making that the place that gives you more powerful and flexible access to the data (e.g., by leveraging space and giving access to history), distributed teams will begin to have the virtual space become a place they go to work, bump into each other, and have those casual contacts that co-located workers take for granted.</strong></p>
<h3><strong>Creating the Information Landscape of the Future</strong></h3>
<p><strong>Tish: </strong>At the end of <a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">my interview with Ori Inbar</a> he said, in order to have a ubiquitous experience <em>&#8220;you&#8217;ll need to 3d map the world. Google earth like apps are going to help but it is not going to be sufficient. So let&#8217;s leverage people. Google became successful in part by making people work with them. Each time you create a link from your blog to my blog their search engines learn from it. So let&#8217;s find ways to make people create information that can be used for AR.&#8221;</em> What ways do you think people can create information that can be used for AR?</p>
<p><strong>Blair: I think the big part of that is the creation of models and environments, the necessary &#8220;baseline&#8221; for specifying experiences. Google and Microsoft are clearly working toward this; recent videos from Microsoft show them starting to move the Photosynth work toward Virtual Earth. Similarly, I came across a page where people are finally starting to mine geotagged Flickr images [see my post, <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">&#8220;Location Becomes Oxygen,&#8221;</a> and <a href="http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/" target="_blank">here</a> for more on the <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a> project from Flickr] to create models. It&#8217;s that kind of thing that will be useful first; using the data we all create to enable modeling and (eventually) vision-based tracking in the real world.</strong></p>
<p><strong>After that, it&#8217;s a matter of time till more of what we &#8220;create&#8221; (e.g., Tweets and blog posts and so on) are all geo-referenced; these will become the information landscape of the future, the kinds of things people think about when they read &#8220;Rainbow&#8217;s End&#8221;. The big problem will be filtering, searching and sorting. And, of course, safety and security.</strong></p>
<p><strong>Tish: </strong>You are working with <a href="http://unity3d.com/" target="_blank">Unity3D</a> to research the integration of mobile location based AR with persistent mirror world like spaces. What has attracted you to Unity? What is the difference between this and your Wonderland project? I know you mentioned you will be using head-mounted displays as part of this Unity project. What are your goals for this project?</p>
<p><strong>Blair:</strong> <strong>We started to use <a href="http://unity3d.com/" target="_blank">Unity3D</a> because it gave us what we wanted in a game engine. Most importantly, it&#8217;s very open and let us trivially expose AR technologies into the editor. Similarly, it can target the iPhone, so we can begin to work with it on that platform, too. The biggest problem with creating compelling experiences is content; and a show stopper for creating content is not being able to get it into your engine. Unity has a nice content workflow.</strong></p>
<p><strong>Unity3D is a front-end engine for creating the game; Wonderland is both a front end and a back end. We are actually looking into using the Wonderland backend with Unity as well. Wonderland also has growing support for doing &#8220;real work&#8221; in a virtual world, which is key to our other projects.</strong></p>
<p><strong>Eventually, we&#8217;ll be using HMDs. The goal for the Unity3D project, initially, was to explore what you can do with an AR/VR mirror-world; this is a project we are working on with Alcatel-Lucent, and demoed at CTIA this year. It&#8217;s continuing to grow, though, and now includes a number of our projects, including some work on mobile social AR and soon, some performance and experience design projects in the area of AR ARGs. It&#8217;s really quite interesting to imagine what you can do when you have an &#8220;MMO of the real world&#8221; (which we now have for part of campus) that supports VR-style desktop access simultaneously with mobile AR access.</strong></p>
<p><strong>Tish: </strong>Have you taken another look at <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a> as a possible backend for augmented reality? Recently I talked to David Levine of IBM, and he is thinking about some possibilities to optimize OpenSim to dynamically load a large number of objects at once (i.e., how fast OpenSim can bulk load into an existing sim) and make it better suited to augmented reality/mirror world type projects.</p>
<p><strong>Blair: I haven&#8217;t looked at OpenSim recently. We will probably look at it this summer.</strong></p>
<p><strong>Tish:</strong> Why did you select Unity as a good client for augmented reality?</p>
<p><strong>Blair: Unity is a 3D game authoring environment, so at some level it is no different from using Ogre, if all the associated stuff were just as well done. It has integrated physics, scripting, debugging, etc. &#8211; you can write code in JavaScript or C# or whatever. It has a good content pipeline, as well, and supports a range of platforms.</strong></p>
<p><strong>It has simple networking built in, so multiple Unity engines can talk to each other, but it is not a virtual world platform out of the box &#8211; there is no back end &#8230;</strong></p>
<p><strong>Tish: </strong>Someone described Unity to me as a great client waiting for a great backend. So what are you going to use as a back end?</p>
<p><strong>Blair: There is no real processing except in the client right now. We will eventually have to create a back end. We are thinking of using Darkstar, because someone on the Sun Wonderland community forums has already built a set of scripts connecting Unity to Darkstar.</strong></p>
<p><strong>But for us, we are not proposing right now to build a real product. This is research to demonstrate what you could do if you actually had the back end.</strong></p>
<p><strong>Tish:</strong> What are the most important aspects of the backend from your POV?</p>
<p><strong>Blair: We want to simulate a variety of the interesting aspects of the back end. So I very much care about notions of privacy and security and how these sorts of AR/VR Mirror Worlds would work in practice. But I care about those things as they impact user experience, not really about how we would actually implement them.</strong></p>
<p><strong>Tish:</strong> So looking at some of the big problems from the perspective of user experience? Are we going to go through the same growing pains that the web and VWs have seen &#8211; for example, will we have to type in passwords to get into everyone&#8217;s little worlds&#8230;.</p>
<p><strong>Blair: Well you know the SciFi background to this, you&#8217;ve mentioned it in other posts on your blog. Because when you look at the Rainbow&#8217;s End model where you have security certificates flying around, that is in effect what cookies and so on are now. You can authenticate yourself once and then have those certificates hang around. So you can easily imagine how it could be done. But the big question is how does that change user experience. There are all kinds of things that start coming into play &#8211; like what happens if nearby people see different things &#8211; it goes on and on!</strong></p>
<p><strong>Tish:</strong> Sounds like this is very valuable research. It seems to me that there will be a lot of investment soon in putting the pieces together to do location based markerless AR, and it would be nice if we knew more about it from the user experience POV.</p>
<p>Isn&#8217;t it vital for a productive intersection between mobile AR and persistent mirror world spaces for us to have markerless AR? Aren&#8217;t we right at the beginning of people really saying, yeah, markerless AR is doable now? But it seems to me not many people are researching or working on fully immersive AR and its integration with mirror worlds?</p>
<p><strong>Blair: I think some of the AR community is thinking about this. There are probably people who are doing stuff in some other, non-technical communities. It wouldn&#8217;t surprise me to find out that people in the digital performance or Ars Electronica world are thinking a little bit about these sorts of things, although not necessarily at the level of actually trying to build it, because they probably can&#8217;t right now. But experimenting with the precursors. My colleagues in digital media like to point out that this is often the purpose of digital art, to point out new directions and push the boundaries.</strong></p>
<p><strong>Obviously Science Fiction has explored the possibilities because that is what Rainbow&#8217;s End and the Matrix were all about.</strong></p>
<p><strong>Tish:</strong> and <a href="http://en.wikipedia.org/wiki/Denn%C5%8D_Coil" target="_blank">Denno Coil</a>&#8230;</p>
<p><strong>Blair: There has been some research &#8211; people like my adviser Steve Feiner up at Columbia, Mark Billinghurst in New Zealand, myself, and people at Graz University in Austria. But partly it has been so hard to do mobile AR up to now &#8211; so many people mock head-worn displays and can&#8217;t get past current technology &#8211; you have had to be willing to ignore the bulky backpacks and cables and batteries and so on. That is changing, which is good.</strong></p>
<p><strong>My current response to the anti-head-mounted-display people is: if 5 years ago you told me that fabulously dressed people who care about their looks and wear stylish clothes would have big things hanging from their ears that blink bright blue light, so they could talk on the phone, many of us would have said you were crazy, because it would be ugly and so on. But there is an intersection of demonstrable need and benefit &#8211; Bluetooth headsets are really useful &#8211; and the sort of early gestalt feeling that grew up around them &#8211; that people who use them are so important that they always have to be in touch, so they wear these things &#8211; so people accept them.</strong></p>
<p><strong>It will likely be a similar thing with head mounted displays. And I don&#8217;t know if it will be that people wear them so that they can read their mail while driving, god forbid. But it will be something. And when we get the 2nd generation of the wrap glasses that look more like sun glasses and are not bulky and so on, we will have the potential for them catching on, because you will look at them and you will think that the person is wearing them because they are doing x&#8230;</strong></p>
<p><strong>X might be surfing a virtual world or reading their email or keeping in touch, or being aware. It will happen. But they have to get unbulky enough, and there has to be more than one important application, not just watching TV.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/karmablair-fix.jpg"><img class="alignnone size-medium wp-image-3787" title="karmablair-fix" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/karmablair-fix-300x227.jpg" alt="karmablair-fix" width="300" height="227" /></a><br />
</strong></p>
<p><em>Picture above shows an outside view of the KARMA AR system; the knowledge based maintenance system Blair built in his first year of grad school (<strong>&#8220;first AR system Steve Feiner, Doree Seligmann, and I worked on&#8221;</strong>). Blair noted, &#8220;<strong>The Communications of the ACM paper on it (from 1993) is a pretty widely cited AR paper.&#8221;</strong></em></p>
<p><strong>Tish:</strong> I think the need for full-on transparent, immersive, wraparound, Gucci stylish eyewear with a decent field of view is the elephant in the room in terms of realizing the full potential of augmented reality. There are a few new players in the field &#8211; <a href="http://www.sbglabs.com/" target="_blank">Digilens</a>, <a href="http://www.vuzix.com/home/index.html" target="_blank">Vuzix</a>, others? What is the progress in this area and what do you hope for in terms of near term solutions?</p>
<p><strong>Blair: I agree with that sentiment. I think that, in the near term, there is a lot we can do with handhelds, as we&#8217;ve been doing in the lab. However, because it&#8217;s awkward and tiring to hold up a device, even a small one, for any length of time, handhelds will only be good for &#8220;focused&#8221; uses of AR, such as the table-top games we&#8217;ve been doing, or the constellation viewing app that I heard came out recently for the Android G1. I don&#8217;t even see something like Wikitude as that compelling (beyond the &#8220;gee whiz&#8221; factor) for a handheld form factor. Many proposed AR apps only really become compelling when users have constant awareness of them, and that requires a see-through head-worn display.</strong></p>
<p><strong>I&#8217;ve seen the mockups of the Vuzix ones; they seem pretty interesting, and are getting to where early adopters could use them (they will be cheap enough, and will hopefully be good enough). Microvision&#8217;s virtual retinal display is also promising; the contact lens displays will be the most interesting, if anyone can ever make them work. I don&#8217;t know of anything else out there.</strong></p>
<h3><strong>&#8220;it&#8217;s not really a killer app you care about, it is the killer existence that all of the technology and small applications taken together facilitate&#8221;</strong></h3>
<p><strong>Tish:</strong> While location based services are accepted now and people are understanding that it is something that opens up a new relationship to everything, we still haven&#8217;t found the experience that will get everyone holding up their mobile devices?</p>
<p><strong>Blair: Well that is actually the killer problem. Gregory Abowd is one of my colleagues who does ubiquitous computing research here at Tech. Way back when we started the Aware Home project (<a href="http://www.awarehome.gatech.edu/">Aware Home Research Institute at Georgia Tech</a>), when I first got here about ten years ago, there was always this question of what is the killer app. So Gregory commented in a meeting once that it&#8217;s not really a killer app you care about, it is the killer existence that all of the technology and small applications taken together facilitate. It is not that any one of these AR demos we see &#8211; whether it is seeing your photos in the world or whatever &#8211; is important. It&#8217;s that, taken together, there is enough of a benefit that you would use the whole environment.</strong></p>
<p><strong>In the original context we were talking about an instrumented home, but it is the same thing here with AR.</strong></p>
<p><strong>The problem with the mobile phone as an AR device is that problem of awareness. If I have a head mount on and I walk down the street and there is a bunch of probably-not-useful-but-potentially-useful information floating by me, that&#8217;s a good thing, because I may see something that is useful or makes me think of something else. But if I have to hold up my phone to see if something might be interesting nearby, I will never hold up my phone, because at any given time there is a high probability that there won&#8217;t be anything particularly important there. You might imagine you can get around this by using alerts or something like that, but then you overload whatever alert channel you use. For example, I forward maybe 5 or 6 people&#8217;s updates from Facebook to my phone &#8211; it started with my wife, a few friends, my brother &#8211; and the net result is I never read SMSes anymore, because when my phone buzzes I usually ignore it; it is probably just somebody&#8217;s random Facebook update. So if we start overloading channels like that with &#8220;oh there might be something useful here in the real world, if you pick up the phone and look through it you will see it &#8230; and I will buzz you,&#8221; people just start ignoring the buzzes.</strong></p>
<p><strong>So it is a very hard problem if you think about the kinds of applications that people always imagine with global AR &#8212; names over people&#8217;s heads and other random information floating in the world &#8212; until you have a head mount and all that information is around you all the time. That is when those sorts of applications will actually happen.</strong></p>
<p><strong>Tish:</strong> <a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> notes: <strong>&#8220;AR is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.&#8221;</strong> (see my interview with Robert, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it &#8216;OMG Finally&#8217; for Augmented Reality?&#8221;</a>). And I think the iPhone experience has laid the foundation for the increasing desire to experience the network wherever we are &#8211; and not be stuck behind a PC. We cannot perhaps do all we want to do yet. But even within the range of things we can do now, we are not even sure exactly what it is we want to do where yet, is it?</p>
<h3><strong>&#8220;imagine your iphone Facebook client supports AR and that all data on Facebook might be georeferenced &#8211; pictures, status updates etc&#8230;&#8230;.&#8221;</strong></h3>
<p><strong>Blair: Yes that is a huge problem. I have been lucky to be able to teach two fun classes this year that let the students and me start to explore some of the potential that handheld AR might bring. Last fall I taught a handheld AR game design class &#8212; coordinated with a class at the Savannah College of Art and Design&#8217;s Atlanta campus &#8212; and we had the students build a sequence of prototype handheld AR games, which was a lot of fun. This spring I taught a mixed reality/augmented reality design class with Jay Bolter (a professor in the School of Literature, Communication, and Culture here at GT). Jay and I have been teaching this class off and on for about 9 years; this semester we decided to say to the students &#8220;imagine your iphone Facebook client supports AR and that all data on Facebook might be georeferenced &#8211; pictures, status updates etc&#8230;&#8230;.&#8221; and have them do projects aimed at such an environment.</strong></p>
<p><strong>Tish: </strong>Not many of our favorite social media today have much sense of location, do they? But Flickr is utilizing geo-referenced pictures to create vernacular maps&#8230; The Shape of Alpha.</p>
<p><strong>Blair: Yes, that is because lots of cameras put geolocation data into the EXIF data, so they can extract it&#8230;</strong></p>
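<p><em>As a rough sketch of the EXIF extraction Blair mentions: cameras store GPS coordinates as degrees/minutes/seconds values plus a hemisphere letter, and converting them to decimal degrees is a few lines of arithmetic (the function name and the sample values below are illustrative, not from any particular library):</em></p>

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds plus an N/S/E/W
    hemisphere reference into signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # South and West hemispheres are negative in decimal notation.
    return -value if ref in ("S", "W") else value

# Illustrative values, roughly a photo geotagged near Georgia Tech.
lat = dms_to_decimal(33, 46, 34.0, "N")   # about  33.776
lon = dms_to_decimal(84, 23, 48.0, "W")   # about -84.397
```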
<p><strong>Some mobile Twitter clients, like the one I use on my iPhone, will let you add your location. But in general Facebook and other sites don&#8217;t have any notion of location. But if you look at all the things people do in Facebook, such as sending gifts and other games, it&#8217;s easy to imagine what these might look like with geo-reference data. So, the high level project for the class is the groups have to design experiences people might have using mobile AR Facebook. We told them to assume Facebook as it stands now, but add geolocation and AR to the client. The class boiled down to &#8220;What would you imagine people doing?&#8221; So it has been kind of fun.</strong></p>
<p><strong>And we are using Unity for the class too &#8211; the same infrastructure I am working on in my research linking mobile AR to persistent immersive mirror world type spaces &#8211; and we are having the students mock up what a mobile AR Facebook experience would be like.</strong></p>
<p><strong>Tish: </strong>Can you describe some of the ideas your class came up with that you think have potential? I know Ori mentioned that from the games class he liked <a href="http://www.youtube.com/watch?v=Rqcp8hngdBw&amp;feature=channel_page" target="_blank">Candy Wars.</a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/candywars-6.png"><img class="alignnone size-medium wp-image-3693" title="candywars-6" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/candywars-6-300x225.png" alt="candywars-6" width="300" height="225" /></a></p>
<p><em>Candy Wars</em></p>
<p><strong>Blair: In the end, they had a nice range of projects in the Spring class. One created tag clouds out of status messages over spaces, others looked at analogies to virtual pets and gift giving out in the world, one looked at leveraging geolocation to help with crowd-sourced cultural translation, and three groups did straight-up social games.</strong></p>
<p><strong>[See <a href="http://www.youtube.com/user/AELatGT" target="_blank">all of the projects from the handheld AR games class on YouTube here</a>]</strong></p>
<h3><strong>iPhone, Android, or </strong><strong>NVidia Tegra devkits or the Texas Instruments&#8217; OMAP3 devkits?</strong></h3>
<p><strong>Tish:</strong> Is anyone in the class working on Android?</p>
<p><strong>Blair: Nobody is using Android because no one in the class has the phones. We have AT&amp;T microcell infrastructure on campus. Some AT&amp;T people joke that we are better off than them, because we have a head office on campus so we can build applications in the network, which people even at AT&amp;T research can&#8217;t do. But because we have this infrastructure on campus, and a great relationship with AT&amp;T and the other sponsors, we have the ability to provision our own phones without having to pay for long-term contracts, which is vital for research and teaching.</strong></p>
<p><strong>Tish:</strong> So does this lock you into the iPhone?</p>
<p><strong>Blair: Well the G1 is of course not AT&amp;T, but it is GSM, so we could probably buy them unlocked and put them on our AT&amp;T network. But the students I work with are much more interested in the iPhone right now.</strong></p>
<p><strong>Tish:</strong> Is that because the iPhone has the market?</p>
<p><strong>Blair: For me the reason I am not interested in the G1 is because you can&#8217;t do AR on it &#8211; there is <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> and a few other apps, but it is all hideously slow. Worse, because the Java code isn&#8217;t compiled like it would be on the desktop, you can&#8217;t do computer vision with it, so you can&#8217;t do anything particularly interesting on the current commercial G1s. We could probably take the NVidia Tegra devkits or the Texas Instruments&#8217; OMAP3 devkits (both are chipsets for next gen phones &#8212; high end graphics, fast processing), and install Android on those, and we may actually do that yet. But it seems like a lot of work right now, for not much benefit.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/pastedgraphic.jpg"><img class="alignnone size-medium wp-image-3730" title="pastedgraphic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/pastedgraphic-300x166.jpg" alt="pastedgraphic" width="300" height="166" /></a><br />
</strong></p>
<p><em>Augmented Reality shooter game <strong>ARrrrr</strong> from Georgia Tech and SCAD Atlanta on the <strong>NVidia Tegra devkits</strong> &#8211; <a href="http://www.youtube.com/watch?v=cNu4CluFOcw" target="_blank">watch the demo on YouTube here</a>.</em></p>
<p><strong>Tish: </strong>Everyone seems very excited about iPhone OS 3.0 and the addition of a compass. A compass is pretty essential for AR, right?</p>
<p><strong>Blair: It is necessary if you can&#8217;t do other forms of outdoor tracking, but the problem is that the compass on the G1 isn&#8217;t very good, relatively speaking, and the iPhone one probably won&#8217;t be much better. It does not have very high accuracy, nor is it very fast (compared to, say, the high end 3D orientation sensors we use, from Intersense and MotionNode). As far as I can tell, it doesn&#8217;t even give full 3D orientation. I don&#8217;t have a G1 (although I have pre-ordered an iPhone 3Gs), but people have told me it only has absolute 2D orientation, so you can only line things up if you are careful. You can&#8217;t look around arbitrarily&#8230;</strong></p>
<p><strong>Tish: </strong>You can&#8217;t sweep your phone?</p>
<p><strong>Blair: You can look left and right, but if it doesn&#8217;t have full 3D orientation, you can&#8217;t go up and down. You can&#8217;t tilt it in weird directions. It is not fast in the way you would want in order to look around quickly. So it is a nice demo. And it is good for what the Android people use it for, which is to let you browse Google Street View by looking around, which is actually really useful.</strong></p>
<p><strong>I think there are lots of really useful things you can do with such a compass.</strong></p>
<p><strong>And, it is clear that a compass is a necessary feature if we want to do AR. It&#8217;s just not sufficient.</strong></p>
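<p><em>To make concrete what a compass (plus GPS) buys you for sensor-only AR of the kind discussed here, below is a minimal sketch of placing a geo-referenced label in a phone&#8217;s camera view: compute the bearing to the point of interest and map its offset from the compass heading onto the screen. The function names, field of view, and screen width are illustrative assumptions, not any particular SDK&#8217;s API:</em></p>

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def screen_x(poi_bearing, heading, fov_deg=60.0, width_px=320):
    """Horizontal pixel position of a point of interest in the camera
    view, or None if it falls outside the field of view."""
    # Signed angular offset in (-180, 180] relative to where we face.
    delta = (poi_bearing - heading + 180.0) % 360.0 - 180.0
    if abs(delta) > fov_deg / 2:
        return None
    return int((delta / fov_deg + 0.5) * width_px)
```

<p><em>A 2D-only compass gives just the heading used here; full 3D orientation (pitch and roll) is what you would additionally need to place the label vertically as the phone tilts.</em></p>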
<h3><strong>Outdoor Tracking and Markerless AR</strong></h3>
<p><strong>Tish:</strong> Isn&#8217;t it essential for markerless AR? I guess not &#8211; I just saw this post about <a href="http://artimes.rouli.net/2009/04/srengine-in-english.html" target="_blank">SREngine on Augmented Times</a>!</p>
<p>This wasn&#8217;t up when we spoke so perhaps you have some comments about what it brings to the table?</p>
<p><strong>Blair: Maybe. The folks at Nokia are working on outdoor tracking; they demoed some stuff at ISMAR last year on the N95 handsets that is all image based. We are trying to do some work with them; one of my students is working on it. And probably Microsoft is going to do more on this as well; they had a video up showing that they are also working on vision based techniques. If you give the phone the equivalent of those panoramic Google Street View images (assuming they are up-to-date) and you are standing at the right place, you don&#8217;t really need a compass; you can figure out which way you are looking by looking at the camera video. Ulrich Neumann (USC) did some work on tracking from panoramas years ago; I don&#8217;t know what ever became of it.</strong></p>
<p><strong>Regarding SREngine, that project appears to be a pretty simple first step, but is probably just a demo at this point, and limitations like &#8220;only works on static scenes&#8221; and &#8220;doesn&#8217;t work for simple scenes&#8221; mean it&#8217;s probably extracting some simple features out of the image and then matching those to some database. The trick would be getting this to work on a large scale, where the world changes a lot. It&#8217;s not obvious how to get there.</strong></p>
<p><strong>Tish:</strong> So forget RFID for AR&#8230;</p>
<p><strong>Blair: RFID is not really useful.</strong></p>
<p><strong>Tish:</strong> not at all?</p>
<p><strong>Blair: RFID is useful for telling you what things are near you. The problem is it doesn&#8217;t give you any directional information &#8211; it just tells you you&#8217;re in range of the tag. So you can use it to tell you when you are near a certain product, for example. So it is useful in terms of telling you what thing you are near, and then you can load up a vision system or something else that will recognize that thing.</strong></p>
<p><strong>In that way, it could be useful as a good starting point.</strong></p>
<p><strong>Similarly for computer vision, the compass and the GPS are very useful for giving you an initial guess at what you may be looking at, which can then speed up the rest of the process. But computer vision by itself will not be a complete solution, because if I have my panoramic Google Street View (or whatever image database I use for tracking) and you are standing between me and the building &#8211; I am not going to see what I expect to see, I am going to see you.</strong></p>
<p><strong>So I think it is all going to be part of one big package &#8211; you are going to see accelerometers, digital compasses, and GPS, and then combine that with computer vision and other sensors, and then maybe we are going to start getting the things that we have always dreamed about. I like to show <a href="http://mi.eng.cam.ac.uk/~gr281/outdoortracking.html" target="_blank">this video</a> from the U. of Cambridge (work done by Gerhard Reitmayr and Tom Drummond) of an outdoor tracking demo, because it gives a sense of what will be possible. Techniques like this will be an ingredient in the future of things. It becomes especially interesting when you have these highly detailed mirror worlds. It is sort of one of those chicken and egg problems: if I have a highly detailed model of the world, then techniques like theirs can be used to track. But that mirror world needs to be accurate or you can&#8217;t use it for tracking, and why would you create the mirror world if you couldn&#8217;t track?</strong></p>
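<p><em>The &#8220;one big package&#8221; Blair describes is, at its simplest, sensor fusion. One common lightweight approach (an illustrative sketch, not what any particular system uses) is a complementary filter: trust the gyro over short intervals, where it is fast but drifts, and an absolute sensor like the compass over long intervals, where it is slow and noisy but doesn&#8217;t drift:</em></p>

```python
def fuse_heading(prev_heading, gyro_rate, compass_heading, dt, alpha=0.98):
    """Blend an integrated gyro rate with an absolute compass reading.
    alpha close to 1.0 means 'mostly trust the gyro at each step'."""
    predicted = (prev_heading + gyro_rate * dt) % 360.0
    # Wrap the correction so 359 degrees and 1 degree count as 2 apart.
    error = (compass_heading - predicted + 180.0) % 360.0 - 180.0
    return (predicted + (1.0 - alpha) * error) % 360.0
```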
<p><strong>Tish:</strong> I noticed in your comment to <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;my interview with Robert Rice&#8221;</a> that you said you thought it was important not to collapse AR into ubicomp &#8211; &#8220;forgetting what originally inspired us about AR&#8221; is, I think, if I remember correctly, the suggestion you made. But aren&#8217;t ubiquitous computing and AR basically coextensive?</p>
<p>The <a href="http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/" target="_blank">vision of ubicomp Mike Kuniavsky describes</a> &#8211; &#8220;sharing data through open APIs and the promise of embedded information processing and networking distributed through the environment&#8221; &#8211; demonstrates how much can be done with very little processing power. In its most immersive form, augmented reality requires a lot of processing power. I think we have all become very conscious about trying to minimize levels of consumption. Can you explain why you think people shouldn&#8217;t see AR as the Hummer (energy squandering indulgence) of Ubiquitous Computing?</p>
<p><strong>Blair: I think there will be a hierarchy of interfaces. You are going to have the rich Rainbow&#8217;s End like experience &#8211; you are totally submerged in a mixed environment &#8211; if you have a head mount on (it&#8217;s not going to be Rainbow&#8217;s End for a while), but if you don&#8217;t have the headmount on, that information might be available to you in other ways, whether it is a 3D overlay using your handheld or just a 2D mashup with Google maps. But there will be some circumstances and people who will want to get the compelling experience you can only get with the headmount.</strong></p>
<p><strong>Tish:</strong> Are you doing any research on how all these hierarchies of experiences will fit together &#8211; what aspects of this are you looking at?</p>
<p><strong>Blair: The thing that really needs to happen is you need to have this backend architecture that allows you to collect your data from different sources and aggregate it, much like the web. Right now Google Earth and Microsoft&#8217;s Virtual Earth are much like the old pre-web hypertext systems that were all centralized. What we really need is the web equivalent, where Georgia Tech can publish their building models and IBM can publish their building models and their campus models, and your client can aggregate them &#8211; as opposed to Microsoft or IBM putting their building models into Google Earth and then you somehow getting them out with Google&#8217;s Google Earth browser. That&#8217;s just not going to fly.</strong></p>
<p><strong>Tish:</strong> So what does it take, then, to get us to this backend architecture? Because I&#8217;m in total agreement.</p>
<p><strong>Blair: The nice thing about augmented reality versus virtual reality is that you don&#8217;t need everything modeled. You can do interesting AR apps like <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> with absolutely no world model.</strong></p>
<p><strong>Tish:</strong> So that means we can start with what we have &#8211; utilize cloud services without a full-blown backend architecture?</p>
<p><strong>Blair: It may very well be that Google Earth and MS Virtual Earth act as a portal, because people go and build models and link them with KML, and they can see them in Google Earth but they can also download the KMLs through some other channel. So it may be that those things end up being something that feeds some of this along. Then people start seeing a benefit to having these highly accurate models, so then you start integrating the Microsoft Photosynth stuff and leveraging photographs to generate models.</strong></p>
<p><strong>It&#8217;s just that keeping up with it and building it in real time is the challenge. A lot of folks think it will be tourist applications, where there are models of Times Square and models of Central Park and models of Notre Dame and the big square around that area in Paris and along the river and so on, or models of Italian and Greek history sites &#8211; the virtual Rome. As those things start happening and people start building onto the edges, and when Microsoft Photosynth and similar technologies become more pervasive, you can start building the models of the world in a semi-automated way from photographs and more structured, intentional drive-bys and so on. So I think it&#8217;ll just sort of happen. And it works as long as there&#8217;s a way to have the equivalent of Mosaic for AR &#8211; the early web browser &#8211; something that allows you to aggregate all these things. It&#8217;s not going to be a Wikitude. It&#8217;s not going to be this thing that lets you get a certain kind of data from a specific source; rather, it&#8217;s the browser that allows you to link through into these data sources.</strong></p>
<p><strong>So it&#8217;s that end that interests me. It&#8217;s questions like: what is the user experience? How do we create an interface that allows us to layer all these different kinds of information together such that I can use it for all my things? I imagine that I open up my future iPhone and I look through it. The background of the iPhone, my screen, is just the camera, and it&#8217;s always AR.</strong></p>
<p><strong>I want the camera on my phone to always be on, so it&#8217;s not just that when I hold it a certain way it switches to camera mode, but literally it&#8217;s always in video mode so whenever there&#8217;s an AR thing it&#8217;s just there in the background.</strong></p>
<p><strong>When we can do that, I can have little alerts, so when I have my phone open I can look around and see them independent of the buttons and things that I&#8217;m tapping and pushing to use the phone. That&#8217;ll be a really different kind of experience.</strong></p>
<p><strong>Of course, it is not known yet if the next-gen iPhone will have an open video API. And of course, the current camera is pretty low quality, so why would they give it an open API until they put in a better camera? I am not expecting anything one way or the other until the 3GS comes out and people start using it.</strong></p>
<p><strong>But there are many things about the iPhone 3.0 OS that are hugely important, like the discovery API that allows people to play games with other people nearby, that don&#8217;t have much to do with AR.</strong></p>
<p><strong>Tish:</strong> You have an iPhone AR virtual pet application, ARf.</p>
<p><a href="http://www.macrumors.com/2009/04/08/video-in-and-magnetometers-could-introduce-interesting-iphone-app-possibilites/" target="_blank">Macrumors wrote it up</a> and suggested that the next-gen iPhone will have a compass and an open video API. What are your plans for ARf?</p>
<p><strong>Blair: ARf is just a demo right now. I know what we&#8217;d like to do with it, but it would require tons of work; imagine what it would take to do a multiplayer, social version of Nintendogs. It&#8217;s not clear what we&#8217;d really learn by doing that, but there are lots of other game ideas we have that we want to explore.</strong></p>
<p><strong>Tish:</strong> I think it was on Twitter where Tim O&#8217;Reilly said, &#8220;saying everything must have an RFID tag is like saying we can&#8217;t recognize each other unless we wear name tags. Look at what&#8217;s happening with speech recognition, image recognition et al. and tell me you really think we need embedded metadata.&#8221; What would you say to that?</p>
<p><strong>Blair: I think that whatever extra data is there will be used. So if we put machine readable labels on some objects then they&#8217;ll be used if they make the identification and tracking problem easier. But it&#8217;s pretty clear that people are already working on tracking and so on.</strong></p>
<p><strong>A lot of these mobile AR apps are clearly putting ideas in people&#8217;s minds about things that won&#8217;t really be doable in the near future. Like being able to look down the aisle of the store and have it recognize all of the products. Given the distances and complexity of the scene, the number of pixels devoted to each of those objects, and so on &#8211; you just can&#8217;t recognize things in that context. But if I&#8217;m standing in front of a small set of objects, or looking at one thing, or I&#8217;m standing in front of a building &#8211; or if I&#8217;m in the store, imagine an enhanced location API that can tell me within a few feet where I am, combined with some use of the discovery API that allows the store to tell your device you&#8217;re in the toothpaste section. Now you only have to look for different brands of toothpaste. So now you can recognize the big letters &#8220;Crest&#8221; or whatever. It&#8217;s all about constraining the problem.</strong></p>
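<p>Blair&#8217;s &#8220;constraining the problem&#8221; idea can be sketched in a few lines. This is a hypothetical illustration &#8211; the section names, product lists, and the <code>candidates_for</code> helper are invented for the sketch, not any real store catalog or discovery API:</p>

```python
# Sketch: use location context to shrink the recognition candidate set.
# Without context, the vision system must match a camera frame against
# every known product; with a "you are in the toothpaste section" hint
# from a discovery API, only a handful of brands need to be considered.

KNOWN_PRODUCTS = {
    "toothpaste": ["Crest", "Colgate", "Sensodyne"],
    "cereal": ["Cheerios", "Corn Flakes"],
}

def candidates_for(section):
    """Return the brand labels the recognizer must distinguish, given the
    store section reported by a (hypothetical) discovery API; fall back to
    the full catalog when no context is available."""
    if section in KNOWN_PRODUCTS:
        return KNOWN_PRODUCTS[section]
    return [brand for brands in KNOWN_PRODUCTS.values() for brand in brands]

# With context: 3 candidates. Without: the whole catalog (5 here, but
# thousands in a real store) - a far harder vision problem.
assert candidates_for("toothpaste") == ["Crest", "Colgate", "Sensodyne"]
assert len(candidates_for("unknown aisle")) == 5
```

<p>The point is not the data structure but the ratio: every bit of context (location, orientation, a world model) divides the search space the recognizer has to cover.</p>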
<p><strong>That&#8217;s why I like that particular piece of Drummond&#8217;s work, the tracking web site I mentioned above. The general tracking problem of looking around and recognizing objects and tracking is still impossible. But if I know roughly what direction I&#8217;m looking in and I have a good estimate of my position, and I have models of what I should be seeing when I look in that direction, then it becomes a tractable problem. And so it&#8217;s not that a compass and a GPS are 100% necessary. But if you have them it certainly makes things possible that you wouldn&#8217;t otherwise be able to do.</strong></p>
<p><strong>Imagine, for example, if there&#8217;s a new version of GPS &#8211; I just noticed that some of the new satellites going up have this new L5 channel. There are the L1 &amp; L2 signals that the military and civilians use, and they added this civilian L5 signal, which should make GPS more accurate. I haven&#8217;t found anything online that says how much more accurate.</strong></p>
<p><strong>But someday, hopefully, all GPS will get to be the quality of survey-grade GPS. Right now, if you get an RTK GPS from one of these companies that make the survey-grade GPS systems, they give you position estimates in the range of two centimeters, and update 10 to 20 times a second. When you have that kind of positional accuracy combined with the kind of orientational accuracy you get from the orientation sensors we use in the lab from Intersense and MotionNode, everything is easier because you&#8217;ve pretty much got absolute position. You put that into a phone and now, when I look up, it&#8217;s still not perfectly aligned, because there will still be errors (especially in orientation, since the compasses are affected by metal and other magnetic noise). But it does mean that if you and I are standing 5 feet apart from each other and look at each other, I can pretty much put a little smiley face above your head. Whereas now, with GPS, if I look at you and we&#8217;re 5 feet apart, our GPSes might think we&#8217;re on opposite sides of each other, because they&#8217;re only accurate to two to five meters.</strong></p>
<p><strong>And that&#8217;s depending on the time of day and the weather!</strong></p>
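<p>The arithmetic behind this contrast is easy to check. A minimal sketch &#8211; the 1.5 m spacing and the 3 m error figure are illustrative assumptions drawn from the &#8220;5 feet&#8221; and &#8220;two to five meters&#8221; numbers above:</p>

```python
import math

def label_angle_error_deg(position_error_m, distance_m):
    # Worst-case angular offset of an AR label caused by a position error
    # of position_error_m, as seen from distance_m away.
    return math.degrees(math.atan2(position_error_m, distance_m))

d = 1.5  # roughly 5 feet between two people

consumer = label_angle_error_deg(3.0, d)   # typical 2-5 m consumer GPS error
rtk = label_angle_error_deg(0.02, d)       # ~2 cm survey-grade RTK error

# Consumer GPS: the error exceeds the distance between the two people, so
# the smiley face can land behind you. RTK: well under a degree, so the
# label stays pinned to your head (orientation error then dominates).
assert consumer > 60
assert rtk < 1.0
```

<p>At 1.5 m separation, a 3 m position error corresponds to an angular offset of over 60 degrees, while a 2 cm RTK error corresponds to under one degree &#8211; which is why survey-grade positioning changes what is possible.</p>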
<p><strong>Putting RFID tags everywhere is easy; the problem is the readers &#8211; they currently require lots of power and they have a limited range. Sprinkling RFID tags everywhere is fine, but you have to be able to activate those tags and read back the signal. In certain contexts it works.</strong></p>
<p><strong>Tish:</strong> And one final question! What do you think can be done about beginning to think about standards for AR? Is there a meaningful discussion going on yet? Thomas Wrobel left this comment on my blog recently, and I was wondering what your position was on some of the ideas he raises.</p>
<p>Wrobel wrote, <em>&#8220;The AR has to come to the users; they can&#8217;t keep needing to download unique bits of software for every bit of content! We need an AR browsing standard that lets users log into and out of channels (like IRC) and toggle them as layers on their visual view (like Photoshop). Channels need to be public or private, hosted online (making them shared spaces) or offline (private spaces). They need to be able to be both open (chat channel) or closed (city map channel) as needed. Created by anyone, anywhere. Really, IRC itself provides a great starting point. Most data doesn&#8217;t need to be persistent, after all. I look forward to seeing the world through new eyes. I only hope I will be toggling layers rather than alt-tabbing and only seeing one &#8216;reality addition&#8217; at a time.&#8221;</em></p>
<p><strong>Blair: I agree with him, in principle. But I&#8217;m not sure there&#8217;s a point yet. It can&#8217;t hurt to try, of course, from a research perspective, and I&#8217;m interested in the experience such an infrastructure would enable (as we&#8217;ve talked about already).</strong></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
	</channel>
</rss>
