<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; AR</title>
	<atom:link href="http://www.ugotrade.com/tag/ar/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Augmented Awareness &amp; Reality Games, ARE2012</title>
		<link>http://www.ugotrade.com/2012/05/09/augmented-awareness-reality-games-are2012/</link>
		<comments>http://www.ugotrade.com/2012/05/09/augmented-awareness-reality-games-are2012/#comments</comments>
		<pubDate>Wed, 09 May 2012 18:12:41 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Ambient Findability]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Artificial General Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Data]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[GeoFencing]]></category>
		<category><![CDATA[GeoMessaging]]></category>
		<category><![CDATA[gestural interface]]></category>
		<category><![CDATA[Hadoop]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[ipad]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[Linked Data]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[New Interfaces]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[ARE2012]]></category>
		<category><![CDATA[Augmented Awareness]]></category>
		<category><![CDATA[augmented experiences]]></category>
		<category><![CDATA[Cold Reading]]></category>
		<category><![CDATA[CosPlay]]></category>
		<category><![CDATA[Dimensions App]]></category>
		<category><![CDATA[facial recognition]]></category>
		<category><![CDATA[Game Design]]></category>
		<category><![CDATA[Gaming Reality]]></category>
		<category><![CDATA[global possibility space]]></category>
		<category><![CDATA[Google Project Glass]]></category>
		<category><![CDATA[Improv and Game Design]]></category>
		<category><![CDATA[Integrated Games]]></category>
		<category><![CDATA[Life Based Games]]></category>
		<category><![CDATA[Life Games]]></category>
		<category><![CDATA[location based games]]></category>
		<category><![CDATA[New Aesthetic]]></category>
		<category><![CDATA[new aesthetic of artificial intelligence]]></category>
		<category><![CDATA[Qualified Self]]></category>
		<category><![CDATA[quantified self]]></category>
		<category><![CDATA[reality games]]></category>
		<category><![CDATA[social shopping]]></category>
		<category><![CDATA[The Future of AR eyewear]]></category>
		<category><![CDATA[time-based games]]></category>
		<category><![CDATA[TimeHop]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[Weavrs]]></category>
		<category><![CDATA[Where 2012]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=6527</guid>
		<description><![CDATA[Augmented Awareness &#38; Reality Games, ARE2012 View more PowerPoint from Tish Shute ARE2012 is being live streamed this year, and the wrap-up fireside chat between Bruce Sterling and Daniel Suarez and a surprise stupid fun grand finale is still to come. We have a live stream this year so you can see for [&#8230;]]]></description>
				<content:encoded><![CDATA[<div style="width:425px" id="__ss_12853433"> <strong style="display:block;margin:12px 0 4px"><a href="http://www.slideshare.net/TishShute/augmented-awareness-reality-games" title="Augmented Awareness &amp; Reality Games, ARE2012" target="_blank">Augmented Awareness &amp; Reality Games, ARE2012</a></strong> <iframe src="http://www.slideshare.net/slideshow/embed_code/12853433" width="425" height="355" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe>
<div style="padding:5px 0 12px"> View more <a href="http://www.slideshare.net/thecroaker/death-by-powerpoint" target="_blank">PowerPoint</a> from <a href="http://www.slideshare.net/TishShute" target="_blank">Tish Shute</a> </div>
</div>
<p><a href="http://augmentedrealityevent.com/">ARE2012</a> is being live streamed this year, and the wrap-up fireside chat between Bruce Sterling and Daniel Suarez and a surprise stupid fun grand finale is still to come. We have <a href="http://augmentedrealityevent.com/stream/index.2.php">a live stream this year</a> so you can see for yourself! You can also catch up on any sessions you have missed, including the video of my talk, Augmented Awareness and Reality Games. My slides are above and my speaker notes are below, enjoy!</p>
<p>1. Hi, my name is Tish Shute. Currently I am working with Will Wright and Stupid Fun Club on a new genre of personally aware mobile games that move away from the idea that games are a way to escape reality. If you want to know more about what I mean by Reality Architect, please feel free to look up my TEDx Silicon Alley talk, <a href="http://www.youtube.com/watch?v=pBRa4gJPLHo">“On Becoming a Reality Architect…”</a>.</p>
<p>2. As Will puts it, “games are getting more and more personal to the point that our actual lives are becoming the most interesting gaming platform.” Personally Aware Games, Life Based Gaming or Integrated Games are expressions that are just beginning to emerge to describe this idea that our lives are the most interesting gaming platform.</p>
<p>3. <a href="http://www.ugotrade.com/2012/04/25/where-2012-will-wright-gaming-reality/">Will Wright’s talk</a> at Where 2012 is a must-see. He pointed to a turning point for mobile gaming: a shift for games from being about simulating reality to being about parsing reality.</p>
<p>4. The ghosts of AR past. Bruce Sterling at ARE2010 mentioned that AR eyewear was haunted by the spectre of AR’s Gothic stepsister, virtual reality, while Jesse Schell probed AR’s aspirations as the ubiquitous all-seeing data eye – the man with the x-ray eyes. As Jesse put it, “You guys are going to put it together…and then everybody is going to be like, oh my god we are freaking naked, all this information about me is out there…I had security through obscurity, but not anymore…”</p>
<p>5. Yes, it seems we have put it all together, although the ubiquitous all-seeing data eye – our x-ray eyes – has turned out to be carried around in our pockets, and integration into our clothes and eyewear is not yet ubiquitous. But, for the moment, we are looking at the most intimate aspects of our lives only as an opportunity for optimization and efficiency (though there are some interesting apps and products emerging – try out the Heart Rate app: hold your finger up against the camera and you will get a pretty accurate reading). But as the explorations of makers, hackers and self trackers move out into consumer culture, the quantified self is ripe for new forms of expression (http://www.electricfoxy.com/projects/modwells/). The term “gamification” has been worn out already. We sense its shallow inadequacy. So what’s next?</p>
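<p>The finger-on-the-camera trick is, as I understand it, simple photoplethysmography: each pulse slightly brightens and darkens the fingertip, so counting peaks in the per-frame brightness over a known time window gives beats per minute. A minimal sketch of that idea – the signal here is synthetic, and any real app's processing is surely more robust:</p>

```python
import math

def estimate_bpm(brightness, fps):
    """Estimate pulse rate from a per-frame brightness signal.

    brightness: list of average fingertip-frame intensities
    fps: frames captured per second
    """
    # A frame counts as a peak when it is brighter than both neighbours.
    peaks = sum(
        1
        for i in range(1, len(brightness) - 1)
        if brightness[i - 1] < brightness[i] > brightness[i + 1]
    )
    seconds = len(brightness) / fps
    return peaks * 60 / seconds

# Synthetic 10-second signal at 30 fps with a 1.2 Hz (72 bpm) pulse.
signal = [math.sin(2 * math.pi * 1.2 * t / 30) for t in range(300)]
print(round(estimate_bpm(signal, 30)))  # → 72
```

<p>A real implementation would band-pass filter the signal first; raw camera frames are far noisier than this clean sine wave.</p>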
<p>6. There is barely a trace of AR’s Gothic stepsister VR in the Google Project Glass pitch, which is super simple and seems to be aimed at optimizing Pinterest-like social shopping experiences by taking photos and videos from your direct eye-line and disseminating them through Google+. No mention of mapping, tracking and registration, or of how they are working the hands-free part yet – all I’ve seen for input is nods so far. Is eye movement tracking up next, or what? Thrun was pretty down on the AR ghosts – the man-with-the-x-ray-eyes stuff (I’m already feeling nostalgic for classic AR!). But seeing with shared eyes is what makes AR technology super interesting, as Jesse Schell pointed out at ARE2010: “The internet allowed us to think with shared memory…Augmented Reality will allow us to see with shared eyes.” Applying our design chops to this possibility space seems like a pretty good project to me. Bruce has always said that AR should be more about creating experiences than the technology.</p>
<p>7. And we do need new forms of expression in our digital culture, where technologies of seeing are primarily technologies of watching, used for power and control.</p>
<p>8. If you haven’t already drunk at the New Aesthetic fountain you have some googling to do after this session – start with James Bridle’s Tumblr and Bruce Sterling’s essay, http://www.wired.com/beyond_the_beyond/2012/04/ perhaps. James Bridle might have already closed the New Aesthetic tumblr, but this collection of images is a provocation to explore the possibilities of feedback loops between people and machines – a reflexive augmented awareness where we play with modes of digital seeing. I think AR and digital seeing need a New Aesthetic more than most technologies, because augmentation implies that we have an idea of what is aesthetically valid at a given time and place, and that we have a position on the difference between augmented and degraded reality, and between machinomorphic and anthropomorphic modes of perception. Howie Woo’s <a href="http://woowork.blogspot.ca/2012/03/in-yo-face-facial-recognition.html">“in yo face facial recognition”</a> project (pictured in my opening slide too) uses crochet + cunning to transform facial recognition into a reality game.</p>
<p>9. Reality Games can give us new opportunities to explore the free play in the systems of our lives. Ayse Birsel, a friend and brilliant designer from New York City, has been showing people, in a series of innovative workshops, how to bring powerful design tools to their lives – to design not necessarily a better life but at least an original life, beginning with a method of deconstruction, reconstruction, and visualization. The goal of an original life, rather than an optimized, more efficient life, challenges AR and reality game designers to explore the possibility space of our lives.</p>
<p>10. We are already parsing our lives through powerful digital filters. Foursquare has shown us the power of the fundamental change to maps that has at its center the notion that “you are here.” See <a href="http://www.youtube.com/watch?v=Tzlv69lGrtQ">Adam Greenfield’s Where 2012</a> talk for a deeper understanding of the significance of this change to mapping. While location is a powerful filter to parse what Will calls the GPS – the “global possibility space” – of our lives, it is not the only one (http://dornob.com/you-are-here-3-real-life-works-of-digital-map-inspired-art/).</p>
<p>11. Time is another powerful filter for our lives and games. Jonathan Blow’s Braid explores how time can be manipulated in different game worlds.</p>
<p>12. Cosplay (or costume role playing) is different from earlier incarnations of, say, renaissance fairs or civil war reenactments in its integration into the present. In Tokyo a commuting hub turns into a cosplay mecca every Sunday and, as AT Wilson puts it, “turns a non-place to a place.”</p>
<p>13. “[TimeHop] sends users a daily e-mail reminder of what they did a year ago, and it does so by retracing the subscriber’s digital footsteps on Facebook, Twitter, Instagram and Foursquare.” (http://www.nytimes.com/2012/01/08/fashion/timehop-a-new-online-service-tells-you-what-you-were-doing-a-year-ago.html)</p>
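<p>The mechanic described there – pull your timestamped posts from each service and keep only those from exactly one year ago – is simple to sketch. The posts and dates below are invented for illustration:</p>

```python
from datetime import date

def one_year_ago(posts, today):
    """Return posts written exactly one year before `today`.

    posts: list of (date, text) pairs pulled from the user's feeds.
    (Ignores the Feb 29 edge case for brevity.)
    """
    target = today.replace(year=today.year - 1)
    return [text for when, text in posts if when == target]

posts = [
    (date(2011, 5, 9), "Checked in at Moscone West"),
    (date(2011, 5, 10), "Flight home"),
    (date(2012, 5, 9), "ARE2012 keynote"),
]
print(one_year_ago(posts, date(2012, 5, 9)))  # → ['Checked in at Moscone West']
```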
<p>14. Reality Games have, of course, predated a machine-readable world. This book on Cold Reading by Ian Rowland (http://www.thecoldreadingbook.com/) parses the rules of the game that enable “psychics” and “fortune tellers” to deploy techniques that border on actual mind reading. Life’s players – “pick up artists” &#038; “psychics” and “con-artists” – are master gamers of the intimate social dynamics of life, but NLP and semantic tech are bringing digital seeing to the kind of intimate social dynamics that are the domain of cold reading.</p>
<p>15. Status games are a core dynamic of life. The great sociologist Erving Goffman devoted his career to analyzing the face-to-face relations of everyday life. Goffman described everyday social life as a strategic game that could be understood through the metaphors of the stage: front stage and back stage. But, as we parse reality, digital hierarchies and the abstractions of data viz begin to control the information flow and create a new stage for status games that demand a different kind of awareness of what is back stage and what is front stage in our social lives.</p>
<p>16. We are entering a new era of social intelligence where people and algorithms are interacting in interesting new ways. OKCupid has been getting a lot of attention for offering social intelligence that can help us play better in our dating lives. Did you know your profile narratives can reveal whether you like rough or gentle sex?</p>
<p>17. We are also beginning to see an interesting New Aesthetic for Artificial Intelligence – the expressive interaction between algorithms and people. Siri, for example, is no cold reader, but she does have a more developed character than Google voice.<br />
 Jeff Kramer has <a href="http://www.realityaugmentedblog.com/">an excellent post on Weavrs</a> – personality-based social web robots. I like Weavrs a lot because they are out there at the edge with their exploration of the expressive power of bots. Bots shape our algorithmic world from call centers to Wall Street, but we have barely begun to explore their expressive potential.<br />
Weavrs exist on their own. You can ask them questions, but you can’t tell them, for example, “I like this, post more like this.” Weavrs are social web bots that evolve and grow without your direct hand guiding them. But as <a href="http://www.realityaugmentedblog.com/2012/05/life-in-the-weavrs-web/">Jeff Kramer notes in his interesting post</a> on Reality Augmented,  </p>
<p>“it’s also obvious that having more full featured persona creation/control options is going to be a big part of the future of social bots too.”</p>
<p>18. The eruption of the digital into the physical is a catch phrase for The New Aesthetic, and <a href="http://dimensions.rjdj.me/">RjDj’s Dimensions app</a> and awesome Inception app are, I think, exemplary explorations of new aesthetic dimensions for Sonic AR. The Dimensions app pulls data from your surroundings – including movement, time of day and microphone input – to give you a very personal experience that adjusts to and transforms your environment and actions.</p>
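<p>To make the "adjusts to your surroundings" idea concrete, here is a toy sketch of context-reactive layering in the spirit of what I describe above – the input signals (hour, movement, mic level) match the post, but the layer names and thresholds are invented, not RjDj's actual design:</p>

```python
def choose_layers(hour, moving, mic_level):
    """Pick sound layers from simple context signals: time of day,
    whether the listener is moving, and microphone loudness (0..1)."""
    layers = ["ambient bed"]
    # Time of day swaps the tonal layer.
    layers.append("night drone" if hour >= 20 or hour < 6 else "day shimmer")
    # Movement adds a rhythmic layer.
    if moving:
        layers.append("footstep pulse")
    # A loud environment feeds itself back into the mix.
    if mic_level > 0.5:
        layers.append("echo of surroundings")
    return layers

print(choose_layers(hour=22, moving=True, mic_level=0.7))
# → ['ambient bed', 'night drone', 'footstep pulse', 'echo of surroundings']
```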
<p>19. Improv practitioners are early explorers of Reality Games. The Life Game is one of Keith Johnstone’s projects, and his books on Improv have been a great source of inspiration for RPG players and game designers. A CMU student visiting Stupid Fun Club once asked Will what he should do to be a better game designer, and Will said: study Improv!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2012/05/09/augmented-awareness-reality-games-are2012/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Real Time Big Data at Strata 2011: Ambient Findability, Social Search, GeoMessaging, Augmented Data, and New Interfaces</title>
		<link>http://www.ugotrade.com/2011/01/20/real-time-big-data-at-strata-2011-ambient-findability-geomessaging-augmented-data-and-new-interfaces/</link>
		<comments>http://www.ugotrade.com/2011/01/20/real-time-big-data-at-strata-2011-ambient-findability-geomessaging-augmented-data-and-new-interfaces/#comments</comments>
		<pubDate>Thu, 20 Jan 2011 22:48:12 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[New Interfaces]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Alistair Croll]]></category>
		<category><![CDATA[Ambient Findability]]></category>
		<category><![CDATA[Android Tasker]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[attention data]]></category>
		<category><![CDATA[augmented data]]></category>
		<category><![CDATA[augmented reality ecosystem]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[BackType]]></category>
		<category><![CDATA[big data]]></category>
		<category><![CDATA[Big data and new interfaces]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Cassandra]]></category>
		<category><![CDATA[Collecta]]></category>
		<category><![CDATA[content-shifting]]></category>
		<category><![CDATA[curating big data]]></category>
		<category><![CDATA[Data Engineering]]></category>
		<category><![CDATA[data privacy]]></category>
		<category><![CDATA[digital divide]]></category>
		<category><![CDATA[distributed computing]]></category>
		<category><![CDATA[Edd Dumbill]]></category>
		<category><![CDATA[Factual]]></category>
		<category><![CDATA[future of work]]></category>
		<category><![CDATA[geo]]></category>
		<category><![CDATA[geo social aware discovery]]></category>
		<category><![CDATA[geo-search]]></category>
		<category><![CDATA[geodata]]></category>
		<category><![CDATA[geolocation]]></category>
		<category><![CDATA[Geoloqi]]></category>
		<category><![CDATA[GeoMessaging]]></category>
		<category><![CDATA[geosearch]]></category>
		<category><![CDATA[gestural interfaces]]></category>
		<category><![CDATA[Gov2.0.]]></category>
		<category><![CDATA[HBase]]></category>
		<category><![CDATA[Hive]]></category>
		<category><![CDATA[key data trends]]></category>
		<category><![CDATA[linked data]]></category>
		<category><![CDATA[location data]]></category>
		<category><![CDATA[Maneki Neko]]></category>
		<category><![CDATA[MapReduce]]></category>
		<category><![CDATA[mapufacture]]></category>
		<category><![CDATA[Mesos]]></category>
		<category><![CDATA[Michal Avny]]></category>
		<category><![CDATA[mobile local interactions]]></category>
		<category><![CDATA[MongoDB]]></category>
		<category><![CDATA[My6sense]]></category>
		<category><![CDATA[neogeography]]></category>
		<category><![CDATA[NoSQL]]></category>
		<category><![CDATA[OpenGeo]]></category>
		<category><![CDATA[OpenGov]]></category>
		<category><![CDATA[P2P cloud computing]]></category>
		<category><![CDATA[pervasive computing]]></category>
		<category><![CDATA[Q&A]]></category>
		<category><![CDATA[Q&A ecosystems]]></category>
		<category><![CDATA[Q&A platforms]]></category>
		<category><![CDATA[Q&A The New Search Insurgents]]></category>
		<category><![CDATA[Quora]]></category>
		<category><![CDATA[RabbitMQ]]></category>
		<category><![CDATA[real time data analytics]]></category>
		<category><![CDATA[real time data in mobile development]]></category>
		<category><![CDATA[real time search]]></category>
		<category><![CDATA[real time search engines]]></category>
		<category><![CDATA[real time social discovery]]></category>
		<category><![CDATA[semantic web]]></category>
		<category><![CDATA[Simple Geo]]></category>
		<category><![CDATA[social graph]]></category>
		<category><![CDATA[social search]]></category>
		<category><![CDATA[social web]]></category>
		<category><![CDATA[Sophia Parafina]]></category>
		<category><![CDATA[Strata 2011]]></category>
		<category><![CDATA[Swift River]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[Topsy]]></category>
		<category><![CDATA[Web 2.0 Summit]]></category>
		<category><![CDATA[Who owns your data?]]></category>
		<category><![CDATA[XMPP]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=6025</guid>
		<description><![CDATA[We are in the age of unearthing and uncovering data, and only just at the beginning of the age of processing data and dealing with it (see my interview with Anselm Hook, Part 2 upcoming). O&#8217;Reilly&#8217;s Strata Conference 2011 will explore &#8220;the change brought to technology and business by data science, pervasive computing, and new [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/noisedderived31.jpg"><img class="alignnone size-medium wp-image-6034" title="noisedderived3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/noisedderived31-300x163.jpg" alt="" width="300" height="163" /></a></p>
<p>We are in the age of unearthing and uncovering data, and only just at the beginning of the age of processing data and dealing with it (see my interview with <a href="http://www.hook.org/" target="_blank">Anselm Hook</a>, Part 2 upcoming). <a href="http://strataconf.com/strata2011" target="_blank">O&#8217;Reilly&#8217;s Strata Conference 2011</a> will explore &#8220;the change brought to technology and business by data science, pervasive computing, and new interfaces.&#8221; It is, perhaps, one of the most important events of 2011.</p>
<p>Data is driving a revolution much as coal, oil, and steel powered the industrial revolution. And the world-changing insight from Karl Marx that &#8220;the industrial revolution polarized the world into two groups: those who own the means of production and those who work on them&#8221; is taking on new life, as <a href="http://twitter.com/#!/acroll" target="_blank">Alistair Croll</a>, co-chair of <a href="http://strataconf.com/strata2011" target="_blank">Strata 2011</a>, points out in his post, <a href="http://mashable.com/2011/01/12/data-ownership/" target="_blank">&#8220;Who Owns Your Data?&#8221;</a></p>
<p><strong>&#8220;The important question isn&#8217;t who owns the data. Ultimately, we all do. A better question is, who owns the means of analysis? Because that&#8217;s how, as Brand suggests, you get the right information in the right place. The digital divide isn&#8217;t about who owns data &#8211; it&#8217;s about who can put that data to work.&#8221;</strong></p>
<p>Strata is where a vanguard will meet, not only to discuss this revolution&#8217;s futures, but to define how to create, handle, and build the platforms and experiences that will harness the data. My flight is booked! (Also check out <a href="http://www.bigdatacamp.org/">BigDataCamp</a>, which takes place the night before <a title="Strata Conference" href="https://en.oreilly.com/strata2011/public/regwith/str11dnaff" target="_blank">Strata</a>.)</p>
<p>The picture opening this post is from Michael EdgeCumbe&#8217;s <a href="http://garden.neocyde.net/thoughts/2010/12/fall-2010-itp-winter-show-project/">Fall 2010: ITP Winter Show Project</a>, a project exploring ways to intuitively get the feel of what is going on with big data sets using &#8220;the gestural manipulation and stereoscopic visualization of complex data to create a meditative state for data analysis.&#8221; Michael&#8217;s project will be part of the <a href="http://strataconf.com/strata2011/public/schedule/detail/17840" target="_blank">Science Fair at Strata</a>. For more on Michael&#8217;s work see <a href="http://www.neocyde.net/derive/2010/12" target="_blank">Noise Derived</a>. I also have a number of the <a href="http://strataconf.com/strata2011/public/schedule/topic/595 " target="_blank">interesting new interface sessions</a> at Strata in my schedule.</p>
<p>The daily <a href="http://radar.oreilly.com/2010/12/write-your-own-visualizations.html" target="_blank">Strata Gems</a> on O&#8217;Reilly Radar are a great place to get a gestalt of some of the Strata themes, and <a href="http://radar.oreilly.com/2010/12/strata-gems-three-key-data-trends-for-2011.html" target="_blank">this post</a> by <a href="http://strataconf.com/strata2011/profile/1" target="_blank">Edd Dumbill</a>, program chair for Strata, <a href="http://radar.oreilly.com/m/2010/12/strata-gems-three-key-data-trends-for-2011.html" target="_blank">Three key data trends for 2011</a>, looks at the year ahead. This week, I got the chance to ask Edd a few of the questions that I will have on my mind at Strata &#8211; see his responses below.</p>
<p>If you have been reading Ugotrade, you will know I am interested in our mobile social augmented futures and there is no question in my mind that these will be unleashed by our new capacities to work with data (see <a href="http://www.ugotrade.com/2010/10/31/tim-o%E2%80%99reilly%E2%80%99s-four-cylinder-innovation-engine-the-missing-manual-for-the-future/" target="_blank">my post here</a>).</p>
<h3>Data is the how.</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/backtypediagram.png"><img class="alignnone size-medium wp-image-6045" title="backtypediagram" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/backtypediagram-210x300.png" alt="" width="210" height="300" /></a></p>
<p><em>The pic above is from <a href="http://www.readwriteweb.com/hack/2011/01/secrets-of-backtypes-data-engineers.php" target="_blank">&#8220;Secrets of BackType&#8217;s Data Engineers.&#8221;</a> This post on ReadWriteHack by <a href="http://twitter.com/petewarden">Pete Warden</a>, an ex-Apple engineer and founder of <a href="http://www.openheatmap.com/">OpenHeatMap</a>, really lives up to its title. Check it out if you want to know how <strong>&#8220;three guys (the <a title="opens in new window" href="http://backtype.com/" target="_blank">BackType</a> team) with only seed funding process a hundred million messages a day?&#8221;</strong></em></p>
<p>I asked on Quora, &#8220;<a href="http://www.quora.com/What-will-be-the-most-important-developments-in-augmented-reality-in-2011" target="_blank">What would be the most important developments for Augmented Reality in 2011,&#8221;</a> <a href="https://sites.google.com/site/michalavny/" target="_blank">Michal Avny,</a> Strategist &amp; Real Time search expert, wrote:</p>
<p><strong>&#8220;AR strongly relies on localized personalized real time information.</strong></p>
<p><strong>&#8220;Having a stream of tweets based on keyword search, location or circle of friends doesn&#8217;t really make the AR experience; it is the processed real time relevant information that will make AR useful and intensify the experience.</strong></p>
<p><strong>&#8220;In 2011 Real Time search and Social Search will drastically change to provide the infrastructure required.&#8221;</strong></p>
<p>I followed up on Michal&#8217;s Quora answer with some more questions &#8211; see below in this post.</p>
<p>Also note <a href="http://www.quora.com/What-will-be-the-most-important-developments-in-augmented-reality-in-2011" target="_blank">the response</a> from <a href="http://research.microsoft.com/en-us/people/dmolnar/" target="_blank">David Molnar</a>; here is an excerpt:</p>
<p><strong>&#8220;2. A wave of actionable, important data APIs opened up, enabling useful non-gimmicky AR apps for the first time. Think geoloqi.com , or the work Max Ogden has done with Portland civic data. Plus of course <a href="http://face.com/" target="_blank">face.com</a> , email providers and calendar providers, etc.&#8221;</strong></p>
<p><a href="http://strataconf.com/strata2011/public/schedule/speaker/100889" target="_blank">Amber Case</a>, one of the founders of <a href="http://geoloqi.com/" target="_blank">Geoloqi</a>, is on the programming committee of Strata and will be speaking. Be sure to catch her session, <a href="http://strataconf.com/strata2011/public/schedule/detail/17748" target="_blank">Posthumans, Big Data and New Interfaces</a>, and if you haven&#8217;t already seen it, <a href="http://www.ted.com/talks/amber_case_we_are_all_cyborgs_now.html" target="_blank">Amber&#8217;s TED talk</a> is a must-see.</p>
<p>Geographic proximity is a powerful filter, as are route and time. But clearly social proximity, social relevance, and shared tastes are also key dimensions for location based experiences (see my convo with Schuyler of <a href="http://simplegeo.com/" target="_blank">Simple Geo</a>, upcoming).</p>
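<p>A geographic proximity filter of the kind mentioned above usually starts with a great-circle distance test against each candidate point. A minimal sketch – the place names and coordinates are illustrative, not from any particular service:</p>

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km: mean Earth radius

def nearby(items, lat, lon, radius_km):
    """Keep only items within radius_km of (lat, lon)."""
    return [name for name, ilat, ilon in items
            if haversine_km(lat, lon, ilat, ilon) <= radius_km]

places = [("Moscone Center", 37.784, -122.401),
          ("Golden Gate Bridge", 37.820, -122.479),
          ("Los Angeles", 34.052, -118.244)]
print(nearby(places, 37.787, -122.408, 10))
# → ['Moscone Center', 'Golden Gate Bridge']
```

<p>At scale you would index the points (geohash, quadtree, or a spatial database) rather than scan them all, but the distance test is the same.</p>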
<p>While the whole business of location based search and curation of augmented mobile social experiences is still, for the most part, uncharted terrain, the danger of key points of control being accessible only to elite players looms large. I asked <a href="http://www.youtube.com/watch?v=C2HcWlu1BS4" target="_blank">Sophia Parafina</a>, a pioneer in the open geo space, for some thoughts on real-time local/geosearch and geomessaging, and the future of openness &amp; big data (see Sophia&#8217;s response below).</p>
<h3><a href="http://www.quora.com/Is-the-market-ready-yet-for-P2P-cloud-computing" target="_blank">Is the market ready yet for P2P cloud computing?</a></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/8a174_invisibles_bigbrother_1210.jpg"><img class="alignnone size-full wp-image-6048" title="8a174_invisibles_bigbrother_1210" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/8a174_invisibles_bigbrother_1210.jpg" alt="" width="150" height="150" /></a></p>
<p>This is another question I&#8217;m following: <a href="http://www.quora.com/Is-the-market-ready-yet-for-P2P-cloud-computing" target="_blank">Is the market ready yet for P2P cloud computing?</a> It is one of those questions that we seem to have been asking in various forms for a very long while now, but without a major shift in sight. The pic above is from <a title="Permanent link to The Cloud Made Open Source " href="http://www.readwriteweb.com/cloud/2010/12/open-source-invisible.php">The Cloud Made Open Source &#8220;Invisible&#8221; This Year</a>. But, perhaps, we are at the point when open p2p clouds will find a place in the market because of their potential importance in real time social search and discovery. <a href="http://distributedsearch.blogspot.com/" target="_blank">Borislav Agapiev</a>, search entrepreneur and founder of <a href="http://www.vast.com/" target="_blank">Vast.com</a>, writes on <a href="http://www.quora.com/Is-the-market-ready-yet-for-P2P-cloud-computing?q=p2p+for+a+non+centralized+infrastructure" target="_blank">Quora</a>:</p>
<p><strong>&#8220;I believe a P2P cloud is ideally suited for social &amp; real-time search and discovery.</strong></p>
<p><strong>Consider MapReduce, a very interesting and popular paradigm for distributed computing. MapReduce is very much about bringing computation to data i.e. doing computation at nodes (map) and then aggregating results through network (reduce).</strong></p>
<p><strong>It is very clear now that user attention data (what they click on) is very valuable for search and discovery, yet a centralized model relies upon uploading all that to a single location and then doing a supposed local MapReduce. Clearly, MapReduce could be done  across the network, without any centralized uploads.</strong></p>
<p><strong>In addition to the efficiency argument raised here, it is even more important to consider privacy issues. Uploading massive amounts of user attention data to a centralized location is not something that is going to make users warm and fuzzy <img src="http://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" />   as we are increasingly seeing.</strong></p>
<p><strong>In a P2P cloud, there is no big brother watching over anyone; all computation and data storage is done in the cloud, fragmented in many, many small encrypted pieces &#224; la BitTorrent.&#8221;</strong></p>
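Agapiev&#8217;s point, that the map step runs where the data lives and only small aggregates cross the network in the reduce step, can be sketched in a few lines. This is an illustrative toy in Python, not Vast.com&#8217;s implementation; the node logs and click terms are invented:

```python
from collections import Counter
from functools import reduce

# Hypothetical per-node user-attention logs. In a P2P cloud each node would
# hold its own users' click data; here two lists simulate two nodes.
node_logs = [
    ["python", "maps", "python"],   # clicks seen by node A
    ["maps", "search", "python"],   # clicks seen by node B
]

def map_phase(clicks):
    """Runs at each node: count clicks locally (computation goes to the data)."""
    return Counter(clicks)

def reduce_phase(a, b):
    """Merge two nodes' partial counts (only these aggregates would travel)."""
    return a + b

partials = [map_phase(log) for log in node_logs]   # map: local, per node
totals = reduce(reduce_phase, partials)            # reduce: across the network
print(totals["python"])  # 3
```

The raw click lists never leave their node in this scheme; only the `Counter` aggregates are exchanged, which is the efficiency and privacy argument made above.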
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Screen-shot-2011-01-16-at-2.13.43-PM1.png"><img class="alignnone size-medium wp-image-6066" title="Screen shot 2011-01-16 at 2.13.43 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Screen-shot-2011-01-16-at-2.13.43-PM1-300x223.png" alt="" width="300" height="223" /></a><br />
</strong></p>
<p><em>Picture above from Brynn Marie Evans, <a href="http://brynnevans.com/blog/2010/03/17/it-takes-two-to-tango/">&#8220;It takes two to tango: review of my social search panel&#8221;</a></em></p>
<h3>The Delta of Now &#8211; Transforming Search into a Social Democratic Act</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/2538108030_d37d124e44.jpg"><img class="alignnone size-medium wp-image-6049" title="2538108030_d37d124e44" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/2538108030_d37d124e44-300x225.jpg" alt="" width="300" height="225" /></a></p>
<p><em>Picture of Maneki Neko &#8220;beckoning&#8221; cats from <a href="http://www.journeyetc.com/travel-ideas/famous-landmarks-of-cats-and-dogs-around-the-globe/">Journeyetc</a></em></p>
<p>New ecologies of human and machine intelligence are beginning to change basic social structures &#8211; see the <a href="http://www.youtube.com/watch?v=t1J2RXrvPek" target="_blank">Future of Work (Biewald and Chirayath Janah 2010)</a>. Projects like <a href="http://swift.ushahidi.com/" target="_blank">Swift River</a> use search and machine mining to filter streams on topics of interest that can then be curated by human beings. This approach may be extended to the curation of real-time data streams, employing machine learning algorithms based upon these explicit relationships.</p>
<p>Augmented mobile social experiences are a new frontier in which ideas and practices from a number of fields collide, including: ambient findability (Morville 2005), urban psychogeography, narrative structures, ambient games and devices, 4d (time-space), explorations of place and memory, enchanted objects and people (Kuniavsky 2010), and designed animism (Laurel 2010), to mention just a few.</p>
<p>Mobile local interaction presents an opportunity to invert the search pyramid and to transform search into a social, democratic act (see my upcoming interview with Anselm Hook).  Up until now search has been predicated on a very narrow revenue model.  Google has an implicit model of B2C &#8211; business to consumer &#8211; brokerage. We are only just beginning to get a glimpse of the disruptive potential of C2C &#8211; consumer to consumer &#8211; brokerages.  Mobile local C2C brokerages that allow us to transact in a trustworthy way over our local geography in close to real time (Hook 2010) have the potential to enable new forms of social organization.  Bruce Sterling&#8217;s short story about a networked gift economy, <a href="http://tqft.net/wiki/Maneki_Neko" target="_blank">Maneki Neko</a>, is a brilliant glimpse at the disruptive potential of such re-imaginings.</p>
<p>Augmented experiences that shift or change a person&#8217;s situated geolocal experience of social reality &#8211; and change our relationship to people and place by augmenting engagement and reputation through socially driven consumer tie-ins and game dynamics, like <a href="http://foursquare.com/" target="_blank">Foursquare</a> &amp; <a href="http://gowalla.com/" target="_blank">Gowalla</a> &#8211; are beginning to emerge, as <a href="http://www.web2expo.com/webexny2010/public/schedule/detail/15446" target="_blank">Kati London pointed out in her excellent keynote at Web 2.0 Expo</a>.  And while the integration of mobile local interaction with an augmented view that shifts our geolocal experience visually will involve creative solutions to some well churned mobile tracking, mapping and registration challenges, the exploration and development of new dimensions through which we can filter and create trusted and meaningful augmented mobile social experiences is vital, whether you are considering a mobile screen, map, camera view, or futuristic HUDs and gestural interfaces.</p>
<h3>Talking with Edd Dumbill</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/edddumbillheadshot.png"><img class="alignnone size-full wp-image-6077" title="edddumbillheadshot" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/edddumbillheadshot.png" alt="" width="150" height="150" /></a><br />
Picture from <a href="http://people.oreilly.com/edd" target="_blank">O&#8217;Reilly Community.</a></p>
<p><strong>Tish Shute: </strong>First congratulations on Strata!   On the Strata homepage there is a quote from Jason Hoffman:</p>
<p><strong>&#8220;My gut feeling is that we&#8217;re going to look back at the upcoming Strata Conference like we do at the Web 2.0 Conference in 2004/2005.&#8221;<br />
&#8212;Jason Hoffman, CTO/Founder, Joyent, Inc.</strong></p>
<p>Why do you think Jason&#8217;s comparison might be prescient?</p>
<p><strong>Edd Dumbill: Web 2.0 is a development that ran through every brand that has a web presence and radically changed the way business is done for many companies and brands.</strong></p>
<p><strong>Strata will have a similar impact: every business has data, every business collects an increasing amount of data. This data is the new oil &#8211; a valuable raw material that when refined or combined creates value and opportunity.</strong></p>
<p><strong>Tish Shute:</strong> The rise of real time was one of your three key data trends for 2011.  Hadoop is bringing the capacity to work with big data to more than just a few elite players.  But the challenge is still real time.  You mention we will be seeing a hybrid approach to real time and batch MapReduce processing.  Will we hear more about these approaches to real time at Strata?  And, what do you see as the most important conversations on real time data analytics emerging at Strata?</p>
<p>You point out &#8220;open source projects and cloud infrastructure means developers can evaluate and learn to love technologies without requiring support or approval from above.&#8221;  What are the most exciting developments on the horizon for open source tools?</p>
<p><strong>Edd Dumbill: </strong><strong>Here are some projects worth watching, in the key areas of real time, cluster management and Hadoop.</strong></p>
<p><strong>* Cassandra and MongoDB &#8212; NoSQL databases that will prove vital for anybody with real time big data needs</strong></p>
<p><strong>* Mesos &#8212; a compute cluster management tool, modeled after that which powers Google</strong></p>
<p><strong>* Hadoop ecosystem&#8217;s continuing maturation, especially HBase and Hive.</strong></p>
<p><strong>Tish Shute: </strong> Do you think the market is ready for p2p cloud computing?</p>
<p><strong>Edd Dumbill: The market is emerging for decentralized and distributed cloud computing, and P2P technologies are one way of achieving that. The key trends will be moving computation nearer the data sets or nearer the point of user consumption of the result.</strong></p>
<p><strong>P2P is a difficult model for anybody wanting to commercialize a service, so I think it will tend to form part of a hybrid solution.</strong></p>
<p><strong>Tish Shute:</strong> We have seen enormous strides in our ability to work with giant unstructured databases recently.  Do you think, perhaps, that the dream of a web of linked data &#8211; &#8220;a web of data that can be processed directly and indirectly by machines&#8221; &#8211; will be attained through brute force, i.e. through our ability to harness the power of massively parallel processing, as much as by Semantic Web approaches focused on machine readable metadata? [Also see <a href="http://www.quora.com/Is-this-a-good-approach-www-dist-systems-bbn-com-people-krohloff-shard_overview-shtml-to-use-Hadoop-to-build-a-scalable-distributed-triple-store" target="_blank">my question on Quora</a>, &#8220;Is this a good approach (<a rel="nofollow" href="http://www.dist-systems.bbn.com/people/krohloff/shard_overview.shtml" target="_blank">www.dist-systems.bbn.com/people/&#8230;</a>) to use Hadoop to build a scalable, distributed triple store?&#8221;]</p>
<p><strong>Edd Dumbill:  I&#8217;ve been an observer of the Semantic Web for over a decade and I tend to believe that on the web, data means to you whatever meaning you give it as the consumer. With that model, the links are made by the consumer rather than sitting out there explicitly. Some links become de facto standards, and some very few become web standards.</strong></p>
<p><strong>I think the actuality will be a mix of both explicitly stated metadata and that which is inferred. The Semantic Web is a great framework for certain operations, especially interoperable exchange of metadata. A great many more private meanings, never intended to be shared, will be created by consuming software.</strong></p>
<p><strong>There&#8217;s no question that machines will learn how to process most of the Web. Furthermore, machines will learn how to process most of the physical world we&#8217;re in. And that by the end of this decade</strong>.</p>
<h3>Talking with Sophia Parafina</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/sophiawhere.jpg"><img class="alignnone size-medium wp-image-6062" title="sophiawhere" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/sophiawhere-300x250.jpg" alt="" width="300" height="250" /></a></p>
<p><em>Picture of Sophia at <a href="http://where2conf.com/where2011" target="_blank">Where 2.0</a></em></p>
<p><strong>Tish Shute:</strong> Sophia you have worked in the trenches for a long time now  to support the growth of open geo data.  What do you hope to see emerge in 2011 in the field of geo-data?</p>
<p><strong>Sophia Parafina: Better support for displaying and handling location data across multiple apps. Fred Wilson <a href="http://www.avc.com/a_vc/2011/01/content-shifting.html?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+AVc+%28A+VC%29" target="_blank">recently blogged about content-shifting</a>; he talks about overcoming content silos across devices. We&#8217;ve worked very hard to reduce data silos via formats, but devices are creating their own silos. I would like to see a standard method for sending geo data and geo information to mobile devices.</strong></p>
<p><strong>Producing content for mobile is different from producing content for a computer browser. Web 2.0 produced a lot of infrastructure for browser based interfaces, but in mobile devices that gap has been filled with apps, which fragments how data is handled by various devices. What is even more interesting in the mobile space is that devices can push data back that contains location, user updates, photos and even sensor data.  If mobile data standardizes, it could lead to browser based applications and stem the continued fragmentation of the mobile application market.</strong></p>
<p><strong>Tish Shute:</strong> <a href="http://simplegeo.com/" target="_blank">Simple Geo</a> and<a href="http://www.factual.com/" target="_blank"> Factual</a> are startups emerging in the geodata space. What do you see on the horizon in terms of both the growth of business opportunities and an open geo data community?</p>
<p><strong>Sophia Parafina: In the near future I think we&#8217;ll see startups providing curated data + API, and in response we will also see companies that provide a single interface across multiple data providers. We saw this when everyone released a mapping API and companies such as <a href="http://mapufacture.com/">Mapufacture</a> provided a single interface across multiple APIs.</strong></p>
<p><strong>We will see a resurgence in data providers repackaging the 2010 US Census data in different ways to respond to market segments; some of this will be open data, but all of it will be provided through an API instead of files. Additionally, we&#8217;ll see more data from outside the US.</strong></p>
<p><strong>Tish Shute:</strong> What are the biggest obstacles to having the open geodata sets available that we need to enable mobile local interactions and social augmented experiences?</p>
<p><strong>Sophia Parafina: Licensing for both crowd sourced data and private curated open data will become an issue. We recently saw VLC, the open source video player, pulled from the Apple app store because of licensing issues. Also, licensing of content by geography will be problematic, limiting searches by geographical location. In addition, how will licensing of data that is updated by crowd sourcing work?</strong></p>
<p><strong>Multiple APIs for accessing data sources. The current trend for each provider to create an API for their data sets will result in data silos &#8211; there needs to be a single sign-on equivalent for requesting data.</strong></p>
<p><strong>Size of data on the wire: the current models for delivering data are based on broadband connections. However, as mobiles increasingly become the way people use the web, the data needs to be sized accordingly. This also goes for mobile interfaces. Have you tried to shop on a mobile device, or buy a train or plane ticket? It&#8217;s frustrating and error prone. There is a large untapped market of people who only use the Internet on mobile devices.</strong></p>
<p><strong>Tish Shute</strong>: You pointed me to <a href="http://radar.oreilly.com/2010/12/strata-gems-diy-personal-sensi.html" target="_blank">this link in Strata Gems</a> re &#8220;an interesting and pertinent (also a competitor to GeoLoqi)&#8221; service &#8211; <a href="http://tasker.dinglisch.net/" target="_blank">the Android Tasker app.</a> What do these emerging services bring to the table in terms of the next generation of location based services?</p>
<p><strong>Sophia Parafina: This app lets your device interact with the environment. I think that this is a great way of using the sensors on existing platforms to increase interaction and to implement ambient findability. The basic premise of Tasker is that some action happens in response to an event in an application, time, date, location, event, or gesture. Tasker has defined 180 actions that can occur based on any number or combination of events. This can provide a basic vocabulary for interaction between the user and the device and, more importantly, between users. Tasker also can use Android script plugins, which lowers the bar to creating your own ambient application.</strong></p>
<p><strong>Programs such as Tasker can provide a way for people to interact with social networks beyond sending messages. People can use their mobile devices to interact with their surroundings without having to interact with the device.</strong></p>
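Tasker&#8217;s premise &#8211; an action fires when a matching event arrives &#8211; can be modeled as a small rule registry. The sketch below is a toy illustration of that event/condition/action pattern, not Tasker&#8217;s actual API; the rule and event names are made up:

```python
# A toy event -> action dispatcher in the spirit of Tasker's model.
rules = []

def when(event_type, condition, action):
    """Register a rule: run `action` when an event of this type matches."""
    rules.append((event_type, condition, action))

def fire(event_type, payload):
    """Dispatch an event to every matching rule; return the actions' results."""
    return [action(payload)
            for etype, cond, action in rules
            if etype == event_type and cond(payload)]

# Example rule (hypothetical): silence the phone when location is "cinema".
when("location",
     lambda p: p["place"] == "cinema",
     lambda p: "ringer off")

results = fire("location", {"place": "cinema"})  # -> ["ringer off"]
```

Combinations of events, as in Tasker, fall out of the `condition` callable: a lambda can test time, place, and sensor fields of the payload together.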
<p><strong>Tish Shute:</strong> We have had many conversations about emerging ideas of geo-search, geo-messaging and geo-fencing. What are the most interesting developments in these areas and what do you see on the horizon for 2011?</p>
<p><strong>Sophia Parafina: The map will fade into the background and become less important. Display of information will be context aware, and that includes location. For example, let&#8217;s say I make a grocery list; when I&#8217;m at the grocery store, the list will just pop up without the need for me to find the app that has the list. Or reminders or offers pop up when you are near a place at a certain time. Say you need to buy a present for a child&#8217;s birthday party: you could send out a request that you are looking for an item, and retailers could offer &#8220;on the spot&#8221; discounts if you are in the area.</strong></p>
<p><strong>Geo-search, geo-messaging, and geo-fencing are geared towards mobile devices, so I expect to see them soon as part of apps. Building generic applications that implement geo* will fail because that sort of information is useful only within a context. Geo* apps are solutions looking for a problem. The killer mobile app will use these functions transparently to reduce the cognitive load of the user who is busy moving around in the world.</strong></p>
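The &#8220;list pops up at the store&#8221; behavior rests on a geo-fence test: is the device inside a radius around a point of interest? A minimal sketch using the haversine great-circle distance follows; the coordinates and the 100 m radius are invented for illustration:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def inside_fence(user, fence_center, radius_m):
    """True when the user's position falls inside the circular geo-fence."""
    return haversine_m(*user, *fence_center) <= radius_m

# Hypothetical grocery-store fence of 100 m around a point in Manhattan.
store = (40.7128, -74.0060)
near = inside_fence((40.7129, -74.0061), store, 100)   # roughly 14 m away
far = inside_fence((40.7300, -74.0060), store, 100)    # roughly 1.9 km away
```

A real service would run this check (or a cheaper bounding-box prefilter) against many fences on each location update, which is why the "transparent, low cognitive load" framing above matters for battery and attention alike.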
<p><strong>User data gathered from multiple web applications will become consolidated profiles that will be used for context aware applications. For example, there could be a service which matches prices of items that you have shopped for on the web: the service would have access to your cookies, and know your favorite retailers, things you have shopped for, and your location and activity patterns (when you are at home, work, or a restaurant). When you are in the vicinity of a brick and mortar retailer with the same or similar items, the service can send you an alert to match the price of the item you found online. So your digital life will become more closely linked with your day to day activities.</strong></p>
<h3>Talking with Michal Avny</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Michal_Pic.jpg"><img class="alignnone size-medium wp-image-6059" title="Michal_Pic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Michal_Pic-300x275.jpg" alt="" width="300" height="275" /></a></p>
<p><strong>Tish Shute: </strong>At <a href="http://www.web2summit.com/web2010" target="_blank">Web 2.0 Summit</a>, one of the highlights for me was the <a href="http://www.web2summit.com/web2010/public/schedule/detail/17101" target="_blank">Q&amp;A: The New Search Insurgents</a> lunch, where Charlie Cheever of <a href="http://www.quora.com/" target="_blank">Quora</a>, IMO, stole the show. I tweeted:</p>
<p><em>&#8220;One of my takeaways from #w2s is that #quora points to future of augmented mobile social experiences &#8211; a search filter for experience! #AR&#8221;</em></p>
<p>In your view what are the biggest challenges for location Q&amp;A to emerge as a search filter for location based experiences?</p>
<p><strong>Michal Avny: The biggest location Q&amp;A challenges yet to be conquered are immediacy (real time dynamic data), relevancy (strong personalized filters) and user experience (simplified interface).</strong></p>
<p><strong>Location Q&amp;A enables different use cases.  The most prominent are Follow (follow places, topics and friends to learn about a location), Interact (meet new people based on common interests), Plan ahead (plan a trip, night out or a shopping day by asking and searching for local information) and On-site (check for recommendations, friends, deals, events and traffic nearby).</strong></p>
<p><strong>Unlike Follow, Interact and Plan ahead, which can be added to existing Q&amp;A platforms (such as Quora) by attending to location specifics as they share similar characteristics, the on-site mode introduces a completely different experience; first and foremost it requires immediate attention.  It is real time based and the nature of the data is dynamic.  Traffic updates, current events, nearby friends &#8211; all that changes constantly.  Posting a location question on-site implies the response should be in real time (e.g. best kid friendly restaurant); the normal Q&amp;A response latency wouldn&#8217;t work.</strong></p>
<p><strong>Strong relevancy filters are required to accommodate the overwhelming flood of information.  Moreover, some of the data should be filtered by user behavior and preferences: check in notifications (type of relation), restaurant recommendations (type of food, price level, etc), shopping deals (commercial categories) and more.</strong></p>
<p><strong>Mobile experience requires ease of use and simplicity.  A new Q&amp;A interface and query language that allows for posting questions should be defined, as well as a coherent, summarized response interface.  A user on the go should not have to post lengthy questions, browse through tens of results or search for the right service, but instead use a simple intuitive tool.</strong></p>
<p><strong>Tish Shute: </strong>Real-time location based search is in its infancy.  Real time questions can be answered using different services such as Yelp, TripAdvisor, <a href="http://www.waze.com/homepage/" target="_blank">Waze,</a> <a href="http://foursquare.com/" target="_blank">Foursquare</a>, IMDb and more.  But what are the challenges to moving forward with aggregating these sources, and then enabling &#8220;locals&#8221; who are able to process and deal with vast amounts of information?</p>
<p><strong>Michal Avny: Using some of the leading location services to answer questions is sufficient to start with.</strong></p>
<p><strong>In order to provide broad coverage (worldwide) and reliable information, aggregation of the different services is required, for instance to normalize product and service rank, aggregate classifieds, and more. This is quite challenging as there is no one standard available.</strong></p>
<p><strong>When the location Q&amp;A user base is big enough, I foresee a tendency to rely more on &#8216;locals&#8217; input as the base of information.   As the platform grows, communities will be formed with different cultures, relationships and trust levels, making the information more valuable and customizable.  Some of the challenges I already mentioned are implementing filters, query language and interfaces to enable using the vast amounts of real time data in a mobile environment.  More of the challenges lying ahead are integrating the &#8216;locals&#8217; data with location based services as they are integral components of the Q&amp;A ecosystem.   Merging trust levels and relationships while adhering to different privacy guidelines is a challenge yet to be explored. (This should be discussed in more detail under the protocols topic.)</strong></p>
<p><strong>It is quite evident that Quora is now facing growing pains and is struggling to maintain its character.  Same as with Quora, it will also be a challenge to support and maintain the ecosystem while allowing for massive scale-up.</strong></p>
<p><strong>Tish Shute:</strong> I have been very interested in exploring protocols that will be enablers of micro local interaction and mobile social interaction for AR &#8211; particularly the XMPP extensions and operational transform work of Google Wave (now <a href="http://incubator.apache.org/projects/wave.html" target="_blank">Apache Wave</a>), and PubSub protocols like <a href="http://code.google.com/p/pubsubhubbub/" target="_blank">PubSubHubbub</a> and the Erlang based <a href="http://www.rabbitmq.com/" target="_blank">RabbitMQ</a>.  We are beginning to see protocols emerging that could enable new real time local services.  What do you think are some of the most valuable use cases for &#8220;locals&#8221; that this new generation of real time protocols can enable?</p>
<p><strong>Michal Avny: AR is about interacting with digital information; the AR ecosystem is composed of layers and components such as devices, platforms, browsers, applications and content.  For the different components to interact, new protocols, security guidelines, and privacy policies must be in place.  A standard will enable local vendors and service providers to publish specials, deals, updates and events for any application to broadcast; identify people and places by proximity (without having to use the same application or device); let local recommendations be shared by services; allow devices to interact; give location based platforms, such as Q&amp;A, access to a vast breadth of information; let geo aware devices provide a consistent experience globally; and much more.</strong></p>
<p><strong>Tish Shute:</strong> What do you think are the biggest challenges to going mainstream for this emerging field of real time social discovery?</p>
<p><strong>Michal Avny: The biggest challenge is building towards real time, geo-aware, localized, personalized ambient data.   Discovery is in its infancy; location based social Best, Top, and Trending lists with some basic filtering options are available, and this is great as people are getting accustomed to information surrounding them.  To some degree it can intensify the AR experience, for instance suggesting the most popular dish in a restaurant, or mapping the best coffee shops nearby, but it is customized at best by friend recommendations and depends on the coverage and broadness of the specific discovery service.</strong></p>
<p><strong>There is a need for the next generation of discovery: customized geo social aware discovery that filters the vast amount of real time data by learning user preferences and behavior (built on top of the much needed local social real time open protocol).</strong></p>
<p><strong>Tish Shute:</strong> Who are your favorite startups/upstarts in the field of real time search, and why?</p>
<p><strong>Michal Avny: <a href="http://www.my6sense.com/" target="_blank">My6Sense</a> &#8211; My6sense provides a sharper and better way to experience your information from feeds you subscribe to (social networks, news, RSS feeds, etc.).  It&#8217;s personal &#8211; content is ranked based on what&#8217;s relevant to you. It learns what&#8217;s valuable to you by translating your consumption behavior into a personalized ranking function.<br />
My6Sense &#8211; because it is a personalized prediction filter, a critical foundation for AR</strong></p>
<p><strong><a href="http://topsy.com/" target="_blank">Topsy</a> &#8211; Topsy is realtime search powered by the social web that finds the most relevant conversations happening online. The site&#8217;s underlying technology examines popular links as well as the influence of each person citing a link. Topsy augments traditional search engines by finding information that people are talking about.<br />
Topsy &#8211; because its ranking is based on retweets and influencers, a great social experience</strong></p>
<p><strong><a href="http://collecta.com/" target="_blank">Collecta</a> &#8211; Collecta is a real-time search engine for the social web. It monitors the update streams of popular realtime blogs and sites like Twitter, WordPress, and Flickr, and shows results as they happen. Results can be filtered by status updates, comments, stories, or photos. The entire engine is built around the XMPP standard, which pushes out data on a continual basis, so that for every search you end up watching a stream that keeps updating itself.<br />
Collecta &#8211; because it is built around XMPP, a real time experience</strong></p>
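The push model Collecta is described as using &#8211; subscribers register interest once and updates are delivered as they happen, rather than being polled for &#8211; reduces, in miniature, to a topic/callback registry. A toy in-process sketch (the topic names and payloads are invented; a real system would push over XMPP or a message broker):

```python
from collections import defaultdict

# Topic name -> list of subscriber callbacks.
subscribers = defaultdict(list)

def subscribe(topic, callback):
    """Register interest in a topic; callback is invoked on each new item."""
    subscribers[topic].append(callback)

def publish(topic, item):
    """Push a new item to every subscriber of the topic as it arrives."""
    for callback in subscribers[topic]:
        callback(item)

# Example: a "search" that watches a stream instead of polling it.
seen = []
subscribe("status-updates", seen.append)
publish("status-updates", "new post from WordPress")
publish("photos", "not delivered: nobody subscribed to this topic")
```

The subscriber's list fills as events arrive, which is the "stream that keeps updating itself" experience described above; filtering by type (status updates, comments, photos) maps naturally onto topics.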
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2011/01/20/real-time-big-data-at-strata-2011-ambient-findability-geomessaging-augmented-data-and-new-interfaces/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Interview with Bruce Sterling, Part I: At the 9am of the Augmented Reality Industry, are2010</title>
		<link>http://www.ugotrade.com/2010/06/16/interview-with-bruce-sterling-part-i-at-the-9am-of-the-augmented-reality-industry-are2010/</link>
		<comments>http://www.ugotrade.com/2010/06/16/interview-with-bruce-sterling-part-i-at-the-9am-of-the-augmented-reality-industry-are2010/#comments</comments>
		<pubDate>Wed, 16 Jun 2010 21:58:28 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[3D mapping and Augmented Reality]]></category>
		<category><![CDATA[3d smartphone animated avatars]]></category>
		<category><![CDATA[Alan Turing-style AI]]></category>
		<category><![CDATA[Andrea Carignano]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR and Farmville]]></category>
		<category><![CDATA[AR as an interface for devices]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR goggles]]></category>
		<category><![CDATA[AR HMDs]]></category>
		<category><![CDATA[AR technology]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[AR Wave at are2010]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[ARWave at are2010]]></category>
		<category><![CDATA[Auggie Award]]></category>
		<category><![CDATA[Augmented Reality Consortium]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality game development]]></category>
		<category><![CDATA[augmented reality gamers]]></category>
		<category><![CDATA[augmented reality games]]></category>
		<category><![CDATA[Augmented reality shoes]]></category>
		<category><![CDATA[Blaise Aguera y Arcas]]></category>
		<category><![CDATA[Brad Foxhaven]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Bruno Uzzan]]></category>
		<category><![CDATA[Chris Cameron]]></category>
		<category><![CDATA[Cloud Mirror]]></category>
		<category><![CDATA[distributed AR]]></category>
		<category><![CDATA[e23 Games]]></category>
		<category><![CDATA[Eric Gradman]]></category>
		<category><![CDATA[federation and AR]]></category>
		<category><![CDATA[fiduciary markers]]></category>
		<category><![CDATA[gamer guys at are2010]]></category>
		<category><![CDATA[glocal]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[Google Goggles on the iphone]]></category>
		<category><![CDATA[H.E.AI.D]]></category>
		<category><![CDATA[Helen Papagiannis]]></category>
		<category><![CDATA[Iguchi Takahito]]></category>
		<category><![CDATA[Ivan Franco]]></category>
		<category><![CDATA[Jay Wright]]></category>
		<category><![CDATA[Jesse Schell]]></category>
		<category><![CDATA[Jesse Schell at are2010]]></category>
		<category><![CDATA[Jesse Schell's keynote at are2010]]></category>
		<category><![CDATA[Joe Dunn]]></category>
		<category><![CDATA[Joshua Kauffman]]></category>
		<category><![CDATA[Kent Demaine]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[linked data and AR]]></category>
		<category><![CDATA[Mark Billinghurst]]></category>
		<category><![CDATA[Mark Billinghurst at are2010]]></category>
		<category><![CDATA[Marvin Minsky-style hard AI]]></category>
		<category><![CDATA[Microsoft and AR]]></category>
		<category><![CDATA[mini-global micro-startups]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[oooii]]></category>
		<category><![CDATA[Open AR]]></category>
		<category><![CDATA[Open AR Stack]]></category>
		<category><![CDATA[Open AR Standards]]></category>
		<category><![CDATA[OpenAR]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Parrot AR Drone]]></category>
		<category><![CDATA[Patched Reality]]></category>
		<category><![CDATA[Patrick O'Shaughnessey]]></category>
		<category><![CDATA[Qualcomm]]></category>
		<category><![CDATA[Qualcomm at are2010]]></category>
		<category><![CDATA[réalité augmentée]]></category>
		<category><![CDATA[realtà aumentata]]></category>
		<category><![CDATA[Roger Corman]]></category>
		<category><![CDATA[Rudy Rucker]]></category>
		<category><![CDATA[Sekai camera]]></category>
		<category><![CDATA[Sekai No Camera]]></category>
		<category><![CDATA[semantic search and AR]]></category>
		<category><![CDATA[Sigal Arad Inbar]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[standards for AR]]></category>
		<category><![CDATA[Stupid Fun Club]]></category>
		<category><![CDATA[Talking with Bruce Sterling at are2010]]></category>
		<category><![CDATA[territorialization]]></category>
		<category><![CDATA[The Future of AR eyewear]]></category>
		<category><![CDATA[The Hollywood AR Scene]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[Total Immersion]]></category>
		<category><![CDATA[virtual reality]]></category>
		<category><![CDATA[VR]]></category>
		<category><![CDATA[Will Wright at are2010]]></category>
		<category><![CDATA[X: The Man with the X-Ray Eyes]]></category>
		<category><![CDATA[YDreams]]></category>
		<category><![CDATA[Zenitum]]></category>
		<category><![CDATA[Zenitum at are2010]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5524</guid>
		<description><![CDATA[Shortly after Augmented Reality Event &#8211; are2010, I talked with Bruce Sterling on Skype and in Google Docs about his experience there. I am posting the conversation in two parts to make it a more blog-friendly length! The picture above is the Auggie Award for the best AR demo, designed by Sigal Arad Inbar. [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/auggie.jpg"><img class="alignnone size-medium wp-image-5525" title="auggie" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/auggie-300x217.jpg" alt="auggie" width="300" height="217" /></a></p>
<p><em> </em></p>
<p>Shortly after <a href="http://augmentedrealityevent.com/" target="_blank">Augmented Reality Event &#8211; are2010</a>, I talked with Bruce Sterling on Skype and in Google Docs about his experience there. I am posting the conversation in two parts to make it a more blog-friendly length!<strong><br />
</strong></p>
<p>The picture above is the <a href="http://gallery.me.com/pookatak#100153" target="_blank">Auggie Award</a> for the best AR demo, designed by <a href="http://www.pookatak.com" target="_blank">Sigal Arad Inbar</a>. It was won by <a href="http://www.ydreams.com/#/en/homepage/" target="_blank">YDreams!</a> See <a title="Permanent Link to Ivan Franco recounts the team&#8217;s ARE 2010 experience, and winning the event&#8217;s first-ever Auggie Award" rel="bookmark" href="http://www.ydreams.com/blog/2010/06/05/ivan-franco-recounts-the-team%e2%80%99s-are-2010-experience-and-winning-the-event%e2%80%99s-first-ever-auggies-award/">Ivan Franco recounts the team&#8217;s ARE 2010 experience, and winning the event&#8217;s first-ever Auggie Award</a> for more. The video below was shot at the <a href="http://www.ydreams.com/" target="_blank">YDreams</a> booth by Bruce Sterling.</p>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="300" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="flashvars" value="intl_lang=en-us&amp;photo_secret=40ef3f4bc9&amp;photo_id=4671874785&amp;flickr_show_info_box=true" /><param name="bgcolor" value="#000000" /><param name="allowFullScreen" value="true" /><param name="src" value="http://www.flickr.com/apps/video/stewart.swf?v=71377" /><param name="allowfullscreen" value="true" /><embed type="application/x-shockwave-flash" width="400" height="300" src="http://www.flickr.com/apps/video/stewart.swf?v=71377" allowfullscreen="true" bgcolor="#000000" flashvars="intl_lang=en-us&amp;photo_secret=40ef3f4bc9&amp;photo_id=4671874785&amp;flickr_show_info_box=true"></embed></object><br />
<em>&#8220;The Hotness&#8221; &#8211; <a href="http://www.flickr.com/photos/brucesterling/4671874785/in/photostream/" target="_blank">YDreams rocking it at ARE2010 from brucesflickr</a></em></p>
<p>Rudy Rucker, who was hanging out with  Bruce Sterling, captured the are2010 buzz and some great  images in his post, <a title="Permanent Link to Augmented Reality,  Painting,  Twitter" rel="bookmark" href="http://www.rudyrucker.com/blog/2010/06/06/augmented-reality-painting-twitter/">Augmented   Reality, Painting, Twitter.</a> As Rudy put it:</p>
<p><strong>&#8220;AR is  hoping to be a next big thing, a cozier and more commerce-driven  cousin  of the old VR, or virtual reality.&#8221;</strong></p>
<p>Bruce Sterling&#8217;s opening keynote is up, <a href="http://augmentedrealityevent.com/2010/06/06/are-2010-keynote-by-bruce-sterling-build-a-big-pie/" target="_blank">ARE 2010 Keynote by Bruce Sterling: Bake a Big Pie!</a>, and also the <a title="ARE 2010 Keynote by Will Wright: Brilliant Inspiration for the Augmented Reality Community" href="http://augmentedrealityevent.com/2010/06/14/are-2010-keynote-by-will-wright-brilliant-inspiration-for-the-augmented-reality-community/">ARE 2010 Keynote by Will Wright: Brilliant Inspiration for the Augmented Reality Community</a>, with more videos from are2010 on the way. One must-read post on are2010 is Chris Cameron&#8217;s <a href="http://www.readwriteweb.com/archives/augmented_realitys_next_steps_sitting_down_with_titans_of_ar.php" target="_blank">Augmented Reality&#8217;s Next Steps: Sitting Down with the Titans of AR</a>.</p>
<p><strong><br />
</strong></p>
<h3>Talking with Bruce Sterling, Part 1</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/bruceandauggiepost.jpg"><img class="alignnone size-medium wp-image-5528" title="bruceandauggiepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/bruceandauggiepost-300x199.jpg" alt="bruceandauggiepost" width="300" height="199" /></a><br />
<em>The Auggie panel, <a href="http://www.wired.com/beyond_the_beyond/" target="_blank">Bruce Sterling</a>, <a href="http://gamepocalypsenow.blogspot.com/" target="_blank">Jesse Schell</a>, and Mark <a href="http://www.hitlabnz.org/wiki/Billinghurst,_M." target="_blank">Billinghurst</a> inspect the award.</em></p>
<p><strong>Tish Shute:</strong> In your keynote at the 9 AM of the augmented reality industry you asked some questions of the are2010 audience: &#8220;What&#8217;s the mission statement? You&#8217;re the world&#8217;s first pure-play experience designers, except that user experience is mostly futuristic hot air. But run with that, right? What are your tactical steps? You should get dressed, have a coffee, have a to-do list.&#8221;</p>
<p>How much of that did you see going on in the next two days?</p>
<p><strong>Bruce Sterling: Well, I wasn&#8217;t privy to any of the business discussions. I didn&#8217;t think it was an accident that <a href="http://www.wired.com/beyond_the_beyond/2010/06/augmented-reality-total-immersion-standards-proposal/" target="_blank">this standard AR-enabled tag thing came up from Bruno Uzzan, Total Immersion</a>. That seemed to me to be a useful thing. I was always interested in the <a href="http://www.arconsortium.org/" target="_blank">Augmented Reality Consortium</a>. It struck me as remarkable that there was this group of people who clearly all knew one another and had some kind of game plan. I applaud them for that, because these are not the 1980s. [laughs] You know, it&#8217;s just a different world for young startup companies.</strong></p>
<p><strong>Tish Shute:</strong> I think you&#8217;re right. There seemed to be some VC conversations going on; we don&#8217;t know what went on in the meetings, but it was noticeable in the atmosphere of excitement, and remarked on by a few people. So that kind of thing was definitely going on.</p>
<p>And, of course, I was so busy I never even got to see the expo properly!  You said you wanted to be surprised.</p>
<p>Did anyone surprise you in any of the talks, in any of the expo?</p>
<p><strong><br />
</strong></p>
<h3 style="text-align: left;"><em><strong>AR used as interfaces for  devices</strong></em></h3>
<p style="text-align: left;"><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/SeacO2are2010.jpg"><img class="alignnone size-medium wp-image-5530" title="SeacO2are2010" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/SeacO2are2010-300x225.jpg" alt="SeacO2are2010" width="300" height="225" /></a></p>
<p><a href="http://www.flickr.com/photos/brucesterling/4673885122/" target="_blank"><em>Italian augmented robot from SEAC02 from brucesflickr</em></a></p>
<p><strong>Bruce Sterling: I have to say I was a little bit surprised to see Andrea Carignano demoing a robot. I happen to know him because he&#8217;s here in Torino. He&#8217;s the guy that came out of Fiat and went into AR. I am not a particularly huge robot fan, but I think it&#8217;s of great interest that AR is used as interfaces for devices, as opposed to the Jesse Schell idea that AR is all about a &#8220;man with the X-ray eyes.&#8221;</strong></p>
<p><strong>My suspicion is that a lot of surprises will come out of mashups of AR.</strong></p>
<p><strong> </strong></p>
<p><strong>Tish Shute:</strong> I didn&#8217;t get to see Andrea&#8217;s robot. So what did it do?</p>
<p><strong>Bruce Sterling: It&#8217;s basically a sister device to that little helicopter that those Parrot AR Drone guys were doing. It&#8217;s a little autonomous robot and it runs around with a webcam on it. You can place video into the acquisition stream coming off the robot. You can play a game, and blow away imaginary monsters or whatever.</strong></p>
<p><strong>Tish Shute: </strong> It&#8217;s interesting &#8211; did you notice that Will Wright and Patrick O&#8217;Shaughnessey, <a href="http://patchedreality.com/" target="_blank">Patched Reality</a>, spent some time hacking the Parrot AR Drone in the hallway? Did you come across them?</p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/willpatrickparrot2post1.jpg"><img class="alignnone size-medium wp-image-5531" title="willpatrickparrot2post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/willpatrickparrot2post1-300x199.jpg" alt="willpatrickparrot2post" width="300" height="199" /></a><br />
</strong></p>
<p><strong>Bruce Sterling: Rudy was there with them. You know, I didn&#8217;t want to watch Will Wright hack a robot.</strong></p>
<p>[laughter]</p>
<p><strong>Tish Shute: </strong> They seemed to be having fun, even though, as it turned out, the power supply was dead.</p>
<p><strong>Bruce Sterling: I&#8217;m sure Will enjoyed that. As a game designer, you want to go out and get your hands dirty with a plastic gizmo.</strong></p>
<p>[laughter]</p>
<p><strong>My Swiss Army knife can&#8217;t get through airport security, so I really don&#8217;t want to strip anything down. But yeah, what else did I see that was of particular interest? I was pretty happy about the Korean guys, because they are a difficult group to get close to.</strong></p>
<p><em><strong><br />
</strong></em></p>
<h3><em><strong>AR companies are like mini-global micro-startups. They&#8217;re <a href="http://www.wired.com/beyond_the_beyond/2010/06/augmented-reality-tonchidots-evolving-air-tags/" target="_blank">&#8220;glocal&#8221;</a>.</strong></em></h3>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Zenitumare2010.jpg"><img class="alignnone size-medium wp-image-5532" title="Zenitumare2010" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Zenitumare2010-300x225.jpg" alt="Zenitumare2010" width="300" height="225" /></a></strong><em> </em></p>
<p><em>&#8220;Korean elegance at the Zenitum booth&#8221; &#8211; <a href="http://www.flickr.com/photos/brucesterling/4673249423/in/photostream/" target="_blank">from brucesflickr</a></em></p>
<p><strong>Tish Shute: </strong><a href="http://www.zenitum.com/" target="_blank">Zenitum</a>. What did you like from <a href="http://www.zenitum.com/" target="_blank">Zenitum</a>? They were one of our sponsors, along with Qualcomm.</p>
<p><strong>Bruce Sterling: I know that Seoul is like the number one center for augmented reality discussion. But it&#8217;s difficult to get behind the scenes as a journalist there and track what&#8217;s going on in Korea. I&#8217;m fine with Italian &#8220;realtà aumentata.&#8221; And I feel like I&#8217;ve got a handle on French &#8220;réalité augmentée.&#8221; The Germans were not hard to find, and the Dutch all speak English! But the Koreans, and whoever the hell it is in Kuala Lumpur&#8230; I have no idea what&#8217;s going on in Kuala Lumpur, and only the vaguest idea of what&#8217;s transpiring in Singapore! But I know that people there are paying a coherent interest.</strong></p>
<p><strong>So the Koreans show up, and they had some relatively predictable anime-style 3D avatar conversion stuff. But they had a really nice display space.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/zenitumare20102.jpg"><img class="alignnone size-medium wp-image-5533" title="zenitumare20102" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/zenitumare20102-300x225.jpg" alt="zenitumare20102" width="300" height="225" /></a><br />
</strong></p>
<p><em>&#8220;Anime figures become three-d smartphone animated avatars,&#8221; <a href="http://www.flickr.com/photos/brucesterling/4673872354/in/photostream/" target="_blank">from brucesflickr</a><br />
</em></p>
<p><strong>Tish Shute:</strong> Ah, so Zenitum created a hot spot at the exhibition?</p>
<p><strong>Bruce Sterling: Yeah. The Koreans had IKEA furniture and some nifty little woven baskets. They&#8217;d really classed up their presentation. Most Koreans in tech tend to be kind of muscular. The Koreans are not known for their refined presentations. On the contrary, they tend to undersell everybody else. But I don&#8217;t know, maybe they&#8217;ve been hanging out with Samsung and upgrading their design chops. </strong>[laughs]</p>
<p><strong>Tish Shute:</strong> Did you take some photos you could send me?</p>
<p><strong>Bruce Sterling: I took a few, but I don&#8217;t consider myself a photographer. They&#8217;re all up on my Flickr set. It was interesting to see so many people from so many different nations in such a collegial atmosphere.</strong></p>
<p><strong>Tish Shute:</strong> Yes &#8211; there were many different countries represented at are2010.</p>
<p><strong>Bruce Sterling: It&#8217;s the beginning&#8230; and so global at such a young stage.</strong></p>
<p><strong>Tish Shute:</strong> Yes. As you said, it was 9 AM, so everyone was actually super excited to be gathered from across the globe to start a new day together. As you mentioned, there was a very warm, affirmative vibe &#8211; everyone sharing a passion.</p>
<p><strong>Bruce Sterling: They have an online commonality. They seem to be aware of one another&#8217;s work through the Internet.</strong></p>
<p><strong>Clearly they had all heard about one another. That&#8217;s a departure from earlier models of tech startup, where you usually have like three hippies in a local garage. Now you&#8217;ve got German-American-Korean outfits like <a href="http://www.metaio.com/" target="_blank">Metaio</a>, and <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a> has a Russian affiliate. They&#8217;re inherently multinational, both inside the company and out.</strong></p>
<p><strong>Tish Shute:</strong> It was the multinational garage, wasn&#8217;t it?</p>
<p><strong>Bruce Sterling: Yeah. AR companies are like mini-global micro-startups. They&#8217;re <a href="http://www.wired.com/beyond_the_beyond/2010/06/augmented-reality-tonchidots-evolving-air-tags/" target="_blank">&#8220;glocal.&#8221;</a> There&#8217;s something quite new to me about that. I don&#8217;t find it shocking, because in Europe today it&#8217;s common to find startup teams who are multinational. But to see such intense globalism at such an early stage of an industry is really different.</strong></p>
<p><strong>Tish Shute: </strong> Yes, it made for a fun atmosphere. It was wonderful running into Iguchi Takahito, <a href="http://www.tonchidot.com/" target="_blank">Tonchidot</a>. You two have a great rapport despite the language barrier.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Iguchiandbrucepost.jpg"><img class="alignnone size-medium wp-image-5534" title="Iguchiandbrucepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Iguchiandbrucepost-300x199.jpg" alt="Iguchiandbrucepost" width="300" height="199" /></a></p>
<p><strong>Bruce Sterling: Yeah. That guy from Tonchidot, he&#8217;s very charismatic. He&#8217;s punchy. That&#8217;s reflected in the very strong graphic design from his company.</strong></p>
<p><strong>Tish Shute:</strong> Using minimal English to make the case for Sekai No Camera at the Auggies, Iguchi Takahito still got through to the audience.</p>
<p><strong>Bruce Sterling:Â  Well, his visuals were good.</strong></p>
<p><strong><br />
</strong></p>
<h3><em><strong>What AR means for artistic practice&#8230;</strong></em></h3>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/cloudd.jpg"><img class="alignnone size-medium wp-image-5535" title="cloudd" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/cloudd-300x232.jpg" alt="cloudd" width="300" height="232" /></a><br />
</strong><em>Picture of</em> <a href="http://www.monkeysandrobots.com/" target="_blank">Eric Gradman&#8217;s</a> <a href="http://www.monkeysandrobots.com/cloudmirror" target="_blank">Cloud Mirror</a>, <em>from James Alliban&#8217;s post <a href="http://jamesalliban.wordpress.com/2010/06/10/are2010/" target="_blank">ARE2010 &#8211; Augmented Reality utopia in Silicon Valley</a> &#8211; see it for more on the are2010 ARt Gala</em><br />
<strong> </strong></p>
<p><strong>Tish Shute:</strong> So before I move on to wider themes, I&#8217;m going to wrap up on some of the different aspects of the conference. I was chairing the technology track, but you were more free-roaming. Was there anything in the hallway discussions and the presentation rooms that struck you?</p>
<p><strong>Bruce Sterling: Well, I did get collared by artists. They really wanted to talk to me. We got into some serious discussions on what AR means for artistic practice. How can you do this and reach that? How can one sharpen up one&#8217;s presentation? I mean, they really wanted some art criticism.</strong></p>
<p><strong>Tish Shute:</strong> That&#8217;s very interesting. Did the conversations bring up anything you hadn&#8217;t been thinking about already?</p>
<p><strong>Bruce Sterling: I&#8217;ve seen augmented reality installations before, and I certainly know many electronic artists. But I don&#8217;t know. People in the AR art space are looking for guidance and trying to find fellow spirits. In their own way, they have the same pioneer spirit as the business people.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/helenare2010post.jpg"><img class="alignnone size-medium wp-image-5541" title="helenare2010post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/helenare2010post-300x199.jpg" alt="helenare2010post" width="300" height="199" /></a><br />
</strong></p>
<p><em><a href="http://www.aliceglass.com/" target="_blank">Helen Papagiannis</a> shows Iguchi Takahito, Tonchidot, her AR Wonder Turner, an exquisite-corpse-inspired installation</em></p>
<p><strong>Tish Shute:</strong> Yeah, it&#8217;s interesting. We wanted the art gala to be even bigger, but it turns out that putting up art in a conference space is fabulously expensive, because it all has to be installed and hung&#8230;</p>
<p><strong>Bruce Sterling: I&#8217;m keenly aware of that. At Share Festival in Turin we bring in six installations, and it&#8217;s very heavy work. It really takes a lot of logistics. It was like a Battle of the Bands. It&#8217;s like doing a rock concert.</strong></p>
<p><strong>Tish Shute:</strong> One of the installations I was really sad not to have there was <a href="http://heaid.com/blog/" target="_blank">Uber geeks&#8217; &#8220;Steve&#8221; H.E.AI.D installation</a> that Brady Forrest &amp; Co. took to Burning Man.</p>
<p>So I was very happy that we actually did get the number of artists we did.</p>
<p><strong>Bruce Sterling: Well, there aren&#8217;t a million AR artists in the world, so it&#8217;s hard to judge. I didn&#8217;t see many business people rushing up to have me critique their business plans.</strong></p>
<p><strong>Tish Shute: </strong>[laughs] They were all in the meeting rooms.</p>
<p><strong>Bruce Sterling: Maybe it&#8217;s for the best.</strong></p>
<p><strong><br />
</strong></p>
<h3><strong><em>VC and AR Startup Action</em></strong></h3>
<p><strong><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4671266724_7b7f1361d2.jpg"><img class="alignnone size-medium wp-image-5549" title="4671266724_7b7f1361d2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4671266724_7b7f1361d2-300x199.jpg" alt="4671266724_7b7f1361d2" width="300" height="199" /></a><br />
</em></strong></p>
<p><a href="http://www.flickr.com/photos/chcameron/4671266724/in/photostream/" target="_blank"><em>The Zenitum Booth, are2010, photo from Chris Cameron&#8217;s Flickr stream</em></a></p>
<p><strong>Tish Shute: </strong> Do you know why your talk started a few moments late? Because we had 50 people arrive from the Silicon Valley neighborhood, I guess!</p>
<p><strong>Bruce Sterling: Did they not preregister?</strong></p>
<p><strong>Tish Shute: </strong> No. They all stood in line for same-day registration!</p>
<p><strong>Bruce Sterling: It&#8217;ll be interesting to see what transpires there, if there is a little wave of startup action. God knows they need some place to put their money, because the VC scene in the US is pretty much moribund.</strong></p>
<p><strong>Tish Shute:</strong> Ogmento is the first US AR games startup to get VC, I think. I think there was some VC action at are2010 for sure. And Qualcomm obviously seems to have commercialization plans for their AR technology, and to be scouting talent and ways to deliver new AR experiences.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/JayWrighte23games.jpg"><img class="alignnone size-medium wp-image-5542" title="JayWrighte23games" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/JayWrighte23games-300x199.jpg" alt="JayWrighte23games" width="300" height="199" /></a></p>
<p><span style="color: #1f497d;"><em>Jay Wright, Qualcomm, presents Joe Dunn, e23 Games, winner of the are2010 StartUp Launch Pad, with a check</em><br />
</span></p>
<p><strong>Bruce Sterling: Some people don&#8217;t need venture capital. I mean, Google Goggles isn&#8217;t going to be hurting for VC money, obviously [see Chris Cameron&#8217;s RWW post, <a href="http://www.readwriteweb.com/archives/google_goggles_coming_soon_to_iphone.php" target="_blank">Google Goggles Coming Soon to iPhone</a>]. AR may come up through other methods, like people allying themselves with Hollywood, or peeling off of advertising companies. There&#8217;s a lot of outfits who might conceivably want in-house AR skills. Then when people set up a specialty AR shop, they peel off the list of clients. I don&#8217;t know. Those old days of Silicon Valley venture capital seem like a lost world.</strong></p>
<p><strong>Tish Shute:</strong> Yes. I, again, didn&#8217;t see anything really of the business tracks and production tracks. Did you get back and forth between the tracks?</p>
<p><strong>Bruce Sterling: I went to the Hollywood tracks. I mean, to the extent that I could.</strong></p>
<p><strong><br />
</strong></p>
<h3><strong><em>Is Hollywood stirring? Who&#8217;s going to have the first breakout AR property?</em></strong></h3>
<p><strong><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Screen-shot-2010-06-16-at-5.05.55-PM.png"><img class="alignnone size-medium wp-image-5562" title="Screen shot 2010-06-16 at 5.05.55 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Screen-shot-2010-06-16-at-5.05.55-PM-300x162.png" alt="Screen shot 2010-06-16 at 5.05.55 PM" width="300" height="162" /></a><br />
</em></strong></p>
<p><strong>Tish Shute: </strong> So what did you see from&#8230; Is Hollywood stirring? Is it waking up? I mean, I know <a href="http://www.imdb.com/name/nm0218033/" target="_blank">Kent Demaine</a>, <a href="http://www.ooo-ii.com/" target="_blank">Oooii</a>, and Brad Foxhoven, <a href="http://ogmento.com/" target="_blank">Ogmento</a>, spoke about the Hollywood AR scene.</p>
<p><strong>Bruce Sterling: There were guys there from LA who were sort of saying, look&#8230; they are aware of us, but they just want AR to promote their properties to some particular niche. They realize that AR is potentially a mass medium and that you could do some real AR entertainment. So they were batting around some ideas as to where that might happen. Like, could it come out of a console gaming scene? Who&#8217;s going to have the first breakout AR property? A popular hit AR property, as opposed to, like, a neat way to sell shoes, or whatever. Really, anybody&#8217;s guess is as good as theirs or mine. But at least they were actively guessing.</strong></p>
<p><strong>Tish Shute:</strong> I know the breaking-the-fourth-wall discussion has been going on for a while, and now the question is whether AR is going to take down the fourth wall and bring interactive storytelling into the mainstream. Did you hear any of that?</p>
<p><strong>Bruce Sterling: Well, I always shy away from discussions of that kind because I don&#8217;t think there&#8217;s any &#8220;final thing.&#8221; Practically everything that AR is involved in right now is a transitional technology. Also, because I am a storyteller, I get alarmed whenever people in technology start saying, &#8220;Oh well, it&#8217;s all about telling stories.&#8221; Because obviously it isn&#8217;t.</strong></p>
<p><strong>People can tell stories perfectly well orally, and absolutely nobody does that. AR is not at all about telling stories. It&#8217;s about a great many other things, such as user bases, niche audiences, media saturation, urban informatics, convergence culture, and the language of digital media. I could list these factors until the world looks level. It&#8217;s really becoming pretty chaotic. As I was saying in my speech, AR companies are media startups who almost never use the old-fashioned word &#8220;media.&#8221;</strong></p>
<p><strong>Tish Shute: </strong>Oh, that&#8217;s interesting. Yes. So why do you think it has happened that way?</p>
<p><strong>Bruce Sterling: Well, it&#8217;s because they are trying to do a different thing than media does. I mean, they are trying to &#8220;augment reality.&#8221; They don&#8217;t want you to know that you are using a medium. They don&#8217;t want you to realize that you&#8217;re watching computer animation overlaid on some video acquisition stream. That would defeat the whole point of AR. It&#8217;s entirely different from an analog medium like television, where you turn on the television and there&#8217;s a constant stream of station-identification alerts. That&#8217;s like: &#8220;Don&#8217;t touch that dial! You&#8217;re on channel 13! Stay with us!&#8221; Then it&#8217;s like, &#8220;And now a few words from our friendly sponsors!&#8221; That medium was engineered to keep your eyeballs locked to a single stream that they&#8217;re feeding you.</strong></p>
<p><strong>In AR, it&#8217;s much more participative, more geolocative. I&#8217;m not particularly interested in station-identification branding from my AR provider. What I really want to see is the interactivity of the augments they&#8217;re bringing to me. It&#8217;s like Flickr, the photo-sharing site. You don&#8217;t have any TV-style splash page for Flickr. &#8220;Hi! We&#8217;re Flickr! Flickr, bringing your photos to you!&#8221; No, Flickr is all about &#8220;you, you, you&#8221; &#8211; your photos, your tags, your friends, your activity around you. It&#8217;s immediately trying to be very participative.</strong></p>
<p><strong>Tish Shute:</strong> Will Wright got to that point, didn&#8217;t he? He was trying to move us toward an idea of blended reality &#8211; that the game is about the world, not about the dragons or the overlays per se.</p>
<p><strong>Bruce Sterling: Right. I think that&#8217;s true. But see, the world isn&#8217;t a medium. A medium is something like this interview, where I&#8217;m connecting to you and there&#8217;s a video Skype channel between us. Whereas AR is more about spatial 3-D, about 3-dimensional impositions. Pieces of media &#8211; sound, vision, information visualization, tags, floating tags, air tags, icons, arrows, warning signs, warning sounds, tactility, whatever &#8211; being brought into the environment around us.</strong></p>
<p><strong>That&#8217;s why it&#8217;s properly called &#8220;augmented reality&#8221; instead of just augmented media. If you call your work &#8220;augmented media,&#8221; you&#8217;re really in trouble. Because if it&#8217;s all about augmenting somebody else&#8217;s media, why doesn&#8217;t that medium just buy you, and augment their own selves? If you think that way, instead of augmenting the world, you&#8217;ll just be a modest little plug-in for old-school media.</strong></p>
<p><strong><br />
</strong></p>
<h3><strong><em>The World as the Platform</em></strong></h3>
<p><strong><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4671271578_50ef3396f5.jpg"><img class="alignnone size-medium wp-image-5548" title="4671271578_50ef3396f5" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4671271578_50ef3396f5-300x199.jpg" alt="4671271578_50ef3396f5" width="300" height="199" /></a><br />
</em></strong></p>
<p><strong> </strong></p>
<p><em>Blaise Aguera y Arcas, Microsoft, Santa Clara, are2010, <a href="http://www.flickr.com/photos/chcameron/4671271578/in/photostream/" target="_blank">photo from Chris Cameron&#8217;s Flickr stream</a></em></p>
<p><strong>Tish Shute: </strong>Yes, which is why Blaise so generously gave the technical underpinning for augmenting reality in his tech talk &#8211; about the trellis and the grapes &#8211; he really explained how the world can become a platform for augmented reality.</p>
<p><strong>Bruce Sterling: I wish I could have seen that. I did not see Blaise’s speech.</strong></p>
<p><strong>Tish Shute:</strong> We’re going to put the videos up in better quality. People in the front row have <a href="http://gigantico.squarespace.com/336554365346/2010/6/6/mobile-ar-ooh-and-the-mirror-world.html">put it up on the web already</a>. He really went into some of the challenges of mapping for augmented reality.</p>
<p><strong>Bruce Sterling: His visual-mapping technique is important. Registration is super important for AR.</strong></p>
<p><strong>Tish Shute: </strong>I think it was a really generous talk actually because he went step by step on how we will do this.</p>
<p><strong>Bruce Sterling: I rather imagine that Microsoft has patented those steps.</strong></p>
<p><strong>Tish Shute:</strong> Oh, yes, I guess so!</p>
<p><strong>Bruce Sterling: I could be wrong. Maybe they’ll open-source it. You never know.</strong></p>
<p><strong>Tish Shute: </strong>You never know. Because the world as a platform isn&#8217;t something one company can own, or exploit on its own.</p>
<p><strong>Bruce Sterling: I expect there to be a thorny path, but sometimes I’m surprised. Sometimes people really do try to fertilize the tech field in the hope of getting a good corn crop before they start fighting.</strong></p>
<p><strong>Tish Shute: </strong>Well, I keep hearing that we may even see the unlikely marriage of Apple and Microsoft &#8211; maybe wishful thinking, but there are motivations beyond AR for this unlikely match, and certainly between them these titans have what it takes to realize the grand visions of AR? [laughs] But who knows&#8230;</p>
<p><strong>Bruce Sterling: Well, yeah, it depends on where the thing catches fire.</strong></p>
<p><strong>Tish Shute:</strong> Yes. You mean whether AR catches fire in the form of AR and mapping&#8230;</p>
<p><strong>Bruce Sterling: It’s hard to say, but I’m convinced now that there’s more going on than I once thought. I thought that Bruno Uzzan made a very good speech for his company when he talked about how he worked on AR for eleven years. Eleven years is no flash in the pan. He has his long list of clients and successful applications. I thought he was right in his impatience with the press for not catching on. It’s gone on for quite a while. The mere fact that you’re not aware of it doesn’t mean it doesn’t exist.</strong></p>
<p><strong><br />
</strong></p>
<h3><strong><em>The Elusive AR Eyewear</em></strong></h3>
<p><strong><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Origoggles.jpg"><img class="alignnone size-medium wp-image-5550" title="Origoggles" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Origoggles-300x199.jpg" alt="Origoggles" width="300" height="199" /></a><br />
</em></strong></p>
<p><em>My <a href="http://augmentedrealityevent.com/" target="_blank">are2010</a> co-chair, Ori Inbar, CEO and co-founder of the hottest new AR game development start-up, Ogmento, donning his goggles to open <a href="http://augmentedrealityevent.com/" target="_blank">are2010</a> &#8211; <a href="http://www.flickr.com/photos/chcameron/4671264048/sizes/m/in/photostream/" target="_blank">picture from Chris Cameron&#8217;s Flickr stream</a></em></p>
<p><strong>Tish Shute:</strong> Yes. So, the other theme you brought up in your opening keynote &#8211; and I would be interested to know if anything you saw at are2010 changed your view &#8211; is the elusive AR eyewear. If we actually got AR goggles that worked, they would bring AR&#8217;s gothic sister, VR, back from the grave, right? [laughs]</p>
<p><strong>Bruce Sterling: Right.</strong></p>
<p><strong>Tish Shute: </strong> It took quite a lot of work, but we pulled together a six-company HMD panel, right?</p>
<p><strong>Bruce Sterling: Yeah. I was impressed to see so many of them there. And I was chagrined to see how prototype-like all their gadgets were. But that doesn’t surprise me, because if any of those head-mounts were remotely working, they would be hyped out the wazoo. Everybody’s been waiting for them and hoping for the best. They’re obviously not ready for prime time. [laughs] Maybe in certain limited applications. Like maybe a diving mask. [laughs]</strong></p>
<p><strong>Tish Shute: </strong>No, I think what was nice, though, is that they got inspired and they all got together on the last day. I saw them having a meeting about standards. They got inspired to actually work together.</p>
<p><strong>Bruce Sterling: Yeah, well, unless they’re going to invent mechanical eyeballs that those machines can fit onto, it’s going to be tough. OK, I’m a skeptic, but I’m prepared to be surprised. I’m also a skeptic in Artificial Intelligence, but as soon as they bring me an AI that can write a decent novel, I’m going to get it and review that book.</strong> [laughs]</p>
<p><strong>Tish Shute:</strong> It’s interesting. Re AI, I’m totally in agreement with you. In terms of the way computers turned out, it wasn’t AI per se that they turned out to be good for, not in the way everyone had dreamed of it; rather, it was the harvesting of human intelligence that turned out to be the big thing. But what is interesting is that despite all of that, AI &#8211; or machine learning, as it is now called &#8211; permeates our whole society now, from the stock market to how many businesses make many of their decisions.</p>
<p><strong>Bruce Sterling: Well, there’s a lot of so-called collective intelligence. But Marvin Minsky-style hard AI, no way. Alan Turing-style AI, forget about that.</strong></p>
<p><strong>Tish Shute:</strong> Yeah. So, that’s an interesting comparison with the HMDs.</p>
<p><strong>Bruce Sterling: People stretch the definitions. It’s like, well, my car engine is Artificial Intelligence. Yeah, so is your wall transistor. No, I don’t really think so.</strong></p>
<p><strong>And AR is a similarly big tent. I mean, Uzzan had to admit that he had denied that AR was AR, unless it was using his favorite technology. And he felt embarrassed to be rubbing shoulders with people who put AR into cell phones. And I can understand his feeling there, because, gee whiz, that’s certainly not what AR pioneers had in mind. But he had to admit he’d become more ecumenical about it. Obviously, they’re there and doing business like gangbusters. You can’t very well ignore success, right?</strong></p>
<p><strong>I had a similar feeling about the goggles. Obviously, the goggles would be great, should they work. But if they did work, I rather think virtual reality would come very strongly to the fore. You’d see people doing all kinds of elaborate immersive-style stuff. A truly immersive technology doesn&#8217;t need to &#8220;augment&#8221; much of anything.</strong></p>
<p><strong>Tish Shute:</strong> Yeah, youâ€™re right.</p>
<h3><strong><em>Social Augmented Experiences</em></strong></h3>
<p><strong>Bruce Sterling: I think many of the most interesting AR aspects are not personal in the way goggles are. They’re not about guys walking around with personal tech. They’re about big, communal, social-media experiences, like stage shows, and urban informatics &#8211; things where large numbers of people can interact with the same augmented reality. The projection mapping, which I go on and on about. Augmented public spectacles.</strong></p>
<p><strong>Tish Shute: </strong>Yeah, projection&#8217;s our best example of a social augmented experience right now, because we don&#8217;t yet have an easy way to do networked social augmented experiences &#8211; but that is of course the thrust of my interest in <a href="http://arwave.org/" target="_blank">ARWave</a> [see the slides for my presentation, <a href="http://www.slideshare.net/TishShute/ar-wave-a-proof-of-concept-federation-game-dynamics-semantic-search-mobile-social-communications" target="_blank">AR Wave: Federation, Game Dynamics, Semantic Search, Mobile Social Communications</a>, here].</p>
<p><a href="http://www.slideshare.net/TishShute/ar-wave-a-proof-of-concept-federation-game-dynamics-semantic-search-mobile-social-communications" target="_blank"><img class="alignnone size-medium wp-image-5563" title="Screen shot 2010-06-16 at 5.12.05 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Screen-shot-2010-06-16-at-5.12.05-PM-300x225.png" alt="Screen shot 2010-06-16 at 5.12.05 PM" width="300" height="225" /></a></p>
<p><strong>Bruce Sterling: I think of Edison’s early days, when he wanted to sell movies to people for a nickel a clip. You had to bend over and put your eyes on this visor and turn this crank. That coin-op device was easy for Edison to monetize, as opposed to getting a bunch of people to sit in theater seats. But people laugh at movies when they’re together in the seats. Cinema is a more social, involving experience in a crowd situation.</strong></p>
<p><strong>Tish Shute: </strong>But it started with them, didn’t it &#8211; Hollywood, the movie biz? Basically Nickelodeons, right?</p>
<p><strong>Bruce Sterling: That’s right. They were Nickelodeons. They were a lot like the goggles because they isolated the user.</strong></p>
<p><strong>Tish Shute:</strong> Yeah, that’s a really important point &#8211; the goggles are not Nirvana, because of this question of whether they actually detract from the social augmented experience and blended realities by drawing us into VR experiences.</p>
<p><strong>Bruce Sterling: I’m tempted to claim that they’re more a VR technology than an AR technology.</strong></p>
<p><strong>Tish Shute:</strong> That’s a very interesting point because&#8230;</p>
<p>[thunder]</p>
<p><strong>Tish Shute:</strong> Wow! What was that?</p>
<p><strong>Bruce Sterling: Thunderstorm.</strong></p>
<p><strong>Tish Shute:</strong> Oh, my God, how very Gothic! [laughs]</p>
<p><strong>Bruce Sterling:</strong> <strong>It can get pretty loud up here in the mountains.</strong></p>
<p><strong>Tish Shute:</strong> Oh, you live in the mountains, better still!</p>
<p><strong>Bruce Sterling: Torino is in the foothills. This is Piemonte. So the Apennines are over there. The Alps are over here. We do get some rather spectacularly unstable weather.</strong></p>
<p><strong>Tish Shute:</strong> It sounded like a bomb to my NYC ears. [laughs]</p>
<p><strong>Bruce Sterling: Yeah, it didn’t hit the building, but it was maybe half a kilometer away. I saw the flash.</strong></p>
<p><strong>Tish Shute: </strong>Oh, you did? Well, I hope you don’t lose your power midstream here. I was really happy to hear of that connection between Rudy Rucker and Layar [Rudy was touched when Maarten Lens-FitzGerald from <a href="http://www.layar.com/" target="_blank">Layar</a> said that he met the Layar co-founder at a Rudy Rucker lecture].</p>
<p><strong>Bruce Sterling: That was very fun, yes.</strong></p>
<p><strong>Tish Shute: </strong>Wasn’t that wonderful? What was that experience like, going around the conference with Rudy?</p>
<p><strong>Bruce Sterling: Well, you know, Rudy’s very into graphics. He’s a mathematician, so he understands the underpinnings of this stuff. But he’s a skeptic. He thinks they’re kid toys. He’s not a gamer. He’s a good old-fashioned computer-science hacker. So he wanted to tell me all about his new eighth-order, fifth-dimensional fractals. He showed me a great many of them. They’re psychedelic. Rudy’s fractals are considerably trippier than most apps that help you find a barber or a train station. [laughs] Rudy really is a visionary. He’s into some very weird stuff.</strong></p>
<h3><strong><em>Gamer Guys at are2010</em></strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Brad-booth.jpg"><img class="alignnone size-medium wp-image-5552" title="Brad-booth" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/Brad-booth-300x211.jpg" alt="Brad-booth" width="300" height="211" /></a></p>
<p><em>Brad Foxhoven, </em><span><em>Chief Marketing Officer, Co-Founder, <a href="http://ogmento.com/" target="_blank">Ogmento </a>at are2010</em><br />
</span></p>
<p><strong>Tish Shute:</strong> At are2010 there was a lot of discussion about how game dynamics and AR are going to intersect, right? Anything that you saw of interest there?</p>
<p><strong>Bruce Sterling: Well, obviously, there are gamer guys there. Ori&#8217;s a gamer. The gamer guys are getting some money. The big buzz right now in gaming is, of course, social gaming. Farmville has kicked everybody’s ass because it’s not even a game and yet it has more users than the entire gaming industry.</strong></p>
<p><strong>Tish Shute: </strong>I know, right! [laughs]</p>
<p><strong>Bruce Sterling: Obviously that’s kind of humiliating. For a long time, I&#8217;ve seen people trying to do giant multiuser games on cell phones. It’s difficult to do because the interface on cell phones is crap, right? People aren’t going to run around responding to SMSs.</strong></p>
<p><strong>I can imagine people running around with little Wii-style bats that have audio and visuals on them. It makes a very large native AR game seem more plausible.</strong></p>
<p><strong>Tish Shute:</strong> Yes, that would be cool!</p>
<p><strong>Bruce Sterling: Again, it&#8217;s not very gamelike to use those little fiducial markers.</strong></p>
<p><strong>Tish Shute:</strong> No.</p>
<p><strong>Bruce Sterling:</strong> <strong>Moving little cardboard chips around, like with card games&#8230; It would be pretty easy to set up a little AR chess game. Star Trek-style hologram chess pieces, and so forth. But it’s just cumbersome.</strong></p>
<p><strong>Tish Shute:</strong> And also, from what we’ve seen from things like Foursquare, proximity-based social gaming doesn&#8217;t have to offer very much [a crown badge, a mayorship] to get some mind share&#8230; the social is the primary game dynamic&#8230;</p>
<p><strong>Bruce Sterling: I’ve seen a lot of different philosophies of gaming over the years. Who’s to say that Second Life doesn’t have the best idea? They built a little scene and then slammed their gate shut behind them. But at least they’ve got a really nicely-paying little cult stuck in there. It’s different. And it’s manageable and it’s really theirs, theirs, theirs. They don’t have to call in outside experts to try and run the monster. They haven’t blown it up to the scale of Yahoo!, where they’ve lost control of the enterprise and gone into a tailspin of management overhead. Second Life has a very intense, almost cultish atmosphere among the player-slash-developers.</strong></p>
<p><strong>Tish Shute:</strong> One thing that helped them was the very thing they were always criticized for &#8211; that the barrier to entry was so high. But once they got people, they never left, right?</p>
<p><strong>Bruce Sterling: That’s not a bug, that’s a feature.</strong></p>
<p><strong>Tish Shute:</strong> One of the best features!</p>
<p><strong>Bruce Sterling: Yeah, it’s like being in Mensa. Why don’t you lower your barriers to entry and get in some interesting stupid people?</strong></p>
<p><strong>Tish Shute: </strong>[laughs]</p>
<p><strong>Bruce Sterling: In Mensa, we’d rather sit here making puns about neutrinos and fourth-order quadratic equations. [laughs] OK, that’s a business model, if that’s what you want.</strong></p>
<h3><strong><em>The Man With the X-Ray Eyes!</em></strong></h3>
<p><strong><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4671271624_d63b9bff7a.jpg"><img class="alignnone size-medium wp-image-5553" title="4671271624_d63b9bff7a" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4671271624_d63b9bff7a-300x199.jpg" alt="4671271624_d63b9bff7a" width="300" height="199" /></a><br />
</em></strong></p>
<p><em>Jesse Schell during his keynote, &#8220;Seeing,&#8221; at are2010, <a href="http://www.flickr.com/photos/chcameron/4671271624/in/photostream/" target="_blank">picture from Chris Cameron&#8217;s Flickr stream</a></em></p>
<p><strong>Tish Shute: </strong>OK! Now to unpack the man with the x-ray eyes idea, Jesse Schell&#8217;s keynote theme. This is a root metaphor for AR &#8211; making the invisible visible, seeing through walls. To me, I think you kind of wrote the book on this, because all my ideas on what radical transparency might be come from you &#8211; your idea of Amazon.org is key to how I understand this.</p>
<p><strong>Bruce Sterling: Oh, really? That’s funny. I was touched that Jesse brought up that famous Corman film, because I was a judge at a fantasy film conference in Trieste earlier this year. And Roger Corman was there. He was the guest of honor. &#8220;X: the Man with the X-ray Eyes&#8221; was one of the films shown during the conference, and I saw it. I even had dinner with Roger Corman. I had never met him before, so that was quite amusing. The difficulty with a film of that kind is what we science fiction writers call a &#8220;House of Cards Ending.&#8221; In that story structure, you ramp the thing up until the protagonist sees God, and then he has to be destroyed by the falling pillars of the temple. That’s a classic science fiction structure: like Frankenstein. For the sake of the drama, Corman evades the issue of what’s really going on. For instance, let’s just suppose &#8220;the Man with the X-ray Eyes&#8221; is not in fact a psychopath. Let’s say he gets a grant from the Department of Health and Human Services, and he acts like a real scientist, not a stock B-movie &#8220;mad scientist.&#8221; So he has, like, backup guys, and some placebos, and a large group of people to test it on, trusted colleagues, and so forth. You wouldn’t get any of that movie&#8217;s wild activity out of that. What you would get is like a 5% improvement to people’s vision.</strong></p>
<p><strong>Then, in a year, there would be a 10% improvement in people’s vision. There would be a classic industrial story. A rising star, you know, a cash cow. Real tech isn&#8217;t done by a single guy as a divine curse. It&#8217;s created by classic tech startup culture. So a runaway technology really behaves in the way that personal computers do.</strong></p>
<p><strong>Tish Shute:</strong> The things that get me all Utopian and happy about this are ideas like those you first outlined with the notion of Amazon.org.</p>
<p><strong>Bruce Sterling: It would be easy to do an entirely different kind of film than &#8220;Man with the X-ray Eyes.&#8221; Something much less B movie, much less pat. I mean, at the end of the film, he destroys his own hardware and blinds himself. Why? For what rational reason would he do that? Why doesn’t anybody else know the big secret of what he’s doing? Why aren’t there Koreans doing it? Why aren’t there Austrians doing it? Why aren’t there Italians doing it? Why? AR doesn’t behave like that. It’s not one lone guy with magic eye drops. It’s entire teams of people that have been working on stuff for 17 years. They all approach it in different ways.</strong></p>
<p><strong>Now, they are going to get scandals in AR. I can guarantee you that. They are going to get into hot water eventually. At least some people will surely come out and accuse them of being Roger Corman B movie monsters. But unless they accidentally discover atomic fission or destroy the Gulf of Mexico with an oil spill [laughs], I don’t think they’re going to be particularly badly off! The trouble I imagine for AR people is very typical new media trouble. It&#8217;s like movies being accused of corrupting our morals, or comic books being accused of leading to violence, or Google being accused of making us stupid and warping our brains.</strong></p>
<p><strong>I’m not an alarmist in that sense, but at least I’m concerned about real threats. Roger Corman is a B-movie director who’s trying to sew up his loose plot ends by destroying his hero and his hardware. That’s not very plausible. It’s a nice science fiction movie device, but technology isn&#8217;t a movie.</strong></p>
<p><strong>Tish Shute:</strong> Yes. Well, the other thing that you always remind us of with AR is not to be saying it’s going to be this glorious moment when it’s no longer gimmicky, no longer pop culture. You always emphasize that&#8217;s actually part of what’s good about it.</p>
<p><strong>Bruce Sterling: </strong><strong>It’s not an accident that practically everybody in that audience knew about Roger Corman. Nobody looked surprised; not the Austrians, not the Koreans. They were all like: &#8220;Oh, yes! Roger Corman! Love him!&#8221;</strong></p>
<p><strong>Tish Shute:</strong> There were so many Rudy Rucker fans. Were you watching Twitter? People like Eric Gradman were succumbing to fanboyz moments&#8230;</p>
<p><strong>Bruce Sterling: &#8220;Yeah. Rudy Rucker, he’s the best.&#8221;</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4673263249_a73568ebca.jpg"><img class="alignnone size-medium wp-image-5556" title="4673263249_a73568ebca" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/4673263249_a73568ebca-225x300.jpg" alt="4673263249_a73568ebca" width="225" height="300" /></a><br />
</strong></p>
<p><em>&#8220;Rudy Rucker gripping an Augmented Reality shoe&#8221; <a href="http://www.flickr.com/photos/brucesterling/4673263249/in/photostream/" target="_blank">from brucesflickr</a></em></p>
<p><strong>Tish Shute:</strong> [laughs] I noticed you inspired him to join Twitter&#8230;</p>
<p><strong>Bruce Sterling: Well, I’ve got 8,000 followers and, obviously, a lot of them are Rudy’s fans. Of course he’s going to be gang-rushed on Twitter. That’s not really any more surprising than two motorcycle stunt guys at the same attraction. And I’m a big fan of Rudy&#8217;s blog. He’s always got interesting things to say.</strong></p>
<p><strong>Tish Shute:</strong> Yes. AR does seem to bring out some of the coolest, smartest people! This morning I had breakfast with <a href="http://www.linkedin.com/in/joshuakauffman" target="_blank">Joshua Kauffman</a> in Central Park. He is an advisor and entrepreneur working on design in the public sphere. I was feeling rather brain dead and jet lagged. I told Joshua I was wondering how to get the cottonwool out of my brains for this interview, and he suggested the All Souls College one-word question interview! Have you ever heard of that? &#8211; although apparently <a href="http://www.nytimes.com/2010/05/28/world/europe/28oxford.html" target="_blank">they recently scrapped it</a>.</p>
<p><strong>Bruce Sterling: Well, I’ve heard of All Souls College there in Oxford. What was their interview question?</strong></p>
<p><strong>Tish Shute:</strong> They used to use only one word, so they would only give you one word. It’s not a question. Basically, they would throw out the word and then you had to spin off from there.</p>
<p><strong>Bruce Sterling: You’re supposed to free-associate on a single word?</strong></p>
<p><strong>Tish Shute: </strong>I guess so. I hadn’t heard about it, but Joshua suggested it.</p>
<p><strong>Bruce Sterling: Well, it’s possible&#8230;</strong></p>
<p><strong>Tish Shute:</strong> Joshua came up with some good words&#8230;</p>
<p><strong>Bruce Sterling: Right.</strong></p>
<p><strong>Tish Shute:</strong> We were talking about these proximity-based social networks like Foursquare and Gowalla, and how they may influence the emergence of social augmented experiences.</p>
<p>So Joshua&#8217;s suggestion for the first word was &#8220;territorialization&#8221; &#8211; the observation that these new mobile social experiences like Foursquare, rather than breaking down territorialization, which would be a good thing, actually tend to support it&#8230; though perhaps new forms of territorialization?</p>
<p><strong>Bruce Sterling: Yeah, they’re re-intensifying it in a very odd, electronic fashion.</strong></p>
<p><strong>Tish Shute: </strong>Yes.</p>
<p><strong>Bruce Sterling: I have noticed that. It’s not true of stuff like projection mapping or the webcam fiducial display stuff. But with the handheld stuff, and especially the urban informatic stuff, it really can’t help but take on a local flavor. Layar is like &#8220;Augmented Dutch Reality.&#8221;</strong></p>
<p><strong>And TonchiDot really is &#8220;Augmented Japanese Reality.&#8221; It’s hard to imagine a Layar interface going gangbusters in Tokyo. Whereas the TonchiDot interface, which is very clearly influenced by Anime and cartoon graphics&#8230; Maybe it could find some niche of hipsters in Amsterdam hash bars&#8230;</strong></p>
<p><strong><br />
</strong></p>
<h3><strong><em>&#8230;to be continued in Part 2</em></strong></h3>
<p><strong> </strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/06/16/interview-with-bruce-sterling-part-i-at-the-9am-of-the-augmented-reality-industry-are2010/feed/</wfw:commentRss>
		<slash:comments>9</slash:comments>
		</item>
		<item>
		<title>A Geekgasm at 9am in the Augmented Reality Industry &#8211; are2010:  Bruce Sterling&#8217;s Keynote &amp; Will Wright and The Parrot AR Drone</title>
		<link>http://www.ugotrade.com/2010/06/07/a-geekgasm-at-9am-in-the-augmented-reality-industry-are2010-bruce-sterlings-keynote-will-wright-and-the-parrot-ar-drone/</link>
		<comments>http://www.ugotrade.com/2010/06/07/a-geekgasm-at-9am-in-the-augmented-reality-industry-are2010-bruce-sterlings-keynote-will-wright-and-the-parrot-ar-drone/#comments</comments>
		<pubDate>Mon, 07 Jun 2010 07:17:14 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Android]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR art]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[ARWave at are2010]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality games]]></category>
		<category><![CDATA[augmented reality industry]]></category>
		<category><![CDATA[Blaise Aguera y Arcas]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[future of AR]]></category>
		<category><![CDATA[Helen Papagiannis]]></category>
		<category><![CDATA[Iguchi Takahito]]></category>
		<category><![CDATA[Jesse Schell]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Parrot Drone]]></category>
		<category><![CDATA[Rudy Rucker]]></category>
		<category><![CDATA[The Parrot AR Drone]]></category>
		<category><![CDATA[Tonchidot]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5494</guid>
		<description><![CDATA[The Augmented Reality Event: Bruce Sterling&#8217;s keynote from Ori Inbar on Vimeo. Bruce Sterling’s keynote (aka the prophet of the augmented reality industry) set the bar high on the opening day with a keynote address reminding us all of how awesome it is to be the &#8220;world&#8217;s first pure play experience designers,&#8221; but also not [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="225" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=12351044&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /><embed type="application/x-shockwave-flash" width="400" height="225" src="http://vimeo.com/moogaloop.swf?clip_id=12351044&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p><a href="http://vimeo.com/12351044">The Augmented Reality Event: Bruce Sterling&#8217;s keynote</a> from <a href="http://vimeo.com/user1409384">Ori Inbar</a> on <a href="http://vimeo.com">Vimeo</a>.</p>
<p><strong>Bruce Sterling</strong> (aka the prophet of the augmented reality industry) set the bar high on the opening day with a keynote address reminding us all of how awesome it is to be the &#8220;world&#8217;s first pure play experience designers,&#8221; but also not to forget &#8220;whose reality needs to be augmented most,&#8221; and &#8220;to cut ourselves a space of our own.&#8221;</p>
<blockquote><p>&#8220;It’s 9 am in the augmented reality industry&#8230; without vision people perish&#8230; it’s your chance to build a big pie before you start slicing it up&#8230; it’s time for you to get dressed&#8230; good luck to you, I’ll be watching you&#8221;</p></blockquote>
<p>Wow! <a href="http://augmentedrealityevent.com/" target="_blank">Augmented Reality Event</a> was hotness. I know, as one of the chairs, it is not like I am a neutral observer. But, if you were there, and you didn&#8217;t pick up on the &#8220;warm, affirmative feeling,&#8221; as Bruce put it, and the optimism, fun, and cool uber geekery of are2010, I guess I will have to hear from you in the comments &#8216;cos the feedback has been incredibly upbeat, so far.</p>
<p>Anyway, I am still basking in that rare glow of a geekgasm at 9am &#8211; the augmented reality industry embracing lovers whose dreams are still in full flight &#8211; no burst bubbles, betrayals, train wrecks, or tragedies in sight yet? Although Roger Cormanesque horrors were threaded through Jesse &#8220;the man with the x-ray specs&#8221; Schell&#8217;s closing keynote, &#8220;Seeing.&#8221; So keep watching Ori&#8217;s Vimeo stream for Jesse&#8217;s keynote &#8211; it will be up soon and it&#8217;s quite a ride!</p>
<p>I leave for England tomorrow to celebrate my mother&#8217;s eightieth birthday, so this will be a brief post for now. But I am looking forward to posting soon a long interview with Bruce Sterling on his experience of are2010.</p>
<p>Bruce Sterling hung out with Rudy Rucker (see Rudy&#8217;s <a href="http://www.rudyrucker.com/blog/2010/06/06/augmented-reality-painting-twitter/" target="_blank">post and pics on are2010 here</a>), attracting fans wherever they went, visiting the expo, going to sessions, talking generously with all &#8211; their table always crowded. Bruce said he hadn&#8217;t had so much fun in a while. And, apparently, Rudy was touched when Maarten Lens-FitzGerald from <a href="http://www.layar.com/" target="_blank">Layar</a> said that he met the Layar co-founder at a Rudy Rucker lecture. Awesome!</p>
<p>As <a href="http://augmentedrealityevent.com/2010/06/06/are-2010-keynote-by-bruce-sterling-build-a-big-pie/" target="_blank">Ori says here</a>:</p>
<blockquote><p><em>&#8220;are 2010 is over. It was a blast.Â  Many thanks to 400 AR enthusiasts  who joined us for 2 days of AR goodness.</em></p>
<p><em>Special thanks to our 90 speakers from 40 augmented reality  companies, our exhibitors, sponsors, and above all â€“ for Qualcomm.&#8221;</em></p></blockquote>
<p>People&#8217;s slides will be up on the <a href="http://augmentedrealityevent.com/" target="_blank">are2010 web site</a> soon. My presentation, <a href="http://www.slideshare.net/TishShute/ar-wave-a-proof-of-concept-federation-game-dynamics-semantic-search-mobile-social-communications" target="_blank">AR Wave: Federation, Game Dynamics, and Mobile Social Communications</a>, is already up on SlideShare.</p>
<p>You can catch up on some of the highlights of are2010 in <a href="http://gamesalfresco.com/2010/06/06/weekly-linkfest-54/" target="_blank">Rouli&#8217;s linkfest</a>, and check out the pointers he gave for following the event.</p>
<ul>
<li><a href="http://www.twitter.com/chrisgrayson">Chris Grayson&#8217;s twitter account</a></li>
<li>Patched Reality&#8217;s <a href="http://www.twitter.com/patchedreality">Patrick O&#8217;Shaughnessey</a> and the augmented citizen <a href="http://www.twitter.com/dromescu">Dan Romescu</a> are also doing their share on twitter.</li>
<li>Sophia Parafina ( <a href="http://twitter.com/spara" target="_blank">@spara</a> ) live <a href="http://www.locativemedia.org/?p=6">blogs  the event using Google Wave</a>.</li>
</ul>
<p>I will have a mega post up soon. But for now here are a few photos that will give you a taste of some of the <a href="http://www.google.com/search?q=%23are2010&amp;hl=en&amp;client=firefox-a&amp;hs=nVr&amp;rls=org.mozilla:en-US:official&amp;prmd=iu&amp;tbs=mbl:1&amp;tbo=u&amp;ei=VpcMTPG9JcX_lge4qZ3cDg&amp;sa=X&amp;oi=realtime_result_group_more_results_link&amp;ct=title&amp;resnum=9&amp;ved=0CFMQ5QUwCA" target="_blank">#are2010</a> magic.</p>
<p>Bruce gets a t-shirt from <a href="http://www.tonchidot.com/" target="_blank">tonchidot</a> CEO, Iguchi Takahito ( <a href="http://twitter.com/iguchi" target="_blank">@iguchi</a> ), after checking out Helen Papagiannis&#8217; AR art</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/HelenPapagiannis.jpg"><img class="alignnone size-medium wp-image-5498" title="HelenPapagiannis" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/HelenPapagiannis-300x199.jpg" alt="HelenPapagiannis" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/sekaicamera1post.jpg"><img class="alignnone size-medium wp-image-5497" title="sekaicamera1post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/sekaicamera1post-300x199.jpg" alt="sekaicamera1post" width="300" height="199" /></a></p>
<p>If an event with Bruce Sterling, Rudy Rucker, Jesse Schell and Blaise Aguera y Arcas wasn&#8217;t already enough awesomeness, check out how Sophia Parafina, <a href="http://www.locativemedia.org/?p=6" target="_blank">Locatively</a>, and Patrick O&#8217;Shaughnessy of <a href="http://patchedreality.com/" target="_blank">Patched Reality</a> got to hang out with Will Wright &#8211; hacking the <a href="http://ardrone.parrot.com/parrot-ar-drone/en" target="_blank">Parrot AR Drone</a>. And then, when a dead power supply stopped that adventure in its tracks, Will played some of Patrick&#8217;s AR games and gave him feedback. OMG! Will&#8217;s only the most important game designer in the whole world. There are already some videos of Jesse Schell&#8217;s, Blaise Aguera y Arcas&#8217; and Will Wright&#8217;s keynotes shot from the front row, but the hi-res versions are coming soon!</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/willandpatrick3.jpg"><img class="alignnone  size-medium wp-image-5502" title="willandpatrick3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/willandpatrick3-300x199.jpg" alt="willandpatrick3" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/willpatrickparrot2post.jpg"><img class="alignnone size-medium wp-image-5500" title="willpatrickparrot2post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/willpatrickparrot2post-300x199.jpg" alt="willpatrickparrot2post" width="300" height="199" /></a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/willandpatrick4.jpg"><img class="alignnone size-medium wp-image-5503" title="willandpatrick4" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/willandpatrick4-300x199.jpg" alt="willandpatrick4" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/willpatrick5.jpg"><img class="alignnone size-medium wp-image-5504" title="willpatrick5" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/06/willpatrick5-300x199.jpg" alt="willpatrick5" width="300" height="199" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/06/07/a-geekgasm-at-9am-in-the-augmented-reality-industry-are2010-bruce-sterlings-keynote-will-wright-and-the-parrot-ar-drone/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Augmented Reality DevCamp NYC: The Big ARNY &#8211; A Collaborative AR Game Project Modeled After Swarm of Angels</title>
		<link>http://www.ugotrade.com/2009/12/06/augmented-reality-devcamp-nyc-the-big-arny-a-collaborative-ar-game-project-modeled-after-swarm-of-angels/</link>
		<comments>http://www.ugotrade.com/2009/12/06/augmented-reality-devcamp-nyc-the-big-arny-a-collaborative-ar-game-project-modeled-after-swarm-of-angels/#comments</comments>
		<pubDate>Sun, 06 Dec 2009 13:20:50 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR DevCamp]]></category>
		<category><![CDATA[AR DevCampNYC]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[ardevcamp]]></category>
		<category><![CDATA[ARDevCampNYC]]></category>
		<category><![CDATA[aygmented reality]]></category>
		<category><![CDATA[Goblin XNA]]></category>
		<category><![CDATA[Google Wave Protocol for AR]]></category>
		<category><![CDATA[marker based augmented reality]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[Microvision]]></category>
		<category><![CDATA[mobile social augmented reality]]></category>
		<category><![CDATA[mobile social games]]></category>
		<category><![CDATA[open augmented reality]]></category>
		<category><![CDATA[open distributed augmented reality]]></category>
		<category><![CDATA[semantic web and augmented reality]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[social augmented reality]]></category>
		<category><![CDATA[The Big ARNY]]></category>
		<category><![CDATA[The Big ARNY Game]]></category>
		<category><![CDATA[The Open Planning Project]]></category>
		<category><![CDATA[TOPP]]></category>
		<category><![CDATA[TOPPLABS]]></category>
		<category><![CDATA[Wave enabled AR]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4996</guid>
		<description><![CDATA[First an incredibly big thank you to The Open Planning Project office (TOPP) &#8211; @TOPPLabs, and Sophia Parafina, @spara, for organizing, hosting, sponsoring and providing so much inspiration for this event. There is an incomplete list of attendees below, and there were about 70 people at one point watching the Ustream (thank you Dimitri Darras [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="300" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="flashvars" value="offsite=true&amp;lang=en-us&amp;page_show_url=%2Fphotos%2Fugotrade%2Fsets%2F72157622945515856%2Fshow%2F&amp;page_show_back_url=%2Fphotos%2Fugotrade%2Fsets%2F72157622945515856%2F&amp;set_id=72157622945515856&amp;jump_to=" /><param name="allowFullScreen" value="true" /><param name="src" value="http://www.flickr.com/apps/slideshow/show.swf?v=71649" /><param name="allowfullscreen" value="true" /><embed type="application/x-shockwave-flash" width="400" height="300" src="http://www.flickr.com/apps/slideshow/show.swf?v=71649" allowfullscreen="true" flashvars="offsite=true&amp;lang=en-us&amp;page_show_url=%2Fphotos%2Fugotrade%2Fsets%2F72157622945515856%2Fshow%2F&amp;page_show_back_url=%2Fphotos%2Fugotrade%2Fsets%2F72157622945515856%2F&amp;set_id=72157622945515856&amp;jump_to="></embed></object></p>
<p>First an incredibly big thank you to <a title="http://openplans.org/contact/" rel="nofollow" href="http://openplans.org/contact/">The Open Planning Project office (TOPP)</a> &#8211; <a href="http://twitter.com/TOPPLabs" target="_blank">@TOPPLabs</a>, and Sophia Parafina, <a href="http://twitter.com/spara" target="_blank">@spara</a>, for organizing, hosting, sponsoring and providing so much inspiration for this event.</p>
<p>There is an incomplete list of attendees below, and at one point there were about 70 people watching the Ustream (thank you <a href="http://www.dimitridarras.com/" target="_blank">Dimitri Darras</a> and friend for setting this up &#8211; sorry I missed getting your card!).</p>
<p>At least ten people participated in a live Skype conference, which Sophia moderated with great skill.</p>
<p>I am sorry I didn&#8217;t get everyone&#8217;s contact info. But please feel free to add your name in the comments of this post if I have missed you.</p>
<p>After a gearheady morning, we spent the afternoon and evening brainstorming &#8220;The Big ARNY&#8221; &#8211; &#8220;a collaborative game development project modeled after a <a href="http://aswarmofangels.com/" target="_blank">Swarm of Angels</a>.&#8221;</p>
<p>Some of the morning tech discussion highlights included:</p>
<p>* Skype presentations on <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">AR Wave</a> from <a href="http://www.lostagain.nl/" target="_blank">Thomas Wrobel</a>, <a href="http://www.joelamantia.com/" target="_blank">Joe Lamantia</a>, and <a href="http://matthieupierce.com/" target="_blank">Matthieu Pierce</a>.</p>
<p>* the <a href="http://www.youtube.com/watch?v=h4HmYQPejFk">beginnings of an iPhone client</a> from the <a href="http://code.google.com/p/pygowave-server/" target="_blank">PyGoWave</a> crew.</p>
<p>* discussing <a href="http://www.microvision.com/wearable_displays/index.html" target="_blank">Microvision</a> augmented reality eyewear &#8211; and trying out a <a href="http://twitpic.com/s9zjt">Nomad unit</a> courtesy of <a href="http://augmentation.wordpress.com/" target="_blank">Noah Zerkin</a>, @NoaZark.</p>
<p>* an awesome deep dive into the code of the <a href="http://www1.cs.columbia.edu/~ohan/" target="_blank">open Goblin XNA VR/AR platform</a> &#8211; courtesy of <a href="http://www.cs.columbia.edu/~ohan/" target="_blank">Ohan Oda</a> (pic below); <a href="http://www.ustream.tv/recorded/2719336" target="_blank">video of presentation here</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/ohanodapost.jpg"><img class="alignnone size-medium wp-image-5016" title="ohanodapost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/ohanodapost-300x199.jpg" alt="ohanodapost" width="300" height="199" /></a></p>
<p>Ori Inbar <a href="http://gamesalfresco.com/2009/12/05/live-from-nyc-augmented-reality-dev-camp/" target="_blank">live blogged the morning sessions on Games Alfresco</a>.</p>
<p>But, during the afternoon, Ori presented, and we all got so caught up in the brainstorming of &#8220;The Big ARNY Game&#8221; that live blogging, skyping, and twittering ground to a near halt. The &#8220;meat space&#8221; (perhaps the slide show captures some of the incredible coolness of the location) was alive with brilliant ideas that were matched by an incredibly high level of technical input &#8211; see the AR DevCamp attendees list below.</p>
<p>During the game session we really had a master class in augmented reality tech. <a href="http://www1.cs.columbia.edu/~feiner/" target="_blank">Steven Feiner&#8217;s</a> awesome discussion of markers really opened my mind to exploring markers in a new way. And the geolocated data discussion at dinner with Sophia Parafina, <a href="http://www.maploser.com/?page_id=6" target="_blank">Kate Chapman</a>, <a href="http://phil.ashlock.us/" target="_blank">Philip Ashlock</a>, and Steve Feiner was very interesting. The opportunity to break out into smaller in-depth discussions during the day was one of the valuable aspects of AR DevCamp, so I can&#8217;t possibly mention them all. But thank you everybody!</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/BigARNYpost.jpg"><img class="alignnone size-medium wp-image-5005" title="BigARNYpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/BigARNYpost-300x199.jpg" alt="BigARNYpost" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/ardevcampnycpost.jpg"><img class="alignnone size-medium wp-image-5013" title="ardevcampnycpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/ardevcampnycpost-300x199.jpg" alt="ardevcampnycpost" width="300" height="199" /></a></p>
<p>We did have some fun with low tech AR too &#8211; courtesy of Thomas Wrobel &amp; Bertine van H&#246;vell, <strong><a href="http://www.lostagain.nl/" target="_blank">Lost Again</a></strong> (their business card is the coolest AR card I have seen to date). In the pic on the right, I try out their business card/AR overlay on @comogard as he presents. The lighting does not do the overlay justice in my photo, but I think you get the idea.</p>
<p>Unfortunately we didn&#8217;t manage to hook up our afternoon live session with <a href="http://www.ardevcamp.org/wiki/index.php?title=Main_Page" target="_blank">The Mountain View AR DevCamp</a>, as we lost the streaming laptop. But hopefully we will be able to catch up on each other&#8217;s activities with session notes on the <a href="http://www.ardevcamp.org/wiki/index.php?title=Main_Page" target="_blank">AR DevCamp Wiki</a>. There is also a public wave, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252BTfPQziYJA" target="_blank">AR Dev Camp NYC Shared Notes</a>.</p>
<p><a href="http://www1.cs.columbia.edu/~swhite/" target="_blank">Sean White</a> set the afternoon off to a great start by collecting and organizing topics of interest on the board. While we didn&#8217;t get time to cover everything, it was interesting how, by working on a collaborative game project, we had to tackle many of the suggested topics and come up with workable approaches.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/Seanwhitenotespost.jpg"><img class="alignnone size-medium wp-image-5006" title="Seanwhitenotespost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/Seanwhitenotespost-300x199.jpg" alt="Seanwhitenotespost" width="300" height="199" /></a></p>
<p>Next weekend <a title="http://openplans.org/contact/" rel="nofollow" href="http://openplans.org/contact/">TOPP</a> will host the <a href="http://opennyforum.org/2009/11/open-ny-summit-09/" target="_blank">OpenNY Summit &amp; Codeathon</a> on Dec 11 &amp; 12, an event &#8220;produced by open government practitioners and volunteers.&#8221; This would be another great place to explore some of the citizen 2.0 mobile, social AR game ideas that came up at AR DevCampNYC. In addition, Ori Inbar has started an <a href="http://www.meetup.com/ARNY-Augmented-Reality-New-York/">AR New York Meetup</a>.</p>
<p>Below is an incomplete list of AR DevCampNYC attendees.</p>
<p><strong>Sophia Parafina</strong>, OpenGeo, @spara, organizer</p>
<p><strong>Marco Neumann</strong>, <a href="http://www.konallc.com/" target="_blank">KONA</a>, @neumarcx, interested in developing Semantic Web based Augmented Reality demos.</p>
<p><strong>Tish Shute</strong>, <a title="http://www.ugotrade.com" rel="nofollow" href="http://www.ugotrade.com/">Web</a>, <a title="http://twitter.com/tishshute" rel="nofollow" href="http://twitter.com/tishshute">@tishshute</a>, Open Distributed AR, Google Wave Protocol for AR, Imagining the Future of the Outernet</p>
<p><strong>Dimitri Darras</strong>, @dimitridarras, Visual designer, web developer, and virtual worlds content creator. Interested in multimodal input and AR/Virtual Worlds integration.</p>
<p><strong>Heidi Hysell</strong>, @heidihysell, Creative Technologist &amp; Software Engineer; Interested in the application of AR for entertainment technology for print, web and video.</p>
<p><strong>Joe Lamantia</strong>, Muku / ARWave, <a title="http://joelamantia.com" rel="nofollow" href="http://joelamantia.com/">@mojoe</a>, Amsterdam, interested in creating open frameworks, social augmented experiences, emerging media (attending via Skype)</p>
<p><strong>Kate Chapman</strong>, Web Developer, FortiusOne, @wonderchook</p>
<p><strong>Matthieu Pierce</strong>, <a title="http://matthieupierce.com" rel="nofollow" href="http://matthieupierce.com/">itinerant poet</a>, @matthieupierce, Pittsburgh, PA.  Interested in <a title="AR Use Cases" href="http://www.ardevcamp.org/wiki/index.php?title=AR_Use_Cases">AR Use Cases</a> and observation. Attending via Skype.</p>
<p><strong>Ori Inbar</strong>, <a title="http://ogmento.com" rel="nofollow" href="http://ogmento.com/">ogmento</a>, <a title="http://gamesalfresco.com" rel="nofollow" href="http://gamesalfresco.com/">games alfresco</a>. Let&#8217;s get together to brainstorm on the &#8220;Big AR NY Game&#8221;: the first location-based, social, augmented reality game designed for New York by New Yorkers.</p>
<p><strong>Noah Zerkin</strong>, <a title="http://augmentation.wordpress.com" rel="nofollow" href="http://augmentation.wordpress.com/">blog</a> &#8211; AR software and hardware interfaces; exploring the idea of an AROS.</p>
<p><strong>Ohan Oda</strong>, <a title="http://www.cs.columbia.edu/~ohan" rel="nofollow" href="http://www.cs.columbia.edu/%7Eohan">webpage</a> &#8211; Columbia University; NYC</p>
<p><strong>Sean White</strong>, <a title="http://www.cs.columbia.edu/~swhite" rel="nofollow" href="http://www.cs.columbia.edu/%7Eswhite">webpage</a> &#8211; Augmented reality research at Columbia University and Smithsonian Institution.</p>
<p><strong>Steve Henderson</strong>, Columbia University, <a title="http://www.cs.columbia.edu/~henderso" rel="nofollow" href="http://www.cs.columbia.edu/%7Ehenderso">webpage</a>, <a title="http://twitter.com/stevehenderson" rel="nofollow" href="http://twitter.com/stevehenderson">@stevehenderson</a></p>
<p><strong>Omer Gunes</strong>, <a title="http://www.cs.nyu.edu/~ofg201" rel="nofollow" href="http://www.cs.nyu.edu/%7Eofg201">webpage</a> &#8211; NLP, Speech Recognition, Mobile Software Development</p>
<p><strong>Steve Feiner</strong>, Computer Graphics and User Interfaces Lab, Dept. of Computer Science, Columbia University, <a title="http://www.cs.columbia.edu/~feiner" rel="nofollow" href="http://www.cs.columbia.edu/%7Efeiner">personal</a>, <a title="http://www.cs.columbia.edu/graphics/top.html" rel="nofollow" href="http://www.cs.columbia.edu/graphics/top.html">lab</a> &#8211; Augmented reality, mobile/wearable computing.</p>
<p><strong>Jon Russek</strong>, NYC, <a title="http://www.russek.org" rel="nofollow" href="http://www.russek.org/">website</a>, <a title="http://twitter.com/filmaddict" rel="nofollow" href="http://twitter.com/filmaddict">@filmaddict</a> &#8211; AR as applied to film/theater/art.</p>
<p><strong>Daniel Leslie</strong>, <a title="http://reflexionsdata.com" rel="nofollow" href="http://reflexionsdata.com/">Reflexions Data, LLC</a>, <a title="http://twitter.com/dan_leslie" rel="nofollow" href="http://twitter.com/dan_leslie">@dan_leslie</a>, principal at an application consulting/development firm working on a mobile app for proximity-based real time social graph analysis.</p>
<p><strong>Donald Schwartz</strong>, NYC, <a title="http://twitter.com/Ishkahbibel" rel="nofollow" href="http://twitter.com/Ishkahbibel">@Ishkahbibel</a>, virtual worlds, social media, technology writer</p>
<p><strong>David Oliver</strong>, <a title="http://olivercoady.com" rel="nofollow" href="http://olivercoady.com/">Oliver+Coady, Inc. NYC</a>, <a title="http://twitter.com/davidmoliver" rel="nofollow" href="http://twitter.com/davidmoliver">@davidmoliver</a> mobile strategy, mobile product definition, mobile development.</p>
<p><strong>Chris Grayson</strong>, NYC, Twitter: <a title="http://twitter.com/chrisgrayson" rel="nofollow" href="http://twitter.com/chrisgrayson">@chrisgrayson</a> | Blog: <a title="http://gigantico.squarespace.com" rel="nofollow" href="http://gigantico.squarespace.com/">GigantiCo</a> | Contributor: <a title="http://hplusmagazine.com" rel="nofollow" href="http://hplusmagazine.com/">H+ Magazine</a> | Web developer and marketing consultant &#8212; Interests: Future of commercial mobile AR / Outernet (GeoSearch &amp; OOH marketing convergence); Future AR Form Factors; AR/Virtual Worlds integration re: distance learning &amp; collaboration.</p>
<p><strong>Saul Devitt</strong>, NYC</p>
<p><strong>Bert Picot</strong>, NYC via Skype probably around 10:30 am for a few hours. Very interested in learning the value chain for AR applications and the development of applications for Festivals and live entertainment.</p>
<p><strong>MZ</strong> &#8211; startup to develop a platform to use semantic data to enable virtual worlds</p>
<p><strong>Jon Russek</strong> &#8211; film production + law + internet. Interested in AR as an artistic medium for creativity</p>
<p><strong>Davide Byron</strong> &#8211; developed the game <a href="http://www.youtube.com/watch?v=k2BK9VAk3RY" target="_blank">Spads and Fokkers</a> (<a href="http://spadsandfokkers.sourceforge.net/" target="_blank">code</a>)</p>
<p><strong>Philip Ashlock </strong><a href="http://twitter.com/philipashlock" target="_blank">@philipashlock</a>, The Open Planning Project</p>
<p><span><strong>Michael Keating</strong>, The Open Planning Project</span></p>
<p><strong>Yohan Baillot</strong>, <a title="http://twitter.com/yohanBaillot" rel="nofollow" href="http://twitter.com/yohanBaillot">@yohanBaillot</a> future of commercial mobile AR, emerging AR standards</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/12/06/augmented-reality-devcamp-nyc-the-big-arny-a-collaborative-ar-game-project-modeled-after-swarm-of-angels/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>The AR Wave Project: An Introduction and FAQ by Thomas Wrobel</title>
		<link>http://www.ugotrade.com/2009/12/04/ar-wave-project-an-introduction-and-faq-by-thomas-wrobel/</link>
		<comments>http://www.ugotrade.com/2009/12/04/ar-wave-project-an-introduction-and-faq-by-thomas-wrobel/#comments</comments>
		<pubDate>Sat, 05 Dec 2009 02:50:18 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR Blps]]></category>
		<category><![CDATA[AR DevCamp]]></category>
		<category><![CDATA[AR Network]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[AR Wave project]]></category>
		<category><![CDATA[AR Wave Wiki]]></category>
		<category><![CDATA[ARBlip]]></category>
		<category><![CDATA[ARDevCampNYC]]></category>
		<category><![CDATA[ARN]]></category>
		<category><![CDATA[Augmented Realit]]></category>
		<category><![CDATA[augmented reality network]]></category>
		<category><![CDATA[distributed augmented reality]]></category>
		<category><![CDATA[Goggle Wave Federation Protocol]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Joe Lamantia]]></category>
		<category><![CDATA[layers and channels of augmented reality]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[multiuser multisource augmented reality]]></category>
		<category><![CDATA[open augmented reality network]]></category>
		<category><![CDATA[open distributed augmented reality]]></category>
		<category><![CDATA[pygowave]]></category>
		<category><![CDATA[PyGoWave Qt-Based Desktop Client]]></category>
		<category><![CDATA[shared augmented realities]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[Sophia Parafina]]></category>
		<category><![CDATA[storing geolocated data on Wave Servers]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Wave enabled augmented reality]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4960</guid>
		<description><![CDATA[Images from Mitsuo Iso&#8217;s Denno Coil (Click to enlarge), the game &#8220;Metroid Prime,&#8221; and Terminator. Thomas Wrobel, Sophia Parafina, Joe Lamantia, Matthieu Pierce, and I will lead a session tomorrow for AR DevCampNYC introducing the AR Wave Project. Thomas, Joe and Matthieu will participate via skype (10am to 11.30am EST), and Sophia Parafina and [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/Screen-shot-2009-12-04-at-6.43.24-PM.png"><img class="alignnone size-medium wp-image-4961" title="Screen shot 2009-12-04 at 6.43.24 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/Screen-shot-2009-12-04-at-6.43.24-PM-300x181.png" alt="Screen shot 2009-12-04 at 6.43.24 PM" width="300" height="181" /></a></p>
<p><em>Images from Mitsuo Iso&#8217;s <a href="http://en.wikipedia.org/wiki/Denn%C5%8D_Coil" target="_blank">Denno Coil</a> (Click to enlarge), the game &#8220;Metroid Prime,&#8221; and Terminator.</em></p>
<p><a href="http://www.lostagain.nl/" target="_blank">Thomas Wrobel</a>, <a href="http://opengeo.org/about/team/sophia.parafina/" target="_blank">Sophia Parafina</a>, <a href="http://www.joelamantia.com/" target="_blank">Joe Lamantia</a>, <a href="http://matthieupierce.com/" target="_blank">Matthieu Pierce</a>, and I will lead a session tomorrow for <a href="http://www.ardevcamp.org/wiki/index.php?title=NYC_ardevcamp" target="_blank">AR DevCampNYC</a> introducing the AR Wave Project. Thomas, Joe and Matthieu will participate via Skype (10am to 11.30am EST), and Sophia Parafina and I will both be at <a href="http://www.ardevcamp.org/wiki/index.php?title=NYC_ardevcamp" target="_blank">AR DevCampNYC</a> at <a title="http://openplans.org/contact/" rel="nofollow" href="http://openplans.org/contact/">The Open Planning Project office (TOPP)</a>. The <a href="http://pygowave.net/" target="_blank">PyGoWave</a> crew will be introducing <a href="http://livestream.com/pygowave" target="_blank">PyGoWave via LiveStream</a>.</p>
<p>From 1.30pm to 2.30pm EST there will be a shared <a href="http://pygowave.net/" target="_blank">PyGoWave</a>/AR Wave session <a href="http://www.ardevcamp.org/wiki/index.php?title=Main_Page" target="_blank">with Mountain View</a> (if bandwidth permits).</p>
<p>The Skype conference will be at ardevcampnyc. To participate in Wave, please join the public wave, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252BH83lcj6RA" target="_blank">AR Wave: AR DevCamp Session</a>. There is also an <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">AR Wave Wiki up now &#8211; see here</a>.</p>
<p><a href="http://www.dimitridarras.com/" target="_blank">Dimitri Darras</a> (avatar Dimitri Illios) is working on streaming the AR DevCampNYC sessions into Second Life, <a href="http://slurl.com/secondlife/Ambleside/228/247/25" target="_blank">SLURL here</a>.</p>
<p>Thomas has done a very nice introduction and FAQ below. This should help people new to the project get up to speed quickly.</p>
<p>There are already several Waves that show the history of this project, including: <a href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252Bhvk2Fj3wB" target="_blank">AR Wave: Augmented Reality Framework Development</a>, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252BeyLQLb4ED" target="_blank">AR Wave Use Cases</a>, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252Bok4URyFyR" target="_blank">PyGoWave AR Tech Discussion</a>, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252BJAcNzz16A" target="_blank">AR Wave Augmented Reality Wave Development</a>, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252B0VnNxxoOB.1" target="_blank">AR Wave / Muku Organization and Admin</a>.</p>
<p>Also I have several posts for people interested in more of the background, including: <a title="Permanent Link to The Next Wave of AR: Mobile Social Interaction Right Here, Right Now!" rel="bookmark" href="http://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/">The Next Wave of AR: Mobile Social Interaction Right Here, Right Now!</a>, <a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">AR Wave: Layers and Channels of Social Augmented Experiences</a>, and <a title="Permanent Link to Total Immersion and the &#8220;Transfigured City:&#8221; Shared Augmented Realities, the &#8220;Web Squared Era,&#8221; and Google Wave" rel="bookmark" href="http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/">Total Immersion and the &#8220;Transfigured City&#8221;: Shared Augmented Realities, the &#8220;Web Squared Era,&#8221; and Google Wave</a>.</p>
<p>Thomas uses the term Arn (augmented reality network), which is one of the candidate names for the project; Muku (crest of a Wave) is another suggestion. Thomas&#8217; intro and FAQ below can also be found <a href="http://lostagain.nl/testSite/projects/Arn/information.html" target="_blank">here</a>.</p>
<h3><strong>What is the AR Wave Project?</strong></h3>
<p>In simple terms, it&#8217;s a protocol, currently in development, for storing <a id="zblc" title="geolocated" href="http://en.wikipedia.org/wiki/Geolocation">geolocated</a> data on Wave servers.</p>
<p>We believe this will help lay the foundations for an open, universally accessible, and decentralised system for shared augmented reality overlays which various clients can connect to and use.</p>
<p>This AR Network should spark much more rapid adoption of AR technologies, give existing browsers more functionality, and provide the network infrastructure that will one day allow many of the fictional depictions of AR to become reality.</p>
<p><strong>The AR Network.</strong></p>
<p>When we speak of a future AR Network, we mean one as universal and as standard as the internet. One where people can connect from any number of devices, and without additional downloads, experience the majority of the content.</p>
<p>Where people can just point their phone, webcam, or pair of AR glasses anywhere a virtual object should be, and they will see it. The user experience is seamless; AR comes to them without them needing to &#8220;prepare&#8221; their device for it.</p>
<p>The Arn should be an inclusive and open platform to which any number of devices can connect, and where anyone can make and host their own location-specific models or data.</p>
<p>It should allow people to communicate both publicly and privately, and not have their vision constantly cluttered with things they don&#8217;t want to see.</p>
<p>This is our vision, and we think a Wave protocol will help it become a reality.</p>
<p><strong>Why Wave?</strong></p>
<p>Wave allows the advantages of both real-time communication, as well as the advantages of persistent hosting of data. It is both like IRC, and like a Wiki. It allows anyone to create a Wave, and share it with anyone else. It allows Waves to be edited at the same time by many people, or used as a private reference for just one person.</p>
<p>These are all incredibly useful properties for any AR experience. What&#8217;s more, Wave is open: anyone can make a server or client for Wave. Better yet, these servers will exchange data with each other, providing a seamless world for the user: a single login will let you browse the whole world of public waves, regardless of who&#8217;s providing or hosting the data. Wave is also quite scalable and secure: data is only exchanged when necessary, and will stay local to just one server if no one else needs to view it.</p>
<p>Wave allows bots to run on it, allowing blips in a wave to be automatically updated, created, or destroyed based on any criteria the coders choose. Wave even allows the playback of all edits since the wave was created.</p>
<p>For all these reasons and a few more, Wave makes a great platform for AR.</p>
<p><strong>How?</strong></p>
<p>In basic terms, we will devise a standard way to geolocate a bit of data and store it as a <a id="u0cd" title="Blip" href="http://google.about.com/od/b/g/google_wave_blip.htm">Blip</a> within a wave.</p>
<p>This data could be a 3d mesh, a bit of text, or even a piece of audio.</p>
<p>Then various clients on various devices could logon, locate, interpret and display this data as they see fit.</p>
<p><a href="http://lostagain.nl/tempspace/PrototypeDiagram3_wave.html" target="_blank"><img class="alignnone size-medium wp-image-4962" title="Screen shot 2009-12-04 at 7.56.58 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/Screen-shot-2009-12-04-at-7.56.58-PM-300x168.png" alt="Screen shot 2009-12-04 at 7.56.58 PM" width="300" height="168" /></a></p>
<p><em>Click on image above to enlarge.</em></p>
<p>A typical example of this might be holding up your phone and seeing messages written by your friends and family in the locations which they are relevant.</p>
<p>You could see an arrow hovering over the café you&#8217;re meeting a friend at, notes above their flat saying whether they are in or out, or messages by shops telling you to pick up the particular brand of cereal they like.</p>
<p>This data would be personal to just yourself and whoever you invite to share that wave with.</p>
<p>Other forms of data could be public, like city-maps, online games, or historical landmarks being recreated. Custom views of the world with data for entertainment, commercial, environmental or informative purposes.</p>
<p>The possibilities with geolocated data are endless, as are the various ways to display and make use of them.</p>
<p>One of the things I&#8217;m most passionate about is people being able to see many different types of data, both public and private at the same time and from many different sources at once.</p>
<p>For instance, if you&#8217;re playing an AR game, why shouldn&#8217;t your chat window be viewable at the same time?</p>
<p>If you have skinned your environment with a custom view of the world, why shouldn&#8217;t you also see mapping or restaurant recommendations?</p>
<p>The ways to present these layers of data and toggle them on/off in the most intuitive and flexible ways would be a task for the client makers, and I&#8217;m sure we will see many innovations in those areas.</p>
<p>But using Wave at least provides the framework for having multiple information sources, controlled by many different people, yet accessible, and user-submittable, via the same protocol.</p>
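The multi-source idea described above can be sketched as a client-side data structure. Everything here is hypothetical (the names <code>Layer</code> and <code>AROverlay</code> are my own, and plain stdlib C++ stands in for the Qt types used later in this post): each wave the user has joined becomes one toggleable layer, and the client renders the union of all visible layers.

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: each Wave becomes one toggleable layer; the client
// draws the union of every visible layer's objects, whoever hosts them.
struct Layer {
    std::vector<std::string> objects; // placeholder for ARBlip contents
    bool visible = true;
};

struct AROverlay {
    std::map<std::string, Layer> layers; // keyed by wave id

    // Flip one layer on/off without touching the others.
    void toggle(const std::string& waveId) {
        layers[waveId].visible = !layers[waveId].visible;
    }

    // Everything the client should currently render, regardless of source.
    std::vector<std::string> visibleObjects() const {
        std::vector<std::string> out;
        for (const auto& [id, layer] : layers)
            if (layer.visible)
                out.insert(out.end(), layer.objects.begin(), layer.objects.end());
        return out;
    }
};
```

So an AR game, a chat window, and a restaurant layer could all be live at once, each backed by a different wave, and toggled independently.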
<p><strong>Who?</strong></p>
<p>This idea first sprouted from a paper I wrote focusing on the potential for IRC to be used for AR:</p>
<p><a id="ig44" title="http://www.lostagain.nl/testSite/projects/Arn/AR_paper.pdf" href="http://www.lostagain.nl/testSite/projects/Arn/AR_paper.pdf">http://www.lostagain.nl/testSite/projects/Arn/AR_paper.pdf</a></p>
<p>I suggested near the end that Wave might be a better alternative (using Google Wave was an idea Tish Shute, Ugotrade, brought up in response to the Arn prototype design on IRC), and it quickly became apparent that Wave was a very suitable medium.</p>
<p>Since then, there was a lot of interest, and numerous people have offered to help.</p>
<p>In particular, recently, the <a id="vms1" title="PygoWave" href="http://pygowave.net/blog/">PygoWave</a> team is helping us out, as they have an existing server supporting a client/server (c/s) protocol, which is currently being actively developed.</p>
<p><strong>Where?</strong></p>
<p>You can join the general discussion here:<br />
<a id="wvja" title="Augmented Reality Wave Development" href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252BJAcNzz16A">Augmented Reality Wave Development</a></p>
<p>Technical side here:<br />
<a id="qw95" title="Augmented Reality Wave Framework Development" href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252Bhvk2Fj3wB">Augmented Reality Wave Framework Development</a></p>
<p><strong>When?</strong></p>
<p>There&#8217;s lots still to do, and we are at an early stage.</p>
<p>Our current targets: (last updated 11/12/2009)</p>
<ul>
<li>Getting reading/writing of prototype ARBlips to the PygoWave server working. (The PygoWave team have already made a standalone client and have the protocol for this sorted!)</li>
<li>Establishing a minimal spec for ARBlips to be later expanded.</li>
<li>Writing a very simple prototype online client showing how to store/retrieve the data.</li>
<li>Expanding client to work for some use-cases.</li>
<li>Establish a logo/branding for the project.</li>
</ul>
<p><strong>Other FAQs.</strong></p>
<p><strong>Where&#8217;s the catch?</strong></p>
<p>While we believe Wave is highly suitable for development, it has the drawbacks of being a new system with just a few servers worldwide, which (at the time of writing) have not yet been federated together.</p>
<p>Naturally, as a new technology, it&#8217;s likely to have some growing pains. And building new technology on other new technology will multiply that somewhat. The first pain is the lack of a standard client/server protocol. PygoWave have stepped in to the rescue a bit here, by being not just one of the most developed Wave servers other than Google&#8217;s, but also leaping ahead with support for JSON-based c/s interaction. Google has stated they want the community to take the lead on a c/s protocol, so we are hoping they will adopt a JSON variant, or an XMPP one, and add it to the spec. We hope that, much as POP3/IMAP became standards for email server interaction, a similar standard will develop for Wave.</p>
<p>In the meantime we plan to keep the code for writing ARBlips somewhat abstracted so as to make it easy to adapt in future.</p>
<p>As for the newness of Wave and other potential problems it will bring, we aren&#8217;t that worried, as it&#8217;s built on <a id="jnw1" title="XMPP" href="http://en.wikipedia.org/wiki/XMPP">XMPP</a>, which has already proved reliable.</p>
<p>The other catch is we are unfunded, which slows development down considerably, as we have to fit it around our other jobs.</p>
<p><strong>I&#8217;m making my own AR Browser, and am slightly interested in maybe supporting you.</strong></p>
<p>We are naturally very keen for support, and particularly for those with skills and visions to give feedback on the proposed protocol. Specifically: what do you want stored in a blip?</p>
<p>That&#8217;s what&#8217;s important at this stage.</p>
<p>We don&#8217;t see the Arn as a replacement for existing browser systems at the moment. We don&#8217;t want to restrict innovation or development in this fast-developing market, as we are very impressed by what&#8217;s been achieved so far. In many ways our task is small in comparison to what&#8217;s already been accomplished.</p>
<p>However, we do believe the Arn will make a good addition to existing browser systems. It will allow users to contribute data and have social features without having to worry about accounts or hosting.</p>
<p>It will still be quite some work to support; new GUIs will need to be developed to make it easy to submit data from the devices, as well as to login to waves.</p>
<p>However, we hope over time to build a set of example libs to make the reading/writing of ARBlips as easy as possible to implement in your software.</p>
<p>Perhaps a good way to think about it: existing AR browsers are like word processors; supporting the Arn will be like adding support for *.txt, which doesn&#8217;t limit what you can do with your own format.</p>
<p><em>Eventually</em> we do hope ARBlips hosted on Wave will become the majority of AR data, and its functionality will be analogous to what the internet is today. We truly believe in the long run a standard is essential.</p>
<p>But for now we think merely getting a baseline format established for how AR data can be communicated will increase usability and usefulness, and help the market grow.</p>
<p><strong>Can I help?</strong></p>
<p>Sure.</p>
<p>We particularly need people with technical skills in relevant fields. (Both GWT/JavaScript web programming and C++(/Qt) standalone programming help very welcome!)</p>
<p>But we also welcome people just with vision to help focus use-cases and to conceptualise what we want to be able to do with the system.</p>
<p>Please either join the relevant AR Waves or the <a href="http://arwave.wiki.zoho.com/HomePage.html">Wiki</a>.</p>
<p>We are especially interested in those with JSON and Comet experience. Specifically those with the abilities to make standalone applications to read/write to a server using these methods.</p>
<p><strong>What type of data will an AR Blip store?</strong></p>
<p>This is still actively being decided, but essentially it&#8217;s a physical hyperlink.</p>
<p>A connection between a physical location (or object, see below) and a piece of data.</p>
<p>Specifically, we are thinking about the following fields;</p>
<p>Location in X,Y,Z,<br />
Coordinate System used for the above,<br />
Orientation,<br />
MIMEType <span style="color: #666666;">[the type of data stored]</span><br />
DataItself <span style="color: #666666;">[either an http link for 3d meshes and other larger data, or an inline text string if it&#8217;s just a comment]</span><br />
DataUpdateTimestamp <span style="color: #666666;">[so clients know whether a re-download is necessary]</span><br />
Editors <span style="color: #666666;">[the user/s that edited/created this blip]</span><br />
ReferenceLink <span style="color: #666666;">[data needed to tie the object to a non-fixed location, such as an image to align it to an object in realtime],</span><br />
Metatags <span style="color: #666666;">[to describe the data]</span></p>
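The field list above can be sketched as a simple record plus an illustrative serialization. This is an assumption-level sketch only: <code>ARBlip</code>, <code>toAnnotation</code>, and the key=value form are hypothetical stand-ins (with std::string in place of QString) for whatever annotation scheme the spec finalises.

```cpp
#include <sstream>
#include <string>

// Illustrative sketch of the proposed ARBlip fields. The serialization below
// is NOT a finalized wire format, just a stand-in for whatever blip
// annotation scheme the spec settles on.
struct ARBlip {
    double x = 0, y = 0, z = 0;   // location
    std::string coordinateSystem; // e.g. an OGC identifier for the above
    std::string mimeType;         // type of the data stored
    std::string data;             // inline text, or an http link for larger data
    std::string updatedTimestamp; // so clients know whether to re-download
    std::string editors;          // user(s) that created/edited this blip
};

// Serialize to a flat key=value form, one pair per line.
std::string toAnnotation(const ARBlip& b) {
    std::ostringstream out;
    out << "x=" << b.x << "\ny=" << b.y << "\nz=" << b.z
        << "\ncoords=" << b.coordinateSystem
        << "\nmime=" << b.mimeType
        << "\ndata=" << b.data << "\n";
    return out.str();
}
```

The point of the sketch is that an ARBlip is small and self-describing: a client that understands these few keys can place the data without knowing anything else about the wave it came from.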
<p><strong>Are you purely tying stuff to fixed geolocations?</strong></p>
<p>Certainly not <img src="http://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /><br />
As part of the spec, we want people to be able to link data to dynamically moving objects, trackable by image or other methods.</p>
<p>The idea being that one day someone could link a piece of text or 3d mesh to an image on a t-shirt they are wearing, or perhaps link a dynamically updating twitter feed, or perhaps provide information on a product (based on its logo).</p>
<p>There&#8217;s a large number of possibilities for image-based linking alone, and that&#8217;s not even considering possibilities like linking RFIDs, or other forms of less precise but invisible binding data.</p>
<p>We need a lot of feedback from those companies already doing markerless tracking. What types of images do you ideally need to link a mesh to an object? Is one enough?</p>
<h3><strong>Summary of AR Wave Work to Date</strong></h3>
<p><strong>Purpose:</strong> To provide an open, distributed, and universally accessible platform for augmented reality. To allow the creation of augmented reality content to be as simple as making an HTML page, or contributing to a wiki.</p>
<p><strong>Specific Goal:</strong> To establish a method for geolocating digital data in physical space (or linking it to physical objects) using wave as a platform.</p>
<p>(For justification as to why we are using Wave see: <a href="http://lostagain.nl/testSite/projects/Arn/information.html" target="_blank">our faq</a> )</p>
<p><strong>Wave as a platform</strong></p>
<p>We are developing on the <a title="PyGoWave" href="http://code.google.com/p/pygowave-server/" target="_blank">PyGoWave</a> server at the moment, but the goal is to be compatible with all Wave servers.</p>
<p>PyGoWave has already achieved an important step in enabling the project, by being a wave server with a working and well-documented server protocol. This allows both standalone and web-based clients to interface with it already. See <a href="http://github.com/p2k/pygowave-qt">The PyGoWave Qt-Based Desktop Client</a>.</p>
<p>This is one of the reasons why we have chosen to develop for the Pygo server at this stage.</p>
<p>However, the overall goal of AR Wave is to have a framework compatible with all servers using the Wave Federation Protocol. As more wave servers get c/s protocols, ARblips (the data needed to geolocate objects) could be posted to and retrieved from various servers using the same client software. For this a standard should emerge. Just as websites don&#8217;t have to be hosted on specific servers, neither should AR data need to be hosted on specific wave servers.</p>
<p>In order to reach our goal, there are a few very achievable steps involved &#8211; see below.</p>
<p><strong>Feedback</strong></p>
<p>We are still actively seeking feedback, so feel free to join the <a href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252Bhvk2Fj3wB">Wave discussions, </a>and see the history of how the specifications of the protocol evolved. You can also read the justification for some of the choices already made. Note a new discussion for AR DevCamp will begin at <a href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252BH83lcj6RA">AR Wave: AR DevCamp Session</a></p>
<p>This will, of course, only be the first draft of the specification, and it is sure to develop much in the future.<br />
The important thing now is to make working prototypes while maintaining flexibility.</p>
<p>So what do we need to do?</p>
<p><strong>Steps :</strong></p>
<p><strong>* Establish the overall method &#8211; Done.</strong></p>
<p>Each Wave will be a layer on reality which an individual or a group can create. Each Blip in this Wave refers to either a small piece of inline data (like text) or a remote piece of larger data (like a 3D mesh), as well as the data needed to pinpoint it in either relative or absolute real space.<br />
We call these blips ARblips. They are simply blips that store the data necessary to augment a single object onto a specific bit of reality.</p>
<p>It is up to the clients how they interpret and display the data. They could interpret it as a simple 2d list of nearby objects, or as an advanced 3D overlay, whereby multiple waves from different sources could be viewed at once. What&#8217;s important is that there is a standard way to link the digital data to the real world space.</p>
<p>* Establishing the specification for the ARblip &#8211; In progress<br />
We have a good idea of what&#8217;s needed to be stored in an ARblip, and we have hammered out a rough format.<br />
The data might be stored as blip annotations, but this has yet to be finalised.<br />
A rough outline of the type of data stored can be seen in the C++/Qt header at the end of this document.</p>
<p>* Storing and retrieving these pieces of ARblip data on the PyGo server &#8211; In progress.<br />
The PygoWave team has made some excellent libraries that should make reading and writing data on the PyGoWave server trivial for those with C++ skills.<br />
This, however, is a critical step, so more developers with C++ skills are very welcome!</p>
<p>* Making the above client mobile, and using a device&#8217;s GPS to place the data. &#8211; Not started.<br />
The next step would be to port the code to a mobile phone and use its GPS input to post geolocated data and view what others have posted. This would be a fairly simple and not too useful app in itself. However, it would mark the first time anyone could post AR data and anyone could view it, all using open-source infrastructure.<br />
As a bonus, because we are using wave infrastructure, the updates to any ARblip should appear in near realtime.</p>
<p>* To continue with the proof of concept, we would like to have simultaneous wave input from a PC and a mobile phone at the same time. &#8211; Not started.<br />
For example, someone could post a pin via the Google Maps API and have that data posted to an ARBlip in a wave. Someone logged into that wave on their mobile device would then see the posted data appear.<br />
Moreover, we hope that when the Google Maps pin is dragged about, the mobile phone viewer, with just a few seconds&#8217; lag, will see its location updated in real time.</p>
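The pin-dragging scenario can be sketched as a client-side update handler. The names here (<code>BlipStore</code>, <code>onBlipUpdated</code>) are hypothetical; a real wave client would receive structured blip operations from the server rather than this simplified struct.

```cpp
#include <map>
#include <stdexcept>
#include <string>

// Sketch of a client keeping a local mirror of ARBlip positions, updated as
// wave deltas arrive. Because wave pushes edits in near-realtime, a pin
// dragged on the PC appears to move on the phone with only network lag.
struct Position { double x, y, z; };

class BlipStore {
    std::map<std::string, Position> positions_; // keyed by blip id
public:
    // Called for every position delta received from the wave server. The
    // display layer re-reads the store each frame, so updates show up
    // without any polling logic in the renderer.
    void onBlipUpdated(const std::string& blipId, Position p) {
        positions_[blipId] = p;
    }

    Position get(const std::string& blipId) const {
        return positions_.at(blipId); // throws if the blip is unknown
    }

    std::size_t size() const { return positions_.size(); }
};
```

A map pin drag then becomes just a stream of `onBlipUpdated` calls for the same blip id, each overwriting the last position.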
<p>We hope to make a modest yet practical app at this stage.</p>
<p>* After all this, we can go on to the interesting things:<br />
3D data, camera overlays, data fixed to objects, and many more. There&#8217;s plenty of existing software using these features (such as Wikitude and Layar) and some that is even open source (like Gamaray and Flashkit). The open source code can give us a leg-up. However, we prefer to establish the protocol first, so naturally these fancy features aren&#8217;t a priority for us. Rather, we think our energy is better spent establishing the protocols and infrastructure so that other people can build more advanced software more easily.</p>
<p>However, once our primary goals are met, we will look to make an open source augmented reality browser ourselves, which will surely include many of these features.</p>
<p>Overall, we hope once we have a simple proof of concept, there will be many groups, both existing and new, wanting to use this Wave system for their own apps, games and data.</p>
<p><strong>Conclusion</strong>:<br />
Really it&#8217;s now all about growing the community. We hope that, as soon as we show how great Wave can be for augmented reality, lots of individuals and teams will start making their own clients to read/write geolocated data.<br />
Overall we don&#8217;t think anything we make will be that impressive in itself. That&#8217;s not our goal.<br />
We instead hope that our project will enable AR content to be made as easily as web content; that games, information and apps can be created without the creators having to worry about the infrastructure behind it.</p>
<p><strong>Technical information</strong></p>
<p><strong>Current ARBlip header file</strong></p>
<p>(Below is a C++/Qt header file for an ARBlip object that should illustrate the data being stored.)</p>
<hr />
<pre>
class arblip
{
public:
    arblip();
    ~arblip();
    arblip(QString, QString, double, double, double, int, int, int, QString);

    QString getDataAsString();
    QString getEditors();
    QString getRefID();
    QString getXAsString();
    QString getYAsString();
    QString getZAsString();
    bool isFacingSprite();

private:
    // ID reference. This would be a unique identifier for the blip.
    // Presumably the same as Wave uses itself.
    QString ReferenceID;

    // Last editor(s)
    QString Editors;

    int PermissionFlags;  // default 0664 octal = rw-rw-r--

    // Location
    double Xpos;  // left/right
    double Ypos;  // up/down
    double Zpos;  // front/back

    // Orientation
    // Names, ranges and directions are taken from aeronautics.
    // If no orientation is specified, it's assumed to be a facing sprite.

    // Roll: rotation around the front-to-back (z) axis. (Lean left or right.)
    // Range +/- 180 degrees, with + values moving the object's right side down.
    int Roll;

    // Pitch: rotation around the left-to-right (x) axis. (Tilt up or down.)
    // Range +/- 90 degrees, with + values moving the object's front up. (Looking up.)
    int Pitch;

    // Yaw: rotation around the vertical (y) axis. (Turn left or right.)
    // Range +/- 180 degrees, with + values moving the object's face to its right.
    int Yaw;

    bool FacingSprite;  // If no rotation is specified, this should default to true.
                        // If set to true when a rotation is set, the rotation is
                        // kept relative to the viewer, not relative to the earth.

    // Data format
    QString DataMIME;

    // The co-ordinate system used. This should be a string representing an
    // Open Geospatial Consortium standard. It could be earth-relative for GPS
    // co-ordinates, or in some cases relative to the viewer, for data to be
    // displayed in a HUD-like style.
    QString CoordinateSystemUsed;

    // Data itself
    QString Data;

    QString DataUpdatedTimestamp;  // Time the data was last changed.
    // Note: a separate timestamp should be used for updates that don't affect
    // the data itself (such as if a 3d object moves, but its mesh isn't changed).

    // Data metadata
    QMap&lt;QString, QString&gt; Metadata;
};
</pre>
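The aeronautical orientation convention in the header can be illustrated by computing which way an object's front points for a given yaw and pitch. This is a sketch: it assumes (the header does not pin this down) that at zero rotation the object faces +z, with +x to its right and +y up; roll spins the object around this direction and doesn't change it.

```cpp
#include <cmath>

// Unit direction an object's front points in, given the header's convention:
// Yaw (degrees, + turns the face to the object's right, about the y axis) and
// Pitch (degrees, + tilts the front up, about the x axis). Assumes the object
// faces +z at zero rotation, with +x right and +y up.
struct Vec3 { double x, y, z; };

Vec3 facingDirection(double yawDeg, double pitchDeg) {
    const double kDegToRad = 3.14159265358979323846 / 180.0;
    double yaw = yawDeg * kDegToRad;
    double pitch = pitchDeg * kDegToRad;
    return { std::sin(yaw) * std::cos(pitch),   // left/right component
             std::sin(pitch),                   // up/down component
             std::cos(yaw) * std::cos(pitch) }; // front/back component
}
```

For example, pitch +90 yields straight up (0, 1, 0), and yaw +90 turns the face toward +x, matching the ranges documented in the header's comments.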
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/12/04/ar-wave-project-an-introduction-and-faq-by-thomas-wrobel/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Augmented Reality &#8211; Bigger than the Web: Second Interview with Robert Rice from Neogence Enterprises</title>
		<link>http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/</link>
		<comments>http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/#comments</comments>
		<pubDate>Mon, 03 Aug 2009 23:24:12 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR Platform for Platforms]]></category>
		<category><![CDATA[ARConsortium]]></category>
		<category><![CDATA[ARToolkit]]></category>
		<category><![CDATA[Augmented Reality Browsers]]></category>
		<category><![CDATA[augmented reality platforms]]></category>
		<category><![CDATA[augmented reality SDKs]]></category>
		<category><![CDATA[augmented reality toolsets]]></category>
		<category><![CDATA[Dr Chevalier]]></category>
		<category><![CDATA[Gavin Starks]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Green Tech AR]]></category>
		<category><![CDATA[Imagination AR Engine]]></category>
		<category><![CDATA[iphone and augmented reality]]></category>
		<category><![CDATA[iphone augmented reality]]></category>
		<category><![CDATA[iphone Video API and augmented reality]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Lumus]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[markers and Webcam AR]]></category>
		<category><![CDATA[Mobile AR]]></category>
		<category><![CDATA[MoMo]]></category>
		<category><![CDATA[nathan freitas]]></category>
		<category><![CDATA[Neogence Enterprises]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[Unifeye Augmented Reality]]></category>
		<category><![CDATA[wearable displays for augmented reality]]></category>
		<category><![CDATA[Web Squared]]></category>
		<category><![CDATA[Wikitude]]></category>
		<category><![CDATA[World as a Platform]]></category>
		<category><![CDATA[World Browsers]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4184</guid>
		<description><![CDATA[I first started talking to Robert Rice, CEO of Neogence Enterprises, Chairman of the AR Consortium, in 2008. Robert was already actively working on creating the world&#8217;s first global augmented reality network. But it took a few months before what Robert had said to me about impending explosion of augmented reality into our lives really [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/whowhowhere.jpg"><img class="alignnone size-medium wp-image-4186" title="Questions and Answers signpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/whowhowhere-300x199.jpg" alt="Questions and Answers signpost" width="300" height="199" /></a></p>
<p>I first started talking to <a href="http://www.curiousraven.com/about-me/" target="_blank">Robert Rice</a>, CEO of <a href="http://www.neogence.com/#/home" target="_blank">Neogence Enterprises</a>, Chairman of the <a href="http://docs.google.com/AR%20Consortium"><span>AR Consortium</span></a><span>, in 2008. Robert was already actively working on creating the world&#8217;s first global augmented reality network. But it took a few months before what Robert had said to me about the impending explosion of augmented reality into our lives really sunk in &#8211; &#8220;this is going to be much bigger than the Web</span>!&#8221; he extolled.</p>
<p>By January 2009 I was convinced, and I posted my first interview with Robert, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it OMG Finally for Augmented Reality?..&#8221;</a> As I mentioned in the intro, I had recently tried out <a href="http://www.wikitude.org/" target="_blank">Wikitude</a> and <a title="Nat Mobile Meets Social DeFreitas" href="http://openideals.com/" target="_blank">Nathan Freitas&#8217;s</a> graffiti app on the streets of New York City, and I was impressed. Now, 7 months later, Augmented Reality has not disappointed: there is an explosion of new applications, and the arrival of some of the first commercial and practical toolsets, SDKs, and APIs for aspiring developers.</p>
<p>For more on this see my previous post, <a title="Permanent Link to Augmented Reality&#8217;s Growth is Exponential: Ogmento &#8211; &#8220;Reality Reinvented,&#8221; talking with Ori Inbar" rel="bookmark" href="../../2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/">Augmented Reality&#8217;s Growth is Exponential: Ogmento &#8211; &#8220;Reality Reinvented,&#8221; talking with Ori Inbar,</a> which is an introduction to my series of interviews with the key players in augmented reality and founding members of the <a href="http://www.arconsortium.org/" target="_blank">ARConsortium</a> &#8211; <a href="http://www.int13.net/en/" target="_blank">Int13</a>, <a href="http://www.metaio.com/" target="_blank">Metaio</a>, <a href="http://www.mobilizy.com/" target="_blank">Mobilizy</a>, <a href="http://www.neogence.com/" target="_blank">Neogence Enterprises</a>, <a href="http://ogmento.com/">Ogmento</a>, <a href="http://www.sprxmobile.com/" target="_blank">SPRXmobile</a>, <a href="http://www.tonchidot.com/" target="_blank">Tonchidot</a>, and <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a>.</p>
<p>As I mentioned before<span>, </span><a href="http://www.sprxmobile.com/about-us/" target="_blank"><span>Maarten Lens-FitzGerald</span></a><span> of </span><a href="http://www.sprxmobile.com/" target="_blank"><span>SPRXmobile</span></a><span> told me the other day that my first </span><a href="http://docs.google.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank"><span>Interview with Robert Rice</span></a><span>, in January of this year, was a key inspiration for SPRXmobile to get started on the development of </span><a href="http://layar.eu/" target="_blank"><span>Layar &#8211; a Mobile Augmented Reality Browser</span></a><span>. Much more on Layar and </span><span>Wikitude</span><span> &#8211; world browser in my upcoming interviews with </span><a href="http://www.sprxmobile.com/about-us/" target="_blank"><span>Maarten Lens-FitzGerald</span></a><span> and <a href="http://www.mamk.net/" target="_blank">Mark A. M. Kramer</a>, respectively</span>.</p>
<p>Recently, both Layar and Wikitude earned a mention in the white paper by Tim O&#8217;Reilly and John Battelle, <a href="http://www.web2summit.com/web2009/public/schedule/detail/10194" target="_blank">Web Squared: Web 2.0 Five Years On</a>. Web Squared is essential reading not only because it covers the underlying technological shifts of &#8220;Web Meets World,&#8221; of which augmented reality is a vital part; but, crucially, Web Squared focuses on how there is a new opportunity for us all:</p>
<p><strong>&#8220;The new direction for the Web, its collision course with the physical world, opens enormous new possibilities for business, and enormous new possibilities to make a difference on the world&#8217;s most pressing problems.&#8221;</strong></p>
<p>I am currently working on a post on Green Tech AR, one of the areas where augmented reality can play an important role &#8220;in solving the world&#8217;s most pressing problems.&#8221; Augmented Reality has a lot to offer Green Tech development. As <a href="http://twitter.com/AgentGav" target="_blank">Gavin Starks</a> of <a href="http://www.amee.com/" target="_blank">AMEE</a> said at <a href="http://wiki.oreillynet.com/eurofoo06/index.cgi" target="_blank">Euro Foo in 2006</a>, &#8220;climate change would be much easier to solve if you could see CO2.&#8221;</p>
<p>But really useful Green Tech AR requires markerless object recognition that is still hard to do (going beyond feature tracking and modified marker recognition), a tight alignment of media/graphics with physical objects, and quite a high level of instrumentation of the physical world. And for Green Tech AR to really shine, we are going to need innovators like Robert Rice who are working on, and solving, multiple really hard problems like:</p>
<p><strong> &#8220;</strong><strong>privacy, media persistence, spam, creating UI conventions, security, tagging and annotation standards, contextual search, intelligent agents, seamless integration and access of external sensors or data sources, telecom fragmentation, privilege and trust systems, and a variety of others</strong><strong>.&#8221;</strong></p>
<p>Recently Robert Rice <a id="ph56" title="presented" href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>presented</span></a><span> at </span><a href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>MoMo</span></a><span> Amsterdam. </span> Here is a drawing of him in action (<a href="http://www.flickr.com/photos/wilgengebroed/3591060729/" target="_blank">picture below</a> from <a title="Link to wilgengebroed's photostream" rel="dc:creator cc:attributionURL" href="http://www.flickr.com/photos/wilgengebroed/"><strong>wilgengebroed</strong></a>&#8216;s Flickr Stream).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRiceMoMOdrawing.jpg"><img class="alignnone size-medium wp-image-4185" title="RobertRiceMoMOdrawing" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRiceMoMOdrawing-300x184.jpg" alt="RobertRiceMoMOdrawing" width="300" height="184" /></a></p>
<p>In his Twitter feed ( <a href="http://twitter.com/robertrice" target="_blank">@RobertRice</a> ), Robert reminds us: &#8220;<span><span>By the way folks, what you see out there now as &#8220;augmented reality&#8221; is not what it is going to be in two years.&#8221; Robert plans to show the first public demo of his &#8220;platform for platforms&#8221; at <a href="http://gamesalfresco.com/ismar-2009/ismar-08/" target="_blank">ISMAR 2009</a>. </span></span></p>
<p>Robert is currently writing up a series of white papers. I got a preview of the first, &#8220;The Future of Mobile &#8211; Ubiquitous Computing and Augmented Reality.&#8221; Robert points out, <strong>&#8220;AR through the lens of the mobile industry and ubiquitous computing is almost overwhelming compared to AR as a marker-based marketing campaign.&#8221;</strong></p>
<p>I asked Robert, &#8220;What are the key take-aways for investors interested in the augmented reality field at the moment?&#8221;</p>
<p><strong><span>&#8220;First, Mobile AR is going to be bigger than the web. Second, it is going to affect nearly every industry and aspect of life. Third, the emerging sector needs aggressive investment with long term returns. Get-rich-quick startups in this space will blow through money and ultimately fail. We need smart VCs to jump in now and do it right. Fourth, AR has the potential to create a few hundred thousand jobs and entirely new professions. You want to kick start the economy or relive the golden days of 1990s innovation? Mobile AR is it.</span></strong></p>
<p><strong><span> Don&#8217;t be misguided by the gimmicky marketing applications now. Look ahead, and pay attention to what the visionaries are talking about right now. Find the right idea, help build the team, fund them, and then sit back and watch the world change. Also, AR has long term implications for smart cities, green tech, education, entertainment, and global industry. This is serious business, but it has to be done right. I&#8217;m more than happy to talk to any venture capitalist, angel investor, or company executive that wants to get a handle on what is out there, what is coming, and what the potential is. Understanding these is the first step to leveraging them for a competitive edge and building a new industry. Lastly, AR is not the same as last decade&#8217;s VR.&#8221;</span></strong></p>
<h3>Talking with Robert Rice</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRicepic.jpg"><img class="alignnone size-medium wp-image-4195" title="RobertRicepic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRicepic-201x300.jpg" alt="RobertRicepic" width="201" height="300" /></a></p>
<p><em><a href="http://www.flickr.com/photos/vannispen/3586765514/in/set-72157619022379089/" target="_blank">Picture of Robert Rice</a> at <a href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>MoMo</span></a> from <a href="http://www.flickr.com/photos/vannispen/"><strong>Guido van Nispen</strong></a>&#8216;s Flickr Stream</em></p>
<p><strong>Tish Shute:</strong> So perhaps we better start with an update on the state of play with Neogence?</p>
<p><strong>Robert Rice:</strong> Neogence is doing well actually. We don&#8217;t talk much about the fact that we are still a small startup and we face a lot of the usual obstacles related to that and being a small team. Fundraising has been extra difficult, mostly because people are just now beginning to see the potential in AR, but that is still colored by perceptions based on a lot of the gimmicky AR ad campaigns out there. Still, it is better than it was two years ago, when the idea of an AR startup was a bit of a joke to a lot of VCs we talked to. However, we do have an agreement from a new venture fund in Europe (which we can&#8217;t talk about yet) for our first round of funding, but we don&#8217;t expect to close that for several months.</p>
<p>If all goes well, we hope to debut our first public demo at ISMAR 2009 in Orlando to select individuals and a few press folks. We might release a few viral videos before then that are conceptual and about what we are building in the long run, <span>but that depends on how things go over the next several weeks</span>.</p>
<p>We are also very active in looking for and building strategic partnerships and relationships with other companies, and this is not restricted to the augmented reality or mobile sector. As I have said before, we are looking at this as a long term business venture and the industry as something that will be bigger than the web itself within ten years. We are doing typical contract work and custom AR solutions to keep the cash flow going and build up the corporate resume a bit. So, if you want something done, and better than the stuff you are seeing now with all of the generic &#8220;look at our brand in AR with markers and a webcam&#8221; you should definitely give us a call.</p>
<p style="margin-left: 0pt; margin-right: 0pt;"><strong>Tish Shute:</strong> Just to clarify, because most of the recent press has been about browser-type AR like Wikitude and Layar, which are not AR in the purist sense because they do not have graphics tightly linked to the physical world. Neogence, if I am correct, is focused on building a true AR platform in the sense I just described?</p>
<p><strong>Robert Rice: </strong>Hrm, I<span> have argued with a few others about the actual definition of AR. Some</span> people prefer a narrow and limiting view (3D overlaid on video), but I think in terms of the market and the end-user, it is better to have a wider definition. In that sense, AR is purely the blend of real and virtual, with or without full 3D overlaid on video. If we go with that, then Wikitude, Layar, Sekai, NRU, and others all fit into the AR definition.</p>
<p>Anyway, you are correct. We are building a true <span>platform for AR, and this is quite different from what others are marketing as AR browser &#8220;platforms.&#8221;</span></p>
<p><span>There are a few problems with the &#8220;AR Browsers&#8221; approach that no one seems to be noticing. </span>One is that they are all trying to get people to build new applications for their browsers, when they should be trying to get people to create content that they can share and browse.</p>
<p>Second, someone using Layar is not going to see anything that is designed for Sekai or Wikitude.</p>
<p>Third, the experiences are generally for one user. While I love all of these guys and think each of the teams has some real talent on it, the model is flawed until someone using Wikitude can see the same thing that someone using Layar or Sekai Camera is seeing (provided they are in the same physical location).</p>
<p><span>While we are working on our own client side technologies that we hope will be useful and integrated with every mobile device and AR browser out there, our core focus is on connecting everything and everyone together, and facilitating the growth of the industry with the tools to create content, applications, and so forth. We want to solve the really difficult technical problems (some of which most people haven&#8217;t even considered yet, because of the perspective from which they are looking at the potential of AR), and make it easy for everyone else to do the cool stuff. We want to be the facilitators.</span></p>
<p>If you really want an idea of where we are going or some of what has inspired us, you have GOT to read Dream Park, Rainbows End, and The Diamond Age. If you have heard me speak anywhere or read my blog, you know that I am continually suggesting these and others.</p>
<p>Anyway, short answer, yes, we are building a true <span>platform for </span><span>ubiquitous mobile augmented reality, and we are absolutely the first to be doing so</span>.<span> I hope to demo some of this in October at ISMAR, with a full commercial launch next year (10/10/10 at 10:10am &#8211; hehe, seriously). We will probably launch a website soon for people to start signing up and building a community now (especially if you want in on the beta testing of the whole kibosh).</span></p>
<p><strong>Tish:</strong> So just to clarify, how will Neogence&#8217;s approach differ from and fit into the growing world of Augmented Reality tools that we have now, e.g., <a href="http://www.hitl.washington.edu/artoolkit/" target="_blank">ARToolkit</a>, <a href="http://www.imagination.at/en/?Projects:Scientific_Projects:MARQ_-_Mobile_Augmented_Reality_Quest" target="_blank">Imagination</a>, <a href="http://www.metaio.com/products/" target="_blank">Unifeye</a>?</p>
<p><strong>Robert:</strong> I guess you could say that we are trying to build the infrastructure for the global augmented reality network. This could be viewed as a service, or even a platform for platforms. If Neogence does its job right, anything you create using ARToolkit, Unifeye, or Imagination would be applications you could <span>ultimately link to, integrate with, or deploy on or through</span>, what we are building, and not be tied to a specific set of hardware, browser, or walled garden.</p>
<p><strong>Tish: </strong><span>You mention Neogence is going to provide a platform for platforms. Without knowing the details that sounds like a lot of centralization, which prompts the inevitable question: &#8220;Who owns the data?&#8221; Do you think other AR applications or providers</span> would resist a &#8220;Platform for Platforms&#8221;? I know the potential centralization power of Google Wave has already got people talking about these issues (one of the comments in my recent blog post was about how the Google Wave protocol may be interesting for at least some parts of augmented reality communication).</p>
<p><strong>Robert:</strong> It really depends on perception and how we end up <span>building it. We aren&#8217;t talking about creating a closed system. As far as who owns the data, it depends on what data we are talking about. For the most part, I think that if the end-user creates something, they should own it and have control over it. They should also be able to do what they want with it, independent of everything else. </span></p>
<p><span>This is one thing that proponents of the smart cloud and the thin/dumb client don&#8217;t like to talk about. It sounds great on paper, but when you start thinking about it, all that does is strip away power from the end user. Case in point&#8230;Amazon recently wiped every copy of George Orwell&#8217;s 1984 from all Kindle devices. They claimed they didn&#8217;t have rights to distribute/publish it and it was available by accident. The scary thing though, is that they literally went into every Kindle out there, found copies, and deleted them.</span></p>
<p><span> How would you like it if Microsoft suddenly decided to delete every copy of Microsoft Office? Or every file that had a .doc extension? That is a huge violation&#8230;we feel like we own what is on our computers. But with the whole cloud thing, your data is at the mercy of whoever is running the cloud servers. No privacy, no ownership, no control. And if the system breaks, all you will have is a pretty dumb device that can&#8217;t do much on its own. Now, that isn&#8217;t to say that the technical merits and benefits of a cloud model aren&#8217;t worth pursuing, they are.</span></p>
<p><span> But I think there needs to be some hybrid model. Don&#8217;t dumb down my computer or my smart phone, let&#8217;s keep pushing how much these devices can do. We should take full advantage of centralized and distributed systems, but in a hybrid mashup sense. That is what we are pursuing with our AR platform, while trying to protect ownership and intellectual property rights of the end user.</span></p>
<p><strong>Tish: </strong>Earlier today I was telling you how impressed I was by Google Wave &#8211; it is quite mind blowing to experience massively multiplayer real time interaction on what will be an open internet wide platform &#8211; Wave is breaking new ground here and more than one person has mentioned its potential role in AR to me (see <a href="http://www.ugotrade.com/2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/" target="_blank">the comments to my recent post on Ogmento</a>).</p>
<p>I know you are a strong advocate of this kind of real time shared experience being part of AR. But we are only just beginning to see it emerge via Wave on the existing web &#8211; what will it take to have this kind of real time shared experience in AR? We got briefly into the thick client, thin client, cloud versus P2P discussions &#8211; what is your approach to delivering a massively shared real time experience that, like Wave, is not confined to a walled garden?</p>
<p><strong>Robert:</strong> I&#8217;<span>m not a fan of any of those models as being stand alone or mutually exclusive. Again, the hybrid model with the best of both worlds is key. In the early stages of the emerging industry, you are likely to see some walled gardens (or perhaps a walled garden of walled gardens&#8230;). </span></p>
<p><span>No one knows how things are going to turn out in the next five to ten years and few people are thinking about it actively. For us though, I favor Alan Kay&#8217;s quote (pardon the paraphrasing): &#8220;To accurately predict the future, invent it&#8221;. That&#8217;s what we are doing. In the short term, there will be plenty of experimentation in the industry and a lot of model testing.</span></p>
<p><strong>Tish: </strong>Do you think though Wave protocols might be useful as at least part of the picture for AR standards? As you point out, open standards and open protocols are going to be vital for shared experiences of AR. Is it important to build off existing protocols to get the ball rolling, and what do you see as being the important early protocols for AR?</p>
<p><strong>Robert:</strong> I think for now, we will use a lot of existing protocols for communications and whatnot, as well as the usual standards for things like 3D models, animation, and so forth. This is only natural. However, as the industry and technology evolves, we will need entirely new ones. As far as I know there is no existing market standard for anything like the Holographic Doctor from Star Trek Voyager, and that type of thing is definitely in the pipeline for the future (sooner than you would think).</p>
<p><strong>Tish:</strong> All the excitement at the arrival of the browser-like mobile reality developments has been really great &#8211; I feel people are getting a taste for what it means to compute with anyone/anything, anywhere and anytime.</p>
<p>Wikitude started the ball rolling. And with Wikitude.me it is the first to support user generated content. Now there are Layar and Sekai Camera as well. But as you mentioned to me in an earlier chat, with Layar and Wikitude opening up, &#8220;there are probably a half dozen other apps coming out in short order with similar functionality (even the AR twitter thing has some similarities).&#8221;</p>
<p>What has been most exciting to you about these developments up to this point? What will these apps/platforms need to do to stand out in a crowd? Up to now, these browser-like AR experiences do nothing with close by objects. Do you see &#8220;world browsers&#8221; with near object recognition coming out in the near future? Could Wikitude do this with an integration of SRengine or Imagination?</p>
<p><strong>Robert:</strong> Yes, Wikitude<span> or Layar could do this (integrate with something else for &#8220;near&#8221; AR) and it would be a step in the right direction. Tagging things in the real world is the basic functionality that will grow from text tags to photos, videos, 3D objects, and all sorts of other types of data and meta data. This gets really fun when that data is generated by the object itself. First is just giving people the ability to tag something and share that tag with their friends, everything else grows from that. This sort of functionality is probably the most exciting in terms of near future advancement.</span></p>
<p><span>However, I think the idea of a stand-alone</span> browser platform is a bit awkward&#8230;unless you also consider Firefox a website browser platform. After all, you can create widgets (applications) for it. Anyway, the point is having access to the same data&#8230;if you put three people in a room, one for each browser, they should see and experience the same content, although the interface might be different (based on what browser and of course which hardware they are using). This means there needs to be some communication between whatever servers they are storing their data on (meaning, user tags) and some standard for how those tags are created.</p>
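<p><em>A common tag format is the crux of the interoperability point above. Here is a minimal sketch of what a shared, browser-neutral geo-tag record might look like; every field name is a hypothetical illustration for this post, not any browser&#8217;s actual schema or an existing standard.</em></p>

```python
# Hypothetical sketch of a browser-neutral geo-tag record that any AR
# browser (Wikitude, Layar, Sekai Camera) could read. All field names
# are invented for illustration; no such standard existed at the time.
import json

def make_geo_tag(lat, lon, author, media_type, payload):
    """Build a tag anchored to a physical location."""
    return {
        "lat": lat,                # WGS84 latitude in degrees
        "lon": lon,                # WGS84 longitude in degrees
        "author": author,          # who created the tag
        "media_type": media_type,  # "text", "photo", "video", "3d-model", ...
        "payload": payload,        # the content itself, or a URL to it
    }

# Serializing to JSON lets different servers exchange the same tag,
# so users of different browsers see the same content at one location.
tag = make_geo_tag(40.7580, -73.9855, "tish", "text", "Times Square note")
wire = json.dumps(tag)
same_tag = json.loads(wire)
```

The point of the sketch is only that the record, not the browser, is the unit of sharing: any client that can parse the wire format renders the same tag.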
<p>Of course, if all they are doing is grabbing the GPS coordinates of the nearest subway station and telling you how far it is and in what direction, then they should all be able to see the same thing, regardless of the platform. But then, that isn&#8217;t really interesting, is it? I could get the same info on a laptop with Google Maps.</p>
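<p><em>The &#8220;how far and in what direction&#8221; computation Robert is dismissing as table stakes is a small piece of standard geodesy &#8211; the haversine distance and initial-bearing formulas. A sketch (the formulas are textbook ones, nothing specific to any AR browser):</em></p>

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (haversine) and initial bearing in
    degrees clockwise from north, from point 1 to point 2 (WGS84 degrees)."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    # Haversine formula for the central angle between the two points
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    # Initial bearing of the great-circle path
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing

# One degree of longitude along the equator: roughly 111 km, due east.
d, b = distance_and_bearing(0.0, 0.0, 0.0, 1.0)
```

An AR overlay only has to combine this bearing with the device&#8217;s compass heading to know where on screen to draw the label.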
<p>This is part of the problem right now though&#8230;no one seems to be thinking about the bigger picture much. All of the effort is either on making the next cool ad campaign for a car or a movie, or creating a tool to tell you where the nearest thingamajig is, but in a really cool fashion on a mobile device.</p>
<p>No one is talking much about filtering data, privilege systems, standards, third party tools, interoperability, and so on. There is also little conversation about where hardware is going. Right now everyone is developing software based on what hardware is available. This needs to change where hardware is being developed to take advantage of new software coming out (this happened in the PC industry a while back and growth accelerated dramatically).</p>
<p>These are some of the reasons why I led the effort to start the AR Consortium. We brought CEOs from 8 different AR companies and startups together to start talking about these issues. We are still getting organized and have plans to expand the membership to other companies, but we want to do this right and we aren&#8217;t rushing things. The important thing is that we have started and there is at least a line of communication open now, where there wasn&#8217;t before.</p>
<p>I would expect to see the early movers expanding what they offer very soon, and they will probably lead the way in the short term. Definitely keep an eye on the companies involved in the AR Consortium. There are lots of very smart and motivated people there, and they are far ahead of all the experimental dabbling in AR we are beginning to see on youtube, twitter, and elsewhere.</p>
<p><strong>Tish: </strong>When we had a discussion about what were the basics for an AR platform and an AR browser earlier, you talked about the difference between tools, a platform, and an AR browser &#8211; like Wikitude and Layar &#8211; which should be about features/functionality, e.g., treasure hunts, AR geocaching, invisible AR yellow sticky notes you can leave at restaurants you don&#8217;t like, etc. Also you noted it should let you explore (browse) multiple formats and open content for AR &#8211; any data, information, or media that is linked to something in the real world, and the visualization of/interaction with the same.</p>
<p>Wikitude<span> is a stepping stone to a true browser by your definition. But are we also seeing what you would define as an AR platform emerging &#8211; Unifeye, Wikitude (you can recap your definition if you like too)?</span></p>
<p>I think Wikitude hopes to provide the Lego blocks for augmented reality readers, browsers, applications, tools, and platforms?</p>
<p><strong>Robert:</strong> I expect some segmentation among the various AR companies that are out now, as they find their individual strengths and focus on them. Some will emphasize the client software (the browser), others will develop robust tools for creating content, SDKs/APIs will advance and facilitate rapid development of applications, etc. Neogence is ultimately working on the glue in the middle that ties everything together, makes it massively multiuser, persistent, and ubiquitous. Things like Unity3D have the potential to fill a need in the middleware space.</p>
<p><strong>Tish:</strong> I know <a href="http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/" target="_blank">Blair MacIntyre</a> (see my interview with Blair here) and others are using Unity3D as an AR client. Could Unity3D become increasingly important?</p>
<p><strong>Robert:</strong> It has the potential to become a favored middleware for providing the rendering layer. It already works nicely in regular browsers, and on several mobile platforms. Why code all the graphics rendering stuff from scratch when you can just license something and extend its features with AR functionality?</p>
<p><strong>Tish:</strong> Now to ask your own question back to you! There seems to be a lot of reason to think that, eventually, there will be the kind of access to the iPhone video API that augmented reality really requires, and by that I mean more than we will get with OS 3.1, which is rumored to deliver only about half of what we really need for AR on the iPhone &#8211; &#8220;not truly useful when you want to align video with graphics.&#8221; So:</p>
<p><em>&#8220;The iPhone&#8230;future or failure? Seemingly anti-developer stance regarding augmented reality, and only a sliver of the global market share. Are we letting the short term glitz of Apple and the iPhone fad pull us in the wrong direction? Shouldn&#8217;t we be focusing on Symbian devices that have the lion&#8217;s share of the market? Or should we be looking more at either other OSs (WinMobile, Android), or not at all, and trying to create a new platform that is more MID and less smart phone with a hardware partner?&#8221;</em></p>
<p><strong>Robert:</strong> Apple and the iPhone are a bit problematic right now. There is no way I can go to a venture capitalist (at least in North America) and say, hey, we are building awesome AR applications for WinMobile or Symbian&#8230;they would either laugh or they simply wouldn&#8217;t get it. There is this false perception that the iPhone is the ultimate mobile device, it is the sexiest, and the only thing that people want. Everyone wants a demo on the iPhone, the media is mostly interested in iPhone developments, and the Apple fanatic market could give a fig about other devices. Other devices may have a larger market share or even better hardware, but we have to focus on the iPhone right now, at least in the demo stage, to get any market attention and traction worth the time and effort.</p>
<p>In the future though, unless Apple changes its stance with their SDK and APIs, and starts adding hardware that is key for mobile AR (beyond what is there now), the market will move on without them. <span>This is a really easy decision to make given Apple&#8217;s draconian policies and the fact that their percentage of the global market is minuscule. The smart companies are looking at the whole picture and not putting all of their eggs in the Apple basket.</span></p>
<p>Of course, once the wearable displays are commercially viable everything changes. Wearable computers with small screens or even no screens are going to be what everyone wants. The interface will go from handheld touch screens to virtual holographic interfaces that you interact with using your bare hands.</p>
<p>So for now, <span>(the immediate short term), </span>it&#8217;s all about the iPhone. Taking mobile ubiquitous AR to the global market and building for the future will be based on something else. Hardware risks becoming a commodity or a closed platform. Do you really want to buy the Apple iGlasses and only see AR content that is compatible, where your best friend has a pair of WinGlasses and sees something entirely different? No. The hardware, and the client software (what people are calling the AR browser now) will become common and it won&#8217;t matter what brand you use, they will all be accessing the same content.</p>
<p>But at least for the foreseeable future, we are building software for specific hardware, and the sexiest mobile on the block is the iPhone. The second someone comes out with something much better and the paradigm shifts (software driving hardware instead of vice versa) everything changes.</p>
<p><strong>Tish:</strong> How is the quest for sexy AR eyewear going? I know we were checking out <a href="http://www.masunaga1905.jp/brand/teleglass/" target="_blank">the Japanese eyewear</a> with Adam Johnson from <a href="http://genkii.com/" target="_blank">Genkii</a> just now. For the Neogence project &#8211; as you are going for a fully developed model of AR, doesn&#8217;t this necessitate going beyond the iPhone and getting the hardware companies moving on the eyewear?</p>
<p><strong>Robert:</strong> The guys making wearable displays really need to get off the pot and stop paying lip service to mobile AR. If they don&#8217;t do something quick, I,<span> and others, are</span> going to be scouring the planet looking for someone capable of building the lightweight, stylish wearable displays with transparent lenses we are begging for. We aren&#8217;t going to be waiting around for hardware anymore. The AR Pandora&#8217;s box has been opened. I should note that many of us (AR Consortium members) have had less than pleasant experiences or communications with the half dozen or so companies that are making wearable displays. Either their visual design is terrible, the materials feel flimsy, the field of view is limited, or the companies are preoccupied with other business and government contracts. Any attention to the growing AR market is an afterthought and in a few cases condescending. AR is going to be a billion dollar industry in a very short time, and these guys are just leaving money on the table. If they were smart, they would be begging the CEOs from the AR Consortium to fly out to their offices and collaborate on building a pair of wicked sick glasses. The smart phone manufacturers should be doing the same thing, but I have to say that they at least seem to have some ambition and zeal to create better devices, so I can&#8217;t really complain too much there.</p>
<p>Anyway, to answer the rest of your question, we have to assume that the hardware guys, especially regarding the eyewear, are going to take a long time to develop and release the things we need for the ultimate AR experience. So, our goal is to start building things now for what is available. That means scaling things down and handicapping what AR can do, so it works on the &#8220;sexy&#8221; iPhone. The important thing though is to start creating applications -now- so when the glasses are commercially available, there will be a wealth of content for people to access and use on day one.</p>
<p>As long as Apple isn&#8217;t playing nice,<span> </span>it is going to hurt everyone. <span>Is it any surprise that they shut down Google Voice? </span> There is a huge opportunity for someone to step up and leapfrog the rest of the industry. Give us the hardware and we will create amazing software for it. Don&#8217;t compete with the iphone, surpass it.</p>
<p><strong>Tish: </strong>What is the state of play of current AR technology and toolkits?</p>
<p><strong>Robert:</strong> The current crop of AR technology and toolkits is absolutely critical for this stage of the industry, and everyone should be leveraging it as much as possible. I talk down marker and image based tracking a lot, but I also like to point out that it is the necessary baseline that the industry is going to be built on. The problem is that there is only so much you can do with marker driven apps, and as creative people and marketing types start conceptualizing about all sorts of cool stuff for the future, they risk setting the expectations too high. It is one thing to show someone the future, it is another to say this is the future and it&#8217;s happening right now. This is why I cringe every time I see a conceptual video presented as &#8220;our product DOES this&#8221; instead of &#8220;our product WILL DO this.&#8221; <span>Something that simple can still cause the butterfly effect of raising expectations too high and contribute to overhyping.</span></p>
<p><strong>Tish: </strong>One of the things that seems very exciting about the new <a href="http://ogmento.com/" target="_blank">Ogmento</a> partnership is that experienced content producers <a id="squu" title="Brad Foxhoven" href="http://www.blockade.com.nyud.net:8080/about/about-blockade" target="_blank">Brad Foxhoven</a> and <a id="odvk" title="Brian Selzer" href="http://brianselzer.com/">Brian Selzer</a> from <a id="xow_" title="Blockade" href="http://www.blockade.com/" target="_blank">Blockade</a> are now taking a leading role in AR. What are the most exciting directions for content that you see emerging for AR in the next 12 months?</p>
<p><strong>Robert:</strong> Virtual (well, augmented) pets, and multiuser mobile AR games (2-4 people) are probably going to lead in the next 12 months for content. Easy, accessible, engaging.</p>
<p><strong>Tish: </strong>And are you at Neogence also involved in content partnerships?</p>
<p><strong>Robert:</strong> Yes, we are in the process of finalizing some content partnerships with an eye toward long-term relationships. We are specifically looking for partners that want to find substantive ways to leverage AR technology, not use it as a superficial gimmick or an attraction that wears off after five minutes. I&#8217;m still cringing over the Procter &amp; Gamble Always campaign with AR.</p>
<p><strong>Tish:</strong> So back to your observation about some of the tricky problems in creating a true global, massively multiuser, ubiquitous, mobile AR platform &#8211; what are some of the main obstacles to this mission in your view (aside from getting investment!)?</p>
<p><strong>Robert:</strong> Trying to explain it to people. The technical problems we can handle or have already solved. But trying to communicate what exactly we are doing is still tough. Not because it is overly complicated, but rather because it is so new and different. People are having a hard time grasping augmented reality beyond marker/webcam.</p>
<p><strong>Tish: </strong>Which AR tools are most important right now?</p>
<p><strong>Robert:</strong> Content is critical right now, both to show what the technology is capable of and to continue building the presence of augmented reality in the public mind. The big benefit of integrated/unified platforms right now is speed of content development. I think the Flash ARToolKit + Papervision combination is rocking the planet right now. It is accessible, easy to learn, and lets people create something very quickly. More tools and middleware are coming out, and this increases the options for designers and developers.</p>
<p><strong>Tish: </strong>What are your favorite papervision apps?</p>
<p><strong>Robert: </strong>Hrm, I don&#8217;t have a favorite Papervision app just yet, although I think the tech is solid. I expect to see a lot of stuff built on that platform in the near future, especially as more ad agencies get on the bandwagon and start telling their IT guys to learn Flash so they can make something. Have you seen www.ronaldchevalier.com? Not so much for the actual AR stuff, but because the whole thing is just brilliant. It&#8217;s exactly what some cult-figure spiritual guru would do with AR. I wish I had thought of it first, actually. This is probably one of the best -seamless- implementations of AR in marketing, where it fits&#8230;it isn&#8217;t just jammed in there for the sake of saying they used AR.</p>
<p><strong>Tish:</strong> Do you think Apple is going to open the iPhone to the full potential of augmented reality anytime soon? A lot of expectations have been raised.</p>
<p><strong>Robert:</strong> Apple is like that guy who has a party at his house and owns a really awesome, state-of-the-art home theater in his basement, but makes everyone watch a movie in the living room on a regular TV with a VCR.</p>
<p>They need to get over themselves and quit being a wet blanket. Otherwise, we are taking the beer and pizza we brought, and going to someone else&#8217;s house. <span>Sorry, the Apple thing is a bit of a sore point with me.</span></p>
<p><strong>Tish:</strong> But will people leave all that candy and soda at the appstore?</p>
<p><strong>Robert:</strong> I tell you what, though: there is an opportunity for certain mobile phone manufacturers to give me a call and start talking to Neogence and the other members of the Consortium. We have some ideas and specs that could have a radical impact on the mobile market and stuff the iPhone in a box. Hint hint.</p>
<p><strong>Tish:</strong> So what is your vision for the AR Consortium? I know it kicked off with a letter to Apple about the video API. What is the next step? There was a lot of hope that this year would be big for MIDs, but this really hasn&#8217;t happened yet &#8211; do you think there is hope for a MID take-off despite the lousy economy?</p>
<p><strong>Robert: </strong>MIDs? No, not yet. Smartphones are too lucrative and too hot. It isn&#8217;t time yet for the MID to go mainstream. For that to happen, there needs to be a driving need (cough, ubiquitous AR, cough).</p>
<p>The AR consortium is mostly an informal affiliation. I expect that representatives from each member will probably meet at every significant conference to catch up over drinks. We are also going to be planning for our own members conference at least once a year. That will happen after we expand the membership though.</p>
<p>The main idea behind the consortium though was to open up a channel of communication between the CEOs so we could work together on standards, solving problems, collaborating, forming some partnerships, and using the collective to bang on the doors of companies like Apple and others. There is power in a group.</p>
<p><strong>Tish:</strong> You mentioned there is a whole long conversation we can have about getting the eyewear. As you point out, true AR eyewear changes everything. Can you give a little road map of where this has to go?</p>
<p><strong>Robert: </strong>There are essentially four or five main approaches, depending on whether you make the lenses special or leave them plain. You would normally want them plain so people with prescription lenses wouldn&#8217;t have problems and would have the option to switch them out. Some types use a more prismatic approach for top-down projection, or a corner piece mounts lasers and bounces them off the lens into the eye. Another approach is embedding OLEDs or something else into the lenses themselves.</p>
<p>I really like the <a href="http://www.lumus-optical.com/" target="_blank">Lumus</a> approach, but their product design isn&#8217;t quite there yet. If the wearables don&#8217;t look cool, people won&#8217;t use them. To be honest, if I had the money, I&#8217;d probably ask the Art Lebedev guys to design them based on someone else&#8217;s optical engineering. They designed the <a href="http://www.artlebedev.com/everything/optimus/" target="_blank">Optimus Maximus</a> keyboard&#8230; brilliant industrial designers, loaded with engineers too. If these guys couldn&#8217;t build the glasses and make them look damn bad ass, I&#8217;d be shocked. Heck, I bet they could build the next-gen MID while they were at it.</p>
<p><strong>Tish: </strong>Getting the hardware innovation and software innovation feeding into each other would be really great.</p>
<p><strong>Robert</strong>: Absolutely.</p>
<p><strong>Tish</strong>: That would push the eyewear forward too wouldn&#8217;t it?</p>
<p><strong>Robert:</strong> All it takes is one, and then the competitive landscape would fire right up.</p>
<p><strong>Tish:</strong> What applications would accurate GPS enable?</p>
<p><strong>Robert:</strong> Everything. For example, if you know exactly where the phone is and which way it is facing, you can put it on a table and hit a button, then move it somewhere else and do the same thing. In a few minutes you have a fairly accurate &#8220;mental&#8221; model of the whole place, and now you can go back and start dropping virtual flower pots everywhere.</p>
<p>This is one area where I think the smartphone guys are missing the boat and taking the cheap route. It is possible to have very accurate GPS (down to a six-inch area) with better chips and firmware, but it is cheaper to stick in old tech. Most apps today don&#8217;t need that hyper-accuracy, so they aren&#8217;t bothering. Mobile AR, though, that&#8217;s a different story.</p>
<p>With that level of accuracy, you would know exactly where the mobile device is, so all you would need is the direction it is facing (orientation), and you could solve one of the problems of registering exactly where 3D objects and augmented media are (it is more complicated than I am describing, but we don&#8217;t need that much detail here). You wouldn&#8217;t need markers anymore.</p>
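<p>To make the geometry concrete, here is a minimal sketch (not Neogence&#8217;s actual method) of how precise position plus orientation could replace markers: once the device pose is known, a geo-anchored object&#8217;s offset can be turned directly into an on-screen position. The local east/north/up coordinates, field-of-view values, and the simple angle-to-pixel mapping are all illustrative assumptions.</p>

```python
import math

def project_to_screen(obj_enu, cam_enu, cam_yaw_deg,
                      img_w=640, img_h=480, hfov_deg=60.0, vfov_deg=45.0):
    """Project a geo-anchored point into pixel coordinates.

    obj_enu, cam_enu: (east, north, up) positions in meters in a shared
    local frame. cam_yaw_deg: compass heading the camera faces
    (0 = north, 90 = east). Returns (x, y) pixels, or None if the
    point falls outside the field of view.
    """
    de = obj_enu[0] - cam_enu[0]
    dn = obj_enu[1] - cam_enu[1]
    du = obj_enu[2] - cam_enu[2]

    # Bearing from camera to object, then the angle relative to the
    # camera's heading, wrapped into [-180, 180).
    bearing = math.degrees(math.atan2(de, dn))            # 0 = north
    az = (bearing - cam_yaw_deg + 180.0) % 360.0 - 180.0

    ground = math.hypot(de, dn)
    elev = math.degrees(math.atan2(du, ground))           # elevation angle

    if abs(az) > hfov_deg / 2 or abs(elev) > vfov_deg / 2:
        return None  # outside the camera frustum

    # Linear mapping of angles to pixels (good enough for a sketch;
    # a real renderer would use full camera intrinsics and 3D rotation).
    x = (az / hfov_deg + 0.5) * img_w
    y = (0.5 - elev / vfov_deg) * img_h
    return (x, y)
```

<p>An object 10 meters due north of a north-facing camera at the same height lands at the center of the 640&#215;480 frame; the point is that no visual marker is needed anywhere in the calculation.</p>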
<p><strong>Tish: </strong>Isn&#8217;t Wikitude doing this with Wikitude.me, their tagging app?</p>
<p><strong>Robert:</strong> Not really. That type of approach works on a very large scale, using the accelerometer, compass, and GPS to determine where you are and what is in the distance. They (and others like Layar) don&#8217;t handle &#8220;near&#8221; AR. They effectively poll your GPS, check a database to see what is nearby and at what bearing and distance it lies, and then draw a representation on the screen. They don&#8217;t even need the mobile device&#8217;s camera at all.</p>
<p>Even if they did things up close, it&#8217;s still based on finding landmarks or on things that are broadcasting their location. For example, if they were standing near me, they might get &#8220;robert, 37 degrees, 15 meters away,&#8221; but they wouldn&#8217;t be tracking me exactly as I walk around or have the ability to overlay graphics on ME.</p>
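<p>For readers curious what a &#8220;37 degrees, 15 meters away&#8221; readout involves, here is a rough sketch of the distance-and-bearing math a geo-layer browser in the Wikitude/Layar mold performs for each point of interest. The haversine formula is standard; the function name and example coordinates are illustrative, not any particular vendor&#8217;s code.</p>

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (meters) and initial bearing (degrees,
    0 = north, clockwise) from point 1 to point 2, via the haversine
    formula. A geo-layer browser runs this for every nearby POI and
    draws a label where that bearing falls against the compass heading."""
    R = 6_371_000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)

    # Haversine distance.
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    dist = 2 * R * math.asin(math.sqrt(a))

    # Initial bearing toward the POI.
    y = math.sin(dlmb) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb))
    bearing = math.degrees(math.atan2(y, x)) % 360.0
    return dist, bearing
```

<p>Notice that nothing in this pipeline ever looks at the camera image, which is exactly the limitation Robert describes: it can tell you something is 15 meters away at 37 degrees, but it cannot track that thing as it moves.</p>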
<p><strong>Tish:</strong> I retweeted your <a title="#ar" href="http://twitter.com/search?q=%23ar">#ar</a> marketing using ARToolkit + flash (markers/webcams) = Photoshop pagecurl &lt;six months. Bad design kills innovation. I know you like <a href="http://ronaldchevalier.com/" target="_blank">Dr Chevalier</a> though! What are some of the other AR marketing projects that you like? What would you like to see in terms of innovation in the next 6 months?</p>
<p><strong>Robert:</strong> The marker/webcam approach is already becoming overused and clich&#233; (tremendously fast). Older readers will remember the ubiquitous Photoshop page curl that adorned nearly every website and graphic on the internet back in the day. It was horrible. Yes, the Dr. Chevalier stuff cracks me up.</p>
<p>I want to see some big companies or ad agencies really try to do something different with AR, preferably mobile. Take some risks, do something different. Don&#8217;t follow the crowd. Innovation? I want to see some wearable displays with transparent lenses, I want a mobile device specifically designed for ubiquitous AR, I want to see some experimenting with AR in the green tech sector, and I&#8217;d like to see someone get that GiFi wireless technology from that researcher in Australia and jam it into a smart mobile. I would also like my flying car and lunar vacation now, thank you. It is almost 2010 and no one has found that black obelisk yet.</p>
<p><strong>Tish:</strong> So a few closing thoughts! What do you see as the next big thing? Hopes for the AR Consortium? Biggest obstacle for commercial AR? And what is the coolest thing you have seen this year?!</p>
<p><strong>Robert:</strong> The next big thing is what I&#8217;m working on hahaha. I hope the AR Consortium will grow and be the active catalyst in making AR mainstream, practical, and world changing.</p>
<p>The biggest obstacle is making sure that the right funding finds the right developers to develop the right technology and create kick ass applications.</p>
<p>The coolest thing I&#8217;ve seen this year would probably be <a href="http://vimeo.com/5595869" target="_blank">the facade projection stuff</a> (see below). Now, imagine that, but without the projector. That&#8217;s part of what I envision for AR in the future.</p>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="225" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=5595869&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /><embed type="application/x-shockwave-flash" width="400" height="225" src="http://vimeo.com/moogaloop.swf?clip_id=5595869&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/feed/</wfw:commentRss>
		<slash:comments>20</slash:comments>
		</item>
		<item>
		<title>Composing Reality and Bringing Games into Life: Talking with Ori Inbar about Mobile Augmented Reality</title>
		<link>http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/</link>
		<comments>http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/#comments</comments>
		<pubDate>Wed, 06 May 2009 14:50:30 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[CurrentCost]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[HomeCamp]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Kids With Cameras]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Adam Greenfield]]></category>
		<category><![CDATA[Add new tag]]></category>
		<category><![CDATA[alternate reality games]]></category>
		<category><![CDATA[alternative reality gaming]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR goggles]]></category>
		<category><![CDATA[ARToolkit]]></category>
		<category><![CDATA[augmented reality games]]></category>
		<category><![CDATA[augmented times]]></category>
		<category><![CDATA[Better Place]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Caryatids]]></category>
		<category><![CDATA[Come Out and Play]]></category>
		<category><![CDATA[composing reality]]></category>
		<category><![CDATA[Cory Doctorow]]></category>
		<category><![CDATA[eyewear for augmented reality]]></category>
		<category><![CDATA[game development conference]]></category>
		<category><![CDATA[Games Alfresco]]></category>
		<category><![CDATA[games for preschoolers on the iphone]]></category>
		<category><![CDATA[games on the iphone]]></category>
		<category><![CDATA[GDC 2009]]></category>
		<category><![CDATA[GE augmented reality ad]]></category>
		<category><![CDATA[google earth]]></category>
		<category><![CDATA[green technology]]></category>
		<category><![CDATA[image recognition]]></category>
		<category><![CDATA[Immersive augmented reality]]></category>
		<category><![CDATA[Int 13]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[iphone games]]></category>
		<category><![CDATA[iPhone OS 3]]></category>
		<category><![CDATA[iphone versus the android]]></category>
		<category><![CDATA[ISMAR]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[jane mcgonigal]]></category>
		<category><![CDATA[julian Bleeker]]></category>
		<category><![CDATA[Kati London]]></category>
		<category><![CDATA[Kweekies]]></category>
		<category><![CDATA[Loopt]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[Microsoft Tag]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile gaming]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Netweaver]]></category>
		<category><![CDATA[open source augmented reality]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Pookatak]]></category>
		<category><![CDATA[Pookatak Games]]></category>
		<category><![CDATA[reality experiences]]></category>
		<category><![CDATA[RFID]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[Rouli Nir]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[Shai Agassi]]></category>
		<category><![CDATA[smart environments]]></category>
		<category><![CDATA[smart objects]]></category>
		<category><![CDATA[The End of Hardware]]></category>
		<category><![CDATA[the Pong for augmented reality]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubiquitous augmented reality]]></category>
		<category><![CDATA[ubiquitous experience]]></category>
		<category><![CDATA[virtual reality]]></category>
		<category><![CDATA[WARM 09]]></category>
		<category><![CDATA[Wattzon]]></category>
		<category><![CDATA[Where 2.0]]></category>
		<category><![CDATA[WikiMouse]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3448</guid>
		<description><![CDATA[Recently, I talked to Ori Inbar (above), formerly senior vice president at SAP. Ori is on a mission to make augmented reality commercially successful not in 5, 10, or 15 years, but now. Ori is the founder of Pookatak Games &#8211; a video game company, &#8220;with a vision to upgrade the way people experience the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/oriinbarpost.jpg"><img class="alignnone size-medium wp-image-3449" title="oriinbarpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/oriinbarpost-300x199.jpg" alt="oriinbarpost" width="300" height="199" /></a></p>
<p>Recently, I talked to <a href="http://gamesalfresco.com/">Ori Inbar</a> (above), formerly senior vice president at <a href="http://www.sap.com/">SAP</a>. Ori is on a mission to make augmented reality commercially successful not in 5, 10, or 15 years, but now. Ori is the founder of <a href="http://gamesalfresco.com/about/" target="_blank">Pookatak Games</a> &#8211; a video game company <strong>&#8220;with a vision to upgrade the way people experience the world.&#8221;</strong> Ori will be participating May 20th in <a href="http://en.oreilly.com/where2009/public/schedule/detail/7197" target="_blank">O&#8217;Reilly&#8217;s Where 2.0 panel, &#8220;Mobile Reality&#8221;</a> &#8211; an event not to be missed, IMO.</p>
<p>The taste for computing anywhere, anytime has entered human culture via the iPhone and is spreading like chocolate cake and pizza at a preschool party (see <a href="http://gamesalfresco.com/2009/03/23/gdc-2009-why-the-iphone-just-changed-everything/" target="_self">why the iPhone changed everything</a>). And while the full flowering of the next step &#8211; computing anywhere, anytime, by anyone and <strong>anything</strong> (<a href="http://en.wikipedia.org/wiki/Internet_of_Things" target="_blank">&#8220;the internet of things&#8221;</a>) &#8211; is yet to come, our love for these first devices capable of being <strong>mediating artifacts for ubiquitous computing</strong> (Adam Greenfield) is a vital first step to free us from our tethers to computer screens and fulfill the promise of augmented reality.</p>
<p>If you need more convincing on the pivotal role augmented reality will play as the web moves into the world, check out Tim O&#8217;Reilly&#8217;s recent comments in <a id="iz1_" title="this video clip on Augmented Times" href="http://artimes.rouli.net/2009/04/tim-oreilly-on-recognition-rfid-and-web.html" target="_blank">this video clip posted on Augmented Times</a> and <a id="wtf4" title="here" href="http://radar.oreilly.com/2008/02/augmented-reality-a-practical.html" target="_blank">here</a> early last year.</p>
<p>From another perspective, the gloomy specter of economic and environmental catastrophe is driving a movement to &#8220;<a id="h5pf" title="infuse intelligence into the way the world works" href="http://news.bbc.co.uk/2/hi/technology/7992480.stm" target="_blank">infuse intelligence into the way the world works.&#8221;</a> But the challenge for a smart planet is not just about making environments smart; it is about using smart environments to enable people to act smarter (<a href="http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/" target="_blank">see my interview with Adam Greenfield</a>).</p>
<p>We need a rapid upgrade in both the way the world works, and the way we experience the world.</p>
<p>(Note: It is time to read, if you haven&#8217;t already, <a href="http://search.barnesandnoble.com/The-Caryatids/Bruce-Sterling/e/9780345460622" target="_blank">Bruce Sterling&#8217;s Caryatids</a> (Cory Doctorow&#8217;s book of the year for 2009) &#8220;as a software design manual&#8221; (<a href="http://www.nearfuturelaboratory.com/2009/03/17/design-fiction-a-short-essay-on-design-science-fact-and-fiction/" target="_blank">see Julian Bleeker</a>), because Caryatids reveals the Gordian knots of human folly, greed, compassion and desire entwined in near-future designs for technologies to save the world.)</p>
<p>Ori Inbar worked with Shai Agassi (Shai is now leading the world-changing <a id="v5ow" title="Better Place" href="http://www.betterplace.com/" target="_blank">Better Place</a>), driving <a id="gf_5" title="Netweaver" href="http://en.wikipedia.org/wiki/NetWeaver" target="_blank">Netweaver</a> from a mere concept to a &#8220;major, major business for SAP.&#8221; So Ori has already been through the cycle of working in a very small startup and growing it into a billion-dollar business. He has both the experience and the passion to realize his vision for augmented reality.</p>
<p>At Pookatak, he explains:</p>
<p><strong>&#8220;We design &#8216;reality experiences&#8217; that make users&#8217; immediate environments more significant to them. We wish to free young and old from getting lost in front of the screen. By delivering the world&#8217;s information to people&#8217;s field of view, and by weaving real world objects into interactive narratives, we help people rediscover the real world.&#8221;</strong></p>
<p>Pookatak will release their first game this summer. Currently it is under wraps. But Ori gives us some glimpses of what is to come in the interview below.</p>
<p>In addition to founding Pookatak, Ori is involved in a broader effort to move augmented reality forward. On his blog, <a id="ie5s" title="Games Alfresco" href="http://gamesalfresco.com/" target="_blank">Games Alfresco</a> &#8211; where he recently welcomed <a href="http://gamesalfresco.com/about/" target="_blank">a new partner, Rouli Nir</a> &#8211; Ori has focused his eye of wisdom on every significant recent advance in augmented reality (check out <a id="zr9y" title="this essence of Ori's thinking in a fast paced video" href="http://gamesalfresco.com/2009/03/09/augmented-reality-today-ori-inbar-speaks-at-warm-2009/" target="_blank">this essence of Ori&#8217;s thinking in a fast-paced video</a> presentation for <a href="http://gamesalfresco.com/2009/02/12/live-from-warm-09-the-worlds-best-winter-augmented-reality-event/" target="_blank">WARM &#8217;09</a>).</p>
<p>Ori is also one of the organizers of the interactive media track at <a id="b-c6" title="ISMAR 2009" href="http://www.ismar09.org/" target="_blank">ISMAR 2009</a>. At ISMAR this year, Ori explained, <strong>&#8220;we are trying to bring in people that develop interactive experiences for consumers, beyond the traditional attendees coming from a research perspective.&#8221;</strong></p>
<p>In the interview below, Ori explains much of his thinking on how augmented reality will become commercially successful. Enjoy it, think about it, and share it. And most importantly, if you can, get involved with ISMAR 2009.</p>
<p>Ori has inspired me to participate in <a id="seky" title="ISMAR" href="http://www.ismar09.org/" target="_blank">ISMAR</a> this year. Ori pointed out:</p>
<p><strong>The</strong> <a href="http://campwww.informatik.tu-muenchen.de/ismar09/lib/exe/fetch.php?id=ismar09%253Astart&amp;cache=cache&amp;media=ismar09:ismar09-cfp_090211_final.pdf" target="_blank">call for papers</a> <strong>is on, and this year it targets well beyond the typical research-paper audience, into interactive media and art folks.</strong></p>
<p><strong>There are plenty of opportunities such as:</strong></p>
<p><strong>Art Gallery</strong></p>
<p><strong>Demonstrations</strong></p>
<p><strong>Tutorial</strong></p>
<p><strong>Workshops</strong></p>
<p>It&#8217;s a huge opportunity to shape the emergence of augmented reality.</p>
<h2><strong> Interview With Ori Inbar</strong></h2>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-41.png"><img class="alignnone size-full wp-image-3479" title="picture-41" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-41.png" alt="picture-41" width="107" height="146" /></a></p>
<h3>Making Augmented Reality Commercially Successful</h3>
<p><strong>Tish Shute: </strong>You are considered a key trail blazer in AR and you have the go to blog for augmented reality!Â  What are the most important lessons you have learned researching, writing, and developing AR in the last couple of years?</p>
<p><strong>Ori Inbar: You need to have a vision. You need to know where this is going to go in ten or fifteen or twenty years. But you&#8217;ve got to start with something really simple that makes use of the technology you have on hand. And do something that is practical, that people will like, and that they would actually want to buy. It&#8217;s as simple as that. I&#8217;m currently looking at what we could do with existing technology. First of all, you have to put it in front of people. Right now most people have never heard the term augmented reality. Go into the street and ask 100 people about it; maybe 2 would know about it. So you need to put it in front of people, because most people think it&#8217;s still science fiction or a special effect you see in movies, not something you can experience in real life. </strong></p>
<p><strong>Tish: </strong>It seems to me to that for augmented reality applications to become popular with existing technology the key breakthrough would be getting people to hold up their phones. What are the obstacles to getting people to use their mobile devices like this?</p>
<p><strong>Ori: There&#8217;s a really nice cartoon by <a href="http://www.tonchidot.com/">Tonchidot</a> (below) &#8211; the Japanese company behind the Sekai Camera. It&#8217;s an illustration showing the evolution of man: from ape, to man holding a cell phone and looking down, to the developed man holding a device like a camera in front of his eyes.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-37.png"><img class="alignnone size-medium wp-image-3454" title="picture-37" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-37-300x221.png" alt="picture-37" width="300" height="221" /></a><strong></strong></p>
<p><strong>Which is exactly what you&#8217;re talking about. People ask, &#8220;Are people going to walk around holding it like that all day long?&#8221; Probably not. I mean, you have to build it in a way that doesn&#8217;t require them to hold it like that all the time. People are already used to this gesture from ubiquitous digital cameras. I tested one of my prototypes on a two-and-a-half-year-old girl. She had no problem holding it just like she holds a camera.</strong></p>
<p><strong>Tish:</strong> <a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> mentioned, &#8220;The problem with the mobile phone as an AR device is a problem of awareness,&#8221; i.e., you have to have a way of letting people know when there&#8217;s something interesting wherever they are. One issue is that if you get too many alerts, you tune them out.</p>
<p><strong>Ori: First of all, Blair is one of the people in academia who get it, because he looks at it from an experience perspective, not just as an interesting technical problem to solve. Let&#8217;s start with getting people to enjoy this new experience. The AR demos so far were mostly eye candy, and mostly for advertising &#8211; the <a href="http://ge.ecomagination.com/smartgrid/#/landing_page" target="_blank">GE AR ad</a> created a lot of buzz, but you look at it for 10 seconds and you forget about it. You need to build something that people would want to experience over time and would be willing to pay for. I think that&#8217;s the big test, right?</strong></p>
<p><strong>Now, in terms of having a ubiquitous experience where you&#8217;re continuously connected, it doesn&#8217;t have to be an overwhelming experience. Just like with some of the social media tools we&#8217;re using today, we decide when to connect, and we filter out the trash. You could get alerts only for things that really matter to you, not for everything that happens in your immediate environment. </strong></p>
<p><strong>There will be many layers of information, and it&#8217;ll be up to you to pick the ones you want to experience. The real benefit is that you get the information in your own field of view and in context of where you are or what you do.</strong></p>
<p><strong>Tish:</strong> So what are you working on these days?</p>
<p><strong>Ori: We are working on a little app that targets a very different audience than what you&#8217;d expect: preschoolers. We think we can encourage them to get away from a PC or TV screen and learn something while playing &#8211; in the real world. You&#8217;ll hear more about it as soon as this summer. Nuff said.</strong></p>
<p><strong>But it is a small application that will run on the iPhone. People ask: how many preschoolers own iPhones? Well, their parents do. </strong></p>
<p><strong>Tish:</strong> Yes, there are certainly many New York kids with iPhones &#8211; my kid now has my old iPhone. He has pretty much switched from playing games on his DS to the iPhone. I noticed in your WARM video you place a big emphasis on AR as something that will get kids away from screens and engaged with reality. This is something parents will approve of!</p>
<p><strong>Ori: Yes, I saw something really interesting at my kids&#8217; party one day; they were all sitting around the room, looking down at their own DS screens. You could play the DS anywhere, but kids would usually play it on the sofa, looking at the screen, isolated from the world. With an iPhone and a camera, and the application we&#8217;re producing, reality becomes part of the game. Yes, that makes it all of a sudden much more interesting for parents. Because kids are spending so much time in front of the screen, all of a sudden there&#8217;s something that will encourage them to interact with real objects, real things. Every parent I&#8217;ve talked to loves that idea.</strong></p>
<p><strong>Tish:</strong> Yes, that is what is cool about the work of <a href="http://www.katilondon.com/" target="_blank">Kati London</a> &#8211; I think I saw someone say this on Twitter: &#8220;Kati puts the computer in the game, not the game in the computer.&#8221;</p>
<p><strong>Ori: Yes, kids are spending more time in front of games and the computer because it&#8217;s more interesting. It captivates them with &#8220;<a id="x_z0" title="game pleasures" href="http://8kindsoffun.com/">game pleasures</a> &#8221; that tap into their brain&#8217;s dopamine circuitry &#8211; constantly seeking reward and satisfaction. So you&#8217;re not going to be able to tell them to go back to playing in reality without these pleasures. We have to study these mechanics from games and bring them into reality. It&#8217;s about programming real life; and augmented reality helps you achieve that.</strong></p>
<p><strong>Here&#8217;s an example: cause and effect; in a game when you do something you always get an immediate effect. You&#8217;re good, you get a reward. You&#8217;re not good, you get a cue to improve. In real life you do things and you could wait 2 or 3 years until you actually get feedback (if you&#8217;re lucky). Augmented Reality allows you to bring these mechanics into the real world. I think that&#8217;s going to help kids rediscover reality, in a new sense, which is what every parent is dreaming about.</strong></p>
<p><strong>Tish:</strong> I don&#8217;t know how much you can say about your app. But in regard to doing augmented reality on the iPhone&#8230; there&#8217;s no compass. Is this a limitation?</p>
<p><strong>Ori: True, no compass yet. But the camera gives you a lot of information that you can interact with. When you run the application, you see the world in front of you, and if the app can recognize real-life objects, it can put virtual elements on top of them.</strong></p>
<p><strong>Tish:</strong> But not with any accuracy unless you&#8217;re using markers. Are you using markers?</p>
<p><strong>Ori: We&#8217;re using natural feature recognition. It doesn&#8217;t have to be an ugly-looking marker. It can be any image.</strong></p>
<p><strong>Tish:</strong> So you&#8217;re using image recognition. Are you working with one of these image recognition startup companies (<a id="nws6" title="list here" href="http://www.educatingsilicon.com/2008/11/25/a-round-up-of-mobile-visual-search-companies/" target="_blank">list here</a> )?</p>
<p><strong>Ori: We&#8217;re working with one of those. What&#8217;s unique about it is that it runs very nicely on any cell phone, and on the iPhone it works best. For this first app, it doesn&#8217;t really matter where you are physically; geolocation is not part of the experience.</strong></p>
<p><strong>Tish: </strong>For a truly engaging AR experience, we will need more of a backend than is currently available?</p>
<p><strong>Ori: I call the backend the cloud, where you have all this information and ways to access it from anywhere. Actually, I think it&#8217;s become pretty mature today. If you look at the different elements required to enable an augmented reality experience, you have &#8211; first &#8211; the user, who is always at the center. Then you have the lens. The lens can be an iPhone, glasses, or even a projector. The lens allows you to watch, sense, and track information in the real world: people, places, things. Then in the backend you have the cloud, where you store and retrieve information.</strong></p>
<p><strong>So if you look at the maturity of these different elements, I think the cloud is in pretty good shape, because there&#8217;s so much information we&#8217;re collecting and storing. Anything from Google, Wikipedia, Facebook, all that kind of stuff &#8211; it&#8217;s a lot of useful information you can access from anywhere using APIs. And a lot of it is also starting to include geolocation information. Take <a id="zhag" title="Loopt" href="http://www.loopt.com/" target="_blank">Loopt</a> or Google&#8217;s <a href="http://www.google.com/latitude/intro.html" target="_blank">friends service</a>, which allows you to see where your friends are and what they&#8217;re doing. There&#8217;s tons of information out there and it&#8217;s pretty easy to access. Now the question is: what do you do with it?</strong></p>
<p><strong><a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> is such a simple and brilliant application and nobody thought about doing it until this guy from Salzburg did. It doesn&#8217;t have any sophisticated visual tracking. It knows your position and it&#8217;s simply looking at the angle you&#8217;re pointing to. Based on these parameters it brings information from Wikipedia that pertains to your field of view. So most of it was already there. It&#8217;s just a matter of connecting the pieces in an experience that is valuable for people.</strong></p>
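<p>Wikitude&#8217;s trick, as Ori describes it, reduces to a small geometric filter: compute the compass bearing from the user&#8217;s position to each point of interest, and keep the ones that fall inside the camera&#8217;s horizontal field of view centered on the direction the phone is pointing. A minimal sketch in Python &#8211; the 60-degree field of view and all names are illustrative assumptions, not Wikitude&#8217;s actual code:</p>

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def pois_in_view(user_lat, user_lon, heading_deg, pois, fov_deg=60.0):
    """Return names of POIs whose bearing from the user lies within the
    camera's horizontal field of view, centered on the compass heading."""
    visible = []
    for name, lat, lon in pois:
        b = bearing_deg(user_lat, user_lon, lat, lon)
        # smallest angular difference between the heading and the POI bearing
        diff = abs((b - heading_deg + 180.0) % 360.0 - 180.0)
        if diff <= fov_deg / 2.0:
            visible.append(name)
    return visible
```

<p>With a 60-degree field of view, a user facing due east (heading 90) keeps only the POIs whose bearing is within 30 degrees of east &#8211; which is &#8220;most of it was already there&#8221; in practice: GPS plus a pointing angle, no visual tracking at all.</p>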
<p><strong>Tish: </strong>It is the uptake of even a very simple technology that puts the magic in it.</p>
<p><strong>Ori: Yes, take Twitter. If you go to its homepage it looks like a very simple, boring app, but it is something that is both enjoyable and very useful to people.</strong></p>
<h3><strong>Why you should participate in ISMAR 2009</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-40.png"><img class="alignnone size-medium wp-image-3478" title="picture-40" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-40-222x300.png" alt="picture-40" width="222" height="300" /></a><br />
<strong>Tish: </strong>I know that you are involved in organizing <a id="seky" title="ISMAR" href="http://www.ismar09.org/" target="_blank">ISMAR</a> (picture above from Ori&#8217;s post, <a href="http://gamesalfresco.com/2009/02/23/ismar-2009-the-worlds-best-augmented-reality-event-wants-you-to-contribute/" target="_blank">&#8220;ISMAR 2009: The World&#8217;s Best Augmented Reality Event&#8230;&#8221;</a>), and there is a call out for papers and for volunteers. Can you tell me more about it?</p>
<p><strong>Ori: Yes, we hope to have the first ISMAR where we practice what we have just discussed: let&#8217;s build on all the research invested so far, and instead of thinking only about 5-10 years from now, let&#8217;s see what we can do today. So we are bringing in people from other disciplines &#8211; artists, interactive media developers, and people from the entertainment industry. The goal is to use the technology to make something interesting for people &#8211; again, something that people would buy &#8211; and to make it commercially successful. Many people don&#8217;t know about ISMAR because in the past it was a purely engineering-oriented event, and people looking at AR from a commercial perspective weren&#8217;t attracted to it. The chair of the event this year is based in Florida, and he is going to bring in a lot of people from the entertainment industry, such as Disney. I think this will transform the event into something more like SIGGRAPH &#8211; more of an industry event. As one of the organizers of the interactive media track, we are trying to bring in people who want to build applications for consumers.</strong></p>
<p><strong>Tish:</strong> In terms of AR applications what are the flagships today?</p>
<p><strong>Ori: There are very few because it&#8217;s just the beginning. There&#8217;s one tiny studio in France called <a id="z1ln" title="Int 13" href="http://www.int13.net/en/" target="_blank">Int 13</a>. They&#8217;ve created maybe the first commercial game running on a mobile device using AR technology. It&#8217;s called <a href="http://www.youtube.com/watch?v=Te9gj22M_aU" target="_blank">Kweekies</a>. It was one of the contenders for the Nokia Mobile Innovation Awards &#8211; one of the ten finalists, but they didn&#8217;t win. It looks really cool. It&#8217;s something that runs on your desk, with a marker. Many AR folks say markers are the past, markers are ugly. But it&#8217;s still a cool experience. I think people will go for it.</strong></p>
<p><strong>Tish:</strong> Yes, I think we will have to look to small companies that are free to think creatively to lead the way. It seems many games companies are tied up pulling off huge, big-budget projects, and enterprise is still catching up on how to use social media!</p>
<p><strong>Ori: Yes, last year I was at the Game Developers Conference (GDC); there was no mention of augmented reality &#8211; not on the exhibition floor, not in any of the sessions, nobody talked about it. I was stunned. Then this year, there was a little change. There were three demos on the exhibition floor: <a href="http://www.metaio.com/" target="_blank">Metaio</a>, <a href="http://www.vuzix.com/home/index.html" target="_blank">Vuzix</a>, and a Dutch company called <a href="http://www.augmented-reality-games.com/" target="_blank">Beyond Reality</a>. And then there was Blair&#8217;s talk, which was very, very cool. The room was packed with people, and after the talk there were dozens of people lining up to talk with him about the topic. There was definitely interest, but still on the very edge. The video game industry is still a hit-driven business, and publishers spend upward of 20-30 million dollars to create the best AAA game possible. They just can&#8217;t take the risk. So it&#8217;s going to come from smaller companies, from outsiders coming in with a vision and an understanding of how to put the AR pieces together to create a totally new experience.</strong></p>
<p><strong>Tish:</strong> But the basic tool set is there isn&#8217;t it?</p>
<p><strong>Ori: I talked to some folks at the Game Developers Conference, many with MMO backgrounds, and they have great ideas about AR. It&#8217;s great to see different people with different views on what&#8217;s needed first. &#8220;Joe the Programmer&#8221; had this idea of creating a small piece of hardware that you could put in every house to provide accurate geospatial information in your home. That could open up many opportunities for AR experiences in homes.</strong></p>
<p><strong>Tish:</strong> Don&#8217;t you think we have enormous resources in terms of image databases that provide a great basis for augmented reality? I was talking to Aaron Cope at ETech about <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">The Shape of Alpha</a> &#8211; Flickr&#8217;s vernacular mapping project using all the geotagged photos in Flickr. That is such a cool project. <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43824" target="_blank">Aaron will be speaking at Where 2.0</a> as well.</p>
<p><strong>Ori: Think of Google Earth. Google Earth leveraged communities to basically map all the major cities around the world into 3D models. And that is an essential step to be able to do augmented reality outdoors. Because if you had to model everything from scratch, it wouldn&#8217;t be realistic.</strong></p>
<h3><strong>Augmented Reality and Becoming Greener.</strong></h3>
<p><strong>Tish:</strong> I am really interested in how AR interfaces might be useful to some of the emerging energy identity/metering projects like <a href="http://www.amee.com/" target="_blank">AMEE</a> and <a href="http://www.wattzon.com/" target="_blank">WATTZON</a> because I think it is very important that people have very intuitive, immediate, and enjoyable ways to relate to energy data so they can make greener choices.</p>
<p><strong>Ori: Back in the day I had an idea to build an augmented reality application for becoming greener. You look at things around your home with the camera, and it recognizes their greenhouse gas footprint and makes recommendations to reduce it. I guess it was a bit too early to do that based on visual recognition alone&#8230; you&#8217;d need additional sensors that would provide related information about what you are looking at.</strong></p>
<p><strong>Tish:</strong> Well, as there is more interest in green technology, do you think we may see VC interest in some green AR projects now?</p>
<p><strong>Ori: I talked to some of the investment folks, angels as well as VCs, about AR, and they had no clue what it is. There&#8217;s a need for a whole lot of education. And there are no proof points (as in successful investments in this domain), and, counter to popular belief, they don&#8217;t like risk so much&#8230;</strong></p>
<p><strong>Tish:</strong> And consumer adoption must lead the way, right?</p>
<p><strong>Ori: Just like with every emerging technology in history, people never bought the technology; they bought the content, the apps, the benefits that came on top of the technology. Whether it was VHS winning over Betamax, or Blu-ray winning over HD DVD &#8211; it&#8217;s always because of more and better content. Look at the video game console war: Xbox and Nintendo did better than Sony just because they had more and better games. Even Windows was a success thanks to its applications. People bought it for the applications, not the OS. Content is the first thing to drive demand.</strong></p>
<p><strong>Tish:</strong> One of the challenges of giving people new ways to relate to their energy consumption is that you can just have them looking at graphs of how bad they have been in the past &#8211; that may make them feel bad, but it doesn&#8217;t necessarily give them ways or motivation to change. There perhaps needs to be a more immediate relationship to the data to facilitate change. I think the mantra for optimizing anything from energy usage to supply chains is timely, actionable data?</p>
<p><strong>Ori: There are a lot of ideas about measuring information and displaying it to people. For example, the Prius hybrid car: one of its interesting features &#8211; which is kind of game-like &#8211; is a constant display of your current fuel consumption. That alone changes how people drive, because they try to beat the &#8220;score&#8221; and as a result conserve more fuel. That model can be applied to our homes&#8230;</strong></p>
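<p>The Prius mechanic Ori describes is just an immediate feedback loop: show the instantaneous number next to the trip average, so there is always a &#8220;score&#8221; to beat. A toy sketch in Python &#8211; the class, units, and method names are invented for illustration, not any real dashboard API:</p>

```python
class ConsumptionDisplay:
    """Toy dashboard: instantaneous fuel economy shown next to the
    trip average, giving the driver a running "score" to beat."""

    def __init__(self):
        self.total_fuel = 0.0  # liters used so far
        self.total_dist = 0.0  # kilometers driven so far

    def update(self, dist_km, fuel_l):
        """Record one driving segment; return (instant, average, beating)."""
        self.total_dist += dist_km
        self.total_fuel += fuel_l
        instant = dist_km / fuel_l                    # km per liter right now
        average = self.total_dist / self.total_fuel   # the trip "score"
        return instant, average, instant >= average
```

<p>A segment driven more efficiently than the running average flips the third value to True &#8211; the immediate &#8220;you&#8217;re good, you get a reward&#8221; cue Ori contrasts with real life&#8217;s delayed feedback.</p>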
<p><strong>Tish:</strong> Yes, that is something I am very interested in. I have been following several projects in this area &#8211; one of my favorites is the <a href="http://www.arduino.cc/" target="_blank">Arduino</a>, <a href="http://www.currentcost.com/" target="_blank">Current Cost</a>/<a href="http://www.ladyada.net/make/tweetawatt/" target="_blank">Tweetawatt</a>, <a href="http://www.pachube.com/" target="_blank">Pachube</a> integrations <a href="http://www.ugotrade.com/2009/04/24/homecamp-2-home-energy-management-and-distributed-sustainability/" target="_blank">I saw at Homecamp</a>.</p>
<p>You joined a start-up with Shai Agassi which was bought out by SAP, right? He has a brilliant approach with Better Place.</p>
<p><strong>Ori: I think what&#8217;s really unique about Better Place&#8217;s approach is that it doesn&#8217;t require people to change their behavior. People are still going to have their own cars. They&#8217;ll be able to drive as far as they want, and for the same (or lower) cost. It&#8217;s not necessarily about a new technology; electric cars have been around for a long time, but there was no way people were going to be limited by a 50 or 70 mile range, and Better Place is solving that problem. With its infrastructure of charging spots and battery switching stations, drivers are going to be able to drive anywhere, and it&#8217;ll be similar to having to stop once in a while to refuel your car. The price may be even lower than what you pay today for your transportation needs &#8211; and you&#8217;ll stop generating greenhouse gas. It&#8217;s a clever way of taking technology to a whole new level without changing people&#8217;s behavior.</strong></p>
<p><strong>Tish: </strong>Better Place is a classic example of things as a service, isn&#8217;t it? It is basically a utility company.</p>
<p><strong>Ori: It is similar to a phone carrier model. You pay for a membership that gives you access to the car (equivalent to the phone) and electricity (equivalent to the phone line) for the same price as fuel costs today. And as a bonus, you get to save the world.</strong></p>
<h3><strong>How the iPhone changed the game for AR &#8211; and the iPhone versus Android</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-38.png"><img class="alignnone size-medium wp-image-3472" title="picture-38" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-38-300x198.png" alt="picture-38" width="300" height="198" /></a></p>
<p><em>Picture from Ori&#8217;s post, <a href="http://gamesalfresco.com/2009/03/23/gdc-2009-why-the-iphone-just-changed-everything/" target="_blank">&#8220;GDC 2009: Why the iPhone just changed everything&#8221;</a></em></p>
<p><strong>Ori: And back to AR &#8211; you have to take the same approach, because nobody wants to don those huge head-mounted displays or backpacks. You have to take advantage of people&#8217;s current behavior: they already carry their iPhones or similar devices.</strong></p>
<p><strong>Tish:</strong> As we discussed, you just have to get people raising up their phones and looking through them when that is a useful thing to do. Both Wikitude and Nathan Freitas&#8217;s graffiti app were enough to get me interested in the evolutionary step of raising my phone! Nathan&#8217;s graffiti app is nice. You leave a marker for your graffiti so other people can find it, view it, and add their own &#8211; a nice primal experience, like pissing on the lamp post to let your pack know where you&#8217;ve been. Also, the graffiti app taps into a long history of NYC street culture around tagging and graffiti art (see my interview, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it OMG finally for Augmented Reality?&#8221;</a>).</p>
<p><strong>Ori: The app store has fundamentally changed the mobile gaming industry. Last year it was in shambles. There was no growth. Everybody was complaining: &#8220;We can&#8217;t handle it; there are a million phones, and you have to test on each phone. And carriers suck; they don&#8217;t care about sharing and promoting your content.&#8221; Everything was bad. This year mobile gaming is the hottest thing. And it&#8217;s all because of the iPhone. It changed the game.</strong></p>
<p><strong>Tish: </strong>How do you think Android is going to get traction against the iPhone?</p>
<p><strong>Ori: Well, the number one thing is the form factor &#8211; the iPhone is just much cooler than the G1. The G1 is OK, but it doesn&#8217;t have the same feel. People thought it was going to be easy to clone the iPhone, but none of the attempts have succeeded so far.</strong></p>
<p><strong>Tish: </strong>How much does it matter for AR not being able to run things persistently in the background on the iPhone?</p>
<p><strong>Ori: Actually, they have added such a capability in OS 3. You can now make use of a background service.</strong></p>
<p><strong>Tish:</strong> OS 3 will open up new possibilities for AR?</p>
<p><strong>Ori: Access to the video API is still not public. But there is a new Microsoft application &#8211; Microsoft Tag &#8211; that makes use of that API, which means it is probably OK to use it.</strong></p>
<p><strong>Tish: </strong>(I ask Ori for his card and he shows me how to read it with my iPhone.) Oh nice, you have an AR card, of course!</p>
<h3><strong>In Search of Pong for Augmented Reality</strong></h3>
<p><strong>Tish: </strong>So how will AR begin to, as Blair&#8217;s friend puts it, &#8220;facilitate a killer existence,&#8221; particularly as we are probably looking at some new and perhaps pricey hardware?</p>
<p><strong>Ori: You could take the Better Place approach: we&#8217;re going to give you a great experience, and we&#8217;ll include the devices as part of that experience for the same price. Let&#8217;s say you subscribe to an AR experience which offers multiuser support and all the information you need wherever you go &#8211; exactly according to the vision. You pay for a subscription on a monthly basis, and included in that cost we give you a better device that offers a better AR experience. It&#8217;s following the phone carrier approach, but in a good way.</strong></p>
<p><strong>But first of all, we do need our Pong! I was sitting with a couple of AR game enthusiasts at GDC and we were asking ourselves, &#8220;How do we create the first Pong for AR?&#8221;</strong></p>
<p><strong>Was Pong a multiplayer game? Not necessarily! Did it connect to the network? No! We have to create the first dot in a long line of dots that will bring us to our destination.</strong></p>
<p><strong>Tish: </strong>You haven&#8217;t seen a Pong yet have you?</p>
<p><strong>Ori: Not yet. I mean there&#8217;s maybe a handful of games and apps out there, but I don&#8217;t think any of them is a Pong yet. Still, it&#8217;s getting closer.</strong></p>
<p><strong>Tish: </strong>Kati London is doing some very interesting work on bringing games into reality, isn&#8217;t she?</p>
<p><strong>Ori: Yes, she works with Frank Lantz at <a href="http://playareacode.com/" target="_blank">Area/Code</a>. He teaches at NYU and has designed games for the <a href="http://www.comeoutandplay.org/" target="_blank">&#8220;Come Out and Play&#8221;</a> festival here in Manhattan. And a lot of these games are actually low-tech.</strong></p>
<p><strong>Tish:</strong> Yes I have a big alternate reality game blog brewing that I haven&#8217;t had time to write yet!</p>
<p><strong>Ori: &#8220;The city is the gameboard&#8221; is their slogan. It&#8217;s going to be a great playground for AR games. The city becomes a theme park. The city could become an even bigger tourist attraction. People will come to the city to be part of these games. So you&#8217;ll have thousands of people running around the city playing all sorts of games, from laser-tag style to history adventures to treasure hunts.</strong></p>
<h3><strong>Composing Reality</strong></h3>
<p><strong>Tish: </strong>So why haven&#8217;t you focused on one of these kinds of games with your company?</p>
<p><strong>Ori: We have a couple of scenarios along these lines that we&#8217;re planning for 2010-11. But first we focus on what&#8217;s possible today.</strong></p>
<p><strong>Tish: </strong>And what&#8217;s stopping you from doing those kind of games today?</p>
<p><strong>Ori: Many things. The devices are not there yet, location services are not accurate enough, and ubiquitous sensors are not there yet.</strong></p>
<p><strong>Tish: </strong>You think alternate reality gaming needs more &#8220;ubiquity&#8221; than is currently available?</p>
<p><strong>Ori: Not necessarily. People are doing alternate reality games with no &#8220;ubiquity&#8221; at all. But my interest is to add the visual aspect. I believe humans are mostly driven visually.</strong></p>
<p><strong>Jane McGonigal said in a talk at GDC that AR would allow us to program reality, which is exactly how I look at it. You can recognize things in many ways &#8211; with WiFi, RFID, and all sorts of sensors &#8211; but visual sensing is always going to be the ultimate way to recognize things. And once you recognize things and know what they are, and can pull information about those things (or people and places) from the internet, you can program it (visually). You could program it to be fictional, like in a video game, or it could be programmed as non-fictional, like a documentary. And that allows you to do things that were unimaginable before.</strong></p>
<p><strong>Tish: </strong>But you can&#8217;t forget the visual; it is the primary connection to people&#8217;s sensory relationships.</p>
<p><strong>Ori: Yes, it&#8217;s like you go to a grocery store and you pick your vegetables, a lot of it is by sight and by touch. And what if you could also see just by looking at it that it&#8217;s from a local store, and that it&#8217;s organic?</strong></p>
<p><strong>Tish:</strong> It goes beyond overlays really?</p>
<p><strong>Ori: By the way, I don&#8217;t like the term &#8216;overlay&#8217;. I know that&#8217;s how it looks: you either overlay or superimpose, but I&#8217;m still searching for a better term. A term I prefer to use is &#8220;composing reality&#8221;. Just like painters, they use brushstrokes and colors and compose a painting. We need to take the real element and the virtual element and compose them into something new. It&#8217;s not just about slapping one on top of the other.</strong></p>
<p><strong>Tish: </strong>Yes, I think the idea of dashboards is not so appealing.</p>
<h3><strong>Pookatak Games</strong></h3>
<p><strong>Tish: </strong>Do you want to explain the evolution of your company? You have an interesting history of success with high end enterprise applications.</p>
<p><strong>Ori: Since I was a kid I wanted to invent and create things. When I discovered software, that was a really cool way of actually creating things from nothing &#8211; from thin air &#8211; and you can do it very quickly. That&#8217;s what brought me into software. But I was always looking for the intersection between technology and art, looking for ways to bring these things together. In the early nineties virtual reality was doing it. It had the appeal of cutting-edge technology that can be combined with art. But then, as we all know, it crashed. So I joined Shai Agassi&#8217;s startup (he is now doing Better Place) back in the early nineties. I was one of the first employees in his startup, which was developing multimedia products, and I was leading the development of one of its flagship products. At some point we realized the technology could be great for an enterprise environment.</strong></p>
<p><strong>It was a really great experience, first going through this cycle from a very small startup and growing into this multi-billion-dollar business. I was responsible for defining and marketing SAP&#8217;s platform, which was called NetWeaver. It was just an idea when we joined SAP, and by the time I left it was a major, major business for SAP. I learned about the challenges of building a platform. No matter what purpose you&#8217;re building it for, it typically has similar rules. It&#8217;s definitely not just about the technology; the content that comes with it is really key to making a platform successful.</strong></p>
<p><strong>The third part of this platform trifecta is the community. If you don&#8217;t build a community, you won&#8217;t get the critical mass required for adoption. It may be your own platform, but it&#8217;s not necessarily the people&#8217;s platform. That experience is very key to what we&#8217;re doing today. Now a new industry is being born on the basis of a remarkable technology, but to drive adoption we&#8217;ll first need good content. The content will be created using today&#8217;s technology, with internal tools developed to simplify the process. The next step would be to make the tools used internally available to other developers &#8211; to help scale the industry and enable innovation on a larger scale. That way we have a chance to create a platform. So it isn&#8217;t really just about my company. I&#8217;m so passionate about augmented reality, I want it to become a healthy and successful industry for the next 5, 10, 15 years.</strong></p>
<p><strong>Tish: </strong>Yes, I am so ready to be liberated from sitting behind a computer screen! And I know that all this hardware is murdering the environment.</p>
<p><strong>Ori: There&#8217;s the book by Rolf Hainich called &#8220;<a id="ba8p" title="The End Of Hardware" href="http://www.theendofhardware.com/">The End of Hardware</a>&#8221;. It&#8217;s about hardware for augmented reality. Once you use goggles or other AR interfaces, you eliminate the need for screens, laptops, etc. It&#8217;s going to be great for the environment. You have read Rainbows End, right? According to the book, in a few years there will barely be any (visible) hardware. At least it&#8217;ll have a much smaller environmental footprint. And it&#8217;ll touch every aspect of life, everything you do. It&#8217;ll change the way you interact with the world.</strong></p>
<h3><strong>The Elusive Eyewear for Immersive AR</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/retroar-googlespost.jpg"><img class="alignnone size-medium wp-image-3469" title="retroar-googlespost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/retroar-googlespost-300x225.jpg" alt="retroar-googlespost" width="300" height="225" /></a><br />
<em>A friend of Ori&#8217;s in San Francisco wearing retro AR goggles (from <a href="http://gamesalfresco.com/2009/05/04/gdc-2009-roundup-a-tiny-spark-of-augmented-reality/" target="_blank">Games Alfresco, Ori&#8217;s roundup of GDC 2009</a>)</em></p>
<p><strong>Tish:</strong> OK, let&#8217;s talk about goggles.</p>
<p><strong>Ori: Goggles are going to happen; we want to be hands-free.</strong></p>
<p><strong>It&#8217;s going to happen because it&#8217;s just a more intuitive way to use this technology. But above all it has to look cool. Because if it&#8217;s not, if it&#8217;s a big headset, then maybe a small percent of the population might use it, but most people won&#8217;t. It has to look like an accessory, like new cool eyeglasses that you just must wear.</strong></p>
<p><strong>I recently talked to a friend who runs an industrial design firm and has experience designing such glasses for companies like Microvision and Lumus. He says that when you try to bring images so close to the eyes, there are some really hard problems to solve. Otherwise it can become really annoying and cause dizziness.</strong></p>
<p><strong>But I&#8217;m optimistic. I believe it&#8217;s going to happen 3 to 5 years from now. It&#8217;s already starting: Vuzix announced goggles that will be available this year, and some AR apps are going to take advantage of them next year. Initially only a fraction of the population will use it, and that&#8217;s going to help advance it and make it better and better. But it&#8217;s going to take time until it reaches the mass market.</strong></p>
<p><strong>Tish:</strong> In virtual worlds we have seen, I think, a lot of mistakes in terms of reinventing the wheel and producing too many proprietary versions of the same thing, and not enough concerted effort on standards and open platforms that could create a vibrant ecosystem. How can augmented reality avoid making the same mistakes?</p>
<p><strong>Ori: There are some early AR open source efforts &#8211; ARToolKit, ARTag &#8211; but it is not a movement yet. One of the things we&#8217;re trying to do at ISMAR this year is to put together discussions around key industry issues, such as standards. Some people say it&#8217;s too early, that you have to have a de facto standard to start from. But pretty soon it&#8217;s going to be too late. Just like with virtual worlds, all of a sudden you have all these islands that don&#8217;t talk to each other. Why get to that point if we can plan to avoid it? Let&#8217;s start thinking about it right now. On the other front there are devices. There are pockets of people working on adapting devices for AR, second-guessing the hardware companies. Why not get them together with the Intels and Nvidias of the world and discuss what this device should be able to do? And then compete to make it happen.</strong></p>
<p><strong>Tish: </strong>How much luck are you having with this discussion part?</p>
<p><strong>Ori: People are very interested in doing this. We proposed these panels for ISMAR. And I&#8217;ve got some key people already on board. They have tons of input, they want to get involved. We&#8217;ll see how much we can actually get out of it.</strong></p>
<p><strong>Tish: </strong>In virtual worlds it was a while before vibrant open source communities developed. OpenSim has, I think, been the breakthrough community in this regard.</p>
<p><strong>Ori: You have to think about the elements up front. The dream job is to architect the industry. Say we agree on the required pieces. Then we could help the right companies succeed in delivering the pieces. Next, we have to collaborate so that these pieces talk to each other. And eventually these communication methods will become de facto standards that most developers will adopt.</strong></p>
<p><strong>Tish: </strong>So I&#8217;m going to put you in the role. You&#8217;ve got your dream job. You&#8217;re going to architect this community. So what are the key pieces and where would you like to see the open source communities take hold first?</p>
<p><strong>Ori: Open source will not be exclusive. It&#8217;s going to live side by side with proprietary technology.</strong></p>
<p><strong>The key pieces? You have the user at the center. And the user interacts with a lens. The lens includes both the hardware and the software. And then the lens senses and interacts with the world, which includes people, things and places. And these people-things-places emit information &#8211; about who they are, where they are, what they&#8217;re doing, etc. &#8211; which is then stored in the cloud.</strong></p>
<p><strong>And then you have the content providers, the people and companies, composers who weave AR experiences through the pieces we mentioned before. These composers need a platform that glues these pieces together. Pieces of the platform will be on the lens, and in the world, and in the cloud. If you manage to remove the frictions, and connect these pieces into an experience that people like &#8211; then you have a platform. What the platform does is reduce the overhead and accelerate innovation.</strong></p>
<p><strong>Tish: </strong>Another problem virtual worlds faced in their development was their isolation from the World Wide Web. Will augmented reality avoid this plight?</p>
<p><strong>Ori: Yes, I believe the key, like you said before, is not to reinvent the wheel. The cloud is already there. Take Wikitude for example: all <a href="http://www.mobilizy.com/" target="_blank">Mobilizy</a> had to do was build a relatively simple client app, connected to Wikipedia, and all of a sudden it offered a wealth of information in your field of view.</strong></p>
<p><strong>I think we can learn a lot from web 2.0. For example, in order to have a ubiquitous experience like <a href="http://www.curiousraven.com/" target="_blank">Robert Rice</a> and others are striving for, you&#8217;ll need to 3D-map the world. Google Earth-like apps are going to help, but they are not going to be sufficient. So let&#8217;s leverage people. Google became successful in part by making people work with them. Each time you create a link from your blog to my blog, their search engines learn from it. So let&#8217;s find ways to make people create information that can be used for AR.</strong></p>
<p><object width="425" height="344" data="http://www.youtube.com/v/GTXtW3W8mzQ&amp;hl=en&amp;fs=1" type="application/x-shockwave-flash"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/GTXtW3W8mzQ&amp;hl=en&amp;fs=1" /><param name="allowfullscreen" value="true" /></object></p>
<p><em>Ori Inbar directed <a title="Wiki Mouse" href="http://www.youtube.com/watch?v=GTXtW3W8mzQ" target="_blank">Wiki Mouse</a> &#8211; a WIKI Film co-created by a swarm of movie makers around the world.</em></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/feed/</wfw:commentRss>
		<slash:comments>12</slash:comments>
		</item>
		<item>
		<title>Towards a Newer Urbanism: Talking Cities, Networks, and Publics with Adam Greenfield</title>
		<link>http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/</link>
		<comments>http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/#comments</comments>
		<pubDate>Sat, 28 Feb 2009 04:28:06 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[crossing digital divides]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[free software]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Adam Greenfield]]></category>
		<category><![CDATA[aggregating the world's energy data]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Anne Galloway's forgetting machine]]></category>
		<category><![CDATA[antisocial networking]]></category>
		<category><![CDATA[antisocial networking systems]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[cities and networks]]></category>
		<category><![CDATA[connecting environments]]></category>
		<category><![CDATA[context aware]]></category>
		<category><![CDATA[context aware applications]]></category>
		<category><![CDATA[context aware mediators]]></category>
		<category><![CDATA[data visualization]]></category>
		<category><![CDATA[deliberative democracy]]></category>
		<category><![CDATA[Eben Moglen on privacy]]></category>
		<category><![CDATA[EEML]]></category>
		<category><![CDATA[Erving Goffman]]></category>
		<category><![CDATA[everyware]]></category>
		<category><![CDATA[flexible identity]]></category>
		<category><![CDATA[information processing]]></category>
		<category><![CDATA[interaction design]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[locative is a mood]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[mobile computing]]></category>
		<category><![CDATA[mobile phones and sensors]]></category>
		<category><![CDATA[mobility]]></category>
		<category><![CDATA[next generation internet]]></category>
		<category><![CDATA[Nurri Kim]]></category>
		<category><![CDATA[onto]]></category>
		<category><![CDATA[ontome]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[privacy in networked environments]]></category>
		<category><![CDATA[RFID]]></category>
		<category><![CDATA[self-describing networked objects]]></category>
		<category><![CDATA[smart homes]]></category>
		<category><![CDATA[smart products]]></category>
		<category><![CDATA[social networking systems]]></category>
		<category><![CDATA[sousveillance]]></category>
		<category><![CDATA[speedbird]]></category>
		<category><![CDATA[spime wrangle]]></category>
		<category><![CDATA[spime wrangling]]></category>
		<category><![CDATA[spimes]]></category>
		<category><![CDATA[spimy]]></category>
		<category><![CDATA[sustainable cities]]></category>
		<category><![CDATA[the big now]]></category>
		<category><![CDATA[the city is here for you to use]]></category>
		<category><![CDATA[the future of the internet]]></category>
		<category><![CDATA[the long here]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubicomp technologies]]></category>
		<category><![CDATA[ubiquitous systems]]></category>
		<category><![CDATA[unbook]]></category>
		<category><![CDATA[uncanny valleys]]></category>
		<category><![CDATA[urban informatics]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[web of things]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=2969</guid>
		<description><![CDATA[Adam Greenfield&#8217;s new book, The City Is Here For You To Use, is coming soon (photo above by Pepe Makkonen is from Adam Greenfield&#8217;s Flickr stream). Adam told me: &#8220;I&#8217;m aiming at a free v1.0 PDF release on 05 June 2009, with the book shipping as quickly thereafter as humanly possible. There will be a [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldpost.jpg"><img class="alignnone size-full wp-image-2970" title="adamgreenfieldpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldpost.jpg" alt="adamgreenfieldpost" width="333" height="500" /></a></p>
<p>Adam Greenfield&#8217;s new book, <em><strong><a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank">The City Is Here For You To Use</a></strong></em>, is coming soon (photo above by Pepe Makkonen is from <a id="souo" title="Adam Greenfield's Flickr stream" href="http://www.flickr.com/photos/studies_and_observations/">Adam Greenfield&#8217;s Flickr stream)</a>. Adam told me:</p>
<p style="text-align: left;"><strong>&#8220;I&#8217;m aiming at a free v1.0 PDF release on 05 June 2009, with the book shipping as quickly thereafter as humanly possible. There will be a version zero or public alpha in about six weeks.&#8221;</strong></p>
<p>I am not good at waiting for books I really want to read to arrive. But, on the upside, it brings out my already pretty highly developed investigative instinct. So when Adam very generously agreed to do an interview, impatience turned into delight in tasting what is to come. And Adam is encouraging this kind of engaged anticipation. He writes (<a id="v80w" title="see post" href="http://speedbird.wordpress.com/2009/02/19/of-books-and-unbooks/">see post</a>) that <em>The City Is Here For You To Use</em>, is shaping up:</p>
<p><strong>&#8220;as something of an <a id="oj:9" title="unbook" href="http://theunbook.com/2009/02/18/what-is-an-unbook/">unbook</a><em> avant la lettre. </em>It&#8217;s why we&#8217;ve [<a href="http://www.nurri.com/">Nurri Kim</a> and Adam Greenfield] always insisted on keeping you in the loop as to the book&#8217;s <a href="http://speedbird.wordpress.com/2009/01/22/bookproject-update-005-year-two/">fitful progress</a>, it&#8217;s why I take every opportunity to <a href="http://speedbird.wordpress.com/2009/02/14/the-city-is-here-table-of-contents/">test its ideas here</a>, it&#8217;s why I make explicit the fact that your response to those ideas is crucial to their evolution and expression. And it&#8217;s why, even though the process is inevitably going to result in a static, physical document as one of its manifestations &#8211; and hopefully a very nice one indeed &#8211; we&#8217;ve committed to offering a free and freely-downloadable Creative Commons-licensed PDF of every numbered version of <em>The City</em>, from zero onward.</strong></p>
<p><strong>You buy the book if you want the object. The ideas are free.&#8221;</strong></p>
<p>I found the opportunity to ask Adam questions about some of his subtle renderings of technology, culture, and being in urban environments challenging and very illuminating, although I definitely get the feeling I am asleep at the wheel on some of the critical areas he is thinking and writing about.</p>
<p>Knowing the depth and range of Adam&#8217;s thought in his seminal book, <em><a id="you9" title="Everyware" href="http://www.studies-observations.com/everyware/">Everyware</a></em>, and his blog, <a id="r22r" title="Speedbird" href="http://speedbird.wordpress.com/">Speedbird</a>, before I began the conversation I asked Adam to point me to some of his posts that reflect key ideas he is working on at the moment (Adam has recently posted <a href="http://speedbird.wordpress.com/2009/02/14/the-city-is-here-table-of-contents/" target="_blank"><em>The City Is Here</em>: Table of contents</a>). Adam directed me to these three posts.</p>
<p style="text-align: left;"><a href="http://speedbird.wordpress.com/2007/12/09/antisocial-networking/" target="_blank">Antisocial networking</a></p>
<p style="text-align: left;"><a href="http://speedbird.wordpress.com/2008/08/25/more-songs-about-context-and-mood/" target="_blank">More songs about context and mood</a></p>
<p><a href="http://speedbird.wordpress.com/2007/01/29/messenger-space-messenger-body-messenger-mesh/" target="_blank">Messenger, space, messenger body, messenger mesh</a></p>
<p>I may ramble and diverge, as is my nature, but these posts inspired many of the questions I ask.</p>
<p>Adam is currently head of design direction for service and user-interface design at Nokia and living in Helsinki, so I did not have the opportunity to do the interview in person. But I have glimpsed Adam&#8217;s world through his Flickr stream, and some of these images have found their way into this post. I suggest you browse Adam&#8217;s photography for yourself; I cannot do justice to the thousands of nuanced perceptions of cities, networks and publics you will find there. In the meantime, here are three glyphs of Adam Greenfield that I liked a lot.</p>
<p><strong><em><a id="r315" title="&quot;My favorite shoes&quot;" href="http://www.flickr.com/photos/studies_and_observations/2074835498/">&#8220;My favorite shoes,&#8221;</a> <a id="cg3n" title="&quot;My favorite chair,&quot;" href="http://www.flickr.com/photos/studies_and_observations/2074042711/">&#8220;My favorite chair&#8221;</a> </em></strong><em>and</em><strong><em> </em></strong>photo by Adam Greenfield, <em><strong><a id="vjz1" title="&quot;Favoriteplace&quot;" href="http://www.flickr.com/photos/studies_and_observations/1849426174/">&#8220;Favoriteplace&#8221;</a></strong></em></p>
<p><strong><em><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteshoespost.jpg"><img class="alignnone size-full wp-image-2984" title="favoriteshoespost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteshoespost.jpg" alt="favoriteshoespost" width="225" height="225" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoritechair1.gif"><img class="alignnone size-medium wp-image-2975" title="favoritechair1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoritechair1-300x225.gif" alt="favoritechair1" width="300" height="225" /></a></em></strong></p>
<p>
<a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteplace2.jpg"><img class="alignnone size-medium wp-image-2992" title="favoriteplace2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/favoriteplace2-300x225.jpg" alt="favoriteplace2" width="300" height="225" /></a></p>
<h3>A Conversation (in gdoc) with Adam Greenfield</h3>
<p><strong>Tish Shute:</strong> Could you explain a little about the evolution of your thoughts on urban environments, ubicomp and interaction design? What shifts in your thinking have taken place over the last few years regarding the dawning of the age of ubiquitous computing? It has been a couple of years now since <a href="http://www.studies-observations.com/everyware/" target="_blank"><em>Everyware</em></a>; what aspects of the uptake of <em>Everyware</em> have most surprised, disappointed or inspired you? Which of the many theses you discuss in <em>Everyware</em> have become the most crucial for <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank"><em>The City Is Here For You To Use</em>?</a></p>
<p><strong>Adam Greenfield: You know, there&#8217;s a little passage in the liner notes to the second Throbbing Gristle album that I always think of when I&#8217;m asked questions along these lines. As part of their stance, they&#8217;d adopted the dry tone of a corporate annual report, and the preamble began by saying, &#8220;Since our last report to you, many things have changed. Indeed, it would be foolish to assume that it could be otherwise.&#8221; And I think that&#8217;s just exactly right: the world keeps moving, and the positions we&#8217;d staked ourselves to not so long ago may no longer be correct, or even relevant, to the one we find ourselves inhabiting now.<br />
</strong><br />
<strong>So, first, I think it&#8217;s important to cop to all the places in <em>Everyware</em> where I just outright got things wrong. There&#8217;s a passage in Thesis 50, for example, where I unaccountably mock the idea that &#8220;the mobile phone&#8230;will do splendidly as a mediating artifact for the delivery of [ubiquitous] services.&#8221; OK, this was admittedly written in a pre-iPhone world &#8211; and was correct <em>for</em> that world &#8211; but you can really see my parochialism showing here. It took the iPhone to make the proposition as blazingly self-evident to me in North America as it had been for quite some time to folks in Europe and Asia.</strong></p>
<p><strong>Having said that, though, I think I&#8217;m justified in taking a little pride in what the book got right. The broader trends the book set out to discuss &#8211; the colonization of everyday life by information processing &#8211; well, take a good look around you. And so one of the points of departure for the new book is taking everything posited in <em>Everyware</em> as a given: the urban environment, and most everything in it as well, has been provisioned with the kind of abilities you mention. So what now?</strong></p>
<p><strong>How do you go about designing informatic systems so they don&#8217;t undermine the wonderful things about cities? How do you design cities so they can incorporate networked informatics to greatest advantage? How, especially, do you accomplish these things when the disciplinary communities involved barely speak the same language? And how do you keep everyone&#8217;s eyes on the prize, which is the ordinary human being asked to make sense of these new propositions? These are the questions <em>The City Is Here For You To Use</em> sets out to address.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldthelonghere.jpg"><img class="alignnone size-full wp-image-2993" title="adamgreenfieldthelonghere" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/adamgreenfieldthelonghere.jpg" alt="adamgreenfieldthelonghere" width="500" height="321" /></a></p>
<p><em>Adam talking about <a href="http://www.flickr.com/photos/studies_and_observations/3181518615/" target="_blank">&#8220;Le Long Ici&#8221;</a> in Paris (also see Adam&#8217;s post, <a href="http://speedbird.wordpress.com/2008/05/04/the-long-here-and-the-big-now/" target="_blank">&#8220;The long here and the big now&#8221;</a>)</em></p>
<p><strong>TS:</strong> You mention that the hardest part of producing <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank"><em>The City Is Here For You To Use</em></a> wasn&#8217;t <em><strong>&#8220;keeping on top of all the emergent manifestations of urban informatics, or even developing a satisfying spinal argument about their significance&#8221;</strong></em> but getting the voice right. It seems that now is the perfect time for a book that would really speak to a wide audience. But it also seems that the city that is here for you to use is manifesting quite differently in different parts of the world. You seem to be somewhat of a nomad &#8211; Japan to NYC to Helsinki. Can putting together different views of urban informatics give us more depth perception on the emergence of ubiquitous computing?</p>
<p><strong>AG: There&#8217;s no question in my mind that the long-term experience of everyday life in Tokyo, New York, and now Helsinki has been an invaluable asset to me, as I imagine it would be to anybody interested in thinking or writing about the networked city. It&#8217;s given me a certain amount of parallax, you know? And that, in turn, throws a really interesting light onto how the selfsame technology can appear in substantially different guises in different social contexts.</strong></p>
<p><strong>But explaining those things &#8211; those complicated, delicate negotiations &#8211; getting them right, doing them justice, doing so in a way that doesn&#8217;t dumb anything down, and still remaining accessible? It&#8217;s a challenge, let me tell you. You want to remain approachable and humane, but you also want to explain things like different jurisprudential takes on property, or how advocates of RESTful architectures think that REST is the reason why Internet adoption spread as rapidly as it did. If you want to enjoy even one chance in a hundred of getting your message across, you&#8217;ve got to start with an understanding that those subjects are MEGO territory for most people &#8211; whether they hail from Shibuya, Shoreditch or San Pedro.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon.jpg"><img class="alignnone size-full wp-image-2996" title="everywareicon" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon.jpg" alt="everywareicon" width="136" height="135" /></a></p>
<p><em><strong><a href="http://www.flickr.com/photos/studies_and_observations/89045331/" target="_blank">Everyware icons: Information processing dissolving into behavior</a></strong> </em>(Icons inspired by <a href="http://www.elasticspace.com/" target="_blank">Timo Arnall</a>; design by Adam Greenfield and <a href="http://www.nurri.com/">Nurri Kim</a>). <em>[Adam notes on his Flickr page that he tweaked <a href="http://www.flickr.com/search/?w=14112399%40N00&amp;q=everyware+icons&amp;m=text" target="_blank">these icons</a> as section headers for <a href="http://www.studies-observations.com/everyware/" target="_blank"><em>Everyware</em></a>]</em></p>
<p><strong>TS:</strong> Could you explain more about what you term &#8220;onto&#8221; and &#8220;ontome&#8221; and how this differs from spimes and spime wrangling?<strong><br />
</strong><strong><br />
AG: You know, I never did get to develop that idea as much as I would have liked. In my mind, at least, &#8220;ontome&#8221; referred to the totality &#8211; the global environment of addressable, queryable, scriptable objects. (An &#8220;onto,&#8221; then, would be any given such object.) I guess I was looking for words that would do two things: allow us to distinguish between the instantiation and the class, and leave us with a better word than &#8220;spime.&#8221;</strong></p>
<p><strong>TS: </strong>When you say a better word than spime, is this because&#8230;<br />
<strong><br />
AG: Euphony, primarily. : . )</strong></p>
<p><strong>TS:</strong> When I first used the Android app <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> on Broadway, NYC &#8211; a street I have traveled thousands and thousands of times &#8211; and it offered up new information about itself, it was definitely an &#8220;OMG this is big!&#8221; moment for me. Like the first time I clicked on a screen and Amazon sent out a book in the early nineties (something so ordinary now it seems impossible that it was exciting, but I remember it was to me!). But if I understand <a href="http://speedbird.wordpress.com/2008/08/19/worth-a-thousand-words-etc/" target="_blank">your post here</a> correctly, isn&#8217;t Android with compass the first easy-to-use context-aware mediator for wrangling onto, ontome and spimes?<strong><br />
</strong><br />
<strong>AG: Wikitude sure looks pretty impressive, and maybe even useful. But I would never, ever call it &#8220;context-aware.&#8221;<br />
</strong><br />
<strong>To my mind, at least two more things would need to happen before we could comfortably think of it as a &#8220;context-aware spime wrangler.&#8221; First, the buildings and other public objects around you would actually have to be spimy &#8211; they&#8217;d have to report something of their past and current state to the network. And then, some application running on your phone would somehow have to cross-reference that state information with some fact about your current state of being, and deliver you relevant information.</strong></p>
<p><strong>So, let&#8217;s take your Wikitude example. You&#8217;re walking down Broadway and you pass an unfamiliar building, and for whatever reason you want to know more about it. Your phone pings the building&#8217;s dynamic self-description, and it replies to the effect that Andy Warhol had his Factory there between 1973 and 1984. If Wikitude chooses to share this particular piece of information with you, and not some other potentially germane factoid from the building&#8217;s history, on the strength of the fact that &#8220;The Velvet Underground and Nico&#8221; was in your last.fm playlist? That would constitute some small measure of context-awareness.</strong></p>
<p><strong>But you see how hard we had to try just to come up with an example, how forced it is, how</strong><em><strong> so-what. </strong></em><strong>And I have to say that &#8211; short of some infinitely supple system that really could model your innermost desires ahead of real time, and present appropriate responses to them &#8211; most so-called &#8220;context-aware&#8221; applications and services are like this. They&#8217;re either trivial, or wildly overambitious.</strong></p>
<p><strong>Maybe we don&#8217;t need for things to be context-aware for them to be useful, anyway. Certainly a great many objects in the world are starting to report their own status, and many more will do so in the fullness of time. And for the most part, all you&#8217;ll need to avail yourself of them is a Web browser running on a device that knows where it is in the world. An iPhone or an Android device will work splendidly &#8211; I called the iPhone &#8220;the first real everyware device&#8221; the day it came out and I was able to play with it for the first time &#8211; and in that way, the answer to your question is &#8220;yes.&#8221; Not to be longwinded or anything. ; . )</strong></p>
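<p><em>The cross-referencing step Adam describes &#8211; matching a building&#8217;s self-reported facts against the user&#8217;s own state to pick the most germane one &#8211; can be sketched in a few lines. This is only an illustration of the idea; the data, field names, and scoring rule below are all hypothetical.</em></p>

```python
# A minimal sketch of the hypothetical "context-aware spime wrangler":
# a building reports facts about itself, and the client picks the fact
# that best matches the user's current context (here, a recent playlist).
# All data and field names are invented for illustration.

building_facts = [
    {"text": "Andy Warhol had his Factory here between 1973 and 1984",
     "tags": {"warhol", "velvet underground"}},
    {"text": "Built in 1896 as a department store",
     "tags": {"architecture", "retail"}},
]

user_state = {"recent_playlist": ["the velvet underground and nico"]}

def pick_relevant(facts, state):
    """Return the fact whose tags overlap the user's context the most."""
    def score(fact):
        # Crude relevance: count tags that appear inside any playlist title.
        return sum(
            1
            for title in state.get("recent_playlist", [])
            for tag in fact["tags"]
            if tag in title
        )
    return max(facts, key=score)

print(pick_relevant(building_facts, user_state)["text"])
# Surfaces the Warhol fact, because "velvet underground" matches the playlist.
```

<p><em>The real difficulty, as Adam notes, isn&#8217;t this selection step but everything around it: objects that actually self-describe, and a model of the user rich enough to make the match meaningful.</em></p>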
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/objectwithimperceptibleproperties.jpg"><img class="alignnone size-medium wp-image-3000" title="objectwithimperceptibleproperties" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/objectwithimperceptibleproperties-300x212.jpg" alt="objectwithimperceptibleproperties" width="300" height="212" /></a></p>
<p><em><a href="http://www.flickr.com/photos/studies_and_observations/206984090/#DiscussPhoto" target="_blank">This Object has imperceptible properties. </a> [Adam notes on his Flickr page: &#8220;This is a custom RFID-enabled transit pass that <a href="http://www.elasticspace.com/" target="_blank">Timo Arnall </a>had made up for me here in Seoul. I&#8217;ve (clumsily) tagged it with the icon that Nurri and I developed to represent just such emergent situations as this in the everyware milieu &#8211; that there&#8217;s no way for anyone to understand that this object has puissance beyond the obvious simply by examining it.&#8221;]</em></p>
<p><strong>TS: </strong>It seems that we are just at the beginning of understanding how to create networks of spimes (e.g. <a href="http://www.pachube.com/" target="_blank">Pachube</a>). Gavin Starks of <a id="ya:2" title="AMEE" href="http://www.amee.com/">AMEE</a> (&#8220;the world&#8217;s energy meter&#8221;) once suggested to me that AMEE could be described as a facilitator of networked spimes (everything will have an energy identity). I think you may be familiar with AMEE because you keynoted next to Gavin at <a href="http://2007.xtech.org/public/schedule/grid/2007-05-16" target="_blank">Xtech 2007</a>.</p>
<p>I would be interested to hear your thoughts on AMEE.</p>
<p>When <a href="http://speedbird.wordpress.com/2008/08/19/worth-a-thousand-words-etc/" target="_blank">you discussed onto and ontome in this post</a>, you noted:</p>
<blockquote><p><em><strong>&#8220;The greater part of the places and things we find in the world will be provided with the ability to speak and account for themselves. That they&#8217;ll constitute a coherent environment, an <a href="http://www.graphpaper.com/2006/03-23_a-spime-is-a-species">ontome</a> of <a href="http://flickr.com/photos/studies_and_observations/89092744/">self-describing networked objects</a>, and that we&#8217;ll find having some means of handling <a href="http://web.archive.org/web/20050117141647/www.v-2.org/greenfieldspime.pdf">the information flowing off of them</a> very useful indeed.&#8221;</strong></em></p></blockquote>
<p>Is the idea of &#8220;energy identity&#8221; that AMEE proposes an ontome? <em><br />
<strong><br />
</strong></em><strong>AG: See below for a précis of my feelings regarding environmental/sustainability initiatives, AMEE included. Uh&#8230;is AMEE an ontome? No. There&#8217;s just one ontome, and it&#8217;s coextensive with what folks now call the Internet of Things. It sounds like individual AMEE sensors would be &#8220;ontos.&#8221;</strong></p>
<p><strong>But I think the difficulty we're having is a pretty good indicator that the terminology is more trouble than it's worth. Sometimes a coinage, as satisfying as it may be lexically, just doesn't work for people. These days I'm trying to get out of the neologism trade.</strong></p>
<p><strong>TS: </strong>I know <a href="http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">when Usman Haque talks about Pachube</a> he talks about spimes and spime wrangling. I asked Usman for his thoughts on spimes and onto/ontome and he gave me some comments.</p>
<p><strong>Usman Haque:</strong> I think I had somehow missed the conversation about onto and ontome but backtracked through blog posts to piece it together (unfortunately some posts at v-2 and Studies &amp; Observations no longer exist!). There are a couple of things that have made me uncomfortable about the word 'spime': (a) the fact that it might be too easy to confuse with an "object" &#8211; a 'spime' should also encompass relationships between things, and not just the "thingness" itself; and (b) the sound of it (as Adam noted above). But then I am reminded of that horrible gooey interface used to plug into people in <a href="http://www.imdb.com/title/tt0120907/">eXistenZ</a> &#8211; it somehow seems appropriate that it should be a horrible gooey word, and not something that can disappear politely… So I like onto/ontome because it speaks to my first concern about 'spime'; but my second concern, it turns out, is not the problem I thought it was, and so onto/ontome might be… ahem… too euphonic! On the question of this thing people are calling the "Internet of Things", I've tried in lectures to reframe it as the "Ecosystem of Environments". Further, Vlad Trifa makes a delicious point that just as 'web' is different from 'internet', so too should we consider the "Web of Things" rather than the "Internet of Things", something I agree with.</p>
<p><strong>TS: </strong>It seems like this point about the difference between "the web of things" and the "internet of things" is pretty important?</p>
<p><strong>AG: The parallel distinction between Web and Internet sure is! They're two completely different things, right? And HTTP is far from the only protocol that runs over the Internet. Now, as to what Vlad means by extending this particular distinction to the domain of networked objects, I don't yet know; I haven't had time to check it out. But sure, in principle I'd totally be willing to go along with the idea that there's a meaningful distinction between two environments named that way.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon3.jpg"><img class="alignnone size-full wp-image-3010" title="everywareicon3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/everywareicon3.jpg" alt="everywareicon3" width="142" height="139" /></a></p>
<p><em><a href="http://www.flickr.com/photos/studies_and_observations/89045326/in/photostream/" target="_blank">No information is collected here; network dead zone</a></em></p>
<p><strong>TS: </strong>I was just going over <a id="yo_s" title="Greenfield's principles of ubiquitous computing" href="http://www.we-make-money-not-art.com/archives/2006/10/adam-greenfield.php">Greenfield's principles of ubiquitous computing</a>. I am not sure that I see any current manifestations of ubicomp that hold to these principles yet?</p>
<p><strong>AG: Oh, sure there are. Look at the work Tom Coates has done on <a href="http://fireeagle.yahoo.net/" target="_blank">Yahoo!'s Fire Eagle</a>; look at <a href="http://www.dopplr.com/" target="_blank">Dopplr</a>. And look at some of the steps other, less compassionate developers (e.g. Facebook) have been forced to take by their own users.</strong></p>
<p><strong>Look, those principles are just codifications of common sense and basic neighborly virtues, expressed in language appropriate to the domain of application. The best, smartest and most ethical developers have never needed guidelines to do the right thing. But especially inside companies and other complex organizations, people who want to implement compassion in their design of a technical system may occasionally find it useful to have some color of authority to invoke in their struggles. That's all those five principles are there for, and I'm well satisfied that people have been able to use them that way.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smarthome.jpg"><img class="alignnone size-medium wp-image-3005" title="smarthome" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smarthome-300x225.jpg" alt="smarthome" width="300" height="225" /></a></p>
<p><em><a href="http://www.flickr.com/photos/studies_and_observations/501331002/" target="_blank">Boffi's take on the smart home</a> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS:</strong> In your post, <a id="klme" title="More Songs About Context And Mood" href="http://speedbird.wordpress.com/2008/08/25/more-songs-about-context-and-mood/">More Songs About Context And Mood,</a> you suggest a direction for interaction design that you point out is not far from Yvonne Rogers' ideas in "Moving on from Weiser" about a switch in the goal of ubicomp from Weiser's vision of calm living ("computers appearing when needed and disappearing when not") to engaged living &#8211; ubicomp technologies designed not to do things for people but to help people engage more actively in the things that they do (ensembles, ecologies of resources).</p>
<p>You also suggest interaction designers should be:</p>
<blockquote><p><strong><em>&#8220;parsimonious about the interaction design challenges our organizations do take on, with an eye toward reducing the complications of context (and the attendant opportunities for default, misunderstanding, misfire, time-wasting, and humiliation) to some manageable minimum.&#8221;</em></strong></p></blockquote>
<p>As you have pointed out, "we don't do 'smart' very well yet." But paradoxically smart grids, smart homes, smart products, etc. are ubiquitously coming to market right now.</p>
<p>Yvonne Rogers suggests interaction designers should be:</p>
<blockquote><p><em>moving from a mindset that wants to make the environment smart and proactive to one that enables people, themselves, to be smarter and proactive in their everyday and working practices</em></p></blockquote>
<p>What areas might interaction designers most productively direct their attention towards?</p>
<p><strong>AG: You note that things called "smart homes" and "smart products" are coming onto the market, and that sure would seem to be the case. But as to whether or not these things are genuinely smart, we don't have anything more to go on than the marketing department's word. I think you can already see that I tend to take language very seriously, and I really don't like uses like the "smart" here, or the "aware" in "context-aware." They overpromise; they cannot help but set us up for failure and disappointment.</strong></p>
<p><strong>You know what I'd really like to see interaction design wrestle with? I would love to see a rigorous, no-holds-barred examination of the complexities of the self and its performance in everyday life, and how these condition our use of public space (and personal media in public space). I would love to see the development of ostensibly "social" platforms informed by some kind of reckoning with issues like vulnerability, dishonesty, the fact of power dynamics. In other words, before we deign to go about "helping" people, wouldn't it be lovely if we understood what they perceived themselves as needing help with, and why?</strong></p>
<p><strong>I'd also pay good money to see talented interaction designers turn their efforts toward tools for the support of deliberative democracy, for the navigation of complex multivariate decision spaces, and for conflict resolution.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/locativeasamood.jpg"><img class="alignnone size-full wp-image-3071" title="locativeasamood" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/locativeasamood.jpg" alt="locativeasamood" width="500" height="375" /></a></strong></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/2521894341/" target="_blank">Locative is a mood</a> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS:</strong> I know you said this would take too long to explain but I couldn't help noticing that you seem to be, perhaps, skeptical about the role everyware can play in sustainable living. And yet it seems at the moment, in the hacker and business communities at least, the role of everyware in reducing carbon footprint, energy management etc., is the great green hope?</p>
<p>Will everyware enable or hinder fundamental changes at the level of culture and identity necessary to support the urgent global need &#8211; "to consume less and redefine prosperity"?</p>
<p><strong>AG: I'm not skeptical about the potential of ubiquitous systems to meter energy use, and maybe even incentivize some reduction in that use &#8211; not at all. I'm simply not convinced that anything we do will make any difference.</strong></p>
<p><strong>Look, I think we really, seriously screwed the pooch on this. We have fouled the nest so thoroughly and in so many ways that I would be absolutely shocked if humanity comes out the other end of this century with any level of organization above that of clans and villages. It's not just carbon emissions and global warming, it's depleted soil fertility, it's synthetic estrogens bioaccumulating in the aquatic food chain, it's our inability to stop using antibiotics in a way that gives rise to multi-drug-resistance in microbes.</strong></p>
<p><strong>Any one of these threats in isolation would pose a challenge to our ability to collectively identify and respond to it, as it's clear anthropogenic global warming already does. Put all of these things together, assess the total threat they pose in the light of our societies' willingness and/or capacity to reckon with them, and I think any moderately knowledgeable and intellectually honest person has to conclude that it's more or less "game over, man" &#8211; that sometime in the next sixty years or so a convergence of Extremely Bad Circumstances is going to put an effective end to our ability to conduct highly ordered and highly energy-intensive civilization on this planet, for something on the order of thousands of years to come.</strong></p>
<p><strong>So (sorry <em>again</em>, Bruce) I just don't buy the idea that we're going to consume our way to Ecotopia. Nor is any symbolic act of abjection on my part going to postpone the inevitable by so much as a second, nor would such a sacrifice do anything meaningful to improve anybody else's outcomes. I'd rather live comfortably &#8211; hopefully not obscenely so &#8211; in the years we have remaining to us, use my skills as they are most valuable to people, and cherish each moment for what it uniquely offers.</strong></p>
<p><strong>Maybe some people would find that prospect morbid, or nihilistic, but I find it kind of inspiring. It becomes even more crucial that we not waste the little time we do have on broken systems, broken ways of doing things. The primary question for the designers of urban informatics under such circumstances is to design systems that underwrite autonomy, that allow people to make the best and wisest and most resonant use of whatever time they have left on the planet. And who knows? That effort may bear fruit in ways we have no way of anticipating at the moment. As it says in the Qur'an, gorgeously: "At the end of the world, plant a tree."</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/biowall2.jpg"><img class="alignnone size-full wp-image-3008" title="biowall2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/biowall2.jpg" alt="biowall2" width="375" height="500" /></a></strong></p>
<p><em><a href="http://www.flickr.com/search/?q=biowall&amp;w=14112399%40N00" target="_blank">Biowall!</a> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS: </strong>In <a href="http://speedbird.wordpress.com/2007/12/09/antisocial-networking/" target="_blank">your post "Antisocial Networking,"</a> you make some telling comments on the sorry state of social networking systems.</p>
<div style="margin-left: 40px;"><strong><em>"All</em> <em>social-networking systems, as currently designed, demonstrably create social awkwardnesses that did not, and could not, exist before. All social-networking systems constrain, by design and intention, any expression of the full band of human relationship types to a very few crude options &#8211; and those static! A wiser response to them would be to recognize that, in the words of the old movie, 'the only way to win is not to play.'"</em></strong></div>
<p>But you do also state:</p>
<div style="margin-left: 40px;"><strong><em>"But it's past time for me to acknowledge that while the discourse of social networking may at first blush seem marginal to my core concerns, it's far more central to those concerns than I might wish."</em></strong></div>
<p>Which of your concerns is social networking more central to than you might wish and why?</p>
<p><strong>AG: Well, you know I'm interested in social interaction, interpersonal behavior, and in how these things play out in networked environments. There's virtually no way for me to avoid dealing with Facebook, as wretched as I think it is.</strong></p>
<p><strong>Facebook is pretty hegemonic, in that its reach and influence extend further than the universe of people who use it. I bump up against it constantly, in a few different ways. People send me links I can't access, because I'm not on Facebook. People spend time and energy trying to convince me that I'm really missing out, because I'm not on Facebook. The last few months, there have even been a few people who feel justified in expressing some kind of exasperation, who are really pissed off&#8230;because they can't find me on Facebook. It's become the sovereign interface to any kind of life in public, and as a result a great many people don't question its modes, tropes and metaphors.</strong></p>
<p><strong>So when it comes time to build some kind of situated interpersonal mediation framework, some kind of intervention in the fabric of the city, those are the tropes they reach for: accounts, profiles, friend counts, friendings and unfriendings, nudges and pokes. And as a member of a team tasked with the design of such systems, as a potential user of them, and certainly as someone exposed to the social rhetoric flowing downstream from their use, you bet these tropes become central to my concerns.</strong></p>
<p><strong>But what if we admitted that Facebook and the whole paradigm it's built on are broken? What would things look like if we started from a more sensitive understanding of the interaction between self and others? Say, the understanding Erving Goffman was offering us as far back as the late 1950s? Then you'd understand the need for provisions like a "backstage," a place to swap out one mask for another, the ability to present oneself differently to different communities and networks. That's what I'm interested in exploring.</strong></p>
<p><strong>TS: </strong>Social networking systems in their current form are crude and express a very narrow bandwidth of human relationship. But already people are connecting everyware's networked social acts to existing social networking systems. At the ITP winter show there was <a id="eo:2" title="kickbee" href="http://gizmodo.com/5109297/kickbee-now-the-world-can-know-what-your-fetus-is-up-to">kickbee</a> &#8211; networked fetal communication (and <a id="kwj6" title="tweetmobile" href="http://tweetmobile.com/">tweetmobile</a>, which used Twitter as an actuator for an ambient display) &#8211; and green everyware (energy monitoring) is showing up in a number of forms on existing social networks. But rather than just hooking up everyware to these existing flawed social network systems, does everyware require a reimagining of networked social interactions and social networking systems?</p>
<p><strong>AG: That's a great question, and I think the answer is clearly "yes." It's one thing to confine the consequences of that brokenness to the Web, and entirely another to let it bleed out into the world.</strong></p>
<p><strong>Does that mean any such reimagining is <em>going</em> to happen, that people will somehow refrain from plugging real-world outputs into these terribly flawed frameworks? Not a chance in hell. It's too late to put a fence on that particular cliff. But maybe there's still time to park an ambulance in the valley below.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/earthssurface.jpg"><img class="alignnone size-full wp-image-3074" title="earthssurface" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/earthssurface.jpg" alt="earthssurface" width="375" height="500" /></a></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/2970558731/" target="_blank">&#8220;A graphic representation of a portion of the Earth&#8217;s surface, as seen from above&#8221;</a> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS: </strong>I saw you tweet that you met Usman Haque from <a href="http://www.pachube.com/" target="_blank">Pachube</a> recently. What do you find most interesting about Pachube and <a href="http://www.eeml.org/" target="_blank">EEML</a>? Will you design a project for Pachube to push the conversation further? Did Usman ask you to take a role in the future of Pachube? How does Pachube enable the vision of<em> <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank"> The City Is Here For You To Use</a></em>? I could go on forever with questions, so please do tell!</p>
<p><strong>AG: OK, I should probably reiterate that my fundamental interest is in people, and in what they choose to make and do with technology, not the technology itself. For the last few years, I've particularly been trying to understand how people interact with each other and with the urban environments around them when those environments have been provisioned with the ability to gather, process and take action on data. And this is how I come about my interest in what Usman is up to with Pachube, because those "gather," "process" and "take action upon" functions are generally accomplished by different systems, designed by different groups of people, at different times and to different ends. What Pachube aims to do is make the difficult and not-particularly-glamorous work of connecting these pieces a whole lot easier.</strong></p>
<p><strong>Think of it as a step toward enabling the ontome, this so-called Internet of Things we've been talking about, the same way basic protocols like HTTP and HTML enabled the wildfire spread of the Internet we're familiar with. What Pachube offers is a way &#8211; a relatively straightforward and self-explanatory way &#8211; to plug any given compatible input into a similarly compatible output. So if you've got an air-quality sensor or a soil-pH sensor or a personal biometric monitor, you can plug it into Pachube, and someone else can grab the data those things generate and use it to drive a visualization, or the state of a physical system like a window, or whatever else they can imagine. It's as close as anyone's yet come to providing a plug-and-play backbone for the creation of responsive environments.</strong></p>
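<p><em>The pattern Greenfield describes here &#8211; a sensor feed on one side, an actuator or visualization on the other, with a broker connecting them &#8211; can be sketched in a few lines of Python. This is an illustrative simulation of the idea only, not Pachube's actual API: the Hub class, the feed name and the pH threshold are all invented for the example.</em></p>

```python
class Hub:
    """Toy broker: sensors publish datapoints to named feeds, and any
    number of outputs (visualizations, actuators) subscribe to them."""

    def __init__(self):
        self.subscribers = {}  # feed name -> list of callbacks
        self.latest = {}       # feed name -> most recent datapoint

    def subscribe(self, feed, callback):
        """Register an output to be driven by a feed."""
        self.subscribers.setdefault(feed, []).append(callback)

    def publish(self, feed, value):
        """A sensor pushes a new datapoint; every subscriber reacts."""
        self.latest[feed] = value
        for callback in self.subscribers.get(feed, []):
            callback(value)


# Wire a hypothetical soil-pH sensor to a greenhouse vent actuator.
hub = Hub()
vent = {"open": False}

def drive_vent(ph):
    # Invented rule: open the vent when the soil reads too acidic.
    vent["open"] = ph < 5.5

hub.subscribe("soil-ph", drive_vent)
hub.publish("soil-ph", 6.8)  # vent stays closed
hub.publish("soil-ph", 5.1)  # vent opens
```

<p><em>The point of the sketch is that the sensor and the actuator never know about each other; the broker is the only shared dependency, which is what makes the inputs and outputs freely recombinable.</em></p>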
<p><strong>And I think it's absolutely brilliant that it's designed to work with Arduino and Processing, two lightweight, open-source frameworks that hobbyists and researchers (and even one or two more serious developers) around the world are already using to build things. (Arduino's a kit of parts for doing basic physical computing &#8211; using data to drive lights, motors, and other actuators that have effect out here in the world &#8211; while Processing is a very accessible language for doing dynamic and interactive graphics for screen-based media.) Given both its openness and modularity, and its willingness to build on top of the very popular frameworks that already exist, I'm very excited to see what people make of and with Pachube.</strong></p>
<p><strong>I have to be honest and admit that personally, I couldn't really care less about the environmental angle, for reasons that I went into at embarrassing length above. What I'm engaged by in Usman's work is the idea that Pachube is helping to create an open platform for people to share data more readily. And while, no, he hasn't explicitly asked me to take any particular stake in things, I'm always happy to lend a hand in whatever way would be most useful. I think it's a project worth supporting.</strong></p>
<p><strong>As to how Pachube enables some of the ideas in</strong><em><strong> The City Is Here</strong></em><strong>, the answer has to do with the book's call for every "public object" &#8211; every lamppost, bus shelter, commercial façade, and so forth &#8211; to support an open API. Something's got to string all those objects together, present them to people as resources to be taken up and used, and Usman's offered us a critical first step in that direction.</strong></p>
<p><strong>TS:</strong> Usman suggested it might be interesting to ask you about "the tension between 'could' and 'should.'"</p>
<p><strong>Usman Haque: </strong>There are a whole bunch of things that we "can" do, technologically speaking; how do we decide what we "should" do, as we find ourselves in an age where we can build almost anything we can imagine &#8211; particularly with reference to the technology/privacy/security triumvirate? For example, leaving aside that the majority of the world is *not* in the technology "paradise" that we're in here in the West, only a small fraction of people are currently producing the technology that the rest of us use. One aim is to get people more engaged in the productive process, but in a sense that will also mean the whole wide ecosystem of technology will be even bigger, both "good" stuff and "bad" (that qualification firmly placed on how it's used), as opposed to now, when we can focus on quite specific things that government &amp; industry are doing and say "that shouldn't be happening&#8230;". Part of this relates to something Adam said in the comments on his blog (see <a href="http://speedbird.wordpress.com/2007/12/02/urban-computing-pamphlet-is-go/" target="_blank">here</a>).</p>
<p><strong>AG: I think the first part of answering that question has to involve figuring out who "we" are in any given situation. A "we" composed of seven Helsinki-based Linux developers would most likely arrive at very different answers than the United States Air Force Materiel Command or Samsung's board of directors, right? So clearly, a first challenge is getting to some kind of pragmatically useful alignment between those local and occasionally even painfully parochial perspectives with what's best for the Big We. And this challenge is only going to become more vexing as the ability to imagine, design, build and deploy informatic componentry gets more and more widely distributed. In this respect the spread of simple, modular, low-barrier-to-entry tools only makes things worse!</strong></p>
<p><strong>The primary issue that I can see here is that the inherent clock speed of technical development is so very much faster than that of any meaningful deliberative process "we" might bring to bear on it. A concomitant concern is that the sources of technical innovation and production are now so widely distributed that you can be reasonably certain that somebody, somewhere will implement any given technically feasible idea, no matter how offensive, poorly thought-out, socially disruptive or frankly stupid. A public toilet you have to SMS to unlock and use? A "Friend Finder" visualization with high locational precision and no privacy features whatsoever? A first-person rape-simulation "game"? A clunky brown iPod knockoff? Somebody thought each one of these things was worth the time, expense and effort to actually go about making it. They exist.</strong></p>
<p><strong>But I'm pretty old-fashioned in some ways, in that I think the good old Habermasian idea of the public sphere still has some life left in it. And I think it should be self-evident by now that there's no necessary contradiction between even the newest (cough) "social media" and the formation of such a sphere. So you've provided a forum, and in it I get to express my belief that these things are stupid and pointless and probably should not have been built. And if somebody gets all het up about that, they can argue right back at me in comments. And eventually one or another of these positions begins to tell, in terms of regulation, legislation, and other tools of the juridical order, in terms of protest campaigns or organized boycotts or litigation&#8230;in terms of nonexistent sales!</strong></p>
<p><strong>There's nothing new in any of this, of course, though indubitably some of the dynamics are amplified or accelerated by e-mail, Twitter and YouTube. My main contention is that informatic technology now has such deeply pervasive implications, and for things like presentation of self that previous waves of technical development barely touched, that "we" as societies need to be very much more conscious of the consequences before committing to any one course of action.</strong></p>
<p><strong>I should also point out that I do not, at all, believe that we're "in an age where we can build almost anything we can imagine," though I might buy "&#8230;<em>two or three of</em> almost anything we can imagine." On the contrary, as I implied above, I think the global constraints on our ability to operate freely are already becoming quite evident, and will continue to grow teeth over the next few decades.</strong></p>
<p><strong>TS: </strong>Also Usman added&#8230;</p>
<p><strong>Usman Haque:</strong> ...where Adam said: <em>in this regard, I very much *do* have a problem with "just showing up."</em> &#8211; something I feel as well. But I always wonder: what happens when one appears to be mandating participation&#8230;?</p>
<p><strong>AG: Look, I happen to have a strong &#8211; maybe some would say obnoxious or hyperactive or overdeveloped &#8211; sense of personal responsibility and accountability. I think one is basically committed to some measure of responsibility for the commonweal simply by surviving to the age of majority. The choice of how, particularly, to discharge that responsibility can only be yours and yours alone, but it can't be ducked or gotten around without severe and entirely predictable consequences. So to Usman I'd respectfully suggest that I'm not the one mandating participation. Life is.</strong></p>
<p><strong>TS:</strong> It seems we have grown accustomed to striking a Faustian bargain on the internet today &#8211; in order to share and distribute parts of our identity we are expected to give up key information to one site to store and disperse our data. I took part in <a href="http://www.ugotrade.com/2007/12/21/a-conversation-with-eben-moglen-on-second-life/" target="_blank">a discussion with David Levine, IBM and Eben Moglen on privacy</a> last year. And Eben Moglen gave a succinct description of the elements of privacy and how they have been treated in the American Constitution that is, I think, relevant to unpacking some of the challenges of ubiquitous computing. Here are some extracts from that conversation, where Eben notes:</p>
<blockquote><p><em>there are three elements that are mixed up in privacy and we tend not to notice which one we are talking about at any given moment.</em></p>
<p><em>There is secrecy &#8211; that is, the data should not be readable by or understandable by anybody except me or people I designate. There is anonymity, which is that the data can be seen by anybody, but whom it is about should be knowable only by me or people that I designate. And there is autonomy, which isn't about either secrecy or anonymity but which is about my right to live under circumstances which reinforce my sense that I am in control of my own fate. And this form of privacy is actually the one we talk about in the constitutional structure when we talk about the right to get an abortion or use birth control.</em></p></blockquote>
<p>â€œAnonymityâ€ is a condition that is a deep structuring characteristic of the internet as you, Lessig and others have commented on.Â  And frequently we are promised (questionably) â€œsecrecyâ€ or anonymity as privacy protection by services handling our data on the internet.Â  But Eben (one of the USâ€™s great constitutional lawyers) points out that â€œautonomyâ€ is a key form of privacy in theÂ  US constitutional structure that is often compromised in situations where our digital selves may constrain our non-digital selves.</p>
<blockquote><p><em>The real issue here is about the forcing of choices on us&#8230; digital aspects of identity can quickly acquire an inflexibility that constrains our non-digital selves.</em></p>
<p><em>I see again and again the ways in which people now find themselves unable to make certain life choices easily because their digital self has acquired an inflexibility that constrains their non-digital self.</em></p></blockquote>
<p>As we go beyond the end-to-end internet and lose the structuring characteristic that has privileged anonymity, how do you see these three elements of privacy &#8211; anonymity, secrecy and, most importantly, autonomy &#8211; being worked out in a networked world beyond the end-to-end internet?</p>
<p>Are there any new structuring characteristics that could privilege autonomy? (which Eben indicates is linked to having a flexible identity).</p>
<p><strong>AG: If we accept for the moment a definition of autonomy as a feeling of being master of one's own fate, then absolutely yes. One thing I talk about a good deal is using ambient situational awareness to lower decision costs &#8211; that is, to lower the information costs associated with arriving at a choice presented to you, and at the same time mitigate the opportunity costs of having committed yourself to a course of action. When given some kind of real-time overview of all of the options available to you in a given time, place and context &#8211; and especially if that comes wrapped up in some kind of visualization that makes anomaly detection and edge-case analysis instantaneous gestalts, to be grasped in a single glance &#8211; your personal autonomy is tremendously enhanced. <em>Tremendously</em> enhanced.</strong></p>
<p><strong>But as to how this local autonomy could be deployed in Moglen’s more general terms, I don’t know, and I’m not sure anyone does. Because he’s absolutely right: Bernard Stiegler reminds us that the network constitutes a <em>global mnemotechnics</em>, a persistent memory store for planet Earth, and yet we’ve structured our systems of jurisprudence and our life practices and even our psyches around the idea that information about us eventually expires and leaves the world. Its failure to do so in the context of Facebook and Flickr and Twitter is clearly one of the ways in which the elaboration of our digital selves constrains our real-world behavior. Let just one picture of you grabbing a cardboard cutout’s breast or taking a bong hit leak onto the network, and see how the career options available to you shift in response.</strong></p>
<p><strong>This is what’s behind Anne Galloway’s calls for a “forgetting machine.” An everyware that did that &#8211; that massively spoofed our traces in the world, that threw up enormous clouds of winnow and chaff to give us plausible deniability about our whereabouts and so on &#8211; might give us a fighting chance.</strong><br />
<strong><br />
TS: </strong>The concept of autonomy is signaled clearly in the title you have chosen for your next book, <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank"><em>The City Is Here For You To Use</em>,</a> and is a theme of all your writing! While you talk about many of the possible constraints to presentation of self and potential threats to a flexible identity that ubicomp poses, your next book signals optimism. What are your key grounds for optimism?</p>
<p><strong>AG: It’s not optimism so much as hope. Whether it’s well-founded or not is not for me to decide. I guess I just trust people to make reasonably good choices, when they’re both aware of the stakes and have been presented with sound, accurate decision-support material.</strong></p>
<p><strong>Putting a fine point on it: I believe that most people don’t actually want to be dicks. We may have differing conceptions of the good, our choices may impinge on one another’s autonomy. But I think most of us, if confronted with the humanity of the Other and offered the ability to do so, would want to find some arrangement that lets everyone find some satisfaction in the world. And in its ability to assist us in signalling our needs and desires, in its potential to mediate the mutual fulfillment of same, in its promise to reduce the fear people face when confronted with the immediate necessity to make a decision on radically imperfect information, a properly-designed networked informatics could underwrite the most transformative expansions of people’s ability to determine the circumstances of their own lives.</strong></p>
<p><strong>Now that’s epochal. If that isn’t cause for hope, then I don’t know what is.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/obamannook1.jpg"><img class="alignnone size-full wp-image-3076" title="obamannook1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/obamannook1.jpg" alt="obamannook1" width="375" height="500" /></a></strong></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/3246420459/" target="_blank">Newson Obamanook</a> &#8211; photo by Adam Greenfield, &#8220;The fact that it was one of the happiest days of my adult life may have colored my appreciation of this space. A bit, anyway.&#8221;</em></p>
<p><strong>TS:</strong> In your writing you seem to imply that we will not find answers to our new relationship with Everyware by transposing the internet onto things for convenience’s sake, but rather that, like the bike messengers, we must explore the rich and complex terrain of the city that is ours to use in a give-and-take relationship. Through our own exertions we find how “anything reasonably smooth and approximately horizontal can become a thoroughfare,” rather than be served up the city as something for us to consume.</p>
<p>You seem to be suggesting our city becomes ours to use because of the way we use it in our personal journeys &#8211; like “the messenger subconsciously maps the contours of an economic geography &#8211; known sources and sinks of courier assignments, or ‘tags’ &#8211; and a threat landscape, this latter comprised of blind corners, cable-car and metro tracks, and traffic lanes.”</p>
<p>But bike messengers are the lone rangers of our big cities. Others surf the city in tribes that ride the roiling tides of highly networked information together. How are the “natural” gestures of these tribes &#8211; e.g. day traders, yoked to the tracings of a hive mind &#8211; part of the city that is here for us to use? I thought the comment <a href="http://twitter.com/ginsudo" target="_blank">@ginsudo</a> made shortly after joining Twitter and setting up TweetDeck particularly poignant:</p>
<blockquote><p><em><span class="status-body"><span class="entry-content">“watching Tweetdeck is like watching stock market of your personality ebb and flow. needs analytics to maximize inherent self-involvement.”</span></span></em></p></blockquote>
<p>But for many of us our work has more in common with the day trader than the bike messenger, and we are pretty hooked on the ever-growing possibilities for “contact” and identity sharing/construction that social media has produced (with all the “Here Comes Everybody” (C. Shirky) benefits and risks). Early theorizing of a “calm,” “invisible” ubicomp seems out of synch with the excitable, active, engaged, contact-driven “users” that are <span class="status-body"><span class="entry-content">watching the stock market of their personality (or personal brand) ebb and flow.</span></span></p>
<p>How will these excitable/exciting processes of contact and identity sharing that have captured a pretty large segment of the popular imagination (not confined to the West &#8211; a service like <a id="f9mb" title="Gupshup" href="http://www.smsgupshup.com/">Gupshup</a> does much of the same curating, linking and distributing of identity in SMS that web-based social media does) be &#8211; or not be &#8211; part of <a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank">The City Is Here For You To Use</a>?<strong><br />
</strong><br />
<strong>AG: Let’s remember that ubicomp itself, as a discipline, has largely moved on from the Weiserian discourse of “calm technology”; Yvonne Rogers, for example, now speaks of “proactive systems for proactive people.” You can look at this as a necessary accommodation with the reality principle, which it is, or as kind of a shame &#8211; which it also happens to be, at least in my opinion. Either way, though, I don’t think anybody can credibly argue any longer that just because informatic systems pervade our lives, designers will be compelled to craft encalming interfaces to them. That notion of Mark Weiser’s was never particularly convincing, and as far as I’m concerned it’s been thoroughly refuted by the unfolding actuality of post-PC informatics.</strong></p>
<p><strong>All the available evidence, on the contrary, supports the idea that we will have to actively fight for moments of calm and reflection, as individuals and as collectivities. And not only that, as it happens, but for spaces in which we’re able to engage with the Other on neutral turf, as it were, since the logic of “social media” seems to be producing</strong><em><strong> Big Sort</strong></em><strong>-like effects and echo chambers. We already “maximize inherent self-involvement,” analytics or no, and the result is that the tools allowing us to become involved with anything but the self, or selves that strongly resemble it, are atrophying.</strong></p>
<p><strong>So when people complain about K-Mart and Starbucks and American Eagle Outfitters coming to Manhattan, and how it means the suburbanization of the city, I have to laugh. Because the real</strong> <strong>suburbanization is the smoothening-out of our social interaction until it only encompasses the congenial. A gated community where everyone looks and acts the same? <em>That’s</em> the suburbs, wherever and however it instantiates, and I don’t care how precious and edgy your tastes may be. Richard Sennett argued that what makes urbanity is precisely the quality of necessary, daily, cheek-by-jowl confrontation with a panoply of the different, and as far as I can tell he’s spot on.</strong></p>
<p><strong>We have to devise platforms that accommodate and yet buffer that confrontation. We have to create the safe(r) spaces that allow us to negotiate that difference. The alternative to doing so is creating a world of ten million autistic, utterly atomic and mutually incomprehensible tribelets, each reinforced in the illusion of its own impeccable correctness: duller than dull, except at the flashpoints between. And those become murderous. Nope. Unacceptable outcome.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/uncannyvalleys.jpg"><img class="alignnone size-full wp-image-3075" title="uncannyvalleys" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/uncannyvalleys.jpg" alt="uncannyvalleys" width="500" height="369" /></a></strong><br />
<em><a href="http://flickr.com/photos/studies_and_observations/3119708407/" target="_blank">Uncanny Valleys </a>- Adam comments, &#8220;Our apartment in NYC as rendered in Google Earth, with realtime traffic, weather, daylight and shadow as well as geodetic, street grid and service overlays. Camera view is South; that&#8217;s First Avenue just left of center-screen.&#8221;</em></p>
<p><strong><br />
TS:</strong> Smart phones are now drawing everyware data into the system, and the net is reaching into who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.</p>
<p><a id="u:ys" title="Nathan Freitas" href="http://openideals.com/">Nathan Freitas</a> says Android “seems to be the platform most likely to socialize the idea that sensor data could be a piece of every application.” (Android exposes APIs for a wide range of sensor data.)</p>
<p>What, in your view, will be the most likely platform &#8211; Android or another &#8211; to socialize the idea that sensor data could be a piece of every application?</p>
<p><strong>AG: An open platform. A platform with lots of hooks and ways to plug things into it, a strong developer community, a shallow learning curve and/or an easy-to-use, high-level development environment.</strong></p>
<p><strong>I don’t have a dog in this race, mind you. I couldn’t care less who gets there first.</strong></p>
<p><strong>TS: </strong>New location-based services, e.g., <a id="kvue" title="Xtify" href="http://xtify.com/featured">Xtify</a> and <a id="fajp" title="ViaPlace" href="http://www.viaplace.com/">ViaPlace</a>, are offering us ways to share location data across lots of different applications (e.g. Xtify and a dating application like <a id="yixz" title="MeetMoi" href="http://www.meetmoi.com/welcome">MeetMoi</a>). In return for services that allow us to share information, we must give key information up to one site to store and disperse (although there are many differences in approach to our data, from the Twitter stance &#8211; “show but don’t own” &#8211; to Facebook’s &#8211; “in order to show we must have rights to it”). But the basic model of Twitter &#8211; to provide a white-noise platform for people to build services on top of &#8211; seems to be being transposed to location-based services. Obvious questions arise, like what happens to our data in a startup like MeetMoi if they go belly up? Apparently in the dot-com bust, data was the first thing to go on the auction block in bankruptcy cases.</p>
<p>Also, I suppose it is hardly surprising (if disappointing to me) that some of the early location-based services are trying to get mindshare by picking up on the glue celebrities give to mass culture. At the last New York Tech Meetup, <a href="http://m.twitter.com/omgicu" target="_blank">OMGICU</a> demoed a rather terrifying new pre-launch location-based “participatory celebrity gossip application” which seems to combine all the worst features of social media with celebrity stalking, plus a narrative to change the notion of celebrity itself by “turning D listers into A listers.”</p>
<p>Hopefully location-based applications will not get stuck on “stalker, stalker, stalker” apps like OMGICU.</p>
<p>David Oliver (<a id="qgz3" title="Oliver Coady" href="http://olivercoady.com/">Oliver Coady</a>) gave me a good question: &#8220;How does timeliness and location-independence change our ideas of social media?&#8221;</p>
<p>And how can we design new architectures that can reinforce the sense that I am in control of my own fate?</p>
<p><strong>AG: But we’ve already come so far in terms of turning D-listers into A-listers! On a daily basis, I’m exposed to almost as many cues insisting I attend to nonentities and dullards like Robert Scoble as those insisting I attend to nonentities like Madonna or Thomas Friedman.</strong><strong> It’s gotten ridiculous.</strong></p>
<p><strong>Now, how does timeliness and location change our ideas of social media? It makes them dangerous!</strong></p>
<p><strong>Look, even a proud Z-lister like myself &#8211; I’m a public person only in the most debased and degraded meaning of that word &#8211; I’ve had experiences that shook me up, like having someone approach me while I was quietly hanging out in the back of St. Mark’s Books, and wanting to strike up a conversation based on some talk they’d seen me give a year or so previously. Now part of learning to deal with this kind of thing is shrugging it off, being grateful and flattered that someone thinks you’re interesting enough to single out for that kind of attention, or chalking this up to Sennett’s observation about the constitution of urbanity. Or doing all three at once.</strong></p>
<p><strong>But let’s remember that at the end of the day, a “social network” is nothing but a group of arbitrarily distributed human beings joined by a communications channel, and those people have eyes and ears. The degree to which they recognize some shared interest gives them significance filters. If social capital accrues to those in the network who are able to claim some connection with a “celebrity,” no matter how fleeting, then such connections are going to be mobilized, made explicit. And now say the network has been provided with the tools allowing it to plot the appearances of those putative celebrities in space and time, and what do you get? You get a circumstance in which it is very, very difficult to maintain any membrane between the private self and the world, for anyone who’s even remotely a public figure, whether they particularly want to be a public figure or not. You get network effects that amplify those locational traces, and further undermine any possibility of anonymity, even anonymity-by-suspension-of-interrogative-awareness (which is a clumsy way of referring to that blasé matter-of-factness around famous people that most big-city folks eventually develop).</strong></p>
<p><strong>Am I letting myself off the hook? Not in the slightest. I passed Terence Stamp on the street not so long ago, and you bet I Twittered it. My only excuse was that I Twittered it to a closed loop of no more than a few dozen people. But then, who knows what those few dozen people will turn around and do with that fact, on the open networks to which they in turn belong?</strong><strong> And that, too, is my responsibility.</strong></p>
<p><strong>I’m not sure there’s anything to be done about any of this but cultivate our own urbanity, learn to say “so what” when we happen to find ourselves next to Philip Seymour Hoffman in the line at Whole Foods.</strong><strong><br />
</strong></p>
<p><strong>TS: </strong>Zittrain, in <a href="http://futureoftheinternet.org/" target="_blank">The Future of the Internet: And How To Stop It</a>, foregrounds “generativity” and generative devices (as opposed to appliances) as the most fortuitous starting point for “tools to bring about social systems to match the power of the technical one.”</p>
<p>Are appliances a threat to the city that is here for you to use? How can generativity ensure <em><a id="pxeu" title="The project description for Adam Greenfield's upcoming book, The City Is Here For You To Use" href="http://speedbird.wordpress.com/2008/01/01/new-day-rising/" target="_blank">The City Is Here For You To Use</a></em> as Zittrain argues it has ensured, even if imperfectly, that the internet has been here for us to use?<strong><br />
</strong><br />
<strong>AG: You know, I haven’t read the book, I’ve only heard him give the talk, so it’s certainly possible there’s a subtlety to the argument that I’m missing. But I’m not sure Jonathan isn’t simply wrong about this notion of generativity. Not that the concern is misplaced, but that he’s insufficiently trustful in human agency. Is a car “generative,” by his definition? Certainly not. And yet look at all the cultural production that goes on around “the car,” look at all the assemblages people make with cars, from Beach Boys songs to <a href="http://en.wikipedia.org/wiki/Ghost-riding">ghost riding the whip</a>, from J.G. Ballard novels and <em>Herbie the Love Bug</em> to <em>Tokyo Drift.</em></strong></p>
<p><strong>Or probably more to his point: look at the Japanese mobile-phone market &#8211; seemingly one of the most locked-down and unpropitious circumstances imaginable for the production of culture, in technical terms and Zittrain’s both. And yet fully 50% of the bestselling books in Japan last year were written on mobile phones. Not <em>read</em>, which would already be impressive enough (if “impressive” is indeed the word): </strong><em><strong><a href="http://www.nytimes.com/2008/01/20/world/asia/20japan.html">written</a>. </strong></em><strong>What does that imply for his argument?</strong></p>
<p><strong>So, yes, I think there are grounds for concern, in that we shouldn&#8217;t allow technologies and frameworks to appear that unduly limit the scope of human creativity</strong><strong>. Code is still law. But I also think people are quite amply able to reach into what would appear to be the least propitious technologies and tell their own stories with same.<br />
</strong></p>
<p><strong><br />
TS: </strong>One aspect of Everyware that seems in need of some visionary yoga is how we will relate to pixels anywhere.</p>
<p>In <em><a href="http://www.lulu.com/content/1554599">Urban Computing and its Discontents</a></em> you mention how our technological trajectories often make it seem as if we get fixated on particular scenes in movies, e.g., <em>Minority Report</em>. You point out that so many ambient informatics projects seem simply “to expand the reach of signage and advertising in dense urban spaces… as if we’ve become transfixed by the scene from <em>Minority Report</em> where heterosexual cop John Anderton is on the run from his colleagues.”</p>
<p>Ideas from the <em>Minority Report</em> continue to hold sway in designs as we saw in the recent MIT demo of <a href="http://ambient.media.mit.edu/projects.php?action=details&amp;id=68" target="_blank">SixthSense</a> at TED.</p>
<p>But visions of augmented reality were pretty high profile in this year’s Super Bowl commercials (including a highly anthropomorphic imagining of ubicomp that was a kind of WoW mashup with a Pixar movie).</p>
<p>What recent movies/commercials have produced scenes most likely to be new fixation fodder for ubicomp, and why?</p>
<p><strong>AG: I don’t think I’m qualified to answer that, actually. We don’t have a TV, so I don’t see much in the way of commercials, and most of the films I wind up seeing are the kind that play at Anthology Film Archives. What I can say is that science fiction is currently suffering in toto from an inability or disinclination to posit future scenarios that are any weirder or more visionary than those emerging from other sectors of the culture. And that would be fine, except sf has traditionally been the place where we wrestled with the imaginary.</strong></p>
<p><strong>We need that set of tools, badly. If for no other reason than something I glean from personal experience: essentially my entire professional career has simply been the leveraging of ideas and concepts I originally wrestled with in the encounter with William Gibson and Bruce Sterling when I was 16. Today&#8217;s visionary sf means tomorrow&#8217;s halfway-competent generalist.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/nurrikim.jpg"><img class="alignnone size-full wp-image-3030" title="nurrikim" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/nurrikim.jpg" alt="nurrikim" width="375" height="500" /> </a></strong><a href="http://flickr.com/photos/studies_and_observations/531862201/" target="_blank"></a></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/531862201/" target="_blank">Nurri Kim in the waiting zone</a> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS: </strong>My AR friend, <a href="http://curiousraven.squarespace.com/about-me/">Robert Rice</a>, who is <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">working on a markerless AR platform,</a> notes that data visualization is one of the critical elements of AR in terms of “make or break.” Robert says, “even with the ultimate in ubiquitous data from everything, without good data vis it will all be useless.”</p>
<p>Also something Cory Doctorow said to me last year has really stuck in my mind. When I asked him what happens when Cyberspace everts, he talked about a reverse surveillance society:</p>
<div style="margin-left: 40px;"><em>“Surveillance is all about when people in authority know a lot about you. Instrumentation is when you know a lot about the world,”</em></div>
<blockquote><p><em>Cory: Well this is like Spook Country, the new Gibson novel – what happens when cyberspace everts – hmmm? I’m not sure I have anything very pithy to say on that EXCEPT… </em><br />
<em> Apart from all the traditional kind of overlay reality stuff, if there is one thing I am actually interested in seeing from a virtual world migrating to the real world it’s instrumentation. </em><br />
<em> I think a lot of things that are characteristic of very successful internet-based businesses is that they are extremely finely instrumented, so, like, Amazon knows in aggregate on a second-by-second basis how their site is being used by people and they can twiddle the dials in real time. </em></p>
<p><em> As users of the world we have very little access to that kind of instrumentation. We don’t even know how the tube is running. The tube knows how the tube is running and we kinda don’t. I would be really interested in seeing that. You’ve seen <a href="http://joi.ito.com/">Joi Ito’s</a> WoW interface, right? Have you seen it… </em></p></blockquote>
<p>Joi Ito’s WoW interface seems a long way from the calm, invisible imaginings of the early ubicomp visionaries?</p>
<p><strong>AG: Well, he’s got a particular kind of neural wiring. And there’s not a thing that’s wrong with that, except that I’d never, ever want to assert that what’s appropriate for Joi Ito necessarily is or should be understood to be appropriate for anybody else. The point of calling for open systems and frameworks is to allow us maximum scope of diversity in the ways we choose to interface with the world’s richness and complexity.</strong><em><strong><br />
</strong></em> <strong><br />
TS: </strong>What new imaginings/possibilities do you see when pixels anywhere are linked to everyware?<strong><br />
</strong><br />
<strong>AG: Product placement. Commercial insertions and injections, mostly.</strong></p>
<p><strong>Beyond that: one of the places where the Mark Weiser logic breaks down is in thinking that the platforms we use now disappear from the world just because ubiquitous computing’s arrived. We’ve still got radio, for example &#8211; OK, now it’s satellite radio and streaming Internet feeds, but the interaction metaphor isn’t any different. By the same token, we’re still going to be using reasonably conventional-looking laptops and desktop keyboard/display combos for a while yet. The form factor is pretty well optimized for the delivery of a certain class of services, it’s a convenient and well-assimilated interaction vocabulary, none of that’s going away just yet. And the same goes for billboards and “TV” screens.</strong></p>
<p><strong>But all of those things become entirely different propositions in an everyware world: more open, more modular, ever more conceived of as network resources with particular input and output affordances. We already see some signs of this with Microsoft’s recent “Social Desktop” prototype &#8211; which, mind you, is a very bad idea as it currently stands, especially as implemented on something with the kind of security record that Windows enjoys &#8211; and we’ll be seeing many more.</strong></p>
<p><strong>If every display in the world has an IP address and a self-descriptor indicating what kind of protocols it’s capable of handling, then you begin to get into some really interesting and thorny territory. The first things to go away, off the top of my head, are screens for a certain class of mobile device &#8211; why power a screen off your battery when you can push the data to a nearby display that’s much bigger, much brighter, much more social? &#8211; and conventional projectors.</strong></p>
<p><strong>Then we get into some very interesting issues around large, public interactive displays &#8211; who &#8220;drives&#8221; the display, and so forth. But here again, we&#8217;ll have to fight to keep these things sane. It&#8217;s past time for a public debate around these issues, because they&#8217;re unquestionably going to condition the everyday experience of walking down the street in most of our cities. And that&#8217;s difficult to do when times are hard and people have more pressing concerns on their mind.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/citywarecrash.jpg"><img class="alignnone size-full wp-image-3045" title="citywarecrash" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/citywarecrash.jpg" alt="citywarecrash" width="500" height="375" /></a><br />
</strong></p>
<p><em><a href="http://flickr.com/photos/studies_and_observations/2786991056/" target="_blank">Citywarecrash</a> &#8211; photo by Adam Greenfield, &#8220;An occupational hazard for urban screens.&#8221;</em></p>
<p><strong>TS: </strong>I know in <em>Everyware</em> you mentioned that architects have an important visionary role to play in imagining ubicomp, and I know you work closely with your wife, artist <a href="http://www.nurri.com/">Nurri Kim</a>. Robert Rice asked me the following question &#8211; which I will in turn ask you: &#8220;In terms of augmented reality, do you think virtual worlds and virtual reality experts/leaders are good pioneers for thought and guidance on AR? Or should we look for new leaders, or where are new leaders emerging? Is the tech similar enough for the old crowd to be useful, or is it different enough to be a disadvantage coming from the old models?<strong>&#8221;<br />
</strong><br />
<strong>AG: I should make it clear that I have absolutely no interest in virtual worlds or virtual reality. The so-called virtual worlds I’ve experienced seem sad and really rather tatty &#8211; eversions of the most predictable adolescent fantasies of unlimited power, reinscriptions of all the usual politics &#8211; and completely lacking in just about everything that makes life resonant, meaningful and awe-inspiring. And anyway, to paraphrase J.G. Ballard, ordinary, everyday life is now far more vividly and fantastically weird than anything you’ll see in Second Life. I mean, Garry Kasparov was heckled by a radio-control dildocopter, Joe the Plumber’s off to Gaza as a war correspondent, a sea of dust-covered BMWs waits in the long-term parking lot at Dubai International for owners who are never, ever coming back.</strong></p>
<p><strong>Look to virtual worlds for insight into the hard work of negotiating the actual, with its physics, its entropy, its suffering, with all its constraints? Oh my goodness gracious, no.<br />
And look to leaders? Never.</strong><strong> Leaders are for followers, and who wants to be that? I don’t mean you can’t take inspiration and insight from the work of others &#8211; not at all &#8211; but use your own imagination, take some personal risk, do your own damn work.</strong></p>
<p><strong>Now, having said that. This opposition of virtual and physical worlds strikes me as increasingly a false one, as it does many people. The hard-and-fast distinction between “the real world” and virtual environments makes less and less sense, as righteously satisfying as making it can sometimes seem. There may be attributes of this physical environment that are impossible to see or make use of without access to the networked overlay, and those attributes may in time come to constitute the primary wellsprings of a given place’s meaning. And if you’re offering me some insight that I think could be of utility in resolving the challenge of making this overlay accessible to all, equally, I’ll gladly accept it, no matter what domain or disciplinary background you claim</strong><strong> as your own. </strong></p>
<p><strong>Am I aware of any such insight coming out of virtual worlds? No. As Bryan Boyer notes, “If you want to start talking about some serious cross-disciplinary pollination then you better take both sides of that disciplinary divide seriously. When your </strong><em><strong>ubi- </strong></em><strong>runs into my building with its boring HVAC, mundane load paths, typical finished floors, plain old foundations, etc., the transformative powers of </strong><em><strong>comp </strong></em><strong>are bracketed pretty seriously by the realities of the physical world.”</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/thecloudgate.jpg"><img class="alignnone size-full wp-image-3064" title="thecloudgate" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/thecloudgate.jpg" alt="thecloudgate" width="500" height="375" /></a><br />
<a href="http://flickr.com/photos/studies_and_observations/1904838102/" target="_blank"><em>The Cloud Gate has landed</em></a><em> &#8211; photo by Adam Greenfield, &#8220;Tell me this doesn&#8217;t look *just* like the descriptions of &#8220;stasis fields&#8221; in 70s SF. In fact, the picture looks practically CGId to me.&#8221;</em></p>
<p><strong>TS:</strong> Some people thought the whole world would have been plastered with RFID by now. But before that has happened, markerless AR seems to be in our sights.</p>
<p>If I understand it correctly, marker versus markerless AR has quite different implications for how the cyberspace of ubicomp evolves. I asked Robert Rice (he is developing a markerless AR platform) to explain some of the differences. He said:</p>
<div style="margin-left: 40px;"><em>Markers are discrete physical objects; at worst, they are passive images that are linked to some sort of static data in a database somewhere (like a 3D object). If you destroy them, that's it. With markerless stuff, everything is persistent, dynamic, already linked in cyberspace. Marker-based stuff requires a secondary infrastructure of hardware for telecommunications.</em></div>
<p>Robert also pointed out to me that markerless AR may prove even more problematic for privacy:</p>
<div style="margin-left: 40px;"><em>Markers are easy to see, so you know where they are. RFIDs can&#8217;t really be seen, but they can be detected. With markerless AR, there is nothing obvious to the naked eye; you don&#8217;t know if someone has active AR going on or not, so you could be tracked and not know it. It&#8217;s not much more than today, with CCTVs all over the place; so it is the same [a surveillance issue] as marker-based, but more subtle or unobvious.</em></div>
<p>Do you have any thoughts about the different roles that markerless versus marker technologies will play in AR and ubicomp?</p>
<p><strong>AG: I need to admit that I&#8217;ve never until this moment heard the phrase &#8220;markerless AR,&#8221; although I&#8217;d think it&#8217;s more or less self-explanatory to anyone who&#8217;s been following this stuff. Let me make the distinction explicit, shall I, for anyone who hasn&#8217;t been? And you or Robert can correct me if I&#8217;ve gotten it wrong.</strong></p>
<p><strong>Augmented reality means that I have some mediating artifact that provides me with a visual overlay on the world</strong><strong>. This could be a phone, it could be a windshield, it could be a pair of glasses or contact lenses, doesn&#8217;t matter. And you&#8217;re going to use that overlay to superimpose some order of information about the world and the objects in it onto the things that enter my field of vision &#8211; onto what I see. So far, so good: that&#8217;s AR 101.</strong></p>
<p><strong>Now where does that information come from?</strong></p>
<p><strong>What you&#8217;re calling marker-based AR implies that there&#8217;s some reasonably strong relationship between the information superimposed over a given object, and the object itself. That object is an onto, a spime; it&#8217;s been provided with a passive RFID tag or an active transmitter. And it&#8217;s radiating information about itself that I&#8217;m grabbing, perhaps cross-referencing against other sources of information, and superimposing over the field of vision. Fine and dandy.</strong></p>
<p><strong>But there&#8217;s another way of achieving the same end, right? Instead of looking at a suit jacket on a rack and having its onboard tag tell you directly that it&#8217;s a Helmut Lang, style number such-and-such from men&#8217;s Spring/Summer collection 2011, Size 42 Regular in Color Gunmetal, produced at Joint Venture Factory #4 in Cholon City, Vietnam, and packed for shipment on September 3, 2010, you&#8217;re going to run some kind of pattern-matching query on it. And without the necessity of that object being tagged physically in any way, you&#8217;re going to have access to information about it. But this set of information isn&#8217;t, necessarily, what the object itself, or its creators or merchandisers, want you to know about it; it could be derived from online discussion fora or review sites, or blog posts, or whatever. All there needs to be is a lookup table, essentially, that tells you where to find information about any object in the field of vision whose identity can be established.</strong></p>
<p><strong>Do I have that right? And if I do, then as I understand it, the distinction is primarily a pragmatic one: it&#8217;s just easier to get to an augmented world, by far, if we don&#8217;t actually have to go to all the trouble of tagging everything in the world with its own dedicated RF transponder. Easier, and cheaper, and quicker, and more environmentally sound besides, because the relevant traffic is in bits, not atoms.</strong></p>
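<p>The &#8220;lookup table&#8221; idea above can be made concrete with a minimal sketch. Everything here is a hypothetical placeholder (the identity keys, the URLs, and the toy <code>recognize</code> function), not any real AR platform&#8217;s API; a real system would put a computer-vision model where the stub is.</p>

```python
# A minimal sketch of markerless AR's "lookup table": a recognized
# identity maps to information *about* the object, drawn from sources
# the object's maker doesn't control. All names and URLs are
# hypothetical placeholders.

# The lookup table: recognized identity -> where to find information.
INFO_SOURCES = {
    "caixa_forum_madrid": [
        "https://example.org/architecture/caixa-forum",
        "https://example.org/reviews/caixa-forum",
    ],
}

def recognize(image_features):
    """Stand-in for a pattern-matching query over the field of vision.

    Note it can only return a *class* identity ("the Caixa Forum",
    "a Helmut Lang jacket"), never a specific instance -- exactly the
    limit discussed in the interview.
    """
    if "glass_facade" in image_features:
        return "caixa_forum_madrid"
    return None

def annotate(image_features):
    """Return the overlay data for whatever is recognized, if anything."""
    identity = recognize(image_features)
    if identity is None:
        return []  # nothing recognized: no overlay
    return INFO_SOURCES.get(identity, [])
```

<p>The point of the sketch is that no tag or transmitter appears anywhere: the object contributes nothing but its appearance, and all the information lives on the network side of the table.</p>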
<p><strong>Unless I&#8217;ve missed something, you don&#8217;t, then, get the distinction between classes of objects and instances of same. Sometimes, when there&#8217;s a 1:1 correlation between the two, that&#8217;s not going to matter: I&#8217;m walking down the street in Madrid, and my glasses or whatever can easily recognize that this building is the Caixa Forum. There&#8217;s only one of it, and I can get a positive ID via pattern recognition. But for some edge cases &#8211; twins and lookalikes, mostly &#8211; the same thing is generally true of people.</strong></p>
<p><strong>But other times it will matter. Is <em>this specific watch</em> a real, $10,000 Panerai or a $50 Kowloon fakery? How does <em>this</em> black 1998 Honda Civic over here differ from this other one in terms of its use and maintenance history? Does <em>this</em> O-ring gasket need to be replaced? I don&#8217;t see how you extract data from specific instances of things without the necessary sensor instrumentation, transmitter, etc., being coextensive with the object in question or very closely colocated with it over time &#8211; in the terminology you&#8217;re using, a &#8220;marker.&#8221;</strong></p>
<p><strong>So using these terms, I&#8217;d say that &#8220;markerless&#8221; AR comes first, is relatively easy to deploy, and generates not-insignificant value. But &#8211; again, unless I&#8217;m missing something &#8211; there are some things that it won&#8217;t ever be able to do, and for those things you need some provision for self-identification and self-location.</strong></p>
<p><strong>Ultimately I think it&#8217;s a distinction without a difference, from the user&#8217;s point of view. People will care much more about the source of whatever information shows up on their overlay than the precise technical means used to get it there.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smileuroncctv.jpg"><img class="alignnone size-full wp-image-3042" title="smileuroncctv" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/smileuroncctv.jpg" alt="smileuroncctv" width="394" height="500" /></a><br />
</strong></p>
<p><a href="http://flickr.com/photos/studies_and_observations/3274544108/" target="_blank"><em>The surrender to cynicism</em></a><em> &#8211; photo by Adam Greenfield</em></p>
<p><strong>TS:</strong> Much early thinking around ubicomp seems to have come from visionary architects and engineers, but recently I was at the <a href="http://www.toccon.com/toc2009" target="_blank">O&#8217;Reilly Tools of Change for Publishing Conference</a> (publishing in the Digital Age) and I met several book futurists. It struck me how ubicomp, seen from the perspective of the book, raises some interesting questions about how particular material cultures will shape, and be shaped by, ubicomp differently.</p>
<p>I noted that Google seemed well down the path to the holy grail of &#8220;converting images to original intent XML.&#8221; And <a id="ricl" title="Peter Brantley" href="http://radar.oreilly.com/peter/">Peter Brantley</a> talked about machine-parsed books.</p>
<p>At TOC there were many suggestions about how books might manifest as everyware. (Though it did seem that many people felt books have a special relationship to time and history, and will not soon vanish as one of the great metaphors of calm and solitary enjoyment in our culture.) Books as everyware will, it seems, include, amongst other things:</p>
<p>books that read books</p>
<p>books that read context</p>
<p>context that reads books</p>
<p>books that read me</p>
<p>books linked to mobility &#8211; timeliness and location independence</p>
<p>books that are not books</p>
<p>books becoming babble</p>
<p>books bubbling up from the babble</p>
<p>There is an Institute for the Future of the Book. Will all former material cultures require their own institutes of the future to guide their cultures into everyware? Do you think books&#8217; transition into everyware is especially significant, and why?</p>
<p><strong>AG: But all objects have a relationship to time and history, no?</strong></p>
<p><strong>TS: </strong>Yes! What I meant to convey, really, was the idea many people expressed at TOC: that books have a privileged relationship to knowledge in our culture, one that is valuable and tied to some aspects of their current form, and that books as everyware &#8211; e.g. machine-parsed books and more socially generated forms &#8211; would not replace that entirely.</p>
<p><strong>AG: Gotcha. Well, I certainly agree that books constitute an interesting category unto themselves &#8211; I&#8217;ve held onto my physical books, and in fact still spend a fortune buying new ones, where I stopped buying music on discs a long, long time ago. But I don&#8217;t think this state of affairs can or should obtain forever.</strong></p>
<p><strong>Lately there&#8217;s been a good amount of thought around the notion of &#8220;<a href="http://theunbook.com/about/">unbooks</a>,&#8221; which I regard as a container for long-form ideas appropriate to an internetworked age. By building on some of the tropes of software development, mostly having to do with version control, open-endedness and an explicit role for the &#8220;user&#8221; community, unbooks can usefully harness the dynamic and responsive nature of discourse on the Web. At the same time, you preserve the things books are really good at: coherence, authorial voice and intent.</strong></p>
<p><strong>The important part is in acknowledging two points which have usually been understood as contradictory, but which are actually nothing of the sort: firstly, that the expression of ideas in written form has something to learn from the practices that have evolved around the collaborative creation of dynamic, digital documents over the half-century-long history of software; and secondly, that certain ideas require elaboration in the reasonably strongly-bounded form we know as a &#8220;book,&#8221; and cannot meaningfully be shared otherwise. A third point, concomitant to the second, is that despite recent technical advances, screen-based media still cannot, and may not ever fully be able to, deliver the extratextual cues and phenomenological traces that support, inform and extend the meaning of written documents.</strong></p>
<p><strong>The unbook lets you have your cake and eat it too. So, for example, when we publish <em>The City Is Here</em>, one of its manifestations will be a static, physical document &#8211; and hopefully, if we do our jobs well, a very nice one indeed. But even before that, you&#8217;ll be able to download a Creative Commons-licensed PDF of every numbered version of the manuscript, from zero onward. Bottom line: you buy the book if, and only if, you want the object. The ideas are free.</strong></p>
<p><strong>TS: </strong><em><a id="ed35" title="David Brin" href="http://www.davidbrin.com/tschp1.html">David Brin</a> sees two futures: 1) the government watches everybody, and 2) everybody watches everybody (the latter he calls &#8220;sousveillance&#8221;). My friend <a id="suag" title="Ben Goertzel" href="http://www.goertzel.org/">Ben Goertzel</a> says, &#8220;hooking AI up to a massive datastore fed by ubicomp is the first step toward sousveillance.&#8221; What do you think the role of AI in ubicomp will be? Is it worth thinking about what the first important &#8220;AI meets AR&#8221; app is?</em></p>
<p><strong>AG: I don&#8217;t believe that artificial intelligence as the term is generally understood &#8211; which is to say, a self-aware, general-purpose intelligence of human capacity or greater &#8211; is likely to appear within my lifetime, or for a comfortably long time thereafter.</strong></p>
<p><strong>Having said that, your friend Ben seems to be making the titanic (and enormously difficult to justify) assumption that a self-aware artificial intelligence would share any perspectives, goals, priorities or values whatsoever with the human species, let alone with that fraction of the human species that could use a little help in countering watchfulness from above. &#8220;Hooking [an] AI up to a massive datastore fed by ubicomp&#8221; sounds to me more like the first step toward enslavement&#8230;if not outright digestion.</strong></p>
<p><em><strong>Sousveillance</strong></em><strong> &#8211; the term is Steve Mann&#8217;s, originally &#8211; doesn&#8217;t imply &#8220;everybody watching everybody&#8221; to me, anyway, so much as a consciously political act of turning infrastructures of observation and control back on those specific institutions most used to employing same toward their own prerogatives. Think Rodney King, think Oscar Grant.</strong></p>
<p><strong>TS: </strong>I have one last question, from Usman Haque.</p>
<p><strong>Usman Haque:</strong> Insofar as a lot of what Adam describes as desirable could be said to constitute pretty radical socio-political change (or perhaps&#8230; &#8220;adjustment&#8221;), I would be really interested to know how his current work at Nokia is or isn&#8217;t able to gel with the themes of his writing. In some senses there&#8217;s quite an undercurrent strongly challenging corporate practices; in other senses it could be seen as gentle nudges. How does Adam see it? And how about the Nokia behemoth? Does he have success nudging Nokia towards the kind of world he would like to see? (I imagine the answer is &#8216;yes&#8217;, otherwise he wouldn&#8217;t be doing it&#8230;) But I&#8217;d love to know more about the limits/challenges.</p>
<p><strong>AG: I am told that Henry Kissinger, on his first trip to China in 1971, asked Zhou Enlai whether he thought the French Revolution had or had not advanced the cause of human freedom.<br />
Zhou thought for a moment, pursed his lips, and replied, &#8220;It is too soon to tell.&#8221;</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/feed/</wfw:commentRss>
		<slash:comments>19</slash:comments>
		</item>
	</channel>
</rss>
