<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; augmented reality search</title>
	<atom:link href="http://www.ugotrade.com/tag/augmented-reality-search/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Real Time Big Data at Strata 2011: Ambient Findability, Social Search, GeoMessaging, Augmented Data, and New Interfaces</title>
		<link>http://www.ugotrade.com/2011/01/20/real-time-big-data-at-strata-2011-ambient-findability-geomessaging-augmented-data-and-new-interfaces/</link>
		<comments>http://www.ugotrade.com/2011/01/20/real-time-big-data-at-strata-2011-ambient-findability-geomessaging-augmented-data-and-new-interfaces/#comments</comments>
		<pubDate>Thu, 20 Jan 2011 22:48:12 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[New Interfaces]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Alistair Croll]]></category>
		<category><![CDATA[Ambient Findability]]></category>
		<category><![CDATA[Android Tasker]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[attention data]]></category>
		<category><![CDATA[augmented data]]></category>
		<category><![CDATA[augmented reality ecosystem]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[BackType]]></category>
		<category><![CDATA[big data]]></category>
		<category><![CDATA[Big data and new interfaces]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Cassandra]]></category>
		<category><![CDATA[Collecta]]></category>
		<category><![CDATA[content-shifting]]></category>
		<category><![CDATA[curating big data]]></category>
		<category><![CDATA[Data Engineering]]></category>
		<category><![CDATA[data privacy]]></category>
		<category><![CDATA[digital divide]]></category>
		<category><![CDATA[distributed computing]]></category>
		<category><![CDATA[Edd Dumbill]]></category>
		<category><![CDATA[Factual]]></category>
		<category><![CDATA[future of work]]></category>
		<category><![CDATA[geo]]></category>
		<category><![CDATA[geo social aware discovery]]></category>
		<category><![CDATA[geo-search]]></category>
		<category><![CDATA[geodata]]></category>
		<category><![CDATA[geolocation]]></category>
		<category><![CDATA[Geoloqi]]></category>
		<category><![CDATA[GeoMessaging]]></category>
		<category><![CDATA[geosearch]]></category>
		<category><![CDATA[gestural interfaces]]></category>
		<category><![CDATA[Gov2.0.]]></category>
		<category><![CDATA[HBase]]></category>
		<category><![CDATA[Hive]]></category>
		<category><![CDATA[key data trends]]></category>
		<category><![CDATA[linked data]]></category>
		<category><![CDATA[location data]]></category>
		<category><![CDATA[Maneko Neki]]></category>
		<category><![CDATA[MapReduce]]></category>
		<category><![CDATA[mapufacture]]></category>
		<category><![CDATA[Mesos]]></category>
		<category><![CDATA[Michal Avny]]></category>
		<category><![CDATA[mobile local interactions]]></category>
		<category><![CDATA[MongoDB]]></category>
		<category><![CDATA[My6sense]]></category>
		<category><![CDATA[neogeography]]></category>
		<category><![CDATA[NoSQL]]></category>
		<category><![CDATA[OpenGeo]]></category>
		<category><![CDATA[OpenGov]]></category>
		<category><![CDATA[P2P cloud computing]]></category>
		<category><![CDATA[pervasive computing]]></category>
		<category><![CDATA[Q&A]]></category>
		<category><![CDATA[Q&A ecosystems]]></category>
		<category><![CDATA[Q&A platforms]]></category>
		<category><![CDATA[Q&A The New Search Insurgents]]></category>
		<category><![CDATA[Quora]]></category>
		<category><![CDATA[RabbitMQ]]></category>
		<category><![CDATA[real time data analytics]]></category>
		<category><![CDATA[real time data in mobile development]]></category>
		<category><![CDATA[real time search]]></category>
		<category><![CDATA[real time search engines]]></category>
		<category><![CDATA[real time social discovery]]></category>
		<category><![CDATA[semantic web]]></category>
		<category><![CDATA[Simple Geo]]></category>
		<category><![CDATA[social graph]]></category>
		<category><![CDATA[social search]]></category>
		<category><![CDATA[social web]]></category>
		<category><![CDATA[Sophia Parafina]]></category>
		<category><![CDATA[Strata 2011]]></category>
		<category><![CDATA[Swift River]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[Topsy]]></category>
		<category><![CDATA[Web 2.0 Summit]]></category>
		<category><![CDATA[Who owns your data?]]></category>
		<category><![CDATA[XMPP]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=6025</guid>
		<description><![CDATA[We are in the age of unearthing and uncovering data, and only just at the beginning of the age of processing data and dealing with it (see my interview with Anselm Hook, Part 2 upcoming). O&#8217;Reilly&#8217;s Strata Conference 2011 will explore &#8220;the change brought to technology and business by data science, pervasive computing, and new [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/noisedderived31.jpg"><img class="alignnone size-medium wp-image-6034" title="noisedderived3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/noisedderived31-300x163.jpg" alt="" width="300" height="163" /></a></p>
<p>We are in the age of unearthing and uncovering data, and only just at the beginning of the age of processing data and dealing with it (see my interview with <a href="http://www.hook.org/" target="_blank">Anselm Hook</a>, Part 2 upcoming). <a href="http://strataconf.com/strata2011" target="_blank">O&#8217;Reilly&#8217;s Strata Conference 2011</a> will explore &#8220;the change brought to technology and business by data science, pervasive computing, and new interfaces.&#8221; It is, perhaps, one of the most important events of 2011.</p>
<p>Data is driving a revolution much as coal, oil, and steel powered the industrial revolution. And the world-changing insight from Karl Marx that &#8220;the industrial revolution polarized the world into two groups: those who own the means of production and those who work on them&#8221; is taking on new life, as <a href="http://twitter.com/#!/acroll" target="_blank">Alistair Croll</a>, co-chair of <a href="http://strataconf.com/strata2011" target="_blank">Strata 2011</a>, points out in his post <a href="http://mashable.com/2011/01/12/data-ownership/" target="_blank">&#8220;Who Owns Your Data?&#8221;</a></p>
<p><strong>&#8220;The important question isn&#8217;t who owns the data. Ultimately, we all do. A better question is, who owns the means of analysis? Because that&#8217;s how, as Brand suggests, you get the right information in the right place. The digital divide isn&#8217;t about who owns data &#8211; it&#8217;s about who can put that data to work.&#8221;</strong></p>
<p>Strata is where a vanguard will meet, not only to discuss this revolution&#8217;s futures, but to define how to create, handle, and build the platforms and experiences that will harness the data. My flight is booked! (Also check out <a href="http://www.bigdatacamp.org/">BigDataCamp</a>, which takes place the night before <a title="Strata Conference" href="https://en.oreilly.com/strata2011/public/regwith/str11dnaff" target="_blank">Strata</a>.)</p>
<p>The picture opening this post is from Michael EdgeCumbe&#8217;s <a href="http://garden.neocyde.net/thoughts/2010/12/fall-2010-itp-winter-show-project/">Fall 2010: ITP Winter Show Project</a>, a project exploring ways to intuitively get a feel for what is going on in big data sets using &#8220;the gestural manipulation and stereoscopic visualization of complex data to create a meditative state for data analysis.&#8221; Michael&#8217;s project will be part of the <a href="http://strataconf.com/strata2011/public/schedule/detail/17840" target="_blank">Science Fair at Strata</a>. For more on Michael&#8217;s work see <a href="http://www.neocyde.net/derive/2010/12" target="_blank">Noise Derived</a>. I also have a number of the <a href="http://strataconf.com/strata2011/public/schedule/topic/595" target="_blank">interesting new interface sessions</a> at Strata in my schedule.</p>
<p>The daily <a href="http://radar.oreilly.com/2010/12/write-your-own-visualizations.html" target="_blank">Strata Gems</a> on O&#8217;Reilly Radar are a great place to get a gestalt of some of the Strata themes, and <a href="http://radar.oreilly.com/2010/12/strata-gems-three-key-data-trends-for-2011.html" target="_blank">this post</a> by <a href="http://strataconf.com/strata2011/profile/1" target="_blank">Edd Dumbill</a>, program chair for Strata, <a href="http://radar.oreilly.com/m/2010/12/strata-gems-three-key-data-trends-for-2011.html" target="_blank">Three key data trends for 2011</a>, looks at the year ahead. This week, I got the chance to ask Edd a few of the questions that I will have on my mind at Strata &#8211; see his responses below.</p>
<p>If you have been reading Ugotrade, you will know I am interested in our mobile social augmented futures and there is no question in my mind that these will be unleashed by our new capacities to work with data (see <a href="http://www.ugotrade.com/2010/10/31/tim-o%E2%80%99reilly%E2%80%99s-four-cylinder-innovation-engine-the-missing-manual-for-the-future/" target="_blank">my post here</a>).</p>
<h3>Data is the how.</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/backtypediagram.png"><img class="alignnone size-medium wp-image-6045" title="backtypediagram" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/backtypediagram-210x300.png" alt="" width="210" height="300" /></a></p>
<p><em>The pic above is from <a href="http://www.readwriteweb.com/hack/2011/01/secrets-of-backtypes-data-engineers.php" target="_blank">&#8220;Secrets of BackType&#8217;s Data Engineers.&#8221;</a> This post on ReadWriteHack by <a href="http://twitter.com/petewarden">Pete Warden</a>, an ex-Apple engineer and founder of <a href="http://www.openheatmap.com/">OpenHeatMap</a>, really lives up to its title. Check it out if you want to know how <strong>&#8220;three guys (the <a title="opens in new window" href="http://backtype.com/" target="_blank">BackType</a> team) with only seed funding process a hundred million messages a day?&#8221;</strong></em></p>
<p>When I asked on Quora, &#8220;<a href="http://www.quora.com/What-will-be-the-most-important-developments-in-augmented-reality-in-2011" target="_blank">What will be the most important developments for Augmented Reality in 2011?</a>&#8221; <a href="https://sites.google.com/site/michalavny/" target="_blank">Michal Avny</a>, Strategist &amp; Real Time Search expert, wrote:</p>
<p><strong>&#8220;AR strongly relies on localized personalized real time information.</strong></p>
<p><strong>Having a stream of tweets based on keyword search, location or circle of friends doesn&#8217;t really make the AR experience; it is the processed real time relevant information that will make AR useful and intensify the experience.</strong></p>
<p><strong>In 2011 Real Time search and Social Search will drastically change to provide the infrastructure required.&#8221;</strong></p>
<p>I followed up on Michal&#8217;s Quora answer with some more questions &#8211; see below in this post.</p>
<p>Also note <a href="http://www.quora.com/What-will-be-the-most-important-developments-in-augmented-reality-in-2011" target="_blank">the response</a> from <a href="http://research.microsoft.com/en-us/people/dmolnar/" target="_blank">David Molnar</a>; here is an excerpt:</p>
<p><strong>&#8220;2. A wave of actionable, important data APIs opened up, enabling useful non-gimmicky AR apps for the first time. Think geoloqi.com , or the work Max Ogden has done with Portland civic data. Plus of course <a href="http://face.com/" target="_blank">face.com</a> , email providers and calendar providers, etc.&#8221;</strong></p>
<p><a href="http://strataconf.com/strata2011/public/schedule/speaker/100889" target="_blank">Amber Case</a>, one of the founders of <a href="http://geoloqi.com/" target="_blank">Geoloqi</a>, is on the programming committee of Strata and will be speaking.  Be sure to catch her session! <a href="http://strataconf.com/strata2011/public/schedule/detail/17748" target="_blank">Posthumans, Big Data and New Interfaces,</a> and if you haven&#8217;t already seen it, <a href="http://www.ted.com/talks/amber_case_we_are_all_cyborgs_now.html" target="_blank">Amber&#8217;s TED talk</a> is a must see.</p>
<p>Geographic proximity is a powerful filter, as are route and time. But clearly social proximity, social relevance, and shared tastes are also key dimensions for location-based experiences (see my convo with Schuyler of <a href="http://simplegeo.com/" target="_blank">Simple Geo</a>, upcoming).</p>
<p>While the whole business of location-based search and curation of augmented mobile social experiences is still, for the most part, uncharted terrain, the danger of key points of control being accessible only to elite players looms large. I asked <a href="http://www.youtube.com/watch?v=C2HcWlu1BS4" target="_blank">Sophia Parafina</a>, a pioneer in the open geo space, for some thoughts on real-time local/geosearch and geomessaging, and the future of openness &amp; big data (see Sophia&#8217;s response below).</p>
<h3><a href="http://www.quora.com/Is-the-market-ready-yet-for-P2P-cloud-computing" target="_blank">Is the market ready yet for P2P cloud computing?</a></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/8a174_invisibles_bigbrother_1210.jpg"><img class="alignnone size-full wp-image-6048" title="8a174_invisibles_bigbrother_1210" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/8a174_invisibles_bigbrother_1210.jpg" alt="" width="150" height="150" /></a></p>
<p>This is another question I&#8217;m following: <a href="http://www.quora.com/Is-the-market-ready-yet-for-P2P-cloud-computing" target="_blank">Is the market ready yet for P2P cloud computing?</a> It is one of those questions that we seem to have been asking in various forms for a very long while now, but without a major shift in sight. The pic above is from <a title="Permanent link to The Cloud Made Open Source " href="http://www.readwriteweb.com/cloud/2010/12/open-source-invisible.php">The Cloud Made Open Source &#8220;Invisible&#8221; This Year</a>. But, perhaps, we are at the point when open P2P clouds will find a place in the market because of their potential importance in real time social search and discovery. <a href="http://distributedsearch.blogspot.com/" target="_blank">Borislav Agapiev</a>, search entrepreneur and founder of <a href="Vast.com" target="_blank">Vast.com</a>, writes on <a href="http://www.quora.com/Is-the-market-ready-yet-for-P2P-cloud-computing?q=p2p+for+a+non+centralized+infrastructure" target="_blank">Quora</a>:</p>
<p><strong>&#8220;I believe a P2P cloud is ideally suited for social &amp; real-time search and discovery.</strong></p>
<p><strong>Consider MapReduce, a very interesting and popular paradigm for distributed computing. MapReduce is very much about bringing computation to data, i.e. doing computation at nodes (map) and then aggregating results through the network (reduce).</strong></p>
<p><strong>It is very clear now that user attention data (what they click on) is very valuable for search and discovery, yet a centralized model relies upon uploading all that to a single location and then doing a supposed local MapReduce. Clearly, MapReduce could be done across the network, without any centralized uploads.</strong></p>
<p><strong>In addition to the efficiency argument raised here, it is even more important to consider privacy issues. Uploading massive amounts of user attention data to a centralized location is not something that is going to make users warm and fuzzy <img src="http://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /> as we are increasingly seeing.</strong></p>
<p><strong>In a P2P cloud, there is no big brother watching over anyone; all computation and data storage is done in the cloud, fragmented in many, many small encrypted pieces a la BitTorrent.&#8221;</strong></p>
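<p><em>To make Agapiev&#8217;s point concrete, here is a minimal sketch (my illustration, not his code) of the split he describes: the &#8220;map&#8221; counting runs on the node that owns the attention data, and only the small per-node summaries cross the network for the &#8220;reduce&#8221; step, so raw click logs never leave the node. The node logs and key names below are invented for the example.</em></p>

```python
# Each node runs "map" over its own attention data locally; only the
# compact aggregated results move across the network ("reduce").
from collections import Counter
from functools import reduce

# Hypothetical per-node attention logs (what users clicked on).
node_logs = [
    ["strata", "hadoop", "strata"],    # node 1
    ["geoloqi", "strata"],             # node 2
    ["hadoop", "hadoop", "geoloqi"],   # node 3
]

def map_phase(log):
    """Runs on the node that owns the data: purely local counting."""
    return Counter(log)

def reduce_phase(a, b):
    """Aggregates the small per-node summaries across the network."""
    return a + b

partials = [map_phase(log) for log in node_logs]    # computation at the data
totals = reduce(reduce_phase, partials, Counter())  # only summaries move
print(totals.most_common(2))
```

<em>The raw logs stay put; only `partials` (a few counters, not the click stream itself) would ever be transmitted.</em>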
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Screen-shot-2011-01-16-at-2.13.43-PM1.png"><img class="alignnone size-medium wp-image-6066" title="Screen shot 2011-01-16 at 2.13.43 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Screen-shot-2011-01-16-at-2.13.43-PM1-300x223.png" alt="" width="300" height="223" /></a><br />
</strong></p>
<p><em>Picture above from Brynn Marie Evans, <a href="http://brynnevans.com/blog/2010/03/17/it-takes-two-to-tango/">&#8220;It takes two to tango: review of my social search panel&#8221;</a></em></p>
<h3>The Delta of Now &#8211; Transforming Search into a Social Democratic Act</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/2538108030_d37d124e44.jpg"><img class="alignnone size-medium wp-image-6049" title="2538108030_d37d124e44" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/2538108030_d37d124e44-300x225.jpg" alt="" width="300" height="225" /></a></p>
<p><em>Picture of Maneki Neko &#8220;beckoning&#8221; cats from <a href="http://www.journeyetc.com/travel-ideas/famous-landmarks-of-cats-and-dogs-around-the-globe/">Journeyetc</a></em></p>
<p>New ecologies of human and machine intelligence are beginning to change basic social structures &#8211; see the <a href="http://www.youtube.com/watch?v=t1J2RXrvPek" target="_blank">Future of Work (Biewald and Chirayath Janah 2010)</a>. And projects like <a href="http://swift.ushahidi.com/" target="_blank">Swift River</a> use search and machine mining to filter streams on topics of interest that can then be curated by human beings. This may be extended to the curation of real-time data streams and to machine learning algorithms based upon these explicit relationships.</p>
<p>Augmented mobile social experiences are a new frontier in which ideas and practices from a number of fields collide, including: ambient findability (Morville 2005), urban psychogeography, narrative structures, ambient games and devices, 4d (time-space), explorations of place and memory, enchanted objects and people (Kuniavsky 2010), and designed animism (Laurel 2010), to mention just a few.</p>
<p>Mobile local interaction presents an opportunity to invert the search pyramid and to transform search into a social, democratic act (see my interview with Anselm Hook upcoming). Up until now search has been predicated on a very narrow revenue model. Google has an implicit model of a B2C &#8211; business to consumer &#8211; brokerage. We are only just beginning to get a glimpse of the disruptive potential of C2C &#8211; consumer to consumer &#8211; brokerages. Mobile local C2C brokerages that allow us to transact in a trustworthy way over our local geography in close to real time (Hook 2010) have the potential to enable new forms of social organization. Bruce Sterling&#8217;s short story about a networked gift economy, <a href="http://tqft.net/wiki/Maneki_Neko" target="_blank">Maneki Neko</a>, is a brilliant glimpse at the disruptive potential of such re-imaginings.</p>
<p>Augmented experiences that shift or change a person&#8217;s situated geolocal experience of social reality &#8211; changing our relationship to people and place by augmenting engagement in, and reputation through, socially driven consumer tie-ins and game dynamics, like <a href="http://foursquare.com/" target="_blank">Foursquare</a> &amp; <a href="http://gowalla.com/" target="_blank">Gowalla</a> &#8211; are beginning to emerge, as <a href="http://www.web2expo.com/webexny2010/public/schedule/detail/15446" target="_blank">Kati London pointed out in her excellent keynote at Web 2.0 Expo</a>. And while the integration of mobile local interaction and an augmented view that shifts our geolocal experience visually will involve creative solutions to some well-churned mobile tracking, mapping and registration challenges, the exploration and development of new dimensions through which we can filter and create trusted and meaningful augmented mobile social experiences is vital, whether you are considering a mobile screen, map, camera view, or futuristic HUDs and gestural interfaces.</p>
<h3>Talking with Edd Dumbill</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/edddumbill.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/edddumbillheadshot.png"><img class="alignnone size-full wp-image-6077" title="edddumbillheadshot" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/edddumbillheadshot.png" alt="" width="150" height="150" /></a><br />
Picture from <a href="http://people.oreilly.com/edd" target="_blank">O&#8217;Reilly Community.</a></p>
<p><strong>Tish Shute: </strong>First, congratulations on Strata! On the Strata homepage there is a quote from Jason Hoffman:</p>
<p><strong>&#8220;My gut feeling is that we&#8217;re going to look back at the upcoming Strata Conference like we do at the Web 2.0 Conference in 2004/2005.&#8221;<br />
&#8211;Jason Hoffman, CTO/Founder, Joyent, Inc.</strong></p>
<p>Why do you think Jason&#8217;s comparison might be prescient?</p>
<p><strong>Edd Dumbill: Web 2.0 is a development that ran through every brand that has a web presence and radically changed the way business is done for many companies and brands.</strong></p>
<p><strong>Strata will have a similar impact: every business has data, every business collects an increasing amount of data. This data is the new oil &#8211; a valuable raw material that when refined or combined creates value and opportunity.</strong></p>
<p><strong>Tish Shute:</strong> The rise of real time was one of your three key data trends for 2011. Hadoop is bringing the capacity to work with big data to more than just a few elite players. But the challenge is still real time. You mention we will be seeing a hybrid approach to real time and batch MapReduce processing. Will we hear more about these approaches to real time at Strata? And what do you see as the most important conversations on real time data analytics emerging at Strata?</p>
<p>You point out, &#8220;open source projects and cloud infrastructure means developers can evaluate and learn to love technologies without requiring support or approval from above.&#8221; What are the most exciting developments on the horizon for open source tools?</p>
<p><strong>Edd Dumbill: </strong><strong>Here are some projects worth watching, in the key areas of real time, cluster management and Hadoop.</strong></p>
<p><strong>* Cassandra and MongoDB &#8211; NoSQL databases that will prove vital for anybody with real time big data needs</strong></p>
<p><strong>* Mesos &#8211; a compute cluster management tool, modeled after that which powers Google</strong></p>
<p><strong>* Hadoop ecosystem&#8217;s continuing maturation, especially HBase and Hive.</strong></p>
<p><strong>Tish Shute: </strong>Do you think the market is ready for P2P cloud computing?</p>
<p><strong>Edd Dumbill: The market is emerging for decentralized and distributed cloud computing, and P2P technologies are one way of achieving that. The key trends will be moving computation nearer the data sets or nearer the point of user consumption of the result.</strong></p>
<p><strong>P2P is a difficult model for anybody wanting to commercialize a service, so I think it will tend to form part of a hybrid solution.</strong></p>
<p><strong>Tish Shute:</strong> We have seen enormous strides in our ability to work with giant unstructured databases recently. Do you think, perhaps, that the dream of a web of linked data &#8211; &#8220;a web of data that can be processed directly and indirectly by machines&#8221; &#8211; will be attained through brute force, i.e. through our ability to harness the power of massively parallel processing, as much as by Semantic Web approaches focused on machine-readable metadata? [Also see <a href="http://www.quora.com/Is-this-a-good-approach-www-dist-systems-bbn-com-people-krohloff-shard_overview-shtml-to-use-Hadoop-to-build-a-scalable-distributed-triple-store" target="_blank">my question on Quora</a>, &#8220;Is this a good approach (<a rel="nofollow" href="http://www.dist-systems.bbn.com/people/krohloff/shard_overview.shtml" target="_blank">www.dist-systems.bbn.com/people/&#8230;</a>) to use Hadoop to build a scalable, distributed triple store?&#8221;]</p>
<p><strong>Edd Dumbill: I&#8217;ve been an observer of the Semantic Web for over a decade, and I tend to believe that on the web, data means to you whatever meaning you give it as the consumer. With that model, the links are made by the consumer rather than sitting out there explicitly. Some links become de facto standards, and some very few become web standards.</strong></p>
<p><strong>I think the actuality will be a mix of both explicitly stated metadata and that which is inferred. The Semantic Web is a great framework for certain operations, especially interoperable exchange of metadata. A great many more private meanings, never intended to be shared, will be created by consuming software.</strong></p>
<p><strong>There&#8217;s no question that machines will learn how to process most of the Web. Furthermore, machines will learn how to process most of the physical world we&#8217;re in. And that by the end of this decade</strong>.</p>
<h3>Talking with Sophia Parafina</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/sophiawhere.jpg"><img class="alignnone size-medium wp-image-6062" title="sophiawhere" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/sophiawhere-300x250.jpg" alt="" width="300" height="250" /></a></p>
<p><em>Picture of Sophia at <a href="http://where2conf.com/where2011" target="_blank">Where 2.0</a> (<a href="http://www.flickr.com/photos/rich_gibson/2509114741/" target="_blank">photo</a>)</em></p>
<p><strong>Tish Shute:</strong> Sophia, you have worked in the trenches for a long time now to support the growth of open geo data. What do you hope to see emerge in 2011 in the field of geo-data?</p>
<p><strong>Sophia Parafina: Better support for displaying and handling location data across multiple apps. Fred Wilson <a href="http://www.avc.com/a_vc/2011/01/content-shifting.html?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+AVc+%28A+VC%29" target="_blank">recently blogged about content-shifting</a>, in which he talks about overcoming content silos across devices. We&#8217;ve worked very hard to reduce data silos via formats, but devices are creating their own silos. I would like to see a standard method for sending geo data and geo information to mobile devices.</strong></p>
<p><strong>Producing content for mobile is different from producing content for a computer browser. Web 2.0 produced a lot of infrastructure for browser-based interfaces, but on mobile devices that gap has been filled with apps, which is fragmenting how data is handled by various devices. What is even more interesting in the mobile space is that devices can push data back that contains location, user updates, photos and even sensor data. If mobile data standardizes, it could lead to browser-based applications and stem the continued fragmentation of the mobile application market.</strong></p>
<p><strong>Tish Shute:</strong> <a href="http://simplegeo.com/" target="_blank">Simple Geo</a> and<a href="http://www.factual.com/" target="_blank"> Factual</a> are startups emerging in the geodata space. What do you see on the horizon in terms of both the growth of business opportunities and an open geo data community?</p>
<p><strong>Sophia Parafina: In the near future I think we&#8217;ll see startups providing curated data + API, and in response we will also see companies that provide a single interface across multiple data providers. We saw this when everyone released a mapping API and companies such as <a href="http://mapufacture.com/">Mapufacture</a> provided a single interface across multiple APIs.</strong></p>
<p><strong>We will see a resurgence in data providers repackaging the 2010 US Census data in different ways to respond to market segments; some of this will be open data, but all of it will be provided through an API instead of a file. Additionally, we&#8217;ll see more data from outside the US.</strong></p>
<p><strong>Tish Shute:</strong> What are the biggest obstacles to having the open geodata sets available that we need to enable mobile local interactions and social augmented experiences?</p>
<p><strong>Sophia Parafina: Licensing for both crowd-sourced data and privately curated open data will become an issue. We recently saw VLC, the open source video player, pulled from the Apple app store because of licensing issues. Also, licensing of content by geography will be problematic, limiting searches by geographical location. In addition, how will licensing of data that is updated by crowd sourcing work?</strong></p>
<p><strong>Multiple APIs for accessing data sources. The current trend for each provider to create an API for their data sets will result in data silos &#8211; there needs to be a single sign-on equivalent for requesting data.</strong></p>
<p><strong>Size of data on the wire. The current models for delivering data are based on broadband connections. However, as mobiles increasingly become the way people use the web, the data needs to be sized accordingly. This also goes for mobile interfaces. Have you tried to shop on a mobile device, or buy a train or plane ticket? It&#8217;s frustrating and error prone. There is a large untapped market of people who only use the Internet on mobile devices.</strong></p>
<p><strong>Tish Shute</strong>: You pointed me to <a href="http://radar.oreilly.com/2010/12/strata-gems-diy-personal-sensi.html" target="_blank">this link in Strata Gems</a> re &#8220;an interesting and pertinent (also a competitor to GeoLoqi)&#8221; &#8211; <a href="http://tasker.dinglisch.net/" target="_blank">the Android Tasker app</a>. What do these emerging services bring to the table in terms of the next generation of location based services?</p>
<p><strong>Sophia Parafina: This app lets your device interact with the environment. I think that this is a great way of using the sensors on existing platforms to increase interaction and to implement ambient findability. The basic premise of Tasker is that some action happens in response to a trigger: an application event, time, date, location, or gesture. Tasker has defined 180 actions that can occur based on any number or combination of events. This can provide a basic vocabulary for interaction between the user and the device and, more importantly, between users. Tasker can also use Android script plugins, which lowers the bar to creating your own ambient application.</strong></p>
<p><strong>Programs such as Tasker can provide a way for people to interact with social networks beyond sending messages. People can use their mobile devices to interact with their surroundings without having to interact with the device.</strong></p>
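<p>The event-to-action premise described above (an event such as a location or time change firing one or more actions) can be sketched as a tiny rule engine. This is an illustrative sketch only; the class and method names are assumptions, not Tasker's actual plugin API.</p>

```python
# Tasker-style model: an event (location, time, gesture, ...) matched
# against registered rules fires that rule's actions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    event_type: str                    # e.g. "location", "time", "gesture"
    condition: Callable[[dict], bool]  # predicate over the event payload
    actions: list                      # callables to run when it matches

class RuleEngine:
    def __init__(self):
        self.rules: list[Rule] = []

    def add(self, rule: Rule):
        self.rules.append(rule)

    def dispatch(self, event_type: str, payload: dict):
        """Run the actions of every rule matching this event; collect results."""
        fired = []
        for rule in self.rules:
            if rule.event_type == event_type and rule.condition(payload):
                for action in rule.actions:
                    fired.append(action(payload))
        return fired

# Example rule: silence the ringer when arriving at "work".
engine = RuleEngine()
engine.add(Rule(
    event_type="location",
    condition=lambda p: p.get("place") == "work",
    actions=[lambda p: "ringer muted"],
))
print(engine.dispatch("location", {"place": "work"}))  # ['ringer muted']
```

The point of the sketch is the vocabulary: a small, fixed set of actions combined with user-defined conditions, so interaction logic lives in rules rather than in the apps themselves.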
<p><strong>Tish Shute:</strong> We have had many conversations about emerging ideas of geo-search, geo-messaging and geo-fencing. What are the most interesting developments in these areas and what do you see on the horizon for 2011?</p>
<p><strong>Sophia Parafina: The map will fade into the background and become less important. Display of information will be context aware, and that includes location. For example, let&#8217;s say I make a grocery list; when I&#8217;m at the grocery store, the list will just pop up without the need for me to find the app that has the list. Or reminders or offers will pop up when you are near a place at a certain time. Say you need to buy a present for a child&#8217;s birthday party: you could send out a request that you are looking for an item, and retailers could offer &#8220;on the spot&#8221; discounts if you are in the area.</strong></p>
<p><strong>Geo-search, geo-messaging, and geo-fencing are geared towards mobile devices, so I expect to see them soon as part of apps. Building generic applications that implement geo* will fail because that sort of information is useful only within a context. Geo* apps are solutions looking for a problem. The killer mobile app will use these functions transparently to reduce the cognitive load of the user, who is busy moving around in the world.</strong></p>
<p><strong>User data gathered from multiple web applications will be consolidated into profiles used by context aware applications. For example, there could be a service that matches prices of items you have shopped for on the web: the service would have access to your cookies and know your favorite retailers, the things you have shopped for, and your location and activity patterns (when you are at home, at work, or at a restaurant). When you are in the vicinity of a brick and mortar retailer with the same or similar items, the service can send you an alert to match the price of the item you found online. So your digital life will become more closely linked with your day to day activities.</strong></p>
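<p>The geo-fencing trigger Sophia describes (a grocery list that pops up near the store) reduces to a proximity check against stored fences. The sketch below is illustrative; the fence radius, coordinates, and function names are assumptions, not any shipping service's API.</p>

```python
# Fire a reminder when the user's position falls inside a circular
# fence around a saved place.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def triggered_reminders(position, fences, radius_m=150):
    """Return reminders whose fence contains the current position."""
    lat, lon = position
    return [f["reminder"] for f in fences
            if haversine_m(lat, lon, f["lat"], f["lon"]) <= radius_m]

# One saved fence: the grocery store, with an attached reminder.
fences = [{"lat": 40.7411, "lon": -73.9897, "reminder": "grocery list"}]
print(triggered_reminders((40.7412, -73.9898), fences))  # ['grocery list']
```

In practice a mobile OS would deliver the position updates and the radius would be tuned per place, but the core "context pops up the right data" logic is this small.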
<h3>Talking with Michal Avny</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Michal_Pic.jpg"><img class="alignnone size-medium wp-image-6059" title="Michal_Pic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Michal_Pic-300x275.jpg" alt="" width="300" height="275" /></a></p>
<p><strong>Tish Shute: </strong>At <a href="http://www.web2summit.com/web2010" target="_blank"> Web 2.0 Summit</a>, one of the highlights for me was the, <a href="http://www.web2summit.com/web2010/public/schedule/detail/17101" target="_blank">Q&amp;A:The New Search Insurgents</a> lunch where Charlie Cheever of <a href="http://www.quora.com/" target="_blank">Quora</a>, IMO, stole the show. I tweeted:</p>
<p><em>&#8220;One of my takeaways from #w2s is that #quora points to future of augmented mobile social experiences &#8211; a search filter for experience! #AR&#8221;</em></p>
<p>In your view what are the biggest challenges for location Q&amp;A to emerge as a search filter for location based experiences?</p>
<p><strong>Michal Avny: The biggest location Q&amp;A challenges yet to be conquered are immediacy (real time dynamic data), relevancy (strong personalized filters) and user experience (simplified interface).</strong></p>
<p><strong>Location Q&amp;A enables different use cases.  The most prominent are Follow (follow places, topics and friends to learn about a location), Interact (meet new people based on common interests), Plan ahead (plan a trip, night out or a shopping day by asking and searching for local information) and On-site (check for recommendations, friends, deals, events and traffic nearby).</strong></p>
<p><strong>Unlike Follow, Interact, and Plan ahead, which can be added to existing Q&amp;A platforms (such as Quora) by attending to location specifics since they share similar characteristics, the on-site mode introduces a completely different experience; first and foremost, it requires immediate attention.  It is real time based and the nature of the data is dynamic.  Traffic updates, current events, nearby friends: all of that changes constantly.  Posting a location question on-site implies the response should be in real time (e.g. best kid friendly restaurant); the normal Q&amp;A response latency wouldn&#8217;t work.</strong></p>
<p><strong>Strong relevancy filters are required to accommodate the overwhelming flood of information.  Moreover, some of the data should be filtered by user behavior and preferences: check-in notifications (type of relation), restaurant recommendations (type of food, price level, etc.), shopping deals (commercial categories), and more.</strong></p>
<p><strong>The mobile experience requires ease of use and simplicity.  A new Q&amp;A interface and query language for posting questions should be defined, as well as a coherent, summarized response interface.  A user on the go should not have to post lengthy questions, browse through tens of results, or search for the right service, but should instead use a simple, intuitive tool.</strong></p>
<p><strong>Tish Shute: </strong>Real-time location based search is in its infancy.  Real time questions can be answered using different services such as Yelp, TripAdvisor, <a href="http://www.waze.com/homepage/" target="_blank">Waze,</a> <a href="http://foursquare.com/" target="_blank">Foursquare</a>, IMDb and more.  But what are the challenges to moving forward with aggregating these sources, and then on to &#8220;locals&#8221; who are able to process and deal with vast amounts of information?</p>
<p><strong>Michal Avny: Using some of the leading location services to answer questions is sufficient to start with.</strong></p>
<p><strong>In order to provide broad, worldwide coverage and reliable information, aggregation of the different services is required, for instance to normalize product and service rankings, aggregate classifieds, and more. This is quite challenging as there is no single standard available.</strong></p>
<p><strong>When the location Q&amp;A user base is big enough, I foresee a tendency to rely more on &#8216;locals&#8217; input as the base of information.   As the platform grows, communities will be formed with different cultures, relationships, and trust levels, making the information more valuable and customizable.  Some of the challenges I already mentioned are implementing filters, query language, and interfaces to enable using the vast amounts of real time data in a mobile environment.  More of the challenges lying ahead involve integrating the &#8216;locals&#8217; data with location based services, as they are integral components of the Q&amp;A ecosystem.   Merging trust levels and relationships while adhering to different privacy guidelines is a challenge yet to be explored. (This should be discussed in more detail under the protocols topic.)</strong></p>
<p><strong>It is quite evident that Quora is now facing growing pains and is struggling to maintain its character.  As with Quora, it will also be a challenge to support and maintain the ecosystem while allowing for massive scale-up.</strong></p>
<p><strong>Tish Shute:</strong> I have been very interested in exploring protocols that will be enablers of micro local interaction and mobile social interaction for AR &#8211; particularly the XMPP extensions and operational transform work of Google Wave (now <a href="http://incubator.apache.org/projects/wave.html" target="_blank">Apache Wave</a>), and PubSub protocols like <a href="http://code.google.com/p/pubsubhubbub/" target="_blank">PubSubHubbub</a> and the Erlang based <a href="http://www.rabbitmq.com/" target="_blank">RabbitMQ</a>.  We are beginning to see protocols emerging that could enable new real time local services.  What do you think are some of the most valuable use cases for &#8220;locals&#8221; that this new generation of real time protocols can enable?</p>
<p><strong>Michal Avny: AR is about interacting with digital information; the AR ecosystem is composed of layers and components such as devices, platforms, browsers, applications, and content.  For the different components to interact, new protocols, security guidelines, and privacy policies must be in place.  A standard will enable local vendors and service providers to publish specials, deals, updates, and events for any application to broadcast; allow people and places to be identified by proximity (without having to use the same application or device); let local recommendations be shared across services; let devices interact with one another; give location based platforms, such as Q&amp;A, access to a vast breadth of information; give geo aware devices a consistent experience globally; and much more.</strong></p>
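<p>The publish/subscribe pattern underlying protocols like PubSubHubbub and RabbitMQ can be sketched in a few lines: publishers push to a topic, and every subscriber to that topic is notified. This is an illustrative in-memory sketch, not any of these systems' actual APIs; the topic names and message shape are assumptions. Real systems add network transport, persistence, and delivery guarantees.</p>

```python
# Minimal topic-based publish/subscribe broker.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a callback to be pushed every message on this topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Push the message to every callback registered for the topic.
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
received = []
# A "local deals" subscriber for a (hypothetical) neighborhood topic.
broker.subscribe("deals/soho", received.append)
broker.publish("deals/soho", {"vendor": "cafe", "offer": "2-for-1"})
print(received)  # [{'vendor': 'cafe', 'offer': '2-for-1'}]
```

The design choice that matters for the "locals" use cases is the decoupling: a vendor publishing a deal never needs to know which apps or devices are listening, which is exactly what a cross-application standard would provide.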
<p><strong>Tish Shute:</strong> What do you think are the biggest challenges to going mainstream for this emerging field of real time social discovery?</p>
<p><strong>Michal Avny: The biggest challenge is building towards real time, geo-aware, localized, personalized ambient data.   Discovery is in its infancy: location based social Best, Top, and Trending lists with some basic filtering options are available, and this is great as people are getting accustomed to information surrounding them.  To some degree it can intensify the AR experience, for instance suggesting the most popular dish in a restaurant or mapping the best coffee shops nearby, but it is customized at best by friend recommendations and depends on the coverage and breadth of the specific discovery service.</strong></p>
<p><strong>There is a need for the next generation of discovery: customized, geo social aware discovery that filters the vast amount of real time data by learning user preferences and behavior (built on top of the much needed local social real time open protocol).</strong></p>
<p><strong>Tish Shute:</strong> Who are your favorite startups/upstarts in the field of real time search and why?</p>
<p><strong>Michal Avny: <a href="http://www.my6sense.com/" target="_blank">My6Sense </a>- My6sense provides a sharper and better way to experience information from the feeds you subscribe to (social networks, news, RSS feeds, etc.).  It&#8217;s personal &#8211; content is ranked based on what&#8217;s relevant to you. It learns what&#8217;s valuable to you by translating your consumption behavior into a personalized ranking function.<br />
My6Sense &#8211; because it is a personalized prediction filter, a critical foundation for AR</strong></p>
<p><strong><a href="http://topsy.com/" target="_blank">Topsy</a> &#8211; Topsy is realtime search powered by the social web that finds the most relevant conversations happening online. The site&#8217;s underlying technology examines popular links as well as the influence of each person citing a link. Topsy augments traditional search engines by finding information that people are talking about.<br />
Topsy &#8211; because its ranking is based on retweets and influencers, a great social experience</strong></p>
<p><strong><a href="http://collecta.com/" target="_blank">Collecta</a> &#8211; Collecta is a real-time search engine for the social web. It monitors the update streams of popular realtime blogs and sites like Twitter, WordPress, and Flickr, and shows results as they happen. Results can be filtered by status updates, comments, stories, or photos. The entire engine is built around the XMPP standard, which pushes out data on a continual basis, so that for every search you end up watching a stream that keeps updating itself.<br />
Collecta &#8211; because it is built around XMPP, a real time experience</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2011/01/20/real-time-big-data-at-strata-2011-ambient-findability-geomessaging-augmented-data-and-new-interfaces/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Platforms for Growth and Points of Control for Augmented Reality: Talking with Chris Arkenberg</title>
		<link>http://www.ugotrade.com/2010/10/27/platforms-for-growth-and-points-of-control-for-augmented-reality-talking-with-chris-arkenberg/</link>
		<comments>http://www.ugotrade.com/2010/10/27/platforms-for-growth-and-points-of-control-for-augmented-reality-talking-with-chris-arkenberg/#comments</comments>
		<pubDate>Wed, 27 Oct 2010 09:14:49 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[AR and html 5]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR eyewear for smart phones]]></category>
		<category><![CDATA[ardevcamp]]></category>
		<category><![CDATA[arduino]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[augmented foraging]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality eyewear]]></category>
		<category><![CDATA[augmented reality on tablets]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[cloud computing and AR]]></category>
		<category><![CDATA[EarthMine]]></category>
		<category><![CDATA[gartner hype cycle]]></category>
		<category><![CDATA[Gary Hayes]]></category>
		<category><![CDATA[John Battelle]]></category>
		<category><![CDATA[Kevin Slavin]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[Metaio]]></category>
		<category><![CDATA[Mobile AR]]></category>
		<category><![CDATA[mobile social augmented reality]]></category>
		<category><![CDATA[MUVEdesign]]></category>
		<category><![CDATA[NVidia augmented reality demo]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[Platforms for Growth]]></category>
		<category><![CDATA[Points of Control Map]]></category>
		<category><![CDATA[Porthole]]></category>
		<category><![CDATA[QR codes]]></category>
		<category><![CDATA[Qualcomm SDK for AR]]></category>
		<category><![CDATA[real time analytics and AR]]></category>
		<category><![CDATA[RFID]]></category>
		<category><![CDATA[Simple Geo]]></category>
		<category><![CDATA[The Battle for the Internet Economy]]></category>
		<category><![CDATA[Tim O'Reilly]]></category>
		<category><![CDATA[Total Immersion]]></category>
		<category><![CDATA[transmedia story telling]]></category>
		<category><![CDATA[trasmedia]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[Ushahidi]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[vision based AR]]></category>
		<category><![CDATA[W3C group on augmented reality]]></category>
		<category><![CDATA[Wave in a Box]]></category>
		<category><![CDATA[Web 2.0 Expo]]></category>
		<category><![CDATA[web standards based browser for AR]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5924</guid>
		<description><![CDATA[The Points of Control map is interactive, so please click here or on the image above for the full experience. Today at 4pm EST, 1pm PDT John Battelle and Tim O&#8217;Reilly will discuss the Points of Control map and The Battle for the Internet Economy in a Free Webcast: &#8220;More than any time in the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://map.web2summit.com/"><img class="alignnone size-medium wp-image-5931" title="Screen shot 2010-10-27 at 1.56.15 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-1.56.15-AM-300x181.png" alt="Screen shot 2010-10-27 at 1.56.15 AM" width="300" height="181" /></a></p>
<p><em>The Points of Control map is interactive, so please <a href="http://map.web2summit.com/" target="_blank">click here </a>or on the image above for the full experience.</em></p>
<p>Today at 4pm EST, 1pm PDT, John Battelle and Tim O&#8217;Reilly will discuss the <a href="http://map.web2summit.com/" target="_blank">Points of Control</a> map and The Battle for the Internet Economy <a href="http://oreilly.com/emails/poc_web2summit-webcast-prg.html" target="_blank">in a Free Webcast</a>:</p>
<p><strong>&#8220;More than any time in the history of the Web, incumbents in the network  economy are consolidating their power and staking new claims to key  points of control. It&#8217;s clear that the internet industry has moved into a  battle to dominate the Internet Economy.</strong></p>
<p><strong>John Battelle and Tim O&#8217;Reilly will debate and discuss these shifting  points of control as the board becomes increasingly crowded. They&#8217;ll map  critical inflection points and identify key players who are clashing to  control services and infrastructure as they attempt to expand their  territories. They&#8217;ll also explore the effect these chokepoints could  have on people, government, and the future of technology innovation.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-2.01.38-AM.png"><img class="alignnone size-medium wp-image-5932" title="Screen shot 2010-10-27 at 2.01.38 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-2.01.38-AM-300x124.png" alt="Screen shot 2010-10-27 at 2.01.38 AM" width="300" height="124" /></a></p>
<p>I&#8217;ve been wanting to start a discussion on the <a href="http://map.web2summit.com/">Points of Control map</a> in the Augmented Reality community for a while now, and Chris&#8217; recent post on <a href="http://www.gartner.com/it/page.jsp?id=1447613" target="_blank">the latest edition of the Gartner Hype Cycle</a>, <a href="http://www.urbeingrecorded.com/news/2010/10/13/is-ar-ready-for-the-trough-of-disillusionment/" target="_blank">&#8220;Is AR Ready for the Trough of Disillusionment?&#8221;</a>, this post by Mac Slocum, <a href="http://radar.oreilly.com/2010/10/two-ways-augmented-reality-app.html" target="_blank">&#8220;How Augmented Reality Apps Can Catch On,&#8221;</a> and the conversation in the comments between Mac, Raimo (one of the founders of <a href="http://www.layar.com/" target="_blank">Layar</a>), and Chris all prompted me to get a conversation started&#8230; (see below for all that followed!).  Chris put me on the hot seat back in June when he did <a href="http://www.boingboing.net/2010/06/17/tish-shute---augment.html" target="_blank">this very generous interview with me on Boing Boing</a>, so it was time to turn the tables.</p>
<p>Tim O&#8217;Reilly, in his <a href="http://www.youtube.com/watch?v=3637xFBvkYg&amp;p=6F97A6F4BA797FB3" target="_blank">keynote for Web 2.0 Expo,</a> pointed out there is both a fun and a dark side to the Points of Control map.  There are companies on this map, he noted, that rather than &#8220;growing the pie&#8221; are trying to divide up the pie, and they are forgetting to think about creating a sustainable ecosystem. I expect the conversation between Tim O&#8217;Reilly and John Battelle to dig deep into this Battle for the Internet Economy.  If, like me, you have another engagement at the time of the webcast, you can register on the site to receive the recording.</p>
<p>AR is still too young to figure in the battles of the giants, but there will be a lot to be learned from this conversation.  And the Points of Control map is good to think with from the POV of AR in many ways.  As Chris Arkenberg observed:</p>
<p><strong>&#8220;When I look at this map, the points of control map, it&#8217;s really interesting to me, because what it says to me with respect to AR is each of these little regions that they have drawn out would be a great research project. So every single one of these should be instructive to AR.</strong></p>
<p><strong>In other words, we should be able to look at social networks, the land of search, or the kingdom of ecommerce, and apply some very rigorous critical thinking to say, &#8220;How would AR add to this engagement, this experience of gaming, or ecommerce, or content?&#8221;</strong></p>
<p><strong>Looking at each of these individually and really meticulously saying, &#8220;OK, well yes, it can do this, but how is that different from the current screen media experience, the current web experience that we have of all these types of things?&#8221;  You know, how can augmented reality really add a new layer of value and experience to these? And I think that process would really trim a lot of the fat from the hopes and dreams of AR and anchor it down into some very pragmatic avenues for development.  And then you could start looking at, &#8220;Well, OK, what happens when we start combining these?&#8221; When we take gaming levels and plug that into the location basin, as you suggested.&#8221;</strong></p>
<p>Chris Arkenberg is a technology professional with a focus on product strategy &amp; development, specializing in 3D, augmented reality, ubicomp and the social web. He uses research, scenario planning, and foresight methodologies to help organizations anticipate change and adopt a resilient and forward-looking posture in the face of unprecedented uncertainty. His personal work is collected at <a href="http://urbeingrecorded.com" target="_blank">urbeingrecorded</a>, and his <a href="http://www.linkedin.com/in/chrisarkenberg" target="_blank">professional profile is here.</a></p>
<p>He is also one of the founder/organizers of <a href="http://ardevcamp.org" target="_blank">AR DevCamp</a>, which is currently scheduled for Dec. 4th (somewhere in SF or The Valley!)  Chris said, &#8220;No further details atm (still trying to find a venue and get sponsors) but please direct people to http://ardevcamp.org for upcoming information.&#8221;</p>
<h3>Talking with Chris Arkenberg</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/ChrisArkenberg.jpg"><img class="alignnone size-medium wp-image-5929" title="ChrisArkenberg" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/ChrisArkenberg-300x199.jpg" alt="ChrisArkenberg" width="300" height="199" /></a></p>
<p><strong>Tish Shute:</strong> I know some people thought <a href="http://www.gartner.com/it/page.jsp?id=1447613" target="_blank">the positioning of AR by Gartner near the peak of the hype cycle </a>was misguided, and based on a very narrow understanding of AR as used in marketing apps. But reading your post I thought you made a lot of good points.</p>
<p><strong>Chris Arkenberg:  It&#8217;s tracking hype, right?  It&#8217;s not necessarily tracking the growth of the technologies or their maturation so much as it&#8217;s tracking the general attention level.  And what&#8217;s interesting to me is that this tends to affect the amount of money that goes into those technologies.</strong></p>
<p><strong>Tish Shute:</strong> I was particularly interested in your post because I have been writing a post about two recent O&#8217;Reilly events in NYC, <a href="http://makerfaire.com/newyork/2010/" target="_blank">Maker Faire</a> and <a href="http://www.web2expo.com/">Web 2.0 Expo</a>, and then <a href="http://www.cloudera.com/company/press-center/hadoop-world-nyc/" target="_blank">Hadoop World</a>, where Tim gave a very interesting 45 minute keynote.  AR was pretty low profile at all three events. <a href="http://www.flickr.com/photos/bdave2007/5036397168/in/photostream/" target="_blank">But the NVidia augmented reality demo attracted a lot of attention at the sponsors expo,</a> and Usman Haque, founder of <a href="http://www.pachube.com/" target="_blank">Pachube</a>, announced in <a href="http://www.web2expo.com/webexny2010/public/schedule/speaker/43845" target="_blank">his presentation</a> that they are working on an augmented reality interface for Pachube called Porthole; it&#8217;s designed for facilities management and &#8220;as a consumer-oriented application that extends the universe of Pachube data into the context of AR &#8211; a &#8216;porthole&#8217; into Pachube&#8217;s data environments.&#8221;  Usman also mentioned, when I talked to him, that he is contributing to the AR standards discussion and is now on the program committee <a href="http://www.w3.org/2010/06/16-w3car-minutes.html#item02" target="_blank">for the W3C group on augmented reality</a>.  For more on this standards discussion and the Pachube AR interface, see Chris Burman&#8217;s paper for the W3C, <a href="http://www.w3.org/2010/06/w3car/portholes_and_plumbing.pdf" target="_blank">Portholes and Plumbing: how AR erases boundaries between &#8220;physical&#8221; and &#8220;virtual.&#8221;</a></p>
<p>I think pioneers in the augmented reality community should pay attention to these wider conversations about the Battle for the Internet Economy, and the exploration of the &#8220;Platforms for Growth&#8221; theme at <a href="http://www.web2expo.com/">Web 2.0 Expo</a> is very important. This is of course also a nudge to read my upcoming post on these O&#8217;Reilly events!</p>
<p>Also, I have another project I have been chewing on that I would like to talk to you about.  I want to start an AR conversation about the wonderful <a href="http://map.web2summit.com/">Points of Control map</a> produced for Web 2.0 Summit by <a href="http://battellemedia.com/" target="_blank">John Battelle</a>. [Note: there will be a "Battle for the Internet Economy" free Web2Summit webcast w/ @johnbattelle &amp; @timoreilly Wed 10/27 at 1pm PT http://bit.ly/b46cmb #w2s]</p>
<p>Up to this point, understandably given the immaturity of the technology, AR has had little role in the &#8220;Battle for the Internet Economy.&#8221;  But this doesn&#8217;t mean that the map isn&#8217;t good for AR visionaries, enthusiasts, entrepreneurs, and developers to think with.  And both you and Tim have pointed out the potential for AR to leverage the giant data subsystems in the sky.  I have to say the positioning of Cloud Computing on the brink of heading down into the trough of disillusionment in this recent rendition of the Gartner Hype Cycle seems ridiculous!</p>
<p>Cloud Computing is already ubiquitous; it hardly seems credible that it is headed for a trough of disillusionment!</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-2.48.30-AM.png"><img class="alignnone size-medium wp-image-5940" title="Screen shot 2010-10-27 at 2.48.30 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-2.48.30-AM-300x199.png" alt="Screen shot 2010-10-27 at 2.48.30 AM" width="300" height="199" /></a></p>
<p><strong>Chris Arkenberg:  Yeah, it&#8217;s ubiquitous, so why even talk about it when it&#8217;s your fundamental infrastructure?</strong></p>
<p><strong>Tish Shute:</strong> Yeah, and I seriously doubt it is imminently headed for a trough of disillusionment&#8230; and this brings me back to the Points of Control map which, as John Battelle points out, &#8220;aims to identify key players who are battling to control the services and infrastructure of a websquared world&#8221; in which the &#8220;Web and the world intertwine through mobile and sensor platforms.&#8221;  This instrumented world, of course, creates a great deal of opportunity for augmented reality.  Have you seen that points of control map?</p>
<p><strong>Chris Arkenberg:  I think I have, actually.</strong></p>
<p><strong>Tish Shute: </strong>There has been much debate about how this intertwining of the web and the world will play out in augmented reality.  Chris Burman points out in his position paper for the W3C, <a href="http://www.w3.org/2010/06/w3car/portholes_and_plumbing.pdf" target="_blank">Portholes and Plumbing: how AR erases boundaries between &#8220;physical&#8221; and &#8220;virtual&#8221;</a>, that &#8220;trying to draw parallels between a browser based web and the possibilities of AR may solve issues of information distribution in the short-term,&#8221; but it must not have a limiting effect in the long-term.  We now at least have one <a href="https://research.cc.gatech.edu/polaris/" target="_blank">web standards-based browser for AR</a> thanks to the work of Blair MacIntyre and the Georgia Tech team.  But I think the discussion in the comments of Mac Slocum&#8217;s recent post, <a href="http://radar.oreilly.com/2010/10/two-ways-augmented-reality-app.html" target="_blank">&#8220;How Augmented Reality Apps Can Catch On&#8221;</a>, is an interesting starting point from which to think about platforms of growth for AR.  I am not sure if I am stretching his meaning, but I think Raimo of <a href="http://www.layar.com/" target="_blank">Layar</a> is suggesting that what the Points of Control map calls the Plains of Media Content is very important to the growth of the fledgling AR industry right now.  And I would agree with this, and add that the neighboring terrain of gaming levels will be pretty key, as one of my other favorite AR start-ups, <a href="http://ogmento.com/" target="_blank">Ogmento</a>, hopes to reveal in the near future!  But what do you think was most important in this brief but pithy dialogue between you, Raimo, and Mac?</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-2.56.02-AM.png"><img class="alignnone size-medium wp-image-5941" title="Screen shot 2010-10-27 at 2.56.02 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-2.56.02-AM-300x179.png" alt="Screen shot 2010-10-27 at 2.56.02 AM" width="300" height="179" /></a></p>
<p>[The screenshot above is from a teaser video by <a title="Gary Hayes" href="http://www.personalizemedia.com/future-of-location-based-augmented-reality-story-games/?utm_source=feedburner&amp;utm_medium=twitter&amp;utm_campaign=Feed:+PersonalizeMedia+%28PERSONALIZE+MEDIA%29" target="_blank">Gary Hayes</a> of <a title="MuveDesign" href="http://www.muvedesign.com/">MUVEdesign</a> for his upcoming game, Time Treasure (2011 release date).  See Gary&#8217;s <a title="Gary Hayes" href="http://www.personalizemedia.com/future-of-location-based-augmented-reality-story-games/?utm_source=feedburner&amp;utm_medium=twitter&amp;utm_campaign=Feed:+PersonalizeMedia+%28PERSONALIZE+MEDIA%29" target="_blank">blog</a> for more, and Gary&#8217;s <a href="http://www.personalizemedia.com/16-top-augmented-reality-business-models/" target="_blank">post from over a year ago</a> on AR business models.  Thomas K. Carpenter, <a href="http://gamesalfresco.com/2010/10/25/time-treasure-future-tablet-game/" target="_blank">on Games Alfresco, notes</a>, &#8220;I think this is a terrific idea and I find it interesting he&#8217;s planning this on a tablet rather than a smartphone.&#8221;</p>
<p><strong>Chris Arkenberg:  The way I took it&#8230; And to give a little bit of context, I came from this apprehension of augmented reality as an expression of the existing Internet &#8211; as sort of a visualization layer that allows you to draw out data, with all the affordances of being able to anchor it to real world things.</strong></p>
<p><strong>And my own sort of path has led me to want to really try to understand that and refine it, particularly with respect to the sort of Internet of things and the smarter planet idea of just having embedded systems everywhere.  And specifically, what is the value-add  for augmented reality as a visualization layer of an instrumented world?</strong></p>
<p><strong>And so that&#8217;s caused me to be a bit biased towards that side of AR.  And the way I took Raimo&#8217;s comment was that he was saying, &#8220;You know, really what we&#8217;re interested in is media.&#8221;  That he was effectively saying that AR for them is really just about that space between the screen and the world, or between your eyes and the world, and what you can do there.</strong></p>
<p><strong>Certainly I had considered it in the past, but I hadn&#8217;t really focused on it or assumed that it was a priority as a business model.  And so he kind of reminded me that, actually, there&#8217;s a lot of entertainment applications.  There&#8217;s a lot of, obviously, advertising and marketing applications.<br />
And so I felt that I was being a little narrow in my focus&#8230;</strong></p>
<p><strong>Tish Shute: </strong> Yes, this comes to the heart of what I am interested in about the role AR can play in opening up new relationships to the world of data that we live in &#8211; not just making it more accessible and useful to us when and where we need it, but AR as a road to reimagining it&#8230;</p>
<p>Have you seen any interesting work yet that explores these great data economies in the cloud through AR? I mean, can you think of any others? There is <em><a href="http://www.planefinder.net/" target="_blank">planefinder.net</a></em>, but others?</p>
<p><strong>Chris Arkenberg:  I&#8217;ve seen a few just sort of skunk works type applications that people have been playing around with, again, to try and reveal things.  One of them was similar to the aircraft one, but it was more for military use, being able to identify things of interest in the sky.  I&#8217;ve seen a couple of others for navigation &#8211; being able to identify mountain peaks on a visual plane, for example &#8211; but this isn&#8217;t so much about revealing an instrumented world.</strong></p>
<p><strong>Tish Shute:</strong> Yeah, I think that was from the Imagination, right?  I know that&#8217;s an interesting one. Usman, at Web 2.0 Expo <a href="http://www.web2expo.com/webexny2010/public/schedule/speaker/43845" target="_blank">in his presentation</a>, mentioned the work Pachube is doing on an augmented reality interface.  I interviewed Usman again, as my last long interview with him was nearly 18 months ago now, and Pachube is well on the way to becoming the Facebook of data &#8211; or, the analogy that Usman prefers, the Twitter of sensors!</p>
<p><strong>Chris Arkenberg:  Hmm, interesting.</strong></p>
<p><strong>Tish Shute:</strong> And to go back to your comments on augmented reality not getting caught in some of the traps that made virtual worlds lose relevancy: I think it is vital that AR developers understand the strategic possibilities of key points of control in the internet economy, because the isolation and Balkanization of virtual worlds were certainly a factor in their rapid slide into the trough of disillusionment &#8211; although many would argue that a fundamental flaw in the kind of virtual experience that Second Life and other virtual worlds constructed was really the fatal flaw (see James Turner&#8217;s interview with Kevin Slavin, <a href="http://radar.oreilly.com/2010/09/drawing-the-line-between-games.html" target="_self">Reality has a gaming layer</a>).</p>
<p>But Second Life&#8217;s isolation from the other great network economies of the internet was certainly a limiting factor.</p>
<p><strong>Chris Arkenberg:  And that&#8217;s been exactly my sense, and I&#8217;ve, over the years, tried to encourage development in that direction for virtual worlds.  I did work, through Adobe, to help develop Atmosphere 3D back in the early 2000s.  And we did a lot of work to try and understand the marketplace and the specific value-add of doing things in 3D over 2D.</strong></p>
<p><strong>And this is kind of why I keep referring back to VR and VWs with respect to augmented reality: with immersive worlds, there was this idea&#8230; there was this big rush.  Everybody was so excited about it.  It was obviously the next cool thing.  And everybody wanted to try to do everything in it.  You could do your shopping in virtual worlds. You could have meetings in virtual worlds.</strong></p>
<p><strong>Tish Shute:</strong> And shopping, yes&#8230; that didn&#8217;t work out so well!</p>
<p><strong>Chris Arkenberg:  And everybody was very excited in developing these things.  And what it really came down to is, &#8220;Yeah, you can, but it&#8217;s actually a lot better to do those things on a flat plane or in person.&#8221;  Meeting Place, WebEx, TelePresence &#8211; those tools generally do a much better job at facilitating telepresence meetings than a virtual world does. The same with telepresent education. There are only very specific things that both VR and AR are really good at.</strong></p>
<p><strong>And that&#8217;s where I find myself with augmented reality right now, trying to really pick through that and critically look at which uses are really appropriate for an AR overlay. And again, I think that&#8217;s why the hype cycle is important, because it reflects back this desire that AR is going to be the next big thing &#8211; the be-all, end-all of interacting with data in the cloud &#8211; and forces us all to take a critical look at why we should do things in AR instead of on a screen.</strong></p>
<p><strong>AR is not going to work well for most things, but it&#8217;s going to be very good for certain uses.  Right now I&#8217;m very keen on trying to understand what those things might be.</strong></p>
<p><strong>Tish Shute:</strong> I had this wonderful conversation (more in an upcoming post) with Kevin Slavin, one of the founders of <a href="http://areacodeinc.com/" target="_blank">Area/Code</a>, at Web 2.0 Expo, and I think some of what he describes about the data brokerages of high frequency trading has some interesting implications for AR&#8217;s role, say, in ubiquitous computing.  The trading markets are now pretty much dominated by machine to machine intelligence; machine to machine brokerages.  They are basically game economies on a scale we can barely wrap our heads around, where the speed at which bots and algo traders can access the network is the key.  We really have no clue what is going on until we lose our house&#8230;</p>
<p>Kevin was also <a href="http://radar.oreilly.com/2010/09/drawing-the-line-between-games.html" target="_blank">interviewed by James Turner on O&#8217;Reilly Radar.</a> He talked about how much of the interesting work in location based mobile social apps is defined in opposition to the model of Second Life.  He also talked to me about how we are seeing &#8220;first life&#8221; take on the qualities of &#8220;second life.&#8221;  What goes on on the trading floor is largely a performance, secondary to a more important world of machine intelligence with giant co-located servers and bots fighting for trading advantages measured in fractions of seconds.</p>
<p>He pointed out how we draw on all these tropes from sci-fi movies, these HUDs based on ideas of machine intelligence where the robot talks to the other robot in English through an English HUD! Many of our current visual tropes for AR are perhaps just as inadequate for the kind of data driven world we live in.</p>
<p>Of course, when you are thinking of having fun with dinosaurs, or illustrated books, or whatever, this is not, perhaps, an issue. But if you are thinking of augmented reality interfaces as being important in a battle for the network economy, and platforms for growth, then how this new interface helps us live better in a world of data is an important issue.</p>
<p><strong>Chris Arkenberg:  Now, does that indicate that the UI just needs more overhaul and innovation, or more that the visual interface for those experiences shouldn&#8217;t really leave the screen?  It shouldn&#8217;t move on to the view plane?</strong></p>
<p><strong>Tish Shute: </strong> Yes, we have a few concept videos that try and explore this&#8230;</p>
<p><strong>Chris Arkenberg:  Well, and I think this will happen at the level of human-computer interface.  I mean, that&#8217;s always been its role: making coherent the sort of machine mind, for lack of a better term &#8211; making it coherent to the human mind. So there is a lot of this sort of machine intelligence, the semantic Web 3.0 revolution, where it really is about enabling machines, and agents, and bots to understand the content that we&#8217;re feeding them.</strong></p>
<p><strong>But at the end of the day, they, for now, need to be providing value to us human operators. So there&#8217;s always going to be a role for human-computer interface and user experience design to make this stuff meaningful.</strong></p>
<p><strong>I mean, if you look at the revolution in visualization &amp; data viz, this is of incredible value because it takes a tremendous amount of data and collates it into a glanceable graphic &#8211; you can look at it and immediately comprehend massive amounts of data because it&#8217;s delivered in a handy, visual way.</strong></p>
<p><strong>So I see that as a fascinating design challenge, how the user experience of the data world can be translated into meaningful human interaction.</strong></p>
<p><strong>Tish Shute:</strong> Yeah.  And when we see <a href="http://stamen.com/" target="_blank">Stamen Design</a> pursuing a big idea in AR, that&#8217;s when we might start to rock and roll, right?</p>
<p><strong>Chris Arkenberg:  Yeah. In my article, I sort of jokingly suggested that Apple will create the iShades.  But they&#8217;ve got the track record of being way ahead of the curve and delivering the future in very bold forms.</strong></p>
<p><strong>Tish Shute:</strong> A key part of the battle for the network economy is to bring the complexity of data into the human realm in a way that increases human agency.  Kevin suggests that the giant robot casinos of the markets should actually lift off into total abstractions, as these machine-driven trades get back into the human realm in ways that are so damaging to our lives &#8211; a lost house or job!  The notion of a counterveillance society, where people have more agency over the important aspects of their lives &#8211; health, housing, job (which I discussed with Kevin &#8211; interview upcoming) &#8211; has gotten pretty tricky!</p>
<p>But I think we will begin to see AR eyewear for specific applications (gaming and industrial) get more common fairly soon &#8211; possibly as smart phone accessories.</p>
<p>And it is clear that AR is going to be, increasingly, a part of our entertainment smorgasbord in coming months. The iPod touch has a camera (although lower resolution), Nintendo&#8217;s devices are AR-ready, and many aspects of the AR vision of hands-free spatial interfaces will go mainstream through Natal.</p>
<p>But we are yet to see an app/platform emerge for mobile social AR games that turn every bar and cafe, and ultimately the whole city, into a gaming venue &#8211; although I think Ogmento and MUVE aim to lead the way here!  Will an AR company achieve Zynga level success by using Foursquare, for example?</p>
<p>My feeling is that the lesson of Zynga is pretty important for mobile social AR games.  Could Flash social gaming have taken off without Facebook?</p>
<p><strong>Chris Arkenberg:  And that&#8217;s the real driver.  And again, as you mentioned with Second Life &#8211; and this was exactly my own sense &#8211; they stuck to the closed garden model and didn&#8217;t get the power of social and collaboration.  They attempted to add some of those affordances within the world, but, you know, ultimately most people aren&#8217;t in virtual worlds, and most people aren&#8217;t using augmented reality.  So leveraging the really predominant platforms like Twitter and Facebook and Foursquare &#8211; being able to leverage those affordances, that connectivity, into a platform like augmented reality &#8211; I think, is really critical. Because again, you get nothing unless you have the masses, unless you have people present.</strong></p>
<p><strong>Tish Shute:</strong> In AR research there is a long history of the notion of powerful AR-dedicated devices, but smart phones and tablets are good enough, and can launch augmented reality into the heart of the internet economy.  I think the elusive AR eyewear will come to us initially as a smart phone accessory for specific apps.  But, for the moment, most AR apps make little attempt to play in the wider internet economy.</p>
<p><strong>Chris Arkenberg:  And I think it&#8217;s actually much lower hanging fruit, really, to do gaming, marketing, transmedia.  Because then you don&#8217;t really care about the cloud, or maybe you only really care about a little part of it that your gaming property is addressing. Then it becomes much more about entertainment, and much more about persuasion, and sensationalism.  And if you&#8217;ve got dancing dinosaurs on your street, great!  It&#8217;s entertaining, it&#8217;s cool, it&#8217;s new. That stuff is fairly straightforward.</strong></p>
<p><strong>I keep coming back to this idea of, you know, the instrumented city.  What sort of data trails do you get out of a fully instrumented city?  So maybe you get traffic patterns, maybe you get geo-local movements of masses, maybe you get energy usage, that sort of thing &#8211; all the sort of heat maps you can generate from a city. But then what good does it do to have that on an augmented reality layer versus just looking at it on a mobile device or looking at it on your laptop?</strong></p>
<p><strong>Tish Shute:</strong> Of course the use cases for &#8220;magic lens&#8221; AR are different from the kind of hands free, 360 view with tightly registered media that a full vision of AR has always promised.  The 360 view is quite a different metaphor from the web and mobile rectangular screens.</p>
<p><strong>Chris Arkenberg:  Yes, yes.</strong></p>
<p><strong>Tish Shute:</strong> Did you see that <a href="http://laughingsquid.com/tweet-it-ipads-vs-iphones-a-parody-of-michael-jacksons-beat-it/" target="_blank">great parody of Michael Jackson&#8217;s</a> &#8220;Beat It&#8221; with the iPads versus the iPhones?</p>
<p><strong>Chris Arkenberg:  Oh, really?</strong></p>
<p><strong>Tish Shute:</strong> I tweeted it &#8217;cos I thought it was quite funny and a little close to the bone!<br />
[laughter]</p>
<p>&#8220;ur wanna an ipatch 2 b the new fad?&#8221; #AR gets cameo in Twitter, iPads &amp; iPhone&#8217;s Michael Jackson-Inspired Parody via @mashable</p>
<p>It is hard to get away from the importance of eyewear when discussing AR!</p>
<p><strong>Chris Arkenberg: Yes, so the hardware, to me, is a big stumbling point right now, or it&#8217;s a large gating factor, I think, for realizing what an augmented reality vision could really be like.  It really does need to be heads up.  This holding the phone up in front of you is fun to demonstrate that it&#8217;s possible, and it&#8217;s valuable in some ways&#8230;</strong></p>
<p><strong>Tish Shute:</strong> And it&#8217;s particularly nice in some applications, like the planes app, or the Acrossair subway app where you hold the phone down and get the arrow, right?</p>
<p><strong>Chris Arkenberg:  Yeah, the way-finding stuff I think is really valuable&#8230;</strong></p>
<p><strong>Tish Shute:</strong> Sixth Sense really caught people&#8217;s imagination because it managed to deliver the gesture interface with cheap hardware, even if projection has limited uses (no brightly lit spaces or privacy, for example!).</p>
<p>The other important and as yet unrealized part of the AR dream is real-time communications.  Many interesting use cases would require this. As you know, that is my chief excitement, along with federation, in the Google Wave server (which should soon be released as <a href="http://googlewavedev.blogspot.com/2010/09/wave-open-source-next-steps-wave-in-box.html" target="_blank">Wave in a Box</a>) for <a href="http://www.arwave.org/" target="_blank">ARWave</a>.</p>
<p><strong>Chris Arkenberg:  Well, my sense of Wave is that it was a ChromeOS protocol that they instantiated, or that they exhibited, in the public deployment of Google Wave.  That was a proof of their sort of low level architectural solution.  Because, you know, they&#8217;ve been rumored to be working on this cloud OS for some time. And so my sense is that Wave is actually one of the core components of that cloud OS, and that it just happened to incarnate for the public in a test run as Google Wave.</strong></p>
<p><strong>Tish Shute:</strong> I do hope that Wave in a Box will lower the barriers to entry for people experimenting with this technology.  The FedOne server was just way too hard for most people to take the time to set up.  Of course, it is the brilliance of the Wave Operational Transform work that also poses problems in terms of ease of use. But the Wave Federation Protocol is pretty innovative, and could even play an important role in real time communications for AR eyewear connected to smartphones. The challenges that Wave takes on re real-time communications, federation, permissions and filters are pretty important ones for AR&#8230;</p>
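<p>[Editor&#8217;s note: since operational transformation comes up here as both Wave&#8217;s strength and its ease-of-use problem, a toy sketch may help show what it does. This is a hypothetical illustration, not Wave&#8217;s actual algorithm: two clients insert into the same string concurrently, and each transforms the other&#8217;s operation so that both replicas converge.]</p>

```python
# Toy operational transformation (OT) for concurrent string inserts.
# Hypothetical illustration only -- not the real Wave OT, which handles
# many more operation types and composes whole operation streams.

def transform_insert(op, against, against_has_priority):
    """Rewrite `op` so it can be applied after `against` has been applied.

    Each op is (position, text). If `against` inserted at or before our
    position, shift right by the inserted length. Ties are broken by a
    priority flag (real systems use a site/client ID) so replicas converge.
    """
    pos, text = op
    a_pos, a_text = against
    if a_pos < pos or (a_pos == pos and against_has_priority):
        return (pos + len(a_text), text)
    return op

def apply_op(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

# Two clients edit "AR rocks" concurrently:
doc = "AR rocks"
op_a = (3, "really ")   # client A wants "AR really rocks"
op_b = (0, "Mobile ")   # client B wants "Mobile AR rocks"

# Client A applies its own op, then B's op transformed against A's.
state_a = apply_op(apply_op(doc, op_a),
                   transform_insert(op_b, op_a, against_has_priority=True))
# Client B applies its own op, then A's op transformed against B's.
state_b = apply_op(apply_op(doc, op_b),
                   transform_insert(op_a, op_b, against_has_priority=False))

assert state_a == state_b == "Mobile AR really rocks"
```

The point of the exercise: neither client waits for the other, yet both end up with the same document, which is what makes the approach attractive for real-time federation and also what makes it subtle to implement.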
<p><strong>Chris Arkenberg:  Especially when you&#8217;re trying to federate a lot of permissions and filter a lot of data, all of which gets even more important when you have a visual layer between you and the real world.</strong></p>
<p><strong>Tish Shute:</strong> You got it.  Yeah!</p>
<p><strong>Chris Arkenberg:  I think that&#8217;s really valuable real estate, both for third parties that want to get access to your eyes, as well as for you, as the user, who still needs to navigate through the phenomenal world and not be occluded by massive amounts of overhead data.</strong></p>
<p><strong>Tish Shute:</strong> Yes, I am sure Google has big plans for the next level of cloud computing and Wave looks at some key challenges.  I suppose federation poses some key business problems.  I think it was Michael Jones who said to me that it was a bit like socialism in that you have to be willing to give something up for the greater good.</p>
<p>Perhaps federation does not present enough appeal because of its challenges re business models?</p>
<p><strong>Chris Arkenberg:  Well, I wonder.  I mean, there&#8217;s got to be some value for their ad platform, as ads are moving more towards this personalized experience.  Advertising is becoming less of a shotgun blast and more of a very precise, surgical strike. So being able to track user data to such a fine degree to mobilize the appropriate ads around them wherever they are, on any platform, is certainly very valuable to Google and their ad ecology.</strong></p>
<p><strong>Tish Shute:</strong> Many people have high hopes that HTML5, by lowering the barrier of entry for browser style AR, could also pave the way for some interesting AR work&#8230;</p>
<p><strong>Chris Arkenberg:  Well, as much as I would hope that all the different players are going to come together and establish some shared set of standards, really, what&#8217;s happening is a rush to the finish line to be the first&#8230; to get the most penetration in the marketplace so that Layar, for example, can say, &#8220;It&#8217;s official.  We&#8217;re the platform.&#8221;  And then the consolidation that will follow, where the Googles and the other big players like Qualcomm say, &#8220;OK, it&#8217;s mature enough.  We&#8217;ll start buying up all the smaller companies.&#8221;</strong></p>
<p><strong>And that&#8217;s where the real challenge is right now: there are no standards.  It&#8217;s such an immature technology that you have a lot of different players trying to establish the ground rules.  And again, this is one of the challenges that faced public virtual worlds: you had a lot of different virtual worlds that weren&#8217;t talking to each other in any particular way, and they each had their own development platform. And so you end up with a very fractured ecosystem, or set of competing ecosystems, which is kind of what&#8217;s happening with AR right now, where a developer has to choose between a number of different new platforms or hedge by deploying across multiple platforms. Basically, the web browser wars are set to be recapitulated by the AR browsers.</strong></p>
<p><strong>Among them, Layar and Metaio seem to be getting the most traction.  But there&#8217;s still not a really strong case for a unified development ecosystem to emerge.</strong></p>
<p><strong>Tish Shute:</strong> So a discussion of ecosystem development brings us back to the Points of Control Map I think. So what do you see as key points of interest for AR developers to watch in the  Points of Control Map? And where do you want to sort of put your bets, right?  We are still really waiting for mobile social AR to emerge into the mainstream.</p>
<p><strong>Chris Arkenberg:  Yes.  And that&#8217;s primarily the shortcoming of the hardware itself, but also of the accuracy of current GPS technology.  That&#8217;s another kind of gating factor, because again, AR wants to be able to express the data within a distinct place or object.</strong></p>
<p><strong>So in a lot of ways, other than kind of what we&#8217;ve allowed for the broader entertainment purposes, for AR to really work, there needs to be more resolution in GPS location.  So for it to be truly locative&#8230; because it&#8217;s OK to tell Foursquare that you&#8217;re in Bar X.  But if you want to be able to draw data directly on a wall within that bar, or do advertising over the marquee on the front, you need more factors to accurately register those images on a discrete location. So that&#8217;s another sort of aspect of the immaturity of AR: it&#8217;s still very hard to register things on discrete locations without employing a number of diverse triangulation methods.</strong></p>
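<p>[Editor&#8217;s note: to make the registration problem concrete, here is a hypothetical back-of-the-envelope sketch, not from the interview, of how a sensor-only AR browser might place a point of interest (POI) on screen from just a GPS fix, a compass heading, and the camera&#8217;s field of view. All coordinates and parameters are made up; at a range of 20 m, a few metres of GPS error swings the computed bearing, and the drawn label, by several degrees.]</p>

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(device_lat, device_lon, heading, poi_lat, poi_lon,
             fov_deg=60, screen_w=480):
    """Horizontal pixel where the POI should be drawn, or None if off-screen."""
    # Signed angular offset of the POI from the camera's heading, in [-180, 180).
    offset = (bearing_deg(device_lat, device_lon, poi_lat, poi_lon)
              - heading + 180) % 360 - 180
    if abs(offset) > fov_deg / 2:
        return None  # outside the camera's field of view
    # Map [-fov/2, +fov/2] linearly onto [0, screen_w].
    return round((offset / fov_deg + 0.5) * screen_w)

# A POI roughly 20 m north-east of the device, camera facing heading 30:
x = screen_x(40.7480, -73.9855, heading=30.0,
             poi_lat=40.74815, poi_lon=-73.98535)
```

This is exactly the style of reckoning Chris calls &#8220;strictly locative&#8221;: it works for a mountain peak kilometres away, but at wall-within-a-bar distances the GPS error term dominates, which is why vision-based registration methods are needed as well.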
<p><strong>Tish Shute:</strong> Right.  The mobile AR games we see at the moment are really just faking a relationship to the physical world, unless they rely on markers or some limited form of natural feature recognition, which is really just a more sophisticated form of markers.  But the Qualcomm SDK does offer some opportunities to tie AR media to the world more tightly, as does the Metaio SDK. But in terms of a mobile social AR game that could be like the Cape of Zynga to Foursquare in Location Basin [see the <a href="http://map.web2summit.com/">Points of Control map</a>]&#8230; We haven&#8217;t seen anything close yet.</p>
<p>AR should be able to bring the check-in mode to any object in our environment.</p>
<p><strong>Chris Arkenberg:  Yes, yes.  And that&#8217;s actually one of the early interests I had in the notion of social augmented reality. I wanted a way to tag my community with invisible annotations that only certain people could read, and found pretty quickly that that&#8217;s very difficult to do.  I mean, you can kind of do some regional tagging &#8211; on a beach, for example &#8211; but if you wanted to tag the bench that was on the cliff above the beach, it&#8217;s very difficult to do that using strictly locative reckoning.</strong></p>
<p><strong>There&#8217;s all sorts of really cool social engagement that can be revealed when people are allowed to attach things to the world around them, to the streets they normally pass through, or the points of interest that they normally engage in. To be able to author on the fly on the streets and attach it discretely to an object, effectively.</strong></p>
<p><strong>Tish Shute:</strong> And yes, we do have all kinds of markers and QR codes.  But Erick Schonfeld of TechCrunch <a href="http://techcrunch.com/2010/10/18/likify-qr-code/" target="_blank">made a good point about QR codes</a>: &#8220;Until QR code scanners become a default feature of most smartphones and they start to become actually useful enough for people to go through the trouble to scan them, they will remain a gee-whiz feature nobody uses.&#8221;</p>
<p><strong>Chris Arkenberg:  So again, this gets back to competing standards and who gets access to the phone stack, the bundle. Who gets the OEM deal&#8230;?</strong></p>
<p><strong>Tish Shute:</strong> Yes, the battles for the networks on the Handset Plains are pretty important for AR!<br />
[laughter] I think Layar have made some smart moves on The Handset Plains.</p>
<p>And there are a lot of acquisitions of nearfield technology to look at. If I remember rightly, eBay bought the RedLaser tech from Occipital &#8211; now there&#8217;s an interesting company. Their panorama stuff rocks!</p>
<p><strong>Chris Arkenberg:  Right. There&#8217;s a lot of nearfield stuff that&#8217;s supposed to hit all of the major mobile platforms in the next year or so.</strong></p>
<p><strong>I mean, I think where this is heading, in my mind, is basically smart motes.  You know, little nearfield wide-range RFIDs, the size of a tiny square, that you could attach to just about anything and then program to be a representative of your establishment or of an object &#8211; so that you can start to tag just about anything. You can&#8217;t rely on geo to do it, but if you have a nearfield chip there that costs maybe two cents to buy in bulk, and you can flash program it, then you can start to attach data to just about anything.</strong></p>
<p><strong>Tish Shute:</strong> Yes, &#8217;cos some things still remain very difficult for image recognition technologies like Google Goggles.</p>
<p><strong>Chris Arkenberg:  Well, if your phone can interrogate for nearfield devices, and it detects a chip in its near field, it can then interrogate that chip.  The chip may contain flash data on itself, or it may point to the local server in the establishment, or it may go to the cloud and get that data back.</strong></p>
<p><strong>Tish Shute:</strong> Yes, there is movement from the top, and open source hardware like Arduino has created an opportunity for all sorts of creativity with instrumented environments. And the handheld sensors in our pockets &#8211; our smart phones &#8211; create a lot of opportunity for bottom up innovation too.</p>
<p><strong>Chris Arkenberg:  I mean, that&#8217;s my guess.  If you look at what IBM is doing with their Smarter Planet initiative, they&#8217;re partnering with a lot of municipalities, and obviously with a lot of businesses and their global supply chains.</strong></p>
<p><strong>But they&#8217;re basically working with municipalities and all these stakeholders to instrument their territory, their business, or their city, as it were. So they&#8217;re working to provide embedded sensors and the software necessary to read them out and run reports &amp; viz.  And presumably that software can extend to include some sort of mobile device to interrogate the sensors and read the data.</strong></p>
<p><strong>That&#8217;s kind of a top-down approach: a very large global company working with top-down governance bodies to do this. Simultaneously you have the maker crowd experimenting with Arduino and such to build from the grassroots &#8211; the bottom up approach.</strong></p>
<p><strong>And that&#8217;s primarily gated by the amount of learning it takes to be able to program these devices, to be able to hack them.  Typically, the grassroots creators who make these devices don&#8217;t have the luxury of very large budgets to make things highly usable and WYSIWYG.</strong></p>
<p><strong>So the bottom up community is a sandbox to create tremendous amounts of innovation, because they are unconstrained by the very real financial needs of the top down innovators.  And so you get a lot of fascinating innovation, a very rich ecology, from the bottom-up approach, but you don&#8217;t get a lot of wide distribution.  But that does filter up to and inform the top down approach, which has a lot more money to put into this stuff.  And it ultimately has to respond to the needs of the marketplace.</strong></p>
<p><strong>I mean, if there&#8217;s an answer to the question of whether something like AR will succeed through the bottom-up grassroots approach or the top-down industry approach, I would say it would be both.  Handsets will be hacked to read the bottom up innovations of the maker community, and handsets will be preprogrammed to read the top down efforts of the IBMs of the world.</strong></p>
<p><strong>Tish Shute:</strong> Yes, but I have to say it is very time-consuming hacking phones (I have just seen a few days sucked up by this myself, so that I could upgrade my G1 to try out the new ARWave client!).  I mean, Android has obviously been the platform of choice because of openness, but the business model of the iPhone and its market share in the US sure make it important for developers. It&#8217;s like you don&#8217;t exist if you don&#8217;t have an iPhone app for what you are doing.</p>
<p><strong>Chris Arkenberg:  Yeah, and that&#8217;s the challenge, because at the end of the day developers prefer not to work for free, and a solid, reliable mechanism to monetize their efforts becomes very appealing.</strong></p>
<p><strong>When I look at this map, the Points of Control map, it&#8217;s really interesting to me, because what it says to me with respect to AR is that each of these little regions that they have drawn out would be a great research project. So every single one of these should be instructive to AR.</strong></p>
<p><strong>In other words, we should be able to look at social networks, the land of search, or the kingdom of ecommerce, and apply some very rigorous critical thinking to say, &#8220;How would AR add to this engagement, this experience of gaming, or ecommerce, or content?&#8221;</strong></p>
<p><strong>Looking at each of these individually and really meticulously saying, &#8220;OK, well yes, it can do this, but how is that different from the current screen media experience, the current web experience that we have of all these types of things?&#8221;  You know, how can augmented reality really add a new layer of value and experience to these? And I think that process would really trim a lot of the fat from the hopes and dreams of AR and anchor it down into some very pragmatic avenues for development.  And then you could start looking at, &#8220;Well, OK, what happens when we start combining these?&#8221; When we take gaming levels and plug that into the location basin, as you suggested.</strong></p>
<p><strong>Tish Shute: </strong> Some of the important platforms for AR don&#8217;t appear to have spots on the map &#8211; like Google Street View and other mapping technologies that hold out so much hope for AR &#8211; or am I missing something?</p>
<p><strong>Chris Arkenberg:  You mean on the map?</strong></p>
<p><strong>Tish Shute:</strong> Yes for the full vision of AR we need sensor integration, computer vision and cool mapping technologies to come together. Do you see where Google Maps and Google Street View&#8230; Where would they be?</p>
<p><strong>Chris Arkenberg:  Yeah, I mean it&#8217;s certainly content, it&#8217;s location&#8230;</strong></p>
<p><strong>Are you familiar with Earthmine?</strong></p>
<p><strong>Tish Shute:</strong> Yes, yes I am, definitely. <a href="http://www.earthmine.com/index" target="_blank">Earthmine</a>, <a href="http://simplegeo.com/" target="_blank">SimpleGeo</a>, Google Street View, user generated internet photo sets like Flickr &#8211; all of these could be very important to AR, potentially.</p>
<p><strong>Chris Arkenberg:  Well, and the interesting thing about Earthmine is that they&#8217;re effectively trying to do an extremely precise pixel to pixel location mapping.  So they&#8217;re taking pictures of cities just like Street View, except they&#8217;re using the Z axis to interrogate depth and then using very precise geolocation to attach a GPS signature to each pixel that they&#8217;re registering in their images. Effectively, you get a one-to-one data set between pixels and locations.  And so you can look at something like Google Street View, and if you point to the side of a building, in theory, it should know exactly where that is.</strong></p>
<p><strong>They’re rolling this out with the idea of being able to tag augmented reality objects in layers directly to surfaces in the real world.  So that’s another approach to trying to get accurate registration and to try and create what are essentially mirror worlds. Then your Google Street View becomes a canvas for authoring the blended world, because if you plop a 3D object into Street View on your desktop, and then you go out to that location with your AR headset, you’ll see that 3D object on the actual street.</strong></p>
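<p>[A quick sketch of the geometry described here: under a pinhole camera model, a pixel plus its measured depth back-projects to a point in space, and the camera’s surveyed position turns that into a geolocated coordinate. This is illustrative only — all the names are invented and the camera is deliberately simplified to look due north with no tilt; Earthmine’s actual pipeline is proprietary.]</p>

```python
def pixel_to_world(u, v, depth_m, fx, fy, cx, cy, cam_east, cam_north, cam_up):
    """Back-project one pixel with a known depth into local world coordinates.

    Simplified pinhole model: focal lengths fx/fy and principal point
    (cx, cy) in pixels; the camera sits at (cam_east, cam_north, cam_up)
    in metres and looks due north with no tilt or roll.
    """
    # Camera-frame ray scaled to the measured depth
    # (x = right, y = down, z = forward).
    x = (u - cx) / fx * depth_m
    y = (v - cy) / fy * depth_m
    z = depth_m
    # Camera right = east, forward = north, down = minus up.
    return (cam_east + x, cam_north + z, cam_up - y)
```

<p>[With per-pixel output like this, pointing at the side of a building really does resolve to a fixed coordinate — which is what makes tagging AR objects directly onto real-world surfaces plausible.]</p>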
<p><strong>Tish Shute:</strong> There was some experimental work with Google Earth as a platform for a kind of simulated AR, but I suppose Google Earth doesn’t figure in the battle for the network economy as it never got developed as a platform.</p>
<p><strong>Chris Arkenberg:  It hasn’t tried to become a platform, to my knowledge.  I mean I know some people are doing stuff with it, but as far as I know, Google owns it, they did it the best because they have the best maps, and there’s not a huge ecosystem of development that’s based around it other than content layers.</strong></p>
<p><strong>And my sense of everything else on the Points of Control map is they’re looking more at these sort of platform technologies that…</strong></p>
<p><strong>Tish Shute:</strong> Yes, regarding platforms for growth for AR. Gaming consoles will probably emerge as a significant platform for AR this year.</p>
<p><strong>Chris Arkenberg:  There will be much more of a blended reality experience in the living room for sure, and with interactive billboards. Digital mirrors are another area.  So I mean if we kind of extend AR to include just blended reality in general, you know, this is moving into our culture through a number of different points. As you mentioned, it will be in the living room, it will be in our department stores where you can preview different outfits in their mirror. We’re already seeing these giant interactive digital billboards in Times Square and other areas.</strong></p>
<p><strong>It’s funny.  I mean for me, the sort of blended reality aside, the augmented reality, to me, is actually a very simple proposition in some respects.  When I look at this map, augmented reality is just an interface layer to this map in my mind, just as it’s an interface layer to the cloud and it’s an interface layer to the instrumented world. It’s a way to get information out of our devices and onto the world.</strong></p>
<p><strong>Tish Shute:</strong> The importance of leveraging existing platforms has become pretty clear, but it is interesting: Facebook definitely gave Zynga the opportunity, but would Facebook be so big without Zynga’s social gaming boost?</p>
<p><strong>Chris Arkenberg:  I feel that Zynga has definitely helped its growth… But I think Zynga has benefited a lot more from Facebook than Facebook has from Zynga.</strong></p>
<p><strong>Tish Shute:</strong> Zynga certainly proved you could build a profitable business on Facebook’s API!</p>
<p><strong>Chris Arkenberg:  They did.  And they also really validated the Facebook ecosystem and the platform.  They really extended it… Zynga benefited from the massive social affordances that Facebook had already architected and developed. They brought gaming directly into Facebook, and particularly, this emerging brand of lightweight social gaming that, when you sit it on top of a massive global social network like Facebook, suddenly lights up.</strong></p>
<p><strong>Tish Shute: </strong>AR pioneers should go quite carefully through this map. There is so much to think about here. I’m kind of a fanatic about Streams of Activity in AR.  Real-time brokerages and their potential for AR are something I am fascinated by.  That is one reason I love the ARWave project.</p>
<p>Anselm Hook, to me, is one of the great thinkers in this area of real time brokerages &#8211; with his project Angel, and the work of <a href="http://www.ushahidi.com/" target="_blank">Ushahidi,</a> which is now the platform <a href="http://www.ugotrade.com/2010/09/17/urban-augmented-realities-and-social-augmentations-that-matter-interview-with-bruce-sterling-part-2/" target="_blank">for augmented foraging (see here)</a>.  Anselm is now working on AR at PARC which is exciting.</p>
<p><strong>Chris Arkenberg:  Well, there are some challenges working with data streams. Presentation and filtering, I think, is a big challenge with any sort of stream.  Because obviously, you have a lot of potential data to manage, to parse, and to make valuable and comprehensible. So I think this is bound very closely to being able to personalize experiences, or having very discrete, valuable experiences.  Disaster relief, for example, I think is an interesting idea that ties into the Pachube type of work. Where, if you had the headset and you were a relief worker, and you had an immediate, lightweight, non-intrusive, heads-up alpha-channel overlay, waypoint markers showing you all of the disaster locations or points of need, AR becomes extremely valuable, because it’s a primarily hands-free environment.  This is why the military stuff is so interesting.</strong></p>
<p><strong>Tish Shute:</strong> Ha!  We are running into the eye patch/shades/goggles/sexy specs thing again.  But filtering and making streams of activity relevant will be very interesting for AR.  Again, that is why I love the Wave Federation Protocol work, because of what they have built into their XMPP extensions.  You can have your real-time personal data streams, or community streams, or broadcast publicly &#8211; the permissions are built in.</p>
<p>And Thomas Wrobel’s original vision of these layers and channels is only fully expressed if you have the eyewear.</p>
<p><strong>Chris Arkenberg:  Well, and it becomes redundant if it’s on a mobile. To use a very basic example, Twitter: obviously there’s an app where you can view those streams of activity over the camera stream. But you can view that real-time data on the screen.  Why do you need to see it heads up?</strong></p>
<p><strong>The reason I really pay attention to what the military is investing in is, one, because they have a ton of money, but also because they tend to represent the core bio-survival needs of the species… So, when I look at computing, I see this very obvious trend of computers getting smaller and smaller and closer and closer to us because they’re so valuable to our success.  They give us so much valuable information for engaging our world on a moment-by-moment basis.  So, of course now we have these tiny little handheld devices that give us access to the global knowledge depositories of human history, because it’s so useful to have that stuff right at hand.</strong></p>
<p><strong>The only impediment now is that it takes one of our hands, if not both of them, to access it.  So if you are in the natural world, which we all always are, ultimately you want your hands free in order to engage with the world on a physical level.</strong></p>
<p><strong>I see that computation, or rather our access to computation, is just going to get thinner and thinner, and we’ll very soon move into eyewear, and inevitably, we’ll move into brain-computer interfaces in some capacity.</strong></p>
<p><strong>So when you’re the disaster worker, or a deployed soldier, or the extreme mountain biker, or the heli-skier, or just an adventurer, there are a lot of very practical reasons to have access to information on a heads-up plane. I see AR as being so profound and so valuable, but we’re getting a glimpse of it in its infancy, and it’s got a ways to go to be able to really contain what it is we’re reaching for.</strong></p>
<p><strong>Tish Shute:</strong> I agree.</p>
<p><strong>Chris Arkenberg:  And that’s been a big criticism I’ve had with all the existing AR implementations that I’ve seen: the UI really needs a revolution.  It’s very heavy-handed.  It is not dynamic, even though it’s supposed to be.  It does not take advantage of transparencies.  It treats the screen like a screen.  It doesn’t treat the screen like a window onto the real world. When you’re looking at the real world, you don’t want a lot of occlusion.  You want very soft-touch indicators of a data shadow behind something that you can then address and then have it call out the information that’s important to you.</strong></p>
<p><strong>Tish Shute:</strong>  Now, that’s a very nice kind of image you’ve conjured for me there.  Do you see that more could be done on the smartphone than is currently being done?  Or are we just waiting for the iShades?</p>
<p><strong>Chris Arkenberg:  I think there’s definitely a lot of room for improvement in the smartphone UI.  Nobody’s really played around with it much. And again, I think that’s in part because there hasn’t been a really established platform with enough money to fund interesting UI work. We see it in some of the concept demos that float around every now and then.</strong></p>
<p><strong>I guess it’s both a blessing and a curse that I’m always five steps ahead of where I’m trying to get to.</strong></p>
<p><strong>Tish Shute:</strong> Yeah, I am familiar with that feeling!</p>
<p><strong>Chris Arkenberg:  So I’m always trying to reach for the vision even though it’s a bit distant. I think there’s going to be a lot of development on the handsets.  But again, I think we need a lot of refinement.  We need a lot of real critical analysis of why this is a good thing.</strong></p>
<p><strong>To get back to the original point of Raimo’s comment, it struck me.  And I knew it, but I had just set it aside as gimmickry. But he’s right.  Content is a huge driver for this.  Just stuff that’s engaging, and fun, and cool, and shows off the technology so they can get enough money to make it through whatever Trough of Disappointment may be waiting.</strong></p>
<p><strong>Tish Shute:</strong> Yeah, don’t underestimate the Planes of Content!  They are a great place to get interest and money to keep AR technology moving on, right?</p>
<p><strong>Chris Arkenberg:  Yeah, yeah.  Because, you know, there’s a lot of freedom there.  And you can piggyback on all the rest of the content that’s out there and jump on memes and marketing objectives, etc&#8230;</strong></p>
<p><strong>And there’s a lot of stuff… I’m blanking on some of the names, but some of these historical recreations of city streets.  There’s a street in London where they overlaid historical photos in a really compelling experience. [Museum of London - http://www.museumoflondon.org.uk/] Again, I’m completely forgetting the attributions, but those are the type of things that can really be pursued on the existing platforms.  There is stuff that’s really compelling and really cool.</strong></p>
<p><strong>I heard of another interesting use case &#8211; and I should say that I can’t find attributions to this anywhere on the web and I may be paraphrasing or misrepresenting the actual work, but I think the concept is worth exploring anyway. But the idea was that you could take the locations of border checkpoints and conflict sites in Palestine and Israel and visually overlay them on an AR layer in San Francisco.  And it would do some sort of transposition where you could virtually view these things in San Francisco with the same locational mapping superimposed. So you could see where the checkpoints were.  You could see where the wall was.  You could see where suicide bombings were and where there had been conflicts.</strong> <strong>[I cannot find any citations for this!]</strong></p>
<p><strong>Tish Shute: </strong> But with an AR view?  But why would you use an AR view if you  are in San Francisco, then?</p>
<p><strong>Chris Arkenberg:  Because it superimposes two realities, translating the Gaza conflict into San Francisco as you are walking around. You can interrogate the world. There’s a discoverability aspect where you’re using the headset, or the handset rather, to reveal things that you could not see otherwise in your city. It was done as an art piece, but as a provocative, obviously political art piece.</strong></p>
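<p>[The transposition described here can be sketched very simply: take each site’s metric offset from an anchor point in the source city and re-apply that offset around an anchor in the destination city. This is a hypothetical reconstruction — the original piece has no citation, and the anchor coordinates and flat-earth approximation below are illustrative only.]</p>

```python
import math

EARTH_RADIUS_M = 6371000.0

def transpose(lat, lon, src_anchor, dst_anchor):
    """Keep a point's metric offset from src_anchor; re-apply it at dst_anchor."""
    src_lat, src_lon = src_anchor
    dst_lat, dst_lon = dst_anchor
    # Offset in metres via a local equirectangular approximation.
    d_north = math.radians(lat - src_lat) * EARTH_RADIUS_M
    d_east = (math.radians(lon - src_lon) * EARTH_RADIUS_M
              * math.cos(math.radians(src_lat)))
    # The same offset expressed around the destination anchor.
    return (dst_lat + math.degrees(d_north / EARTH_RADIUS_M),
            dst_lon + math.degrees(d_east / (EARTH_RADIUS_M
                                             * math.cos(math.radians(dst_lat)))))

# Illustrative anchors: central Gaza City mapped onto downtown San Francisco.
GAZA = (31.5069, 34.4560)
SF = (37.7749, -122.4194)
checkpoint_in_sf = transpose(31.5200, 34.4700, GAZA, SF)
```

<p>[Each transposed coordinate then just becomes an ordinary geolocated AR marker in the destination city.]</p>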
<p><strong>Tish Shute: </strong>Very interesting.  I’d love to see that. Because getting away from this idea that you actually have to have this one-to-one relationship between the data and the world is kinda nice, isn’t it?  Well, not one-to-one, but a very literal… getting away from that literalness is kind of good.</p>
<p><strong>Chris Arkenberg:  And that’s a possibility of virtual reality and augmented reality merging, that maybe virtual reality is actually going to do best by coming out of the box and writing itself over our reality, so that as you are walking around, you are no longer seeing San Francisco, but you are seeing part of EverQuest or World of Warcraft.</strong></p>
<p><strong>Tish Shute: </strong> Well this is where Bruce Sterling gets to that point he made in <a href="http://augmentedrealityevent.com/2010/06/06/are-2010-keynote-by-bruce-sterling-build-a-big-pie/" target="_blank">his keynote for ARE2010</a>, that if we actually have viable AR eyewear, then you get the gothic stepsister of AR, VR, rising from the grave!  He asks whether the very charm of augmented reality is in fact that it adds rather than subtracts from your engagement with the world, and whether getting sucked back into the black hole of VR might not be so great.</p>
<p><strong>Chris Arkenberg:  And then you get all sorts of interesting challenges to social cohesion if you have a lot of different people experiencing very different worlds, effectively.  If there is no real consensual reality and a majority of your local populace is, in fact, experiencing very different and unique versions of the world, what does that do to social cohesion?  How does that reinforce tribalism, for example, when only you and certain others get to opt in to a particular layer view of the world?</strong></p>
<p><strong>Tish Shute:</strong> Yes, Jamais Cascio wrote an interesting piece on that issue of AR and social cohesion a while back.</p>
<p>An eye patch is a more logical vision than the goggles in many ways, but I suppose the loss is stereo vision?</p>
<p><strong>Chris Arkenberg:  And actually, there were developments in military helicopter technology many years ago that used a single square pane of glass over the eye, mounted to the helmets of pilots.  And then they drew various bits of heads-up information on it. So that ensures that you’re having a real strong engagement with the real world, which, obviously, when you’re a helicopter pilot is quite important.  But you still have access to the data layer of the invisible world.</strong></p>
<p><strong>Tish Shute:</strong> I just went to <a href="http://www.cloudera.com/company/press-center/hadoop-world-nyc/" target="_blank">Hadoop World</a> and I have to say, I was awestruck by how big that’s got.  I mean <a href="http://hadoop.apache.org/" target="_blank">Hadoop</a> has gone from like zero to huge in just a few years.  It’s just like now everyone has the power of Google’s BigTable at their fingertips.</p>
<p>What’s the play for AR in the land of search?</p>
<p>I could imagine Hadoop being a very powerful tool for AR analytics?</p>
<p>Have you got any thoughts on the land of search and AR? Of course visual search is proceeding at a fast pace and there is a lot of promise for integrations with AR in the future but the latency for visual search is still pretty high?</p>
<p><strong>Chris Arkenberg:  In the near term, not a lot.  In the medium term, there’s a larger trend towards virtual agents that you can program or teach to keep watch over things for you as an effort to scale down the data overload.  So search is something that’s going to become more personalized and more active.  There’s a movement to make it so people can essentially deputize these agents to be always searching for them; to be out there looking for the things that they have told these agents are important to them.</strong></p>
<p><strong>So active search for AR I think presents some challenges, obviously, because you need to do text input, typically, or voice input.  Voice input, I think, is much more achievable than text input for AR.  But I can certainly imagine an AR layer that is being serviced by these agents that we have roaming around the web for us, reconciling their visual view of the world with our personalizations. AR apps are contextually aware, so an app knows that if you’re downtown, it’s not going to be giving you a ton of information about Software as a Service infrastructure, or what have you.  Instead, it’s going to be handing you little tidbits about a particular clothing brand you’ve opted in to follow and information about music venues &amp; schedules, for example.  Or perhaps you’ll be on the lookout for other users that have opted in to publicly tag themselves as a member of this or that affinity.</strong></p>
<p><strong>I keep coming back to this idea of AR as really just a simple visualization layer that all of these other technologies can potentially feed into.  So in that sense, search becomes a passive thing that AR is just simply presenting to you in a heads-up, hands-free, or potentially hands-free environment.</strong></p>
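<p>[The passive, contextual search sketched above boils down to a filter: of everything the agents have gathered, show only the items matching the user’s opt-ins within their immediate surroundings. A minimal sketch — the item fields and thresholds are invented for illustration, not any real AR platform’s schema.]</p>

```python
import math

def nearby_opted_in(items, user_lat, user_lon, interests, radius_m=500.0):
    """Keep only geotagged items matching the user's opted-in interests
    within walking range of their current position."""
    def dist_m(lat1, lon1, lat2, lon2):
        # Equirectangular approximation -- adequate at city scale.
        x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        y = math.radians(lat2 - lat1)
        return math.hypot(x, y) * 6371000.0
    return [item for item in items
            if item["tag"] in interests
            and dist_m(user_lat, user_lon, item["lat"], item["lon"]) <= radius_m]
```

<p>[So a downtown user opted in to “music” sees the venue pin two blocks away, while the SaaS announcement and the concert across the bay are both filtered out before anything reaches the heads-up layer.]</p>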
<p><strong>Tish Shute:</strong> Yes, the big challenge is the stepping stones to that point! Small steps that keep interest going into developing the underlying technology (and not just in research labs!) that will bring us that interface.  We have seen some movement already with Qualcomm.</p>
<p><strong>Chris Arkenberg:</strong> And there are bandwidth issues as well, as we can see with Google Goggles, which is a great idea for visual search.  But you have to take a picture and send it to the cloud and wait for your results.  It’s not a real-time dynamic interrogation of the world.</p>
<p><strong>Tish Shute:</strong> Yes, we are really only at the very beginning of AR being ready for prime time… It would be interesting to ask AR developers how many of them use AR on a daily basis.</p>
<p><strong>Chris Arkenberg:  I think a lot of us are just informed by the sci-fi myths and fascinated with the potential now that it’s starting to become real. But I think we all kinda get that it’s still extraordinarily young.  I mean the web is extraordinarily young. And AR is itself far younger in a lot of ways in its implementations.</strong></p>
<p><strong>Everybody has a lot of excitement about all of the great potentials that are being unleashed by this great wave of the Internet and the web and ubiquitous mobile computing.  So that’s why, you know, you look at that map and we talk about AR and you can’t talk about any of the stuff without talking about all of it, in a lot of ways, particularly with something like AR, where it’s so ultimately agnostic and could be completely pervasive across all of these layers.</strong></p>
<p><strong>So my fascination is with the future, and I measure our progress towards it by the young nascent offerings from the platform players and the developers. And yeah, a lot of it is… it’s akin to getting that first triangle on the screen in 3D.  You know, when the renderer finally works and you get a triangle on the screen, and you go, “Oh my God, it renders.”  And then you can start to really build polygons and build objects, and start doing Boolean operations, and get light and rendering in there, and textures, and on, and on, and on.<br />
So I’m fascinated by the Layars and the Metaios…<br />
[laughter]</strong></p>
<p><strong>Tish Shute:</strong> Yes, and hats off to all the players in the emerging industry, Layar, Metaio, Ogmento, Total Immersion, and all the others who are finding clever ways to bring fun aspects of AR into the mainstream, and fuel interest to take the technology to the next level.</p>
<p><strong>Chris Arkenberg:  Absolutely.  And the hype cycle is very valuable.  It has really helped launch the AR industry.  It’s brought a lot of eyes, and it’s brought a lot of money into the industry.  And it’s forcing people like us to have these conversations to understand how to refine its growth and really focus on the potential in all these different venues, whether it’s trying to save lives, or better understand your city, or have really compelling entertainment experiences.</strong></p>
<p><strong>Everybody’s excited, and everybody’s sharing, and everybody’s trying to move it forward in a way that’s the most productive.</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/10/27/platforms-for-growth-and-points-of-control-for-augmented-reality-talking-with-chris-arkenberg/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>The Next Wave of AR: Exploring Social Augmented Experiences at Where 2.0</title>
		<link>http://www.ugotrade.com/2010/03/29/the-next-wave-of-ar-exploring-social-augmented-experiences-at-where-2-0/</link>
		<comments>http://www.ugotrade.com/2010/03/29/the-next-wave-of-ar-exploring-social-augmented-experiences-at-where-2-0/#comments</comments>
		<pubDate>Mon, 29 Mar 2010 05:25:03 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR Blip]]></category>
		<category><![CDATA[AR browsers]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[ARWave demo]]></category>
		<category><![CDATA[atemorality]]></category>
		<category><![CDATA[atemporal network culture]]></category>
		<category><![CDATA[augmented reality and federation]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[augmenting the map as interface]]></category>
		<category><![CDATA[Brady Forrest]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[collaborative augmented reality]]></category>
		<category><![CDATA[Davide Carnovale]]></category>
		<category><![CDATA[Dennou Coil]]></category>
		<category><![CDATA[design principles for social augmented experiences]]></category>
		<category><![CDATA[FourSquare]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[gowalla]]></category>
		<category><![CDATA[Jeremy Hight]]></category>
		<category><![CDATA[Jesse Schell]]></category>
		<category><![CDATA[Joe Lamantia]]></category>
		<category><![CDATA[layers and channels of augmentation]]></category>
		<category><![CDATA[location technologies]]></category>
		<category><![CDATA[locative media]]></category>
		<category><![CDATA[locative narratives]]></category>
		<category><![CDATA[Markus Strickler]]></category>
		<category><![CDATA[narrative archaeology]]></category>
		<category><![CDATA[open augmented reality]]></category>
		<category><![CDATA[open distributed augmented reality]]></category>
		<category><![CDATA[pygowave]]></category>
		<category><![CDATA[real time social augmented experiences]]></category>
		<category><![CDATA[Ruby On Sails]]></category>
		<category><![CDATA[social AR]]></category>
		<category><![CDATA[social AR and crisis response]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[Sophia Parafina]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Wave]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[Where2.0]]></category>
		<category><![CDATA[WhereCamp]]></category>
		<category><![CDATA[Will Wright]]></category>
		<category><![CDATA[writing within the map]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5332</guid>
		<description><![CDATA[Where 2.0 is going to be epic this year (see my interview with Brady Forrest here), and it is so exciting to be part of it. Location technologies and augmented reality are anointed rulers now. Time Magazine recognized augmented reality as one of its 10 Tech Trends for 2010 (for more see ReadWriteWeb). The photo [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/jeremyandlisahight.jpg"><img class="alignnone size-medium wp-image-5336" title="jeremyandlisahight" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/jeremyandlisahight-300x160.jpg" alt="jeremyandlisahight" width="300" height="160" /></a></p>
<p><a id="jqit" title="Where 2.0" href="http://en.oreilly.com/where2010">Where  2.0</a> is going to be epic this year (see <a id="ysmn" title="my interview with Brady Forrest here" href="../../2010/02/10/the-physical-world-becomes-a-software-construct-talking-with-brady-forrest-about-where-2-0-2010/">my interview  with Brady Forrest here</a>), and it is so exciting to be part of it.Â   Location technologies and augmented reality are annointed rulers now.Â  <a href="http://www.time.com/time/specials/packages/article/0,28804,1973759_1973760_1973797,00.html">Time  Magazine recognized</a> augmented reality as one of its 10 Tech Trends  for 2010 (for more <a href="http://www.readwriteweb.com/archives/augmented_reality_among_times_10_tech_trends_2010.php" target="_blank">see ReadWriteWeb</a>).</p>
<p>The photo above is by Jeremy and Lisa Hight.  <a id="ohzg" title="Jeremy Hight" href="http://34n118w.net/">Jeremy Hight</a> is an information designer, theorist and artist working in Augmented Reality and Locative Media.  His essay “Narrative Archaeology” was named one of the 4 primary texts in Locative Media.</p>
<p><a id="xel:" title="Jeremy Hight" href="http://en.oreilly.com/where2010/public/schedule/speaker/69399">Jeremy Hight</a> will be part of our  panel: <a title="The Next Wave of AR: Exploring Social Augmented Experiences" href="http://en.oreilly.com/where2010/public/schedule/detail/11046">The  Next Wave of AR: Exploring Social Augmented Experiences</a>, with <a id="b49q" title="Anselm Hook" href="http://en.oreilly.com/where2010/public/schedule/speaker/6545">Anselm Hook</a>, <a id="h3j-" title="Joe Lamantia" href="http://en.oreilly.com/where2010/public/schedule/speaker/26367">Joe Lamantia</a>, <a id="xtfk" title="Sophia Parafina" href="http://en.oreilly.com/where2010/public/schedule/speaker/59688">Sophia Parafina</a> and <a id="uw9f" title="myself." href="http://en.oreilly.com/where2010/public/schedule/speaker/38011">myself.</a> We will <a href="http://www.youtube.com/watch?v=ZjXCTCSKtRQ" target="_blank">debut the video of the  ARWave project demo </a>that brings together augmented reality,  geolocation, and wave federation (more details later in this post).Â  And, Jeremy will bring to our  presentation some augmentations on his recent brilliant work and paper, <a href="http://www.neme.org/main/1111/writing-within-the-map" target="_blank">â€œWriting Within the Map.â€</a></p>
<p>Greg J. Smith points out in <a href="http://serialconsign.com/2010/03/thoughts-writing-within-map#comments" target="_blank">his in-depth look at Jeremy’s work</a> that it <strong>“dovetails with some of the main points in Bruce Sterling’s recent <a href="http://www.wired.com/beyond_the_beyond/2010/02/atemporality-for-the-creative-artist/">atemporality keynote</a> at Transmediale” &#8211; </strong>fortunately there is a <a href="http://www.wired.com/beyond_the_beyond/2010/02/atemporality-for-the-creative-artist/" target="_blank">transcription of Bruce’s keynote here</a>.  What is so awesome about this dovetailing is that you can get a feel for the fun part of living in an “atemporal network culture.”  And, if you want to really understand just how much locative media and augmented reality have changed us, you might want to dig into these texts.</p>
<p>Bruce  Sterling and Jeremy Hight, and members of the ARWave team, and a  superb cast of augmented reality movers and shakers &#8211; including Will  Wright and Jesse Schell, will be <a id="ncnl" title="speaking at Augmented Reality Event in Santa Clara, June 2nd and  3rd." href="http://augmentedrealityevent.com/speakers/">speaking at Augmented Reality Event in Santa Clara, June 2nd and  3rd.</a></p>
<p>But, this week, the AR community&#8217;s attention will be on the events at Where 2.0.  The keynote speakers will be streamed live, so if you are not fortunate enough to be there, tune in!</p>
<h3>The Next Wave of AR: Exploring Social Augmented Experiences</h3>
<p>On our panel, Jeremy Hight, Anselm Hook, Sophia Parafina, Joe Lamantia and I will cover some of the key social, cultural, technical and interactional questions for exploring social augmented experiences. There will be five lightning presentations, an opportunity for questions from the audience, and a world premiere of the ARWave demo!</p>
<p><strong>1)  “Augmenting the map as interface: AR and Locative Narratives” -</strong> Jeremy Hight<strong><br />
</strong></p>
<p><strong>*Map augmentation of the historic Route 66 can house an essay contest and publication globally, but as embedded within that map augmentation instead of books or even web sites.</strong></p>
<p><strong>*  A place on a map can be a graphic index and database to save and  collect<br />
the writing of that place with a graphic or textual search  index.</strong></p>
<p><strong>*One can pop immersive visualizations of abandoned or lost buildings from a map location in shared software and collectively augment (imagine channels within the lost core of Detroit where one is memories and accounts tagged within parts of the immersive visualization while another is of poems and stories written by people moved by the place and its semiotics and story).</strong></p>
<p><strong>*The news stand is to be the map.</strong></p>
<p><strong>*New forms of literature will be born of mapping, spaces, augmentation and<br />
new tools</strong></p>
<p>The concept drawings below (click to enlarge) are a collaboration between Jeremy Hight and Paul Wehby, Senior Designer at <a href="http://www.lacma.org/" target="_blank">LA County Museum of Art.</a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby1post.jpg"><img class="alignnone size-thumbnail wp-image-5342" title="wehby1post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby1post-150x150.jpg" alt="wehby1post" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby2post.jpg"><img class="alignnone size-thumbnail wp-image-5343" title="wehby2post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby2post-150x150.jpg" alt="wehby2post" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby3post.jpg"><img class="alignnone size-thumbnail wp-image-5352" title="wehby3post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby3post-150x150.jpg" alt="wehby3post" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby4post.jpg"><img class="alignnone size-thumbnail  wp-image-5353" title="wehby4post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby4post-150x150.jpg" alt="wehby4post" width="150" height="150" /></a></p>
<p><strong>2) </strong>Anselm Hook will look at, <strong>&#8220;10 reasons why AR isn&#8217;t a flash in the pan,&#8221; </strong>and how,<strong> &#8220;AR can help us see the world we would like to have exist.&#8221;</strong></p>
<p>Anselm notes, <strong>&#8220;So much of what we do is so fickle and I&#8217;m looking for ways to connect digital media work to deep values.&#8221;</strong></p>
<p><strong>3)</strong> Sophia Parafina will present on, <strong>&#8220;Social AR and Crisis Response&#8221;</strong></p>
<p><strong>&#8220;Augmented reality as a multi-party conversation. Rather than being passive viewers of AR with a limited ability to check in to places and make annotations, current devices can broadcast sensor information that can be fused into an interactive stream. AR users can send and receive information, location, and sensor data from their mobile device. The streams can be federated into a unique AR view composed by the user.</strong></p>
<p><strong>Entertainment and gaming are obvious applications, but it can also be applied to crisis situations such as the search and rescue operations in Haiti. Efforts such as Mission 4636, the SMS translation service, could benefit from AR views. The collaboration among the Mission 4636 volunteers was the key element in their success in providing location and rapid translation to responders on the ground.</strong></p>
<p><strong>With an AR view, responders can send back their sensor information from their mobiles to provide contextual information to remote volunteers. This extends the conversation between remote volunteers and on-the-ground responders and fosters collaboration, which was a key element in the success of Mission 4636.&#8221;</strong></p>
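<p>Sophia&#8217;s idea of federating responder streams into one view can be sketched in a few lines of Python. To be clear, the team names, timestamps, and payload fields below are all invented for illustration; this is not Mission 4636 code, just the shape of a chronological merge of many sensor streams into a single feed:</p>

```python
import heapq

def fuse_streams(*streams):
    """Merge several time-ordered streams of (timestamp, source, payload)
    messages into one chronological feed, as a federated AR view might."""
    return list(heapq.merge(*streams, key=lambda msg: msg[0]))

# Hypothetical responder streams: (unix_time, source, payload)
team_a = [(100, "team-a", {"lat": 18.54, "lon": -72.34, "note": "road blocked"}),
          (130, "team-a", {"lat": 18.55, "lon": -72.33, "note": "clinic found"})]
team_b = [(110, "team-b", {"lat": 18.52, "lon": -72.30, "note": "SMS relay up"})]

fused = fuse_streams(team_a, team_b)
# Messages come out in timestamp order (100, 110, 130) regardless of source,
# so a remote volunteer sees one interleaved picture of events on the ground.
```

<p>Because each stream is already time-ordered, <code>heapq.merge</code> interleaves them lazily without re-sorting everything, which matters when streams are long-lived.</p>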
<p><strong>4)</strong> Joe Lamantia, an experience design and strategy consultant helping to define the interaction framework and scenarios behind ARWave, will discuss, <strong>&#8220;Design Principles For Social Augmented Experiences:&#8221;</strong></p>
<p><strong>&#8220;With the exotic mixed realities envisioned by futurists and science fiction writers seemingly around the corner, it is time to move beyond questions of technical feasibility to consider the value and impact of turning reality inside out for everyday social settings and experiences. Thanks to the inherently social nature of augmented reality, we can be sure the value and impact of many augmented experiences depends in large part on how effectively they integrate with the social dimensions of real-world settings, in real time.&#8221;</strong></p>
<p>Joe will share, <strong>&#8220;eight guiding  principles for designing experiences that engage naturally with the  social dimension, and increase the value of augmented experiences.&#8221; </strong></p>
<p><strong>5) <a id="y08e" title="AR Wave" href="http://groups.google.com/group/arwave">&#8220;ARWave</a> &#8211; A demo and state of play,&#8221; </strong>from Tish Shute</p>
<p>I will have the awesome privilege, on our Where 2.0 panel, of showcasing <a id="y08e" title="AR Wave" href="http://groups.google.com/group/arwave">ARWave</a>. We will premiere the ARWave demo, which shows how ARWave has accomplished the basics of geolocating data on the Wave Federation Protocol (and real time collaboration on this geolocated data). <span id="ejpu" dir="ltr">If you&#8217;re interested in the ARWave project, join the <a id="n4k6" title="Mailing list" href="http://groups.google.com/group/arwave">mailing list</a>, read the <a id="medt" title="here" href="http://lostagain.nl/websiteIndex/projects/Arn/information.html">FAQ</a>, and have a peek at the current state of development at <a id="ius-" title="Google Code" href="http://code.google.com/p/arwave/">Google Code</a> and the <a id="dj:p" title="specification for an AR Blip" href="http://arwave.wiki.zoho.com/ARBlip-Specification.html">specification for an AR Blip</a>. We also have Waves for the project hosted on Google Wave. You can join the general discussion <a id="xiwt" title="here" href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252BJAcNzz16A">here</a>, and the technical side <a id="s393" title="here" href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252Bhvk2Fj3wB">here</a>.</span></p>
<p>The picture below is a  screen shot from the demo video produced by core AR Wave developer and  concept designer, Thomas Wrobel.</p>
<p>Click on the image to enlarge, and note: <strong>&#8220;The pink thing is from Dennou Coil. It&#8217;s an anti-virus program (that literally chases down bugs and glitches and removes them).&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-27-at-6.58.55-PM.png"><img class="alignnone size-medium wp-image-5344" title="Screen shot 2010-03-27 at 6.58.55 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-27-at-6.58.55-PM-281x300.png" alt="Screen shot 2010-03-27 at 6.58.55 PM" width="281" height="300" /></a></p>
<h3>ARWave</h3>
<p>In ARWave, stories or art are tied to place. And as Jeremy Hight  writes:</p>
<p><strong>&#8220;The possibility exists to take a part of an area and overlay a dystopia, a utopia, multiples of each of these, or even recreations of previous incarnations in the past. Writing and publication thus can be not only of place and form(s), but of selected augmentations of icons, streets, buildings and related texts on top of the map. These spaces can be built in real time and can be turned on and off as channels of augmentation that over time illustrate many faces of a place in its present, past, possible futures, etc., with texts within these alternate spaces as commentary, as fused aesthetic analysis, or simply creative writing relevant to these charged and hybrid spaces.&#8221;</strong></p>
<p>As Thomas notes, Jeremy Hight&#8217;s <strong>&#8220;idea of channels ties into the concept of waves = a layer, and people can have many layers on at once.&#8221;</strong></p>
<p>This is different from the <a href="http://layar.com/" target="_blank">Layar</a> concept of a layer, or rather a &#8220;layar.&#8221;</p>
<p><strong>&#8220;We  are not talking about layers in the classical map layer way of  thinking, where you have a layer of all restaurants or a layer of all  mountain peaks, etc.,&#8221; </strong>notes ARWave developer Markus Strickler.</p>
<p>Currently all geolocation apps like Layar have to use their own servers, so users have to use different clients with different logins to see data from different sources. But because ARWave uses federation, we don&#8217;t depend on centralized infrastructure where the client of one company can only connect to the server of that company. This opens up many exciting new possibilities for how people can decide to view and publish geolocated data.</p>
<p>With ARWave, via one login, people can access the whole distributed network of servers (see diagrams below), and any content will be accessible to them. ARWave will make it easy for individuals, not just developers, to layer their environment &#8211; allowing the creation of augmented reality content to be as simple as contributing to a Wave.</p>
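<p>As a rough illustration of what federation buys you, here is a toy Python lookup in which a single query reaches every server in the network. The server names and content are made up, and a real ARWave client would of course speak the Wave Federation Protocol rather than read a dict; this only sketches the one-login-reaches-everything idea:</p>

```python
# Toy model of a federated lookup: the user's home server relays a query to
# every peer it federates with, so one identity reaches all content.
# Server names and items below are invented for illustration.
FEDERATION = {
    "wave.example.org": [{"lat": 40.70, "lon": -74.00, "text": "poem about the pier"}],
    "ar.example.net":   [{"lat": 40.71, "lon": -74.01, "text": "1920s photo overlay"}],
}

def query_federation(lat, lon, radius=0.1):
    """Collect geolocated items near (lat, lon) from all federated servers."""
    results = []
    for server, items in FEDERATION.items():
        for item in items:
            if abs(item["lat"] - lat) <= radius and abs(item["lon"] - lon) <= radius:
                results.append({**item, "server": server})
    return results

nearby = query_federation(40.70, -74.00)
# Content from both servers lands in one view, under a single login.
```

<p>Contrast this with the per-provider model above, where the same user would need one client and one account per company to see the same two items.</p>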
<p><strong>&#8220;ARWave will enable individuals to publish easily to everyone&#8230; or just a few people,&#8221;</strong> Thomas notes:</p>
<p><strong>&#8220;To &#8216;publish&#8217; is also self-publication and distribution in communities or like-minded groups without the hard road of publication or rejection&#8221; = publishing on a Wave. No one approves it; anyone can publish to communities, or to their friends and family, or even just publish personally for their own reference.&#8221;</strong></p>
<p>But ARWave does not compete with existing AR browsers. On the contrary, AR browsers like Layar, Wikitude and others could implement ARWave and use it to enhance their applications.</p>
<p><strong>&#8220;<a href="http://layar.com/" target="_blank">Layar</a></strong><strong> has a killer browser already; ARWave would add social features. They can keep their &#8220;walled garden&#8221; of data and still join the federation of open data too <img src="../wp-includes/images/smilies/icon_smile.gif" alt=":)" />&#8221; (Thomas Wrobel)</strong></p>
<p>Yup, that is the cool part of federation &#8211; you can have your cake and eat it too!</p>
<p>Sophia Parafina and I will be organizing a discussion session on ARWave and federation at <a href="http://upcoming.yahoo.com/event/4909659/CA/Mountain-View/WhereCamp-SF/Google-Maxwell-Tech-Talk/CA/Mountain-View/WhereCamp-SF-2010/Google-Maxwell-Tech-Talk/" target="_blank">WhereCamp</a>, right after Where 2.0, April 3rd and 4th, and <a href="http://twitter.com/dlpeters" target="_blank">Dan Peterson</a>, who is leading the federation effort for Google Wave, will join us.</p>
<p>The  diagrams below illustrate how ARWave and federation can revolutionize  the way we share our augmented realities.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-27-at-6.06.33-PM.png"><img class="alignnone size-medium wp-image-5347" title="Screen shot 2010-03-27 at 6.06.33 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-27-at-6.06.33-PM-300x218.png" alt="Screen shot 2010-03-27 at 6.06.33 PM" width="300" height="218" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-27-at-6.06.00-PM.png"><img class="alignnone size-medium wp-image-5345" title="Screen shot 2010-03-27 at 6.06.00 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-27-at-6.06.00-PM-300x214.png" alt="Screen shot 2010-03-27 at 6.06.00 PM" width="300" height="214" /></a></p>
<h3><strong>Real Time Social Augmented Experiences</strong></h3>
<p>Another key aspect of ARWave is its near-real-time update capability. As Jeff Pulver pointed out in <a href="http://pulverblog.pulver.com/archives/009156.html" target="_blank"><strong>&#8220;SXSW 2010: The days twitter became less relevant&#8221;:</strong></a></p>
<p><a href="http://pulverblog.pulver.com/archives/009156.html" target="_blank"><strong> </strong></a><strong>â€œAt  <a href="http://click.bsftransmit1.com/ClickThru.aspx?pubids=6954%7c149%7c09546&amp;digest=j9iIm6%2b67%2fKjaKaD%2bG459g" target="_blank">South By Southwest</a> 2010 (SXSW), a strange thing  happened on the way to Austin. A community of twitter faithful shifted  from sharing everything about everything on only twitter (and maybe  Facebook) and changed their habits to rely on learning about what was  happening and where things were happening by using <a href="http://click.bsftransmit1.com/ClickThru.aspx?pubids=6954%7c140%7c09546&amp;digest=vh5VR%2fg1W2H2FHKwRIGl8g" target="_blank">foursquare</a> and <a href="http://click.bsftransmit1.com/ClickThru.aspx?pubids=6954%7c141%7c09546&amp;digest=SyK27R5EP7LzBWYvodNDpQ" target="_blank">Gowalla</a> instead. Iâ€™m sure there were other products  and platforms being used including <a href="http://click.bsftransmit1.com/ClickThru.aspx?pubids=6954%7c142%7c09546&amp;digest=Nd55%2flEGjFr3lopcn8%2fqiA" target="_blank">Loopt</a> and <a href="http://click.bsftransmit1.com/ClickThru.aspx?pubids=6954%7c143%7c09546&amp;digest=rJYwQX8VJw9Bww36xQ1Lbg" target="_blank">GySPii</a> but foursquare and Gowalla were the dominant  platforms.â€<br />
</strong></p>
<p>Later Jeff wrote:</p>
<p><strong>&#8220;There were times where I could feel the ebbs and the flows of the people move as different people checked into various locations. While most of this was felt locally in the place I was in, it also became apparent on the platforms when hundreds of people would rush to check in to a location. There were also times when it felt like I was chasing ghosts; these were the times I would go to a spot because a friend had checked into that spot, only to discover they were no longer there.&#8221;</strong></p>
<p>ARWave&#8217;s realtime collaborative capabilities are going to introduce some fascinating dynamics to &#8220;chasing ghosts,&#8221; as the ARWave framework gets integrated into services like foursquare &#8211; a project we have already begun to look at.</p>
<h3><strong>Augmented Reality  Search</strong></h3>
<p>As I mention <a href="../../2010/03/18/visual-search-augmented-reality-and-physical-hyperlinks-for-playfulness-not-just-purchases-talking-with-paige-saez-about-imagewiki/" target="_blank">in my previous post</a>, ARWave presents some fascinating possibilities for AR search. For example, one might do advanced searching within waves using SPARQL, which could then display in the form of a personal blip in your viewpoint (which in turn could be shared with others). Linked data will be massively important in filtering and delivering useful info for augmented views (<a href="../../2010/03/03/the-game-is-about-the-world-not-dragons-talking-with-will-wright/" target="_blank">see my conversation with Will Wright</a> about the risk of augmented reality overriding our very smart instincts and ending up useless or worse as a result).</p>
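<p>To give a flavor of what such a search could look like, here is a stdlib-only Python sketch of pattern matching over blip metadata, standing in for a real RDF store and SPARQL engine. The blip identifiers, predicates, and tags are all invented; a production setup would use an actual triple store and the ARBlip vocabulary:</p>

```python
# A toy triple store standing in for RDF metadata attached to AR blips.
# Every fact is a (subject, predicate, object) triple, as in RDF.
TRIPLES = {
    ("blip:1", "type", "ARBlip"), ("blip:1", "tag", "detroit-memories"),
    ("blip:2", "type", "ARBlip"), ("blip:2", "tag", "harbor-poems"),
}

def select(predicate, obj):
    """Return the subjects matching one pattern -- roughly what
    SELECT ?b WHERE { ?b <predicate> <obj> } does in SPARQL."""
    return {s for (s, p, o) in TRIPLES if p == predicate and o == obj}

memories = select("tag", "detroit-memories")
# memories == {"blip:1"}: the matching blip could then be rendered as a
# personal blip in the viewer's augmented viewpoint, and shared onward.
```

<p>The point of using triples rather than an ad hoc schema is exactly the linked-data one made above: queries compose across data published by different people, because everyone shares the same subject&#8211;predicate&#8211;object shape.</p>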
<p>Anselm Hook, who I interviewed in depth recently in <a title="Permanent Link to Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook" rel="bookmark" href="http://docs.google.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/">Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook</a>, has some very interesting thoughts on real time stuff, trading brokerages, and the view within a single city block, which he elaborated on in the second half of this interview, coming up on UgoTrade soon!</p>
<h3><strong>The  ARWave Developers</strong></h3>
<p>There are three people who unfortunately can&#8217;t join us at Where 2.0 &#8211; the costs of travelling from Europe being an obstacle. But as they have been developing the code for ARWave that will rock our augmented world, I asked them, in a Wave conversation, to give me a few comments about their interest in working on ARWave, a pic, and a short bio. Also I should mention the work of the PyGoWave team, whose incredibly fast work creating <a id="stt3" title="PyGoWave" href="http://pygowave.net/">PyGoWave</a> has given ARWave a rocket launch pad. Many thanks as well to the Wave community; see the <a id="vma_" title="Wave Federation Protocol documentation" href="http://www.waveprotocol.org/">Wave Federation Protocol documentation</a>, <a id="exsg" title="Google's Wave Server" href="https://wave.google.com/wave">Google&#8217;s Wave Server</a>, and <a id="b:s7" title="RubyOnSails" href="http://wiki.github.com/danopia/ruby-on-sails/">RubyOnSails</a> (a Ruby on Rails based Wave server).</p>
<p><a href="http://need2revolt.wordpress.com/" target="_blank"><strong>Davide   Carnovale</strong></a> @need2revolt</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/davide.jpg"><img class="alignnone size-thumbnail wp-image-5349" title="davide" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/davide-150x150.jpg" alt="davide" width="150" height="150" /></a></p>
<p><strong>&#8220;Imho, the coolest geolocation-related thing is that we&#8217;re making a world where the info does not necessarily come from an explicit search by the user, but also comes from the actual location you&#8217;re in. For instance, you can have special offers in stores like foursquare does, or your friends can leave geolocated notes for you that are triggered when you walk by. We can have games based on the treasure hunt schema, requiring you to actually go to specific locations.</strong></p>
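<p>Davide&#8217;s walk-by notes reduce to a simple proximity test. Here is a minimal Python sketch; the coordinates and note text are invented, and the flat-earth distance used here is only reasonable at city-block scale:</p>

```python
from math import hypot

# Toy proximity trigger for geolocated notes left by friends.
NOTES = [
    {"lat": 45.4640, "lon": 9.1900, "text": "meet at the duomo"},
    {"lat": 45.4780, "lon": 9.2300, "text": "treasure clue #2"},
]

def notes_in_range(lat, lon, radius_deg=0.005):
    """Return the notes that should fire as the user walks near them,
    using a flat-earth distance that is fine at city-block scale."""
    return [n for n in NOTES if hypot(n["lat"] - lat, n["lon"] - lon) <= radius_deg]

triggered = notes_in_range(45.4642, 9.1898)
# Only the nearby "duomo" note fires; the distant treasure clue stays silent.
```

<p>A treasure-hunt game is the same loop run continuously against the player&#8217;s live position, revealing each clue only once the previous location has been visited.</p>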
<p><strong>Other than this I  can think about self-guided tours of the city, maybe user generated  too, or for museums.<br />
</strong></p>
<p><strong>Naturally these are long term goals with some real-life use cases.</strong></p>
<p><strong>As for my bio, there isn&#8217;t much to say&#8230; I got a first level degree in computer science and I&#8217;m taking the second (and last) level. I&#8217;ve developed with mobile agents, osgart/artoolkit, brain computer interfaces, the linux kernel, and that&#8217;s pretty much all&#8230;&#8221;</strong></p>
<p><strong><a href="http://www.lostagain.nl/" target="_blank">Thomas Wrobel</a></strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-28-at-4.35.59-AM.png"><img class="alignnone size-thumbnail wp-image-5354" title="Screen shot 2010-03-28 at 4.35.59 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-28-at-4.35.59-AM-150x150.png" alt="Screen shot 2010-03-28 at 4.35.59 AM" width="150" height="150" /></a></p>
<p><strong>&#8220;If you are looking for specific advantages of using Wave I&#8217;d say:<br />
</strong><strong> </strong></p>
<p><strong>* Federated &#8211; letting creators tap into a bigger userbase. Each new app or data layer will add to the &#8220;incentive&#8221; for users to join in. Google had some good stats a few months back as to how much a simple login screen can put people off using stuff. By breaking that barrier it should make AR userbases grow.</strong></p>
<p><strong>* It deals with user accounts,  permissions, and real-time updating without creators needing to make a  new server standard themselves. It lowers barriers to development.</strong></p>
<p><strong>* As the clients, servers, and data can be made separately by different parties, it&#8217;s easier for developers to concentrate on just providing what they want. You want to just make content? No problem! You don&#8217;t need to worry about doing anything else but that. It would become as easy as making a webpage (or easier!).</strong></p>
<p><strong>* Bots will allow the development of interactive AR games very easily. Just like modern versions of IRC bots, the infrastructure does the heavy lifting, and interesting things can be done with just simple scripting.</strong></p>
<p><strong>* The idea is anyone will be able to make a layer onto the world, and people can mix, match and share their layers as they wish. It&#8217;s not just the data that becomes interesting to see augmenting our world, but the combinations of data! For example, perhaps you could see the profits generated by different companies above their buildings, but also see how environmentally friendly they are at the same time. Or maybe see pollution levels against health statistics. Seeing combinations of geolocated data from different sources at the same time has many interesting possibilities for scientific as well as casual (game/map/chat) use.</strong></p>
<p><strong>hmz.. I could go on forever listing stuff here really&#8230;</strong></p>
<p><strong>I guess if we are supposed  to be forming a roadmap of significant/interesting things for ARWave?</strong></p>
<p><strong>*  Example clients letting people make their own layers (waves) and add  points to them.</strong></p>
<p><strong>* Letting people log in to different  servers</strong></p>
<p><strong>* Servers federated together (not our responsibility, but an essential part of the roadmap).</strong></p>
<p><strong>* Anyone logged into any server can see data from anyone else that&#8217;s shared with them, regardless of where they are logged in.</strong></p>
<p><strong>* 3D support, demonstrating various sorts of geolocated data?</strong></p>
<p><strong>*  Use of bots for example games?<br />
----<br />
My bio&#8217;s quite simple.<br />
Studied 3D Animation in Portsmouth, UK.<br />
Moved to the Netherlands; have since been working on creating ARG games, and in the last year founded Lostagain (Lostagain.nl).&#8221;</strong></p>
<p><strong><a id="ikdu" title="Markus Strickler" href="http://twitter.com/kusako">Markus  Strickler @kusako</a></strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/markus.jpg"><img class="alignnone size-thumbnail wp-image-5350" title="markus" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/markus-150x150.jpg" alt="markus" width="150" height="150" /></a></p>
<p><strong>&#8220;I think the main point behind ARWave is to go beyond simply displaying existing placemarks on top of a live camera view, towards a highly personalized, augmented world where everybody can edit and share localized information collaboratively and in real time. Wave provides the means to do this through its model of persistent real time conversations and adds even more by providing a way for personal agents (robots) to participate in these conversations.</strong></p>
<p><strong>As for my bio: I&#8217;ve been developing Web applications for the last 15 years, hold a degree in Image Sciences and am currently working as a Java developer in Cologne, Germany.&#8221;</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/03/29/the-next-wave-of-ar-exploring-social-augmented-experiences-at-where-2-0/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Visual Search, Augmented Reality, and Physical Hyperlinks for Playfulness, Not just Purchases: Talking with Paige Saez about ImageWiki</title>
		<link>http://www.ugotrade.com/2010/03/18/visual-search-augmented-reality-and-physical-hyperlinks-for-playfulness-not-just-purchases-talking-with-paige-saez-about-imagewiki/</link>
		<comments>http://www.ugotrade.com/2010/03/18/visual-search-augmented-reality-and-physical-hyperlinks-for-playfulness-not-just-purchases-talking-with-paige-saez-about-imagewiki/#comments</comments>
		<pubDate>Fri, 19 Mar 2010 03:25:17 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARNY]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[Augmented reality Magician]]></category>
		<category><![CDATA[Augmented Reality Meetup]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Chris Grayson]]></category>
		<category><![CDATA[distributed augmented reality]]></category>
		<category><![CDATA[Gamepocalypse]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[imagewiki]]></category>
		<category><![CDATA[Imagwik]]></category>
		<category><![CDATA[interaction design]]></category>
		<category><![CDATA[Jason Kolb]]></category>
		<category><![CDATA[Jesse Schell]]></category>
		<category><![CDATA[linked data]]></category>
		<category><![CDATA[linked data and augmented reality]]></category>
		<category><![CDATA[Makerlab]]></category>
		<category><![CDATA[Marco Tempest]]></category>
		<category><![CDATA[open augmented reality]]></category>
		<category><![CDATA[open Frameworks]]></category>
		<category><![CDATA[open Frameworks and augmented reality]]></category>
		<category><![CDATA[OpenCV]]></category>
		<category><![CDATA[OpenCV and augmented reality]]></category>
		<category><![CDATA[optical character recognition]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[paige saez]]></category>
		<category><![CDATA[physical hyperlinking]]></category>
		<category><![CDATA[physical world platform]]></category>
		<category><![CDATA[point and find]]></category>
		<category><![CDATA[RDF and Augmented Reality Search]]></category>
		<category><![CDATA[semantic web and augmented reality]]></category>
		<category><![CDATA[snaptell]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[social augmented reality]]></category>
		<category><![CDATA[social commons]]></category>
		<category><![CDATA[Social Commons for Augmented Reality]]></category>
		<category><![CDATA[SPARQL]]></category>
		<category><![CDATA[SPARQL and ARWAVE]]></category>
		<category><![CDATA[SPARQL and Wave]]></category>
		<category><![CDATA[SPARQL and XMPP]]></category>
		<category><![CDATA[Steven Feiner]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[ubiquity]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[Where2.0]]></category>
		<category><![CDATA[Will Wright]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5262</guid>
		<description><![CDATA[The video above, The Imawik commercial, is a collaboration between In The Can Productions and Paige Saez for Makerlab &#8220;The Imawik (ImageWiki) is a visual search tool for mobile devices. It allows for the ability to turn images into physical hyperlinks, conflating visual culture with a community-editable universal namespace for images.&#8221; Paige Saez is an [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="225" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=2818525&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /><embed type="application/x-shockwave-flash" width="400" height="225" src="http://vimeo.com/moogaloop.swf?clip_id=2818525&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p><em>The video above, <a href="http://www.vimeo.com/2818525" target="_blank">The Imawik commercial</a>, is a collaboration between <a href="http://www.inthecanllc.com/" target="_blank">In The Can Productions</a> and <a href="http://makerlab.com/who.html" target="_blank">Paige Saez</a> for <a href="makerlab.com/projects_show_imagewiki.html" target="_blank">Makerlab</a></em></p>
<p>&#8220;The Imawik (<a href="http://imagewiki.org/" target="_blank">ImageWiki</a>) is a visual search tool for mobile devices. It allows for the  ability to turn images into physical hyperlinks, conflating visual  culture with a community-editable universal namespace for images.&#8221;</p>
<p>Paige Saez is an artist, designer and researcher. In 2007 she founded <a href="makerlab.com/projects_show_imagewiki.html" target="_blank">Makerlab</a>, an arts and technology incubator focused on civic and environmental projects, with <a href="http://www.hook.org/" target="_blank">Anselm Hook</a>.</p>
<p>Paige and Anselm (see my interview with Anselm Hook here, <a title="Permanent Link to Visual Search,  Augmented Reality and a Social Commons for the Physical World Platform:  Interview with Anselm Hook" rel="bookmark" href="../../2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/">Visual Search, Augmented Reality and a Social Commons  for the Physical World Platform: Interview with Anselm Hook</a>) have been asking a very important question:<strong></strong></p>
<p><strong>&#8220;Who Will Own Our Augmented Future?&#8221;</strong></p>
<p>But most importantly, they have been actually developing applications (again, <a href="http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/" target="_blank">see my interview with Anselm</a> for more background on this) to allow people to play with, hack, explore, and create with the physical world platform, and to imagine new possibilities for physical hyperlinking and augmented realities. This is pretty important stuff, and kudos to Paige and Anselm for beginning this work before the big players &#8211; <a href="http://www.google.com/mobile/goggles/#dc=gh0gg" target="_blank">Google Goggles</a>, <a href="http://pointandfind.nokia.com/" target="_blank">Point and Find</a>, and <a href="http://www.snaptell.com/" target="_blank">SnapTell</a> &#8211; came hurtling into the field of visual search and physical hyperlinking; <a href="http://techblips.dailyradar.com/video/translation-in-google-goggles-prototype/" target="_blank">see this demonstration of translation and optical character recognition</a> in Google Goggles. Also check out Jamey Graham&#8217;s (Ricoh Research) Ignite presentation at Tools of Change, 2010 &#8211; <a href="http://www.toccon.com/toc2010/public/schedule/detail/13370" target="_blank">Visual Search: Connecting Newspapers, Magazines and Books to Digital Information without Barcodes</a>; for more see <a href="http://ricohinnovations.com/betalabs/visualsearch">ricohinnovations.com/betalabs/visualsearch</a>.</p>
<p>We are only just beginning to get a glimpse of how contested the social commons of the physical world platform is going to be &#8211; see the Yelp <a href="http://blogs.wsj.com/digits/2010/03/17/small-businesses-join-lawsuit-against-yelp/" target="_blank">controversy</a>.</p>
<p>As Paige points out:</p>
<p><strong>&#8220;The lens that you are actually looking through was as important as what you were looking at. And democratizing that lens became the most important thing that we could possibly do.&#8221;</strong></p>
<p>I am in total agreement. One reason I have so much enthusiasm for <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">ARWave</a> (note: if you are interested in following the developer conversations, there are several public Waves) is that I see this open framework playing an important role in the democratization of our augmented views, by creating an open, distributed, and universally accessible platform for augmented reality that will allow the creation of augmented reality content and games to be as simple as making an HTML page, or contributing to a wiki.</p>
<p>Federation, real time collaboration, <a href="http://linkeddata.org/" target="_blank">linked data</a> &#8211; ARBlips that contain metadata usable for semantic searches &#8211; and modified wave servers that can listen and respond properly to <a href="http://www.w3.org/TR/rdf-sparql-query/" target="_blank">SPARQL</a> HTTP requests (see Jason Kolb&#8217;s <a href="http://jasonkolb.com/" target="_blank">many interesting posts</a> on XMPP and Wave): these are just some of the reasons why ARWave could revolutionize augmented reality searches and more! (See <a href="http://www.mobilemonday.nl/talks/tish-shute-the-next-wave-of-ar/" target="_blank">my presentation at MoMo13</a> &#8211; video <a href="http://www.youtube.com/watch?v=Y7iqg8X24mU" target="_blank">here</a>.)</p>
<p>For more on real time social augmented experiences see our panel, <a href="http://en.oreilly.com/where2010/public/schedule/detail/11046" target="_blank">The Next Wave of AR: Exploring Social Augmented Experiences</a> at <a href="http://en.oreilly.com/where2010" target="_blank">Where2.0 2010</a>, and don&#8217;t miss the <a href="http://en.oreilly.com/where2010" target="_blank">Where2.0</a> conference which has been the crucible for the emergence of location technologies.</p>
<p>Augmented realities, proximity-based social networks, mapping &amp; location aware technologies, sensors everywhere, <a href="http://linkeddata.org/" target="_blank">linked data</a>, and human psychology are on a collision course in what <a href="http://www.schellgames.com/" target="_blank">Jesse Schell</a> calls the &#8220;Gamepocalypse.&#8221; See <a href="http://g4tv.com/videos/44277/dice-2010-design-outside-the-box-presentation/" target="_blank">Jesse Schell&#8217;s Dice 2010 talk here,</a> and check out his <a href="http://www.gamepocalypsenow.blogspot.com/" target="_blank">Gamepocalypse Now</a> blog. As Bruce Sterling notes in <a href="http://www.wired.com/beyond_the_beyond/2010/02/jesse-schell-future-of-games-from-dice-2010/" target="_blank">his post here</a>:</p>
<p><strong>*Another precious half hour out of your life. However: if you&#8217;re into interaction design, ubiquity, social networking, and trendspotting, in the gaming biz or out of it, you&#8217;re gonna wanna do yourself a favor and listen to this.</strong></p>
<p>And don&#8217;t forget to <a href="http://augmentedrealityevent.com/register/" target="_blank">register now</a> for the <a href="http://augmentedrealityevent.com/" target="_blank">Augmented Reality Event (ARE2010, 2-3 June, 2010 &#8211; Santa Clara, CA)</a>.</p>
<p><a href="http://www.wired.com/beyond_the_beyond/" target="_blank">Bruce Sterling</a>, <a href="http://www.stupidfunclub.com/" target="_blank">Will Wright</a>, and Jesse Schell <a href="http://augmentedrealityevent.com/speakers/" target="_blank">will be keynoting, and there is a totally awesome line up of AR innovators and industry leaders</a>, including Paige and Anselm!</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/bruce_sterling.jpg"><img class="alignnone size-thumbnail wp-image-5289" title="bruce_sterling" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/bruce_sterling-150x150.jpg" alt="bruce_sterling" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/will_wright.jpg"><img class="alignnone size-thumbnail wp-image-5290" title="will_wright" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/will_wright-150x150.jpg" alt="will_wright" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Jesseschellpost.jpg"><img class="alignnone size-thumbnail wp-image-5291" title="Jesseschellpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Jesseschellpost-150x150.jpg" alt="Jesseschellpost" width="150" height="150" /></a></p>
<h3>And:</h3>
<p>You are in luck!</p>
<p>Here is a discount code for the first 100 folks to register for the event (before the end of March). Go to the <a href="https://register03.exgenex.com/GcmRegister/Index.Aspx?C=70000088&amp;M=50000500" target="_blank">registration page</a>, type in code AR245 and &#8220;you&#8217;ll be asked to pay only $245 for 2 full days of AR goodness.&#8221;</p>
<p>&#8220;Watching AR prophet Bruce Sterling, gaming legend Will Wright, and visionary game designer Jesse Schell deliver keynotes for this price &#8211; is a magnificent steal. And on top, participating in more than 30 talks by AR industry leaders will turn these $245 into your best investment of the year,&#8221; as Ori put it so well on Games Alfresco!</p>
<p>If you want a preview of just how exciting it is to be involved in augmented reality right now, check out <a href="http://gamesalfresco.com/2010/03/17/magic-games-education-and-live-coding-at-the-augmented-reality-meetup-in-nyc/" target="_blank">Ori Inbar&#8217;s great round up</a> of our latest monthly <a href="http://www.meetup.com/ARNY-Augmented-Reality-New-York/" target="_blank">Augmented Reality Meetup NY</a> (or, as Ori notes, we fondly like to call it <a href="http://www.meetup.com/ARNY-Augmented-Reality-New-York/" target="_blank">ARNY</a>). There is lots of video up now (much thanks to <a href="http://www.chrisgrayson.com/" target="_blank">Chris Grayson</a>, who <a href="http://armeetup.org/001_arny/video/index.html" target="_blank">live streamed it</a>). <a href="http://www.marcotempest.com/" target="_blank">Augmented Reality Magician Marco Tempest</a> is an absolute <strong>must</strong> see (developers, note this is an awesome use of <a href="http://www.openframeworks.cc/" target="_blank">openFrameworks</a> and <a href="http://opencv.willowgarage.com/wiki/">OpenCV</a>). The video of the show includes a rare explanation of how it all works &#8211; see <a href="http://www.youtube.com/watch?v=6TluCaxz7KM&amp;feature=player_embedded" target="_blank">here</a>.</p>
<h3>Talking with Paige Saez &#8211; &#8220;Software is candy now!&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/paige_headshot_sq135.jpg"><img class="alignnone size-full wp-image-5266" title="paige_headshot_sq135" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/paige_headshot_sq135.jpg" alt="paige_headshot_sq135" width="135" height="135" /></a></p>
<p><strong>Tish  Shute:</strong> What interests me about ImageWiki is that you have thought  about physical hyperlinking beyond the obvious of where to get your  next good hamburger and beer, right?</p>
<p><strong>Paige Saez:</strong> Right. It was interesting for me in just thinking about the two things. How do you design a tool to work in a way that people are getting value from it? And also, how do you make it work in a way where people can explore and hack it? I think the most interesting technologies, and this is probably something somebody else said sometime, are the ones that disappear, that we don&#8217;t see, instead we see <em>through</em>. They become just the intermediaries. They don&#8217;t interfere with what we are trying to do.</p>
<p>It&#8217;s a struggle whenever you are developing a new way for people to get information or make something happen, because you are playing with magic a little bit. And you have to make it vanish the way a good magic trick makes an experience a magical one. But at the same time you also need to reveal just enough that you let people in and they can see how to change it and make it their own. That is the interesting tension for this space right now: the idea of augmented reality leads to the idea of a social commons for physical things. The ImageWiki project was a locus of just this tension. Tish, you and I have previously discussed how difficult it was to even get people to understand the two concepts independently.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_515dwxtjnds_b.png"><img class="alignnone size-full wp-image-5269" title="dhj5mk2g_515dwxtjnds_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_515dwxtjnds_b.png" alt="dhj5mk2g_515dwxtjnds_b" width="642" height="163" /></a></p>
<p><strong>Tish Shute:</strong> Right, until  recently most people hadn&#8217;t even heard the term augmented reality and I  am not sure that a particularly high percentage of people would  recognize it now despite the recent interest in smart phone apps.</p>
<p><strong>Paige Saez:</strong> It&#8217;s very  difficult to get people to understand the two concepts, and now you are  adding in the third level of participation as well. So I don&#8217;t think it  is impossible, but I do think it requires narrative. It is interesting  that you were talking about the stories you heard this morning from the  creatives at the event [Tish mentioned David Curcurito, Creative  Director, Esquire gave an excellent presentation at Sobel Media event  NYC] because it&#8217;s narrative and the attention to telling a story that  help you walk through all of the ways you can understand how completely  expansive this area is right now.</p>
<p>So I think we have to play with it, play with the space and the  tools. I think we need to have an idea of what we want people to use  the tool for, and we need to not only introduce them to the tool and the  technology, but also introduce them to the concepts as well. So I see  it as a three part process.</p>
<p>I&#8217;m really excited to be there with people,  helping them do that. I think we need to do this face to face. I don&#8217;t  think this can be only through a social network. The ImageWiki website  is like one quarter of the entire picture, you know? The website is the  resource center and the place where you can see people adding images,  but what value is it to you to see an added image? It is more valuable  for you to be interacting with the image or interacting with the object  in the real world.</p>
<p>Designing for the experience of using ImageWiki got very complicated very fast. I was trying to figure out the main thrust of the design for the UI for ImageWiki and at a certain point I had to take a step back and say, &#8220;Okay, this has to be good enough for now, because we can lay it out and prototype as long as we want on the Web or mobile UI. What we need to be doing is going outside and actually aggregating and putting images into the database in order to see what exactly happens when we are adding.&#8221; It&#8217;s not just like you are taking a picture of something and adding it to Flickr. Using the tool is very context specific and the information is context specific, and you can&#8217;t necessarily make that all happen at the exact same time. I think these are really fascinating spaces to be struggling in and I&#8217;m so glad to be working in this space.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki_2.jpg"><img class="alignnone size-medium wp-image-5300" title="imagewiki_2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki_2-300x225.jpg" alt="imagewiki_2" width="300" height="225" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki1.jpg"><img class="alignnone size-medium  wp-image-5299" title="imagewiki" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki1-300x225.jpg" alt="imagewiki" width="300" height="225" /></a></p>
<p><em>Images by Chris Blow of <a href="http://unthinkingly.com/" target="_blank">unthinkingly.com</a></em></p>
<p><strong>Tish Shute:</strong> Could you explain why we need ImageWiki? I mean I think I have ideas on this, but perhaps you can explain to me from your point of view why we need an ImageWiki, as opposed to, say, extending the image space of Wikimedia or something added on to Flickr. I mean maybe something leveraging the geotagged photo sets and APIs we already have?</p>
<p><strong>Paige Saez:</strong> Yes, definitely. It&#8217;s a really good question, I mean it really is. Like, do you need an entirely new place to be holding images outside of the places where we are already holding images? That&#8217;s a huge question; enormous. Especially when you take a look at the problems around that. It&#8217;s exhausting for an end user. Who the heck wants to go and reload everything into <em>yet another place</em>, right?</p>
<p><strong>Tish Shute:</strong> Right.</p>
<p><strong>Paige Saez:</strong> Moreover, who is going to really bother? Another problem would be what happens to the existing datasets that people have already committed to? And then of course there is the problem of authority and explanations why&#8230; gaining interest and authority in a space when nobody even understands why that space should exist in the first place. And those are just three off-the-top-of-my-head problems with that idea.</p>
<p>And yet at the same time, I don&#8217;t actually know how else to go about thinking about ImageWiki unless I think about it as its own thing. Then you start thinking about models of large independent image databases that exist already, examples of this from a product standpoint &#8211; references to consider. The Getty Foundation comes to mind. There are many other historical centers that have huge resources and images that are licensed out and used. So here we have a working example of people already doing this. But successfully? I don&#8217;t know. We do have a ton of intellectual property rights and copyright issues and ownership and use issues with images currently. As a working artist, these issues were a major red flag for me to consider. Working on the social commons for augmented reality starts paralleling issues found in digital rights management and intellectual property.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_518gpgpr7gd_b.png"><img class="alignnone size-full wp-image-5274" title="dhj5mk2g_518gpgpr7gd_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_518gpgpr7gd_b.png" alt="dhj5mk2g_518gpgpr7gd_b" width="441" height="606" /></a></p>
<p><strong>Tish Shute:</strong> But one good thing about Wikimedia, why I focused on Wikimedia, is Flickr and Wikimedia already use a creative commons licensing, right?</p>
<p><strong>Paige Saez:</strong> Creative commons, you know they have their own resource center, too. But you know they haven&#8217;t been successful as great databases for images so far.</p>
<p><strong>Tish Shute:</strong> What would you like to see that they don&#8217;t have? Like say maybe start with Wikimedia, right?</p>
<p><strong>Paige Saez:</strong> There&#8217;s just still a lot of issues with how to encourage people to want to contribute. It&#8217;s hard to show the value to someone who doesn&#8217;t already understand the value for some reason. At least for me personally this is something I have run into frequently. I don&#8217;t know if it is necessarily what Wikimedia doesn&#8217;t have, I think it is a lack of understanding of what creative commons really means. And there is still a very strong sense of ownership and concern about creative property rights. Being paid to be creative is a tremendously difficult thing to do. People fear losing their livelihoods. They think this is possible. Is it? I dunno.</p>
<p>For example: look at me, I take a photograph of something, I can sell that. And there&#8217;s a question about whether or not, as an artist, I want to have my photographs in a pool of images that is open and accessible when I could be making money on it instead. Now that is just an example. Me personally, I can see the value. But that is a common concern. The gist of the question being, &#8216;what value does it bring to give something away versus holding on to it?&#8217; A hugely popular discussion right now.</p>
<p>This is the same crux of the problem we are dealing with when we talk about thinking about images in the social commons for the real world. It&#8217;s a conversation about ownership. It&#8217;s about, who does this belong to really? If I take a photograph of a Levi&#8217;s billboard, does that photograph belong to me or does it belong to Levi&#8217;s? We know the boundaries of that. But when the image becomes a living image, an image capable of transmutation; an image that provokes an action or hyperlinks to a product, experience, information&#8230;.where are the boundaries in that?</p>
<p><strong>Tish Shute: </strong>But how is ImageWiki handling that differently from Wikimedia, I suppose is my question.</p>
<p><strong>Paige Saez:</strong> We haven&#8217;t solved the problem.</p>
<p><strong>Tish Shute:</strong> Yes, I suppose it is not like we have fully solved the problem of a creative commons for images on the internet, let alone the issues of a social commons for the real world! So neither one has solved the problem, right?</p>
<p><strong>Paige Saez:</strong> Exactly. To be honest, it made my head spin. I realized we were building a web application and a mobile tool doing augmented reality, real time feedback on the world, and suddenly we weren&#8217;t. Suddenly we were dealing with DNS and talking about physical hyperlinks and ownership and property. And basically at that point you just have to sit and really start catching up on IP issues and figuring out how to deal with that space in a much more holistic way. It became so important that we had to take a step back and go,</p>
<p>&#8220;Oh my god, I think we have really uncovered a real problem here.&#8221;</p>
<p>At the point when we were building out the tools we realized something was really going on with our project. Here we were thinking that this was just a beautiful experience of learning about the world around us. We really&#8230; Anselm and I both just really wanted this tool to exist. It was something that we both just really wanted to happen in the world, something that we felt really just thrilled to make. And we looked at it and used it and realized that instead of it just being a beautiful experience, it was a fundamental shift in how we understood everything. That it impacted our world in the same way the Internet impacted our world. It was a fundamental shift in understanding. A sea-change.</p>
<p>So I put down the prototype and went back to researching, read a ton of books on IP and went and presented to friends, family, schoolmates and co-workers trying to explain the project and then the larger conceptual framework that had emerged from the project. I began using the metaphor of thinking about Magritte&#8217;s &#8220;Ceci n&#8217;est pas une pipe.&#8221; Thinking about a pipe that isn&#8217;t actually a pipe.</p>
<p><strong>Tish Shute:</strong> Oh, yes!</p>
<p><strong>Paige Saez: </strong>..to try to help explain to people that the image that you see is actually not, you know, it&#8217;s not an image of a thing. It&#8217;s an image. And that image has a tone and that image has a voice, and that image was chosen. And there were decisions that were made through the interface of the camera, specific decisions that defined the view of what you were looking at. And that that wasn&#8217;t being acknowledged and that that was a fundamental part of what the ImageWiki was aiming to do. The lens that you are actually looking through was as important as what you were looking at. And democratizing that lens became the most important thing that we could possibly do.</p>
<p><strong>Tish Shute:</strong> So the emphasis for you on ImageWiki was in fact the lens, even though you found obstacles to creating the interface, right?</p>
<p><strong>Paige Saez:</strong> Yes. Definitely. That&#8217;s what I fell in love with first. I really wanted to be able to use my phone to learn about what kind of tree this was or to buy tickets for the band on the poster I just saw, or see a hidden secret. For me it was very much a story, a narrative experience that I just thought was magical. And that is how I fell in love with it, which is not where I ended up.  Where I ended up was realizing it was a fundamental shift in not only my own understanding of how to use the world around me, but in our understanding of looking at the world.</p>
<p><strong>Tish Shute: </strong>It would be pretty scary if an image DNS was basically in the hands of one or very few people, right? I mean even ImageWiki would be stuck with this problem: if you set up a bunch of servers, you are going to be holding a very, very large image database, whatever your motivation, right? I think at the minute that is why I am very into seeing everything through the lens of federation. Unless we have federation, these giant central databases are inevitable, aren&#8217;t they?</p>
<p><strong>Paige Saez: </strong>Essentially, yes. I mean I wasn&#8217;t able to walk through it as quickly as that. It kind of just overwhelmed me. Looking back on it, it seems perfectly obvious. I was just like, &#8220;Oh my god, what have we done? Like what is going on?&#8221; Particularly for me, because so much of my life has been spent in art, it was really easy to immediately understand the connection between the view, the viewer, and what&#8217;s being viewed as all just different layers of ownership, and to understand that it is a gaze. Right? We know that we are never able to look at something without passing judgment on it, but to see that become a part of the interface in a real-time fashion just blew my mind.</p>
<p><strong>Tish Shute: </strong>Yes.</p>
<p><strong>Paige Saez:</strong> I think you are right. Getty Images, Flickr images, no matter what you are always holding on to something and you have to be responsible for it. Right? So how do you deal with the responsibility but don&#8217;t take on too much ownership? Where is the boundary with that?</p>
<p><strong>Tish Shute: </strong>And for me, the simple answer to that is loosely connected small parts, distributed systems and federation. Because the only way to be able to utilize these things is to have them distributed so that no one holds all the cards. Right?</p>
<p><strong>Paige Saez: </strong>Definitely and I personally agree with you wholeheartedly. However, the idea of distributed power is a concept that most people just don&#8217;t know how to deal with.</p>
<p><strong>Tish Shute:</strong> And it&#8217;s easier said than done, because the root problems that you are talking about aren&#8217;t eliminated by federation. If someone holds, sort of, all the good image databases, then just because they have the potential to be federated, they may not choose to open them up, on many levels.</p>
<p><strong>Paige Saez:</strong> And even then you have to think about, sort of, like the next level of it, which is we want it to be all open and accessible, but everything is owned by somebody. Like, what really is public anymore, in general?</p>
<p><strong>Tish Shute:</strong> And what is interesting, though, regardless of what we speculate conceptually on this, is that we have already set off down the road. I mean we already have several large ones&#8230; they are all in beta I suppose: Google Goggles, Point and Find, right? But we have applications that are beginning to implement this. They are beginning to implement search on it, and it is geo-located even if it&#8217;s not in an augmented view, right? So it is proximity based.</p>
<p><strong>Paige Saez: </strong>Right, right. I mean maybe the solution is that if we follow that line of thinking then Flickr will be partnering with Google Goggles. And then my images would stay under my ownership through the authority of Flickr. And I would use Flickr as my place to add images and they would just be responsive via my devices via AR.</p>
<p><strong>Tish Shute:</strong> That&#8217;s very interesting.</p>
<p><strong>Paige Saez:</strong> Definitely I think so. It is also the shortest distance between things.</p>
<p><strong>Tish Shute:</strong> Yes, and as Anselm kept pointing out, basically it is going to happen in the simplest way possible, really, regardless of the implications of that. But OK, getting back to ImageWiki. As you say neither Wikimedia nor Flickr were really designed to take this role, right?</p>
<p><strong>Paige Saez:</strong> Right.</p>
<p><strong>Tish Shute:</strong> With ImageWiki, you&#8217;ve had these ideas and a concern with the social implications of physical hyperlinking in your mind since its inception. Are there any design ideas you&#8217;ve come up with &#8211; as opposed to, sort of, as you say, connecting Flickr to Point and Find, or, who knows, Google Goggles? How is ImageWiki going to be different, do you think? Is that a hard question at this point?</p>
<p><strong>Paige Saez:</strong> It is, and it&#8217;s a great question, and it&#8217;s a question I really love to think about. I think we have to introduce the politics with the tools. It has to be acknowledged that it&#8217;s not just a place to hold information, that&#8217;s what I feel in my heart.</p>
<p>At the same time, is that too much for people to really grasp at one time? In my experience it really has been, so the design of the experience needs to allow for an understanding of the power of the tool and the level of authority that the tool offers, while not getting in the way of it; just using it.  Because ultimately, at the end of the day, nobody will use anything if it isn&#8217;t valuable to them. And so I could talk for miles and miles and miles about how important it is that corporations don&#8217;t own all of the rights to all of the visual things in my life, right? For the rest of my life I could talk about that. The idea that advertising is dominating all of our views of anything in the world around us is horrifying. It doesn&#8217;t matter unless I can show somebody why it matters to them or how it affects them. It&#8217;s just that that is a tremendously difficult thing to explain through a user interface.</p>
<p>And I actually think that it&#8217;s great that tools like Google Goggles and Nokia Point and Find are here to do a lot of the hard work of showing people how it works. Recently somebody explained to me their experience of using Google Goggles. They went through this process of saying how the Google Goggles took a picture and then did this really complicated visual scanning thing over the image and it took a full minute.</p>
<p>And I said, &#8220;Well of course they did it that way.&#8221; And they said, &#8220;Well what do you mean?&#8221; I said, &#8220;Well, what they are really doing there when they are doing all these fancy graphics is showing you how it works.&#8221; And even if it isn&#8217;t actually related at all to how it functionally works, algorithmically, that&#8217;s not the point. The point is that this gesture of the time taken to make it look like it&#8217;s scanning an image and going back and forth with pretty colors is giving people the time to process that as an experience. That&#8217;s a metaphor for what&#8217;s really happening. And these kinds of metaphors are crucial in user experience design. We have lots and lots of examples of them and how they work, and many of them aren&#8217;t necessary. Like, for example, the bar that shows you the time it&#8217;s taking for something to process. There is no relationship between that and reality. But it is really important.</p>
<p><strong>Tish Shute:</strong> Yes, those bars often have no relationship to the actual time&#8230;</p>
<p><strong>Paige Saez:</strong> And that&#8217;s the thing. Like the idea of time versus our perceived understanding of time. Right? The length of time it takes for your Firefox browser to open and load your last 30 tabs, versus the reality of what&#8217;s actually happening. When you are doing that sort of research you are actually accessing millions and millions of places and points of interest all over the world, so we need more of that. We need more of the process shown. Anselm and I worked with a film maker named Karl Lind from In the Can Productions here in Portland to try and make a video about the ImageWiki. We made this little video and I can try to show it to you or send it to you if you want.</p>
<p><strong>Tish Shute:</strong> One of the issues with this kind of visual search is that it is inherently dependent on databases that, regardless of where they are federated, are going to be very large. Right? I mean someone is going to have something big and aggregated there. I suppose someone will figure out the challenges of federated search eventually, but that is quite a big challenge!</p>
<p>So I suppose I am still trying to understand what ImageWiki can offer that we can&#8217;t get with any other existing service? How will there be a social commons, and even a social contract, for the world as a platform for computing and physical hyperlinks?</p>
<p>Eben Moglen brought up something when I talked to him about virtual worlds: he said we need code angels to let us know what was going on in the virtual space &#8211; who was gathering data and how, for example.</p>
<p><strong>Paige Saez:</strong> Tell me more about that, I want to hear more about that.</p>
<p><strong>Tish Shute: </strong>Eben suggested this metaphor when I was asking him about privacy in virtual worlds &#8211; the fact that people just didn&#8217;t know, when they were pushing avatars around virtual worlds, what metrics were being gathered on their behavior. And he basically said that what we need is code angels when we enter these spaces, because having the rules of the game buried in a TOS was ridiculous.</p>
<p><strong>Paige Saez:</strong> That is a really interesting idea.</p>
<p><strong>Tish Shute: </strong> Maybe ImageWiki needs to be our code angel to navigate the augmented world. I mean that&#8217;s what I want to see it as. And when I hear you talk, what I hear is you talking in broad categories about what a code angel might be in the space of images and image links to the physical world. I mean that is what I hear from you.</p>
<p><strong>Paige Saez:</strong> Yeah. No, I definitely agree with that. It is interesting. In that sense, it is kind of a protection layer. Is that what you are thinking?</p>
<p><strong>Tish Shute: </strong>Yes, I suppose because we can&#8217;t be navigating a lot of complicated opt-ins and opt-outs just to get around our neighborhood safely, in terms of privacy (also see Eben Moglen&#8217;s definition of privacy&#8230;). We will need a code angel that is sort of keeping up with you in real time!</p>
<p><strong>Paige Saez:</strong> Right, right. I wonder how that would work in regards to images, though. That is a really interesting thing to try and put on an image. I guess why I am having such a hard time being specific about it is I am <strong>just trying to work it out in my head, thinking of a specific use case, like what would be an example of that?</strong></p>
<p><strong>Tish Shute: </strong>Well I suppose the example, and this is a crude one, is when you point your Google Goggles at the book jacket, the code angel, this is very crude, would say, &#8220;You are right now drawing images from the Amazon database &#8211; they are collecting such and such data from your search.&#8221;</p>
<p>And then of course the ability to have crowd sourced tagging and corrections..</p>
<p>There was a wonderful book that came out last year on how we can have commercial intelligence &#8211; Dan Goleman&#8217;s new book, &#8220;Ecological Intelligence: How Knowing the Hidden Impacts of What We Buy Can Change Everything&#8221;&#8230;</p>
<p>How corporations&#8217; various stakeholders, including their customers, will drive corporations to do the morally right thing, because they will lose the commercial support of customers who won&#8217;t support them unless they are greener, fairer, and do the things we would like them to do, whatever that happens to be &#8211; physical hyperlinking and tagging, I guess, would be a big part of this.</p>
<p><strong>Paige Saez:</strong> Sort of a transparency issue. And that almost becomes a page rank algorithm in and of itself. I mean now we are really talking about search more than anything, and what tool becomes the dominant search tool. Anselm and I talked a lot about one platform&#8230; I mean eventually we will have a unified platform. It will&#8230; no matter what, for the Internet and for physical objects and visual objects in the real world. It will just be a matter of, literally, who can find the best and most valuable, most relevant information on a thing. Currently we just have it very proprietary.</p>
<p><strong>Tish Shute:</strong> Yes.</p>
<p><strong>Paige Saez: </strong>That definitely won&#8217;t last. It just can&#8217;t, because of the exact problem that you are raising. And we already know too much about resources and information as they pertain to products for us to ever go back to a time when we are not considering other ways of getting information about them anyway. Right?</p>
<p>Like, I have the same concerns nowadays when I look at fruit. I look at a piece of fruit in the store. I would never just assume anymore that the person who put the sticker on that fruit is necessarily the ultimate authority. I would always assume at this point that I could go online and find out more information about a company. Information about things like eco-footprint, toxicity, or pesticides is now totally accessible already.</p>
<p>So I am thinking, when you look at that piece of fruit and that sticker through what you are describing &#8211; do we just go immediately to the company&#8217;s website, or is it even more specific? Do we know that the sticker on that piece of fruit is going to tell us specific information about it? Or are we just getting back the nutritional resources, or a listing of all the different options out of a PageRank algorithm that shows us, &#8220;Well, this is the website for the fruit. Here is the nutritional information. Here are the last 15 comments on it&#8221;? It&#8217;s basically just a basic search.</p>
<p>Have you heard of Good Search?</p>
<p><strong>Tish Shute:</strong> You mean <a href="http://en.wikipedia.org/wiki/GoodSearch" target="_blank">GoodSearch</a>?</p>
<p><strong>Paige Saez:</strong> Right.</p>
<p><strong>Tish Shute: </strong>A code angel interface would have to give you options on the possible views available, wouldn&#8217;t it?</p>
<p><strong>Paige Saez:</strong> Yes. You are then talking about filtering your view. Then it gets really interesting, of course. I don&#8217;t even know if we have a choice in that. I think we are really kind of hitting a wall with who owns the space and the platform. Is it just a basic search because we are already familiar with search? Or do you have an option to choose, say, &#8220;I want to look at this apple sticker and programmatically get back only my friends&#8217; opinions of this company&#8221;?</p>
<p>Or I have a safety valve on it that only shows me certain information based on what the code angel knows about me &#8211; my preferences, my age, things like that. Then that gets really, really interesting, because we are trying to do all that work right now just with social media and the Internet. We are already overwhelmed with too much information. It is already past the point of comprehension. So to think that we would actually drill down into even more specifics is very interesting.</p>
<p><strong>Tish Shute:</strong> That was a point Anselm made &#8211; once you are in this mobile, just-in-time, one-view kind of situation, it is quite different from the Internet, where you can bring up all these different screens and go to another website.</p>
<p><strong>Paige Saez: </strong>Well yes, mobile is a different level of engagement. Very contextual. Much less information. Much more about timeliness. I don&#8217;t want to look at an apple and get back a Google search. Oh my God, no. That&#8217;s the last thing I want. I would love to be able to look at an apple and have my phone already know exactly what I want, information-wise, to get back from that apple. But I don&#8217;t know. It&#8217;s all contextual and personal. So I think the code angel concept you are talking about is really interesting, because you still need to think about who is adding or creating those filters &#8211; is it you, a filtered friend network, an algorithm? How much work is too much work? Where do we draw the line? How much of this are we willing to let the machine do for us?</p>
<p><strong>Tish Shute: </strong>Right.</p>
<p><strong>Paige Saez: </strong>And then of course once you have those filters in place, you need control over them. You will need to dial them up and dial them down, be able to choose and add new ones, so on and so forth. It becomes very modal at that point. For example, I want to change my view: to walk into a grocery store and, instead of finding out information, I&#8217;d want to see where the hidden Easter egg puzzles were that my friends left last week because we&#8217;re playing a game.</p>
<p>I&#8217;m still really attracted to the creative opportunities with the ImageWiki. I&#8217;m really attracted to changing this experience from a one-to-one relationship (Corporation to Consumer) to an open-ended relationship (Person to Person). If I look at a book jacket, sure, I can find out where to buy the book, but that&#8217;s boring. Who cares? I&#8217;d like to find a link to a story or an adventure or a movie or something unthought-of before.</p>
<p>How do we build that in? How do we encourage serendipity? Mystery? I think the ImageWiki is actually the space for building that in &#8211; that would be the one place, right? My really big fear is that this relationship just stays one-to-one: click an image of a consumable object, get back the object&#8217;s retail value. How completely dull. We have to do better than this.</p>
<p>Additionally, what if I want to take a photograph of a book, an apple, or something, and I don&#8217;t want to pull back data. Instead, I want to pull back music, or a video, or a song, or lyrics, or a story, or another image. It&#8217;s just a hyperlink at the end of the day, you know? That&#8217;s all we&#8217;re really doing. Hyperlinks can pull back so many different things.</p>
<p><strong>Tish Shute:</strong> And that&#8217;s one of the reasons I&#8217;m into building mobile social interaction utilities. That kind of capability is very available on the Internet, as we&#8217;ve seen with Twitter &#8211; these applications are very easy to do on the Internet. They&#8217;re not easy to do natively in a mobile application&#8230;</p>
<p>Hey, I&#8217;m just promoting AR Wave again. I should shut up.</p>
<p><strong>Paige Saez:</strong> Oh, no. I think it&#8217;s a fascinating concept, I really do. I totally agree. As we&#8217;ve talked about before, it&#8217;s amazing that marketing and advertising are helping push AR forward, and it&#8217;s great. It&#8217;s fantastic.</p>
<p>But it&#8217;s also the worst possible thing that could ever happen, because it is such a singular way of looking at an overall ubiquitous computing experience. There are other ways.</p>
<p>The best experience I ever had was explaining physical hyperlinks to people. I had to walk them through it. Good interactive isn&#8217;t something you present or show; it&#8217;s something you do. Nothing beats just walking around and showing people with a device or a tool or something else.</p>
<p>I mean, God forbid it always stays in our computers and our phones. I really hope we don&#8217;t have to be stuck living our entire lives with these horrible interfaces. But for the time being, we will. Having an AR app show you a puzzle, or a mystery, or a game, or an adventure is a magnificent experience, totally overwhelming, and people get it right away. There&#8217;s no question; they totally understand.</p>
<p><strong>Tish Shute:</strong> Yes, I agree.</p>
<p><strong>Paige Saez:</strong> You walk them through the experience with a physical hyperlink and then you say, &#8220;Here, I could use this device and I could show you where to buy this thing, or I could use this device and we could start playing a game.&#8221; Then everybody gets it.</p>
<p><strong>Tish Shute:</strong> So then I have a question, because one of the things Anselm said when he referred me back to you is that he feels the direction for ImageWiki should perhaps be to focus less on the technology and more on the actual gathering of the images &#8211; how they&#8217;re going to be annotated, the metadata, right? But my question to him was: if you do that without the platform, there&#8217;s no experience or motivation for people to do it. Right? Is there?</p>
<p><strong>Paige Saez: </strong>Yeah, I agree with you on that one. I&#8217;m curious what his&#8230; I think the reason he wants to do that is that he wants to be able to show people examples via the resources &#8211; to be able to show someone a library, essentially, which I think makes sense for some people. I definitely think that some audiences would really relate to that. For me, it doesn&#8217;t make sense, because I&#8217;m just very experiential. I need to do it, and I need to show other people how to do it, and I need to grow that way. I think that at the end of the day, those are great ways to go about doing it. It&#8217;s just that it&#8217;s a huge thing to do in either direction.</p>
<p>What Anselm&#8217;s really thinking of, I believe, is more about exemplifying how we read and understand images culturally. Then you&#8217;re really getting into Visual Studies and Critical Theory, which is what I did for my Masters at PNCA. I worked on the ImageWiki while I was in grad school; it was something I was doing for fun. Independently of my studies, the project led to issues of democracy and objects and property, and I ended up right smack in the middle of what I was studying: the nature and cultural analysis of images. Questions like &#8220;what exactly do we get out of images?&#8221;, how all these different things are happening in an image, and how people get tons of totally different things out of an image depending on many factors.</p>
<p>The questions I began to ask myself got very philosophical. Questions like &#8220;Is this apple red? Is this apple red-orange? Is this a small apple? What&#8217;s my understanding of small versus your understanding of small?&#8221;</p>
<p>Because you supposed that you needed a text backup to the search: how would I be able to search for an apple? What if my understanding of apple is red and your understanding of apple is green? So if I&#8217;m looking for a green apple, am I looking for the same green apple as you? It&#8217;s all semantics, sure. But at the same time, it gets bigger and bigger, and it&#8217;s fascinating.</p>
<p><strong>Tish Shute: </strong>Google Goggles seems to work best on book jackets, basically.</p>
<p><strong>Paige Saez: </strong> But book jackets are actually perfect for this. Book jackets are perfect for this problem, because book jackets are specifically designed art. So at the end of the day, we are still talking about creative works, artistic works, that have been designed as a communication tool. And that is not something that people can own &#8211; creative works designed as a communication tool, with varying levels of skill to be sure, are still something anybody can do. What we need to do is use that language. We don&#8217;t need to be trying to reach as far as facial recognition. We need to develop our own logos, our own brand, our own&#8230; I mean, not brand &#8211; brand is a bad way of saying it. We need to develop a visual language of our own that is as effective and as well utilized as book jackets or movie posters.</p>
<p><strong>Tish Shute:</strong> What are some of the use cases for ImageWiki you would like to develop first?</p>
<p><strong>Paige Saez:</strong> My dream&#8230; I have like four or five use cases that I want to see happen. One of them is: I walk down the street and there is a new poster for my favorite band. I can just go up to the poster, use my device, whatever it looks like, and download the latest album. It&#8217;s transactional. I am able to just plug in my headset and walk down the street, and the transaction is done. I saw something I wanted. It was beautiful. I was able to get it and move on with my life. And that is totally possible.</p>
<p>Another one would be: I walk down the street and there is a piece of graffiti. I am able to use my device to find out who the artist was, give them props, and point my friends to the fact that the piece is there and will most likely be there only for a short period of time &#8211; information retrieval and socialization.</p>
<p>Or, use my device to find an Easter egg, to find a narrative puzzle that ends up going on for weeks, and everybody is involved, and we are all playing this game together. Adventure-based, non-linear experiences. I want playfulness, not just purchases.</p>
<p><strong>Tish Shute: </strong> Did you think of piggybacking on the Flickr API for geo-tagged photos as a way to work with those databases or not?</p>
<p><strong>Paige Saez:</strong> Yeah, we definitely thought about that.</p>
<p><strong>Tish Shute: </strong> And why did you decide not to, for any reason or&#8230;?</p>
<p><strong>Paige Saez:</strong> Ultimately, we just&#8230; we were such a small group, we just had to tackle certain things at a certain time.</p>
<p><strong>Tish Shute:</strong> Right. And you were so prescient &#8211; you were working slightly before we had the mediating devices, weren&#8217;t you? You were just before mobile devices really got adequate for this.</p>
<p><strong>Paige Saez:</strong> Yeah. We started on it&#8230; I believe it was January&#8230; No, December 2007. Basically, the iPhone had launched maybe six months prior or something like that.</p>
<p><strong>Tish Shute:</strong> But not 3G and not 3GS, right?</p>
<p><strong>Paige Saez: </strong>Not 3GS. It was the first generation iPhone. We built the ImageWiki before the App Store existed.</p>
<p>We knew that the App Store was coming out.  And we knew that the App Store was going to be the biggest thing in the whole world. I remember getting into multiple fights with friends about how revolutionary the iPhone and the App Store were going to be and people thinking I was totally crazy; people just thinking I was absolutely nuts for being so excited about it.</p>
<p>It sucks that it is a closed proprietary system, but the App Store has done something for software that nothing has ever done before. Software is candy now. It&#8217;s candy. It is like when you are waiting in the checkout line at the grocery store, stuck behind somebody, with all these little tchotchkes &#8211; candy bars, magazines, nail clippers. That is the equivalent of software now. It&#8217;s become an impulse buy, which is amazing. Nobody would ever have thought&#8230; that is actually revolutionary. That&#8217;s huge.</p>
<p><strong>Tish Shute:</strong> <a href="http://www.cs.columbia.edu/~feiner/" target="_blank">Steven Feiner</a>, one of the founding fathers of augmented reality, said to me during a conversation at the ARNY meetup that one reason augmented reality, despite the hype, is manifesting very differently from how virtual reality burst onto the tech scene is that it is about affordable apps on affordable, readily available hardware.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/03/18/visual-search-augmented-reality-and-physical-hyperlinks-for-playfulness-not-just-purchases-talking-with-paige-saez-about-imagewiki/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>AR Wave: Layers and Channels of Social Augmented Experiences</title>
		<link>http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/</link>
		<comments>http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/#comments</comments>
		<pubDate>Tue, 13 Oct 2009 18:52:42 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Amphibious Architecture]]></category>
		<category><![CDATA[AR Blip]]></category>
		<category><![CDATA[AR Browser]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[augmentaion]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Channels and Social Augmented Realities]]></category>
		<category><![CDATA[citi sensing]]></category>
		<category><![CDATA[citizen sensing]]></category>
		<category><![CDATA[Clayton Lilly]]></category>
		<category><![CDATA[cybernetics vs ecology and human waste]]></category>
		<category><![CDATA[distributed]]></category>
		<category><![CDATA[eco mapping]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[geoAR]]></category>
		<category><![CDATA[geospatial web]]></category>
		<category><![CDATA[geospatial web and augmented reality]]></category>
		<category><![CDATA[Goggle Wave Federation Protocol]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Google Wave as an AR enabler]]></category>
		<category><![CDATA[Google Wave enable augmented reality]]></category>
		<category><![CDATA[Google Wave Protocols]]></category>
		<category><![CDATA[green tech augmented reality]]></category>
		<category><![CDATA[immersive sight]]></category>
		<category><![CDATA[Jeremy Hight]]></category>
		<category><![CDATA[Joe Lamantia]]></category>
		<category><![CDATA[Layers]]></category>
		<category><![CDATA[layers and channels of augmented reality]]></category>
		<category><![CDATA[Life Clipper]]></category>
		<category><![CDATA[life streaming]]></category>
		<category><![CDATA[location based media]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[locative media]]></category>
		<category><![CDATA[locative narratives]]></category>
		<category><![CDATA[Mannahatta]]></category>
		<category><![CDATA[map based augmentation]]></category>
		<category><![CDATA[mapping]]></category>
		<category><![CDATA[modulated mapping]]></category>
		<category><![CDATA[modulated napping]]></category>
		<category><![CDATA[multi-user]]></category>
		<category><![CDATA[narrative archaeology]]></category>
		<category><![CDATA[Natural Fuse]]></category>
		<category><![CDATA[neogeography]]></category>
		<category><![CDATA[networked urbanism]]></category>
		<category><![CDATA[non euclidian geometry]]></category>
		<category><![CDATA[open augmented reality framework]]></category>
		<category><![CDATA[Seanseable Labs]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[shared augmented realities]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[social augmented reality experiences]]></category>
		<category><![CDATA[sound augmentation]]></category>
		<category><![CDATA[Thomas K. Carpenter]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Trash Track]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[virtual reality]]></category>
		<category><![CDATA[Wave as a platform for augmented reality]]></category>
		<category><![CDATA[Wave Blip]]></category>
		<category><![CDATA[Wave Bots]]></category>
		<category><![CDATA[Wave playback]]></category>
		<category><![CDATA[Wave playback feature]]></category>
		<category><![CDATA[Wave Robots]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4585</guid>
		<description><![CDATA[It is now nearly two weeks since the Google Wave preview launch and I am happy to say we have some AR Wave news. The diagram above shows Thomas Wrobel&#8217;s basic concept for a distributed, multi-user, open augmented reality framework based on the Google Wave Federation Protocol and servers (click on the image to see [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://lostagain.nl/tempspace/PrototypeDiagram3_wave.html" target="_blank"><img class="alignnone size-medium wp-image-4586" title="Screen shot 2009-10-12 at 2.40.39 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-12-at-2.40.39-PM-300x154.png" alt="Screen shot 2009-10-12 at 2.40.39 PM" width="300" height="154" /></a></p>
<p>It is now nearly two weeks since the <a href="http://wave.google.com/" target="_blank">Google Wave </a>preview launch and I am happy to say we have some AR Wave news. The diagram above shows Thomas Wrobel&#8217;s basic concept for a distributed, multi-user, open augmented reality framework based on the <a href="http://www.waveprotocol.org/" target="_blank">Google Wave Federation Protocol</a> and servers (click on the image to see the dynamic annotated sketch <a href="http://lostagain.nl/tempspace/PrototypeDiagram3_wave.html" target="_blank">or here</a>).</p>
<p>Even in the short time we have had to explore Wave, some very exciting possibilities are becoming clear. Thomas puts some of the virtues of Wave as an AR enabler succinctly when he writes:</p>
<p><strong>&#8220;Wave allows the advantages of both real-time communication and the advantages of persistent hosting of data. It is both like IRC and like a Wiki. It allows anyone to create a Wave and share it with anyone else. It allows Waves to be edited at the same time by many people, or used as a private reference for just one person.</strong></p>
<p><strong>These are all incredibly useful properties for any AR experience; more so as Wave is open. Anyone can make a server or client for Wave. Better yet, these servers will exchange data with each other, providing a seamless world for the user&#8230; a single login will let you browse the whole world of public waves, regardless of who&#8217;s providing or hosting the data. Wave is also quite scalable and secure&#8230; data is only exchanged when necessary, and will stay local if no one else needs to view it.</strong></p>
<p><strong>Wave allows bots to run on it&#8230; allowing blips in a wave to be automatically updated, created or destroyed based on any criteria the coders choose. Wave even allows the playback of all edits since the wave was created.</strong></p>
<p><strong>For all these reasons and more, Wave makes a great platform for AR.&#8221;</strong></p>
<p>There will be much more coming soon on Wave-enabled AR because the Google Wave invites have begun to flow out to a wider community now. This week, many of our small ad-hoc group looking at the development challenges and implications of Google Wave for AR actually got into Wave for the first time.</p>
<p>Many thanks to all the people who have contributed to this discussion so far including: Thomas Wrobel, Thomas K. Carpenter, Jeremy Hight, Joe Lamantia, Clayton Lilly, Gene Becker and many others.</p>
<p>We will be setting up some public AR Framework Development Waves this week. If you have any trouble finding them, or adding yourself to them, please add Thomas and me to your contact list. I am tishshute@googlewave.com; Thomas is darkflame@googlewave.com. The first two are currently called:</p>
<p><strong>AR Wave: Augmented Reality Wave Framework Development</strong> (developer forum)</p>
<p><strong>AR Wave: Augmented Reality Wave Development</strong> (for general discussion)</p>
<p>The discussion so far has been in two areas. On the one hand, it is gear-heady and focused on the <a href="http://www.waveprotocol.org/" target="_blank">Google Wave Federation Protocol</a>, code, development challenges, and interfacing to mobile, while on the other hand people have been looking at use cases and questions of user experience.</p>
<p>Distributed &#8220;shared augmented realities,&#8221; or &#8220;social augmented experiences&#8221; &#8211; which not only allow mashups &amp; multisource data flows, but dynamic overlays (not limited to 3d), created by users, linked to location/place/time, and distributed to other users who wish to engage with the experience by viewing and co-creating elements for their own goals and benefit &#8211; are something very new for us to think about.</p>
<p>As Joe Lamantia puts it:</p>
<p><strong>&#8220;there&#8217;s a feedback loop between which interactions are made easy by any given combo of device / hardware / software / connectivity, and the ways that people really work in real life (without any mediation / permeation by tech).&#8221;</strong></p>
<p>Joe Lamantia, whose term <strong>&#8220;social augmented experiences&#8221;</strong> I borrow for this post title, has done some thinking about <strong>&#8220;concepts and models for understanding and contributing to shared augmented experiences, such as the social scales for interaction, and the challenges attendant to designing such interactions.&#8221;</strong> Check out <a href="http://www.joelamantia.com/" target="_blank">Joe Lamantia&#8217;s blog</a> for more on this later this week.</p>
<p>It is very helpful, as Joe points out, to shift the focus back and forth between the experience and the medium.</p>
<p>It is super exciting to have clear evidence that shared augmented realities are no longer merely possible, but highly probable and actually do-able now.</p>
<p>I should be absolutely clear about what Google Wave does to enable AR, because obviously Wave plays no role in solving image recognition and tracking/registration issues. But, for example, Wave protocols and servers do provide a means to exchange, edit, and read data, and that is what enables distributed, social augmented realities.</p>
<p>Thomas explains how the newly named &#8220;AR Blip&#8221; works:</p>
<p><strong>&#8220;An AR Blip is simply a Blip in a wave containing AR data. Typically this would be the positional and URL data telling an AR browser to position a 3d object at a location in space.</strong></p>
<p><strong>In more generic terms, an AR Blip allows data of various forms (meshes, text, sound) to be given a real-world position.&#8221;</strong></p>
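<p>To make the idea concrete, here is a minimal sketch of what an AR Blip&#8217;s payload might look like. All field names, coordinates, and the URL are hypothetical illustrations &#8211; the proposal described here predates any fixed schema:</p>

```python
import json

# Hypothetical AR Blip payload: a real-world position plus a pointer
# to the media an AR browser should render there. Every field name
# below is illustrative only -- no schema was standardized.
ar_blip = {
    "type": "ar-blip",
    "position": {            # WGS84 coordinates plus altitude in meters
        "lat": 40.7433,
        "lon": -73.9900,
        "alt": 12.0,
    },
    "orientation": {"heading": 90.0, "pitch": 0.0, "roll": 0.0},
    "media": {
        "kind": "mesh",      # could also be "text", "sound", "image"
        "url": "http://example.com/models/teapot.obj",
    },
}

# A client would serialize the blip into the wave...
wire = json.dumps(ar_blip)
# ...and any AR browser reading the wave would decode and render it.
decoded = json.loads(wire)
print(decoded["media"]["kind"], decoded["position"]["lat"])
```

<p>The point is only that the blip carries a real-world position plus a pointer to renderable media; the wave itself would handle distribution, concurrent editing, and playback.</p>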
<p>I have mentioned in other posts (<a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">here</a> and <a href="http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/" target="_blank">here</a>) that Wave can be used for AR as precise or as loose as the current generation of devices can handle. And as the hardware and software arrive for the kind of AR that can put media out in the world and truly immerse you in a mixed space, the framework should be able to handle that too.</p>
<p>(A note on the Wave playback feature &#8211; this opens up a whole new world of possibilities. Check out <a href="http://snarkmarket.com/2009/3605" target="_blank">this post</a> on some of the implications of playback for writing!)</p>
<p>The use cases we have been coming up with are too numerous to go into in detail in this post. The open nature of an AR framework/Wave standard will lead to many new applications we have barely begun to imagine. As Thomas points out, different client software can be made for browsing, potentially allowing for various specialist browsers as well as more generic ones for typical use. The multitudes of different kinds of data in/output that could be integrated into an open AR framework as it evolves are mind-boggling.</p>
<p>But, for now, some obvious use cases do come to mind, e.g.:</p>
<p>- Historical environmental overlays showing how a city used to be/and how this vision may be constructed differently by different communities</p>
<p>- Proposed building work showing future changes to a structure/and the negotiations of this future (both the public and professionals could submit their own comments to the plans in context), seeing pipes, cables and other invisible elements that can help builders and engineers collaborate and do their work.</p>
<p>- Skinning the world with interactive fantasies</p>
<p>I asked Thomas to help people understand how Wave enables new interactions with data by explaining how Wave could enable city sensing and citizen sensing projects (e.g. <a href="http://tinyurl.com/y97d5zr" target="_blank">this one being pioneered by Griswold</a>):</p>
<p><strong>&#8220;Sensors, both mobile and static, could contribute environmental data into city overlays:</strong></p>
<div><strong>&#8211; temperature, windspeed, air quality (amounts of certain particles), water quality, amount of sunlight, CO2 emissions could all be fed into different waves. The AR Wave Framework makes it easy to see any combination of these at the same time.&#8221;</strong></div>
<p>Having these invisible aspects of the world made visible would create ways to improve sustainability, social equity, urban management, energy efficiency, and public health, and would allow communities to understand and become active participants in the ecosystems and infrastructure of their neighborhoods.</p>
<p>The key is reflecting this kind of data back to people, &#8220;making it not back story but fore story,&#8221; right where we are, right where it happens, as well as having it available for analysis.</p>
<p>As well as creating new opportunities to interact with, respond to, and enhance data, making the invisible visible can also create new connections and understandings between humans and the non-humans that share our world, e.g. fish, plants, waterways, as <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko&#8217;s</a> work on <a href="http://www.amphibiousarchitecture.net/" target="_blank">Amphibious Architecture</a> and <a href="http://www.haque.co.uk/" target="_blank">Usman Haque&#8217;s</a> project <a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a> show.</p>
<p>At a more prosaic level, potential buyers of property could see more clearly what they are buying, city planners could see better what needs to be worked on, and environmental researchers could see more clearly the impact people are having on an area.</p>
<p>Wave can also provide some of the framework necessary to begin to address tricky problems of privacy. Sensitive data can be stored on private waves, e.g. medical data for doctors and researchers, but the analysis of the data could still benefit everyone &#8211; for example, if disease occurrences were tied to locations and the relationships between environmental data and health were&#8230; quite literally&#8230; made visible.</p>
<p><strong>&#8220;The publication of energy consumption, and making it visible as overlays, could help influence the public into supporting more energy efficient companies and businesses. It could also help citizens to try to keep their own energy usage down, to try to keep their street in &#8220;the green.&#8221;</strong></p>
<p>Thomas notes:</p>
<p><strong>&#8220;With all of the above, it becomes fairly trivial to write persistent Wave-bots that automatically send notice when certain criteria are met (pollutants over a certain level, for example). On publicly readable waves, anyone can use the data in their local computers, process it, and contribute results back on a new wave. Alternatively, persistent remote servers could run cron jobs, or other automated processing, using services such as App Engine to run wave robots.</strong></p>
<p><strong>All these possibilities become &#8220;free&#8221; when using Wave as a platform for geographically tied data.&#8221;</strong></p>
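<p>To make the kind of threshold bot Thomas describes concrete, here is a minimal Python sketch of just the alert logic such a robot might run. This is an illustration only: the reading format, pollutant names, and threshold values are hypothetical stand-ins, and the actual Wave robots API is not shown.</p>
<pre><code>
```python
# Illustrative stand-in for a persistent "wave-bot" that watches geo-tagged
# sensor readings and raises a notice when a pollutant crosses a threshold.
# The reading schema and limits below are hypothetical, not from any real API.

POLLUTANT_THRESHOLDS = {"pm2.5": 35.0, "co2": 1000.0}  # hypothetical limits

def check_readings(readings):
    """Return alert messages for readings that exceed their threshold.

    Each reading is a dict: {"pollutant": str, "value": float,
    "lat": float, "lon": float}.
    """
    alerts = []
    for r in readings:
        limit = POLLUTANT_THRESHOLDS.get(r["pollutant"])
        if limit is not None and r["value"] > limit:
            alerts.append(
                "ALERT: %s at %.1f (limit %.1f) near (%.4f, %.4f)"
                % (r["pollutant"], r["value"], limit, r["lat"], r["lon"])
            )
    return alerts
```
</code></pre>
<p>A real bot would post each alert back onto a public wave rather than returning a list, but the criteria check itself is this simple.</p>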
<p>But of course this is just the beginning!</p>
<p><em>Recently, I talked at length with Jeremy Hight, who has been thinking about, designing and creating shared augmented realities that anticipate the kind of dynamic, real time, large scale architecture we now have available through Wave, for quite some time now. This is exciting stuff. </em></p>
<p><em><br />
</em></p>
<h3><strong>Modulated Mapping:</strong> Talking with Jeremy Hight about Layers, Channels and Social Augmented Experiences</h3>
<p><strong><strong> </strong></strong></p>
<p><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping5.jpg"><img class="alignnone size-medium wp-image-4611" title="modulatedmapping5" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping5-230x300.jpg" alt="modulatedmapping5" width="230" height="300" /></a><br />
</strong></strong></p>
<p><strong><strong><em><span>image from Volume Magazine (Hight/Wehby)</span></em></strong></strong></p>
<p><strong><strong>Tish Shute:</strong></strong> I know you have been involved in locative media from its early days. Perhaps we can talk about how AR continues the locative media journey?</p>
<p><a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> gave me this distinction, recently:<em> &#8220;AR is about systems that put media out in the world, and immerse you in a mixed space. Even the current &#8220;not really registered&#8221; mobile phone AR systems are still &#8220;sort of&#8221; AR (e.g., Layar, etc).</em></p>
<p><em>Locative media/ubicomp/etc are very different, in that they tend to display media on a device (phone screen) that is relevant to your context, but does not attempt to merge it with the world.<br />
The difference is significant, and making it clear helps people think about what they do and what they want to do, with their work. The locative media space though points toward future AR systems (when the technology catches up!).&#8221;</em></p>
<p><strong><strong>Jeremy Hight: The need is to finish the arc that locative media and early AR have started and to now truly return to the map itself, but as an internet of data, interactivity, channels of data , end user options like analog machines once were but in high end tools, a smart AI-ish ability for it to cull data for the user, and to allow social networking to be in real world places on the map both in building augmentation and in using and appreciating it..not hacks..which have their place&#8230;but a rhizome, a branched system with shared root,end user adjustable and variable..this is the key.</strong></strong></p>
<p><strong><strong>This takes AR and mapping and makes a possible world of channels in space and this eventually can be a kind of net we see in our field of vision with a selected percentage of visual field and placement so a geo-spatial net, a local to world wide fusion of lm into a tool and educational tool</strong></strong></p>
<p><strong><strong><span>VR [virtual reality] has greatly advanced, but in nodes as it has limitations&#8230;LM [locative media] is the same&#8230;AR [augmented reality] is the way..</span></strong><strong> it now has locative elements and aspects of VR integrated into its functionality and nodes&#8230;it is the best option with all of these elements, greater hybridity and data level potential as well as end user and community sourcing potential</strong></strong></p>
<p><strong><strong>I wrote an essay for Archis&#8217; Volume, the architecture magazine, on a near future sense of some of this&#8230;.a visual net on the lens like ar but with smart objects and social networking and dissent.</strong></strong></p>
<p><strong><strong>I also wrote of these things for immersive graphic design, spatially aware museum augmentation, education through ar and lm, and a nod to the base interface of eye to cerebral cortex in layered and malleable augmentation in my essay <a href="http://www.neme.org/main/645/immersive-sight" target="_blank">&#8220;Immersive Sight&#8221;</a> a few years back</strong></strong></p>
<div id="gqg9" style="text-align: left;"><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_3dj7g8zf7_b.jpg"><img class="alignnone size-medium wp-image-4601" title="dgznj3hp_3dj7g8zf7_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_3dj7g8zf7_b-300x225.jpg" alt="dgznj3hp_3dj7g8zf7_b" width="300" height="225" /></a></strong></div>
<p><strong><strong>the image [above] is a simple illustration of a possible example on a screen or in front of the eye where, in a Mondrian show, the graphic design of information actually builds as one moves</strong></strong></p>
<p><strong><strong>(key is calibrated spatial intervals and related layers of further augmentation which is logical due to location and proximity)</strong></strong></p>
<p><strong><strong>from Immersive Sight, on immersive graphic design:</strong> <em>&#8220;The design can work with this in a way that creates an interactive supplemental set of information that is malleable, shifts based on location, builds and peels away as one moves closer to a work and plays with the forms of the works and the elements of the space itself. The sequence can contain many different elements and their interplay (both in the field of vision and in terms of context and layers of information). This is the model of sections of augmentation turning on and off at key points as individual spatial and conceptual moments and nodes.</em></strong></p>
<p><strong><em>Another interesting possibility is that individual points of augmentation don&#8217;t turn off, but instead are designed to build as one moves in a direction toward a specific part of the exhibit. The design can work in a sequence both content wise and visually in terms of a delay powered compositional development and style in which each discrete layer of text and image does not fade out, but builds on each other into a final composition. This can form paintings similar to Mondrian perhaps if it is a show of similar works of that era or it can form something much more metaphorical and open interpretation of the space and content but utilizing a sense of emergence spatially in terms of the composition (pieces laid bare until final approach for effect). </em></strong></p>
<p><strong><em>Each section will be well designed, but they build in layers as one moves until finally forming the final composition both visually and in terms of scope of information or building immediacy. The effect can be akin to taking a painting and slicing it into onion skin layers laid out in the air at intervals, each the same dimensions, but only one section compositionally of the greater whole. This has many semiotic applications beyond its potential aesthetically and as spatialized information possessing a sense of inter-relationship as one moves.</em>&#8220;</strong></p>
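<p>As a rough sketch of this &#8220;onion skin&#8221; build-up, the rule can be stated as a function of viewer distance: show one layer at a distance and reveal successive layers on approach instead of fading them out. The distances and layer counts below are purely illustrative assumptions, not values from Hight&#8217;s essay.</p>
<pre><code>
```python
# Hypothetical sketch of the "onion skin" build-up effect: as the viewer
# approaches a work, successive layers of the composition become visible
# rather than fading out. Distances and layer counts are illustrative.

def visible_layers(distance_m, num_layers, reveal_start_m=10.0):
    """Number of layers shown at a given distance from the work.

    At reveal_start_m or farther, a single layer is shown; all layers
    are visible on final approach (distance 0), with a linear ramp
    (rounded to the nearest layer) in between.
    """
    if distance_m >= reveal_start_m:
        return 1
    frac = 1.0 - (distance_m / reveal_start_m)  # 0.0 far -> 1.0 at the work
    return max(1, min(num_layers, 1 + int(frac * (num_layers - 1) + 0.5)))
```
</code></pre>
<p>An AR renderer would call this per frame and draw the first <em>n</em> layer slices, so the composition assembles spatially as one walks toward it.</p>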
<p><strong><strong>Tish Shute:</strong> </strong>One of the things I found very inspiring when I read your papers was that your ideas are not all dependent on a model of AR that would necessarily require goggles, back packs and lots of CPU/GPU &#8211; not that that wouldn&#8217;t be nice, but that even using &#8220;magic lens&#8221; AR of the kind smart phones has enabled in an open distributed framework would open up a lot of new possibilities for what you call modulated mapping wouldn&#8217;t it?Â  What kind of social augmented realities might be enabled by a distributed infrastructure like this [AR Wave]?</p>
<p><strong><strong>Jeremy Hight: right&#8230;.I see that as wayyy down the road&#8230;most important is the one you talk about as it is more immediate and thus more essential and needed. Eventually the goggles will be like a contact lens and a deep immersive ar version of this will come, that to me is certain, but a ways down the road. An incredible amount is possible now, and this is a more pragmatic move as opposed to the more theoretical of what is a few steps from here. Thus it is more important and essential now. Tools like Google Wave are taking what even 2 years ago were more theoretical discussions of what may be and instead introducing key elements of a more immediate, powerful, flexible level of augmentation. What have been hacks and isolated elements are to be integrated: social networking, task completion, shared tools, graphics building and geo-location.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>I think some people question what augmented reality has to bring to the continuum of location based experiences that other forms of interface/mapping do not?</p>
<p><strong><strong><span>Jeremy Hight: right&#8230;.and the schism between its commercial </span></strong><strong>flat self and tests with physics etc and in between&#8230;there are a lot of unfortunate assumptions it seems as to where ar and lm cross and how ar can be many things beyond deep immersion or the opposite pole of a hockey puck having a magic purple line etc&#8230;.like lm is seen as either car directions or situationist experiments with deep data&#8230;..the progression to me is deeply organic&#8230;.and now augmentation can be more malleable, variable and end user controlled.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yes, it is a really exciting time for AR. Historically AR research has gone after the hard problems of image recognition, tracking and registration because we have not had dynamic, real time, large scale architectures like Wave available (until now!), so less work has been done on exploring the possibilities for distributed AR fully integrated with the internet and WWW, hasn&#8217;t it?</p>
<p>A distributed augmented reality framework such as we have envisaged on Wave would allow people to see many layers from many different people at the same time. And this kind of model has been part of your thinking and fundamental to your work for a while, hasn&#8217;t it? But it is a very new idea to most people to think about collaboratively editing layers on the world, and to be able to view augmented space through channels and networked communities. Could you explain some of the ways you have explored these ideas and how they could be explored further now to create meaningful experiences for people?</p>
<p><strong><strong><span>Jeremy Hight: right..exactly&#8230;modulated mapping to me can be an amazing tool for students&#8230;back end searching data visualizations and augmentations based on their needs&#8230;while they do something else on their computer or iphone&#8230;that can be amazing..and not deep </span></strong><strong>immersive..The map can be active, malleable, open source fed, and even, in a sense, intelligent and able to adapt. The possibility also exists for this map to have a function that based on key words will search databases on-line to find maps, animations, histories and stories etc to place within it for your study and engagement. The map is thus a platform and yet is active. Community is possible as people can communicate graphically in works placed on the map and in building mode in the tool. All the tropes of locative media are to be in a </strong><strong style="color: black; background-color: #ff9999;">mapping</strong><strong> system of channels of augmentation and a spatial net. The software by design will allow development on the map and communication like programs such as second life but in </strong><strong style="color: black; background-color: #ff9999;">mapping</strong><strong> itself.</strong></strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modultedmapping1.jpg"><img class="alignnone size-medium wp-image-4607" title="interactive 3d map copy" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modultedmapping1-246x300.jpg" alt="interactive 3d map copy" width="246" height="300" /></a></strong></p>
<p><strong><strong><em><strong><span>image from Parsons Journal of Information Mapping Volume 2 (Hight/Wehby)</span></strong></em></strong></strong></p>
<p><strong><strong><span>I wrote an essay a few years ago for the Sarai reader questioning the traditional map and its semiotics and need to reconsider &#8211; then did work looking into it and what those dynamics were and they got into 2 group shows in museums in Russia&#8230;so it actually was my arc toward modulated mapping&#8230;an interesting way to it! But yes the map itself..this is a huge area of potential and non screen based alone navigation etc. I see now that my 2 dozen or so essays in lm, ar, interface design and augmentation have all also been leading in this direction for about 10 years now</span></strong></strong></p>
<p><strong><strong>Tish Shute: </strong>I love immersive visualization but can we &#8220;return to the map &#8211; the internet of data&#8221; as you mentioned earlier and produce interesting augmentation experiences that go beyond locative media&#8217;s device display mode without having the goggles, for example, through the magic lens of our smart phones?</strong></p>
<p><strong><strong>Jeremy Hight: yes, absolutely. the map in the older paradigm is an artifice born often of war and border dispute and not of the earth itself and its processes&#8230;the new mapping like google maps is malleable, can be open source, can read spaces and can be layers of info in the related space not plucked from it as in the past..this is amazing. the old map also was born of false semiotics/semantics like &#8220;discovery of new lands&#8221; or &#8220;pioneer&#8221; while the places were there already and names often were of empire&#8230;now this is no longer the case</strong></strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping2.jpg"><img class="alignnone size-medium wp-image-4608" title="jeremy map small2 copy" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping2-300x233.jpg" alt="jeremy map small2 copy" width="300" height="233" /></a></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>So is geoAR a better way to express a new social relationship to mapping? And how does this fit into the arc of locative media as it evolves into augmented reality?</p>
<p><strong><strong>Jeremy Hight:&#8230;early lm was mostly geocaching and drawing with gps..it took new paradigms to invigorate the field&#8230;a lot of folks focus on tools and what already is, cross pollination can ground ideas that are more radical&#8230;a metaphor in a sense to place what can be in a familiar context.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>one of the great disappointments in VR has been its isolation from networked computing and also, up to now, augmented reality &#8211; to achieve an immersive experience with tight registration of media/graphics you have to create a separate system isolated from the internet and the power of the web.</p>
<p><strong><strong>Jeremy Hight: yes&#8230;.this will change. vr is to me an island but ar takes a part of it and shifts the paradigm and new things open this way. Do you know the project <a href="http://www.lifeclipper.net/EN/process.html" target="_blank">&#8220;life clipper&#8221;</a>? friends of mine..doing interesting things..they are a clear bridge between lm and ar&#8230;.and from vr</strong></strong></p>
<p><strong><strong>in ar augmentation and what is being augmented become fused or in collision or in complex interactions as a means to a larger contextualization and exploration of what is being augmented..this is true in immersive or non ar&#8230;.huge potential</strong></strong></p>
<p><strong><strong>vr is a space, now can be surgery which is amazing. but not layered interaction, thus an island and graphic iconography on a location can use symbolic icons which opens up even more layers (graphic designer/information designer in me talking there I suppose..)</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yes! talk to me more about layers and channels. I think this is one of the most interesting questions for me in augmented reality at the moment &#8211; what can we do with layers and channels, and what new possibilities for connections between people and environments can these create?</p>
<p>The ability for anyone to post something is critical to the distributed idea but one of the reasons I am so excited by Google Wave is I am fascinated by the playback function. How do you think this will enable new forms of collaborative locative narratives (<a href="http://snarkmarket.com/2009/3605" target="_blank">nice post on Wave playback here </a>).</p>
<p><strong><strong>Jeremy Hight: We are in an age of cartographic awareness unseen in hundreds of years. When was the last time that new </strong><strong style="color: black; background-color: #ff9999;">mapping</strong><strong> tools were sold in chain stores and installed in most vehicles? When was the last time that also the augmentation of maps was done by millions (Google map hacks, etc)? The ubiquitous gps maps run in automobiles while people post pictures and graphic pins to denote specific places on on-line maps.</strong></strong></p>
<p><strong><strong>The need is for a tool that combines all of these new elements into an open source, intuitive, layered and rhizomatic map that is porous (like pumice, organic in form yet with &#8220;breathing room&#8221;), ventilated (i.e., adjustable, a flow in and out), and open (open source, open access, open spatialized dialog).</strong></strong></p>
<p><strong><strong><span>I wrote of this in my essay &#8220;Revising the Map: Modulated Mapping and the Spatial Interface&#8221; (</span></strong><span> </span><a id="h0qr" title="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" href="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf"><span>http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf</span></a>)</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping3.jpg"><img class="alignnone size-medium wp-image-4609" title="jeremy map small2 copy" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping3-300x206.jpg" alt="jeremy map small2 copy" width="300" height="206" /></a></strong></p>
<p><strong><em><strong><span>image from Parsons Journal of Information Mapping (Hight/Wehby)</span></strong></em></strong></p>
<p><strong><strong>Tish Shute:</strong></strong> One mapping project I really like is <a href="http://themannahattaproject.org/" target="_blank">Mannahatta</a>. How could distributed AR contribute to a project like <a href="http://themannahattaproject.org/" target="_blank">Mannahatta</a>?</p>
<p><strong><strong>Jeremy Hight: that is a good example..imagine taking manhattan and having channels of options to overlay, that being an excellent option, and imagine being able to even run a few at once with delineating icons..you can augment a space with history, data, erasure, narrative, scientific analysis, time line of architecture, infrastructure, archaeological record etc&#8230;.endless possibilities, and this agitates place and place on a map into an active field of information with end user control&#8230;and open options for new layers</strong></strong></p>
<p><strong><strong>Tish Shute: </strong></strong>and do you think we could do interesting things with AR on a project like Mannahatta even with the current mediating devices we have available &#8211; i.e. our smart phones &#8211; as obviously the rich PC experience Mannahatta has built for its web interface would not be available as AR at this point?</p>
<p><strong><strong>Jeremy Hight: yes&#8230;.k.i.s.s right? these projects do not have to only be immersive and graphic intensive&#8230;&#8230;take how people upload photos onto google maps&#8230;.just make that on a menu of options, there are some pretty cool hacks already..<br />
&#8230;options is key, a space can have a community as well, building on it in software, and others navigating it, i see it near future and down the road..always have with ar really</strong></strong></p>
<p><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/locativenarratives1.jpg"><img class="alignnone size-medium wp-image-4596" title="locativenarratives1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/locativenarratives1-230x300.jpg" alt="locativenarratives1" width="230" height="300" /></a><br />
</strong></strong></p>
<p><strong><em><strong><span>image from Volume Magazine (Hight/Wehby)</span></strong></em></strong></p>
<p><strong><strong>Jeremy Hight: and yes, a lot of people focus on ar&#8217;s limitations and processing power needs as a major road block</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>so do you see AR on smart phones adding any value to a project like Mannahatta?</p>
<p><strong><strong>Jeremy Hight: yes&#8230;that it can be integrated into other similar works and even disparate but cloud linked ones&#8230;so a place can be &#8220;read&#8221; in diff ways on the iphone&#8230;.beyond its map location, and more can be possible if you are there&#8230;others away, so it becomes channels of augmentation</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>AR like locative media puts who you are, where you are, what you are doing, and what is around you center stage in online experience, but it also &#8220;puts media out in the world&#8221; &#8211; people I think understand this well as a single user experience, but we are only just beginning to think about how this will manifest as a social experience &#8211; could you explain more about modulated mapping as an experience of social augmentation?</p>
<p><strong><strong style="background-color: #99ff99; color: black;"><span>Jeremy H</span>ight: Modulated</strong> <strong style="background-color: #ff9999; color: black;">Mapping </strong><strong>is a tool that will allow channels to be run along the map itself. This will allow one to view different icons and augmentations both as systems on the map and in deeper layers of information (photos, videos, animations, visualizations, etc) that can be turned on and off as desired. The different layers of icons and data may be history, dissent, artworks, spatialized narratives, and annotations developed that are communally based on shared interests, placed spatially and far beyond. The use of chat functionality in text or audio will be open in building mode and in </strong><strong style="color: black; background-color: #ff9999;">mapping</strong><strong> navigation/usage as desired. This also allows a community to develop or augment in the spaces on the earth. These nodes can be larger and open or small and set by groups in their channel. The end result is an open source sense of </strong><strong style="color: black; background-color: #ff9999;">mapping</strong><strong> that will also have a needed sense of user control as one can select which layers of augmentation they wish to see and interact with at any time. It also will incorporate all the functionality of locative media in </strong><strong style="color: black; background-color: #ff9999;">mapping</strong><strong> software and </strong><strong style="color: black; background-color: #ff9999;">mapping</strong><strong>. In building mode and in map mode, icons will be coded to represent within channels (remember that the person using it has selected channels of augmentation from many based on their current interests and needs).
Icons will be coded as active to show work in progress in cities and the globe to both invite participation and to further agitate the map from the sense of the static as action is visible even with its icons as people are working and community is formed in common interest/need.</strong></strong></p>
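<p>The channel model described here, layers of geo-placed augmentation toggled on and off and merged into one view, can be sketched in a few lines. This is a toy illustration under my own assumptions; the names and structure are hypothetical, not from any modulated mapping software.</p>
<pre><code>
```python
# Toy model of "channels of augmentation": each channel carries geo-placed
# items; the user toggles channels on or off and the visible overlay is the
# union of the active ones. All names here are illustrative.

class ChannelMap:
    def __init__(self):
        self.channels = {}   # channel name -> list of (lat, lon, payload)
        self.active = set()  # names of channels currently turned on

    def add_item(self, channel, lat, lon, payload):
        """Place an item (history note, artwork, annotation...) on a channel."""
        self.channels.setdefault(channel, []).append((lat, lon, payload))

    def toggle(self, channel, on=True):
        """Turn a channel of augmentation on or off for this viewer."""
        (self.active.add if on else self.active.discard)(channel)

    def visible_items(self):
        """Items from all active channels, ready to render as map overlays."""
        items = []
        for name in self.active:
            items.extend(self.channels.get(name, []))
        return items
```
</code></pre>
<p>Running a few channels at once, as in the Mannahatta example, is just toggling several on and rendering the merged list with per-channel icons.</p>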
<p><strong><strong>locative media got a buzz for &#8220;reading&#8221; places&#8230;when I helped create locative narrative that was what blew me away back in 2001&#8230;that we could give places a voice by placing data from research and icons on a map&#8230;&#8230;this meant lost history or augmentation was possible as kind of voices of a place and its layers&#8230;&#8230;.I called it &#8220;narrative archaeology.&#8221; We now have tools that can push these ideas and concepts farther..much farther&#8230;and with a range beyond what was before, and then the map was just a tool&#8230;.but now we are returning to the map itself&#8230;..and this as place as much as marker..this is where ar takes the ball to use a bad metaphor</strong></strong></p>
<p><strong><strong>also that project could only work if you came to our spot of a 4 block augmentation and with us there to lend you our gear&#8230;we are far beyond that now but it had its place</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>How do you see &#8220;in context&#8221; AR and something we might call &#8220;context aware&#8221; cloud computing models interacting?</p>
<p><strong><strong>Jeremy Hight: sure&#8230;and I must add that I have issues with cloud computing as much as it is a good idea..</strong>.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>because of loss of autonomy?</p>
<p><strong><strong>Jeremy Hight: tivo is simply a hard drive&#8230;but it keyword reads and gives suggestions..that is the cro magnon link to what can be</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>The nice thing about Wave is that, because of the Federation model, the cloud model and local store-your-own-data models should work together.<strong><strong><span> </span></strong></strong></p>
<p><strong><strong><span>Jeremy Hight: yes..that is better&#8230;..loss of autonomy also opens up the arbitrary which is the flaw of search engines as we know it&#8230;even Bing fails to me in that sense</span></strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>how do you mean, could you explain?</p>
<p><span> </span><strong><strong><span>Jeremy Hight: spidersÂ  cull from wordsÂ  but cull like trawlers at sea â€¦. tested Bing with very specific requests.. it spat out the same mass of mostly off topic resultsâ€¦.</span><br />
<span> I wonder if there is a way to cull from key words and topics from a userâ€¦not O</span>rwellian back end of courseâ€¦but from their preferences, their searches etc..</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>did you see the discussion on search in the AR Framework doc? AR search will be a massively important thing that will take a lot of intelligence and all sorts of algorithm development, won&#8217;t it?</p>
<p><strong><strong>Jeremy Hight: It also has one area of key functionality that moves into more intuitive software. Upon continued usage, the </strong><strong style="color: black; background-color: #ff9999;">mapping</strong><strong> software will &#8220;learn&#8221; and search based on key words used and spheres of interest the user is </strong><strong style="color: black; background-color: #ff9999;">mapping</strong><strong> or observing as mapped and will integrate deeper data and types of animations, etc. into the map or will have them waiting to be integrated upon user approval as desired. Over time the level of sophistication of additions and of search intuition will increase dramatically. The search can also, if the user wishes, run in the back end while working in the </strong><strong style="color: black; background-color: #ff9999;">mapping</strong><strong> program, or in off time as selected while doing other tasks. It also can never be used if one is not interested. One of the key elements of this </strong><strong style="color: black; background-color: #ff9999;">mapping</strong><strong> is that it is not composed of a closed set or needing user hacks to augment, but instead is to evolve and deepen by user controls and desires, as designed. Pre-existing data, visualizations and augmentations can be integrated with relative ease.</strong></strong></p>
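<p>The &#8220;learning&#8221; behavior Hight describes, software that accumulates the user&#8217;s key words and surfaces matching layers, could be approximated with something as simple as frequency counts. A minimal Python sketch, under my own assumptions (the software he describes was never specified at this level, so the class and its fields are hypothetical):</p>
<pre><code>
```python
# Illustrative sketch of a "learning" layer-suggester: it records the key
# words a user searches or maps, then ranks candidate map layers by overlap
# with those accumulated interests. Names and structure are hypothetical.
from collections import Counter

class LayerSuggester:
    def __init__(self):
        self.term_counts = Counter()  # terms the user has searched or mapped

    def observe(self, terms):
        """Record key words from a search or mapping session."""
        self.term_counts.update(t.lower() for t in terms)

    def rank_layers(self, layers):
        """Rank candidate layers (name -> list of tags) by learned interest."""
        def score(tags):
            return sum(self.term_counts[t.lower()] for t in tags)
        return sorted(layers, key=lambda name: score(layers[name]), reverse=True)
```
</code></pre>
<p>Run in the back end, a suggester like this would queue high-scoring layers for the user to approve, matching the opt-in character Hight insists on.</p>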
<p><strong><strong>Tish Shute: </strong></strong>One of the things that Joe Lamantia points out about social augmented experiences is that they will operate across a number of different scales &#8211; conversation &gt; product design &amp; build team &gt; neighborhood / town fixing potholes &gt; global community for causes. How do designs for channels and layers change across these different social scales?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> to quote myself&#8230; &#8220;The &#8220;frontier&#8221; is often defined as the space just ahead of the known edge and limit, and where it may be pushed out deeper into the previously unknown. The frontier in the world of ideas is not the warm comfort of what has been long assimilated; and the frontier in the landscape is not of maps, but of places beyond and before them.</strong></strong></p>
<p><strong><strong>The border along what has been claimed is not only that of maps &#8211; it is of concepts, functions, inventions and related emergent industries. Ideas and innovations are like the cloud shape that briefly forms around a jet breaking the sound barrier, tangible yet not fully mapped into measure. It is when things are nailed down into specific entities, calibrated and assessed, that the dangers may inflict themselves &#8211; greed, competition, imitation, anger, jealousy, a provincial sense of ownership either possessed or demanded&#8221;. (from essay in Sarai reader). Otherwise channels and augmentation do not have to be socio-economically stratifying or defined by them. We built 34n for almost nothing on older tools.</strong></strong></p>
<div id="yqjj" style="text-align: left;"><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_1g3svj8fq_b.jpg"><img class="alignnone size-medium wp-image-4599" title="dgznj3hp_1g3svj8fq_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_1g3svj8fq_b-300x225.jpg" alt="dgznj3hp_1g3svj8fq_b" width="300" height="225" /></a></strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_1g3svj8fq_b.jpg"><span> </span></a></strong></div>
<p><strong><em><strong><span>image from 34north 118westÂ  (Spellman/Hight/Knowlton)</span></strong></em></strong></p>
<p><strong><strong>The ar that is not deep immersion can be more readily available and channels can be what end users need like the diversity of chat rooms or range of Facebook users among us.</strong></strong></p>
<p><strong><strong>I had two moments yesterday that totally fit what we talked about. I went to the west hollywood book fair and the traditional driving directions off of a mapping site were wrong and we got lost&#8230;our friend could only get a wireless signal to map on an itouch and we had to roam neighborhoods, then we called a friend who google mapped it and we found we were a block away&#8230;.so a fast geomapping overlay with an icon for the book fair on some optional grid service or community would have made it immediate. Then at the book fair I talked to a small press publisher who is trying to map works about los angeles by los angeles authors on a map..she was stunned when I told her it could be a kind of google map feature option</strong></strong></p>
<p><strong><strong>it also has great potential to publish and place writing and art in places..both for commentary and access. imagine reading Joyce chapter by chapter in the places each was written about, and then another similar experience but with writers who published on a service into their city.</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong></strong> The challenge of shared augmented realities is not just a matter of shipping bits around, but also of how we will use channels and layars &#8211; to create and negotiate different, distributed perspectives, and understand a shared common core and/or expressions of dissent (this came up in an email conversation with <a href="http://www.oreillynet.com/pub/au/166" target="_blank">Simon St Laurent</a>).</p>
<p><strong><strong><strong>Jeremy Hight:</strong> well my example earlier could have been communal in a way too..a tribe sort of augmentation channeling&#8230;.like subscribing to listservs back in the day but of augmentation communities/channels, and for folks to build and use in shared live form, coordinating too</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong> </strong>one good thing though about building an open AR Framework is that as bandwidth/CPU/hardware gets better shared high def immersive experiences could be supported by the same framework..</p>
<p><strong><strong>Jeremy Hight: excellent</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>were you thinking of the image recognition and tracking with this example?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> yeah&#8230;.like scanning across a multi channeled google map augmentation with diff icons and their connected data&#8230;and poss social networking and file sharing even in that mode&#8230;and rastering etc&#8230;.could be cool with Google Wave </strong><strong><span>- on the map..then zooming in a la Powers of Ten..(Eames film).</span></strong></strong></p>
<p><strong><strong>-</strong><strong><span>I have pictured variations of this for a few years now in my head, like the example of my friends and I yesterday&#8230;we could have correlated a destination by icons in diff channels..one being lit events within a lit channel in the L.A. map&#8230;maybe things streaming on it too&#8230;remote info and video etc&#8230; that would be awesome</span></strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong></strong> So many of the ideas in your paper on modulated mapping (see <a href="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" target="_blank">here</a>) are brilliant use cases for shared augmented realities. Perhaps you could talk more about your ideas on locative narrative, because this is something I think is at the core of the kinds of experiences that a distributed AR Framework would make possible?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> on the project &#8220;34 north 118 west&#8221; we mapped out a 4 block area for augmentation of sound files triggered by latitude and longitude on the gps grid and map and the map on the screen had pink rectangles that were the &#8220;hot spots&#8221; where the augmentation had been placed.</strong></strong></p>
<div id="nwc6" style="text-align: left;"><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_0gg994bf9_b.jpg"><img class="alignnone size-medium wp-image-4600" title="dgznj3hp_0gg994bf9_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_0gg994bf9_b-300x225.jpg" alt="dgznj3hp_0gg994bf9_b" width="300" height="225" /></a></strong></strong></div>
<p><strong><em><strong><span>image of interactive map with map based augmentation connected to audio augmentation on site for 34north 118west (Spellman/Hight/Knowlton)</span></strong></em></strong></p>
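The trigger mechanism described above, rectangular latitude/longitude "hot spots" that play a sound file when a GPS fix lands inside them, can be sketched in a few lines. This is a hypothetical reconstruction, not code from the 34north 118west project; the coordinates, file names, and the `HotSpot` type are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class HotSpot:
    """A rectangular trigger zone on the GPS grid (degrees)."""
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float
    sound_file: str

    def contains(self, lat: float, lon: float) -> bool:
        # A GPS fix triggers the zone when it falls inside the rectangle.
        return (self.min_lat <= lat <= self.max_lat
                and self.min_lon <= lon <= self.max_lon)

def sounds_for_fix(spots, lat, lon):
    """Return the sound files whose hot spots contain the current GPS fix."""
    return [s.sound_file for s in spots if s.contains(lat, lon)]

# Two invented hot spots in a downtown-LA-sized area.
spots = [
    HotSpot(34.045, 34.047, -118.240, -118.238, "freight_yard.mp3"),
    HotSpot(34.048, 34.050, -118.244, -118.242, "river_story.mp3"),
]
print(sounds_for_fix(spots, 34.046, -118.239))  # -> ['freight_yard.mp3']
```

In the 2001-era setup each zone's state lived only on the local device; a networked service could instead keep the zones shared and mutable, which is the direction the conversation heads.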
<p><strong><strong>We researched the history of the area and placed moments in time of what had been there at specific locations&#8230;.I called this <a href="http://www.xcp.bfn.org/hight.html" target="_blank">&#8220;narrative archaeology&#8221;</a> as it allowed places to be &#8220;read&#8221; by their augmentations&#8230;info that was of the place beyond the immediate experience (diff types of info) that otherwise would be lost or only found in books or web sites elsewhere. there now are locative narratives around the world but they need to be linked. from humble origins &#8220;narrative archaeology&#8221; went on to recently be named one of the 4 primary texts in locative media which is pretty amazing to me&#8230;but it is growing</strong></strong></p>
<p><strong><strong>- the limitations then were what I called the &#8220;bowling alley conundrum&#8221; &#8211; the specific data had to reset like pins&#8230;..and was isolated&#8230;.this led me to think about ar back then and up to now. How these could lead to much more from that point, data that would be more layered, variable, fluid..yet still augmented place and sense of place and social networking within data and software</strong></strong></p>
<p><strong><strong><a href="http://34n118w.net/34N/" target="_blank">lifeclipper</a> to me is a bridge</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>But Life Clipper is currently isolated from the internet, isn&#8217;t it?</p>
<p><strong><strong><span>Jeremy Hight: yes&#8230;ours was too.. that is what Google Wave makes possible.. our project only ran on our gear..in 4 blocks&#8230;with additional auxi</span>liary info online, and not malleable..but hey 2001 and all..</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>so the sites for 34 north 118 west are still active though?</p>
<p><strong>Jeremy Hight: oh yeah!</strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>nice I really like sound augmentation &#8211; have you seen <a href="http://www.soundwalk.com/blog/tag/augmented-reality/" target="_blank">Soundwalk</a>?</p>
<p><strong><strong><span>Jeremy Hight: yes, very cool..</span> </strong><strong>we chose sound only as it fought the power of image..instead caused a person to be in a sense of two places and times at once</strong></strong></p>
<p><strong><strong>Tish Shute:</strong></strong> and in 2001 that was definitely a visionary project!</p>
<p>You must be very excited that finally the pieces are coming together to make this stuff scale!</p>
<p><strong><strong><strong>Jeremy Hight:</strong> I can&#8217;t even tell you!! it is funny..i have known that this would come..just waited and waited&#8230;</strong></strong></p>
<p><strong><strong>..knew it needed the right people and tools..</strong></strong></p>
<p><strong><strong><span>..so the bowling alley conundrum led me to develop my project shortlisted for the ISS (International Space Station) as I thought a lot about how points and works are not to be isolated&#8230;but connected and should be flowing in diff parts of a map&#8230;.to open up perspective and connected augmentations, but also to think about the map again&#8230;not as a base only. then moved into my work with new ways to visualize time and it all really began to gel. The ideas first were published as an essay</span></strong><span> </span><a id="qw.2" title="(http://www.fylkingen.se/hz/n8/hight.html)" href="http://www.fylkingen.se/hz/n8/hight.html"><span>(http://www.fylkingen.se/hz/n8/hight.html)</span></a><span> </span><strong><span>and later my project blog</span></strong><span> (</span><a id="bp.b" title="http://floatingpointsspace.blogspot.com/)" href="http://floatingpointsspace.blogspot.com/%29"><span>http://floatingpointsspace.blogspot.com/)</span></a></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>One thing I noticed when I was reading your paper is how you have been exploring non-Euclidean geometries. Could you explain how this is part of your idea of modulated mapping?</p>
<p><strong><strong><span>Jeremy Hight: Yes, this first came to me when my wife was reading to me from a book on the Poincar&#233; Conjecture and I was hit with a new way to measure events in time, and after months of sketches, schematics and research came to see how it could also be connected to a geo-spatial web of projects and augmentations. It was published in the inaugural issue of Parsons School of Design&#8217;s Journal of Information Mapping which was an exciting fit.</span></strong><span><strong> I call it &#8220;Immersive Event Time&#8221;</strong> (</span><a id="o3rt" title="http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf)" href="http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf%29"><span>http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf)</span></a></strong></p>
<p><span><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_4cxz57xgv_b.jpg"><img class="alignnone size-medium wp-image-4634" title="dgznj3hp_4cxz57xgv_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_4cxz57xgv_b-195x300.jpg" alt="dgznj3hp_4cxz57xgv_b" width="195" height="300" /></a></strong></span></p>
<p><span><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_5g68k9ggh_b.jpg"><img class="alignnone size-medium wp-image-4635" title="dgznj3hp_5g68k9ggh_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_5g68k9ggh_b-300x225.jpg" alt="dgznj3hp_5g68k9ggh_b" width="300" height="225" /></a><br />
</strong></span></p>
<p><strong><strong>so the last 3 years I have been working on how it could all work as channels of augmentation, and building and navigation as open and community in a sense, as well as ai capability; that was the time work especially. how time as experienced within an event is not a time &#8220;line&#8221; but points on and within a form&#8230;.and how this model is better for visualizing events in time and documenting them. it actually sprang from reading a book on the Poincar&#233; Conjecture, popped a bunch of other stuff together so one could visualize an event in time as like being in the belly of a whale..with time as the ribs..and our measure of time as the skin&#8230;and moving within it&#8230;.hoping this will be used as an educational tool</strong></strong></p>
<p><strong><strong>and this also can be tied to ar and map again&#8230;how documentation of important events can be kept within icons on a google map..then download varying visualizations based on bandwidth and desired format</strong></strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>What I have been thinking about is the new forms of social interaction/agency that these kinds of augmentations of space/place/time will create. It seems there are two poles &#8211; one is the area Natalie Jeremijenko explores of shifting social relations from institutions/statistics to real time/location based interactions and new forms of social agency. The other pole completely is more like the cloud based AI and perhaps crowd sourced machine learning.</p>
<p>Your ideas explore the possibilities of both these poles. And certainly one of the big deals of distributed AR would be the possibilities it opens up both for new forms of networked social relationships and for new ways to draw on network effects.</p>
<p><strong><strong><strong>Jeremy Hight:</strong> and cross pollinations within &#8230;that is what my mind goes to</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>The other night I met Assaf Biderman, MIT, from the <a href="http://senseable.mit.edu/trashtrack/" target="_blank">Trash Track</a> team. Trash Track doesn&#8217;t utilize AR but I could see that there are possibilities there.<br />
What do you think?</p>
<p><strong><strong><span>Jeremy Hight: yes, absolutely,</span> </strong><strong>there can be sort of skins on locations that user end selection can yield&#8230;like channels of place&#8230;.and can range from pragmatic core to art and play and places between&#8230;.how this recalibrates the semiotics of map&#8230;more than just augmentation seen as a kind of piggy back on map..map becomes interface and defanged platform if you will, interestingly my more poetic/philosophic writing led me here too</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong></strong> I know they are at very different poles of the system but I do wonder how AR can bring some of the level of social agency/interaction that <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko</a> works on into a productive interaction with the kind of innovations in machine learning that Dolores Labs and others are pioneering?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> Natalie&#8217;s genius to me is in practical functional tech that also opens deeper questions and even new openings of what is needed..amazing layers in her work that way.. succinct yet deep..very deep</strong></strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>Yes &#8211; I am just writing a post about her work &#8211; I find it deeply moving the way she has delved into the possibilities of using technology to open us up to our world. One of the reasons I find distributed AR so interesting is because it will make it possible for all kinds of people to create and use augmentation in their lives and communities.</p>
<p>So to return to how a distributed AR framework could contribute to a project like Trash Track?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> what about using it for community, dissent and awareness raising then? like Natalie&#8217;s work but building like a communal work of multiple points, like the old adage of the elephant and the blind men, sorry..metaphor &#8211; like one of my points in immersive sight was how one could take augmentation as multiple works sort of turning the faces of a thing or place&#8230;and how this would make a larger work even in such a flow so people moving in a space could also build..</strong></strong></p>
<p><strong><strong>what of ar traces left as people move calibrated to user traffic and trash as estimated in an urban space&#8230;like it goes back to Chris Burden in the 70&#8242;s making you know that as you turn the turnstile you are drilling into the foundation and may be the one that collapses the building?</strong></strong></p>
<p><strong><strong>so their movements leave trash. Natalie is all about raising awareness of cause and effect and data, space and ecology. love that. so maybe &#8230;<br />
a feedback loop, artifact and user end responsibility can leave traces &#8230;trash&#8230;</strong></strong></p>
<p><strong><strong>.. cybernetics vs ecology and human waste</strong></strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>could you elaborate?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> brain fart&#8230;that the mass of trash people leave is a piece at a time&#8230;.and how like the space shuttle mission when it was argued the first true cybernaut occurred&#8230;.one cord to air for astronaut..one for computer on their back to fix broken bay arm&#8230;if there is a way to build on that and in relation to the topic&#8230;..how this can go further, that machines do not waste as much&#8230;as ar is a means to cybernetically raise awareness..eh.. hmmm&#8230;</strong><strong> sensors etc&#8230;wearables too &#8211; could be eco awareness with data and machine and human</strong></strong></p>
<p><strong><strong>what about a cloud computing system with a slight ai in the sense of intuitive word cloud and interest scans&#8230;..so as one moves through say New York they can be offered new ai data and services as they move? could also be of eco interests? concerns about urban farming, eco waste, air pollution etc&#8230;.perhaps with (jeremijenko element here) sensors placed in locations and these also giving data reads in public areas with no input but hard data itself&#8230;&#8230;hmm..could be interesting</strong></strong></p>
<p><strong><strong>it can also give info of the carbon footprints (estimated prob unless data is public record somehow) of chain businesses and data on which are more eco friendly as well as an iconography color coded and icon coded to the best places to go to support greening and eco friendly business? and the companies could promote themselves on this service to attract eco aware customers who would be seeing them as kindred spirits and helping the<br />
larger effort?</strong></strong></p>
<p><strong><strong>kind of eco mapping..and ar on mobile app</strong></strong></p>
<p><strong><strong>what about sensors that read air pollution levels, levels of solar radiation (to aid with skin protection in shifting light values in a city space..ie put on some skin cream now&#8230;), light sensors that detect density and over density in public spaces&#8230;to use the old trope in art of reading crowds in a space..but instead could indicate overcrowding, failing infrastructure in public spaces (which is a congestion that leads to greater pollution levels as well as flaws in city planning over time..), and perhaps a tie in to wearables&#8230;&#8230;worn sensors on smart clothes&#8230;.this could form a node network of people in the crowds&#8230;.and also send data within moving in a space&#8230;</strong></strong></p>
<p><strong><strong>here is a kooky thought&#8230; what of taking the computing power and data of people moving in a space..and not only get eco data and make available to them levels of<br />
data..but make possibly a roving super computer&#8230;crunching the deeper data of people open to this&#8230;&#8230;a hive crunching deeper analysis of the space, scan properties from sensors, and even a game theory esque algorithm of meta data if say 40 people out of 50 hit on a certain spike or reading&#8230;and even their input&#8230;..I worked in game theory for paleontology in this manner for a time as a teen&#8230;.a private project&#8230;&#8230; the reading can lead to a sort of meta read by what hits most consistently..as well as in their input..text of what they experienced, observed, postulated, analyzed even&#8230;. this could be really interesting&#8230;even if just the last part from collected data and not from any complex branching of servers..</strong></strong></p>
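The "game theory esque" threshold idea above, where a reading only becomes part of the meta read if, say, 40 of 50 people hit on it, amounts to simple consensus filtering over crowd reports. A minimal sketch, where the report format, the reading labels, and the 80% threshold are all assumptions made for illustration:

```python
from collections import Counter

def meta_read(reports, threshold=0.8):
    """Keep only the readings that at least `threshold` of reporters agree on.

    `reports` is a list of per-user sets of observed readings
    (string labels here, purely for illustration)."""
    if not reports:
        return set()
    counts = Counter()
    for user_readings in reports:
        counts.update(set(user_readings))  # each user counts once per reading
    needed = threshold * len(reports)
    return {reading for reading, n in counts.items() if n >= needed}

# Five roving users; four of them register the same pollution spike.
reports = [
    {"pm25_spike", "noise_high"},
    {"pm25_spike"},
    {"pm25_spike", "crowd_dense"},
    {"pm25_spike"},
    {"noise_high"},
]
print(meta_read(reports))  # -> {'pm25_spike'}
```

Free-text input from users could feed the same counter after normalization, giving the "unspoken dialogue" effect without any complex branching of servers.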
<p><strong><strong>I thought at 19 or so that the flaw in paleontology was in how so many larger theories were shifting exhibitions and larger senses of things, like were there pre-historic birds that were mistaken for amphibian and then back again&#8230;.so why not make a computer program and feed all the papers published into it and see what hits were counted in terms of an emerging meta theory&#8230;and landscape of key points being agreed upon&#8230;this data would be in a sense both algorithmic and a sort of unspoken dialogue&#8230;came from a lot of study of game theory one summer&#8230;</strong></strong></p>
<p><strong><strong>hope this makes some sense&#8230;I forgot to mention that I originally planned to be a research meteorologist and my plan in middle school or so was to get a phd and develop new software to have a global map and then run models of hypothetical storms across it in real time animations of cloud forms, radar and wind analysis/fields, barometric pressure spaghetti charts etc&#8230;.and to also do 3d cut away models of storm architectures&#8230;so been into visualizations of complex data and mapping for a long time!</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>Wow let me think about this one!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/feed/</wfw:commentRss>
		<slash:comments>18</slash:comments>
		</item>
		<item>
		<title>Games, Goggles, and Going Hollywood&#8230;How AR is Changing the Entertainment Landscape: Talking with Brian Selzer, Ogmento</title>
		<link>http://www.ugotrade.com/2009/08/30/games-goggles-and-going-hollywood-how-ar-is-changing-the-entertainment-landscape-talking-with-brian-selzer-ogmento/</link>
		<comments>http://www.ugotrade.com/2009/08/30/games-goggles-and-going-hollywood-how-ar-is-changing-the-entertainment-landscape-talking-with-brian-selzer-ogmento/#comments</comments>
		<pubDate>Mon, 31 Aug 2009 03:38:38 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[nanotechnology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[alternate reality RPG]]></category>
		<category><![CDATA[ambient intelligence]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[AR Network]]></category>
		<category><![CDATA[AR spam]]></category>
		<category><![CDATA[ARBalloon]]></category>
		<category><![CDATA[ARN]]></category>
		<category><![CDATA[augmented reality baseball cards]]></category>
		<category><![CDATA[augmented reality development]]></category>
		<category><![CDATA[augmented reality eyewear]]></category>
		<category><![CDATA[augmented reality hotspots]]></category>
		<category><![CDATA[augmented reality industry]]></category>
		<category><![CDATA[augmented reality network]]></category>
		<category><![CDATA[augmented reality on the iphone]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[augmented reality toys]]></category>
		<category><![CDATA[Blockade]]></category>
		<category><![CDATA[Brad Foxhoven]]></category>
		<category><![CDATA[Brian Selzer]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Cyberpunk]]></category>
		<category><![CDATA[Evolutionary Reality]]></category>
		<category><![CDATA[EyeToy]]></category>
		<category><![CDATA[eyewear for AR]]></category>
		<category><![CDATA[Games Alfresco]]></category>
		<category><![CDATA[Green Tech AR]]></category>
		<category><![CDATA[jim purbrick]]></category>
		<category><![CDATA[Kensuke Tanabe]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Layar Developer Conference]]></category>
		<category><![CDATA[location based RPGs]]></category>
		<category><![CDATA[Lumus]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[markerless mobile augmented reality]]></category>
		<category><![CDATA[markerless natural feature tracking]]></category>
		<category><![CDATA[Masunaga]]></category>
		<category><![CDATA[Metroid]]></category>
		<category><![CDATA[Metroid Prime]]></category>
		<category><![CDATA[Mirrorshades]]></category>
		<category><![CDATA[multiperson mobile AR experiences]]></category>
		<category><![CDATA[Nano Air Vehicles]]></category>
		<category><![CDATA[near field object recognition]]></category>
		<category><![CDATA[new augmented reality trade jargon]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[Pentagon's Robot Hummingbirds]]></category>
		<category><![CDATA[Project Natale]]></category>
		<category><![CDATA[Put a Spell]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[Sekai camera]]></category>
		<category><![CDATA[social gaming platforms]]></category>
		<category><![CDATA[sticky light]]></category>
		<category><![CDATA[The Dawn of the Augmented Reality Industry]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[Topps AR baseball cards]]></category>
		<category><![CDATA[Total Immersion]]></category>
		<category><![CDATA[Vuzix]]></category>
		<category><![CDATA[Wikitude]]></category>
		<category><![CDATA[Yoshio Sakamoto]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4334</guid>
		<description><![CDATA[Picture on the left Mirrorshades, picture on the right a Metroid Hud. &#8220;Augmented Reality is like a Philip K Dick novel torn off its paperback rack and blasted out of iPhones,&#8221; Bruce Sterling in Beyond the Beyond &#8220;a techno visionary dream come true &#8211; those are rare, really rare, you have to be patient,Â  it&#8217;s [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/mirrorshadespost3.jpg"><img class="alignnone size-full wp-image-4349" title="mirrorshadespost3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/mirrorshadespost3.jpg" alt="mirrorshadespost3" width="124" height="204" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/metroid_hud1post2.jpg"><img class="alignnone size-medium wp-image-4350" title="metroid_hud1post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/metroid_hud1post2-300x204.jpg" alt="metroid_hud1post" width="300" height="204" /></a></p>
<p><em>Picture on the left <a href="http://www.amazon.com/Mirrorshades-Cyberpunk-Anthology-Greg-Bear/dp/0441533825" target="_blank">Mirrorshades</a>, picture on the right a <a href="http://en.wikipedia.org/wiki/Metroid" target="_blank">Metroid Hud</a>.</em></p>
<p><strong>&#8220;Augmented Reality is like a Philip K Dick novel torn off its paperback rack and blasted out of iPhones,&#8221; <a href="http://www.wired.com/beyond_the_beyond/2009/08/the-key-take-aways-for-investors-interested-in-the-augmented-reality-field/" target="_blank">Bruce Sterling in Beyond the Beyond</a></strong></p>
<p><strong>&#8220;a techno visionary dream come true &#8211; those are rare, really rare, you have to be patient,Â  it&#8217;s super cyberpunk&#8221;&#8230; Bruce Sterling, <a href="http://vimeo.com/6189763" target="_blank">&#8220;At the Dawn of the Augmented Reality Industry.&#8221; </a></strong></p>
<p>The Dawn of the Augmented Reality Industry continues to brighten, and now we have two augmented reality companies, <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a> and <a href="http://ogmento.com/" target="_blank">Ogmento</a>, firmly established in Hollywood &#8211; the dream mother of so many of our augmented realities.<a href="http://ogmento.com/" target="_blank"></a></p>
<p><a href="http://ogmento.com/" target="_blank">Ogmento</a> is the most recent of these two pioneering augmented reality companies to set up shop in LA.Â  <a href="http://www.t-immersion.com/" target="_blank">Total Immersion&#8217;s</a> CEO Bruno Uzzan moved to LA from France two years ago, although he still has a fifty person RandD team in France.Â Â  Total Immersion began 10 years ago in the quiet, lonely, hours before the dawn of an AR industry.Â  But <a href="http://gamesalfresco.com/2009/07/23/mattel-launches-augmented-toys-at-comic-con/" target="_blank">Total Immersion&#8217;s AR toys for Mattel,</a> and augmented reality for <a href="http://www.youtube.com/watch?v=I7jm-AsY0lU" target="_blank">Topps baseball cards</a>, fired CNet writer Daniel Terdiman up enough to say, &#8220;I have seen the future of toys, and it is augmented reality&#8221; (<a href="http://news.cnet.com/8301-13772_3-10317117-52.html" target="_blank">see full post here on CNet</a>).</p>
<p>Recently, I talked with <a href="http://www.ugotrade.com/2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/" target="_blank">Ori Inbar, one of the founders of Ogmento</a> and the premier augmented reality blog <a href="http://gamesalfresco.com/" target="_blank">Games Alfresco</a> about his new venture in Hollywood. Bruce Sterling, <a href="http://twitter.com/bruces" target="_blank">@bruces</a>, had some fun with my invention of <a href="http://www.wired.com/beyond_the_beyond/2009/08/augmented-reality-ogmento/" target="_blank">brand new augmented reality trade jargon here</a>! Ori pointed out Ogmento brings two important new facets to the rapidly growing augmented reality field: firstly, they are bringing leadership from veterans of the entertainment industry into augmented reality development. <a id="squu" title="Brad Foxhoven" href="http://www.blockade.com.nyud.net:8080/about/about-blockade" target="_blank">Brad Foxhoven</a> and <a id="odvk" title="Brian Seizer" href="http://brianselzer.com/">Brian Selzer</a> from <a id="xow_" title="Blockade" href="http://www.blockade.com/" target="_blank">Blockade</a> have partnered with Ori on Ogmento. And, in another important step forward for a young industry, Ogmento announced they will be acting as publishers for a fast growing cohort of augmented reality application developers and helping AR development teams out there bring their concepts to the market.</p>
<p>So I was very happy also to have the opportunity to talk with Brian Selzer. As Bruce Sterling pointed out in his seminal <a href="http://eurekadejavu.blogspot.com/2009/08/augmented-realitys-sermon-on-flatlands.html" target="_blank">sermon from the flatlands</a> at the <a href="http://layar.com/" target="_blank">Layar</a> Developer Conference, AR is kind of a &#8220;Hollywood scene.&#8221; We have seen the web early adopter/developer/blogger community embrace augmented reality browser experiences in recent weeks in an awesome wave of enthusiasm. Are Hollywood creatives equally smitten? For the answers see the full interview with Brian Selzer below.</p>
<p>Brian Selzer (<a href="http://brianselzer.com/" target="_blank">www.brianselzer.com</a> and <a href="http://twitter.com/brianse7en" target="_blank">twitter &#8211; brianse7en</a> ) has an extensive involvement with emerging platforms:</p>
<p><strong>&#8220;from launching dot com entertainment sites in the late 90&#8242;s to creating early versions of social gaming platforms, or bringing big brands like Spider-Man and X-Men into the mobile space for the first time. Â Last year I was focused on bringing video game characters and worlds into the online space as UGC [user generated content] projects (<a href="http://www.mashade.com/" target="_blank">mashade.com</a>, <a href="http://www.instafilms.com/" target="_blank">instafilms.com</a>).&#8221;</strong></p>
<p>I began my own career in Hollywood doing motion control photography and creating software that bridged the language of robotics and servo motors with the visions of film directors. Eventually our little company, NPlus1, moved on to 3D vision systems and image recognition work. So yes, I have been really, really patient waiting for this particular techno-visionary dream. And, while I have been waiting for augmented reality to manifest, I have grown to love the internet. But now, how awesome, <a href="../../2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">it is OMG finally for mobile AR!</a></p>
<p>Augmented reality is busting out all over &#8211; through our laptops, our phones, on the streets, toys, baseball cards, art installations, <a href="http://www.youtube.com/watch?v=9noMfsg486Y" target="_blank">sticky light calligraphy</a> and more.</p>
<p>Many of my questions to Brian were directed at how and when we will see augmented realities with near-field object recognition, image recognition and tracking and, of course, the elusive eyewear. As Bruce Sterling points out, we are just at the very, very beginning &#8211; the dawn of an industry. I created the photomontage below on the right to complement <em><a href="http://www.tonchidot.com/">Tonchidot&#8217;s</a> </em>illustration suggesting the evolutionary inevitability of holding our phones up (below on the left). The Evolutionary Reality of AR will not end there. It is just a step toward eyewear, hummingbirds or <a href="http://gizmodo.com/5306679/pentagons-robot-hummingbird-christened-nano-air-vehicle" target="_blank">Nano Air Vehicles</a>, and more&#8230;</p>
<h3>The Evolutionary Reality of AR</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-96.png"><img class="alignnone size-medium wp-image-4359" title="Picture 96" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-96-300x97.png" alt="Picture 96" width="300" height="97" /></a></p>
<p><em>Cartoon on the left by <a href="http://www.tonchidot.com/">Tonchidot</a>; on the right, a collage of a stock photo and the <a href="http://gizmodo.com/5306679/pentagons-robot-hummingbird-christened-nano-air-vehicle" target="_blank">Pentagon&#8217;s robot hummingbirds &#8211; &#8220;Nano Air Vehicles.&#8221;</a></em></p>
<p>While we finally have, in the iPhone, an affordable mediating device with the horsepower, mindshare and business model to bring AR mainstream, the much anticipated Apple 3.1 Beta SDK to be released in September will not, I am sure, open up the Video API at the levels that augmented realities with near-field object recognition and tracking require (I would love to be proved wrong though). But the magic wand to deliver even tightly registered AR graphics/media (which require a lot of CPU and GPU) to a wide audience is in our hands, so full access may not be far off. And others, of course, can/will/might knock the iPhone off its current pedestal. AR made its mobile phone debut on Android, after all.</p>
<p>Like everyone else who loves AR, I wish that Apple would open up faster (and I wish Android would manifest on some rocking hardware). But we will see enough of the iPhone Video API open for the next generation of mobile augmented reality games and applications to emerge in the coming months.</p>
<p>One of these will be Ogmento&#8217;s. Although Ogmento is in stealth mode, they have released <a href="http://www.youtube.com/watch?v=EB45O7-6Xrg&amp;eurl=http%3A%2F%2Fogmento.com%2F&amp;feature=player_embedded" target="_blank">a teaser for their first game, &#8220;Put A Spell,&#8221;</a> developed by ARBalloon &#8211; screenshot below. Ori did reveal to me in <a href="../../2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/" target="_blank">this interview</a> that they are doing image recognition and using the Imagination AR engine.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-95.png"><img class="alignnone size-medium wp-image-4356" title="Picture 95" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-95-300x177.png" alt="Picture 95" width="300" height="177" /></a></p>
<p>As Brian notes, Hollywood has had the AR bug for a long time. AR has been everywhere in science fiction movies and video games. Nintendo&#8217;s SPD3 head Kensuke Tanabe, &#8220;effectively the man in charge of overseeing all the <em>Metroid</em> franchise underneath original co-creator Yoshio Sakamoto,&#8221; explains the story of <em>Metroid</em> to Brandon Boyer of <a href="http://www.offworld.com/2009/08/retro-effect-a-day-in-the-stud.html" target="_blank">Offworld here</a> (an image of a Metroid HUD opens this post, on the right):</p>
<p><strong>&#8220;the idea of the different visors you use in the <em>Prime</em> games to interact with the world: the scan visor, for instance, set the game apart from other first person shooters in that the player was using it to proactively collect information from the world, rather than having the story come to them passively, in the form of cut-scenes or narration. &#8220;<em>Prime</em> could have adventure elements with the introduction of this visor,&#8221; says Tanabe, &#8220;That&#8217;s how we came up with the genre &#8212; first person adventure, instead of shooter.&#8221;</strong></p>
<p>But as Brian points out:</p>
<p><strong>&#8220;the light bulb has been lit and Hollywood is seeing that the software and hardware are here today to deliver these types of AR experiences in real life (to a lesser extent of course, but the path is getting clear).&#8221;</strong></p>
<h3>Talking with Brian Selzer</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/me.jpg"><img class="alignnone size-full wp-image-4363" title="me" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/me.jpg" alt="me" width="188" height="227" /></a></p>
<p><strong>Tish Shute: </strong>Bruce Sterling&#8217;s sermon at the Layar Developer Conference, <a href="http://www.wired.com/beyond_the_beyond/2009/08/at-the-dawn-of-the-augmented-reality-industry/" target="_blank">&#8220;At the Dawn of the Augmented Reality Industry,&#8221;</a> was absolutely awesome. He spread the future feast/orgy of augmented reality before us &#8211; and described many of the dishes we will be tasting, both delectable and diabolical. One of the many things he points out is that AR is kind of a &#8220;Hollywood scene.&#8221; And, as Ogmento is one of only two augmented reality companies in Hollywood, I am interested to hear how it looks from your neck of the woods. We have seen the web early adopter/developer/blogger community embrace augmented reality browsers in recent weeks in an awesome wave of enthusiasm &#8211; are Hollywood creatives catching the buzz?</p>
<p><strong>Brian Selzer: It was a thrill to hear Bruce Sterling mention Ogmento. I devoured all of his Cyberpunk books back in the 80&#8242;s, along with writers like Gibson, Rucker, Shirley&#8230; To me, sci-fi writers are the visionaries who define and influence our technological paths into the future. They make science and tech sexy enough to want to manifest those experiences in the real world. Clearly Bruce sees the AR industry as being sexy. I love that he called it &#8220;a techno-visionary dream come true&#8230; and super-cyberpunk.&#8221; And yes, kind of a Hollywood scene.</strong></p>
<p><strong>Hollywood creatives caught the AR bug before they knew what AR was. Look at science fiction movies and video games to see AR everywhere. Terminator, The Matrix, Minority Report, Iron Man&#8230; the list goes on. Look at any video game with an integrated heads-up display. It&#8217;s clear Hollywood loves AR. It&#8217;s only been in the past few months though that the light bulb has been lit and Hollywood is seeing that the software and hardware are here today to deliver these types of AR experiences in real life (to a lesser extent of course, but the path is getting clear). So yes, the buzz is here and it&#8217;s strong. With that, we all have to be prepared for the good, the bad and the ugly as AR goes mainstream.</strong></p>
<p><strong>It certainly goes to show how young this industry is when Ogmento and Total Immersion are currently the only AR companies based in Los Angeles. It&#8217;s very exciting to be the only company right now demonstrating a natural feature tracking (markerless) iPhone experience in Hollywood. We are in talks to bring some very big brands and properties to the mobile AR space. The goal is to deliver experiences that create added engagement and value for the consumer.</strong></p>
<p><strong>Tish Shute:</strong> Also in his landmark sermon, Bruce Sterling noted that augmented reality has been around for 17 years and now at last we are seeing the dawning of an augmented reality industry. What inspired you to take up the challenge of launching an augmented reality company in Hollywood? Oh, and congrats that Bruce Sterling name-checked Ogmento in his list of companies that prove that this really is the dawn of an industry!</p>
<p><strong>Brian Selzer: I&#8217;ve always been involved in emerging platforms&#8230; from launching dot com entertainment sites in the late 90&#8242;s to creating early versions of social gaming platforms, or bringing big brands like Spider-Man and X-Men into the mobile space for the first time. Last year I was focused on bringing video game characters and worlds into the online space as UGC projects (mashade.com, instafilms.com). Working with all these great CG game assets, I continued to think about what&#8217;s next, and that&#8217;s when I started to follow AR very closely and started engaging with those who were pioneering in the space.</strong></p>
<p><strong>I remember swapping instant messages with <a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> (<a href="http://twitter.com/robertrice" target="_blank">@robertrice</a>) right after the 2008 Super Bowl. We were not chatting about the football game, but rather about some of the commercials that aired during the event as a sign that AR was making its way into the mainstream. A lot of people became aware of AR for the first time when the <a href="http://ge.ecomagination.com/smartgrid/" target="_blank">GE SmartGrid commercial</a> aired. There were all these YouTube videos popping up of people blowing on holographic wind turbines.</strong></p>
<p><strong>The commercial that really got me excited though was the <a href="http://www.youtube.com/watch?v=Kwke0LNardc" target="_blank">Coke Avatar commercial</a>. In that commercial, people in the city were sporadically being portrayed as their digital personas, avatars, gaming characters, etc. For me that spot did a great job showing how many of us already have these &#8216;alter egos&#8217; that live in cyberspace, and how the line between these worlds can sometimes be blurred. I remember watching that commercial and thinking that is exactly the type of experience I&#8217;d like to create with mobile AR. I want to overlap the virtual world into our everyday reality. Why can&#8217;t I bring my World of Warcraft or Second Life persona with me into the real world?</strong></p>
<p><strong>I am big on the notion of &#8220;Games and Goals.&#8221; I believe that games have the power to motivate people in a very powerful way. By challenging ourselves while playing a game we can climb mountains. Augmented Reality is the perfect platform to bring gaming into the real world. By mixing the virtual world with the physical world, this added layer of perception provides a very powerful experience for something like a role-playing game.</strong></p>
<p><strong>One of my earlier social-gaming projects was a website called Superdudes. This was a &#8220;Be Your Own Superhero&#8221; concept that celebrated and motivated kids to create superhero avatars/personas online, and we gave members all sorts of games, challenges, and rewards, some of which carried into the real world. The site recognized members for teamwork, creativity, volunteer work and things like that. So the Superdudes were often involved in charity events and benefits to help children. Everybody called each other by their superhero names, and the line between fantasy and reality was being blurred. This project really got me thinking about what happens when you take positive role-playing like this and mix it into the real world. I started to work on a plan for location-based activist missions for points and rewards, but never got to complete that. So I have some unfinished business here.</strong></p>
<p><strong>I think it would be fantastic to be able to show up to some type of fun event with friends, and everybody could see each other&#8217;s alter-ego personas standing before them. When you can turn the world into a playground, and use the power of gaming to make a positive impact on the planet&#8230; well, I don&#8217;t think there is anything better than that. These are the types of projects that drive me, and I think AR is the best platform to support these types of social gaming experiences.</strong></p>
<p><strong>Tish:</strong> Does Ogmento have any RPGs under development? I noticed in the Google Wave on RPGs someone has been working on doing something with the Dungeons &amp; Dragons API. I am interested in exploring the web of protocols underlying Wave as a transport mechanism for multi-person, mobile AR experiences (not requiring downloads) on an open global outdoor AR network. If not Wave, what do you see as the potential infrastructure and protocols we could harness for an open augmented reality network?</p>
<p><strong>Brian: Ogmento has a deep background in video games and we interact regularly with most of the major game publishers. As a company we are not so much developing our own RPGs right now, but rather exploring what mobile AR extensions make sense for existing brands. There are many limitations to location-based gaming, but a global AR network is exactly along the lines we are thinking. Lots of discussions are taking place on protocols, platforms, and APIs, and there are numerous ways to approach this. We need to be able to use what&#8217;s available now and continue to refine and customize for AR&#8217;s specific needs and issues as we progress.</strong></p>
<p><strong>In general though, Ogmento is focused on what types of experiences can be had today and over the next couple of years. I still think we are several years out from a truly open augmented reality network. We are certainly looking at launching our own &#8220;Ogmented Network&#8221; which would support some fun treasure-hunt type experiences, or add an entertainment layer on top of traditional outdoor marketing campaigns.</strong></p>
<p><strong>Tish:</strong> I don&#8217;t know whether you have read Thomas Wrobel&#8217;s ideas for an open augmented reality network that I just <a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">published here on UgoTrade</a>. The principles he talks about are very important if augmented reality is to become a major part of our lives. Considering the difficulty open networks can pose for emerging business models, how can we fund the development of an open framework for augmented reality?</p>
<p>&#8220;<em>a future AR Network, I mean one as universal and as standard as the internet. One where people can connect from any number of devices, and without additional downloads, experience the majority of the content.<br />
Where people can just point their phone, webcam, or pair of AR glasses anywhere were a virtual object should be, and they will see it. The user experience is seamless, AR comes to them without them needing to â€œprepareâ€ their device for it.&#8221;</em></p>
<p><strong>Brian: I think funding for these types of projects will definitely come from venture capital groups in the near future. It&#8217;s early in AR, but the VCs are watching and deciding which horses to bet on. Until that time, it&#8217;s about service work, and developing AR experiences for others with what is possible today. That work will help fund internal development of original AR products and platform development.</strong></p>
<p><strong>Tish:</strong> How did you get started with Ogmento?</p>
<p><strong>Brian: My first conversation with Ori was actually about my interest in location-based RPG concepts. We had a long conversation about the possibilities with AR, and it was clear that we shared similar interests, but were coming from different, complementary backgrounds. The idea of collaboration was exciting, so we just kept talking until the timing felt right. Now, with Ogmento, we bring a unique blend of AR development experience with deep backgrounds in AR technology, animation, video games, entertainment, social media, etc. I think this is a powerful mix that will allow us to do some great things.</strong></p>
<p><strong>It&#8217;s still so early, and things are just getting started in AR. There are only so many webcam magic tricks you can enjoy before you are ready for something else. The location-based apps have the most potential in my opinion, which is why we are really focused on mobile AR. We have some board-game type projects, which do not instantly scream location-based gaming, but if you look at something like the ARhrrr board game, you can see how much more compelling it can be when the game invites the player to be actively moving around during the experience.</strong></p>
<p><strong>Tish:</strong> I am interested in your perspective on how we can create the kind of AR experiences that really embody what has always been so exciting about AR &#8211; the tight alignment of graphics and media with real-world objects and ultimately a rich immersive 3D experience. So I am going to hit you with a bunch of those &#8220;Is this really eyewear or vaporware?&#8221; questions. The real-deal eyewear changes everything!</p>
<p>While eyewear is a big challenge technically and aesthetically, I am pretty sure that there are several outfits out there that can pull off the optics and projection. Will the entertainment industry get excited enough to put a major push into delivering the eyewear in short order, instead of the 5 to 10 year project that some people still think it is? Is the business development challenge perhaps bigger than the technical obstacles? What is your view on this?</p>
<p>And perhaps the eyewear is a clear example of a need for partnerships. For example, we have seen efforts from companies like <a href="http://www.vuzix.com/home/index.html" target="_blank">Vuzix</a> and <a href="http://www.lumus-optical.com/" target="_blank">Lumus</a>, and recently a Japanese company, <a href="http://www.masunaga1905.jp/brand/teleglass/">Masunaga</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-97.png"><img class="alignnone size-medium wp-image-4386" title="Picture 97" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-97-300x80.png" alt="Picture 97" width="300" height="80" /></a></p>
<p>I have no reports yet from people who have tried the Masunaga eyewear. But current eyewear offerings available at a reasonable price point &#8211; limited by a small field of view, and tethered &#8211; are not workable solutions for augmented reality experiences. The problems are not insurmountable, though. What will facilitate the real deal? It seems critical to start creating hardware relationships now. The industry is costly and slow moving and, as Robert Rice put it to me in a recent conversation, &#8220;once the software cat is out of the bag, it&#8217;s going to go wild, and if the hardware isn&#8217;t there, it&#8217;s going to stutter.&#8221;</p>
<p>As Ori notes, some of the hardware companies like Intel and others don&#8217;t seem to be paying enough attention to AR. Ori points out they don&#8217;t see the demand yet. But in order to create an awesome AR experience and demand from a mass audience, don&#8217;t we need to work in conjunction with hardware designers?</p>
<p><strong>Brian: It&#8217;s fun to think about who will eventually deliver a great hardware solution for AR glasses. It will happen. It would be cool to see somebody like an Oakley or Nike partnered up with a company like Vuzix to deliver something people actually might wear in public. Perhaps a hardware manufacturer like Apple or Nokia will bring us something like the iSight or the NGaze down the line. I&#8217;d love to see a set of glasses designed by Ideo. Microsoft and Sony are already playing with technologies like Project Natal and the EyeToy, so I think it&#8217;s only a matter of time before they deliver an eyewear solution. I would even look to the toy companies to eventually make an investment here.</strong></p>
<p><strong>Gamers will be the early adopters, and in a few years we may start to see people running around in the park wearing glasses with headsets, but it will be acceptable because it&#8217;s clear they are using them for a game. It&#8217;s going to take a very sexy and stylish piece of hardware for everyday people to be willing to wear AR glasses in public while going about their everyday business. It&#8217;s like the recent cover of Wired magazine where Brad Pitt is wearing a mobile headset in his ear, and the editors point out that even he can&#8217;t pull that look off, so why do you think you can? When AR glasses come in designer frames, and you can&#8217;t tell them from non-AR glasses, to me that&#8217;s when things get really interesting from a mass-adoption perspective. Compare how many people were carrying around a mobile phone in the 80s to now. I think it will be the same thing with glasses.</strong></p>
<p><strong>I was in an AR pitch meeting the other week at a very significant media company, and brought up the point that today&#8217;s handheld smartphones will eventually evolve into tomorrow&#8217;s smartglasses. My comment was quickly shrugged off as sort of a sci-fi notion that was irrelevant to the business at hand. Probably true, but I think it is important to understand where digital media and entertainment is going, so you can adapt quickly, and evolve into those spaces more naturally. The more we see people walking around with their smartphones in front of their faces (like a camera), the sooner it will be that we make the jump to eyeglasses as a key hardware device for AR experiences.</strong></p>
<p><strong>At Ogmento, we definitely are working on AR experiences with the hardware and software available today. We will get some product out this year, and 2010 will be a banner year for markerless mobile AR in general. I think the entire AR community is looking forward to bringing this technology to the mainstream in the form of games, marketing campaigns, virtual docent apps, and much more. It might not be the full experience we are all dreaming about for some time, but we can see the path and the true potential, and it&#8217;s pretty spectacular.</strong></p>
<p><strong>You mention the tight alignment of graphics and media with real-world objects. That is really our focus. A lot of well-deserved attention is going to the browser overlay &#8220;post-it&#8221; approach right now, which uses compass and GPS. We are focused on markerless natural feature tracking, so once you identify something that is AR-enhanced in your environment, you can interact with that integrated experience. On an iPhone that can be as simple as using your touch screen to interact. When you are wearing glasses, it becomes more about visual tracking. There are lots of smart people thinking through these issues, many of whom you have interviewed. It is my hope that there are exciting collaborative efforts to be had in the coming months to get us all there together and faster.</strong></p>
<p><strong>Tish:</strong> Bruce touched on some of the hard problems that have to be solved for augmented reality &#8211; and he noted, for instance, that security needs to be tackled in the early stages. Robert made a nice list: <em>&#8220;privacy, media persistence, spam, creating UI conventions, security, tagging and annotation standards, contextual search, intelligent agents, seamless integration and access of external sensors or data sources, telecom fragmentation, privilege and trust systems, and a variety of others.&#8221;</em> Will Ogmento be leading the way in solving some of these hard problems?</p>
<p>And won&#8217;t trying to solve these hard problems for networked AR in walled-garden scenarios, one company at a time, lead to a lot of wasted energy reinventing the wheel?</p>
<p><strong>Brian: These are all important issues, and again there are a lot of smart people thinking about solutions to these problems on a daily basis. Ogmento is interested in partnering with developers and supporting their efforts as a publisher of mobile AR experiences. While we intend to roll up our sleeves in these areas, we are currently more focused on taking AR mainstream with the hardware and software available today. As the industry evolves, so will Ogmento. As the opportunities evolve, our ability to make a greater impact tackling these issues will be realized.</strong></p>
<p><strong>Tish: </strong>Another area of development that could really kick AR into high gear might be creating augmented reality hotspots, where we can deliver the kind of location accuracy/instrumentation necessary to create interesting AR experiences (partnership with Starbucks, perhaps?!). Augmented reality hotspots could deliver the kind of high-quality AR experience that isn&#8217;t possible ubiquitously at the moment, and may be a real way to get people exploring the potential of AR now, rather than later?</p>
<p><strong>Brian: Agreed. I see a great opportunity here with this approach.</strong></p>
<p><strong>Tish:</strong> Although there are many obstacles to Green AR &#8211; the energy-hogging servers at the backend for starters! Last week I had a conversation with Gavin Starks of <a href="http://www.amee.com/?page_id=289" target="_blank">AMEE</a>, <a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> and <a href="http://jimpurbrick.com/" target="_blank">Jim Purbrick</a> about how to work with AMEE and the technology available to encourage Green Tech AR development (<a href="http://blog.pachube.com/2009/06/pachube-augmented-reality-demo-with.html" target="_blank">see an early exploration of green tech AR from Pachube here</a>).</p>
<p>We came up with the idea of holding a competition, perhaps centered around a targeted instrumented space. But I would really love to hear your thoughts on the topic of Green Tech AR (the energy-hogging servers at the back end being the first cloud on the horizon!). Cool GreenTech AR imaginings, social gaming ideas, RPGs, not necessarily even tied to the immediately practical, would be like rain in a drought!</p>
<p><strong>Brian: I go back to &#8220;Games and Goals&#8221;&#8230; If you make environmental and other activist efforts fun and rewarding, more people are likely to be motivated and participate. Can you imagine having a personal &#8220;carbon footprint stat&#8221; floating over yourself at all times? Or over your home or factory? How would that change your behavior? We all love stats. Look at how the Nike+ campaign has used technology and gaming to motivate people to run. I think there is a lot that can be done to make being green fun. It starts with the individual, and spreads from there. Keep me posted on that one!</strong></p>
<p><strong>Tish:</strong> I would also like to explore further the <a href="http://www.readwriteweb.com/archives/augmented_reality_human_interface_for_ambient_intelligence.php" target="_blank">RRW suggestion that ambient intelligence is both the Holy Grail of AR and possibly snake oil</a>:</p>
<p><em>&#8220;The holy grail of the mobile AR industry is to find a way to deliver the right information to a user before the user needs it, and without the user having to search for it. This holy grail is likely in a ditch somewhere beside a well-traveled road in the district of the semantic Web, ambient intelligence and the Internet of things. Be wary of any hyped-up invitation to invest in a company that claims to have gotten the opportunity right. What we&#8217;ve seen in the commercial industry to date is a rather complex version of a keyboard, mouse, and monitor.&#8221;</em></p>
<p>So Holy Grail, Snake Oil, or a ditch somewhere&#8230;?</p>
<p><strong>Brian: I instantly think of Minority Report, where Tom Cruise&#8217;s character is being bombarded with holographic ads personalized with his name and to his current situation. In the future, spam is a nightmare, especially when it knows who you are. I think the key thing here is delivering &#8220;the right information&#8221;, and we still don&#8217;t have that down. I do see a day where we can truly customize what comes to us, how we want it, when we want it. My future vision of ambient intelligence is the ability to &#8220;turn everything off&#8221; if I want to&#8230; block out the stimuli and replace it with images of nature, or natural surroundings, etc. Where I live in Los Angeles, we have those digital billboards everywhere, so it&#8217;s like advertising overload wherever you look (hints of Blade Runner). I personally don&#8217;t mind them, but I know there is great debate about there being simply too many billboards everywhere. So AR would only add to the noise of life by adding yet another digital overlay of information, right?</strong></p>
<p><strong>Perhaps the holy grail is to use technology to filter things out. AR might become a solution for leading a simpler life, or a perfectly customized life if you want that. Ultimately the control needs to be with the individual. I guess I am talking about something like TiVo taken to the extreme.</strong></p>
<p><strong>Tish:</strong> And then that other biggy &#8211; augmented reality search! I am asking this next question of <a href="http://www.wikitude.org/" target="_blank">Wikitude</a> and <a href="http://sekaicamera.com/" target="_blank">Sekai Camera</a> too, and now I must also ask <a href="http://www.acrossair.com/" target="_blank">Acrossair</a> and several others, I guess! Obviously a huge area of opportunity in this broader landscape that uses location awareness, barcode scanners, image recognition and augmented reality is to harness the collective intelligence &#8211; a whole new field of search. There is the beginning of a discussion on this <a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">in the comments here</a>.</p>
<p>What will it take, in your view, to become a leader in augmented reality search?</p>
<p><strong>Brian: I&#8217;m more of a content guy, so I tend to focus on things like UI, quality of creative, etc. From that perspective, I am looking forward to evolving beyond the &#8220;post-it&#8221; text overlay user experience we see now in AR search. I was impressed with the TAT Augmented ID concept and hope we start seeing more smart design solutions like that emerging in the space. There are some great new design approaches coming out of the location-aware space that should be applied to AR search. I&#8217;ve been studying the heads-up display designs being used in video games, and re-watching movies like Iron Man for ideas. This is another example where Hollywood has painted a polished picture of what AR can and should look like, and the masses have already accepted these design approaches. So from that perspective, from my view the leaders in search will be delivering sexy, smart and simple solutions. It&#8217;s all about the S&#8217;s.</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/08/30/games-goggles-and-going-hollywood-how-ar-is-changing-the-entertainment-landscape-talking-with-brian-selzer-ogmento/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
	</channel>
</rss>
