<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; distributed augmented reality</title>
	<atom:link href="http://www.ugotrade.com/tag/distributed-augmented-reality/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Visual Search, Augmented Reality, and Physical Hyperlinks for Playfulness, Not just Purchases: Talking with Paige Saez about ImageWiki</title>
		<link>http://www.ugotrade.com/2010/03/18/visual-search-augmented-reality-and-physical-hyperlinks-for-playfulness-not-just-purchases-talking-with-paige-saez-about-imagewiki/</link>
		<comments>http://www.ugotrade.com/2010/03/18/visual-search-augmented-reality-and-physical-hyperlinks-for-playfulness-not-just-purchases-talking-with-paige-saez-about-imagewiki/#comments</comments>
		<pubDate>Fri, 19 Mar 2010 03:25:17 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARNY]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[Augmented reality Magician]]></category>
		<category><![CDATA[Augmented Reality Meetup]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Chris Grayson]]></category>
		<category><![CDATA[distributed augmented reality]]></category>
		<category><![CDATA[Gamepocalypse]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[imagewiki]]></category>
		<category><![CDATA[Imagwik]]></category>
		<category><![CDATA[interaction design]]></category>
		<category><![CDATA[Jason Kolb]]></category>
		<category><![CDATA[Jesse Schell]]></category>
		<category><![CDATA[linked data]]></category>
		<category><![CDATA[linked data and augmented reality]]></category>
		<category><![CDATA[Makerlab]]></category>
		<category><![CDATA[Marco Tempest]]></category>
		<category><![CDATA[open augmented reality]]></category>
		<category><![CDATA[open Frameworks]]></category>
		<category><![CDATA[open Frameworks and augmented reality]]></category>
		<category><![CDATA[OpenCV]]></category>
		<category><![CDATA[OpenCV and augmented reality]]></category>
		<category><![CDATA[optical character recognition]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[paige saez]]></category>
		<category><![CDATA[physical hyperlinking]]></category>
		<category><![CDATA[physical world platform]]></category>
		<category><![CDATA[point and find]]></category>
		<category><![CDATA[RDF and Augmented Reality Search]]></category>
		<category><![CDATA[semantic web and augmented reality]]></category>
		<category><![CDATA[snaptell]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[social augmented reality]]></category>
		<category><![CDATA[social commons]]></category>
		<category><![CDATA[Social Commons for Augmented Reality]]></category>
		<category><![CDATA[SPARQL]]></category>
		<category><![CDATA[SPARQL and ARWAVE]]></category>
		<category><![CDATA[SPARQL and Wave]]></category>
		<category><![CDATA[SPARQL and XMPP]]></category>
		<category><![CDATA[Steven Feiner]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[ubiquity]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[Where2.0]]></category>
		<category><![CDATA[Will Wright]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5262</guid>
		<description><![CDATA[The video above, The Imawik commercial, is a collaboration between In The Can Productions and Paige Saez for Makerlab &#8220;The Imawik (ImageWiki) is a visual search tool for mobile devices. It allows for the ability to turn images into physical hyperlinks, conflating visual culture with a community-editable universal namespace for images.&#8221; Paige Saez is an [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="225" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=2818525&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /><embed type="application/x-shockwave-flash" width="400" height="225" src="http://vimeo.com/moogaloop.swf?clip_id=2818525&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p><em>The video above, <a href="http://www.vimeo.com/2818525" target="_blank">The Imawik commercial</a>, is a collaboration between <a href="http://www.inthecanllc.com/" target="_blank">In The Can Productions</a> and <a href="http://makerlab.com/who.html" target="_blank">Paige Saez</a> for <a href="http://makerlab.com/projects_show_imagewiki.html" target="_blank">Makerlab</a>.</em></p>
<p>&#8220;The Imawik (<a href="http://imagewiki.org/" target="_blank">ImageWiki</a>) is a visual search tool for mobile devices. It allows for the  ability to turn images into physical hyperlinks, conflating visual  culture with a community-editable universal namespace for images.&#8221;</p>
<p>Paige Saez is an artist, designer, and researcher. In 2007 she and <a href="http://www.hook.org/" target="_blank">Anselm Hook</a> founded <a href="http://makerlab.com/projects_show_imagewiki.html" target="_blank">Makerlab</a>, an arts and technology incubator focused on civic and environmental projects.</p>
<p>Paige and Anselm (see my interview with Anselm Hook here, <a title="Permanent Link to Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook" rel="bookmark" href="http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/">Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook</a>) have been asking a very important question:</p>
<p><strong>&#8220;Who Will Own Our Augmented Future?&#8221;</strong></p>
<p>But most importantly, they have been actually developing applications (again, <a href="http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/" target="_blank">see my interview with Anselm</a> for more background) that let people play with, hack, explore, and create with the physical world platform, and imagine new possibilities for physical hyperlinking and augmented realities. This is pretty important stuff, and kudos to Paige and Anselm for beginning this work before the big players &#8211; <a href="http://www.google.com/mobile/goggles/#dc=gh0gg" target="_blank">Google Goggles</a>, <a href="http://pointandfind.nokia.com/" target="_blank">Point and Find</a>, and <a href="http://www.snaptell.com/" target="_blank">SnapTell</a> &#8211; came hurtling into the field of visual search and physical hyperlinking. <a href="http://techblips.dailyradar.com/video/translation-in-google-goggles-prototype/" target="_blank">See this demonstration of translation and optical character recognition</a> in Google Goggles. Also check out Jamey Graham&#8217;s (Ricoh Research) Ignite presentation at Tools of Change 2010, <a href="http://www.toccon.com/toc2010/public/schedule/detail/13370" target="_blank">Visual Search: Connecting Newspapers, Magazines and Books to Digital Information without Barcodes</a>; for more, see <a href="http://ricohinnovations.com/betalabs/visualsearch">ricohinnovations.com/betalabs/visualsearch</a>.</p>
<p>We are only just beginning to get a glimpse of how contested the social commons of the physical world platform is going to be &#8211; see the Yelp <a href="http://blogs.wsj.com/digits/2010/03/17/small-businesses-join-lawsuit-against-yelp/" target="_blank">controversy</a>.</p>
<p>As Paige points out:</p>
<p>&#8220;<strong>The lens that you are actually  looking through was as important as what you were looking at. And  democratizing that lens became the most important thing that we could  possibly do.&#8221;</strong></p>
<p>I am in total agreement. One reason I have so much enthusiasm for <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">ARWave</a> (note: if you are interested in following the developer conversations, there are several public Waves) is that I see this open framework playing an important role in the democratization of our augmented views &#8211; by creating an open, distributed, and universally accessible platform for augmented reality that will make creating augmented reality content and games as simple as making an HTML page or contributing to a wiki.</p>
<p>Federation, real time collaboration, and <a href="http://linkeddata.org/" target="_blank">linked data</a> &#8211; ARBlips that carry metadata usable for semantic searches, and modified wave servers that can listen and respond properly to <a href="http://www.w3.org/TR/rdf-sparql-query/" target="_blank">SPARQL</a> HTTP requests (see Jason Kolb&#8217;s <a href="http://jasonkolb.com/" target="_blank">many interesting posts</a> on XMPP and Wave) &#8211; these are just some of the reasons why ARWave could revolutionize augmented reality searches and more! (See <a href="http://www.mobilemonday.nl/talks/tish-shute-the-next-wave-of-ar/" target="_blank">my presentation at MoMo13</a> &#8211; video <a href="http://www.youtube.com/watch?v=Y7iqg8X24mU" target="_blank">here</a>.)</p>
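<p><em>To make the idea concrete, here is a minimal sketch of what posing a SPARQL query over HTTP to such a server could look like. The endpoint URL and the geo-tagged &#8220;blip&#8221; vocabulary below are hypothetical illustrations, not part of any published ARWave interface:</em></p>

```python
# Hypothetical sketch: building a SPARQL-Protocol GET request of the kind
# a modified wave server might answer. The endpoint and the "blip"
# vocabulary are illustrative assumptions, not a real ARWave API.
from urllib.parse import urlencode

ENDPOINT = "http://wave.example.org/sparql"  # hypothetical endpoint

# Find AR "blips" whose linked-data metadata places them near a location.
QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX geo:  <http://www.w3.org/2003/01/geo/wgs84_pos#>
SELECT ?blip ?label ?lat ?long
WHERE {
  ?blip rdfs:label ?label ;
        geo:lat    ?lat ;
        geo:long   ?long .
  FILTER (?lat > 40.74 && ?lat < 40.76)
}
LIMIT 10
"""

def build_request_url(endpoint: str, sparql: str) -> str:
    """Encode the query string as a standard SPARQL-Protocol GET URL."""
    return endpoint + "?" + urlencode({"query": sparql})

# The resulting URL could be fetched with any HTTP client; the server
# would return SPARQL results (e.g. XML or JSON bindings).
url = build_request_url(ENDPOINT, QUERY)
```

<p><em>The point is only that plain HTTP plus a standard query language keeps the augmented-reality search layer open to any federated participant, rather than locked to one vendor&#8217;s database.</em></p>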
<p>For more on real time social augmented experiences, see our panel, <a href="http://en.oreilly.com/where2010/public/schedule/detail/11046" target="_blank">The Next Wave of AR: Exploring Social Augmented Experiences</a>, at <a href="http://en.oreilly.com/where2010" target="_blank">Where2.0 2010</a>, and don&#8217;t miss the <a href="http://en.oreilly.com/where2010" target="_blank">Where2.0</a> conference, which has been the crucible for the emergence of location technologies.</p>
<p>Augmented realities, proximity-based social networks, mapping &amp; location-aware technologies, sensors everywhere, <a href="http://linkeddata.org/" target="_blank">linked data</a>, and human psychology are on a collision course in what <a href="http://www.schellgames.com/" target="_blank">Jesse Schell</a> calls the &#8220;Gamepocalypse.&#8221; See <a href="http://g4tv.com/videos/44277/dice-2010-design-outside-the-box-presentation/" target="_blank">Jesse Schell&#8217;s DICE 2010 talk here</a>, and check out his <a href="http://www.gamepocalypsenow.blogspot.com/" target="_blank">Gamepocalypse Now</a> blog. As Bruce Sterling notes in <a href="http://www.wired.com/beyond_the_beyond/2010/02/jesse-schell-future-of-games-from-dice-2010/" target="_blank">his post here</a>:</p>
<p><strong>*Another precious half hour out of your life. However: if you&#8217;re into interaction design, ubiquity, social networking, and trendspotting, in the gaming biz or out of it, you&#8217;re gonna wanna do yourself a favor and listen to this.</strong></p>
<p>And don&#8217;t forget to <a href="http://augmentedrealityevent.com/register/" target="_blank">register now</a> for the <a href="http://augmentedrealityevent.com/" target="_blank">Augmented Reality Event (ARE2010, 2&#8211;3 June 2010 &#8211; Santa Clara, CA)</a>.</p>
<p><a href="http://www.wired.com/beyond_the_beyond/" target="_blank">Bruce Sterling</a>, <a href="http://www.stupidfunclub.com/" target="_blank">Will Wright</a>, and Jesse Schell <a href="http://augmentedrealityevent.com/speakers/" target="_blank">will be keynoting, and there is a totally awesome line up of AR innovators and industry leaders</a>, including Paige and Anselm!</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/bruce_sterling.jpg"><img class="alignnone size-thumbnail wp-image-5289" title="bruce_sterling" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/bruce_sterling-150x150.jpg" alt="bruce_sterling" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/will_wright.jpg"><img class="alignnone size-thumbnail wp-image-5290" title="will_wright" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/will_wright-150x150.jpg" alt="will_wright" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Jesseschellpost.jpg"><img class="alignnone size-thumbnail wp-image-5291" title="Jesseschellpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Jesseschellpost-150x150.jpg" alt="Jesseschellpost" width="150" height="150" /></a></p>
<h3>And:</h3>
<p>You are in luck!</p>
<p>Here is a discount code for the first 100 folks to register for the event (before the end of March). Go to the <a href="https://register03.exgenex.com/GcmRegister/Index.Aspx?C=70000088&amp;M=50000500" target="_blank">registration page</a>, type in code AR245, and &#8220;you&#8217;ll be asked to pay only $245 for 2 full days of AR goodness.&#8221;</p>
<p>&#8220;Watching AR prophet Bruce Sterling, gaming legend Will Wright, and visionary game designer Jesse Schell deliver keynotes for this price &#8211; is a magnificent steal. And on top, participating in more than 30 talks by AR industry leaders will turn these $245 into your best investment of the year,&#8221; as Ori put it so well on Games Alfresco!</p>
<p>If you want a preview of just how exciting it is to be involved in augmented reality right now, check out <a href="http://gamesalfresco.com/2010/03/17/magic-games-education-and-live-coding-at-the-augmented-reality-meetup-in-nyc/" target="_blank">Ori Inbar&#8217;s great round up</a> of our latest monthly <a href="http://www.meetup.com/ARNY-Augmented-Reality-New-York/" target="_blank">Augmented Reality Meetup NY</a> (or, as Ori notes, as we fondly like to call it, <a href="http://www.meetup.com/ARNY-Augmented-Reality-New-York/" target="_blank">ARNY</a>). There is lots of video up now (much thanks to <a href="http://www.chrisgrayson.com/" target="_blank">Chris Grayson</a>, who <a href="http://armeetup.org/001_arny/video/index.html" target="_blank">live streamed it</a>). <a href="http://www.marcotempest.com/" target="_blank">Augmented Reality Magician Marco Tempest</a> is an absolute <strong>must</strong> see. (Developers, note this is an awesome use of <a href="http://www.openframeworks.cc/" target="_blank">openFrameworks</a> and <a href="http://opencv.willowgarage.com/wiki/">OpenCV</a>.) The video of the show includes a rare explanation of how it all works &#8211; see <a href="http://www.youtube.com/watch?v=6TluCaxz7KM&amp;feature=player_embedded" target="_blank">here</a>.</p>
<h3>Talking with Paige Saez &#8211; &#8220;Software is candy now!&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/paige_headshot_sq135.jpg"><img class="alignnone size-full wp-image-5266" title="paige_headshot_sq135" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/paige_headshot_sq135.jpg" alt="paige_headshot_sq135" width="135" height="135" /></a></p>
<p><strong>Tish  Shute:</strong> What interests me about ImageWiki is that you have thought  about physical hyperlinking beyond the obvious of where to get your  next good hamburger and beer, right?</p>
<p><strong>Paige Saez:</strong> Right. It was interesting for me in just thinking about the two things. How do you design a tool to work in a way that people are getting value from it? And also, how do you make it work in a way where people can explore and hack it? I think the most interesting technologies, and this is probably something somebody else said sometime, are the ones that disappear, that we don&#8217;t see, instead we see <em>through</em>. They become just the intermediaries. They don&#8217;t interfere with what we are trying to do.</p>
<p>It&#8217;s a struggle whenever you are developing a new way for people to get information or make something happen, because you are playing with magic a little bit. And you have to make it vanish the way a good magic trick makes an experience a magical one. But at the same time you also need to reveal just enough that you let people in and they can see how to change it and make it their own. That is the interesting tension for this space right now: the idea of augmented reality begins to lead to the idea of a social commons for physical things. The Imagewiki project was a locus of just this tension. Tish, you and I have previously discussed how difficult it was to even get people to understand the two concepts independently.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_515dwxtjnds_b.png"><img class="alignnone size-full wp-image-5269" title="dhj5mk2g_515dwxtjnds_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_515dwxtjnds_b.png" alt="dhj5mk2g_515dwxtjnds_b" width="642" height="163" /></a></p>
<p><strong>Tish Shute:</strong> Right, until  recently most people hadn&#8217;t even heard the term augmented reality and I  am not sure that a particularly high percentage of people would  recognize it now despite the recent interest in smart phone apps.</p>
<p><strong>Paige Saez:</strong> It&#8217;s very  difficult to get people to understand the two concepts, and now you are  adding in the third level of participation as well. So I don&#8217;t think it  is impossible, but I do think it requires narrative. It is interesting  that you were talking about the stories you heard this morning from the  creatives at the event [Tish mentioned David Curcurito, Creative  Director, Esquire gave an excellent presentation at Sobel Media event  NYC] because it&#8217;s narrative and the attention to telling a story that  help you walk through all of the ways you can understand how completely  expansive this area is right now.</p>
<p>So I think we have to play with it, play with the space and the  tools. I think we need to have an idea of what we want people to use  the tool for, and we need to not only introduce them to the tool and the  technology, but also introduce them to the concepts as well. So I see  it as a three part process.</p>
<p>I&#8217;m really excited to be there with people,  helping them do that. I think we need to do this face to face. I don&#8217;t  think this can be only through a social network. The ImageWiki website  is like one quarter of the entire picture, you know? The website is the  resource center and the place where you can see people adding images,  but what value is it to you to see an added image? It is more valuable  for you to be interacting with the image or interacting with the object  in the real world.</p>
<p>Designing for the experience of using the ImageWiki got very complicated very fast. I was trying to figure out the main thrust of the design for the UI for the ImageWiki, and at a certain point I had to take a step back and say &#8220;Okay, this has to be good enough for now because we can lay it out and prototype as long as we want on the Web or mobile UI. What we need to be doing is going outside and actually aggregating and putting images into the database in order to see what exactly happens when we are adding.&#8221; It&#8217;s not just like you are taking a picture of something and adding it to Flickr. Using the tool is very context specific and the information is context specific, and you can&#8217;t necessarily make that all happen at the exact same time. I think these are really fascinating spaces to be struggling in and I&#8217;m so glad to be working in this space.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki_2.jpg"><img class="alignnone size-medium wp-image-5300" title="imagewiki_2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki_2-300x225.jpg" alt="imagewiki_2" width="300" height="225" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki1.jpg"><img class="alignnone size-medium  wp-image-5299" title="imagewiki" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki1-300x225.jpg" alt="imagewiki" width="300" height="225" /></a></p>
<p><em>Images by Chris Blow of <a href="http://unthinkingly.com/" target="_blank">unthinkingly.com</a></em></p>
<p><strong>Tish Shute:</strong> Could you explain why we need ImageWiki? I mean I think I have ideas on this, but perhaps you can explain to me from your point of view why we need an ImageWiki, as opposed to, say, extending the image space of Wikimedia or something added on to Flickr. I mean maybe something leveraging the geotagged photo sets and APIs we already have?</p>
<p><strong>Paige Saez:</strong> Yes, definitely. It&#8217;s a really good question, I mean it really is. Like, do you need an entirely new place to be holding images outside of the places that we are already holding images? That&#8217;s a huge question; enormous. Especially when you take a look at the problems around that. It&#8217;s exhausting for an end user. Who the heck wants to go and reload everything into <em>yet another place</em>, right?</p>
<p><strong>Tish Shute:</strong> Right.</p>
<p><strong>Paige Saez:</strong> Moreover, who is going to really bother? Another problem would be what happens to the existing datasets that people have already committed to? And then of course there is the problem of authority and explaining why &#8211; gaining interest and authority in a space when nobody even understands why that space should exist in the first place. And those are just three, you know, off-the-top-of-my-head problems with that idea.</p>
<p>And yet at the same time, I don&#8217;t actually know how else to go about thinking about the ImageWiki unless I think about it as its own thing. Then you start thinking about models of large independent image databases that exist already, examples of this from a product standpoint &#8211; references to consider. The Getty Foundation comes to mind. There are many other historical centers that have huge resources and images that are licensed out and used. So here we have a working example of people already doing this. But successfully? I don&#8217;t know. We do have a ton of intellectual property rights and copyright issues and ownership and use issues with images currently. As a working artist, these issues for me were a major red flag to consider. Working on the social commons for augmented reality starts paralleling issues found in digital rights management and intellectual property.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_518gpgpr7gd_b.png"><img class="alignnone size-full wp-image-5274" title="dhj5mk2g_518gpgpr7gd_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_518gpgpr7gd_b.png" alt="dhj5mk2g_518gpgpr7gd_b" width="441" height="606" /></a></p>
<p><strong>Tish Shute:</strong> But one good thing about Wikimedia, why I focused on Wikimedia, is that Flickr and Wikimedia already use Creative Commons licensing, right?</p>
<p><strong>Paige Saez:</strong> Creative Commons, you know, they have their own resource center, too. But you know they haven&#8217;t been successful as great databases for images so far.</p>
<p><strong>Tish Shute:</strong> What would you like to see that they don&#8217;t have? Like say maybe start with Wikimedia, right?</p>
<p><strong>Paige Saez:</strong> There&#8217;s just still a lot of issues with how to encourage people to want to contribute. It&#8217;s hard to show the value to someone who doesn&#8217;t already understand the value for some reason. At least for me personally this is something I have run into frequently. I don&#8217;t know if it is necessarily what Wikimedia doesn&#8217;t have, I think it is a lack of understanding of what creative commons really means. And there is still a very strong sense of ownership and concern about creative property rights. Being paid to be creative is a tremendously difficult thing to do. People fear losing their livelihoods. They think this is possible. Is it? I dunno.</p>
<p>For example: look at me, I take a photograph of something, I can sell that. And there&#8217;s a question about whether or not, as an artist, I want to have my photographs in a pool of images that is open and accessible when I could be making money on it instead. Now that is just an example. Me personally, I can see the value. But that is a common concern. The gist of the question being, &#8216;what value does it bring to give something away versus holding on to it?&#8217; A hugely popular discussion right now.</p>
<p>This is the same crux of the problem we are dealing with when we talk about thinking about images in the social commons for the real world. It&#8217;s a conversation about ownership. It&#8217;s about, who does this belong to really? If I take a photograph of a Levi&#8217;s billboard, does that photograph belong to me or does it belong to Levi&#8217;s? We know the boundaries of that. But when the image becomes a living image, an image capable of transmutation; an image that provokes an action or hyperlinks to a product, experience, information&#8230;.where are the boundaries in that?</p>
<p><strong>Tish Shute: </strong>But how is ImageWiki handling that differently from Wikimedia, I suppose is my question.</p>
<p><strong>Paige Saez:</strong> We haven&#8217;t solved the problem.</p>
<p><strong>Tish Shute:</strong> Yes, I suppose it is not like we have fully solved the problem of a creative commons for images on the internet, let alone the issues of a social commons for the real world! So neither one has solved the problem, right?</p>
<p><strong>Paige Saez:</strong> Exactly. To be honest, it made my head spin. I realized we were building a web application and a mobile tool doing augmented reality, real time feedback on the world, and suddenly we weren&#8217;t. Suddenly we were dealing with DNS and talking about physical hyperlinks and ownership and property. And basically at that point you just have to sit and really start catching up on IP issues and figuring out how to deal with that space in a much more holistic way. It became so important that we had to take a step back and go</p>
<p>&#8220;Oh my god, I think we have really uncovered a real problem here.&#8221;</p>
<p>At the point when we were building out the tools, we realized something was really going on with our project. Here we were thinking that this was just a beautiful experience of learning about the world around us. We really&#8230; Anselm and I both just really wanted this tool to exist. It was something that we both just really wanted to happen in the world, something that we felt really just thrilled to make. And we looked at it and used it and realized that instead of it just being a beautiful experience, it was a fundamental shift in how we understood everything. That it impacted our world in the same way the Internet impacted our world. It was a fundamental shift in understanding. A sea-change.</p>
<p>So I put down the prototype and went back to researching, read a ton of books on IP and went and presented to friends, family, schoolmates and co-workers trying to explain the project and then the larger conceptual framework that had emerged from the project. I began using the metaphor of thinking about Magritte&#8217;s &#8220;Ceci n&#8217;est pas une pipe.&#8221; Thinking about a pipe that isn&#8217;t actually a pipe.</p>
<p><strong>Tish Shute:</strong> Oh, yes!</p>
<p><strong>Paige Saez: </strong>..to try to help explain to people that the image that you see is actually not, you know, it&#8217;s not an image of a thing. It&#8217;s an image. And that image has a tone and that image has a voice, and that image was chosen. And there were decisions that were made through the interface of the camera, specific decisions that defined the view of what you were looking at. And that that wasn&#8217;t being acknowledged and that that was a fundamental part of what the ImageWiki was aiming to do. The lens that you are actually looking through was as important as what you were looking at. And democratizing that lens became the most important thing that we could possibly do.</p>
<p><strong>Tish Shute:</strong> So the emphasis for you on ImageWiki was in fact the lens, even though you found obstacles to creating the interface, right?</p>
<p><strong>Paige Saez:</strong> Yes. Definitely. That&#8217;s what I fell in love with first. I really wanted to be able to use my phone to learn about what kind of tree this was or to buy tickets for the band on the poster I just saw, or see a hidden secret. For me it was very much a story, a narrative experience that I just thought was magical. And that is how I fell in love with it, which is not where I ended up.  Where I ended up was realizing it was a fundamental shift in not only my own understanding of how to use the world around me, but in our understanding of looking at the world.</p>
<p><strong>Tish Shute: </strong>It would be pretty scary if an image DNS was basically in the hands of either one or very few people, right? I mean even ImageWiki would be stuck with this problem: if you set up a bunch of servers, you are going to be holding a very, very large image database. I mean, whatever your motivation, right? I think at the minute that is why I am very into seeing everything through the lens of federation. I see that unless we have federation, these giant central databases are inevitable, aren&#8217;t they?</p>
<p><strong>Paige Saez: </strong>Essentially, yes. I mean I wasn&#8217;t able to walk through it as quickly as that. It kind of just overwhelmed me. Looking back on it, it seems perfectly obvious. I was just like &#8220;Oh my god, what have we done? Like what is going on?&#8221; Particularly for me, because so much of my life has been spent in art, it was really easy to immediately understand the connection between the view, the viewer, and what&#8217;s being viewed as all just different layers of ownership, and understanding that it is a gaze. Right? We know that we are never able to look at something without passing judgment on it, but to see that become a part of the interface in a real-time fashion just blew my mind.</p>
<p><strong>Tish Shute: </strong>Yes.</p>
<p><strong>Paige Saez:</strong> I think you are right. Getty Images, Flickr images, no matter what you are always holding on to something and you have to be responsible for it. Right? So how do you deal with the responsibility but don&#8217;t take on too much ownership? Where is the boundary with that?</p>
<p><strong>Tish Shute: </strong>And for me, the simple answer to that is loosely connected small parts, distributed systems, and federation.  Because the only way to be able to utilize these things is to have them distributed so that no one holds all the cards. Right?</p>
<p><strong>Paige Saez: </strong>Definitely and I personally agree with you wholeheartedly. However, the idea of distributed power is a concept that most people just don&#8217;t know how to deal with.</p>
<p><strong>Tish Shute:</strong> And it&#8217;s easier said than done, because the root problems that you are talking about aren&#8217;t gotten rid of through federation alone. If someone holds all the good image databases, just because those databases have the potential to be federated doesn&#8217;t mean they will choose to open them up, on many levels.</p>
<p><strong>Paige Saez:</strong> And even then you have to think about, sort of, like the next level of it, which is we want it to be all open and accessible, but everything is owned by somebody. Like, what really is public anymore, in general?</p>
<p><strong>Tish Shute:</strong> And what is interesting, though, is that regardless of what we speculate conceptually on this, we have already set off down the road. I mean we already have several large ones&#8230;they are all in beta, I suppose &#8211; Google Goggles, Point and Find, right? We have applications that are beginning to implement this. They are beginning to implement search on it, and it is geo-located even if it&#8217;s not in an augmented view, right? So it is proximity-based.</p>
<p><strong>Paige Saez: </strong>Right, right. I mean maybe the solution is that, if we follow that line of thinking, Flickr will be partnering with Google Goggles. And then my images would stay under my ownership through the authority of Flickr. I would use Flickr as my place to add images, and they would just be responsive to my devices via AR.</p>
<p><strong>Tish Shute:</strong> That&#8217;s very interesting.</p>
<p><strong>Paige Saez:</strong> Definitely I think so. It is also the shortest distance between things.</p>
<p><strong>Tish Shute:</strong> Yes, and as Anselm kept pointing out, basically it is going to happen in the simplest way possible, really, regardless of the implications of that. But OK, getting back to ImageWiki. As you say neither Wikimedia nor Flickr were really designed to take this role, right?</p>
<p><strong>Paige Saez:</strong> Right.</p>
<p><strong>Tish Shute:</strong> With ImageWiki, you&#8217;ve had these ideas and a concern with the social implications of physical hyperlinking in your mind since its inception. Are there any design ideas you&#8217;ve come up with &#8211; as opposed to, as you say, connecting Flickr to Point and Find, or who knows, Google Goggles? How is ImageWiki going to be different, do you think? Is that a hard question at this point?</p>
<p><strong>Paige Saez:</strong> It is, and it&#8217;s a great question, and it&#8217;s a question I really love to think about. I think we have to introduce the politics with the tools. It has to be acknowledged that it&#8217;s not just a place to hold information, that&#8217;s what I feel in my heart.</p>
<p>At the same time, is that too much for people to really grasp at one time? In my experience it really has been, so the design of the experience needs to allow for an understanding of the power of the tool and the level of authority that the tool offers, while not getting in the way of it; just using it.  Because ultimately, at the end of the day, nobody will use anything if it isn&#8217;t valuable to them. And so I could talk for miles and miles and miles about how important it is that corporations don&#8217;t own all of the rights to all of the visual things in my life, right? For the rest of my life I could talk about that. The idea that advertising is dominating all of our views of anything in the world around us is horrifying. It doesn&#8217;t matter unless I can show somebody why it matters to them or how it affects them. It&#8217;s just that that is a tremendously difficult thing to explain through a user interface.</p>
<p>And I actually think that it&#8217;s great that tools like Google Goggles and Nokia Point and Find are here to do a lot of the hard work of showing people how it works. Recently somebody explained to me their experience of using Google Goggles. They went through this process of saying how the Google Goggles took a picture and then did this really complicated visual scanning thing over the image and it took a full minute.</p>
<p>And I said, &#8220;Well of course they did it that way.&#8221;  And they said, &#8220;Well what do you mean?&#8221; I said, &#8220;Well, what they are really doing there when they are doing all these fancy graphics, is they are showing you how it works.&#8221; And even if it isn&#8217;t actually related at all to how it functionally works, algorithmically, that&#8217;s not the point. The point is that this gesture of the time taken to make it look like it&#8217;s scanning an image and going back and forth with pretty colors is giving people the time to process that as an experience. That&#8217;s a metaphor for what&#8217;s really happening. And these kinds of metaphors are crucial with user experience design. We have lots and lots of examples of them and how they work, and many of them aren&#8217;t necessary. Like, for example, the bar that shows you the time it&#8217;s taking for something to process. There is no relationship between that and reality. But it is really important.</p>
<p><strong>Tish Shute:</strong> Yes, those bars often have no relationship to the actual time&#8230;</p>
<p><strong>Paige Saez:</strong> And that&#8217;s the thing. Like the idea of time versus our perceived understanding of time. Right? The length of time it takes for your Firefox browser to open and load your last 30 tabs, versus the reality of what&#8217;s actually happening. When you are doing that sort of research you are actually accessing millions and millions of places and points of interest all over the world, so we need more of that. We need more of the process shown. Anselm and I worked with a film maker named Karl Lind from In the Can Productions here in Portland to try and make a video about the ImageWiki. We made this little video and I can try to show it to you or send it to you if you want.</p>
<p><strong>Tish Shute:</strong> One of the issues with this kind of visual search is that it is inherently dependent on databases that, federated or not, are going to be very large. Right? I mean someone is going to have something big and aggregated there.  I suppose someone will figure out the challenges of federated search eventually, but that is quite a big challenge!</p>
<p>So I suppose I am still trying to understand what ImageWiki can offer that we can&#8217;t get with any other existing service.  How will there be a social commons, and even a social contract, for the world as a platform for computing and physical hyperlinks?</p>
<p>Eben Moglen brought up something when I talked to him about virtual worlds. He said we need code angels to let us know what was going on in the virtual space &#8211; who was gathering data and how, for example.</p>
<p><strong>Paige Saez:</strong> Tell me more about that, I want to hear more about that.</p>
<p><strong>Tish Shute: </strong> Eben suggested this metaphor when I was asking him about privacy in virtual worlds &#8211; the fact that people just didn&#8217;t know, when they were pushing avatars around virtual worlds, what metrics were being gathered on their behavior.  And he basically said that what we need is code angels when we enter these spaces, because having the rules of the game buried in a terms of service was ridiculous.</p>
<p><strong>Paige Saez:</strong> That is a really interesting idea.</p>
<p><strong>Tish Shute: </strong> Maybe ImageWiki needs to be our code angel to navigate the augmented world. I mean that&#8217;s what I want to see it as. And when I hear you talk, what I hear is you talking in broad categories about what a code angel might be in the space of images and image links to the physical world. I mean that is what I hear from you.</p>
<p><strong>Paige Saez:</strong> Yeah. No, I definitely agree with that. It is interesting. In that sense, it is kind of a protection layer. Is that what you are thinking?</p>
<p><strong>Tish Shute: </strong>Yes, I suppose because we can&#8217;t be navigating a lot of complicated opt-ins and opt-outs just to get around our neighborhood safely, in terms of privacy (also see Eben Moglen&#8217;s definition of privacy here&#8230;).  We will need a code angel that is sort of keeping up with you in real time!</p>
<p><strong>Paige Saez:</strong> Right, right. I wonder how that would work in regards to images, though. That is a really interesting thing to try and put on an image. I guess why I am having such a hard time being specific about it is I am <strong>just trying to work it out in my head, thinking of a specific use case &#8211; like, what would be an example of that?</strong></p>
<p><strong>Tish Shute: </strong>Well I suppose the example, and this is a crude one, is when you point your Google Goggles at the book jacket, the code angel &#8211; this is very crude &#8211; would say, &#8220;You are right now drawing images from the Amazon database; they are collecting such and such data from your search.&#8221;</p>
<p>And then of course the ability to have crowd-sourced tagging and corrections&#8230;</p>
<p>There was a wonderful book that came out last year on how we can have commercial intelligence &#8211; Dan Goleman&#8217;s new book, &#8220;Ecological Intelligence: How Knowing the Hidden Impacts of What We Buy Can Change Everything&#8221;&#8230;</p>
<p>It is about how various stakeholders, including their customers, will drive corporations to do the morally right thing, because they will lose the commercial support of customers who won&#8217;t back them unless they are greener, fairer &#8211; doing the things we would like them to do, whatever that happens to be. Physical hyperlinking and tagging, I guess, would be a big part of this.</p>
<p><strong>Paige Saez:</strong> Sort of a transparency issue.  And that almost becomes a page rank algorithm in and of itself. I mean now we are really talking about search more than anything, and about what tool becomes the dominant search tool. Anselm and I talked a lot about one platform&#8230;  I mean eventually we will have a unified platform, no matter what, for the Internet and for physical objects and visual objects in the real world. It will just be a matter of, literally, who can find the best, most valuable, most relevant information on a thing. Currently it is all very proprietary.</p>
<p><strong>Tish Shute:</strong> Yes.</p>
<p><strong>Paige Saez: </strong>That definitely won&#8217;t last. It just can&#8217;t, because of the exact problem that you are raising. And we already know too much about resources and information as they pertain to products for us to ever go back to a time where we are not considering other ways of getting information about it anyway. Right?</p>
<p>Like I have the same concerns nowadays when I look at fruit. I look at a piece of fruit in the store. I would never just assume anymore that the person who put the sticker on that fruit is necessarily the ultimate authority. I would always assume at this point that I could go online and find out more information about a company. Information about things like eco-footprint, toxicity, or pesticides is now totally accessible already.</p>
<p>So I am thinking, when you look at that piece of fruit and that sticker and do what you are describing with, say, Google: do we just go immediately to the company&#8217;s website, or is it even more specific? Do we know that the sticker on that piece of fruit is going to tell us specific information about it? Or are we just getting back the nutritional resources, or a listing of all the different options out of a page rank algorithm that shows us, &#8220;Well, this is the website for the fruit.  Here is the nutritional information.  Here are the last 15 comments on it.&#8221;  It&#8217;s basically just a basic search.</p>
<p>Have you heard of Good Search?</p>
<p><strong>Tish Shute:</strong> You mean http://en.wikipedia.org/wiki/GoodSearch?</p>
<p><strong>Paige Saez:</strong> Right.</p>
<p><strong>Tish Shute: </strong>A code angel interface would have to give you options on the possible views available, wouldn&#8217;t it?</p>
<p><strong>Paige Saez:</strong> Yes. You are then talking about filtering your view. Then it gets really interesting, of course. I don&#8217;t even know if we have a choice in that. I think we are really kind of hitting a wall with who owns the space and the platform. Is it just a basic search, because we are already familiar with search? Or do you have an option to choose, say, &#8220;I want to look at this apple sticker and I only want to get&#8230;programmatically, only my friends&#8217; opinions of this company.&#8221;</p>
<p>Or I have a safety valve on it that only shows me certain information based on what the code angel knows about me, my preferences, my age, things like that. Then that gets really, really interesting, because we are trying to do all that work right now just with social media and the Internet. We are already overwhelmed with too much information. It is already past the point of comprehension. So to think that we would actually drill down even more specifics is very interesting.</p>
<p><strong>Tish Shute:</strong> That was a point Anselm made about the fact that once you are into this mobile, just in time, one view kind of situation, it is quite different than the Internet where you can bring up all these different screens and go to another website.</p>
<p><strong>Paige Saez: </strong>Well yes, mobile is a different level of engagement. Very contextual. Much less information. Much more about timeliness. I don&#8217;t want to look at an apple and get back a Google search. Oh my God, no. That&#8217;s the last thing I want. I would love to be able to look at an apple and have my phone already know exactly what I want, information-wise, to get back from that apple. But I don&#8217;t know. It&#8217;s all contextual and personal.  So I think the code angel concept you are talking about is really interesting, because you still need to think about who is the person adding or creating those filters &#8211; is it you, a filtered friend network, an algorithm? How much work is too much work? Where do we draw the line? How much of this are we willing to let the machine do for us?</p>
<p><strong>Tish Shute: </strong>Right.</p>
<p><strong>Paige Saez: </strong>And then of course once you have those filters in place, you need control over them. You will need to dial them up and dial them down, be able to choose and add new ones, so on and so forth. It becomes very modal at that point. For example, I want to change my view: to walk into a grocery store and, instead of finding out information, see where the hidden Easter egg puzzles are that my friends left last week, because we&#8217;re playing a game.</p>
<p>I&#8217;m still really attracted to the creative opportunities with the ImageWiki. I&#8217;m really attracted to changing this experience from being a one-to-one relationship (from Corporation to Consumer) to an open-ended relationship (from Person to Person). If I look at a book jacket, sure, I can find out where to buy the book, but that&#8217;s boring. Who cares? I&#8217;d like to find out a link to a story or an adventure or a movie or something unthought-of before.</p>
<p>How do we build that in? How do we encourage serendipity? Mystery? I think the ImageWiki is the space for building that in, actually &#8211; that would be the one place, right? That&#8217;s my really big fear: that this relationship just stays one-to-one. Click an image of a consumable object, get back the object&#8217;s retail value. How completely dull. We have to do better than this.</p>
<p>Additionally, what if I want to take a photograph of a book, an apple, or something, and I don&#8217;t want to pull back data? Instead, I want to pull back music, or a video, or a song, or lyrics, or a story, or another image. It&#8217;s just a hyperlink at the end of the day, you know? That&#8217;s all we&#8217;re really doing. Hyperlinks can pull back so many different things.</p>
<p><strong>Tish Shute:</strong> And that&#8217;s one of the reasons I&#8217;m into mobile social interaction utility building &#8211; because without that way to do it in mobile technology&#8230; It&#8217;s very available on the Internet, as we&#8217;ve seen with Twitter. These applications are very easy to do on the Internet. They&#8217;re not easy to do natively in a mobile application&#8230;</p>
<p>Hey, I&#8217;m just promoting AR Wave again. I should shut up.</p>
<p><strong>Paige Saez:</strong> Oh, no.  I think it&#8217;s a fascinating concept, I really do. I totally agree. As we&#8217;ve talked about it before, it&#8217;s amazing that marketing and advertising are helping push forward AR, and it&#8217;s great. It&#8217;s fantastic.</p>
<p>But it&#8217;s also the worst possible thing that could ever happen, because it is such a singular way of looking at an overall ubiquitous computing experience. There are other ways.</p>
<p>The best experience I ever had was trying to explain physical hyperlinks to people. I had to walk them through it. Good interactive isn&#8217;t something you present or show, it&#8217;s something you do. Nothing beats just walking around and showing people with a device or a tool or something else.</p>
<p>I mean, God forbid it always stays in our computers and our phones. I really hope we don&#8217;t have to be stuck living our entire lives with these horrible interfaces.  But for the time being, we will. Having an AR app show you a puzzle, or a mystery, or a game, or an adventure is a magnificent experience, totally overwhelming, and people get it right away. There&#8217;s no question; they totally understand.</p>
<p><strong>Tish Shute:</strong> Yes, I agree.</p>
<p><strong>Paige Saez:</strong> You walk them through the experience with a physical hyperlink and then you say, &#8220;Here, I could use this device and I could show you where to buy this thing, or I could use this device and we could start playing a game.&#8221; Then everybody gets it.</p>
<p><strong>Tish Shute:</strong> So then I have a question, because one of the things Anselm said to me when he wanted me to refer back to you is that he feels the direction for ImageWiki should perhaps be to focus less on the technology and more on the actual gathering of the images &#8211; how they&#8217;re going to be annotated, the metadata, right? But my question to him was: if you do that without the platform, there&#8217;s no experience or motivation for people to do it. Right? Is there?</p>
<p><strong>Paige Saez: </strong>Yeah, I agree with you on that one. I&#8217;m curious what his&#8230;I think the reason why he wants to do that is he wants to be able to show people examples via the resources &#8211; to be able to show someone a library, essentially, which I think makes sense with some people. I definitely think that some audiences would really relate to that. For me, it doesn&#8217;t make sense because I&#8217;m just very experiential. I need to do it, and I need to show other people how to do it, and I need to grow that way. I think that at the end of the day, those are both great ways to go about doing it. It&#8217;s just that it&#8217;s a huge thing to do in either direction.</p>
<p>What Anselm&#8217;s really thinking of, I believe, is more about exemplifying how we read and understand images culturally. Then you&#8217;re really getting into Visual Studies and Critical Theory, which is what I did for my Masters at PNCA. I worked on the ImageWiki while I was in grad school; it was something I was doing for fun. Independently of my studies, the project led to issues of democracy and objects and property, and I ended up right smack in the middle of what I was studying: the nature and cultural analysis of images. Questions like &#8220;What exactly do we get out of images?&#8221; &#8211; how all these different things are happening in an image, and how people get tons of totally different things out of an image depending on many factors.</p>
<p>The questions I began to ask myself got very philosophical. Questions like &#8220;Is this apple red? Is this apple red-orange? Is this a small apple? What&#8217;s my understanding of small versus your understanding of small?&#8221;</p>
<p>Suppose you needed a text backup to the search &#8211; how would I be able to search for an apple? What if my understanding of apple is red and your understanding of apple is green? So if I&#8217;m looking for a green apple, am I looking for the same green apple as you? It&#8217;s all semantics, sure.  But at the same time, it gets bigger and bigger, and it&#8217;s fascinating.</p>
<p><strong>Tish Shute: </strong>Google Goggles seem to work best on book jackets, basically.</p>
<p><strong>Paige Saez: </strong> But book jackets are actually perfect for this.  Book jackets are perfect for this problem, because book jackets are specifically designed art.  So at the end of the day, we are still talking about creative works, artistic works, that have been designed as a communication tool.  But that is not something that people can own.  Creative works that are designed are communication tools &#8211; with varying levels of skill, to be sure &#8211; but still something anybody can do.  What we need to do is use that language.  We don&#8217;t need to be trying to reach as far as facial recognition.  We need to develop our own logos, our own brand, our own&#8230;I mean, not brand.  Brand is a bad way of saying it.  Another way of saying it would be: develop a visual language that we can use, one that is as effective and as well utilized as book jackets or movie posters.</p>
<p><strong>Tish Shute:</strong> What are some of the use cases for ImageWiki you would like to develop first?</p>
<p><strong>Paige Saez:</strong> My dream&#8230;I have like four or five use cases that I want to see happen.  One of them is I walk down the street and there is a new poster for my favorite band.  And I can just go up to the poster and use my device, whatever it looks like, and download the latest album. It&#8217;s transactional. I am able to just plug in my headset and walk down the street and the transaction is done. I saw something I wanted. It was beautiful. I was able to get it and I was able to move on in my life.  And that is totally possible.</p>
<p>Another one would be I walk down the street and there is a piece of graffiti.  And I am able to use my device to find out who the artist was that made it, to give them props, and to point my other friends to the fact that the piece is there and will most likely be there only for a short period of time &#8211; information retrieval and socialization.</p>
<p>Or, use my device to find an Easter egg, to find a narrative puzzle that ends up going on for weeks, and everybody is involved, and we are all playing this game together. Adventure-based, non-linear experiences. I want playfulness, not just purchases.</p>
<p><strong>Tish Shute: </strong> Did you think of piggybacking on the Flickr API for geo-tagged photos as a way to work with those databases or not?</p>
<p><strong>Paige Saez:</strong> Yeah, we definitely thought about that.</p>
<p><strong>Tish Shute: </strong> And why did you decide not to, for any reason or&#8230;?</p>
<p><strong>Paige Saez:</strong> Ultimately, we just&#8230;we were such a small group, we just had to tackle certain things at a certain time.</p>
<p><strong>Tish Shute:</strong> Right.  And you were so prescient &#8211; you were working slightly before we had the mediating devices, weren&#8217;t you?  You were just before the mobile devices really got adequate for this.</p>
<p><strong>Paige Saez:</strong> Yeah.  We started on it&#8230;I believe it was January&#8230;No, December 2007. Basically, the iPhone had just launched maybe six months prior or something like that.</p>
<p><strong>Tish Shute:</strong> But not 3G and not 3GS, right?</p>
<p><strong>Paige Saez: </strong>Not 3GS. It was the first generation iPhone. We built the ImageWiki before the App Store existed.</p>
<p>We knew that the App Store was coming out.  And we knew that the App Store was going to be the biggest thing in the whole world. I remember getting into multiple fights with friends about how revolutionary the iPhone and the App Store were going to be and people thinking I was totally crazy; people just thinking I was absolutely nuts for being so excited about it.</p>
<p>It sucks that it is a closed proprietary system, but the App Store has done something for software that nothing has ever done before.  Software is candy now.  It&#8217;s candy.  It is like when you are waiting in the grocery store checkout line, stuck behind somebody, with all these little tchotchkes &#8211; candy bars, magazines, nail clippers and things. That is the equivalent of software now.  It&#8217;s become an impulse buy, which is amazing.  Nobody would ever have thought&#8230;that is actually revolutionary. That&#8217;s huge.</p>
<p><strong>Tish Shute:</strong> <a href="http://www.cs.columbia.edu/~feiner/" target="_blank">Steven Feiner</a>, who is one of the founding fathers of augmented reality, said to me during a conversation at the ARNY meetup that one reason augmented reality, despite the hype, is manifesting very differently from how virtual reality burst onto the tech scene is that it is about affordable apps on affordable, readily available hardware.</p>
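<p>The physical hyperlink Paige describes is, at bottom, just a mapping from an image fingerprint to an arbitrary resource &#8211; a song, a story, a purchase. A minimal sketch of that idea, assuming a precomputed perceptual hash as the key (every name here is hypothetical, for illustration only):</p>

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Resource:
    kind: str  # e.g. "song", "video", "story", "purchase"
    url: str

# Toy registry keyed by a perceptual image hash, assumed to be
# precomputed by whatever visual-search engine does the matching.
REGISTRY = {
    "poster-hash-a1b2": Resource("song", "https://example.com/band/new-single"),
    "jacket-hash-c3d4": Resource("story", "https://example.com/hidden-adventure"),
}

def resolve(image_hash: str) -> Optional[Resource]:
    """Resolve an image fingerprint to whatever it hyperlinks to, if anything."""
    return REGISTRY.get(image_hash)
```

<p>The point of the sketch is only that the value side of the mapping is open-ended: nothing restricts what an image links to &#8211; a retail page is just one choice among many.</p>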
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/03/18/visual-search-augmented-reality-and-physical-hyperlinks-for-playfulness-not-just-purchases-talking-with-paige-saez-about-imagewiki/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook</title>
		<link>http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/</link>
		<comments>http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/#comments</comments>
		<pubDate>Sun, 17 Jan 2010 17:05:01 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR Commons]]></category>
		<category><![CDATA[AR Consortium]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[ardevcamp]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARNY Meetup]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[ARWave Wiki]]></category>
		<category><![CDATA[augmented reality conference]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality goggles]]></category>
		<category><![CDATA[augmented reality social commons]]></category>
		<category><![CDATA[brightkite]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Davide Carnivale]]></category>
		<category><![CDATA[distributed AR]]></category>
		<category><![CDATA[distributed augmented reality]]></category>
		<category><![CDATA[federated search]]></category>
		<category><![CDATA[FourSquare]]></category>
		<category><![CDATA[Games Alfresco]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[gowalla]]></category>
		<category><![CDATA[graffitigeo]]></category>
		<category><![CDATA[hacking maps]]></category>
		<category><![CDATA[Head Map manifesto]]></category>
		<category><![CDATA[imageDNS]]></category>
		<category><![CDATA[imagemarks]]></category>
		<category><![CDATA[imagewiki]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[Map Kiberia]]></category>
		<category><![CDATA[Mikel Maron]]></category>
		<category><![CDATA[mobile internet]]></category>
		<category><![CDATA[mobile social]]></category>
		<category><![CDATA[mobile social interaction utility]]></category>
		<category><![CDATA[Muku]]></category>
		<category><![CDATA[neo-viridian]]></category>
		<category><![CDATA[Nokia's ImageSpace]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[open distributed AR]]></category>
		<category><![CDATA[OpenGeo]]></category>
		<category><![CDATA[paige saez]]></category>
		<category><![CDATA[photo-based positioning systems]]></category>
		<category><![CDATA[physical world platform]]></category>
		<category><![CDATA[placemarks]]></category>
		<category><![CDATA[Planetwork]]></category>
		<category><![CDATA[Platial]]></category>
		<category><![CDATA[point and find]]></category>
		<category><![CDATA[proximity based social networks]]></category>
		<category><![CDATA[snaptell]]></category>
		<category><![CDATA[social cartography]]></category>
		<category><![CDATA[social commons]]></category>
		<category><![CDATA[social search]]></category>
		<category><![CDATA[SpinnyGlobe]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[trust filters]]></category>
		<category><![CDATA[Viridian]]></category>
		<category><![CDATA[viridiandesign]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[Wave]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[WhereCamp]]></category>
		<category><![CDATA[whurley]]></category>
		<category><![CDATA[yelp]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5050</guid>
		<description><![CDATA[Visual search is heating up, and with it a key stage of turning the physical world into a platform is underway as images become hyperlinks to the world in applications like Google Goggles, Point and Find, and SnapTell &#8211; see this post by Katie Boehret.Â  And while there may be no truly game changing augmented [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselmhook.jpg"><img class="alignnone size-medium wp-image-5051" title="anselmhook" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselmhook-300x225.jpg" alt="anselmhook" width="300" height="225" /></a></p>
<p>Visual search is heating up, and with it a key stage of turning the physical world into a platform is underway as images become hyperlinks to the world in applications like <a href="http://www.google.com/mobile/goggles/#dc=gh0gg" target="_blank">Google Goggles</a>, <a href="http://pointandfind.nokia.com/" target="_blank">Point and Find</a>, and <a href="http://www.snaptell.com/" target="_blank">SnapTell</a> &#8211; <a href="http://solution.allthingsd.com/20100112/in-search-of-images-worth-1000-results/" target="_blank">see this post by Katie Boehret</a>. And while there may be no truly game-changing augmented reality goggles for a while, make no mistake, key aspects of our augmented view, factors that will have a lot to do with what we will actually see when an augmented vision of the world is commonplace, are already in the works. And, as Anselm Hook (pic above <a href="http://www.flickr.com/photos/caseorganic/2994952828/" target="_blank">from @caseorganic&#8217;s flickr</a>) notes:</p>
<p><strong>&#8220;There is a real risk of our augmented reality world being owned by interests which are not our own. There is a real question of when you hold up that AR goggle, what are you going to see?&#8221;</strong></p>
<p>Cooperating services &#8211; e.g., Google Earth, Maps, Street View, Google Goggles, and a leader in local search like Yelp (<a href="http://www.huffingtonpost.com/ramon-nuez/google-is-getting-ready-f_b_426493.html" target="_blank">see here</a>) &#8211; would have an enormous ability to filter and control a mobile, social, context-aware view of the physical world, and Google itself sees an ethical quandary.</p>
<p><strong>&#8220;A Google spokesperson says this app has the ability to use facial recognition with Goggles, but hasn&#8217;t launched this feature because it hasn&#8217;t been built into an app that would provide real value for users. The spokesperson also cites &#8216;some important transparency and consumer-choice issues we need to think through&#8217;&#8221; (quote from Wall Street Journal column</strong> <a href="http://solution.allthingsd.com/20100112/in-search-of-images-worth-1000-results/" target="_blank">by Katie Boehret</a>).</p>
<p><a href="http://www.hook.org/" target="_blank">Anselm Hook</a> and <a href="http://paigesaez.org/" target="_blank">Paige Saez</a>, with great prescience, have been advocating a social commons for the placemarks and imagemarks to our physical world platform through a number of pioneering projects, including <a href="http://imagewiki.org/" target="_blank">imagewiki</a>. I have recently interviewed both Anselm and Paige (Paige&#8217;s interview is upcoming) in depth. My talk with Anselm was nearly three hours long! So I am publishing the transcript in two parts.</p>
<p>Understanding what it means to have a social commons for our physical world platform, and for augmented reality, is a key question for all of us to think about, but it is especially important for those of us involved in the emerging industry of augmented reality.</p>
<p>Anselm <a href="http://blog.makerlab.org/2009/11/augmentia-redux/">notes</a>:</p>
<p><strong>&#8220;The placemarks and imagemarks in our reality are about to undergo that same politicization and ownership that already affects DNS and content. Creative Commons, Electronic Frontier Foundation and other organizations try to protect our social commons. When an image becomes a kind of hyperlink &#8211; there&#8217;s really a question of what it will resolve to. Will your heads up display of McDonalds show tasty treats at low prices or will it show alternative nearby places where you can get a local, organic, healthy meal quickly? Clearly there&#8217;s about to be a huge ownership battle for the emerging imageDNS&#8221;</strong></p>
<p>The mobile internet is moving beyond the &#8220;internet in your pocket&#8221; phase of mobility, with mobile, social, proximity-based, context-aware networks like <a href="http://www.foursquare.com/">FourSquare</a>, <a href="http://gowalla.com/" target="_blank">Gowalla</a>, <a href="http://brightkite.com/" target="_blank">Brightkite</a> and <a href="http://www.geograffiti.com/">GraffitiGeo</a> (see <a href="http://smartdatacollective.com/Home/23811">Smart Data Collective</a>) likely to start taking precedence over other forms of social network soon.</p>
<p>Regardless of the timeline for true augmented reality &#8211; 3D images &amp; graphics tightly registered to the physical world &#8211; proximity-based social networking and real time search are already taking us into a hyper-local mode and the realm of augmented reality, which is <strong>&#8220;inherently about who you are, where you are, what you are doing, and what is around you&#8221;</strong> (<a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> &#8211; see <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">here</a>). The ground is being prepared for augmented reality now.</p>
<p>If you have been reading Ugotrade, you will know I have been actively involved in developing an open, distributed AR platform/mobile social interaction utility for geolocated data based on the Wave Federation Protocol &#8211; AR Wave, a.k.a. Muku, &#8220;crest of a wave&#8221; (see my posts <a href="http://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/" target="_blank">here</a>, <a href="http://www.ugotrade.com/2009/12/04/ar-wave-project-an-introduction-and-faq-by-thomas-wrobel/" target="_blank">here</a> and <a href="http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/" target="_blank">here</a> for more on this project, and the <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">AR Wave Wiki</a>). Federation is, I believe, one vital aspect of developing a social commons for augmented reality and the physical world platform.</p>
<p>Also, a bit of news: I am co-chairing the upcoming <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">Augmented Reality Event (are2010)</a> with <a href="http://gamesalfresco.com/about/" target="_blank">Ori Inbar</a> of <a href="http://gamesalfresco.com/" target="_blank">Games Alfresco</a> and <a href="http://ogmento.com/" target="_blank">Ogmento</a>, and <a href="http://whurley.com/" target="_blank">whurley</a>. Sean Lowery of <a href="http://www.innotechconference.com/pdx/Details/other.php" target="_blank">Prospera</a> is the event organizer, and <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">are2010</a> has the support of the <a href="http://www.arconsortium.org/" target="_blank">AR Consortium</a>. The <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">are2010</a> web site is live and there is an <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">Open Call For Speakers</a>. You can submit your proposals and demos for one of the three tracks (business, technology, or production) <a href="http://augmentedrealityevent.com/speakers/call-for-proposals/" target="_blank">on the web site here</a>.</p>
<p><a href="http://augmentedrealityevent.com/" target="_blank"><img class="alignnone size-medium wp-image-5101" title="are2010" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/are20101-300x60.png" alt="are2010" width="300" height="60" /></a></p>
<p><a href="http://www.wired.com/beyond_the_beyond/" target="_blank">Bruce Sterling</a>, &#8220;prophet&#8221; of augmented reality and more, &#8220;will deliver the most anticipated <a href="http://augmentedrealityevent.com/speakers/" target="_blank">Augmented Reality keynote</a> of the year.&#8221;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/bruces-brasspost.jpg"><img class="alignnone size-medium wp-image-5105" title="bruces-brasspost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/bruces-brasspost-300x225.jpg" alt="bruces-brasspost" width="300" height="225" /></a></p>
<p>It didn&#8217;t surprise me when Anselm mentioned that Bruce Sterling was a key influence for his work on the geospatial web and augmented reality. Anselm explained:</p>
<p><strong>&#8220;I&#8217;d seen <a href="http://www.viridiandesign.org/notes/151-175/00155_planetwork_speech.html" target="_blank">a talk by Bruce Sterling</a> at an event called Planetwork [May, 2000]. And that event was, for me, a turning point where I decided to focus full time on exactly what I cared about instead of doing things that were kind of similar to what I cared about. So, his influence is a pretty significant one to me at that exact moment.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b.png"><img title="dhj5mk2g_490gcp7q6fn_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b-300x80.png" alt="dhj5mk2g_490gcp7q6fn_b" width="300" height="80" /></a></p>
<p>For more see <a id="q2or" title="viridiandesign.org" href="http://www.viridiandesign.org/About.htm">viridiandesign.org</a> &#8211; it seems it is time for a &#8220;Neo-Viridian&#8221; revival!</p>
<p>This <a href="http://www.wired.com/beyond_the_beyond/2009/05/spime-watch-pachube-feeds/" target="_blank">post by Bruce Sterling on Pachube Feeds</a>, and Thomas Wrobel&#8217;s <a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">prototype design for open distributed augmented reality on IRC</a>, were key inspirations for me when I began thinking about the potential of Google Wave Federation protocol for augmented reality. I had been exploring <a href="http://www.pachube.com/" target="_blank">Pachube</a> and deeply interested in <a href="http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">the vision of Usman Haque</a>, but I had a real <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">aha moment</a> when I read this:</p>
<p><strong>&#8220;(((Extra credit for eager ubicomp hackers: combine this [pachube feeds] with Googlewave, then describe it in microsyntax. Hello, 2015!)))&#8221;</strong></p>
<p>I think the AR Wave group will earn the extra credit and more very soon! <a href="http://need2revolt.wordpress.com/about/" target="_blank">Davide Carnovale, need2revolt</a>, and <a href="http://www.lostagain.nl/" target="_blank">Thomas Wrobel</a> have been leading the coding charge, and there will be a very early AR Wave demo soon, perhaps as soon as the <a href="http://www.meetup.com/arny-Augmented-Reality-New-York/" target="_blank">Feb 16th ARNY Meetup</a>.</p>
<p>Open access to the creation of the view that will eventually find its way into AR goggles will depend on more than the power of an open distributed platform for collaboration like the AR Wave project. Our augmented reality view will be constructed through complex &#8220;hybrid tracking and sensor fusion techniques&#8221; (Jarrell Pair), cooperating cloud data services, powerful search and computer vision algorithms, and apps that learn by context accumulation will drive our augmented experiences &#8211; and at the moment these kinds of resources, at least at scale, are for the most part in private hands.</p>
<p>In the interview below, Anselm discusses how trust filters, being able to publicly permission your searches so that other people can respond and can reach out to you, and the democratization of data in general, are even more of a concern with augmented reality and hyper local search. The task of understanding what it means to have a social commons for the outernet remains an open, and pressing, question.</p>
<p>Anselm explains (see full interview below):</p>
<p><strong>&#8220;as we move towards a physical internet where there&#8217;s no clicking and there&#8217;s no interface and the computer&#8217;s just telling you what it thinks you&#8217;re looking at, translating, you know, an image of a billboard to the name of the rock star who&#8217;s on that billboard, or translating the list of ingredients on a can of soup to the source outlets where it thinks those ingredients came from. When you have that kind of automated mediation, the question of trust definitely arises.</strong></p>
<p><strong>And we haven&#8217;t seen the Clay Shirkys or the Larry Lessigs of the world start to talk about this yet. Although I suspect that in the next four or five years the zero click interface will become the primary interface, that we&#8217;ll come to assume that what we see with the extra enhanced data projected onto our view is the truth. Yet, at the same time, there is just no structure or mechanism even being considered for a democratic ownership of it.&#8221;</strong></p>
<h3>Augmented Reality will emerge through sensor fusion techniques &amp; cooperating cloud services</h3>
<p>In 2010, sensor fusion techniques &#8211; computer vision technology in conjunction with GPS and compass data &#8211; will create the data linking that can enable the kind of augmented reality that has been the stuff of imagination for nearly four decades (see <a href="http://laboratory4.com/2010/01/the-reality-of-augmented-reality/" target="_blank">Jarrell Pair&#8217;s post</a>).</p>
<p>Putting stuff in the world in 3D is of course key to the original vision of augmented reality, and one of its biggest challenges. Augmented reality is going to be implicated in a real time mapping of the world at an unprecedented scale and granularity. We have barely an inkling of the implications of this now.</p>
<p>Anselm and Paige have been working in the heart of the social cartography movement for nearly a decade. The vision and experience of this community is vital to understanding how augmented reality and the world as a physical platform can evolve into something that benefits people and allows them &#8220;to have a better understanding of the opportunities around them.&#8221;</p>
<p>We have been hacking maps for millennia &#8211; &#8220;from conceptual story mapping, to colloquial mapping in European development and the cartographic renaissance created by the global voyages and rediscovery of Ptolemy&#8217;s maps&#8221; (<a href="http://highearthorbit.com/" target="_blank">Andrew Turner</a>). And, recently, initiatives on a public-provided GIS, like <a href="http://opengeo.org/" target="_blank">OpenGeo</a>, have led the way toward more open, interoperable, geospatial data.</p>
<p>Mapping takes on a new and crucial role in augmented reality. <a href="http://www.slashgear.com/nokia-image-space-adds-augmented-reality-for-s60-3067185/" target="_blank">Nokia&#8217;s ImageSpace</a> is beginning to do what many thought Microsoft would do with Photosynth two years ago.</p>
<p>And if we see these kinds of projects developed into &#8220;photo-based positioning systems&#8221; &#8211; &#8220;3d models of the environment to cover every possible angle, and then software that can work out in reverse based on a picture precisely where you are and where you&#8217;re facing&#8221; (Thomas Wrobel) &#8211; we would find augmented reality leap forward overnight.</p>
<p>It is time to take very seriously the vast opportunities and potential pitfalls of an augmented world.</p>
<p><strong>&#8220;when you are mediating the translation layer between the image and the data, then there is an opportunity for you to control it, and that opportunity is hard to resist. It is hard to choose not to own that opportunity. It is an advertising opportunity. It is a revenue opportunity. It is a chance to send a message and a tone.</strong></p>
<p><strong>I know that Google and companies like that are keenly aware of the kinds of roles they don&#8217;t want to hold, but it is sometimes seductive to think about them. And I am afraid that we, as a community, need to assert an ownership, kind of a commons, over how computers will translate what they see to information that we perceive.&#8221;</strong></p>
<p>There are some initiatives emerging. <a href="http://www.tonchidot.com/" target="_blank">Tonchidot</a> (who <a href="http://www.techcrunch.com/2009/12/08/tonchidot-sekai-camera-funding/" target="_blank">closed on $4 million of VC for augmented reality</a> last December) has helped create the <a href="http://translate.google.com/translate?client=tmpg&amp;hl=en&amp;u=http%3A%2F%2Fwww.arcommons.org%2F&amp;langpair=ja%7Cen" target="_blank">AR Commons</a> in Japan. <a href="http://www.tonchidot.com/corporate-profile.html" target="_blank">CFO of Tonchidot</a> <a href="http://www.linkedin.com/ppl/webprofile?action=vmi&amp;id=499984&amp;pvs=pp&amp;authToken=r8TF&amp;authType=name&amp;trk=ppro_viewmore&amp;lnk=vw_pprofile" target="_blank">Ken Inoue</a> explained in <a href="http://www.ugotrade.com/2009/09/17/tonchidot-taking-augmented-reality-beyond-lab-science-with-fearless-creativity-and-business-savvy/" target="_blank">an interview with me in September 2009</a>:</p>
<p>&#8220;<strong>We feel that public data, such as landmarks, government facilities, and public transport should be shared. We see an AR world where people can readily and easily access information by just seeing &#8211; quick, easy, and efficient. And because of this ease and intuitiveness, children, the elderly and handicapped will surely benefit. AR could help create a safer society. Warnings, alerts, and safety information could save lives and avoid disasters. These are what we, and <a href="http://translate.google.com/translate?client=tmpg&amp;hl=en&amp;u=http%3A%2F%2Fwww.arcommons.org%2F&amp;langpair=ja%7Cen" target="_blank">AR Commons</a>, would like to tackle in the not so distant future.&#8221;</strong></p>
<p>But the task of building a social commons for the physical world platform has only just begun.</p>
<h3>Interview with Anselm Hook</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselm31.jpg"><img class="alignnone size-medium wp-image-5085" title="anselm3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselm31-300x225.jpg" alt="anselm3" width="300" height="225" /></a></p>
<p><em>photo from <a href="http://www.flickr.com/photos/anselmhook/3832691280/in/set-72157621946362509/" target="_blank">Anselm&#8217;s Flickr stream here</a></em></p>
<p><strong>Tish Shute:</strong> We <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">first met last year at Wherecamp</a>. The start of 2009 was, I think, the &#8220;OMG finally&#8221; moment for augmented reality, and in less than a year AR, at least in proto forms, is breaking into the mainstream now! You are one of the founding visionaries/philosophers/hackers of the geo web and you have been thinking about the geo web and AR for a long time &#8211; <a href="http://hook.org/headmap" target="_blank">all the way back to the legendary Headmap Manifesto</a>, and before. Most recently you led the way in the very successful <a href="http://www.ardevcamp.org/wiki/index.php?title=Main_Page" target="_blank">ARDevCamp</a> in Mountain View. Could you start by telling me a little bit about the history of your pioneering work with geolocated data?</p>
<p><strong>Anselm Hook: </strong>I am a long time Geo fanatic. I&#8217;m really interested in social cartography and what some people call public-provided GIS &#8211; that&#8217;s some language that people use. Anyway, my personal interest, when I talk to people who are non-technical (and it&#8217;s been a long term interest in the way I phrase it), is that I want to help people see through walls. So, the goal is very simple. I want people to have a better understanding of the opportunities around them, the landscape around them. I always get frustrated when people make bad decisions because of a lack of information, especially when it&#8217;s related to their community and related to their environment. But, plainly put, I really just want &#8220;to help people see through walls&#8221;. It&#8217;s a very simple goal.</p>
<p><strong>Tish Shute:</strong> I know you worked on <a href="http://platial.com/" target="_blank">Platial</a>, which is really one of my favorite social mapping applications. It really broke new ground. What was the history of that? How did you get involved with Platial?</p>
<p><strong>Anselm Hook:</strong> That&#8217;s an interesting question. It actually started around 2000 when I saw Bruce Sterling talk. I had been writing video games for many years, and I was quite good at it, and I enjoyed it. But the reasons I was doing it diverged from why the industry was doing it. I was making video games because I like to make shared spaces for my friends to play in and to share experience. I really enjoyed making shared environments. I worked on <a id="jrn-" title="BBS's" href="http://en.wikipedia.org/wiki/Bulletin_board_system">BBS&#8217;s</a> and my friends and I were always making these collaborative shared environments.</p>
<p>Once the video game industry started to take off, I started to do high performance, 3D interactive video games and to make compelling shared spaces, and it was a lot of fun. But the frustration for me was that a huge industry grew around it and it became very commercial. Although it paid well, it started to diverge from my values, which were more centered around community environments and shared understanding.</p>
<p><strong>Tish Shute:</strong> Yes, very rapidly the big games kind of devolved from the social aspects and became more and more single player really, didn&#8217;t they?</p>
<p><strong>Anselm Hook:</strong> It was that way, actually, because even though you were often in a many-player world, you weren&#8217;t collaborating; everyone else became just a target. I liked the idea of deep collaboration that recalls the kind of playful space you see in IRC, or in the real world, where people are solving real world problems.</p>
<p>And I grew up in the Rockies, and I always had a lot of access to the outdoors. So, I saw shared spaces and collaboration as a way to protect our environment. [To step back] I think people use different metrics for measuring their choices in the world, and many people have a value system centered around minimization of harm: making sure that people are not hurt. But my value system is different. I personally believe that protecting the planet is more important: to maximize biodiversity. I feel like protecting people around me comes from protecting the ecosystems they live in.</p>
<p><strong>Tish Shute:</strong> That&#8217;s interesting, isn&#8217;t it, because the history of Keyhole was really that, wasn&#8217;t it? Keyhole later became Google Earth, but I mean it began out of a project to look at what was going on in the ecosystem over Africa at that time, didn&#8217;t it?<br />
<strong><br />
Anselm Hook:</strong> Yes, in fact many people&#8217;s projects stem from an environmental concern. <a id="zxy9" title="Mikel Maron's" href="http://brainoff.com/weblog/">Mikel Maron&#8217;s</a> work for example &#8211; he&#8217;s doing <a id="euvm" title="Map Kibera" href="http://mapkibera.org/">Map Kibera</a>, and he also worked on OpenStreetMap.</p>
<p><strong>Tish Shute:</strong> Map Kibera &#8211; that is the new project?</p>
<p><strong>Anselm Hook:</strong> Oh, yes, his project is called <a id="r7ie" title="Map Kibera" href="http://mapkibera.org/">Map Kibera</a>. He&#8217;s mapping a city in Africa.<br />
[For more see <a id="ngn." title="Map Kibera's YouTube Channel" href="http://www.youtube.com/user/mapkibera">Map Kibera&#8217;s YouTube Channel</a> &#8211; <a id="amqx" title="photo below" href="http://www.flickr.com/photos/junipermarie/4098163856/" target="_blank">photo below</a> from <a href="http://www.flickr.com/photos/junipermarie/">ricajimarie</a>]</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_487qfcv76ft_b.jpg"><img class="alignnone size-medium wp-image-5052" title="dhj5mk2g_487qfcv76ft_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_487qfcv76ft_b-300x199.jpg" alt="dhj5mk2g_487qfcv76ft_b" width="300" height="199" /></a></p>
<p><strong>Tish Shute:</strong> Right, great!</p>
<p><strong>Anselm Hook:</strong> When I started to look at GIS and mapping I started to meet people who had a very similar background. What happened to me is I kind of stepped away from games around the year 2000. I&#8217;d seen a talk by Bruce Sterling at an event called <a id="e8dn" title="PlaNetwork" href="http://www.conferencerecording.com/newevents/pla20.htm">PlaNetwork</a>. And that event was, for me, a turning point where I decided to focus full time on exactly what I cared about instead of doing things that were kind of similar to what I cared about. So, his influence is a pretty significant one to me at that exact moment.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b.png"><img class="alignnone size-medium wp-image-5053" title="dhj5mk2g_490gcp7q6fn_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b-300x80.png" alt="dhj5mk2g_490gcp7q6fn_b" width="300" height="80" /></a></p>
<p>[For more see <a id="q2or" title="viridiandesign.org" href="http://www.viridiandesign.org/About.htm">viridiandesign.org</a> &#8211; it seems that it is time for a &#8220;Neo-Viridian&#8221; revival.]</p>
<p><strong>Tish Shute:</strong> It&#8217;s interesting because now your paths are crossing again with augmented reality. You are on the same wavelength again.</p>
<p><strong>Anselm Hook:</strong> It&#8217;s funny, actually, I&#8217;ve had a couple of brief overlaps in that way. Well, so in 2000 I went to see this talk and I did a small project called &#8211; well, I called it <a id="bx3u" title="SpinnyGlobe" href="http://github.com/anselm/SpinnyGlobe">SpinnyGlobe</a>. What I did is I mapped protests from a number of websites onto a globe to show the level of community opposition to the pending war in Iraq. It was the first time there had been a protest before a war. So, it was very interesting to me. [See <a href="http://hook.org/headmap" target="_blank">http://hook.org/headmap</a>]<br />
<strong><br />
Tish Shute:</strong> That&#8217;s really fascinating. Do you have any pictures of that you could send me?</p>
<p><a href="http://www.flickr.com/photos/anselmhook/1747152617/sizes/m/in/set-72157602696188420/" target="_blank"><img class="alignnone size-medium wp-image-5054" title="dhj5mk2g_492ffct2df4_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_492ffct2df4_b-300x225.jpg" alt="dhj5mk2g_492ffct2df4_b" width="300" height="225" /></a></p>
<p>photo from <a id="j05v" title="anselm's flickrstream" href="http://www.flickr.com/photos/anselmhook/1747152617/sizes/m/in/set-72157602696188420/">anselm&#8217;s flickrstream</a></p>
<p><strong>Tish Shute:</strong> Yes, I&#8217;ll definitely look <a id="ua2l" title="SpinnyGlobe" href="http://github.com/anselm/SpinnyGlobe">SpinnyGlobe</a> up. It sounds very interesting. One of the aspects of your work on geo-located data projects like this and <a id="h.gx" title="Platial" href="http://platial.com/">Platial</a> is that you really started to develop this idea of a culture of place, about how people make place. This was the wake up call to me regarding the power of networks combined with geo-data.</p>
<p>We are hoping to extend this idea into augmented reality with an open distributed platform for AR so that we can collaboratively map our worlds from the perspective of who we are, where we are, and what we are doing. I know you&#8217;ve just done some work recently in augmented reality. I know you put the code up already.</p>
<p>By the way, I love the way you take your philosophy into the way you make code &#8211; the practice of making some code, trying some things out, making it all public and publishing your findings, you know, your comments on that experience. Perhaps you could recap how you picked up recently on the state of play with augmented reality, what aspects you looked at, and what came out of that experience?</p>
<p><strong>Anselm Hook:</strong> So, it&#8217;s a very simple trajectory. Coming out of the work I had done on <a id="cs18" title="Platial" href="http://platial.com/">Platial</a>, among other projects, I started to look at the hyper-local, and I suddenly realized that even those services weren&#8217;t really speaking to living, and to how to really see and solve local problems. What was missing was a sense of context.</p>
<p>The map doesn&#8217;t know how you&#8217;re feeling, it doesn&#8217;t know if you&#8217;re in a hurry, it doesn&#8217;t know what you want; it&#8217;s very static. Even the web maps are very static. And augmented reality, I started to recognize, is a combination of &#8211; well &#8211; it&#8217;s probably a collision of many forces, many forces that we&#8217;re all a part of. We also didn&#8217;t realize that the real-time web is really important; it&#8217;s part of what AR is about.</p>
<p>We have all started to realize that the context is important. You know, your personal disposition, your needs, if you want to be interrupted or not. That is the kind of thing that the ubiquitous computing crowd has talked about. We started to recognize that there are sensors everywhere, and the ambient sensing communities talked about that. So what is funny for me about augmented reality is I started realizing it is just a collision of many other trends into something bigger.</p>
<p>Everything else we thought was a separate thing is actually just part of this thing. Even things like Google Maps or mapping systems we think are so great are really just kind of almost an aspect of a hyper-local view. You actually don&#8217;t really care what is happening 10 blocks away or 100 blocks away. If you could satisfy those same interests and needs within a single block, one block away, you would probably be really happy. You really just want to satisfy needs and interests, find ways to contribute, or get yourself fed, or whatever it is you want. And AR seemed to be the playground to really explore the human condition.</p>
<p><strong>Tish Shute:</strong> Anyway, I think one of the things that has been very amazing this year is that we now have good mediating devices that, for the first time, give us compasses, GPS, and accelerometers. But one of the missing pieces with AR at the moment is [tracking, mapping, and registration] &#8211; the kind of thing colloquial mappings of the world could be of great help with.</p>
<p>We have seen mapping coming out of the Flickr data, e.g., the University of Washington putting maps together from geo-tagged Flickr photos. Now if we could have that linked up with AR, then we would have the kind of mapping we need to really hook the geo-data onto the world in a way that goes beyond&#8230; you know, what compass and GPS can really deliver is pretty minimal at the moment.</p>
<p><strong>Anselm Hook</strong>: There is a real risk of our augmented reality world being owned by interests which are not our own. There is a real question of, when you hold up that AR goggle, what are you going to see? Are you going to see corporate advertising? Are you going to see your friends&#8217; comments or criticisms? Is it going to be an Iran or a democracy, right? It is unclear.</p>
<p><span id="vix9" title="Click to view full content">Right now there are some disturbing trends I have noticed. I am a big fan of Google Goggles. I think it is a great project. But when you are mediating the translation layer between the image and the data, then there is an opportunity for you to control it, and that opportunity is hard to resist. It is hard to choose not to own that opportunity. It is an advertising opportunity. It is a revenue opportunity. It is a chance to send a message and a tone. </span></p>
<p><span id="vix9" title="Click to view full content">I know that Google and companies like that are keenly aware of the kinds of roles they don&#8217;t want to hold, but it is sometimes seductive to think about them. And I am afraid that we, as a community, need to assert an ownership, kind of a commons, over how computers will translate what they see into information that we perceive.</span></p>
<p><strong>Tish Shute:</strong> Yes. And this is how we met, again, recently [over the project to create an open, distributed platform for AR using the Wave Federation Protocol]&#8230;</p>
<p><span id="e18n" title="Click to view full content">Something I feel really deeply is that, you know, basically we need the physical internet to be as open as the end-to-end internet has been. Or more so, actually, because on the end-to-end internet the trend has been toward walled gardens. Facebook became an enormous walled garden which, despite our predictions about them, is where the social experience on the web really happens. It&#8217;s very much in walled gardens still, and I really feel that with the physical internet we need to make great efforts for it not to become just a series of small pockets of privately funded walled gardens.</span></p>
<p>There needs to be some kind of communications infrastructure that keeps it open. That was when I got interested in looking at the Wave Federation Protocol, because it was an open, real-time protocol that could possibly be a basis for that. But the point you&#8217;ve just made &#8211; the mapping of the world, and who has the &#8220;goggles&#8221;, i.e., the image data, the image databases, that make the world meaningful &#8211; that&#8217;s still a BIG question [i.e. who controls the view?].</p>
<p>When I saw <a id="ewxn" title="ImageWiki" href="http://imagewiki.org/">ImageWiki</a>, [I realized] that is a piece that is vital for augmented reality. We need a huge social effort to be involved in this, linking in and creating the physical internet, creating the image hyperlinks that will make it meaningful.</p>
<p><span title="Click to view full content"><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_493fv23rg33_b.png"><img class="alignnone size-medium wp-image-5055" title="dhj5mk2g_493fv23rg33_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_493fv23rg33_b-300x219.png" alt="dhj5mk2g_493fv23rg33_b" width="300" height="219" /></a></span></p>
<p><span id="e18n" title="Click to view full content"><strong>Anselm Hook:</strong> I think that&#8217;s a great point. The search interface, the kind of Internet that we&#8217;re used to, the way we talk to the network now, is fundamentally open end to end. Yes, you can have your oligarchies inside of it, as we see with Facebook, but you can always start your own venture up: you can do a search on something, find that website, and join it, or you can put up your own webpage and people can find it. </span></p>
<p><span id="e18n" title="Click to view full content">The translation layer, the idea of text search and the power of discovery, the serendipity and the openness of that discovery &#8211; it&#8217;s pretty open right now. We do have some serious boundaries of language, which is one of the reasons I was working on <a id="xg:8" title="Meadan.org" href="http://www.imug.org/events/past2007.htm#meadan">Meedan.org</a> [hybrid distributed, natural language translation] for a couple of years, trying to bridge that issue.</span></p>
<p>But here, as we move toward a physical internet, there&#8217;s no clicking and there&#8217;s no interface; the computer is just telling you what it thinks you&#8217;re looking at &#8211; translating, you know, an image of a billboard to the name of the rock star who&#8217;s on that billboard, or translating the list of ingredients on a can of soup to the source outlets where it thinks those ingredients came from. When you have that kind of automated mediation, the question of trust definitely arises.</p>
<p>And we haven&#8217;t seen the Clay Shirkys or the Larry Lessigs of the world start to talk about this yet. Although I suspect that in the next four or five years the zero-click interface will become the primary interface, and we&#8217;ll come to assume that what we see, with the extra enhanced data projected onto our view, is the truth. Yet, at the same time, there is just no structure or mechanism even being considered for a democratic ownership of it.</p>
<p><span id="fv3x" title="Click to view full content">We have with DNS, for example, the idea that you can register a domain name and people can search for it, and find it, and go to it. There&#8217;s no such thing as an Image DNS, or an image translation to DNS, right now. What does it mean when everything is just &#8220;magic&#8221;, when there&#8217;s no way for you to be a part of the conversation, where you&#8217;re just a consumer of what people tell you &#8211; or of what one company, right now, tells you &#8211; is reality? That&#8217;s a real concern.<br />
<strong><br />
Tish Shute: </strong>This, to me, is the most important question at the moment. I mean, it&#8217;s the big one, and it&#8217;s the place to put energy if you love the Internet [and what it can now become], right? You&#8217;ve got to put a lot of energy into this, because this [a democratized view of the physical world as a platform] won&#8217;t just happen; there&#8217;s a lot of momentum already for it to be heavily privatized, partly because some of the computer vision algorithms that, say, make sense of things like the geotagged photographs are not open. I mean, for example, the beautiful maps that have been made at the University of Washington [from Flickr geotagged photo sets] &#8211; that work isn&#8217;t in the public domain.</span></p>
<p><strong>Anselm Hook:</strong> Right, Tish. And in fact you&#8217;re referring [with the maps from the Flickr photos] to ordinary maps, and we&#8217;ve already seen that maps lie; we&#8217;ve already seen how much maps reflect a certain truth that becomes the normative truth. Google Maps reflects roads, because it is about roads and cars, right? Only recently have they thought about buses and walking. So the normative view that people assume is reality shows off, you know, Starbucks, and roads, and cars; that becomes the default, and those prejudices are just assumed to be the truth. But they&#8217;re not the truth at all.</p>
<p>I was talking to a friend of mine in Montreal, [Renee Sieber], and she said that the Indian portage routes are a bridge across land and water; they don&#8217;t think of a piece of land and a piece of water as being different things, they think of them as one thing: a route. It&#8217;s already a different kind of language, and we can&#8217;t even reflect it.</p>
<p>So not only is there this kind of formal, anthropological lie, in a sense, but there&#8217;s this way that we deceive ourselves because of our own prejudices.</p>
<p><strong>Tish Shute:</strong> Yes, I agree, and that&#8217;s why I think some of the things you had written on ImageWiki point clearly to the need to create a social commons. We need a social commons for the real-time physical internet, and we need it for the image hyperlinks that make sense of that.</p>
<p>And it&#8217;s a complicated thing in a sense, though, because we don&#8217;t actually have a good distributed infrastructure for AR yet, and I found, exploring AR Wave, that at last we have the suggestion of an open, federated protocol for real-time communication &#8211; the Wave Federation Protocol. [Real-time communication is a very important part of AR.] It isn&#8217;t an actuality yet where lots of people are able to use it and set up their own servers, and there&#8217;s not a standard all the way through [there is not a standard for how data is sent between the client and the server].</p>
<p>But the Wave Federation Protocol does make truly distributed social AR possible. When I saw ImageWiki, I started thinking about bringing it together with the social collaborative power of distributed AR. This really would be the basis of creating a social commons for augmented reality and the physical world as a platform &#8211; the <span id="np6x" title="Click to view full content">start of a bottom-up, deeply social collaboration on how we create the augmented reality colloquial maps that can inform a hyper-local view of the world.</span></p>
<p><strong>Anselm Hook:</strong> Yes. When Paige Saez, John Wiseman, and myself, and a few other folks&#8230; you know, Benjamin Foote, Marlin Pohlmann, and a couple of other people started to play with this, we quickly found that&#8230; We started to realize, &#8220;Oh, this kind of thing will be at least as popular as IRC. There will be at least as many people doing this as chatting in little virtual spaces. There&#8217;ll be at least as many people decorating the world with augmented reality markup, and maybe using the real world as a kind of barcode for translating what you&#8217;re looking at into an artifact, a digital artifact.&#8221;</p>
<p>And<span id="csy2" title="Click to view full content"> the size of that space was going to be huge, basically. Maybe not quite as commodifiable as Twitter, but certainly very energetic.</span></p>
<p>Many of the projects we did were just looking at these kinds of issues from an artistic, technical, and political point of view. We weren&#8217;t so much posing complete solutions as using a praxis to explore the idea with an implementation, as a foundation for this discussion. So I think we opened that can of worms for sure.</p>
<p><strong>Tish Shute:</strong> Did you actually set ImageWiki up to work as a location-based app yet?</p>
<p><strong>Anselm Hook:</strong> It is a location-based app. It collects your longitude, latitude, and the image, and stores them. And then it uses that as a way to translate the image into anything else. It could be a piece of text or a URL.<br />
<strong><br />
Tish Shute:</strong> So there is a smartphone app, but you didn&#8217;t take it as far as an AR app yet?</p>
<p><strong>Anselm Hook:</strong> No. We didn&#8217;t do a heads-up view. There are apps on the iPhone store that do that, but they don&#8217;t do the brute-force image recognition that we were using. We used a third-party, off-the-shelf algorithm that we found via Wikipedia, downloaded the source code, and threw it on the server. And John Wiseman in LA wrote the scalable database backend so that we could scale the actual&#8230;<br />
<strong><br />
Tish Shute:</strong> So how did you set the iPhone app up to work?</p>
<p><strong>Anselm Hook</strong>: The iPhone side was very simple. You take a picture of something and it tells you what it is. That is all it did. We would take the location, but the client side, the iPhone side, just rendered what was returned to you&#8230; It said, &#8220;Someone said that this picture of a barking dog is an advertisement for a local band.&#8221;</p>
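<p>The round trip Anselm describes here &#8211; send a photo and a location up, get a community annotation back &#8211; can be sketched as a toy service. This is a hypothetical illustration, not the actual ImageWiki code: the class and field names are invented, and an exact hash stands in for real image recognition.</p>

```python
import hashlib

def fingerprint(image_bytes):
    # Stand-in for real image recognition; an exact hash only matches
    # identical bytes, which is enough to show the shape of the service.
    return hashlib.sha256(image_bytes).hexdigest()

class ImageWikiServer:
    """Toy stand-in for the server side: fingerprint -> annotation."""
    def __init__(self):
        self.annotations = {}  # fingerprint -> (lat, lon, annotation)

    def tag(self, image_bytes, lat, lon, annotation):
        # A user contributes a community annotation for this image.
        self.annotations[fingerprint(image_bytes)] = (lat, lon, annotation)

    def lookup(self, image_bytes):
        # The phone uploads a photo; the server answers with the annotation.
        entry = self.annotations.get(fingerprint(image_bytes))
        return entry[2] if entry else "No one has tagged this yet."

server = ImageWikiServer()
poster = b"...pixels of a barking-dog poster..."
server.tag(poster, 45.52, -122.68, "an advertisement for a local band")
print(server.lookup(poster))  # -> an advertisement for a local band
```

<p>The point of the sketch is only the division of labor: the client captures and displays, while all recognition and community data live on the server.</p>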
<p><strong>Tish Shute:</strong> Right. So basically it was geo-tagged?</p>
<p><strong>Anselm Hook:</strong> Yes. We are just collecting the geo information. Actually, there were a whole lot of technical challenges. The whole idea of ImageWiki is actually kind of beyond the technical ability of a small team like us. It really does take a group like Google to do this kind of thing in a scalable way.<br />
<strong><br />
Tish Shute:</strong> Why is that?</p>
<p><strong>Anselm Hook:</strong> There are two sides. There is the curating of the images. I think that is the job of groups like us &#8211; open source groups who can curate images <span id="vxty" title="Click to view full content">that are owned by the community. And then there is the searching side, the algorithm side, where you are actually matching the fingerprint of one image against the images in your database; that is much more&#8230; that is much more industrial. We get both sides, but ours is not a scalable solution. It is mostly&#8230; proving that it could be done was what was important.<br />
</span><br />
<span id="a3ou" title="Click to view full content"><strong>Tish Shute: </strong>In terms of hooking ImageWiki up to the collaborative possibilities of AR Wave, wouldn&#8217;t federation pose some interesting possibilities for scaling search algorithms and all that?</span></p>
<p><span id="vp27" title="Click to view full content"><strong>Anselm Hook:</strong> Yes. And what is funny also, incidentally, is that we nevertheless did look for some financial support for it, but we couldn&#8217;t&#8230; we just didn&#8217;t find the investors to scale it. Now, other companies like SnapTell took a shot at it. And they have an app in the iPhone store where you can point at a beer bottle and get back the name of the beer.</span></p>
<p>The classic example everyone uses is a book. Amazon has all the image jackets of all their books. You can point SnapTell at almost any book and get back links to buy that at Amazon, the price of the book, and user comments on the book. So they are treating Amazon as the canonical voice of the book, for better or worse. That is the state of the art so far, up until Google Goggles came out a little while ago, which actually blows it out of the water. But, that is where we are now.</p>
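<p>The matching side Anselm calls &#8220;industrial&#8221; &#8211; comparing a query image&#8217;s fingerprint against every fingerprint in a database &#8211; can be sketched with toy binary fingerprints and Hamming distance. Real systems like SnapTell or Google Goggles use far more robust features; the bit patterns below are made up purely to show the shape of the matching step.</p>

```python
def hamming(a, b):
    # Number of differing bits between two integer fingerprints.
    return bin(a ^ b).count("1")

def best_match(query_fp, database):
    """database: dict of name -> fingerprint; returns (name, distance)."""
    name, fp = min(database.items(), key=lambda kv: hamming(query_fp, kv[1]))
    return name, hamming(query_fp, fp)

# Hypothetical fingerprints of two book jackets (invented bit patterns).
books = {
    "Snow Crash": 0b1011001110101100,
    "Neuromancer": 0b0100110001010011,
}
# A slightly noisy fingerprint of the Snow Crash jacket (two bits flipped).
query = 0b1011001110101100 ^ 0b0000010000000100
print(best_match(query, books))  # -> ('Snow Crash', 2)
```

<p>Scaling this beyond a linear scan &#8211; indexing millions of fingerprints so a query does not touch every entry &#8211; is exactly the &#8220;industrial&#8221; part a small team struggles with.</p>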
<p><strong>Tish Shute: </strong>Right. But the point you raise about something like Amazon becoming the canonical voice of what a book is &#8211; this is the whole point, isn&#8217;t it?</p>
<p><strong>Anselm Hook:</strong> Is Amazon truth? It&#8217;s not bad. Jeff Bezos seems like a nice guy, but, you know.</p>
<p><strong>Tish Shute:</strong> And this is the point of having open infrastructures for this. It should be obvious in a way, but it comes back to what made the Internet great: even though, as you note, you get an oligarchy like Facebook, people could always just go off and do something else, right? Because the fundamental infrastructure was basically open and designed to be available to everyone. And many people have championed that and fought hard [to maintain this openness], haven&#8217;t they? They have devoted their lives to keeping it that way, even as the oligarchies have done their thing.<br />
<strong><br />
Anselm Hook:</strong> Yes. There are really some things underneath all of this that haven&#8217;t been solved yet.</p>
<p>One is that trust in social networks has not been built yet, so we can&#8217;t do peer-based recommendations very well. We can&#8217;t filter noise by peers. Twitter is kind of moving there, but I don&#8217;t just want to listen to my Twitter friends. I want to listen to my friends of friends. If I am getting truth from somebody, I want to get that truth from people my friends say that they trust.</p>
<p>Then the second problem is that there is a search business. My friend Ed Bice, who runs <a id="lir5" title="Meedan" href="http://beta.meedan.net/">Meedan</a>, always says that a search itself, a search request, is a publishing moment. It is an opportunity to say what you think. In the real world, if you are just hanging out with humans and you look somewhere, other people might follow your gaze and look at what you are looking at. Your gaze itself is a public act.</p>
<p>Gaze is a soft act, but it is one that is visible. With Google, the gaze<span id="zuat" title="Click to view full content"> of four billion people is invisible. We don&#8217;t know what people are looking at; there is no opportunity to participate. Let me give you a real example. Say I have taken an image of something, a bust or a statue. Why can&#8217;t the museum in Cairo look at my request and tell me, &#8220;Oh yeah, that is Tutankhamen,&#8221; or, &#8220;That is Nefertiti,&#8221; right? Why can&#8217;t they have a chance to participate in the search and respond to me?</span></p>
<p><span id="zuat" title="Click to view full content"> Right now the only one that responds when I do a search is Google. We need to invert the search pyramid and open up search, so that search is a democratic act, so that you can publicly permission your searches, so that other people can respond and can reach out to you, rather than you having to initiate every dialogue. </span></p>
<p><span id="zuat" title="Click to view full content">The common example of this &#8211; and we see this everywhere: I am looking for a slice of pizza; I am hungry, I want some pizza. I have to ask Google, find twelve websites, call twelve phone numbers, talk to each of the twelve stores, and ask them: are they open late, is the food organic, is the food any good, do my friends like it?</span></p>
<p>Whereas what I should be able to do is just say it&#8217;s a search moment and I am interested in pizza. If a pizza place meets my criteria &#8211; you know, my friends like it, it is organic, it is open &#8211; then that pizza place can call me. I have the money; why should I do the search? So the whole business of search, the whole structure of search, is predicated on a revenue model, but it&#8217;s a really short-sighted revenue model; it&#8217;s not a brokerage.</p>
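<p>The inverted search Anselm argues for &#8211; publish the intent once and let matching providers come to you &#8211; might look something like the sketch below. The broker, the provider class, and the criteria fields are all illustrative assumptions, not an existing protocol.</p>

```python
class IntentBroker:
    """Toy pub/sub broker: searchers publish an intent, providers reply."""
    def __init__(self):
        self.providers = []

    def register(self, provider):
        self.providers.append(provider)

    def publish(self, intent):
        # Broadcast the intent; collect replies from willing providers.
        return [reply for p in self.providers
                if (reply := p.respond(intent)) is not None]

class PizzaPlace:
    def __init__(self, name, organic, open_late):
        self.name, self.organic, self.open_late = name, organic, open_late

    def respond(self, intent):
        # The provider, not the searcher, decides whether it matches.
        if intent["want"] == "pizza" and \
           (self.organic or not intent["organic"]) and \
           (self.open_late or not intent["open_late"]):
            return f"{self.name}: we match, come on over"
        return None

broker = IntentBroker()
broker.register(PizzaPlace("Slice of Life", organic=True, open_late=True))
broker.register(PizzaPlace("Late Nite Pie", organic=False, open_late=True))
print(broker.publish({"want": "pizza", "organic": True, "open_late": True}))
```

<p>The inversion is that the twelve phone calls become twelve <code>respond()</code> calls made on the providers&#8217; side; the hungry searcher publishes once and waits.</p>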
<p>Search isn&#8217;t search; search is hand waving. These should be moments for us to have a discourse. So the problem we are seeing in AR with communicating the right information is actually underneath AR, at the level of the whole infrastructure.</p>
<p>Search needs to be inverted, and trust filters need to be built. We need to democratically own our data institutions. We don&#8217;t right now. That will become more of a concern, especially with AR.</p>
<p><strong>Tish Shute: </strong>Yes, especially with AR, which is why I got all excited about federation. Do you think federation has the potential, an opportunity, to create [the new infrastructure you describe]?</p>
<p><strong>Anselm Hook:</strong> Absolutely, it&#8217;s absolutely what we must do. It is much harder to do. It is absolutely critical.</p>
<p><span id="lwzk" title="Click to view full content"><strong>Tish Shute:</strong> And why is it much harder to do? Could you explain that?</span></p>
<p><strong>Anselm Hook:</strong> Well, it&#8217;s very easy for a bunch of hackers to build a service that you log into and fetch some data; it&#8217;s a single thing. They don&#8217;t have to talk to anybody, they can use their own protocols, they can hack it; it&#8217;s a big black box behind the scenes. There&#8217;s someone running back and forth in a giant Chinese room delivering manuscripts and scrolls to you. Whatever is behind the black box, you don&#8217;t care; it just works. But when you federate, you need to actually publish and have standards, and then you&#8217;re talking about semantics, and everyone starts getting really excited and waving their hands. It becomes a disaster. It&#8217;s at least an order of magnitude more difficult than DIY, build-it-yourself.</p>
<p><strong>Tish Shute:</strong> So, in terms of what Google Wave has done with their approach to federation, what do you think their achievements have been, and what are their obstacles? What do you think are the failings of Wave? Because it&#8217;s the first big, major-player-backed public approach to something federated, isn&#8217;t it? In real time.</p>
<p><strong>Anselm Hook:</strong> Yes. I think the most important non-federated service on the planet today is Twitter. <a id="uhg3" title="Ident.ic.a" href="http://identi.ca/group/identica">Identi.ca</a> isn&#8217;t getting any traction with respect to Twitter, [even though] Identi.ca is a federated version of Twitter and is very good. [Identi.ca is now <a id="w05j" title="Status.net" href="http://status.net/">Status.net</a>.] So we see already there that small players aren&#8217;t being competitive. Then look at other services like IRC. IRC is the secret backbone of the Net. All the open source projects, all the teams, all the people that work on open source projects are on IRC. It&#8217;s the only way they get anything done.</p>
<p>With Google Wave, and the protocols underneath Google Wave, we see an attempt to build a similar kind of real-time but distributed protocol. I think it&#8217;s the right direction. I think people should pick up the offering and make their own servers. I think that protocol is really great: the fact that it is compressed, that it is high performance, <span id="md2h" title="Click to view full content">that it is small &#8211; real-time blobs of data flying around &#8211; is all exactly the way it should be done. It is getting close to the kind of rewrite of the Internet that people keep talking about, because, you know, the net protocols are so bad; it is starting to treat intermittent exchanges as more transitory, volatile, and not heavy.</span></p>
<p><strong>&#8230;to be continued. Part 2 coming soon!<br />
</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/feed/</wfw:commentRss>
		<slash:comments>17</slash:comments>
		</item>
		<item>
		<title>The AR Wave Project: An Introduction and FAQ by Thomas Wrobel</title>
		<link>http://www.ugotrade.com/2009/12/04/ar-wave-project-an-introduction-and-faq-by-thomas-wrobel/</link>
		<comments>http://www.ugotrade.com/2009/12/04/ar-wave-project-an-introduction-and-faq-by-thomas-wrobel/#comments</comments>
		<pubDate>Sat, 05 Dec 2009 02:50:18 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR Blps]]></category>
		<category><![CDATA[AR DevCamp]]></category>
		<category><![CDATA[AR Network]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[AR Wave project]]></category>
		<category><![CDATA[AR Wave Wiki]]></category>
		<category><![CDATA[ARBlip]]></category>
		<category><![CDATA[ARDevCampNYC]]></category>
		<category><![CDATA[ARN]]></category>
		<category><![CDATA[Augmented Realit]]></category>
		<category><![CDATA[augmented reality network]]></category>
		<category><![CDATA[distributed augmented reality]]></category>
		<category><![CDATA[Goggle Wave Federation Protocol]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Joe Lamantia]]></category>
		<category><![CDATA[layers and channels of augmented reality]]></category>
		<category><![CDATA[markerless augmented reality]]></category>
		<category><![CDATA[multiuser multisource augmented reality]]></category>
		<category><![CDATA[open augmented reality network]]></category>
		<category><![CDATA[open distributed augmented reality]]></category>
		<category><![CDATA[pygowave]]></category>
		<category><![CDATA[PyGoWave Qt-Based Desktop Client]]></category>
		<category><![CDATA[shared augmented realities]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[Sophia Parafina]]></category>
		<category><![CDATA[storing geolocated data on Wave Servers]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Wave enabled augmented reality]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4960</guid>
		<description><![CDATA[Images from Mitsuo Iso&#8217;s Denno Coil (Click to enlarge), the game &#8220;Metroid Prime,&#8221; and Terminator. Thomas Wrobel, Sophia Parafina, Joe Lamantia, Matthieu Pierce, and I will lead a session tomorrow for AR DevCampNYC introducing the AR Wave Project. Thomas, Joe and Matthieu will participate via Skype (10am to 11.30am EST), and Sophia Parafina and [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/Screen-shot-2009-12-04-at-7.56.58-PM.png"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/Screen-shot-2009-12-04-at-6.43.24-PM.png"><img class="alignnone size-medium wp-image-4961" title="Screen shot 2009-12-04 at 6.43.24 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/Screen-shot-2009-12-04-at-6.43.24-PM-300x181.png" alt="Screen shot 2009-12-04 at 6.43.24 PM" width="300" height="181" /></a><br />
</strong></p>
<p><em>Images from Mitsuo Iso&#8217;s<a href="http://en.wikipedia.org/wiki/Denn%C5%8D_Coil" target="_blank"> Denno Coil</a> (Click to enlarge), the game &#8220;Metroid Prime,&#8221; and Terminator.</em></p>
<p><a href="http://www.lostagain.nl/" target="_blank">Thomas Wrobel</a>, <a href="http://opengeo.org/about/team/sophia.parafina/" target="_blank">Sophia Parafina</a>, <a href="http://www.joelamantia.com/" target="_blank">Joe Lamantia, </a><a href="http://matthieupierce.com/" target="_blank">Matthieu Pierce</a>, and I will lead a session tomorrow for<a href="http://www.ardevcamp.org/wiki/index.php?title=Main_Page" target="_blank"> </a><a href="http://www.ardevcamp.org/wiki/index.php?title=NYC_ardevcamp" target="_blank">AR DevCampNYC</a> introducing the AR Wave Project. Thomas, Joe and Matthieu will participate via Skype (10am to 11.30am EST), and Sophia Parafina and I will both be at <a href="http://www.ardevcamp.org/wiki/index.php?title=NYC_ardevcamp" target="_blank">AR DevCampNYC</a> at <a title="http://openplans.org/contact/" rel="nofollow" href="http://openplans.org/contact/">The Open Planning Project office (TOPP)</a>. The <a href="http://pygowave.net/" target="_blank">PyGoWave</a> crew will be introducing <a href="http://livestream.com/pygowave" target="_blank">PyGoWave via LiveStream</a>.</p>
<p>From 1.30pm to 2.30pm EST there will be a shared <a href="http://pygowave.net/" target="_blank">PyGoWave</a>/AR Wave session <a href="http://www.ardevcamp.org/wiki/index.php?title=Main_Page" target="_blank">with Mountain View</a> (if bandwidth permits).</p>
<p>The Skype conference will be at ardevcampnyc. To participate in Wave, please join the public Wave, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252BH83lcj6RA" target="_blank">AR Wave: AR DevCamp Session</a>. There is also an <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">AR Wave Wiki up now &#8211; see here</a>.</p>
<p><a href="tridarras.com/#http://www.dimitridarras.com/images/dd_work.jpg" target="_blank">Dimitri Darras </a>(avatar Dimitri Illios) is working on streaming the AR DevCampNYC sessions into Second Life,Â  <a href="http://slurl.com/secondlife/Ambleside/228/247/25" target="_blank">SLURL here</a>.</p>
<p>Thomas has done a very nice introduction and FAQ below. This should help people new to the project get up to speed quickly.</p>
<p>There are already several Waves that show the history of this project, including: <a href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252Bhvk2Fj3wB" target="_blank">AR Wave: Augmented Reality Framework Development</a>, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252BeyLQLb4ED" target="_blank">AR Wave Use Cases</a>, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252Bok4URyFyR" target="_blank">PyGoWave AR Tech Discussion</a>, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252BJAcNzz16A" target="_blank">AR Wave Augmented Reality Wave Development</a>, <a href="https://wave.google.com/wave/#restored:wave:googlewave.com!w%252B0VnNxxoOB.1" target="_blank">AR Wave / Muku Organization and Admin</a>.</p>
<p>Also I have several posts for people interested in more of the background, including: <a title="Permanent Link to The Next Wave of AR: Mobile Social Interaction Right Here, Right Now!" rel="bookmark" href="../../2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/">The Next Wave of AR: Mobile Social Interaction Right Here, Right Now!</a>, <a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">AR Wave: Layers and Channels of Social Augmented Experiences</a>, <a title="Permanent Link to Total Immersion and the &#8220;Transfigured City:&#8221; Shared Augmented Realities, the &#8220;Web Squared Era,&#8221; and Google Wave" rel="bookmark" href="../../2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/">Total Immersion and the &#8220;Transfigured City:&#8221; Shared Augmented Realities, the &#8220;Web Squared Era,&#8221; and Google Wave.</a></p>
<p>Thomas uses the term Arn (augmented reality network), which is one of the candidate names for the project; Muku (crest of a Wave) is another suggestion. Thomas&#8217; intro and FAQ below can also be found <a href="http://lostagain.nl/testSite/projects/Arn/information.html" target="_blank">here</a>.</p>
<h3><strong>What is the AR Wave Project?</strong></h3>
<p>In simple terms, it&#8217;s a protocol, currently in development, for storing <a id="zblc" title="geolocated" href="http://en.wikipedia.org/wiki/Geolocation">geolocated</a> data on Wave servers.</p>
<p>We believe this will help lay the foundations for an open, universally accessible, and decentralised system for shared augmented reality overlays which various clients can connect to and use.</p>
<p>This AR Network should spur much faster adoption of AR technologies, give existing browsers more functionality, and provide the network infrastructure that will one day allow many of the fictional depictions of AR to become reality.</p>
<p><strong>The AR Network.</strong></p>
<p>When we speak of a future AR Network, we mean one as universal and as standard as the internet. One where people can connect from any number of devices, and without additional downloads, experience the majority of the content.</p>
<p>Where people can just point their phone, webcam, or pair of AR glasses anywhere a virtual object should be, and they will see it. The user experience is seamless; AR comes to them without them needing to &#8220;prepare&#8221; their device for it.</p>
<p>The Arn should be an inclusive and open platform to which any number of devices can connect, and on which anyone can make and host their own location-specific models or data.</p>
<p>It should allow people to communicate both publicly and privately, and not have their vision constantly cluttered with things they don&#8217;t want to see.</p>
<p>This is our vision, and we think a Wave protocol will help it become a reality.</p>
<p><strong>Why Wave?</strong></p>
<p>Wave allows the advantages of both real-time communication, as well as the advantages of persistent hosting of data. It is both like IRC, and like a Wiki. It allows anyone to create a Wave, and share it with anyone else. It allows Waves to be edited at the same time by many people, or used as a private reference for just one person.</p>
<p>These are all incredibly useful properties for any AR experience. What&#8217;s more, Wave is open: anyone can make a server or client for Wave. Better yet, these servers will exchange data with each other, providing a seamless world for the user: a single login will let you browse the whole world of public waves, regardless of who&#8217;s providing or hosting the data. Wave is also quite scalable and secure: data is only exchanged when necessary, and stays local to a single server if no one else needs to view it.</p>
<p>Wave allows bots to run on it, allowing blips in a wave to be automatically updated, created or destroyed based on any criteria the coders choose. Wave even allows playback of all edits since a wave was created.</p>
<p>For all these reasons and a few more, Wave makes a great platform for AR.</p>
<p><strong>How?</strong></p>
<p>In basic terms, we will devise a standard way to geolocate a bit of data and store it as a <a id="u0cd" title="Blip" href="http://google.about.com/od/b/g/google_wave_blip.htm">Blip</a> within a wave.</p>
<p>This data could be a 3d mesh, a bit of text, or even a piece of audio.</p>
<p>Various clients on various devices could then log on to locate, interpret and display this data as they see fit.</p>
<p><a href="http://lostagain.nl/tempspace/PrototypeDiagram3_wave.html" target="_blank"><img class="alignnone size-medium wp-image-4962" title="Screen shot 2009-12-04 at 7.56.58 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/12/Screen-shot-2009-12-04-at-7.56.58-PM-300x168.png" alt="Screen shot 2009-12-04 at 7.56.58 PM" width="300" height="168" /></a></p>
<p><em>Click on image above to enlarge.</em></p>
<p>A typical example might be holding up your phone and seeing messages written by your friends and family in the locations where they are relevant.</p>
<p>You could see an arrow hovering over the café you&#8217;re meeting a friend at, notes above their flat saying whether they are in or out, or messages by shops reminding you to pick up the particular brand of cereal they like.</p>
<p>This data would be personal to just yourself and whoever you invite to share that wave with.</p>
<p>Other forms of data could be public, like city-maps, online games, or historical landmarks being recreated. Custom views of the world with data for entertainment, commercial, environmental or informative purposes.</p>
<p>The possibilities with geolocated data are endless, as are the various ways to display and make use of them.</p>
<p>One of the things I&#8217;m most passionate about is people being able to see many different types of data, both public and private at the same time and from many different sources at once.</p>
<p>For instance, if you&#8217;re playing an AR game, why shouldn&#8217;t your chat window be viewable at the same time?</p>
<p>If you have skinned your environment with a custom view of the world, why shouldn&#8217;t you also see mapping or restaurant recommendations?</p>
<p>The ways to present these layers of data, and to toggle them on/off in the most intuitive and flexible ways, will be a task for the client makers, and I&#8217;m sure we will see many innovations in those areas.</p>
<p>But using Wave at least provides the framework for having multiple information sources, controlled by many different people, yet accessible and user-submittable via the same protocol.</p>
<p><strong>Who?</strong></p>
<p>This idea first sprouted from a paper I wrote focusing on the potential for IRC to be used for AR:</p>
<p><a id="ig44" title="http://www.lostagain.nl/testSite/projects/Arn/AR_paper.pdf" href="http://www.lostagain.nl/testSite/projects/Arn/AR_paper.pdf">http://www.lostagain.nl/testSite/projects/Arn/AR_paper.pdf</a></p>
<p>Near the end I suggested Wave might be a better alternative (using Google Wave was an idea Tish Shute, of UgoTrade, brought up in response to the Arn prototype design on IRC), and it quickly became apparent that Wave was a very suitable medium.</p>
<p>Since then there has been a lot of interest, and numerous people have offered to help.</p>
<p>In particular, the <a id="vms1" title="PygoWave" href="http://pygowave.net/blog/">PygoWave</a> team has recently been helping us out, as they have an existing server supporting a client/server (c/s) protocol, which is being actively developed.</p>
<p><strong>Where?</strong></p>
<p>You can join the general discussion here:<br />
<a id="wvja" title="Augmented Reality Wave Development" href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252BJAcNzz16A">Augmented Reality Wave Development</a></p>
<p>The technical side is here:<br />
<a id="qw95" title="Augmented Reality Wave Framework Development" href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252Bhvk2Fj3wB">Augmented Reality Wave Framework Development</a></p>
<p><strong>When?</strong></p>
<p>There&#8217;s lots still to do, and we are at an early stage.</p>
<p>Our current targets (last updated 11/12/2009):</p>
<ul>
<li>Getting reading/writing of prototype ARBlips working on the PygoWave server (the PygoWave team have already made a standalone client and have the protocol for this sorted!).</li>
<li>Establishing a minimal spec for ARBlips to be later expanded.</li>
<li>Writing a very simple prototype online client showing how to store/retrieve the data.</li>
<li>Expanding client to work for some use-cases.</li>
<li>Establishing a logo/branding for the project.</li>
</ul>
<p><strong>Other FAQs.</strong></p>
<p><strong>Where&#8217;s the catch?</strong></p>
<p>While we believe Wave is highly suitable for development, it has the drawback of being a new system with just a few servers worldwide, which (at the time of writing) have not yet been federated together.</p>
<p>Naturally, as a new technology, it&#8217;s likely to have some growing pains, and building new technology on top of other new technology will multiply that somewhat. The first pain is the lack of a standard client/server protocol. PygoWave have stepped in to the rescue a bit here, being not just one of the most developed Wave servers other than Google&#8217;s, but also leaping ahead with support for JSON-based c/s interaction. Google has stated they want the community to take the lead on a c/s protocol, so we are hoping they will adopt a JSON variant, or an XMPP one, and add it to the spec. We hope that, much as POP3/IMAP became standards for email server interaction, a similar standard will develop for Wave.</p>
<p>In the meantime we plan to keep the code for writing ARBlips somewhat abstracted so as to make it easy to adapt in future.</p>
<p>As for the newness of Wave and the other potential problems it may bring, we aren&#8217;t that worried, as it&#8217;s built on <a id="jnw1" title="XMPP" href="http://en.wikipedia.org/wiki/XMPP">XMPP</a>, which has already proved reliable.</p>
<p>The other catch is that we are unfunded, which slows development down considerably, as we have to fit it around our other jobs.</p>
<p><strong>I&#8217;m making my own AR Browser, and am slightly interested in maybe supporting you.</strong></p>
<p>We are naturally very keen for support, and particularly for those with skills and visions to give feedback on the proposed protocol. Specifically: what do you want stored in a blip?</p>
<p>That&#8217;s what&#8217;s important at this stage.</p>
<p>We don&#8217;t see the Arn as a replacement for existing browser systems at the moment. We don&#8217;t want to restrict innovation or development in this fast-developing market, as we are very impressed by what has been achieved so far. In many ways our task is small in comparison to what has already been accomplished.</p>
<p>However, we do believe the Arn will make a good addition to existing browser systems. It will allow users to contribute data and have social features without having to worry about accounts or hosting.</p>
<p>It will still be quite some work to support: new GUIs will need to be developed to make it easy to submit data from devices, as well as to log in to waves.</p>
<p>However, we hope over time to build a set of example libs to make the reading/writing of ARBlips as easy as possible to implement in your software.</p>
<p>Perhaps a good way to think about it: existing AR browsers are like word processors; supporting the Arn will be like adding support for *.txt, which doesn&#8217;t limit what you can do with your own format.</p>
<p><em>Eventually</em> we do hope ARBlips hosted on Wave will become the majority of AR data, and that its functionality will be analogous to what the internet is today. We truly believe that, in the long run, a standard is essential.</p>
<p>But for now we think merely getting a baseline format established for how AR data can be communicated will increase usability and usefulness, and help the market grow.</p>
<p><strong>Can I help?</strong></p>
<p>Sure.</p>
<p>We particularly need people with technical skills in relevant fields (both GWT/JavaScript web programming and C++/Qt standalone programming help are very welcome!).</p>
<p>But we also welcome people with just the vision to help focus use-cases and to conceptualise what we want to be able to do with the system.</p>
<p>Please either join the relevant AR Waves or the <a href="http://arwave.wiki.zoho.com/HomePage.html">Wiki</a>.</p>
<p>We are especially interested in those with JSON and Comet experience, specifically those with the ability to make standalone applications that read/write to a server using these methods.</p>
<p><strong>What type of data will an AR Blip store?</strong></p>
<p>This is still actively being decided, but essentially it&#8217;s a physical hyperlink:</p>
<p>A connection between a physical location (or object, see below) and a piece of data.</p>
<p>Specifically, we are thinking about the following fields:</p>
<p>Location in X,Y,Z,<br />
Coordinate system used for the above,<br />
Orientation,<br />
MIMEType <span style="color: #666666;">[the type of data stored]</span><br />
DataItself <span style="color: #666666;">[either an http link for 3D meshes and other larger data, or an inline text string if it&#8217;s just a comment]</span><br />
DataUpdateTimestamp <span style="color: #666666;">[so clients know whether it&#8217;s necessary to redownload]</span><br />
Editors <span style="color: #666666;">[the user(s) that edited/created this blip]</span><br />
ReferanceLink <span style="color: #666666;">[data needed to tie the object to a non-fixed location, such as an image to align it to an object in realtime]</span>,<br />
Metatags <span style="color: #666666;">[to describe the data]</span></p>
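<p>To make the proposed fields concrete, here is a minimal sketch of one such record in plain C++ (rather than the Qt types used later in this post), with a simple line-based serialisation. The field names and the key=value encoding are illustrative assumptions only; the real format is still being hammered out on the Waves.</p>

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// Hypothetical sketch: one ARBlip as a flat record mirroring the fields
// proposed above. Names and encoding are illustrative, not the final spec.
struct ARBlip {
    double x = 0, y = 0, z = 0;           // Location
    std::string coordinateSystem;         // e.g. an OGC identifier
    int roll = 0, pitch = 0, yaw = 0;     // Orientation, in degrees
    std::string mimeType;                 // Type of the data stored
    std::string data;                     // Inline text, or an http link to larger data
    std::string dataUpdateTimestamp;      // So clients know whether to redownload
    std::string editors;                  // User(s) that edited/created this blip
    std::string referenceLink;            // e.g. an image to align the object to in realtime
    std::map<std::string, std::string> metatags;

    // Serialise to one "key=value" pair per line.
    std::string serialize() const {
        std::ostringstream out;
        out << "x=" << x << "\ny=" << y << "\nz=" << z << "\n"
            << "crs=" << coordinateSystem << "\n"
            << "mime=" << mimeType << "\n"
            << "data=" << data << "\n";
        for (const auto& tag : metatags)
            out << "meta." << tag.first << "=" << tag.second << "\n";
        return out.str();
    }
};
```

<p>A client bot could build such a record from user input and write it into a blip; another client could parse the same lines back out, whatever device it runs on.</p>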
<p><strong>Are you purely tying stuff to fixed geolocations?</strong></p>
<p>Certainly not <img src="http://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /><br />
As part of the spec, we want people to be able to link data to dynamically moving objects, trackable by image or other methods.</p>
<p>The idea is that one day someone could link a piece of text or a 3D mesh to an image on a t-shirt they are wearing, or perhaps link a dynamically updating Twitter feed, or provide information on a product (based on its logo).</p>
<p>There&#8217;s a large number of possibilities for image-based linking alone, and that&#8217;s not even considering possibilities like linking RFIDs, or other forms of less precise but invisible binding data.</p>
<p>We need a lot of feedback from companies already doing markerless tracking. What types of images do you need, ideally, to link a mesh to an object? Is one enough?</p>
<h3><strong>Summary of AR Wave Work to Date</strong></h3>
<p><strong>Purpose:</strong> To provide an open, distributed, and universally accessible platform for augmented reality. To allow the creation of augmented reality content to be as simple as making an html page, or contributing to a wiki.</p>
<p><strong>Specific Goal:</strong> To establish a method for geolocating digital data in physical space (or linking it to physical objects) using wave as a platform.</p>
<p>(For justification as to why we are using Wave see: <a href="http://lostagain.nl/testSite/projects/Arn/information.html" target="_blank">our faq</a> )</p>
<p><strong>Wave as a platform</strong></p>
<p>We are developing on the <a title="PyGoWave" href="http://code.google.com/p/pygowave-server/" target="_blank">PyGoWave</a> server at the moment, but the goal is to be compatible with all Wave servers.</p>
<p>PyGoWave has already achieved an important step in enabling the project: it is a wave server with a working and well-documented server protocol, which already allows both standalone and web-based clients to interface with it. See <a href="http://github.com/p2k/pygowave-qt">The PyGoWave Qt-Based Desktop Client</a>.</p>
<p>This is one of the reasons why we have chosen to develop for the Pygo server at this stage.</p>
<p>However, the overall goal of AR Wave is to have a framework compatible with all servers using the Wave Federation Protocol. As more wave servers get c/s protocols, ARblips (the data needed to geolocate objects) could be posted to and retrieved from various servers using the same client software. For this, a standard should emerge. Just as websites don&#8217;t have to be hosted on specific servers, neither should AR data need to be hosted on specific wave servers.</p>
<p>In order to reach our goal, there are a few very achievable steps involved &#8211; see below.</p>
<p><strong>Feedback</strong></p>
<p>We are still actively seeking feedback, so feel free to join the <a href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252Bhvk2Fj3wB">Wave discussions</a> and see the history of how the specifications of the protocol evolved. You can also read the justification for some of the choices already made. Note that a new discussion for AR DevCamp will begin at <a href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252BH83lcj6RA">AR Wave: AR DevCamp Session</a></p>
<p>This will, of course, only be the first draft of the specification, and it is sure to develop considerably in future.<br />
The important thing now is to make working prototypes while maintaining flexibility.</p>
<p>So what do we need to do?</p>
<p><strong>Steps :</strong></p>
<p><strong>* Establish the overall method &#8211; Done.</strong></p>
<p>Each Wave will be a layer on reality which an individual or a group can create. Each Blip in this Wave refers to either a small piece of inline data (like text) or a remote piece of larger data (like a 3D mesh), as well as the data needed to pinpoint it in either relative or absolute real space.<br />
We call these blips ARblips. They are simply blips that store the data necessary to augment a single object onto a specific bit of reality.</p>
<p>It is up to the clients how they interpret and display the data. They could interpret it as a simple 2D list of nearby objects, or as an advanced 3D overlay whereby multiple waves from different sources could be viewed at once. What&#8217;s important is that there is a standard way to link the digital data to the real-world space.</p>
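<p>As a tiny illustration of this freedom, a client might simply pick a presentation style from a blip&#8217;s stored MIME type. Everything below (the function and renderer names) is a hypothetical sketch of one client policy, not part of the proposed protocol:</p>

```cpp
#include <cassert>
#include <string>

// Illustrative only: each client is free to choose how an ARblip is shown.
// This sketch dispatches on the blip's MIME type string.
std::string pickRenderer(const std::string& mime) {
    if (mime.rfind("model/", 0) == 0) return "3d-overlay";    // e.g. a 3D mesh
    if (mime.rfind("audio/", 0) == 0) return "spatial-audio"; // located sound
    if (mime == "text/plain")         return "2d-label";      // nearby-object list / label
    return "2d-label";                                        // safe fallback
}
```

<p>A simple phone client might route everything to the 2D list, while a glasses client with a 3D engine could honour the full overlay; both read the same blips.</p>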
<p>* Establishing the specification for the ARblip &#8211; In progress<br />
We have a good idea of what&#8217;s needed to be stored in an ARblip, and we have hammered out a rough format.<br />
The data might be stored as blip-annotations, but this has yet to be finalised.<br />
A rough outline of the type of data stored can be seen in the c++/qt header for ARblip data at the end of this document.</p>
<p>* Storing and retrieving these pieces of ARblip data on the PyGo server &#8211; In progress.<br />
The Pygowave team has made some excellent libraries that should make reading and writing data on the PyGoWave server very trivial for those with c++ skills.<br />
This, however, is a really critical step, so more developers with C++ skills are very welcome!</p>
<p>* Making the above client mobile, and using a device&#8217;s GPS to place the data. &#8211; Not started.<br />
The next step would be to port the code to a mobile phone and use its GPS input to post geolocated data and view what others have posted. This would be a fairly simple and not too useful app in itself. However, it would mark the first time anyone could post AR data and anyone could view it, all using open-source infrastructure.<br />
As a bonus, because we are using wave infrastructure, the updates to any ARblip should appear in near realtime.</p>
<p>* To continue with the proof of concept, we would like to have simultaneous wave input from a PC<br />
and a mobile phone. &#8211; Not started.<br />
For example, someone could post a pin on the Google Maps API and have that data posted to an ARBlip in a wave. Someone logged into that wave on their mobile device would then see the posted data appear.<br />
What&#8217;s more, we hope that when the Google Maps pin is dragged about, the mobile phone viewer will, with just a few seconds&#8217; lag, see its location updated in real time.</p>
<p>We hope to make a modest yet practical app at this stage.</p>
<p>* After all this, we can go onto the interesting things:<br />
3D data, camera-overlays, data fixed to objects and many more.Â  There&#8217;s plenty of existing software using these features (such as Wikitude, Layer) and some that are even open source software (like Gamaray and Flashkit). The open source code can give us a leg-up. However, we prefer to establish the protocol first. So naturally, these fancy features aren&#8217;t a priority for us. Rather we think our energy is better spent establishing the protocols and infrastructure so that other people can build more advanced bit of software easier.</p>
<p>However, once our primary goals are achieved, we will look to make an open source augmented reality browser ourselves, which will surely include many of these features.</p>
<p>Overall, we hope once we have a simple proof of concept, there will be many groups, both existing and new, wanting to use this Wave system for their own apps, games and data.</p>
<p><strong>Conclusion</strong>:<br />
Really it&#8217;s now all about growing the community. We hope that as soon as we show how great Wave can be for augmented reality, lots of individuals and teams will start making their own clients to read/write geolocated data.<br />
Overall, we don&#8217;t think anything we make will be that impressive in itself. That&#8217;s not our goal.<br />
We instead hope that our project will enable AR content to be made as easily as web content &#8211; that games, information and apps will be able to be created without the creators having to worry about the infrastructure behind it.</p>
<p><strong>Technical information</strong></p>
<p><strong>Current ARBlip header file</strong></p>
<p>(Below is a c++/qt header file for an ARBlip object that illustrates the data being stored.)</p>
<hr />
<pre><code>class arblip
{
public:
    arblip();
    ~arblip();
    arblip(QString, QString, double, double, double, int, int, int, QString);

    QString getDataAsString();
    QString getEditors();
    QString getRefID();
    QString getXAsString();
    QString getYAsString();
    QString getZAsString();
    bool isFaceingSprite();

private:
    // ID reference. This would be a unique identifier for the blip.
    // Presumably the same as Wave uses itself.
    QString ReferanceID;

    // Last editor(s)
    QString Editors;

    int PermissionFlags = 68356;  // default 664 octal = rw-rw-r--

    // Location
    double Xpos;   // left/right
    double Ypos;   // up/down
    double Zpos;   // front/back

    // Orientation
    // Names, ranges and directions are taken from aeronautics.
    // If no orientation is specified, it's assumed to be a facing sprite.

    // Roll: rotation around the front-to-back (z) axis. (Lean left or right.)
    // Range +/- 180 degrees, with + values moving the object's right side down.
    int Roll;

    // Pitch: rotation around the left-to-right (x) axis. (Tilt up or down.)
    // Range +/- 90 degrees, with + values moving the object's front up. (Looking up.)
    int Pitch;

    // Yaw: rotation around the vertical (y) axis. (Turn left or right.)
    // Range +/- 180 degrees, with + values moving the object's face to its right.
    int Yaw;

    // If no rotation is specified, this should default to true.
    // If set to true when a rotation is also set, the rotation is kept
    // relative to the viewer, not relative to the earth.
    bool FacingSprite;

    // Data format
    QString DataMIME;

    // The co-ordinate system used. This should be a string representing an
    // Open Geospatial Consortium standard. It could be earth-relative for GPS
    // co-ordinates, or in some cases relative to the viewer, for data to be
    // displayed in a HUD-like style.
    QString CordinateSystemUsed;

    // Data itself
    QString Data;

    QString DataUpdatedTimestamp;  // Time the Data was last changed.
    // Note: a separate timestamp should be used for updates that don't affect
    // the data itself (such as if a 3d object moves, but its mesh isn't changed).

    // Data metadata
    QMap&lt;QString, QString&gt; Metadata;
};</code></pre>
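<p>As a small companion sketch (not part of the proposed header), here is how a client might normalise incoming orientation values into the aeronautic ranges described in the comments above: roll and yaw wrapped into (-180, 180], and pitch clamped to [-90, 90].</p>

```cpp
#include <cassert>

// Wrap an angle in degrees into (-180, 180], as used for Roll and Yaw.
int normalize180(int degrees) {
    int d = ((degrees + 180) % 360 + 360) % 360 - 180; // maps into [-180, 180)
    return d == -180 ? 180 : d;                        // prefer +180 at the seam
}

// Clamp pitch into the stated [-90, 90] range.
int clampPitch(int degrees) {
    if (degrees > 90) return 90;
    if (degrees < -90) return -90;
    return degrees;
}
```

<p>Clients that receive out-of-range values from a buggy or foreign source can run them through these helpers before rendering, so every viewer interprets the same ARblip the same way.</p>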
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/12/04/ar-wave-project-an-introduction-and-faq-by-thomas-wrobel/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Everything Everywhere: Thomas Wrobel&#8217;s Proposal for an Open Augmented Reality Network</title>
		<link>http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/</link>
		<comments>http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/#comments</comments>
		<pubDate>Thu, 20 Aug 2009 03:58:57 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[alternate reality games]]></category>
		<category><![CDATA[alternate reality games and augmented reality]]></category>
		<category><![CDATA[AR Consortium]]></category>
		<category><![CDATA[AR Network]]></category>
		<category><![CDATA[ARG games]]></category>
		<category><![CDATA[ARN]]></category>
		<category><![CDATA[augmented reality and privacy]]></category>
		<category><![CDATA[augmented reality browser wars]]></category>
		<category><![CDATA[Augmented Reality Browsers]]></category>
		<category><![CDATA[augmented reality concepts]]></category>
		<category><![CDATA[augmented reality filters]]></category>
		<category><![CDATA[augmented reality games]]></category>
		<category><![CDATA[augmented reality permissions]]></category>
		<category><![CDATA[Bertine van Hovell]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Dark Flame]]></category>
		<category><![CDATA[Denno Coil]]></category>
		<category><![CDATA[distributed augmented reality]]></category>
		<category><![CDATA[Elan Lee]]></category>
		<category><![CDATA[Fourth Wall Studios]]></category>
		<category><![CDATA[future of augmented reality]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Google Wave Protocols]]></category>
		<category><![CDATA[Google Wave Web of Protocols]]></category>
		<category><![CDATA[Internet Relay Photoshop]]></category>
		<category><![CDATA[IRC paradigm]]></category>
		<category><![CDATA[IRC protocols and augmented reality]]></category>
		<category><![CDATA[J Aaron Farr]]></category>
		<category><![CDATA[Lost Again]]></category>
		<category><![CDATA[Mez Breeze]]></category>
		<category><![CDATA[Mitsuo Iso]]></category>
		<category><![CDATA[Open Augmented Reality Netwrok System]]></category>
		<category><![CDATA[open standards for augmented reality]]></category>
		<category><![CDATA[protocols for augmented reality]]></category>
		<category><![CDATA[real time communications protocols]]></category>
		<category><![CDATA[real time web]]></category>
		<category><![CDATA[res-nova]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[social tesseracting]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[XMPP]]></category>
		<category><![CDATA[XMPP and presence]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4228</guid>
		<description><![CDATA[Today, I was very excited when Thomas Wrobel sent me a draft of, &#8220;Everything Everywhere: A proposal for an Augmented Reality Network system based on existing protocols and infrastructure.&#8221; Thomas has kindly agreed to let me publish his draft, to open a discussion on this topic. The diagram opening this post (click image to enlarge) [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Image1.jpg"><img class="alignnone size-medium wp-image-4277" title="Image1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Image1-300x162.jpg" alt="Image1" width="300" height="162" /></a></p>
<p>Today, I was very excited when <a href="http://www.darkflame.co.uk/">Thomas Wrobel</a> sent me a draft of, <strong>&#8220;Everything Everywhere: A proposal for an Augmented Reality Network system based on existing protocols and infrastructure.&#8221;</strong></p>
<p>Thomas has kindly agreed to let me publish his draft, to open a discussion on this topic. The diagram opening this post (click image to enlarge) shows, <strong>&#8220;An example of how collaborative 3D-spaces could be shared over existing IRC networks.&#8221;</strong> It is from Thomas&#8217; proposal.<strong> </strong>The full text of his paper is included later in this post.</p>
<h3>&#8220;Can we try to avoid a browser war this time?&#8221;</h3>
<p>Thomas notes in the closing remark to his paper:</p>
<p><strong>&#8220;I am absolutely confident in my belief AR will become at least as important as the web has, and probably a lot more so. It will also face much the same hurdles and challenges getting established as that medium did. But, speaking as a web-developer, can we try to avoid a browser war this time?&#8221;</strong></p>
<p><a href="http://www.darkflame.co.uk/">Thomas Wrobel</a> has consistently posted insightful comments on how existing standards could be used for creating open augmented reality networks. But he expressed concern to me that his work and this paper not be overplayed:</p>
<p><strong>&#8220;I&#8217;m hardly a leader, I&#8217;m just an amateur with a load of ideas on AR-related topics, some which might be useful, others might become unworkable. I don&#8217;t want anyone to get the impression this is how I think it has to, or should be done.&#8221;</strong></p>
<p>I have been bringing up this topic of using existing standards and infrastructure, where possible, for open augmented reality networks in all my interviews with members of the <a href="http://www.arconsortium.org/" target="_blank">AR Consortium</a>.</p>
<p>And I am finding agreement on a point that <a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> makes, <strong>&#8220;there is no perfect, ultimate solution *now*, but we have to do *something* to work from and refine/evolve.&#8221;<br />
</strong></p>
<p>Thomas Wrobel makes what I consider some crucial opening suggestions. I take my hat off to him for thinking about this early, coming up with some clear, elegant, and practical ideas, and doing the work to articulate these ideas so others can participate in evolving them. Massive props for that, many times over.</p>
<p>Good ideas on standards at an early stage of a developing industry like augmented reality are like spring sunshine and April showers for new crops. No one knows what storms and pests the growing season will bring &#8211; but water and sunshine (open standards) are always a good start. And, personally, I can&#8217;t wait to see how this new industry unfolds (see Bruce Sterling&#8217;s awesome Layar Conference keynote: <a href="http://layar.com/video-bruce-sterlings-keynote-at-the-dawn-of-the-augmented-reality-industry/" target="_blank">&#8220;At the Dawn of the Augmented Reality Industry.&#8221;</a>)</p>
<p>Thomas Wrobel is:</p>
<p><strong> &#8220;a web developer working for a small, brand-new company called <a href="http://www.lostagain.nl/" target="_blank">Lost Again</a>, which mostly works on ARGs (That is, the alternate reality games, not the augmented reality games, although there&#8217;s probably going to be big overlap there in the future). We developed two educational ARG games for the Netherlands with <a href="http://www.res-nova.nl/">a company called res-nova</a>.&#8221;</strong></p>
<p>I have been following Alternate Reality Games through the amazing work of Elan Lee and <a href="http://www.fourthwallstudios.com/">Fourth Wall Studios</a>. Like Thomas, I think the intersection of ARGs and augmented realities is going to be very interesting. Thomas wanted me to point out that the website for his company with Bertine van Hövell, http://www.lostagain.nl/, is just a placeholder for now.<br />
<strong><br />
&#8220;Probably be up fully within a week or two.&#8221; And, &#8220;despite the logo, we aren&#8217;t an AR company [yet], or a travel firm. The logo&#8217;s supposed to represent being lost in our minds.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/logolostagainsmall.png"><img class="alignnone size-full wp-image-4250" title="logolostagainsmall" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/logolostagainsmall.png" alt="logolostagainsmall" width="162" height="56" /></a></p>
<p>Thomas has been thinking about the topic of an open augmented reality network for a while now. He is an artist also known as <a href="http://www.renderosity.com/mod/gallery/index.php?image_id=1221354&amp;member">DarkFlame</a> and his ARN network is included in this augmented reality concept for 2086 he did in 2006 (click on image below to enlarge).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-78.png"><img class="alignnone size-medium wp-image-4254" title="Picture 78" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Picture-78-300x218.png" alt="Picture 78" width="300" height="218" /></a></p>
<h3>Beyond IRC</h3>
<p>Both Thomas and <a href="http://arsvirtuafoundation.org/research/">Mez Breeze</a> made extensive and insightful comments on my last post, <a href="http://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/">&#8220;Augmented Reality &#8211; Bigger Than the Web: Second Interview with Robert Rice.&#8221; </a>And in particular they both picked up on something I am very interested in &#8211; the potential use of the Google Wave Web of protocols in creating open augmented reality networks.</p>
<p>Mez in her brilliant brainsplosion on social tesseracting takes on the very definition of information:</p>
<p><strong>&#8220;Tish, when you ask Robert &#8220;&#8230;what is your approach to delivering a massively shared real time [augmented reality] experience that is like Wave not confined to a walled garden?&#8221; that&#8217;s an extremely relevant question + one that needs to be addressed while considering the entirety of the Reality-Virtual Continuum. I&#8217;ve recently finished a series of articles addressing this: the framework I&#8217;ve developed is termed<a href="http://arsvirtuafoundation.org/research/2009/03/01/_social-tesseracting_-part-1/" target="_blank"> &#8220;Social Tesseracting.&#8221;</a></strong></p>
<p>I have recently begun exploring the Google Wave Web of Protocols, which are nicely outlined in <a href="http://cubiclemuses.com/cm/articles/2009/08/09/waves-web-of-protocols/">this post</a> by J Aaron Farr, which includes the very interesting diagram below (more on Google Wave in another post).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/wave_protocols.png"><img class="alignnone size-medium wp-image-4255" title="wave_protocols" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/wave_protocols-300x293.png" alt="wave_protocols" width="300" height="293" /></a></p>
<p>But, as Thomas notes, while he demonstrates his ideas using IRC (Internet Relay Chat) they reach<strong> Beyond IRC</strong>:</p>
<p><strong>&#8220;As mentioned before IRC has some drawbacks, which are due to its age or method of working. As such, future systems might yet prove better alternatives for an open AR network. One example of such a system is Google Wave. It shares many of the advantages of IRC (open, anyone can create a channel of data, different permission levels can be set and it&#8217;s free), while avoiding some critical restrictions. (The data can be persistent.) I believe some of the ideas I&#8217;ve mentioned, and possibly even the proposed protocol string, could be adapted for Google Wave or other future systems. I believe overall the principles are more important than any specific implementation to get to them.</strong>&#8221;</p>
<p>Also, Thomas pointed out that while he uses markers to illustrate some of his examples, they are just one method of tracking. What he is presenting is agnostic to the registration/tracking method used.</p>
<p><strong><strong>Tish Shute: You mostly use marker-based examples, but there is no reason why the principles you are suggesting will not be just as relevant as we move into using more sophisticated image recognition tools, is there?<br />
</strong><br />
Thomas Wrobel: No reason whatsoever. I mostly chose familiar markers as something that could be used now, with a lot of coding libraries already established for them. I think for most future AR use, markers will go completely&#8230;especially outside. Either things will be done purely by GPS or object recognition, or (in the case of advertising) markers will look like normal posters.</strong></p>
<p><strong>However, I do think traditional markers might &#8220;cling on&#8221; as being used for non-geographically-specific stuff at home. After all, if you need some reference points for moving meshes about in real time&#8230;(say, when playing a board game with a friend on the other side of the world)&#8230;.then there&#8217;s probably nothing that&#8217;s going to be more practical than some simple bits of paper or card.</strong></p>
<h3>Everything Everywhere</h3>
<h4>&#8211; A proposal for an Augmented Reality Network system based on existing protocols and infrastructure.</h4>
<h3>by Thomas Wrobel</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/darkflame2.jpg"><img class="alignnone size-medium wp-image-4260" title="darkflame" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/darkflame2-199x300.jpg" alt="darkflame" width="199" height="300" /></a></p>
<p>The following paper is my vision of an open AR Network and potential methods to implement it with existing technologies. Specifically, I&#8217;ll be focusing on the potential for a global outdoor AR network, although the ideas aren&#8217;t limited to that.</p>
<p>Of course I call it &#8220;my&#8221; vision, but I&#8217;m obviously not the first to have many of these ideas. I have been influenced and inspired by many things&#8230;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/AR_paper_img_0new1.jpg"><img class="alignnone size-medium wp-image-4232" title="AR_paper_img_0new" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/AR_paper_img_0new1-140x300.jpg" alt="AR_paper_img_0new" width="140" height="300" /></a></p>
<p><em>[Some of Thomas Wrobel&#8217;s influences &#8211; watched and played. Images from Mitsuo Iso&#8217;s<a href="http://en.wikipedia.org/wiki/Denn%C5%8D_Coil" target="_blank"> Denno Coil</a> (click to enlarge) top; below, from the game &#8220;Metroid Prime,&#8221; and Terminator, and the last from Buffy the Vampire Slayer!]</em></p>
<p><strong>The AR Network.</strong></p>
<p>When I speak of a future AR Network, I mean one as universal and as standard as the internet. One where people can connect from any number of devices, <em>and without additional downloads</em>, experience the majority of the content.<br />
Where people can just point their phone, webcam, or pair of AR glasses anywhere a virtual object should be, and they will see it. The user experience is seamless; AR comes to them without them needing to &#8220;prepare&#8221; their device for it.</p>
<p>From this point forward, I will refer to this future AR Network simply as the <strong>&#8220;Arn&#8221;.</strong></p>
<p>The Arn should be an inclusive, open platform to which any number of devices can connect, and on which anyone can make and host their own location-specific models or data.<br />
It should allow people to communicate both publicly and privately, and not have their vision constantly cluttered with things they don&#8217;t want to see.</p>
<p>There are two old, existing paradigms that I think can help reach this goal when they are combined.</p>
<p><strong>The Internet Relay Photoshop.</strong></p>
<p>IRC, or Internet Relay Chat, was a chat system designed by Jarkko Oikarinen in the late &#8217;80s.</p>
<p>It&#8217;s a system where people meet on &#8220;channels&#8221;; they can talk in groups, or privately. Channels can be read-only, or open for all to contribute to. There is no restriction on the number of people that can participate in a given discussion, or the number of channels that can be formed. All servers are interconnected and pass messages from user to user over the network.</p>
<p>To me, this relatively old internet technology is a great template, or even foundation, for how the Arn could operate. Rather than text being exchanged, it would be mesh data (or links to mesh data), but other than that, many of the same principles could apply.</p>
<p>People could join channels of information to view or contribute. Families could leave messages to each other scribbled in mid-air on private channels. Strangers could watch AR games being played between people in parks. People going into a restaurant could see the comments from recent guests hovering by the menu items.<br />
None of this would have to be called up specially; if they are on the right channel when it is broadcast, they will see it.</p>
<p>The IRC paradigm becomes particularly powerful when combined with another one common to many computer users: that of a &#8220;Layer&#8221; in an art program, such as Photoshop or Paint Shop Pro.<br />
As most of us know, layers allow us to separate out different components of a piece of art while editing, either to focus our attention on one piece, or to make future editing easier.</p>
<p>Now what if we simply have each &#8220;channel&#8221; of information represented as a layer?</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/AR_paper_img_1.jpg"><img class="alignnone size-medium wp-image-4265" title="AR_paper_img_1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/AR_paper_img_1-300x206.jpg" alt="AR_paper_img_1" width="300" height="206" /></a></p>
<p><em>Click to enlarge image above.</em></p>
<p>Having channels correspond to layers is an easy and intuitive way for the Arn to operate. The user can log in and contribute data to any channel, as on IRC, as well as adjust the desired opacity and visual range of each layer, as they would a layer in Photoshop.</p>
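To make the channels-as-layers idea concrete, here is a minimal sketch in Python (all class and field names are hypothetical, not part of the proposal) of a client-side registry where each subscribed channel is a layer with its own opacity and visual range, and the renderer composites only the visible ones:

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    channel: str                # IRC-style channel name, e.g. "#citymap"
    opacity: float = 1.0        # 0.0 (hidden) .. 1.0 (fully opaque)
    max_range_m: float = 500.0  # visual range: cull meshes beyond this
    visible: bool = True

@dataclass
class ArnView:
    layers: dict = field(default_factory=dict)

    def join(self, channel: str) -> Layer:
        # Joining a channel adds a corresponding layer to the view.
        layer = Layer(channel)
        self.layers[channel] = layer
        return layer

    def set_opacity(self, channel: str, opacity: float) -> None:
        self.layers[channel].opacity = max(0.0, min(1.0, opacity))

    def composite(self) -> list:
        # Layers the renderer should draw, like visible layers in Photoshop.
        return [l for l in self.layers.values() if l.visible and l.opacity > 0]

view = ArnView()
view.join("#citymap")
view.join("#familynotes")
view.set_opacity("#familynotes", 0.5)
print(len(view.composite()))  # 2: both layers drawn, one at half opacity
```

The point of the sketch is only that shared and personal channels end up in one list, so the user never has to switch between world views.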
<p>In this way they can get a custom view of the world, both with shared and personal AR elements visible at the same time.<br />
They would not have to switch between various overlays to their world view, as they could see many at the same time.</p>
<p><strong>Persistence of Data</strong></p>
<p>With an IRC or IRC-like system to communicate, the data sent is mostly temporary&#8230;broadcast on the fly from user to user and device to device. It is retained in the users&#8217; local logs, but not &#8220;hosted&#8221; anywhere.</p>
<p>I think for the majority of day-to-day purposes this is not so much a drawback as actually desirable for AR. Most casual communication doesn&#8217;t need to be recorded permanently in 3D space and, indeed, if it were, the cost of running such a service would increase exponentially with users and with time. Not to mention, our visual view of the world would get very cluttered very quickly. Imagine what your monitor would be like if it kept a history of every window you have ever opened and their positions!</p>
<p>So for most cases AR space should be treated like a 3D monitor: letting us display many pieces of data from remote and local sources, and even share them with others, but not being, by default, a permanent record of it all.</p>
<p>Most data will be analogous to pixels on a display, and if kept in records it&#8217;s only on the clients&#8217; devices, not on the network itself.</p>
<p>However, occasionally we do want 3D data analogous to a web page, such as (in the example above) the map layer. Data here should be persistent and visible to all who have that layer turned on. I see no reason why hosting this data needs to use anything but standard web hosting, with the (read-only) #channel on the Arn merely providing a route to the data.</p>
<p>As the user logs onto the channel, the server, using a chat-bot, can send them a list of meshes with location data attached, and the Arn browser can simply pick the data to display that&#8217;s local to them. (Note 1: By doing it this way around, it allows some degree of anonymity, rather than the server knowing exactly where you are and feeding the specific correct string to you.)</p>
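As a sketch of that client-side filtering (function and field names are my own, not part of the proposal): the channel bot sends every mesh with its co-ordinates, and the browser keeps only those within some radius of the user, so the user&#8217;s position never leaves the device:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two (lat, lon) points.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def local_meshes(mesh_list, here, radius_m=500.0):
    # Keep only the meshes near the user; the filtering happens on the client,
    # so the server never learns where the user actually is.
    lat, lon = here
    return [m for m in mesh_list
            if haversine_m(lat, lon, m["lat"], m["lon"]) <= radius_m]

meshes = [
    {"id": "DARKFLAME:1", "lat": 49.5000123, "lon": -123.5000123},
    {"id": "DARKFLAME:2", "lat": 49.6, "lon": -123.5},  # roughly 11 km away
]
nearby = local_meshes(meshes, here=(49.5, -123.5))
print([m["id"] for m in nearby])  # only the nearby mesh survives the filter
```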
<p>We simply need to establish standards so this data can be pulled up and interpreted.</p>
<p>For instance, this standard could be as simple as an XML string pointing to a KML file on a server. This could then be displayed in the user&#8217;s field of view at the co-ordinates specified.</p>
<p>In this way permanent data tied to locations, such as historical overlays or maps, could co-exist on the same protocol as temporary data such as mid-air chats or gaming-related meshes.</p>
<p>There is also no reason why this shared-space/personal-space system based on channels of data has to be restricted to things given absolute co-ordinates.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/AR_paper_img_2.jpg"><img class="alignnone size-medium wp-image-4266" title="AR_paper_img_2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/AR_paper_img_2-226x300.jpg" alt="AR_paper_img_2" width="226" height="300" /></a></p>
<p>(Different ways to access the same mesh)</p>
<p>It could work just as well with Markers and thus relative co-ordinates.</p>
<p>This would be mostly useful for indoor use, letting people logged onto a channel see the same meshes as everyone else on the markers, thus allowing multi-player AR games, or AR games with observers, very easily.</p>
<p>For example, games like chess could be played between people with no additional code needed; you simply have a set of markers for only your own pieces, and as you move them the channel updates with the new positions, which are displayed in place in your opponent&#8217;s field of view.</p>
<p>This sort of game comes &#8220;free&#8221; with just having a generic system of shared space supporting markers.</p>
<p>It would also allow AR adverts down the street or in magazines to be viewed by simply logging onto the right AR channel.</p>
<p>If markers are designed with URL data in them, this could even be a prompted or automatic process.<br />
&#8220;There is visual data in this area on the following channel: #ABCD. Would you like to view this channel?&#8221;</p>
<p><strong>Pros and Cons of using IRC or IRC-like systems</strong></p>
<p><strong>Pros:</strong></p>
<p><strong>&#8226; Anyone can write IRC interface software.<br />
&#8226; Anyone can create new IRC channels without cost.<br />
&#8226; Channels can have read and write permissions set.<br />
&#8226; Users can easily have multiple channels open at once.<br />
&#8226; Already established, with thousands of servers worldwide.</strong></p>
<p><strong>Cons:</strong></p>
<p><strong>&#8226; 500-or-so character limit. 3D data must be linked to, not sent.<br />
&#8226; Slow update rate. Lines of data can take a whole second or more to send.<br />
&#8226; Non-persistent. Good for a 3D view, not good for storage.</strong></p>
<p><strong>An example of how collaborative 3D spaces could be shared over existing IRC networks:</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Image1.jpg"><img class="alignnone size-medium wp-image-4277" title="Image1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/Image1-300x162.jpg" alt="Image1" width="300" height="162" /></a></p>
<p><strong><em>Click on the image to enlarge.</em></strong><br />
While in the long run I would hope for a dedicated AR network to be developed, with greater flexibility in the persistence of data, there is a lot that can be done with the existing IRC system to implement the ideas mentioned above.</p>
<p>Below I will show an example of a simple, crude pseudo-protocol that could be fairly easily implemented to create shared AR spaces broadcast across IRC channels.</p>
<p>It&#8217;s important to note that the goal here isn&#8217;t to exchange the mesh data itself on IRC; it&#8217;s to exchange links to the data.</p>
<p>Exchanging the mesh data directly within the 500 character IRC limit would be very hard, and liable to errors.</p>
<p>It&#8217;s also a waste of network bandwidth, as many people logged onto the channel might not have that object in their field of view, so their clients should not bother downloading it. (It should be up to the client browsers to decide when to anticipate and cache mesh data.)</p>
<p><strong>Proposed basic XML link exchange for AR:</strong></p>
<p>Principle:<br />
As a user creates or changes an object, the client&#8217;s software posts a simple XML-formatted string to<br />
the IRC channel.<br />
Anyone logged into that channel then sees that mesh displayed in the specified location.</p>
<p>This string could be formatted as follows:</p>
<p>&lt;Mesh<br />
ID="DARKFLAME:1"<br />
Obj="http://www.darkflame.co.uk/mesh/church/chuch.kml"<br />
Loc="(49.5000123,-123.5000123)"<br />
Permissions="None"<br />
LastUpdate="12/12/0000,2012:12"<br />
/&gt;</p>
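Since the proposed string is itself a well-formed XML element, a client could read it with a stock XML parser. A minimal sketch in Python (straight quotes substituted for the typographic ones, attribute names exactly as in the example above):

```python
import xml.etree.ElementTree as ET

# The protocol string as it would arrive on the channel.
line = ('<Mesh ID="DARKFLAME:1" '
        'Obj="http://www.darkflame.co.uk/mesh/church/chuch.kml" '
        'Loc="(49.5000123,-123.5000123)" '
        'Permissions="None" '
        'LastUpdate="12/12/0000,2012:12" />')

mesh = ET.fromstring(line)
print(mesh.get("ID"))   # DARKFLAME:1
print(mesh.get("Obj"))  # the URL; fetched only if the mesh is in view
```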
<p>This string allows other users&#8217; clients logged into the channel to automatically load the object from the URL and display it at the correct position in their field of view.<br />
If the permissions are set to allow it, they could then move the object themselves, with the update being fed back seamlessly to other users on the channel.</p>
<p>The objects posted are given an ID, which can be just the poster&#8217;s name, followed by a unique object number for that name. These unique IDs would allow clients to track different instances of the same mesh, as well as making it easy to implement permissions. (If only the poster should be allowed to move this object, then the clients simply check whether the ID matches the user name posting the update. If it doesn&#8217;t, they can ignore it.)</p>
<p>Next the objects need to be linked to a mesh.</p>
<p>The location of the object&#8217;s mesh doesn&#8217;t have to be a fixed remotely-hosted URL; it could be an IP address and port number of the user posting the mesh, hosted by the application posting the link to the channel.</p>
<p>Obj=â€www.darkflame.co.uk/mesh/church/chuch.kmlâ€<br />
Obj=â€123,223,14,23::3030â€</p>
<p>The object&#8217;s co-ordinates, likewise, need not be specified as absolute GPS co-ordinates, but instead could refer to a generic marker.</p>
<p>Loc=â€(49.5000123,-123.5000123)â€<br />
Loc=â€Marker1â€<br />
Or relative to a marker;<br />
Loc=â€Marker4 (+0.0023,-0.0023)â€<br />
Or relative to a default plane;<br />
Loc=â€Default(+0.213,-0.123)â€</p>
<p>The AR browsers could then handle the association between a marker&#8217;s pattern and its name.<br />
This way the markers are reusable; users do not need unique markers to be printed for every new bit of AR they want to look at.<br />
Users could just keep a set of generic markers handy, which they could simply assign to be Marker1, Marker2, etc. for any AR use. (Note 2: As mentioned above, specific markers could also contain a default ID name and channel built into their data, letting the Arn browser simply prompt the user if they want to see the model even if they aren&#8217;t in the right channel. This set-up would be most useful for paper and even billboard advertising.)</p>
<p>The Default location could be a settable region, or marker, on the client&#8217;s browser that defines a playable/usable area in the field of view. Mostly useful for home use, this could typically be a square region on a user&#8217;s desk.</p>
<p>So, in the chess-game example, the client of the person making the moves simply updates the position relative to the Default every time they move their marker (which is tied to a chess-piece mesh).<br />
Then the (non-owner&#8217;s) client software could automatically display it relative to their Default plane. This would make games like Chess, Checkers, Go, or any other game involving merely moving objects about very intuitive and easy to set up.</p>
<p>So having meshes settable to absolute GPS, marker-relative, or default-relative locations reduces the bother necessary to experience AR content quite considerably, and makes &#8220;non-geo-specific&#8221; AR applications and games trivial to implement.</p>
<p>Next is permissions.</p>
<p>Mesh-permissions would be a simple string saying who else can update the data, if anyone.</p>
<p>e.g.<br />
Permissions="None"<br />
Permissions="RandomPerson1, RandomPerson2"<br />
Permissions="All"</p>
<p>By default you could only update or move your own meshes (identified by the ID of the first posting). If you attempt to update anyone else&#8217;s, their clients would just ignore it.</p>
<p>Thus in a game of chess, you can only move your own pieces. If you attempted to move your opponent&#8217;s (by reassigning your own marker to their pieces&#8217; IDs), the clients would just ignore that assignment. You&#8217;d only be fooling your own system.<br />
Likewise, when pinning a message in mid-air for your friends to read, no one else can change that message without your permission, although copying it would be easy. (Note 3: It&#8217;s important to note this sort of object-specific permission system is in addition to the global permissions, or &#8220;user-modes&#8221;, it&#8217;s possible to set for the IRC channels and users as a whole.)</p>
<p>Finally, as object data could change on all sorts of time-scales, the easiest way to keep everyone logged in up to date is to just have a time-stamp of when each model was last updated.</p>
<p>LastUpdate=â€12/12/0000,2012:12â€</p>
<p>This would not necessarily be the same as the XML string&#8217;s post date, because the model&#8217;s mesh might not have been updated, but merely moved, in which case the Arn browser shouldn&#8217;t re-download the mesh.</p>
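A sketch of that caching rule (comparing the raw LastUpdate strings for equality; a real client would parse and compare actual timestamps, and the function name here is my own):

```python
cache = {}  # object ID -> last-seen LastUpdate string

def needs_redownload(object_id: str, last_update: str) -> bool:
    # Re-fetch the mesh only when LastUpdate differs from what we hold;
    # a move-only update re-posts the same LastUpdate, so the cached mesh is reused.
    previous = cache.get(object_id)
    if previous == last_update:
        return False
    cache[object_id] = last_update
    return True

print(needs_redownload("DARKFLAME:1", "12/12/0000,2012:12"))  # True: first sighting
print(needs_redownload("DARKFLAME:1", "12/12/0000,2012:12"))  # False: merely moved
```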
<p>This sort of arrangement could be used as a standard today, and users wouldn&#8217;t have to constantly download special AR programs to view a single AR mesh.</p>
<p>In the long term I would hope for more advanced methods to manipulate Arn content online, analogous to DOM manipulation in web pages. But for now, we should at least establish standard methods for devices to pull up meshes and overlay them in the correct position.</p>
<p>So, having a layered system could give the user a seamless blend of dynamic and static data with which to paint their world.<br />
I believe this is all relatively easy to achieve using modifications of existing web technology, combined with some basic graphics systems.<br />
<strong><br />
Local Data:</strong></p>
<p>However, so far I have only talked about remote data.<br />
What of programs originating on the device itself? This is, after all, how most AR software we have at the moment works.</p>
<p>I think that, just like the remote channels, local software should also be blended into the same list of layers. People shouldn&#8217;t have to &#8220;Alt+Tab&#8221; out of one view of the world to see another.<br />
They should be able to see both at once, if they wish.</p>
<p>For instance, if you&#8217;re playing an AR game, why shouldn&#8217;t your chat window be viewable at the same time?</p>
<p>If you have skinned your environment with a custom view of the world, why shouldn&#8217;t you also see mapping or restaurant recommendations?</p>
<p>So local data and remote data should be blended in the same view.<br />
How can AR software &#8211; of which, I hope, there will be thousands &#8211; seamlessly be expected to layer its graphics, not only with the real world, but with each other, and with online data too? Will games and software makers need to co-operate to allow their graphics to be integrated together with correct occlusion taken into account? A tall order, no?</p>
<p>I must confess though, my technology knowledge fails me here.</p>
<p>I can only guess special graphics drivers, or 3D APIs, will have to be developed to let programs share their 3D world with that of an Arn browser.<br />
Maybe programs should simply treat themselves as a local server to which the browser can connect, and let the Arn handle all the rendering itself (although I imagine many games designers would find this quite limiting).<br />
So I leave it as an exercise to the readers to discuss and propose the best methods by which this vision of a layered world could be realised.</p>
<p><strong>Beyond IRC:</strong></p>
<p>As mentioned before, IRC has some drawbacks, which are due to its age or method of working.<br />
As such, future systems might yet prove better alternatives for an open AR network.<br />
One example of such a system is Google Wave.<br />
It shares many of the advantages of IRC (open, anyone can create a channel of data, different permission levels can be set and it&#8217;s free), while avoiding some critical restrictions. (The data can be persistent.)<br />
I believe some of the ideas I&#8217;ve mentioned, and possibly even the proposed protocol string, could be adapted for Google Wave or other future systems.<br />
I believe overall the principles are more important than any specific implementation to get to them.<br />
<strong><br />
Summary:</strong></p>
<p>âƒÂ Â  Â In order for AR to flourish the user shouldn&#8217;t need to download a separate application for each mesh they want to see.<br />
âƒÂ Â  Â  Having url&#8217;s embedded into QRCoded markers which point to standard mesh files like dxf or kml would be a way to do this right now.Â  The QR code would only have to be seen preciselyÂ  in shot once, then its borders could be used like a standard marker.</p>
<p>âƒÂ Â  Â An augmented view of the world needs to support visual multitasking, and havingÂ  layers of information is the best way to do that.<br />
âƒ<br />
âƒÂ Â  Â Methods need to be devised to allow drastically different software to contribute to these layers, without restricting either the software&#8217;s rendering ability&#8217;s, or the users ability to pick and choose what layers of information he wants to see.<br />
<strong><br />
Last point:</strong></p>
<p>I am absolutely confident in my belief that AR will become at least as important as the web has, and probably a lot more so. It will also face much the same hurdles and challenges getting established as that medium did.<br />
But, speaking as a web developer, can we try to avoid a browser war this time?</p>
<p>Everything Everywhere, draft.<br />
by Thomas Wrobel<br />
Darkflame a t gmail</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/feed/</wfw:commentRss>
		<slash:comments>34</slash:comments>
		</item>
	</channel>
</rss>
