<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; locative media</title>
	<atom:link href="http://www.ugotrade.com/tag/locative-media/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>The Next Wave of AR: Exploring Social Augmented Experiences at Where 2.0</title>
		<link>http://www.ugotrade.com/2010/03/29/the-next-wave-of-ar-exploring-social-augmented-experiences-at-where-2-0/</link>
		<comments>http://www.ugotrade.com/2010/03/29/the-next-wave-of-ar-exploring-social-augmented-experiences-at-where-2-0/#comments</comments>
		<pubDate>Mon, 29 Mar 2010 05:25:03 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR Blip]]></category>
		<category><![CDATA[AR browsers]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[ARWave demo]]></category>
		<category><![CDATA[atemporality]]></category>
		<category><![CDATA[atemporal network culture]]></category>
		<category><![CDATA[augmented reality and federation]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[augmenting the map as interface]]></category>
		<category><![CDATA[Brady Forrest]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[collaborative augmented reality]]></category>
		<category><![CDATA[Davide Carnovale]]></category>
		<category><![CDATA[Dennou Coil]]></category>
		<category><![CDATA[design principles for social augmented experiences]]></category>
		<category><![CDATA[FourSquare]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[gowalla]]></category>
		<category><![CDATA[Jeremy Hight]]></category>
		<category><![CDATA[Jesse Schell]]></category>
		<category><![CDATA[Joe Lamantia]]></category>
		<category><![CDATA[layers and channels of augmentation]]></category>
		<category><![CDATA[location technologies]]></category>
		<category><![CDATA[locative media]]></category>
		<category><![CDATA[locative narratives]]></category>
		<category><![CDATA[Markus Strickler]]></category>
		<category><![CDATA[narrative archaeology]]></category>
		<category><![CDATA[open augmented reality]]></category>
		<category><![CDATA[open distributed augmented reality]]></category>
		<category><![CDATA[pygowave]]></category>
		<category><![CDATA[real time social augmented experiences]]></category>
		<category><![CDATA[Ruby On Sails]]></category>
		<category><![CDATA[social AR]]></category>
		<category><![CDATA[social AR and crisis response]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[Sophia Parafina]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Wave]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[Where2.0]]></category>
		<category><![CDATA[WhereCamp]]></category>
		<category><![CDATA[Will Wright]]></category>
		<category><![CDATA[writing within the map]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5332</guid>
		<description><![CDATA[Where 2.0 is going to be epic this year (see my interview with Brady Forrest here), and it is so exciting to be part of it. Location technologies and augmented reality are anointed rulers now. Time Magazine recognized augmented reality as one of its 10 Tech Trends for 2010 (for more see ReadWriteWeb). The photo [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/jeremyandlisahight.jpg"><img class="alignnone size-medium wp-image-5336" title="jeremyandlisahight" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/jeremyandlisahight-300x160.jpg" alt="jeremyandlisahight" width="300" height="160" /></a></p>
<p><a id="jqit" title="Where 2.0" href="http://en.oreilly.com/where2010">Where 2.0</a> is going to be epic this year (see <a id="ysmn" title="my interview with Brady Forrest here" href="../../2010/02/10/the-physical-world-becomes-a-software-construct-talking-with-brady-forrest-about-where-2-0-2010/">my interview with Brady Forrest here</a>), and it is so exciting to be part of it. Location technologies and augmented reality are anointed rulers now. <a href="http://www.time.com/time/specials/packages/article/0,28804,1973759_1973760_1973797,00.html">Time Magazine recognized</a> augmented reality as one of its 10 Tech Trends for 2010 (for more <a href="http://www.readwriteweb.com/archives/augmented_reality_among_times_10_tech_trends_2010.php" target="_blank">see ReadWriteWeb</a>).</p>
<p>The photo above is by Jeremy and Lisa Hight. <a id="ohzg" title="Jeremy Hight" href="http://34n118w.net/">Jeremy Hight</a> is an information designer, theorist and artist working in Augmented Reality and Locative Media. His essay "Narrative Archaeology" was named one of the four primary texts in Locative Media.</p>
<p><a id="xel:" title="Jeremy Hight" href="http://en.oreilly.com/where2010/public/schedule/speaker/69399">Jeremy Hight</a> will be part of our panel: <a title="The Next Wave of AR: Exploring Social Augmented Experiences" href="http://en.oreilly.com/where2010/public/schedule/detail/11046">The Next Wave of AR: Exploring Social Augmented Experiences</a>, with <a id="b49q" title="Anselm Hook" href="http://en.oreilly.com/where2010/public/schedule/speaker/6545">Anselm Hook</a>, <a id="h3j-" title="Joe Lamantia" href="http://en.oreilly.com/where2010/public/schedule/speaker/26367">Joe Lamantia</a>, <a id="xtfk" title="Sophia Parafina" href="http://en.oreilly.com/where2010/public/schedule/speaker/59688">Sophia Parafina</a> and <a id="uw9f" title="myself." href="http://en.oreilly.com/where2010/public/schedule/speaker/38011">myself.</a> We will <a href="http://www.youtube.com/watch?v=ZjXCTCSKtRQ" target="_blank">debut the video of the ARWave project demo </a>that brings together augmented reality, geolocation, and wave federation (more details later in this post). And Jeremy will bring to our presentation some augmentations on his recent brilliant work and paper, <a href="http://www.neme.org/main/1111/writing-within-the-map" target="_blank">"Writing Within the Map."</a></p>
<p>Greg J. Smith points out in <a href="http://serialconsign.com/2010/03/thoughts-writing-within-map#comments" target="_blank">his in-depth look at Jeremy's work</a> that it <strong>"dovetails with some of the main points in Bruce Sterling's recent <a href="http://www.wired.com/beyond_the_beyond/2010/02/atemporality-for-the-creative-artist/">atemporality keynote</a> at Transmediale" &#8211; </strong>fortunately there is a <a href="http://www.wired.com/beyond_the_beyond/2010/02/atemporality-for-the-creative-artist/" target="_blank">transcription of Bruce's keynote here</a>. What is so awesome about this dovetailing is that you can get a feel for the fun part of living in an "atemporal network culture." And, if you want to really understand just how much locative media and augmented reality have changed us, you might want to dig into these texts.</p>
<p>Bruce  Sterling and Jeremy Hight, and members of the ARWave team, and a  superb cast of augmented reality movers and shakers &#8211; including Will  Wright and Jesse Schell, will be <a id="ncnl" title="speaking at Augmented Reality Event in Santa Clara, June 2nd and  3rd." href="http://augmentedrealityevent.com/speakers/">speaking at Augmented Reality Event in Santa Clara, June 2nd and  3rd.</a></p>
<p>But, this week, the AR community&#8217;s attention will be on the events at Where 2.0. The keynote speakers will be streamed live, so if you are not fortunate enough to be there, tune in!</p>
<h3>The Next Wave of AR: Exploring Social Augmented Experiences</h3>
<p>On our panel, Jeremy Hight, Anselm Hook, Sophia Parafina, Joe Lamantia and I will cover some of the key social, cultural, technical and interactional questions for exploring social augmented experiences. There will be five lightning presentations, an opportunity for questions from the audience, and a world premiere of the ARWave demo!</p>
<p><strong>1) "Augmenting the map as interface: AR and Locative Narratives" -</strong> Jeremy Hight<strong><br />
</strong></p>
<p><strong>*Map augmentation of the historic Route 66 can house an essay contest and publication globally, but as embedded within that map augmentation instead of books or even web sites.</strong></p>
<p><strong>*  A place on a map can be a graphic index and database to save and  collect<br />
the writing of that place with a graphic or textual search  index.</strong></p>
<p><strong>*One can pop immersive visualizations of abandoned or lost buildings from map location in shared software and collectively augment (imagine channels within the lost core of Detroit where one is memories and accounts tagged within parts in the immersive visualization while another is of poems and stories written by people moved by the place and its semiotics and story).</strong></p>
<p><strong>*The news stand is to be the map.</strong></p>
<p><strong>*New forms of literature will be born of mapping, spaces, augmentation and<br />
new tools</strong></p>
<p>The concept drawings below (click to enlarge) are a collaboration between Jeremy Hight and Paul Wehby, Senior Designer at <a href="http://www.lacma.org/" target="_blank">LA County Museum of Art.</a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby1post.jpg"><img class="alignnone size-thumbnail wp-image-5342" title="wehby1post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby1post-150x150.jpg" alt="wehby1post" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby2post.jpg"><img class="alignnone size-thumbnail wp-image-5343" title="wehby2post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby2post-150x150.jpg" alt="wehby2post" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby3post.jpg"><img class="alignnone size-thumbnail wp-image-5352" title="wehby3post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby3post-150x150.jpg" alt="wehby3post" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby4post.jpg"><img class="alignnone size-thumbnail  wp-image-5353" title="wehby4post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/wehby4post-150x150.jpg" alt="wehby4post" width="150" height="150" /></a></p>
<p><strong>2) </strong>Anselm Hook will look at, <strong>&#8220;10 reasons why AR isn&#8217;t a flash in the pan,&#8221; </strong>and how,<strong> "AR can help us see the world we would like to have exist."</strong></p>
<p>Anselm notes, <strong>"So much of what we do is so fickle and I'm looking for ways to connect digital media work to deep values."</strong></p>
<p><strong>3)</strong> Sophia Parafina will present on, <strong>"Social AR and Crisis Response"</strong></p>
<p><strong>"Augmented reality as a multi-party conversation. Rather than being passive viewers of AR with a limited ability to check in to places and make annotations, current devices can broadcast sensor information that can be fused into an interactive stream. AR users can send and receive information, location, and sensor data from their mobile device. The streams can be federated into a unique AR view composed by the user.</strong></p>
<p><strong>Entertainment and gaming are obvious applications, but it can also be applied to crisis situations such as the search and rescue operations in Haiti. Efforts such as Mission 4636, the SMS translation service, could benefit from AR views. The collaboration among the Mission 4636 volunteers was the key element in their success for providing location and rapid translation to responders on the ground.</strong></p>
<p><strong>With an AR view, responders can send back their sensor information from their mobiles to provide contextual information to remote volunteers. This extends the conversation between remote volunteers and on-the-ground responders and fosters collaboration, which was a key element in the success of Mission 4636."</strong></p>
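Sophia's picture of federated sensor streams can be sketched in a few lines. The sketch below is my own illustration, not ARWave code; field names such as <code>ts</code>, <code>src</code>, and <code>note</code> are invented for the example. It merges time-ordered feeds from on-the-ground responders and remote volunteers into the single chronological stream a client might render:

```python
import heapq

def fuse_streams(*streams):
    """Merge time-ordered sensor streams into one chronological feed.

    Each stream is an iterable of dicts carrying a 'ts' timestamp;
    heapq.merge keeps the result ordered without loading everything
    into memory, which matters for live feeds.
    """
    return list(heapq.merge(*streams, key=lambda r: r["ts"]))

responders = [
    {"ts": 1, "src": "responder-7", "lat": 18.54, "lon": -72.34, "note": "street blocked"},
    {"ts": 4, "src": "responder-7", "lat": 18.55, "lon": -72.33, "note": "clinic reached"},
]
volunteers = [
    {"ts": 2, "src": "sms-4636", "lat": 18.53, "lon": -72.33, "note": "family trapped"},
    {"ts": 3, "src": "sms-4636", "lat": 18.54, "lon": -72.35, "note": "water needed"},
]

# The "unique AR view composed by the user" is then just this fused feed.
view = fuse_streams(responders, volunteers)
```

Because the merge is lazy and order-preserving, new streams (more responders, more SMS channels) can be federated in without restructuring anything.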
<p><strong>4)</strong> Joe Lamantia, an experience design and strategy consultant helping to define the interaction framework and scenarios behind ARWave, will discuss, <strong>"Design Principles For Social Augmented Experiences:"</strong></p>
<p><strong>"With the exotic mixed realities envisioned by futurists and science fiction writers seemingly around the corner, it is time to move beyond questions of technical feasibility to consider the value and impact of turning reality inside out for everyday social settings and experiences. Thanks to the inherently social nature of augmented reality, we can be sure the value and impact of many augmented experiences depends in large part on how effectively they integrate with the social dimensions of real-world settings, in real time.&#8221;</strong></p>
<p>Joe will share, <strong>&#8220;eight guiding  principles for designing experiences that engage naturally with the  social dimension, and increase the value of augmented experiences.&#8221; </strong></p>
<p><strong>5) <a id="y08e" title="AR Wave" href="http://groups.google.com/group/arwave">&#8220;ARWave</a> &#8211; A demo and state of play,&#8221; </strong>from Tish Shute</p>
<p>I will have the awesome privilege, on our Where 2.0 panel, of showcasing <a id="y08e" title="AR Wave" href="http://groups.google.com/group/arwave">ARWave</a>. We will premiere the ARWave demo, which shows how ARWave has accomplished the basics of geolocating data on Wave Federation Protocol (and real time collaboration on this geolocated data). <span id="ejpu" dir="ltr">If you&#8217;re interested in the ARWave project, join the <a id="n4k6" title="Mailing  list" href="http://groups.google.com/group/arwave">Mailing list</a>, the FAQ is <a id="medt" title="here" href="http://lostagain.nl/websiteIndex/projects/Arn/information.html">here</a>, and have a peek at the current state of development at <a id="ius-" title="Google Code" href="http://code.google.com/p/arwave/">Google Code</a>, and the <a id="dj:p" title="specification for an AR Blip" href="http://arwave.wiki.zoho.com/ARBlip-Specification.html">specification for an AR Blip</a>. We also have Waves for the project hosted on Google Wave. You can join the general discussion <a id="xiwt" title="here" href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252BJAcNzz16A">here</a>, and the technical side <a id="s393" title="here" href="https://wave.google.com/wave/#restored:wave:googlewave.com%21w%252Bhvk2Fj3wB">here</a>.</span></p>
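For a feel of what a geolocated blip might carry, here is a purely illustrative sketch; the real field layout lives in the AR Blip specification linked above, and every name below (<code>waveId</code>, <code>location</code>, <code>content</code>) is my own assumption, not the spec's:

```python
import json

def make_ar_blip(wave_id, lat, lon, alt, content):
    """Build a hypothetical geolocated blip payload.

    Illustrative only: see the linked AR Blip specification for the
    real field layout. The point is simply that a blip pairs wave
    content with a position an AR client can anchor in space.
    """
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        raise ValueError("coordinates out of range")
    return {
        "waveId": wave_id,                              # wave the blip belongs to
        "location": {"lat": lat, "lon": lon, "alt": alt},
        "content": content,                             # what the AR client renders
    }

blip = make_ar_blip("example.com!w+demo", 37.33, -121.89, 0.0,
                    "Hello from Where 2.0")
payload = json.dumps(blip)  # what might travel between federated servers
```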
<p>The picture below is a  screen shot from the demo video produced by core AR Wave developer and  concept designer, Thomas Wrobel.</p>
<p>Click on the image to enlarge, and note: <strong>"The pink thing is from Dennou Coil. It's an anti-virus program (that literally chases down bugs and glitches and removes them)."</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-27-at-6.58.55-PM.png"><img class="alignnone size-medium wp-image-5344" title="Screen shot 2010-03-27 at 6.58.55 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-27-at-6.58.55-PM-281x300.png" alt="Screen shot 2010-03-27 at 6.58.55 PM" width="281" height="300" /></a></p>
<h3>ARWave</h3>
<p>In ARWave, stories or art are tied to place. And as Jeremy Hight  writes:</p>
<p><strong>"The possibility exists to take a part of an area and overlay a dystopia, a utopia, multiples of each of these, or even recreations of previous incarnations in the past. Writing and publication thus cannot only be of place, and form(s), but of selected augmentations of icons, streets, buildings and related texts on top of the map. These spaces can be built in real time and can be turned on and off as channels of augmentation that over time illustrate many faces of place in its present, past, possible futures, etc., with texts within these alternate spaces as commentary, as fused aesthetic analysis, or simply creative writing relevant to these charged and hybrid spaces."</strong></p>
<p>As Thomas notes, Jeremy Hight's <strong>"idea of channels ties into the concept of waves = a layer, and people can have many layers on at once."</strong></p>
<p>This is different from the <a href="http://layar.com/" target="_blank">Layar</a> concept of a layer, or rather "layar."</p>
<p><strong>&#8220;We  are not talking about layers in the classical map layer way of  thinking, where you have a layer of all restaurants or a layer of all  mountain peaks, etc.,&#8221; </strong>notes ARWave developer Markus Strickler.</p>
<p>Currently all geolocation apps like Layar have to use their own servers, so users have to use different clients with different logins to see data from different sources. But because ARWave uses federation, we don&#8217;t depend on centralized infrastructure where the client of one company can only connect to the server of that company. This opens up many exciting new possibilities for how people can decide to view and publish geolocated data.</p>
<p>With AR Wave, via one login, people can access the whole distributed network of servers (see diagrams below), and any content will be accessible to them. ARWave will make it easy for individuals, not just developers, to layer their environment – allowing the creation of augmented reality content to be as simple as contributing to a Wave.</p>
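One detail that makes this work: wave identifiers are domain-qualified (roughly <code>domain!local-id</code>), so a client can tell which server is authoritative for any wave it encounters. Below is a simplified sketch of that routing step; the helper is illustrative, not part of the Wave Federation Protocol code:

```python
def home_server(wave_id):
    """Extract the hosting domain from a domain-qualified wave id.

    In the federation model, the part before '!' names the server
    that owns the wave; any federated server can route to it, so a
    single account suffices to read waves hosted anywhere.
    """
    domain, sep, local = wave_id.partition("!")
    if not sep or not domain or not local:
        raise ValueError(f"not a domain-qualified wave id: {wave_id!r}")
    return domain

# One login, many hosts: each wave routes to its own home server.
waves = ["acme.example!w+layer1", "museum.example!w+tour", "acme.example!w+layer2"]
hosts = {w: home_server(w) for w in waves}
```

This is why ARWave needs no central registry of layers: the id itself says where the data lives.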
<p><strong>"ARWave will enable individuals to publish easily to everyone… or just a few people,"</strong> Thomas notes:</p>
<p><strong>"To 'publish' is also self-publication and distribution in communities or like-minded groups without the hard read of publication or rejection." = publishing on a Wave. No one approves it, anyone can publish to communities, or their friends and family. Or even just personally publishing it for their own reference."</strong></p>
<p>But ARWave does not compete with  existing AR Browsers.Â Â  On the contrary, AR browsers like Layar,  Wikitude and others, could implement ARWave and use it to enhance their  applications.</p>
<p><strong>"<a href="http://layar.com/" target="_blank">Layar</a></strong><strong> has a killer browser already, ARWave would add social features. They can keep their "walled garden" of data and still join the federation of open data too <img src="../wp-includes/images/smilies/icon_smile.gif" alt=":)" /> " (Thomas Wrobel)</strong></p>
<p>Yup, that is the cool part of federation – you can have your cake and eat it too!</p>
<p>Sophia Parafina and I will be organizing a discussion session on ARWave and Federation at <a href="http://upcoming.yahoo.com/event/4909659/CA/Mountain-View/WhereCamp-SF/Google-Maxwell-Tech-Talk/CA/Mountain-View/WhereCamp-SF-2010/Google-Maxwell-Tech-Talk/" target="_blank">WhereCamp</a>, right after Where 2.0, April 3rd and 4th, and <a href="http://twitter.com/dlpeters" target="_blank">Dan Peterson</a>, who is leading the federation effort for Google Wave, will join us.</p>
<p>The  diagrams below illustrate how ARWave and federation can revolutionize  the way we share our augmented realities.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-27-at-6.06.33-PM.png"><img class="alignnone size-medium wp-image-5347" title="Screen shot 2010-03-27 at 6.06.33 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-27-at-6.06.33-PM-300x218.png" alt="Screen shot 2010-03-27 at 6.06.33 PM" width="300" height="218" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-27-at-6.06.00-PM.png"><img class="alignnone size-medium wp-image-5345" title="Screen shot 2010-03-27 at 6.06.00 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-27-at-6.06.00-PM-300x214.png" alt="Screen shot 2010-03-27 at 6.06.00 PM" width="300" height="214" /></a></p>
<h3><strong>Real Time Social Augmented Experiences</strong></h3>
<p>Another key aspect of ARWave is its near-real-time update capabilities. As Jeff Pulver pointed out in, "<a href="http://pulverblog.pulver.com/archives/009156.html" target="_blank"><strong>SXSW 2010: The days twitter became less relevant:"</strong></a></p>
<p><a href="http://pulverblog.pulver.com/archives/009156.html" target="_blank"><strong> </strong></a><strong>"At <a href="http://click.bsftransmit1.com/ClickThru.aspx?pubids=6954%7c149%7c09546&amp;digest=j9iIm6%2b67%2fKjaKaD%2bG459g" target="_blank">South By Southwest</a> 2010 (SXSW), a strange thing happened on the way to Austin. A community of twitter faithful shifted from sharing everything about everything on only twitter (and maybe Facebook) and changed their habits to rely on learning about what was happening and where things were happening by using <a href="http://click.bsftransmit1.com/ClickThru.aspx?pubids=6954%7c140%7c09546&amp;digest=vh5VR%2fg1W2H2FHKwRIGl8g" target="_blank">foursquare</a> and <a href="http://click.bsftransmit1.com/ClickThru.aspx?pubids=6954%7c141%7c09546&amp;digest=SyK27R5EP7LzBWYvodNDpQ" target="_blank">Gowalla</a> instead. I'm sure there were other products and platforms being used including <a href="http://click.bsftransmit1.com/ClickThru.aspx?pubids=6954%7c142%7c09546&amp;digest=Nd55%2flEGjFr3lopcn8%2fqiA" target="_blank">Loopt</a> and <a href="http://click.bsftransmit1.com/ClickThru.aspx?pubids=6954%7c143%7c09546&amp;digest=rJYwQX8VJw9Bww36xQ1Lbg" target="_blank">GySPii</a> but foursquare and Gowalla were the dominant platforms."<br />
</strong></p>
<p>Later Jeff wrote:</p>
<p><strong>"There were times where I could feel the ebbs and the flows of the people move as different people checked into various locations. While most of this was felt locally in the place I was in, it also became apparent on the platforms when hundreds of people would rush to check in to a location. There were also times when it felt like I was chasing ghosts; these were the times I would go to a spot because a friend had checked into that spot only to discover they were no longer there."</strong></p>
<p>ARWave's real-time collaborative capabilities are going to introduce some fascinating dynamics to "chasing ghosts," as the ARWave framework gets integrated into services like foursquare – a project we have already begun to look at.</p>
<h3><strong>Augmented Reality  Search</strong></h3>
<p>As I mention<a href="../../2010/03/18/visual-search-augmented-reality-and-physical-hyperlinks-for-playfulness-not-just-purchases-talking-with-paige-saez-about-imagewiki/" target="_blank"> in my previous post</a>, ARWave presents some fascinating possibilities for AR Search. For example, one might do advanced searching within waves using SPARQL, which could then display in the form of a personal blip in your viewpoint (which in turn could be shared with others). Linked data will be massively important in filtering and delivering useful info for augmented views (<a href="../../2010/03/03/the-game-is-about-the-world-not-dragons-talking-with-will-wright/" target="_blank">see my conversation with Will Wright </a>about the problem of augmented reality overriding our very smart instincts and becoming useless or worse as a result).</p>
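Whatever language expresses the query itself (the post suggests SPARQL), an AR view ultimately needs results filtered down to the viewer's surroundings. As a hedged illustration of that last step (the result records, coordinates, and radius below are invented), a haversine-based proximity filter:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    R = 6_371_000  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def nearby(results, lat, lon, radius_m):
    """Keep only search results within radius_m of the viewer."""
    return [r for r in results
            if haversine_m(lat, lon, r["lat"], r["lon"]) <= radius_m]

# Hypothetical search results; only the first is within 500 m of the viewer.
results = [
    {"title": "mural", "lat": 37.3330, "lon": -121.8900},
    {"title": "far-away statue", "lat": 37.4000, "lon": -121.9000},
]
visible = nearby(results, 37.3331, -121.8901, 500)
```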
<p>Anselm Hook, who I interviewed in depth recently about <a title="Permanent Link to Visual Search,  Augmented  Reality and a Social Commons for the Physical World Platform:  Interview  with Anselm Hook" rel="bookmark" href="http://docs.google.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/">Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook</a>, has some very interesting thoughts on real time stuff, trading brokerages, and the view within a single city block, which he elaborated on in the second half of this interview, upcoming on UgoTrade soon!</p>
<h3><strong>The  ARWave Developers</strong></h3>
<p><strong> </strong>There are three people who unfortunately can&#8217;t join us at Where 2.0 – the costs of travelling from Europe being an obstacle. But as they have been developing the code for ARWave that will rock our augmented world, I asked them, in a Wave conversation, to give me a few comments about their interest in working on ARWave, and a pic and a short bio. Also I should mention the work of the PyGoWave team whose incredibly fast work creating <a id="stt3" title="PyGoWave" href="http://pygowave.net/">PyGoWave</a> has given ARWave a rocket launch pad. Also many thanks to the Wave community, see the <a id="vma_" title="Wave Federation  Protocol documentation" href="http://www.waveprotocol.org/">Wave Federation Protocol documentation</a>, <a id="exsg" title="Google's Wave  Server" href="https://wave.google.com/wave">Google&#8217;s Wave Server</a>, <a id="b:s7" title="RubyOnSails" href="http://wiki.github.com/danopia/ruby-on-sails/">RubyOnSails</a> (a Ruby on Rails based Wave server).</p>
<p><a href="http://need2revolt.wordpress.com/" target="_blank"><strong>Davide Carnovale</strong></a> @need2revolt</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/davide.jpg"><img class="alignnone size-thumbnail wp-image-5349" title="davide" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/davide-150x150.jpg" alt="davide" width="150" height="150" /></a></p>
<p><strong>"Imho, the coolest geolocation-related thing is that we're making a world where the info does not necessarily come from an explicit search from the user, but comes also from the actual location you're in. For instance, you can have special offers in stores like foursquare does, or your friends can leave geolocated notes for you that are triggered when you walk by. We can have games based on the treasure hunt schema requiring you to actually go to specific locations.</strong></p>
<p><strong>Other than this I can think about self-guided tours of the city, maybe user-generated too, or for museums.<br />
</strong></p>
<p><strong>Naturally these are long-term goals with some real-life use cases.</strong></p>
<p><strong>As for my bio, there isn't much to say… I got a first level degree in computer science and I'm taking the second (and last) level. I've developed with mobile agents, osgart/artoolkit, brain computer interfaces, linux kernel and that's pretty much all…"</strong></p>
<p><strong><a href="http://www.lostagain.nl/" target="_blank">Thomas Wrobel</a></strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-28-at-4.35.59-AM.png"><img class="alignnone size-thumbnail wp-image-5354" title="Screen shot 2010-03-28 at 4.35.59 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Screen-shot-2010-03-28-at-4.35.59-AM-150x150.png" alt="Screen shot 2010-03-28 at 4.35.59 AM" width="150" height="150" /></a></p>
<p><strong>&#8220;If you are looking for specific advantages of using Wave I&#8217;d say:<br />
</strong><strong> </strong></p>
<p><strong>* Federated – Letting creators tap into a bigger userbase. Each new app or data layer will add to the "incentive" for users to join in. Google had some good stats a few months back as to how much a simple login screen can put people off using stuff. By breaking that barrier it should make AR userbases grow.</strong></p>
<p><strong>* It deals with user accounts,  permissions, and real-time updating without creators needing to make a  new server standard themselves. It lowers barriers to development.</strong></p>
<p><strong>* As the clients, servers, and data can be made separately by different parties, it's easier for developers to concentrate on just providing what they want. You want to just make content? No problem! You don't need to worry about doing anything else but that. It would become as easy as making a webpage (or easier!).</strong></p>
<p><strong>* Bots will allow the  development of interactive AR games very easily. Just like modern  version of IRC bots, the infrastructure does the heavy lifting, and  interesting things can be done with just simple scripting.</strong></p>
<p><strong>* The idea is anyone will be able to make a layer onto the world, and people can mix, match and share their layers as they wish. It's not just the data that becomes interesting to see augmenting our world, but the combinations of data! For example, perhaps you could see the profits generated by different companies above their buildings, but also see how environmentally friendly they are at the same time. Or maybe see pollution levels against health statistics. Seeing combinations of geolocated data from different sources at the same time has many interesting possibilities both for scientific as well as casual (game/ map/ chat) use.</strong></p>
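Thomas's layer-mixing idea can be sketched as a per-place merge of independent data layers; the layer names and fields below (company profits, eco ratings) are invented for illustration:

```python
def mix_layers(*layers):
    """Combine several data layers keyed by place id.

    Each layer maps a place id to a dict of attributes; mixing is a
    per-place merge, so a viewer sees every attribute that any of the
    selected layers publishes for that place.
    """
    mixed = {}
    for layer in layers:
        for place, attrs in layer.items():
            mixed.setdefault(place, {}).update(attrs)
    return mixed

# Two independently published layers over the same buildings.
profits = {"hq-1": {"profit_musd": 12.5}, "hq-2": {"profit_musd": 3.1}}
eco = {"hq-1": {"eco_rating": "B"}, "hq-2": {"eco_rating": "A"}}

combined = mix_layers(profits, eco)
```

The interesting part is that neither publisher needs to know the other exists; the combination happens client-side, at the user's choice.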
<p><strong>hmz… I could go on forever listing stuff here really…</strong></p>
<p><strong>I guess if we are supposed  to be forming a roadmap of significant/interesting things for ARWave?</strong></p>
<p><strong>*  Example clients letting people make their own layers (waves) and add  points to them.</strong></p>
<p><strong>* Letting people log in to different  servers</strong></p>
<p><strong>* Servers federated together. (not our  responsibility, but essential part of the roadmap).</strong></p>
<p><strong>*  Anyone logged into any server can see data from anyone else that&#8217;s shared  with them, regardless of where they are logged into</strong></p>
<p><strong> * 3D support, demonstrating various sorts of geolocated data?</strong></p>
<p><strong>* Use of bots for example games?<br />
—<br />
My Bio's quite simple.<br />
Studied 3D Animation in Portsmouth, UK.<br />
Moved to the Netherlands, have since been working in creating ARG games, in the last year founded Lostagain (Lostagain.nl)."</strong></p>
<p><strong><a id="ikdu" title="Markus Strickler" href="http://twitter.com/kusako">Markus  Strickler @kusako</a></strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/markus.jpg"><img class="alignnone size-thumbnail wp-image-5350" title="markus" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/markus-150x150.jpg" alt="markus" width="150" height="150" /></a></p>
<p><strong>"I think the main point behind ARWave is to go beyond simply displaying existing placemarks on top of a live camera view, towards a highly personalized, augmented world where everybody can edit and share localized information collaboratively and in real time. Wave provides the means to do this through its model of persistent real-time conversations, and adds even more by providing a way for personal agents (robots) to participate in these conversations.</strong></p>
<p><strong>As for my bio: I've been developing Web applications for the last 15 years, hold a degree in Image Sciences and am currently working as a Java developer in Cologne, Germany."</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/03/29/the-next-wave-of-ar-exploring-social-augmented-experiences-at-where-2-0/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>AR Wave: Layers and Channels of Social Augmented Experiences</title>
		<link>http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/</link>
		<comments>http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/#comments</comments>
		<pubDate>Tue, 13 Oct 2009 18:52:42 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Amphibious Architecture]]></category>
		<category><![CDATA[AR Blip]]></category>
		<category><![CDATA[AR Browser]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[augmentaion]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Channels and Social Augmented Realities]]></category>
		<category><![CDATA[citi sensing]]></category>
		<category><![CDATA[citizen sensing]]></category>
		<category><![CDATA[Clayton Lilly]]></category>
		<category><![CDATA[cybernetics vs ecology and human waste]]></category>
		<category><![CDATA[distributed]]></category>
		<category><![CDATA[eco mapping]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[geoAR]]></category>
		<category><![CDATA[geospatial web]]></category>
		<category><![CDATA[geospatial web and augmented reality]]></category>
		<category><![CDATA[Goggle Wave Federation Protocol]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Google Wave as an AR enabler]]></category>
		<category><![CDATA[Google Wave enable augmented reality]]></category>
		<category><![CDATA[Google Wave Protocols]]></category>
		<category><![CDATA[green tech augmented reality]]></category>
		<category><![CDATA[immersive sight]]></category>
		<category><![CDATA[Jeremy Hight]]></category>
		<category><![CDATA[Joe Lamantia]]></category>
		<category><![CDATA[Layers]]></category>
		<category><![CDATA[layers and channels of augmented reality]]></category>
		<category><![CDATA[Life Clipper]]></category>
		<category><![CDATA[life streaming]]></category>
		<category><![CDATA[location based media]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[locative media]]></category>
		<category><![CDATA[locative narratives]]></category>
		<category><![CDATA[Mannahatta]]></category>
		<category><![CDATA[map based augmentation]]></category>
		<category><![CDATA[mapping]]></category>
		<category><![CDATA[modulated mapping]]></category>
		<category><![CDATA[modulated napping]]></category>
		<category><![CDATA[multi-user]]></category>
		<category><![CDATA[narrative archaeology]]></category>
		<category><![CDATA[Natural Fuse]]></category>
		<category><![CDATA[neogeography]]></category>
		<category><![CDATA[networked urbanism]]></category>
		<category><![CDATA[non euclidian geometry]]></category>
		<category><![CDATA[open augmented reality framework]]></category>
		<category><![CDATA[Seanseable Labs]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[shared augmented realities]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[social augmented reality experiences]]></category>
		<category><![CDATA[sound augmentation]]></category>
		<category><![CDATA[Thomas K. Carpenter]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Trash Track]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[virtual reality]]></category>
		<category><![CDATA[Wave as a platform for augmented reality]]></category>
		<category><![CDATA[Wave Blip]]></category>
		<category><![CDATA[Wave Bots]]></category>
		<category><![CDATA[Wave playback]]></category>
		<category><![CDATA[Wave playback feature]]></category>
		<category><![CDATA[Wave Robots]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4585</guid>
		<description><![CDATA[It is now nearly two weeks since the Google Wave preview launch and I am happy to say we have some AR Wave news. The diagram above shows Thomas Wrobel's basic concept for a distributed, multi-user, open augmented reality framework based on the Google Wave Federation Protocol and servers (click on the image to see [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://lostagain.nl/tempspace/PrototypeDiagram3_wave.html" target="_blank"><img class="alignnone size-medium wp-image-4586" title="Screen shot 2009-10-12 at 2.40.39 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/Screen-shot-2009-10-12-at-2.40.39-PM-300x154.png" alt="Screen shot 2009-10-12 at 2.40.39 PM" width="300" height="154" /></a></p>
<p>It is now nearly two weeks since the <a href="http://wave.google.com/" target="_blank">Google Wave</a> preview launch and I am happy to say we have some AR Wave news. The diagram above shows Thomas Wrobel's basic concept for a distributed, multi-user, open augmented reality framework based on the <a href="http://www.waveprotocol.org/" target="_blank">Google Wave Federation Protocol</a> and servers (click on the image to see the dynamic annotated sketch <a href="http://lostagain.nl/tempspace/PrototypeDiagram3_wave.html" target="_blank">or here</a>).</p>
<p>Even in the short time we have had to explore Wave, some very exciting possibilities are becoming clear. Thomas puts some of the virtues of Wave as an AR enabler succinctly when he writes:</p>
<p><strong>"Wave allows the advantages of both real-time communication and persistent hosting of data. It is both like IRC and like a wiki. It allows anyone to create a Wave and share it with anyone else. It allows Waves to be edited at the same time by many people, or used as a private reference for just one person.</strong></p>
<p><strong>These are all incredibly useful properties for any AR experience; what's more, Wave is open. Anyone can make a server or client for Wave. Better yet, these servers will exchange data with each other, providing a seamless world for the user… a single login will let you browse the whole world of public waves, regardless of who's providing or hosting the data. Wave is also quite scalable and secure… data is only exchanged when necessary, and will stay local if no one else needs to view it.</strong></p>
<p><strong>Wave allows bots to run on it… allowing blips in a wave to be automatically updated, created or destroyed based on any criteria the coders choose. Wave even allows the playback of all edits since the wave was created.</strong></p>
<p><strong>For all these reasons and more, Wave makes a great platform for AR."</strong></p>
<p>There will be much more coming soon on Wave-enabled AR because the Google Wave invites have begun to flow out to a wider community now. This week, many of our small ad-hoc group looking at the development challenges and implications of Google Wave for AR actually got into Wave for the first time.</p>
<p>Many thanks to all the people who have contributed to this discussion so far including: Thomas Wrobel, Thomas K. Carpenter, Jeremy Hight, Joe Lamantia, Clayton Lilly, Gene Becker and many others.</p>
<p>We will be setting up some public AR Framework Development Waves this week. If you have any trouble finding them, or adding yourself to them, please add Thomas and me to your contact list. I am tishshute@googlewave.com; Thomas is darkflame@googlewave.com. The first two are currently called:</p>
<p><strong>AR Wave: Augmented Reality Wave Framework Development</strong> (developer forum)</p>
<p><strong>AR Wave: Augmented Reality Wave Development</strong> (for general discussion)</p>
<p>The discussion so far has been in two areas. On the one hand, it is gear-heady and focused on the <a href="http://www.waveprotocol.org/" target="_blank">Google Wave Federation Protocol</a>, code, development challenges, and interfacing to mobile, while on the other hand people have been looking at use cases and questions of user experience.</p>
<p>Distributed "shared augmented realities," or "social augmented experiences" &#8211; which not only allow mashups &amp; multisource data flows, but also dynamic overlays (not limited to 3D), created by users, linked to location/place/time, and distributed to other users who wish to engage with the experience by viewing and co-creating elements for their own goals and benefit &#8211; are something very new for us to think about.</p>
<p>As Joe Lamantia puts it:</p>
<p><strong>"there's a feedback loop between which interactions are made easy by any given combo of device / hardware / software / connectivity, and the ways that people really work in real life (without any mediation / permeation by tech)."</strong></p>
<p>Joe Lamantia, whose term <strong>"social augmented experiences"</strong> I borrow for this post's title, has done some thinking about <strong>"concepts and models for understanding and contributing to shared augmented experiences, such as the social scales for interaction, and the challenges attendant to designing such interactions."</strong> Check out <a href="http://www.joelamantia.com/" target="_blank">Joe Lamantia&#8217;s blog</a> for more on this later this week.</p>
<p>It is very helpful, as Joe points out, to shift the focus back and forth between the experience and the medium.</p>
<p>It is super exciting to have clear evidence that shared augmented realities are no longer merely possible, but highly probable and actually do-able now.</p>
<p>I should be absolutely clear about what Google Wave does to enable AR, because obviously Wave plays no role in solving image recognition and tracking/registration issues. But, for example, Wave protocols and servers do provide a means to exchange, edit, and read data, and that enables distributed, social augmented realities.</p>
<p>Thomas explains how the newly named &#8220;AR Blip&#8221; works:</p>
<p><strong>&#8220;An AR Blip is simply a Blip in a wave containing AR data. Typically this would be the positional and URL data telling an AR browser to position a 3D object at a location in space.</strong></p>
<p><strong>In more generic terms, an AR Blip allows data of various forms (meshes, text, sound) to be given a real-world position.&#8221;</strong></p>
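<p>As a rough illustration of Thomas's description, an AR Blip's payload might look something like the sketch below. All field names and the example URL are invented for illustration; this is not a published AR Wave schema.</p>

```python
# Hypothetical sketch of the data an AR Blip might carry: a real-world
# position plus a pointer to the media an AR browser should place there.
# Field names are assumptions, not a published AR Wave schema.
from dataclasses import dataclass

@dataclass
class ARBlip:
    lat: float        # WGS84 latitude, degrees
    lon: float        # WGS84 longitude, degrees
    alt: float        # altitude in metres
    media_type: str   # "mesh", "text", "sound", ...
    media_url: str    # where the client fetches the content

# An AR browser reading this blip would place the mesh at the given point.
statue = ARBlip(lat=40.7128, lon=-74.0060, alt=10.0,
                media_type="mesh",
                media_url="http://example.com/models/statue.obj")
```

<p>Because a Blip is just data inside a wave, editing these fields collaboratively (or having a bot edit them) is what turns a static placemark into a shared, live augmentation.</p>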
<p>I have mentioned in other posts (<a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">here</a> and <a href="http://www.ugotrade.com/2009/09/26/total-immersion-and-the-transfigured-city-shared-augmented-realities-the-web-squared-era-and-google-wave/" target="_blank">here</a>) that Wave can be used for AR as precise or as loose as the current generation of devices can handle. And as the hardware and software mature for the kind of AR that can put media out in the world and truly immerse you in a mixed space, the framework should be able to handle this too.</p>
<p>(A note on the Wave playback feature &#8211; this opens up a whole new world of possibilities. Check out <a href="http://snarkmarket.com/2009/3605" target="_blank">this post</a> on some of the implications of playback for writing!)</p>
<p>The use cases we have been coming up with are too numerous to go into in detail in this post. The open nature of an AR framework/Wave standard will lead to many new applications we have barely begun to imagine. As Thomas points out, different client software can be made for browsing, potentially allowing for various specialist browsers, as well as more generic ones for typical use. The multitudes of different kinds of data input/output that could be integrated into an open AR framework as it evolves are mind-boggling.</p>
<p>But, for now, some obvious use cases do come to mind, e.g.:</p>
<p>- Historical environmental overlays showing how a city used to be/and how this vision may be constructed differently by different communities</p>
<p>- Proposed building work showing future changes to a structure, and the negotiation of that future (both the public and professionals could submit their own comments on the plans in context); seeing pipes, cables and other invisible elements could help builders and engineers collaborate and do their work</p>
<p>- Skinning the world with interactive fantasies</p>
<p>I asked Thomas to help people understand how Wave enables new interactions with data by explaining how Wave could enable city sensing and citizen sensing projects (e.g. <a href="http://tinyurl.com/y97d5zr" target="_blank">this one being pioneered by Griswold</a>):</p>
<p><strong><strong>&#8220;Sensors, both mobile and static could contribute environmental data into city overlays;</strong></strong></p>
<div><strong><strong>—temperature, windspeed, air quality (amounts of certain particles), water quality, amount of sunlight, CO2 emissions could all be fed into different waves. The AR Wave Framework makes it easy to see any combination of these at the same time.&#8221;</strong></strong></div>
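<p>The "see any combination at the same time" idea can be sketched simply: each wave publishes one layer of geolocated readings, and a client merges layers by location before rendering the overlay. The layer names, coordinates and values below are invented for illustration; this is not an AR Wave API.</p>

```python
# Hypothetical client-side merge of independently published sensor layers.
# Each layer is a dict of readings keyed by (lat, lon); merging them lets
# one overlay show several data sources at the same point.

def combine_layers(*layers):
    """Merge (name, points) layer pairs into one overlay keyed by location."""
    overlay = {}
    for name, points in layers:
        for location, value in points.items():
            overlay.setdefault(location, {})[name] = value
    return overlay

# Invented example data for two waves covering the same two locations.
temperature = {(52.37, 4.89): 14.2, (52.36, 4.90): 13.8}     # degrees C
air_quality = {(52.37, 4.89): "poor", (52.36, 4.90): "good"}

overlay = combine_layers(("temperature", temperature),
                         ("air_quality", air_quality))
# overlay[(52.37, 4.89)] now holds both readings for that point.
```

<p>Because layers stay separate at the source, users can mix and match whichever waves they subscribe to, which is exactly the layer-sharing model described above.</p>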
<p>Having these invisible aspects of the world made visible would create ways to improve sustainability, social equity, urban management, energy efficiency and public health, and allow communities to understand and become active participants in the ecosystems and infrastructure of their neighborhoods.</p>
<p>The key is reflecting this kind of data back to people &#8211; &#8220;making it not back story but fore story&#8221; &#8211; right where we are, right where it happens, as well as having it available for analysis.</p>
<p>As well as creating new opportunities to interact with, respond to, and enhance data, making visible the invisible can also create new connections and understandings between humans and the non-humans that share our world &#8211; e.g. fish, plants, waterways &#8211; as <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko&#8217;s</a> work on <a href="http://www.amphibiousarchitecture.net/" target="_blank">Amphibious Architecture</a> and <a href="http://www.haque.co.uk/" target="_blank">Usman Haque&#8217;s</a> project <a href="http://www.sentientcity.net/exhibit/?p=43" target="_blank">Natural Fuse</a> show.</p>
<p>At a more prosaic level, potential buyers of property could see more clearly what they are buying, city planners could see better what needs to be worked on, and environmental researchers could see more clearly the impact people are having on an area.</p>
<p>Wave can also provide some of the framework necessary to begin to address tricky problems of privacy. Sensitive data can be stored on private waves, e.g. medical data for doctors and researchers, but the analysis of the data could still benefit everyone, e.g. if it tied disease occurrences to locations and the relationships between environmental data and health were… quite literally… made visible.</p>
<p><strong>&#8220;The publication of energy consumption, making it visible as overlays, could help influence the public into supporting more energy-efficient companies and businesses. It could also help citizens keep their own energy usage down, to try to keep their street in &#8220;the green.&#8221;</strong></p>
<p>Thomas notes:</p>
<p><strong>&#8220;With all of the above, it becomes fairly trivial to write persistent Wave bots that automatically send notice when certain criteria are met (pollutants over a certain level, for example). On publicly readable waves, anyone can use the data on their local computers, process it, and contribute results back on a new wave. Alternatively, persistent remote servers could run cron jobs, or other automated processing, using services such as App Engine to run Wave robots.</strong></p>
<p><strong>All these possibilities become &#8220;free&#8221; when using Wave as a platform for geographically tied data.&#8221;</strong></p>
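<p>The threshold-alert idea Thomas describes can be sketched in a few lines. The reading format, field names and limit below are invented for illustration; a real bot would hook into the Wave Robots API rather than plain functions.</p>

```python
# Minimal sketch of a threshold-alert bot: scan geolocated sensor readings
# and emit a notice for any reading over a pollutant limit. The data shape
# and the limit are illustrative assumptions, not part of any real API.

PM10_LIMIT = 50.0  # illustrative threshold, micrograms per cubic metre

def check_readings(readings, limit=PM10_LIMIT):
    """Return one alert message per reading that exceeds the limit."""
    alerts = []
    for r in readings:
        if r["pm10"] > limit:
            alerts.append(f"Pollution alert at ({r['lat']}, {r['lon']}): "
                          f"PM10 = {r['pm10']}")
    return alerts

# Invented sample readings; only the first exceeds the limit.
readings = [
    {"lat": 52.37, "lon": 4.89, "pm10": 71.2},
    {"lat": 52.36, "lon": 4.90, "pm10": 23.4},
]
alerts = check_readings(readings)
```

<p>In the Wave model, such a check could run as a bot subscribed to a public sensor wave, posting each alert back as a new blip so that everyone following the wave sees it in real time.</p>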
<p>But of course this is just the beginning!</p>
<p><em>Recently, I talked at length with Jeremy Hight, who has for quite some time been thinking about, designing and creating shared augmented realities that anticipate the kind of dynamic, real-time, large-scale architecture we now have available through Wave. This is exciting stuff.</em></p>
<h3><strong>Modulated Mapping:</strong> Talking with Jeremy Hight about Layers, Channels and Social Augmented Experiences</h3>
<p><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping5.jpg"><img class="alignnone size-medium wp-image-4611" title="modulatedmapping5" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping5-230x300.jpg" alt="modulatedmapping5" width="230" height="300" /></a><br />
</strong></strong></p>
<p><strong><strong><em><span>image from Volume Magazine (Hight/Wehby)</span></em></strong></strong></p>
<p><strong><strong>Tish Shute:</strong></strong> I know you have been involved in locative media from its early days. Perhaps we can talk about how AR continues the locative media journey?</p>
<p><a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> gave me this distinction recently: <em>&#8220;AR is about systems that put media out in the world, and immerse you in a mixed space. Even the current &#8220;not really registered&#8221; mobile phone AR systems are still &#8220;sort of&#8221; AR (e.g., Layar, etc).</em></p>
<p><em>Locative media/ubicomp/etc are very different, in that they tend to display media on a device (phone screen) that is relevant to your context, but does not attempt to merge it with the world.<br />
The difference is significant, and making it clear helps people think about what they do and what they want to do, with their work. The locative media space though points toward future AR systems (when the technology catches up!).&#8221;</em></p>
<p><strong><strong>Jeremy Hight: The need is to finish the arc that locative media and early AR have started and to now truly return to the map itself, but as an internet of data, interactivity, channels of data, end user options like analog machines once were but in high end tools, a smart AI-ish ability for it to cull data for the user, and to allow social networking to be in real world places on the map both in building augmentation and in using and appreciating it..not hacks..which have their place&#8230;but a rhizome, a branched system with shared root, end user adjustable and variable..this is the key.</strong></strong></p>
<p><strong><strong>This takes AR and mapping and makes a possible world of channels in space and this eventually can be a kind of net we see in our field of vision with a selected percentage of visual field and placement so a geo-spatial net, a local to world wide fusion of lm into a tool and educational tool</strong></strong></p>
<p><strong><strong><span>VR [virtual reality] has greatly advanced, but in nodes, as it has limitations… LM [locative media] is the same… AR [augmented reality] is the way..</span></strong><strong> it now has locative elements and aspects of VR integrated into its functionality and nodes&#8230;it is the best option with all of these elements, greater hybridity and data level potential as well as end user and community sourcing potential</strong></strong></p>
<p><strong><strong>I wrote an essay for Archis&#8217; Volume, the architecture magazine on a near future sense of some of this&#8230;.a visual net on the lens like ar but with smart objects and social networking and dissent.</strong></strong></p>
<p><strong><strong>I also wrote of these things for immersive graphic design, spatially aware museum augmentation, education through AR and LM, and a nod to the base interface of eye to cerebral cortex in layered and malleable augmentation in my essay <a href="http://www.neme.org/main/645/immersive-sight" target="_blank">&#8220;Immersive Sight&#8221;</a> a few years back</strong></strong></p>
<div id="gqg9" style="text-align: left;"><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_3dj7g8zf7_b.jpg"><img class="alignnone size-medium wp-image-4601" title="dgznj3hp_3dj7g8zf7_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_3dj7g8zf7_b-300x225.jpg" alt="dgznj3hp_3dj7g8zf7_b" width="300" height="225" /></a></strong></div>
<p><strong><strong>image [above] is simple illustration of a possible example on a screen or in front of eye where in a mondrian show..the graphic design of information actually builds as one moves</strong></strong></p>
<p><strong><strong>(key is calibrated spatial intervals and related layers of further augmentation which is logical due to location and proximity)</strong></strong></p>
<p><strong><strong>from immersive sight on immersive graphic design:</strong> <em>&#8220;The design can work with this in a way that creates an interactive supplemental set of information that is malleable, shifts based on location, builds and peels away as one moves closer to a work and plays with the forms of the works and the elements of the space itself. The sequence can contain many different elements and their interplay (both in the field of vision and in terms of context and layers of information). This is the model of sections of augmentation turning on and off at key points as individual spatial and concepts moments and nodes.</em></strong></p>
<p><strong><em>Another interesting possibility is that individual points of augmentation don&#8217;t turn off, but instead are designed to build as one moves in a direction toward a specific part of the exhibit. The design can work in a sequence both content-wise and visually in terms of a delay-powered compositional development and style in which each discrete layer of text and image does not fade out, but builds on the others into a final composition. This can form paintings similar to Mondrian, perhaps, if it is a show of similar works of that era, or it can form something much more metaphorical and open interpretation of the space and content but utilizing a sense of emergence spatially in terms of the composition (pieces laid bare until final approach for effect). </em></strong></p>
<p><strong><em>Each section will be well designed, but they build in layers as one moves until finally forming the final composition both visually and in terms of scope of information or building immediacy. The effect can be akin to taking a painting and slicing it into onion skin layers laid out in the air at intervals, each the same dimensions, but only one section compositionally of the greater whole. This has many semiotic applications beyond its potential aesthetically and as spatialized information possessing a sense of inter-relationship as one moves.</em>&#8220;</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>One of the things I found very inspiring when I read your papers was that your ideas are not all dependent on a model of AR that would necessarily require goggles, backpacks and lots of CPU/GPU &#8211; not that that wouldn&#8217;t be nice &#8211; but that even the &#8220;magic lens&#8221; AR that smartphones have enabled would, in an open distributed framework, open up a lot of new possibilities for what you call modulated mapping, wouldn&#8217;t it? What kind of social augmented realities might be enabled by a distributed infrastructure like this [AR Wave]?</p>
<p><strong><strong>Jeremy Hight: right&#8230;.I see that as wayyy down the road&#8230;most important is the one you talk about, as it is more immediate and thus more essential and needed. Eventually the goggles will be like a contact lens and a deep immersive AR version of this will come; that to me is certain, but a ways down the road. An incredible amount is possible now, and this is a more pragmatic move as opposed to the more theoretical of what is a few steps from here. Thus it is more important and essential now. Tools like Google Wave are taking what even 2 years ago were more theoretical discussions of what may be and instead introducing key elements to a more immediate, powerful, flexible level of augmentation. What have been hacks and isolated elements are to be integrated: social networking, task completion, shared tools and graphics building, and geo-location.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>I think some people question what augmented reality has to bring to the continuum of location based experiences that other forms of interface/mapping do not?</p>
<p><strong><strong><span>Jeremy Hight: right… and the schism between its commercial </span></strong><strong>flat self and tests with physics etc and in between… there are a lot of unfortunate assumptions, it seems, as to where AR and LM cross, and how AR can be many things beyond deep immersion or the opposite pole of a hockey puck having a magic purple line etc…. like LM is seen as either car directions or situationist experiments with deep data… the progression to me is deeply organic… and now augmentation can be more malleable, variable and end-user controlled.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yes, it is a really exciting time for AR. Historically, AR research has gone after the hard problems of image recognition, tracking and registration because we haven&#8217;t had dynamic, real-time, large-scale architectures like Wave available (until now!), so less work has been done on exploring the possibilities for distributed AR fully integrated with the internet and WWW, hasn&#8217;t it?</p>
<p>A distributed augmented reality framework such as we have envisaged on Wave would allow people to see many layers from many different people at the same time. And this kind of model has been part of your thinking and fundamental to your work for a while, hasn&#8217;t it? But it is a very new idea to most people to think about collaboratively editing layers on the world, and to be able to view augmented space through channels and networked communities. Could you explain some of the ways you have explored these ideas and how they could be explored further now to create meaningful experiences for people?</p>
<p><strong><strong>Jeremy Hight: right.. exactly… modulated mapping to me can be an amazing tool for students… back-end searching, data visualizations and augmentations based on their needs… while they do something else on their computer or iPhone… that can be amazing.. and not deep immersive. The map can be active, malleable, open-source fed, and even, in a sense, intelligent and able to adapt. The possibility also exists for this map to have a function that, based on key words, will search databases online to find maps, animations, histories and stories etc. to place within it for your study and engagement. The map is thus a platform and yet is active. Community is possible as people can communicate graphically in works placed on the map and in building mode in the tool. All the tropes of locative media are to be in a mapping system of channels of augmentation and a spatial net. The software by design will allow development on the map and communication like programs such as Second Life but in mapping itself.</strong></strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modultedmapping1.jpg"><img class="alignnone size-medium wp-image-4607" title="interactive 3d map copy" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modultedmapping1-246x300.jpg" alt="interactive 3d map copy" width="246" height="300" /></a></strong></p>
<p><strong><strong><em><strong><span>image from Parsons Journal of Information Mapping Volume 2 (Hight/Wehby)</span></strong></em></strong></strong></p>
<p><strong><strong><span>I wrote an essay a few years ago for the Sarai Reader questioning the traditional map and its semiotics and the need to reconsider &#8211; then did work looking into it and what those dynamics were, and they got into 2 group shows in museums in Russia… so it actually was my arc toward modulated mapping… an interesting way to it! But yes, the map itself.. this is a huge area of potential, and non-screen-based navigation etc. I see now that my 2 dozen or so essays in LM, AR, interface design and augmentation have all also been leading in this direction for about 10 years now</span></strong></strong></p>
<p><strong><strong>Tish Shute: </strong>I love immersive visualization, but can we &#8220;return to the map &#8211; the internet of data&#8221; as you mentioned earlier and produce interesting augmentation experiences that go beyond locative media&#8217;s device display mode without having the goggles &#8211; for example, through the magic lens of our smart phones?</strong></p>
<p><strong><strong>Jeremy Hight: yes, absolutely. the map in the older paradigm is an artifice born often of war and border dispute and not of the earth itself and its processes&#8230;the new mapping like google maps is malleable, can be open source, can read spaces and can be layers of info in the related space, not plucked from it as in the past..this is amazing. the old map also was born of false semiotics/semantics like &#8220;discovery of new lands&#8221; or &#8220;pioneer&#8221; while the places were there already and names often were of empire&#8230;now this is no longer the case</strong></strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping2.jpg"><img class="alignnone size-medium wp-image-4608" title="jeremy map small2 copy" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping2-300x233.jpg" alt="jeremy map small2 copy" width="300" height="233" /></a></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>So geoAR is a better way to express a new social relationship to mapping? And how does this fit into the evolving arc of locative media as it evolves into augmented reality?</p>
<p><strong><strong>Jeremy Hight: &#8230;early lm was mostly geocaching and drawing with gps..it took new paradigms to invigorate the field. a lot of folks focus on tools and what already is; cross pollination can ground ideas that are more radical&#8230;a metaphor in a sense, to place what can be in a familiar context.</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>one of the great disappointments in VR has been its isolation from networked computing and also, up to now, augmented reality &#8211; to achieve an immersive experience with tight registration of media/graphics you have to create a separate system isolated from the internet and the power of the web.</p>
<p><strong><strong>Jeremy Hight: yes&#8230;.this will change. vr is to me an island but ar takes a part of it and shifts the paradigm and new things open this way. Do you know the project <a href="http://www.lifeclipper.net/EN/process.html" target="_blank">&#8220;life clipper&#8221;</a>? friends of mine..doing interesting things..they are a clear bridge between lm and ar&#8230;.and from vr</strong></strong></p>
<p><strong><strong>in ar, augmentation and what is being augmented become fused, or in collision, or in complex interactions, as a means to a larger contextualization and exploration of what is being augmented..this is true in immersive or non-immersive ar&#8230;.huge potential</strong></strong></p>
<p><strong><strong>vr is a space; now it can be surgery, which is amazing. but not layered interaction, thus an island. and graphic iconography on a location can use symbolic icons, which opens up even more layers (the graphic designer/information designer in me talking there, I suppose..)</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yes! Talk to me more about layers and channels. I think this is one of the most interesting questions for me in augmented reality at the moment &#8211; what can we do with layers and channels, and what new possibilities for connections between people and environments can these create?</p>
<p>The ability for anyone to post something is critical to the distributed idea, but one of the reasons I am so excited by Google Wave is that I am fascinated by the playback function. How do you think this will enable new forms of collaborative locative narratives? (<a href="http://snarkmarket.com/2009/3605" target="_blank">nice post on Wave playback here</a>.)</p>
<p><strong><strong>Jeremy Hight: We are in an age of cartographic awareness unseen in hundreds of years. When was the last time that new mapping tools were sold in chain stores and installed in most vehicles? When was the last time that the augmentation of maps was also done by millions (Google map hacks, etc)? The ubiquitous gps maps run in automobiles while people post pictures and graphic pins to denote specific places on on-line maps.</strong></strong></p>
<p><strong><strong>The need is for a tool that combines all of these new elements into an open source, intuitive, layered and rhizomatic map that is porous (like pumice, organic in form yet with &#8220;breathing room&#8221;), ventilated (i.e. adjustable, a flow in and out), and open (open source, open access, open spatialized dialog).</strong></strong></p>
<p><strong><strong><span>I wrote of this in my essay &#8220;Revising the Map: Modulated Mapping and the Spatial Interface&#8221; (</span></strong><a id="h0qr" title="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" href="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf"><span>http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf</span></a>)</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping3.jpg"><img class="alignnone size-medium wp-image-4609" title="jeremy map small2 copy" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/modulatedmapping3-300x206.jpg" alt="jeremy map small2 copy" width="300" height="206" /></a></strong></p>
<p><strong><em><strong><span>image from Parsons Journal of Information Mapping (Hight/Wehby)</span></strong></em></strong></p>
<p><strong><strong>Tish Shute:</strong></strong> One mapping project I really like is <a href="http://themannahattaproject.org/" target="_blank">Mannahatta</a>. How could distributed AR contribute to a project like <a href="http://themannahattaproject.org/" target="_blank">Mannahatta</a>?</p>
<p><strong><strong>Jeremy Hight: that is a good example..imagine taking manhattan and having channels of options to overlay, that being an excellent option, and imagine being able to even run a few at once with delineating icons..you can augment a space with history, data, erasure, narrative, scientific analysis, time line of architecture, infrastructure, archaeological record etc&#8230;.endless possibilities, and this agitates place, and place on a map, into an active field of information with end user control&#8230;and open options for new layers</strong></strong></p>
<p><strong><strong>Tish Shute: </strong></strong>and do you think we could do interesting things with AR on a project like Mannahatta even with the current mediating devices we have available &#8211; i.e. our smart phones &#8211; as obviously the rich pc experience Mannahatta has built for its web interface would not be available as AR at this point?</p>
<p><strong><strong>Jeremy Hight: yes&#8230;.k.i.s.s right? these projects do not have to only be immersive and graphic intensive&#8230;&#8230;take how people upload photos onto google maps&#8230;.just make that on a menu of options, there are some pretty cool hacks already..<br />
&#8230;options is key, a space can have a community as well, building on it in software, and others navigating it, i see it near future and down the road..always have with ar really</strong></strong></p>
<p><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/locativenarratives1.jpg"><img class="alignnone size-medium wp-image-4596" title="locativenarratives1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/locativenarratives1-230x300.jpg" alt="locativenarratives1" width="230" height="300" /></a></strong></strong></p>
<p><strong><em><strong><span>image from Volume Magazine (Hight/Wehby)</span></strong></em></strong></p>
<p><strong><strong>Jeremy Hight: and yes, a lot of people focus on ar&#8217;s limitations and processing power needs as a major road block</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>so do you see AR on smart phones adding any value to a project like Mannahatta?</p>
<p><strong><strong>Jeremy Hight: yes&#8230;that it can be integrated into other similar works and even disparate but cloud linked ones&#8230;so a place can be &#8220;read&#8221; in diff ways on the iphone&#8230;.beyond its map location, and more can be possible if you are there&#8230;others away, so it becomes channels of augmentation</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>AR, like locative media, puts who you are, where you are, what you are doing, and what is around you center stage in online experience, but it also &#8220;puts media out in the world&#8221; &#8211; people understand this well as a single user experience, but we are only just beginning to think about how it will manifest as a social experience &#8211; could you explain more about modulated mapping as an experience of social augmentation?</p>
<p><strong>Jeremy Hight: Modulated Mapping is a tool that will allow channels to be run along the map itself. This will allow one to view different icons and augmentations both as systems on the map and in deeper layers of information (photos, videos, animations, visualizations, etc) that can be turned on and off as desired. The different layers of icons and data may be history, dissent, artworks, spatialized narratives, and annotations developed communally based on shared interests, placed spatially and far beyond. The use of chat functionality in text or audio will be open in building mode and in mapping navigation/usage as desired. This also allows a community to develop or augment in the spaces on the earth. These nodes can be larger and open or small and set by groups in their channel. The end result is an open source sense of mapping that will also have a needed sense of user control, as one can select which layers of augmentation they wish to see and interact with at any time. It also will incorporate all the functionality of locative media in mapping software and mapping. In building mode and in map mode, icons will be coded to represent within channels (remember that the person using it has selected channels of augmentation from many, based on their current interests and needs).
Icons will be coded as active to show work in progress in cities and the globe, to both invite participation and to further agitate the map from the sense of the static, as action is visible even with its icons as people are working and community is formed in common interest/need.</strong></p>
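<p>The model described above (channels of geo-placed augmentations running along the map, toggled on and off per user) can be sketched as a small data structure. This is a minimal illustrative sketch in Python; all class and field names are invented for illustration and are not taken from any actual modulated-mapping software.</p>

```python
# Hypothetical sketch: channels of augmentation on a shared map,
# where each user selects which channels are visible at any time.
from dataclasses import dataclass, field

@dataclass
class Augmentation:
    lat: float
    lon: float
    kind: str        # e.g. "history", "narrative", "annotation"
    payload: str     # text, or a URL to media

@dataclass
class Channel:
    name: str
    items: list = field(default_factory=list)

class ModulatedMap:
    def __init__(self):
        self.channels = {}

    def add_channel(self, name):
        self.channels[name] = Channel(name)

    def place(self, channel, lat, lon, kind, payload):
        # Anyone can place an augmentation into a channel (open access).
        self.channels[channel].items.append(Augmentation(lat, lon, kind, payload))

    def visible_items(self, enabled):
        """Return only the augmentations in channels the user switched on."""
        return [item for name in enabled
                for item in self.channels[name].items]

m = ModulatedMap()
m.add_channel("history")
m.add_channel("dissent")
m.place("history", 40.71, -74.00, "history", "Lenape settlement site")
m.place("dissent", 40.71, -74.00, "annotation", "1967 protest route")
# A user who enables only the history channel sees one item:
print(len(m.visible_items({"history"})))   # 1
```

The point of the sketch is the user-side control Hight emphasizes: the map data is shared and open, but each viewer composes their own set of active channels.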
<p><strong><strong>locative media got a buzz for &#8220;reading&#8221; places&#8230;when I helped create locative narrative that was what blew me away back in 2001&#8230;that we could give places a voice by placing data from research and icons on a map&#8230;&#8230;this meant lost history or augmentation was possible as kind of voices of a place and its layers&#8230;&#8230;.I called it &#8220;narrative archaeology.&#8221; We now have tools that can push these ideas and concepts farther..much farther&#8230;and with a range beyond what was before, and then the map was just a tool&#8230;.but now we are returning to the map itself&#8230;..and this as place as much as marker..this is where ar takes the ball to use a bad metaphor</strong></strong></p>
<p><strong><strong>also that project could only work if you came to our spot of a 4 block augmentation and with us there to lend you our gear&#8230;we are far beyond that now but it had its place</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>How do you see &#8220;in context&#8221; AR and something we might call &#8220;context aware&#8221; cloud computing models interacting?</p>
<p><strong><strong>Jeremy Hight: sure&#8230;and I must add that I have issues with cloud computing, as much as it is a good idea&#8230;</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>because of loss of autonomy?</p>
<p><strong><strong>Jeremy Hight: tivo is simply a hard drive&#8230;but it keyword reads and gives suggestions..that is the cro-magnon link to what can be</strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>The nice thing about Wave is that, because of the federation model, the cloud model and local store-your-own-data models should work together.</p>
<p><strong><strong><span>Jeremy Hight: yes..that is better&#8230;..loss of autonomy also opens up the arbitrary, which is the flaw of search engines as we know them&#8230;even Bing fails to me in that sense</span></strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>how do you mean, could you explain?</p>
<p><strong><strong><span>Jeremy Hight: spiders cull from words, but cull like trawlers at sea&#8230;. I tested Bing with very specific requests.. it spat out the same mass of mostly off topic results&#8230;.</span><br />
<span>I wonder if there is a way to cull from key words and topics from a user&#8230;not an Orwellian back end of course&#8230;but from their preferences, their searches etc..</span></strong></strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>did you see the discussion on search in the AR Framework doc? AR search will be a massively important thing that will take a lot of intelligence and all sorts of algorithm development won&#8217;t it?</p>
<p><strong><strong>Jeremy Hight: It also has one area of key functionality that moves into more intuitive software. Upon continued usage, the mapping software will &#8220;learn&#8221; and search based on key words used and spheres of interest the user is mapping or observing as mapped, and will integrate deeper data and types of animations, etc. into the map, or will have them waiting to be integrated upon user approval as desired. Over time the level of sophistication of additions and of search intuition will increase dramatically. The search can also, if the user wishes, run in the back end while working in the mapping program, or in off time as selected while doing other tasks. It also can never be used if one is not interested. One of the key elements of this mapping is that it is not composed of a closed set and does not need user hacks to augment, but instead is to evolve and deepen by user control and desire, as designed. Pre-existing data, visualizations and augmentations can be integrated with relative ease.</strong></strong></p>
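<p>The &#8220;learning&#8221; behaviour described here &#8211; the tool observing key words from what the user maps and searches, then ranking candidate additions by those interests &#8211; could be approximated with something as simple as keyword-frequency scoring. A hypothetical sketch; the scoring scheme is an assumption for illustration, not Hight&#8217;s actual design:</p>

```python
# Sketch: a user-interest model built from observed key words,
# used to rank candidate layers/visualizations for the map.
from collections import Counter

class InterestModel:
    def __init__(self):
        self.counts = Counter()

    def observe(self, text):
        """Record key words from a search or a mapped item's description."""
        # Crude tokenization; skip very short words.
        self.counts.update(w.lower() for w in text.split() if len(w) > 3)

    def rank(self, candidates):
        """Order candidate additions by overlap with observed interests."""
        def score(title):
            return sum(self.counts[w.lower()] for w in title.split())
        return sorted(candidates, key=score, reverse=True)

model = InterestModel()
model.observe("railroad history downtown")
model.observe("railroad strike history")
ranked = model.rank(["railroad history layer", "weather animation"])
print(ranked[0])   # railroad history layer
```

This keeps the user in control in the sense the interview stresses: the model only ranks what could be integrated; nothing is added without approval.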
<p><strong><strong>Tish Shute: </strong></strong>One of the things that Joe Lamantia points out about social augmented experiences is that they will operate across a number of different scales &#8211; conversation &gt; product design &amp; build team &gt; neighborhood / town fixing potholes &gt; global community for causes. How do designs for channels and layers change across these different social scales?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> to quote myself&#8230;&#8220;The &#8216;frontier&#8217; is often defined as the space just ahead of the known edge and limit, and where it may be pushed out deeper into the previously unknown. The frontier in the world of ideas is not the warm comfort of what has been long assimilated; and the frontier in the landscape is not of maps, but of places beyond and before them.</strong></strong></p>
<p><strong><strong>The border along what has been claimed is not only that of maps &#8211; it is of concepts, functions, inventions and related emergent industries. Ideas and innovations are like the cloud shape that briefly forms around a jet breaking the sound barrier, tangible yet not fully mapped into measure. It is when things are nailed down into specific entities, calibrated and assessed, that the dangers may inflict themselves &#8211; greed, competition, imitation, anger, jealousy, a provincial sense of ownership either possessed or demanded&#8221; (from my essay in the Sarai Reader). Otherwise, channels and augmentation do not have to be socio-economically stratifying or defined by that. We built 34n for almost nothing, on older tools.</strong></strong></p>
<div id="yqjj" style="text-align: left;"><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_1g3svj8fq_b.jpg"><img class="alignnone size-medium wp-image-4599" title="dgznj3hp_1g3svj8fq_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_1g3svj8fq_b-300x225.jpg" alt="dgznj3hp_1g3svj8fq_b" width="300" height="225" /></a></strong></strong></div>
<p><strong><em><strong><span>image from 34north 118west (Spellman/Hight/Knowlton)</span></strong></em></strong></p>
<p><strong><strong>The ar that is not deep immersion can be more readily available and channels can be what end users need like the diversity of chat rooms or range of Facebook users among us.</strong></strong></p>
<p><strong><strong>I had two moments yesterday that totally fit what we talked about. I went to the west hollywood book fair and the traditional driving directions off of mapping were wrong and we got lost&#8230;our friend could only get a wireless signal to map on an itouch and we had to roam neighborhoods; then we called a friend who google mapped it and we found we were a block away&#8230;.so a fast geomapping overlay with an icon for the book fair, on some optional grid service or community, would have made it immediate. Then at the book fair I talked to a small press publisher who is trying to map works about los angeles by los angeles authors on a map..she was stunned when I told her it could be a kind of google map feature option</strong></strong></p>
<p><strong><strong>it also has great potential to publish and place writing and art in places..both for commentary and access. imagine reading joyce in chapters where it was written about and then another similar experience but with writers who published on a service into their city.</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong></strong> The challenge of shared augmented realities is not just a matter of shipping bits around, but also of how we will use channels and layars &#8211; to create and negotiate different, distributed perspectives, and to understand a shared common core and/or expressions of dissent (this came up in an email conversation with <a href="http://www.oreillynet.com/pub/au/166" target="_blank">Simon St Laurent</a>).</p>
<p><strong><strong><strong>Jeremy Hight:</strong> well my example earlier could have been communal in a way too..a tribe sort of augmentation channeling &#8230;.like subscribing to list servs back in the day but of augmentation communities/channels, and for folks to build and use in shared live form, coordinating too</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong> </strong>one good thing though about building an open AR Framework is that, as bandwidth/CPU/hardware gets better, shared high def immersive experiences could be supported by the same framework..</p>
<p><strong><strong>Jeremy Hight: excellent</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>were you thinking of the image recognition and tracking with this example?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> yeah&#8230;.like scanning across a multi channeled google map augmentation with diff icons and their connected data&#8230;and poss social networking and file sharing even in that mode&#8230;and rastering etc&#8230;.could be cool with google wave </strong><strong><span>&#8211; on the map..then zooming in a la powers of ten..(eames film).</span></strong></strong></p>
<p><strong><strong>-</strong><strong><span>I have pictured variations of this for a few years now in my head, like the example of my friends and I yesterday&#8230;we could have correlated a destination by icons in diff channels..one being lit events within a lit channel in an l.a. map&#8230;maybe things streaming on it too&#8230;remote info and video etc&#8230; that would be awesome</span></strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong></strong> So many of the ideas in your paper on modulated mapping (see <a href="http://piim.newschool.edu/journal/issues/2009/02/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" target="_blank">here</a>) are brilliant use cases for shared augmented realities. Perhaps you could talk more about your ideas on locative narrative, because this is something I think is at the core of the kinds of experiences that a distributed AR Framework would make possible?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> on the project &#8220;34 north 118 west&#8221; we mapped out a 4 block area for augmentation of sound files triggered by latitude and longitude on the gps grid and map and the map on the screen had pink rectangles that were the &#8220;hot spots&#8221; where the augmentation had been placed.</strong></strong></p>
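<p>The hot-spot mechanism described above &#8211; rectangular latitude/longitude regions that trigger a sound file when the tracked position enters them &#8211; amounts to a point-in-rectangle geofence test. A minimal sketch; the coordinates and file names below are invented placeholders, not the project&#8217;s actual data:</p>

```python
# Sketch of GPS-triggered audio "hot spots": each spot is a
# lat/long rectangle tied to a sound file; walking into the
# rectangle triggers playback.
from dataclasses import dataclass

@dataclass
class HotSpot:
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float
    sound_file: str

    def contains(self, lat, lon):
        return (self.lat_min <= lat <= self.lat_max
                and self.lon_min <= lon <= self.lon_max)

def triggered(spots, lat, lon):
    """Return the sound files for every hot spot the position falls in."""
    return [s.sound_file for s in spots if s.contains(lat, lon)]

spots = [
    HotSpot(34.035, 34.036, -118.266, -118.265, "freight_yard.wav"),
    HotSpot(34.037, 34.038, -118.268, -118.267, "boarding_house.wav"),
]
print(triggered(spots, 34.0355, -118.2655))   # ['freight_yard.wav']
```

In a networked version of the same idea, the list of hot spots would be fetched from a shared map service rather than hard-coded on one device, which is the limitation the conversation goes on to discuss.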
<div id="nwc6" style="text-align: left;"><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_0gg994bf9_b.jpg"><img class="alignnone size-medium wp-image-4600" title="dgznj3hp_0gg994bf9_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_0gg994bf9_b-300x225.jpg" alt="dgznj3hp_0gg994bf9_b" width="300" height="225" /></a></strong></strong></div>
<p><strong><em><strong><span>image of interactive map with map based augmentation connected to audio augmentation on site for 34north 118west (Spellman/Hight/Knowlton)</span></strong></em></strong></p>
<p><strong><strong>We researched the history of the area and placed moments in time of what had been there at specific locations&#8230;.I called this <a href="http://www.xcp.bfn.org/hight.html" target="_blank">&#8220;narrative archaeology&#8221;</a> as it allowed places to be &#8220;read&#8221; by their augmentations&#8230;info that was of the place beyond the immediate experience (diff types of info) that otherwise would be lost or only found in books or web sites elsewhere. there now are locative narratives around the world but they need to be linked. from humble origins &#8220;narrative archaeology&#8221; went on to recently be named one of the 4 primary texts in locative media, which is pretty amazing to me&#8230;but it is growing</strong></strong></p>
<p><strong><strong>- the limitations then were what I called the &#8220;bowling alley conundrum&#8221; &#8211; the specific data had to reset like pins&#8230;..and was isolated&#8230;.this led me to think about ar back then and up to now. How these could lead to much more from that point: data that would be more layered, variable, fluid..yet still augmented place and sense of place, and social networking within data and software</strong></strong></p>
<p><strong><strong><a href="http://34n118w.net/34N/" target="_blank">lifeclipper</a> to me is a bridge</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>But Life Clipper is isolated from the internet currently, isn&#8217;t it?</p>
<p><strong><strong><span>Jeremy Hight: yes&#8230;ours was too.. that is what google wave makes possible.. our project only ran on our gear..in 4 blocks&#8230;with additional auxiliary info online, and not malleable..but hey, 2001 and all..</span></strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>so the sites for 34 north 118 west are still active though?</p>
<p><strong>Jeremy Hight: oh yeah!</strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>nice I really like sound augmentation &#8211; have you seen <a href="http://www.soundwalk.com/blog/tag/augmented-reality/" target="_blank">Soundwalk</a>?</p>
<p><strong><strong><span>Jeremy Hight: yes, very cool.. </span></strong><strong>we chose sound only as it fought the power of the image..it instead caused a person to be, in a sense, in two places and times at once</strong></strong></p>
<p><strong><strong>Tish Shute:</strong></strong> and in 2001 that was definitely a visionary project!</p>
<p>You must be very excited that finally the pieces are coming together to make this stuff scale!</p>
<p><strong><strong><strong>Jeremy Hight:</strong> I can&#8217;t even tell you!! it is funny..i have known that this would come..just waited and waited&#8230;</strong></strong></p>
<p><strong><strong>..knew it needed the right people and tools..</strong></strong></p>
<p><strong><strong><span>..so the bowling alley conundrum led me to develop my project shortlisted for the iss (international space station), as I thought a lot about how points and works are not to be isolated&#8230;but connected, and should be flowing in diff parts of a map&#8230;.to open up perspective and connected augmentations, but also to think about the map again&#8230;not as a base only. then I moved into my work with new ways to visualize time and it all really began to gel. The ideas were first published as an essay</span></strong><span> </span><a id="qw.2" title="http://www.fylkingen.se/hz/n8/hight.html" href="http://www.fylkingen.se/hz/n8/hight.html"><span>(http://www.fylkingen.se/hz/n8/hight.html)</span></a><span> </span><strong><span>and later on my project blog</span></strong><span> (</span><a id="bp.b" title="http://floatingpointsspace.blogspot.com/" href="http://floatingpointsspace.blogspot.com/"><span>http://floatingpointsspace.blogspot.com/</span></a><span>)</span></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>One thing I noticed when I was reading your paper is how you have been exploring non-euclidian geometries.Â  Could you explain how this is part of your idea of modulated mapping?</p>
<p><strong><strong><span>Jeremy Hight: Yes, this first came to me when my wife was reading to me from a book on the Poincare Conjecture and I was hit with a new way to measure events in time; after months of sketches, schematics and research I came to see how it could also be connected to a geo-spatial web of projects and augmentations. It was published in the inaugural issue of the Parsons School of Design&#8217;s Journal of Information Mapping, which was an exciting fit.</span></strong><span><strong> I call it &#8220;Immersive Event Time&#8221; </strong>(</span><a id="o3rt" title="http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf" href="http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf"><span>http://piim.newschool.edu/journal/issues/2009/01/pdfs/ParsonsJournalForInformationMapping_Hight-Jeremy.pdf</span></a><span>)</span></strong></p>
<p><span><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_4cxz57xgv_b.jpg"><img class="alignnone size-medium wp-image-4634" title="dgznj3hp_4cxz57xgv_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_4cxz57xgv_b-195x300.jpg" alt="dgznj3hp_4cxz57xgv_b" width="195" height="300" /></a></strong></span></p>
<p><span><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_5g68k9ggh_b.jpg"><img class="alignnone size-medium wp-image-4635" title="dgznj3hp_5g68k9ggh_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/10/dgznj3hp_5g68k9ggh_b-300x225.jpg" alt="dgznj3hp_5g68k9ggh_b" width="300" height="225" /></a><br />
</strong></span></p>
<p><strong><strong>so for the last 3 years I have been working on how it could all work as channels of augmentation, with building and navigation as open and communal in a sense, as well as ai capability; that was the time work especially. how time as experienced within an event is not a time &#8220;line&#8221; but points on and within a form&#8230;.and how this model is better for visualizing events in time and documenting them. it actually sprang from reading a book on the poincare conjecture, which popped a bunch of other stuff together, so one could visualize an event in time as like being in the belly of a whale..with time as the ribs..and our measure of time as the skin&#8230;and moving within it&#8230;.hoping this will be used as an educational tool</strong></strong></p>
<p><strong><strong>and this also can be tied to ar and map again&#8230;how documentation of important events can be kept within icons on a google map..then download varying visualizations based on bandwidth and desired format</strong></strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>One thing I have been thinking about is the new forms of social interaction/agency that these kinds of augmentations of space/place/time will create. It seems there are two poles &#8211; one is the area Natalie Jeremijenko explores, of shifting social relations from institutions/statistics to real time/location based interactions and new forms of social agency. The other pole completely is more like cloud based AI and perhaps crowd sourced machine learning.</p>
<p>Your ideas explore the possibilities of both these poles. And certainly one of the big deals of distributed AR integrated with the web would be the possibilities it opened up, both for new forms of networked social relationships and for new ways to draw on network effects.</p>
<p><strong><strong><strong>Jeremy Hight:</strong> and cross pollinations within &#8230;that is what my mind goes to</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>The other night I met Assaf Biderman, MIT, from the <a href="http://senseable.mit.edu/trashtrack/" target="_blank">Trash Track</a> team. Trash Track doesn&#8217;t utilize AR, but I could see that there are possibilities there.<br />
What do you think?</p>
<p><strong><strong><span>Jeremy Hight: yes, absolutely, </span></strong><strong>there can be sort of skins on locations that user end selection can yield&#8230;like channels of place&#8230;.and they can range from a pragmatic core to art and play and places between&#8230;.how this recalibrates the semiotics of the map&#8230;more than just augmentation seen as a kind of piggy back on the map..the map becomes interface and defanged platform, if you will. interestingly, my more poetic/philosophic writing led me here too</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong></strong></strong> I know they are at very different poles of the system, but I do wonder how AR can bring some of the level of social agency/interaction that <a href="http://www.environmentalhealthclinic.net/people/natalie-jeremijenko/" target="_blank">Natalie Jeremijenko</a> works on into a productive interaction with the kind of innovations in machine learning that Dolores Labs-style machine learning and others are pioneering?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> Natalie&#8217;s genius to me is in practical, functional tech that also opens deeper questions and even new openings of what is needed..amazing layers in her work that way.. succinct yet deep..very deep</strong></strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>Yes &#8211; I am just writing a post about her work &#8211; I find it deeply moving the way she has delved into the possibilities of using technology to open us up to our world. One of the reasons I find distributed AR so interesting is that it will make it possible for all kinds of people to create and use augmentation in their lives and communities.</p>
<p>So, to return: how could a distributed AR framework contribute to a project like Trash Track?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> what about using it for community, dissent and awareness raising then? like Natalie&#8217;s work, but building a communal work of multiple points, like the old adage of the elephant and the blind men&#8230;sorry, metaphor &#8211; like one of my points in immersive sight was how one could take augmentation as multiple works, sort of turning the faces of a thing or place&#8230;and how this would make a larger work even in such a flow, so people moving in a space could also build..</strong></strong></p>
<p><strong><strong>what of AR traces left as people move, calibrated to user traffic and trash as estimated in an urban space&#8230;it goes back to Chris Burden in the 70&#8242;s making you know that as you turn the turnstile you are drilling into the foundation and may be the one that collapses the building?</strong></strong></p>
<p><strong><strong>so their movements leave trash. Natalie is all about raising awareness of cause and effect and data, space and ecology. love that. so maybe&#8230;<br />
a feedback loop, artifact and user-end responsibility can leave traces&#8230;trash&#8230;</strong></strong></p>
<p><strong><strong>.. cybernetics vs ecology and human waste</strong></strong></p>
<p><strong><strong><strong>Tish Shute: </strong></strong></strong>Could you elaborate?</p>
<p><strong><strong><strong>Jeremy Hight:</strong> brain fart&#8230;that the mass of trash people leave is a piece at a time&#8230;in a sense it is like the space shuttle mission when arguably the first true cybernaut occurred&#8230;one cord to air for the astronaut, one for the computer on their back to fix the broken bay arm&#8230;if there is a way to build on that in relation to the topic&#8230;how this can go further, that machines do not waste as much&#8230;as AR is a means to cybernetically raise awareness&#8230;hmmm&#8230; sensors etc&#8230;wearables too &#8211; could be eco awareness with data and machine and human</strong></strong></p>
<p><strong><strong>what about a cloud computing system with a slight AI, in the sense of intuitive word clouds and interest scans&#8230;so as one moves through, say, New York they can be offered new AI data and services as they move? could also be of eco interest? concerns about urban farming, eco waste, air pollution etc&#8230;perhaps with (Jeremijenko element here) sensors placed in locations, and these also giving data reads in public areas with no input but hard data itself&#8230;hmm..could be interesting</strong></strong></p>
<p><strong><strong>it can also give info on the carbon footprints (estimated, probably, unless the data is public record somehow) of chain businesses, and data on which are more eco-friendly, as well as an iconography color-coded and icon-coded to the best places to go to support greening and eco-friendly business? and the companies could promote themselves on this service to attract eco-aware customers who would see them as kindred spirits and be helping the larger effort?</strong></strong></p>
<p><strong><strong>kind of eco mapping&#8230;and AR on a mobile app</strong></strong></p>
<p><strong><strong>what about sensors that read air pollution levels, levels of solar radiation (to aid with skin protection in shifting light values in a city space, i.e. put on some skin cream now&#8230;), light sensors that detect density and over-density in public spaces&#8230;to use the old trope in art of reading crowds in a space, but instead these could indicate overcrowding, failing infrastructure in public spaces (a congestion that leads to greater pollution levels as well as flaws in city planning over time..), and perhaps a tie-in to wearables&#8230;worn sensors on smart clothes&#8230;this could form a node network of people in the crowds&#8230;and also send data while moving in a space&#8230;</strong></strong></p>
<p><strong><strong>here is a kooky thought&#8230;what of taking the computing power and data of people moving in a space, and not only getting eco data and making levels of data available to them, but making possibly a roving supercomputer&#8230;crunching the deeper data of people open to this&#8230;a hive crunching deeper analysis of the space, scan properties from sensors, and even a game-theory-esque algorithm of metadata if, say, 40 people out of 50 hit on a certain spike or reading&#8230;and even their input&#8230;I worked in game theory for paleontology in this manner for a time as a teen&#8230;a private project&#8230; the reading can lead to a sort of meta-read by what hits most consistently, as well as in their input&#8230;text of what they experienced, observed, postulated, even analyzed&#8230;this could be really interesting&#8230;even if just the last part from collected data and not from any complex branching of servers..</strong></strong></p>
<p><strong><strong>I thought at 19 or so that the flaw in paleontology was in how so many larger theories were shifting exhibitions and larger senses of things &#8211; like, were there pre-historic birds that were mistaken for amphibians and then back again&#8230;so why not make a computer program, feed all the published papers into it, and see what hits were counted in terms of an emerging meta-theory and landscape of key points being agreed upon&#8230;this data would be in a sense both algorithmic and a sort of unspoken dialogue&#8230;came from a lot of study of game theory one summer&#8230;</strong></strong></p>
<p><strong><strong>hope this makes some sense&#8230;I forgot to mention that I originally planned to be a research meteorologist, and my plan in middle school or so was to get a PhD and develop new software to have a global map and then run models of hypothetical storms across it in real-time animations of cloud forms, radar and wind analysis/fields, barometric pressure spaghetti charts etc&#8230;and to also do 3D cutaway models of storm architectures&#8230;so I have been into visualizations of complex data and mapping for a long time!</strong></strong></p>
<p><strong><strong><strong>Tish Shute:</strong> </strong></strong>Wow let me think about this one!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/feed/</wfw:commentRss>
		<slash:comments>18</slash:comments>
		</item>
		<item>
		<title>Location Becomes Oxygen at Where 2.0 &amp; WhereCamp</title>
		<link>http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/</link>
		<comments>http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/#comments</comments>
		<pubDate>Tue, 02 Jun 2009 21:43:49 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Aaron Straup Cope]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[bottom up urban informatics]]></category>
		<category><![CDATA[Brady Forrest]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[community sensing]]></category>
		<category><![CDATA[curating big data]]></category>
		<category><![CDATA[Dan Catt]]></category>
		<category><![CDATA[Eric Horvitz]]></category>
		<category><![CDATA[everyware]]></category>
		<category><![CDATA[FireEagle]]></category>
		<category><![CDATA[Flickr Corrections]]></category>
		<category><![CDATA[Flickr Nearby]]></category>
		<category><![CDATA[Food Genome]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[geo platform]]></category>
		<category><![CDATA[geo platforms]]></category>
		<category><![CDATA[geoblogging]]></category>
		<category><![CDATA[geoplanet]]></category>
		<category><![CDATA[geotagging]]></category>
		<category><![CDATA[geowanking]]></category>
		<category><![CDATA[GigaPan]]></category>
		<category><![CDATA[gigapanning]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[googlewave]]></category>
		<category><![CDATA[headmap manifesto]]></category>
		<category><![CDATA[J.G. Ballard]]></category>
		<category><![CDATA[Jo Walsh]]></category>
		<category><![CDATA[Joshua Schachter]]></category>
		<category><![CDATA[location awareness]]></category>
		<category><![CDATA[location versus place]]></category>
		<category><![CDATA[locative media]]></category>
		<category><![CDATA[machine intelligence and human intelligence]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[magic words and microsyntax]]></category>
		<category><![CDATA[Mapping Hacks]]></category>
		<category><![CDATA[Marc Powell]]></category>
		<category><![CDATA[Microsyntax]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[neogeography]]></category>
		<category><![CDATA[Odeo Yokai]]></category>
		<category><![CDATA[OpenGeo]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[paleogeography]]></category>
		<category><![CDATA[Papernet]]></category>
		<category><![CDATA[personal informatics]]></category>
		<category><![CDATA[Placemaker]]></category>
		<category><![CDATA[privacy and community sensing]]></category>
		<category><![CDATA[privacy and sensor networks]]></category>
		<category><![CDATA[psychogeography]]></category>
		<category><![CDATA[psychosynthography]]></category>
		<category><![CDATA[Raven Zachary]]></category>
		<category><![CDATA[real time web based visualization and mapping]]></category>
		<category><![CDATA[reality mining]]></category>
		<category><![CDATA[Rich Gibson]]></category>
		<category><![CDATA[Schuyler Erle]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[shape files]]></category>
		<category><![CDATA[shapefiles]]></category>
		<category><![CDATA[smart cities]]></category>
		<category><![CDATA[smart phones]]></category>
		<category><![CDATA[social geography]]></category>
		<category><![CDATA[social networks]]></category>
		<category><![CDATA[social reality mining]]></category>
		<category><![CDATA[Sophia Parafina]]></category>
		<category><![CDATA[Stamen Design]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[The Ubiquitous Media Studio]]></category>
		<category><![CDATA[the web in the world]]></category>
		<category><![CDATA[Tom Carden]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubicomp hackers]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[wearable sensory substitution devices for navigation]]></category>
		<category><![CDATA[Where2.0]]></category>
		<category><![CDATA[WhereCamp]]></category>
		<category><![CDATA[WOEID]]></category>
		<category><![CDATA[yahoo! geotechnologies group]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3567</guid>
		<description><![CDATA[curatingbigdatapost]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/anselmcircletime.jpg"><img class="alignnone size-medium wp-image-3578" title="anselmcircletime" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/anselmcircletime-300x199.jpg" alt="anselmcircletime" width="300" height="199" /></a></p>
<p>The biggest news at <a href="http://en.oreilly.com/where2009/" target="_blank">Where 2.0 2009</a> came from the <a href="http://developer.yahoo.com/geo/" target="_blank">Yahoo! Geo Technologies Group</a>. Tyler Bell announced Yahoo! <a href="http://developer.yahoo.com/geo/placemaker">Placemaker</a> and the opening up of the <a href="http://developer.yahoo.com/geo/geoplanet/" target="_blank">GeoPlanet</a> data set, &#8220;all of the WOEIDs [<a href="http://developer.yahoo.com/geo/">Where On Earth (WOE)</a> IDs] available as a free download under Creative Commons in June&#8221; (see <a href="http://radar.oreilly.com/brady/" target="_blank">Brady Forrest&#8217;s post</a> for more details).</p>
<p><a id="qa9y" title="WhereCamp 2009" href="http://wherecamp.pbworks.com/WhereCamp2009" target="_blank">WhereCamp 2009</a> was held immediately after <a href="http://en.oreilly.com/where2009/" target="_blank">Where 2.0</a> and was a great place to chew on the events and ideas of Where 2.0. In the picture above, Anselm Hook addresses the WhereCamp morning circle in the courtyard outside the <a id="i:ij" title="Socialtext" href="http://www.socialtext.com/" target="_blank">Socialtext</a> offices in Palo Alto. Anselm pointed out to me:</p>
<p><strong>&#8220;there are interesting implications of placemaker in combination with other yahoo assets &#8211; in particular <a href="http://developer.yahoo.com/yql/" target="_blank">YQL</a> &#8211; placemaker by itself is neat &#8211; but placemaker combined with everything else is a natural missing piece that is a big enabler. Yahoo has been impressive.&#8221;</strong></p>
<p>With all the geo platform power available to us now (also see <a href="http://radar.oreilly.com/2009/05/new-geo-for-devs-from-google-i.html" target="_blank">New Geo for Devs from Google I/O</a>), there isn&#8217;t a shadow of a doubt in my mind that Brady was right when he said, just before the Where 2009 conference: <strong>&#8220;Location is no longer a differentiator, it&#8217;s going to become oxygen&#8221;</strong> <a href="http://www.webmonkey.com/blog/New_Wave_of_Apps_Build__Where__Into_the_Web" target="_blank">(quote from WebMonkey)</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/spatialjunkies1.jpg"><img class="alignnone size-medium wp-image-3612" title="spatialjunkies1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/spatialjunkies1-300x199.jpg" alt="spatialjunkies1" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/yahoogeo41.jpg"><img class="alignnone size-medium wp-image-3614" title="yahoogeo41" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/yahoogeo41-300x199.jpg" alt="yahoogeo41" width="300" height="199" /></a></p>
<p><em>The Yahoo! GeoPlanet team at WhereCamp &#8211; Tyler Bell (talking to Brady Forrest in the picture on the left) is sporting his spatial junkies T-shirt. Photo on right: Aaron Cope, Tyler Bell, Martin Barnes, Gary Gale.</em></p>
<p>WhereCamp was alive with key figures from the social geography movement who knew the power of these new tools (see <a href="http://www.flickr.com/photos/ugotrade/sets/72157618662411286/" target="_blank">some of my photos of WhereCamp on Flickr here</a>).</p>
<p>The importance of the Yahoo! announcement really became clear to me at <a href="http://www.socialtext.net/wherecamp/index.cgi" target="_blank">WhereCamp</a>, where I attended sessions all day Saturday, including the Curating Big Data Session led by <a href="http://stamen.com/studio/tom" target="_blank">Tom Carden, Stamen Design</a> and <a href="http://www.aaronstraupcope.com/" target="_blank">Aaron Straup Cope</a>, Flickr (see Aaron&#8217;s slides from his <a href="http://en.oreilly.com/where2009/public/schedule/detail/7212" target="_blank">Where 2.0 presentation on &#8220;The Shape of Alpha&#8221; here</a> and video <a href="http://where.blip.tv/file/2167471/" target="_blank">here</a>).</p>
<p>Anselm Hook, a prime mover for WhereCamp, is a leading philosopher of place making and a veteran software developer who led <a href="http://platial.com/" target="_blank">Platial</a> engineering and is now at web consultancy <a rel="nofollow" href="http://makerlab.com/">http://makerlab.com</a><span class="bio">. If you missed Anselm at WhereCamp, he will be presenting on <a href="http://opensourcebridge.org/sessions/246" target="_blank">Ubiquitous Angels</a> at <a href="http://opensourcebridge.org/users/288" target="_blank">Open Source Bridge</a>, Portland, Oregon, June 17&#8211;19, 2009.</span></p>
<p>Anselm describes where he thinks the challenges are:</p>
<p><strong>&#8220;We should be mapping information that in some ways has been historically unmappable because it is 1) not valued or is 2) actively seen as threatening or is 3) simply too hard to map using traditional tools.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/wherecampschedul.jpg"><img class="alignnone size-medium wp-image-3680" title="wherecampschedul" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/wherecampschedul-300x199.jpg" alt="wherecampschedul" width="300" height="199" /></a></p>
<p><em>The WhereCamp Schedule</em></p>
<p><strong><span style="font-size: medium;">The Shape of Alpha</span></strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-57.png"><img class="alignnone size-medium wp-image-3647" title="picture-57" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-57-300x220.png" alt="picture-57" width="300" height="220" /></a></p>
<p><em>Screen capture from Aaron&#8217;s <a href="http://en.oreilly.com/where2009/public/schedule/detail/7212" target="_blank">Where 2.0 presentation on &#8220;The Shape of Alpha.&#8221;</a> Original photo from Flickr user <a href="http://www.flickr.com/photos/nickisconfused/3291840240/" target="_blank">&#8220;NickIsConfused&#8221;</a>.</em></p>
<p>Aaron Straup Cope&#8217;s work on <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a> puts key questions about curating big data center stage.</p>
<p>First, there is the exploration of what it means to curate and collaborate over meaning drawn from &#8220;the abundance of data produced in the precise but distant language of machines&#8221; (also see <a href="http://www.archimuse.com/mw2009/abstracts/prg_335001944.html" target="_blank">The Interpretation of Bias (and the bias of interpretation)</a>). The Shape of Alpha uses a process of <a href="http://code.flickr.com/blog/2008/09/04/whos-on-first/">reverse-geocoding</a> to translate machine-generated geographic data into place names that people can understand and relate to.</p>
<p>The <a href="http://en.wikipedia.org/wiki/Shapefile" target="_blank">shapefiles</a> are built with nothing but geotagged photos and some code called clustr (written by the brilliant <a href="http://iconocla.st/cv.html" target="_blank">Schuyler Erle</a> &#8211; co-author of <a href="http://search.barnesandnoble.com/Mapping-Hacks/Schuyler-Erie/e/9780596007034" target="_blank">Mapping Hacks</a>). Anyone can make these <a href="http://en.wikipedia.org/wiki/Shapefile" target="_blank">shapefiles</a>, and you can get them out of the <a href="http://www.flickr.com/services/api">Flickr API</a>. Aaron has been keying off WOEIDs (<a href="http://developer.yahoo.com/geo/">Where On Earth (WOE)</a> IDs), but, as Aaron noted, you can key off anything you like &#8211; tags are an obvious choice.</p>
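To make the idea concrete, here is a rough sketch of deriving a place's outline from nothing but geotagged photos keyed to a WOEID. This is my own illustration with made-up photo data, not Flickr's code: a simple convex hull stands in for the alpha shapes that clustr actually computes.

```python
# Toy sketch of the "Shape of Alpha" idea: derive a place boundary
# from geotagged photos alone. Hypothetical data; a convex hull is a
# stand-in for the alpha shapes the real clustr tool would compute.

def convex_hull(points):
    """Andrew's monotone chain algorithm; returns hull vertices CCW."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def shape_for_woeid(photos, woeid):
    """Collect (lon, lat) of photos keyed to one WOEID; return its hull."""
    pts = [(p["lon"], p["lat"]) for p in photos if p["woeid"] == woeid]
    return convex_hull(pts)

# Hypothetical geotagged photos, each "keyed" to a WOEID:
photos = [
    {"woeid": 1, "lon": 0.0, "lat": 0.0},
    {"woeid": 1, "lon": 2.0, "lat": 0.0},
    {"woeid": 1, "lon": 2.0, "lat": 2.0},
    {"woeid": 1, "lon": 0.0, "lat": 2.0},
    {"woeid": 1, "lon": 1.0, "lat": 1.0},  # interior point, not on hull
    {"woeid": 2, "lon": 9.0, "lat": 9.0},
]
hull = shape_for_woeid(photos, 1)
```

Keying off tags instead of WOEIDs, as Aaron suggests, would just mean filtering on a different field.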
<p>Wow! You can reinvent mapping with this stuff.</p>
<p>Very importantly, <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a> tells us something about how we relate to place versus location. The emotions, disputes and behavior related to place also emerge through crowd-sourced corrections. For more, <a href="http://www.aaronland.info/weblog/2008/07/27/invisible/#corrections" target="_blank">see this very evocative post by Aaron about corrections and treating airports as cities</a>. There is a glorious thread/riff and ode to the genius of J.G. Ballard pursued by Aaron and Dan Catt in their posts (also see Dan Catt&#8217;s <a title="J.G. Ballard, Flickr, naked singularities and 3-letter airport codes" rel="bookmark" href="http://geobloggers.com/2009/05/11/j-g-ballard-flickr-naked-singularities-and-3-letter-airports-code/">J.G. Ballard, Flickr, naked singularities and 3-letter airport codes</a>; Aaron also pointed me to <a href="http://www.ballardian.com/the-real-concrete-island" target="_blank">this brilliant &#8220;geo-detective work&#8221;</a> on <a href="http://www.ballardian.com/biblio-concrete-island">Concrete Island</a> by Mike Bonsall).</p>
<p>Dan Catt created <a href="http://geobloggers.com/" target="_blank">geobloggers</a> and &#8220;seeded the geotagging community around the Web.&#8221; I met Reverend Dan Catt (Twitter: @revdancatt) at Where 2.0 when he was kind enough to share part of his seat so I could join a very interesting discussion with Aaron on The Shape of Alpha.</p>
<p>As <a href="http://www.aaronland.info/weblog/2008/07/27/invisible/#corrections" target="_blank">Aaron points out</a>, they decided to treat &#8220;the airport itself <em>as</em> the town&#8230;&#8221; not (only) because they admired the work of <a href="http://www.jgballard.com/airports.htm">J.G. Ballard</a>, &#8220;but because it is the right thing to do.&#8221;</p>
<p>Dan Catt has excellent <a href="http://blog.flickr.net/en/2008/08/08/introducing-a-new-way-to-geotag/">blog posts</a> &#8220;describing the nuts and bolts of how &#8216;corrections&#8217; works.&#8221; Aaron points out, &#8220;in <a href="http://code.flickr.com/blog/2008/08/08/location-keeping-it-real-on-the-streets-yo/">the nerdier of the two</a> Dan sums it up nicely by saying&#8221;:</p>
<blockquote class="hier"><p><strong>&#8220;On a slightly more philosophical level, itâ€™s a never                         ending process. Weâ€™ll never reach a point where we can                         say â€œRight thatâ€™s in, all borders between places have                         been decided.â€ But what we should end up with are                         boundaries as defined by Flickr users.</strong></p>
<p><strong>&#8230;</strong></p>
<p><strong>For us, it&#8217;s a first small step into an experiment, and actually a pretty big experiment, as we&#8217;re potentially accepting &#8220;corrections&#8221; from our millions and millions of users. We&#8217;re not quite sure how it&#8217;ll all turn out, but we&#8217;re armed with Maths, Algorithms and kitten photos.&#8221;</strong></p></blockquote>
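What accepting &#8220;corrections&#8221; from millions of users might look like, in miniature: each user asserts a name for the same boundary, and a name is accepted only once it has a clear majority. This is my own toy illustration, not Flickr's actual algorithm, and the place names are hypothetical.

```python
# Toy sketch of crowd-sourced "corrections": users vote on a name for
# a place, and a name is accepted once it passes a consensus threshold.
# Illustration only; not Flickr's real corrections algorithm.

from collections import Counter

def accepted_name(corrections, threshold=0.5):
    """Return the most common name if it exceeds `threshold` of all
    votes, else None (no consensus yet)."""
    if not corrections:
        return None
    counts = Counter(corrections)
    name, votes = counts.most_common(1)[0]
    return name if votes / len(corrections) > threshold else None

# Hypothetical user corrections for one disputed boundary:
votes = ["Mission", "Mission", "Mission District", "Mission", "SoMa"]
result = accepted_name(votes)  # "Mission" carries 3 of 5 votes
```

As Dan's point about never-finished borders suggests, the output is always provisional: new votes can tip a place back below the threshold.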
<h3>Psychosynthography &#8211; &#8220;Wearing Geography as a Perfume&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-59.png"><img class="alignnone size-medium wp-image-3649" title="picture-59" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-59-300x224.png" alt="picture-59" width="300" height="224" /></a><em> </em></p>
<p><em>Psychosynthography screen capture from Aaron Straup Cope&#8217;s </em><a href="http://en.oreilly.com/where2009/public/schedule/detail/7212" target="_blank">Where 2.0 presentation</a><em>. Original photo from Flickr user <a href="http://www.flickr.com/photos/nitelynx/44189973/" target="_blank">&#8220;NiteLynx.&#8221;</a></em></p>
<p>As I mentioned before, many of the ideas raised at Where 2.0 were unpacked and worked through at WhereCamp. For example, Aaron introduced the word <strong>psychosynthography</strong> in the last 24 seconds of his talk at Where 2.0.</p>
<p>So I spent as much time as I could listening to Aaron at WhereCamp, and asking him about psychosynthography and more (post of this interview upcoming).</p>
<p>Aaron urged the Where 2.0 audience to pay attention to the Psychogeography movement seeded by <a title="Guy Debord" href="http://en.wikipedia.org/wiki/Guy_Debord">Guy Debord</a>, and <strong>&#8220;to wear geography like a perfume.&#8221;</strong></p>
<p>Joseph Hart writes in <a href="http://www.utne.com/2004-07-01/a-new-way-of-walking.aspx" target="_blank">&#8220;A New Way of Walking&#8221;</a> that psychogeography is:</p>
<p><strong>&#8220;a whole toy box full of playful, inventive strategies for exploring cities&#8230;just about anything that takes <span class="mw-redirect">pedestrians</span> off their predictable paths and jolts them into a new awareness of the urban landscape.&#8221;</strong></p>
<h3>Curating Big Data</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/tomcarden.jpg"><img class="alignnone size-medium wp-image-3625" title="tomcarden" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/tomcarden-300x199.jpg" alt="tomcarden" width="300" height="199" /></a></p>
<p><em><a href="http://stamen.com/studio/tom" target="_blank">Tom Carden, Stamen</a> (pictured above), paired with Aaron for the Curating Big Data session. Tom noted:</em></p>
<p><strong>&#8220;The Curating Big Data session for me was an attempt to learn from other attendees (as opposed to teach/lead, as with the Stamen session, &#8220;Real Time Web-Based Visualization and Mapping&#8221;). Also, it was an excuse to get Aaron to recap parts of the Flickr Shapefile story for WhereCamp folks, and to get *input* on how to do more things like it. I was a bit disappointed that nobody had really good examples for us, but I was happy with Brad Stenger&#8217;s suggestion to look into the upcoming census data as a relevant area.&#8221;</strong></p>
<p>Aaron&#8217;s work on The Shape of Alpha and The Corrections project shows, as Tom noted:</p>
<p><strong>&#8220;what you can do once you have 150 million geotagged photos, and millions of users who are willing to say: I took this thing here, and my name for that place is&#8230;&#8221;</strong></p>
<p>And part of the significance of opening up the GeoPlanet data set is that now:</p>
<p><strong>&#8220;we can try and start talking about the same places, as far as, [for example], these shapefiles go. So if you are interested in what comes out of the Flickr shapefiles project, but you also have your own opinion about what shape those places are, the IDs have to be open &#8211; you have to be sure that you are talking about the same thing in the first place.&#8221;</strong></p>
<p>And, as Tom pointed out, collaborating over geo data informs us about curating any big dataset:</p>
<p><strong>&#8220;it should lead to an overarching discussion about any kind of dataset, geo or otherwise, and ways in which we can talk about it, and think about patterns for improving that data, for collaborating, even on things like cleanup.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/realtimewebbased-visualizationandmapping.jpg"><img class="alignnone size-medium wp-image-3681" title="realtimewebbased-visualizationandmapping" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/realtimewebbased-visualizationandmapping-300x199.jpg" alt="realtimewebbased-visualizationandmapping" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/curatingbigdatapost.jpg"><img class="alignnone size-medium wp-image-3739" title="curatingbigdatapost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/curatingbigdatapost-300x199.jpg" alt="curatingbigdatapost" width="300" height="199" /></a></p>
<p><em>Warp-speed geo-genius Andrew Turner, <a href="http://www.fortiusone.com/" target="_blank">FortiusOne</a>, took these excellent notes for &#8220;Real Time Web-Based Visualization and Mapping&#8221; (on the left) and &#8220;Curating Big Data&#8221; (on the right).</em></p>
<p>On my way to Where 2.0 I took the train from SFO to San Jose, which was a delight but a little slower than I imagined. So, unfortunately, I arrived on Tuesday just after <a href="http://en.oreilly.com/et2009/public/schedule/speaker/3486">Michal Migurski</a> and <a href="http://en.oreilly.com/et2009/public/schedule/speaker/40013">Shawn Allen</a> (both Stamen Design) presented <a class="attach" href="http://assets.en.oreilly.com/1/event/20/Maps%20from%20Scratch_%20Online%20Maps%20from%20the%20Ground%20Up%20Presentation.pdf">Maps from Scratch: Online Maps from the Ground Up</a>. This was on my MUST-attend list, and it was a wonderful opportunity to get into &#8220;Real Time Web-Based Visualization and Mapping.&#8221; I did get a chance to talk to Michal and Shawn a bit later in the conference, but I will try to catch up with them soon for an in-depth story. Below is Shawn Allen&#8217;s map of overlapping data sets from <a href="http://www.flickr.com/photos/shazbot/3282821808/" target="_blank">&#8220;Trees, cabs and crime in San Francisco&#8221;</a>:</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/treescrimecabs.png"><img class="alignnone size-medium wp-image-3743" title="treescrimecabs" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/treescrimecabs-300x273.png" alt="treescrimecabs" width="300" height="273" /></a></p>
<p>Another follow-up I am really looking forward to making is with <a href="http://lizbarry.com/s+em/contact.htm" target="_blank">Liz Barry</a> and her work on <a href="http://lizbarry.com/s+em/about.htm" target="_blank">S+EM</a>, &#8220;an environmental mapping and social networking design project that links New York City trees with the people who care for them&#8221; (also see <a href="http://fuf.net/" target="_blank">Creating a Greener San Francisco Tree by Tree</a>). I also got a chance to talk to another fellow New Yorker (we have to travel to the West Coast to find time to chat!), <a href="http://radar.oreilly.com/jgeraci/" target="_blank">John Geraci</a> of <a href="http://diycity.org/" target="_blank">DIY City</a>, who presented <a class="attach" href="http://assets.en.oreilly.com/1/event/25/DIY%20City_%20An%20Operating%20System%20for%20Cities%20Presentation.zip">DIY City: An Operating System for Cities</a>.</p>
<h3>Machine Intelligence and Human Intelligence</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronandandrew.jpg"><img class="alignnone size-medium wp-image-3622" title="aaronandandrew" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronandandrew-300x199.jpg" alt="aaronandandrew" width="300" height="199" /></a></p>
<p><em>Aaron Cope of Flickr, on the left, talking to Andrew Turner, on the right, CTO of FortiusOne (see Andrew&#8217;s presentation at Where 2.0, <a href="http://blip.tv/file/2167650" target="_blank">&#8220;Your Own Private Geo Cloud&#8221;</a>)</em></p>
<p>Many of the most interesting conversations happened in between sessions at WhereCamp and Where 2.0.</p>
<p>I caught this one, in which Aaron Cope and Andrew Turner were discussing some of the ideas Aaron raised in his presentation, <a href="http://www.slideshare.net/straup/capacity-planning-for-meaning-presentation-637370?type=powerpoint" target="_blank">&#8220;Capacity planning for meaning in the age of personal informatics&#8221;</a> (see Aaron&#8217;s blog post, <a href="http://www.aaronland.info/weblog/2008/10/08/tree/" target="_blank">Tree planting and tree hugging in the age of personal informatics</a>). The core question they were discussing was what happens when you wire the world at the scale people are talking about and it breaks&#8230; Aaron argues that we already have a whole class of people in systems operations who can tell us a lot about how to answer this question.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/rossmayfieldsocialtextpost.jpg"><img class="alignnone size-medium wp-image-3594" title="rossmayfieldsocialtextpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/rossmayfieldsocialtextpost-300x199.jpg" alt="rossmayfieldsocialtextpost" width="300" height="199" /></a></p>
<p><em><span class="bio">Ryan and Anselm shared the pulpit for the morning circle with <a href="http://ross.typepad.com/" target="_blank">Ross Mayfield</a> of <a href="http://www.socialtext.com/" target="_blank">Social Text</a>, who was the generous host of WhereCamp.</span></em></p>
<h3>Social Reality Mining</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/benjaminbratton1.jpg"><img class="alignnone size-medium wp-image-3651" title="benjaminbratton1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/benjaminbratton1-300x199.jpg" alt="benjaminbratton1" width="300" height="199" /></a></p>
<p><strong>&#8220;As it stands today, we have no idea what terms and limits of a cloud based citizenship of the Google Caliphate will entail and curtail. Some amalgam of post-secular cosmopolitanism, agonistic radical democracy, and post-rational actor microeconomics, largely driven by intersecting petabyte at-hand datasets and mutant strains of Abrahamic monotheism. But specifically, what is governance (let alone government) within this?&#8221; </strong><a href="http://bratton.info/" target="_blank">from Benjamin Bratton&#8217;s</a> talk at ETech 2009 (pictured above)<strong>, </strong><a href="http://www.bratton.info/emergency.html" target="_blank">Undesigning the Emergency: Against Prophylactic Urban Membranes</a>.</p>
<p>The other big takeaway from WhereWeek (Where 2.0 and WhereCamp) was not so much news as confirmation of something that has been pretty clear for a while now. (Check out <a href="http://radar.oreilly.com/2008/05/the-results-of-reality-mining.html" target="_blank">Brady&#8217;s posts on reality mining at Where 2.0 last year</a>.)</p>
<p>We are moving headlong into the era of reality mining, with all its myriad possibilities: &#8220;hedonistic optimization&#8221; (this term came from <a href="http://brainofstig.ai/" target="_blank">Stig Hackvan</a> when I asked him about some of the ideas central to the <a href="http://docs.google.com/tecfa.unige.ch/%7Enova/headmap-manifesto.PDF" target="_blank">HeadMap Manifesto</a> &#8211; more about HeadMap later in this post); new forms of marketing (social reality mining to predict whether someone is going to trade business cards in the next 120 seconds &#8211; <a href="http://en.oreilly.com/where2009/public/schedule/speaker/46016" target="_blank">Alex &#8220;Sandy&#8221; Pentland, MIT, Where 2.0</a>); stuff that matters to save us from mass extinction, like distributed sustainability &#8211; greening production, consumption, and our cities; open government; empowering indigenous communities (also see Rebecca Moore&#8217;s <a class="attach" href="http://assets.en.oreilly.com/1/event/25/Indigenous%20Mapping_%20Emerging%20Cultures%20on%20the%20Geoweb%20Presentation.ppt">Indigenous Mapping: Emerging Cultures on the Geoweb Presentation</a>); and, not to be forgotten, the troubling possibility of new forms of social control.</p>
<h3>Smart phones are powerful networked sensor devices in the palm of our hand</h3>
<p>As Sandy Pentland of MIT pointed out in his Where 2.0 keynote, <a href="http://en.oreilly.com/where2009/public/schedule/detail/7956" target="_blank">&#8220;Reality Mining for Companies, or, How Social Networks Network Best,&#8221;</a> mobile phones have created a ubiquitous instrumented reality that goes way deeper than location awareness. Smart phones are powerful networked sensor devices in the palm of our hand that know a lot more about us than location. With proximity, motion (accelerometers), voice, images, call logs, and email, what is enabled is not just knowing where people are but knowing more about them.</p>
<p>Many of the issues raised by <a href="http://speedbird.wordpress.com/" target="_blank">Adam Greenfield</a> in <a href="http://speedbird.wordpress.com/my-book-everyware-the-dawning-age-of-ubiquitous-computing/" target="_blank">Everyware</a> and in <a href="../../2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/" target="_blank">my interview with Adam</a> were on my mind during WhereWeek &#8211; questions that were also distilled and explored in Matt Jones&#8217;s presentation last year, <a href="http://www.slideshare.net/blackbeltjones/polite-pertinent-and-pretty-designing-for-the-newwave-of-personal-informatics-493301" target="_blank">Polite, Pertinent, and&#8230; Pretty: Designing for the New-wave of Personal Informatics</a>, and in <a href="http://www.slideshare.net/tmo/the-web-in-the-world-presentation" target="_blank">Timo Arnall&#8217;s presentation, The Web in the World</a>.</p>
<h3>Google Wave, Pachube Feeds, Sensor Networks and Microsyntax!</h3>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="560" height="340" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/pi4MhQgGNqI&amp;hl=en&amp;fs=1" /><param name="allowfullscreen" value="true" /><embed type="application/x-shockwave-flash" width="560" height="340" src="http://www.youtube.com/v/pi4MhQgGNqI&amp;hl=en&amp;fs=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p><em><a id="o_ok" title="Visualizing 24 hours of @pachube" href="http://is.gd/IYOj" target="_blank">Visualizing 24 hours of Pachube</a> logs &#8211; feeds from all around the world, built with Processing.</em></p>
<p>I found myself really wishing <a href="http://www.pachube.com/" target="_blank">Pachube</a> founder Usman Haque had been able to come to Where 2.0 this year &#8211; Usman was originally on the Where 2.0 schedule but had to drop out. My small contribution to WhereCamp was to discuss <a href="http://www.pachube.com/" target="_blank">Pachube</a>, <a href="http://www.haque.co.uk/naturalfuse.php" target="_blank">Natural Fuse</a> and <a href="http://www.shaspa.com/" target="_blank">OpenShaspa</a> in the, Urban Eco-Managment session (<a href="../../2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">see my interview with Pachube Founder, Usman Haque here</a>).</p>
<p>Just before Where 2.0, Pachube announced <a id="du7_" title="mapping mobile feeds in realtime" href="http://is.gd/BjJT" target="_blank">mapping mobile feeds in realtime</a>, with 3D graphing of datastream values by time &amp; location.</p>
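<p>Pachube feeds are, at heart, timestamped datastreams that can also carry location. To make that concrete, here is a minimal sketch of what one mobile datastream update might look like; the CSV layout and field order are my own assumptions for illustration, not Pachube&#8217;s documented API.</p>

```python
import time

def datastream_csv(samples):
    """Format (unix_time, lat, lon, value) samples as CSV rows.

    Hypothetical layout loosely inspired by Pachube-style feeds:
    one reading per row -- ISO timestamp, location, then the value.
    This is an illustrative sketch, not Pachube's documented format.
    """
    rows = []
    for t, lat, lon, value in samples:
        stamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(t))
        rows.append("%s,%.5f,%.5f,%s" % (stamp, lat, lon, value))
    return "\n".join(rows)

# One temperature reading from a (made-up) moving sensor:
print(datastream_csv([(0, 51.50642, -0.12721, 22.5)]))
# -> 1970-01-01T00:00:00Z,51.50642,-0.12721,22.5
```

A realtime map like the one Pachube demoed then only has to plot each row at its (lat, lon) as it arrives.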
<p>And, as I was writing up this post, I was delighted to see <a href="http://www.wired.com/beyond_the_beyond/2009/05/spime-watch-pachube-feeds/" target="_blank">this post by Bruce Sterling on Pachube Feeds</a> and his challenge, offering:</p>
<p><strong>&#8220;(((Extra credit for eager ubicomp hackers: combine this [pachube feeds] with Googlewave, then describe it in microsyntax. Hello, 2015!)))&#8221;</strong></p>
<p>Also Anselm Hook, who has an extensive background in video game development, made an interesting point about Google Wave to me:</p>
<p><strong>&#8220;btw &#8211; there is a preexisting metaphor for the wave &#8211; the wave is notable in that it is making the web like a videogame &#8211; its bringing real time many participant shared interaction to the web&#8221;</strong></p>
<div id="a9iz" style="text-align: left;">And see <a href="http://radar.oreilly.com/2009/05/google-wave-what-might-email-l.html" target="_blank">Tim O&#8217;Reilly&#8217;s post</a> for more on the significance of Wave, which <a href="http://www.techcrunch.com/2009/05/28/google-wave-drips-with-ambition-can-it-fulfill-googles-grand-web-vision/">Google previewed for developers at its I/O conference</a>:</div>
<p><strong>&#8220;Jens, Lars, and team re-imagined email and instant-messaging in a connected world, a world in which messages no longer need to be sent from one place to another, but could become a conversation in the cloud. Effectively, a message (a wave) is a shared communications space with elements drawn from email, instant messaging, social networking, and even wikis.&#8221;</strong></p>
<p>For more on microsyntax see <a href="http://www.microsyntax.org/" target="_blank">microsyntax.org</a>.</p>
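<p>The appeal of microsyntax is that structured meaning can ride along inside an ordinary short message. As a toy illustration, a few lines of code can pull tags and a location token out of a message; the &#8220;#tag&#8221; and &#8220;geo:lat,lon&#8221; token forms below are my own inventions for the example, not conventions defined by microsyntax.org.</p>

```python
import re

# Toy microsyntax extractor. The "#tag" and "geo:lat,lon" token
# forms are illustrative inventions, not microsyntax.org standards.
TAG = re.compile(r"#(\w+)")
GEO = re.compile(r"geo:(-?\d+(?:\.\d+)?),(-?\d+(?:\.\d+)?)")

def parse_microsyntax(message):
    """Return the hashtags and (lat, lon) pairs embedded in a message."""
    tags = TAG.findall(message)
    geos = [(float(lat), float(lon)) for lat, lon in GEO.findall(message)]
    return {"tags": tags, "geos": geos}

print(parse_microsyntax("planting trees geo:37.77,-122.42 #sf #trees"))
# -> {'tags': ['sf', 'trees'], 'geos': [(37.77, -122.42)]}
```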
<p>Aaron pointed out to me regarding microsyntax:</p>
<p><strong>&#8220;This is ultimately the &#8220;magic word&#8221; problem, which is essentially the semweb vs. google-is-smarter-than-you problem.&#8221;</strong></p>
<p>I will have some more questions for Aaron on the &#8220;magic word&#8221; problem in my upcoming interview post. At the moment I am busy studying some of the thoughts in these links:</p>
<p><a href="http://delicious.com/straup/magicwords" target="_blank">http://delicious.com/straup/magicwords</a></p>
<p><a href="http://www.slideshare.net/straup/the-papernet/22" target="_blank">http://www.slideshare.net/straup/the-papernet/22</a></p>
<p><a href="http://www.xml.com/pub/a/2005/02/16/edfg.html" target="_blank">http://www.xml.com/pub/a/2005/02/16/edfg.html</a></p>
<p><a href="http://xtech06.usefulinc.com/schedule/paper/135" target="_blank">http://xtech06.usefulinc.com/schedule/paper/135</a></p>
<h3>Privacy: Towards a Win Win and Community Sensing</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/communitysensing.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/erichorvitz21.jpg"><img class="alignnone size-medium wp-image-3659" title="erichorvitz21" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/erichorvitz21-300x199.jpg" alt="erichorvitz21" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/communitysensing.jpg"><img class="alignnone size-medium wp-image-3655" title="communitysensing" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/communitysensing-300x199.jpg" alt="communitysensing" width="300" height="199" /></a></p>
<p>While <a href="http://fireeagle.yahoo.net/" target="_blank">FireEagle</a>, a key element of the Yahoo! Geo Technologies portfolio of platforms, not only gives people an important set of tools to &#8220;share their location with sites and services through the Web or a mobile device&#8221; but also offers up some vital privacy tools, the community sensing work of Eric Horvitz takes privacy and data sharing into new terrain.</p>
<p>Eric didn&#8217;t have time to discuss his privacy work in his Where 2.0 presentation, <a href="http://en.oreilly.com/where2009/public/schedule/detail/8911" target="_blank">Where, When, Why, and How: Directions in Machine Learning and Reasoning about Location</a> &#8211; it came up only in his very last slide. But I ran up after his talk with my trusty old iPod recorder in hand, and got the part we missed! Fascinating stuff that will be the subject of an upcoming interview post. Here&#8217;s a little taste of what is to come. Eric describes one of the directions his team will be exploring:</p>
<p><strong>&#8220;One thing I want to do, on our research team, I&#8217;d like to develop something very simple for people to use. A challenging problem with privacy is usability and controls. Aunt Polly and Uncle Herbie just don&#8217;t get all these authentication controls and sliders, nor do they want to invest in figuring them out. They also don&#8217;t get why they&#8217;re being asked with pop up windows to say yes or no to various questions and so on. One idea is having a usable privacy lens that you can hold up anywhere, and it tells you what you&#8217;re showing anybody or any organization &#8211; what does the world know about you. And you would like to have buttons to turn sharing off for some items. You&#8217;d also like to have a way to go back in time and view prior sharing and logging over periods of time, and to have buttons to push to say erase that segment of your logs.&#8221;</strong></p>
<p>Understanding the social implications of what it means to live in an instrumented world is a topic that we cannot afford not to think about. But luckily there are a lot of people who have been thinking pretty deeply about this for a while now.</p>
<p>And I did my best at both Where 2.0 and WhereCamp to seek out as many of these geothinkers as I could, and to do interviews wherever possible (I have not had time to mention everyone I talked to in this post, but hopefully all the interviews will get on UgoTrade soon!)</p>
<h3>HeadMap Manifesto</h3>
<p>In the bar of The Fairmont on the last night of Where 2.0, I heard some of the history of Where 2.0, <a href="http://geowanking.org/mailman/listinfo/geowanking_geowanking.org" target="_blank">GeoWanking</a>, and <a href="http://docs.google.com/tecfa.unige.ch/%7Enova/headmap-manifesto.PDF" target="_blank">The HeadMap Manifesto</a> from Sophia Parafina, Director of Operations for <a href="http://opengeo.org/" target="_blank">OpenGeo</a>, and <a href="http://testingrange.com/" target="_blank">Rich Gibson</a>, programmer, <a href="http://geowanking.org/mailman/listinfo/geowanking_geowanking.org" target="_blank">GeoWanker</a>, <a href="http://gigapan.org/index.php" target="_blank">Gigapanner</a>, and co-author of <a href="http://mappinghacks.com/" target="_blank">Mapping Hacks</a> with <a href="http://iconocla.st/cv.html" target="_blank">Schuyler Erle</a> and <a href="http://frot.org/" target="_blank">Jo Walsh</a> (Jo did a lot of <a href="http://frot.org/s/semantic_city.html" target="_blank">key early work on bottom-up urban informatics</a> but unfortunately couldn&#8217;t make it to WhereWeek this year).</p>
<p>Check <a id="zaq4" title="Gigapan.org" href="http://www.gigapan.org/index.php" target="_blank">Gigapan.org</a> out! <strong>&#8220;The GigaPan<span class="trademark">SM</span> process allows users to upload, share, and explore brilliant gigapixel+ panoramas from around the globe.&#8221;</strong></p>
<p>I also interviewed Paul Ramsey, Senior Consultant at OpenGeo, so more on OpenGeo is upcoming (see Paul&#8217;s <a href="http://blog.cleverelephant.ca/2009/05/where-re-cap.html" target="_blank">Where ReCap</a>). <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43773">Justin Deoliveira</a> (OpenGeo) and <a href="http://en.oreilly.com/where2009/public/schedule/speaker/59688">Sophia Parafina</a> did a session, <a class="url uid" name="session7165" href="http://en.oreilly.com/where2009/public/schedule/detail/7165">GeoServer, GeoWebCache + OpenLayers: The OpenGeo Stack</a>, <span class="url uid">which unfortunately I missed, as it was before I arrived Tuesday.</span></p>
<p><span class="bio"><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/sophiaandrich.jpg"><img class="alignnone size-medium wp-image-3631" title="sophiaandrich" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/sophiaandrich-300x199.jpg" alt="sophiaandrich" width="300" height="199" /></a></span></p>
<p>I met Rich Gibson <a href="http://www.flickr.com/photos/ugotrade/sets/72157615022689427/" target="_blank">at ETech 2009 playing Werewolf</a>, and Rich introduced me to his co-author on <a href="http://search.barnesandnoble.com/Mapping-Hacks/Schuyler-Erie/e/9780596007034" target="_blank">Mapping Hacks</a> and alpha geek supreme, Schuyler Erle, who also wrote the clustr code that The Shape of Alpha uses.</p>
<p><a href="http://joshua.schachter.org/" target="_blank">Joshua Schachter</a>, founder of Delicious and the <a href="http://geowanking.org/mailman/listinfo/geowanking_geowanking.org" target="_blank">GeoWanking mailing list</a> [and <a href="http://geourl.org/" target="_blank">GeoURL</a> and <a href="http://memepool.com/" target="_blank">MemePool!</a>], now at Google, came to WhereCamp and was mobbed by a small crowd eager to get their hands on one of the developer G Phones he was handing out from a large box.</p>
<p>GeoWanking, which is now run by O&#8217;Reilly Media, has been the incubator for all things location-aware and for &#8220;neogeography&#8221; discussions since 2003. Check out <a href="http://sproke.blogspot.com/2009/05/paleogeography-vs-neography.html" target="_blank">sproke</a> for a <a href="http://sproke.blogspot.com/2009/05/paleogeography-vs-neography.html">Paleogeography vs Neogeography</a> smackdown (which, as Sophia notes, was a common topic of discussion at Where 2.0), in which GeoWanking rules in the form of a list-traffic comparison.</p>
<p>Sophia and Rich shared some of their perspective on the early days of GeoWanking and the creation of the HeadMap Manifesto with me, and pointed me to many other people to talk to. The prime mover of the HeadMap Manifesto, Ben Russell, has retired from the scene &#8211; perhaps bored by seeing a radical vision gone thoroughly mainstream, or exhausted by the rigors of carrying an idea through the early blue-sky years, or just simply doing something else? I don&#8217;t know.</p>
<p><a href="http://docs.google.com/tecfa.unige.ch/%7Enova/headmap-manifesto.PDF" target="_blank">The HeadMap Manifesto</a> is still vibrant today even as much of what it envisaged has already been realized. HeadMap assembled the future in a poetry of fragments:</p>
<p><strong>&#8220;you can search for sadness in new york people within a mile of each other who have never met stop what they are doing and organize spontaneously to help with some task or other.&#8221;</strong></p>
<p>Anselm explained to me that what powered this whole social cartography revolution, from his POV, was actually IRC.</p>
<p><strong>&#8220;We had a channel on IRC called &#8220;#geo&#8221;, and many of us met there. I met Ben Russell at MathEngine in the UK. Ben and I were fascinated by the future of maps. Ben, Jo and I met Schuyler, Dav, Dan Brickley (who worked for Tim Berners-Lee, who invented the Web), Rich Gibson, Joshua Schachter (who was just a geek at Morgan Stanley at the time) &#8230; and the snowball took off&#8230; many others.</strong></p>
<p><strong>We stormed ETech (Schuyler met Jo there). We got invited to FooCamp. Schuyler was married to Jo by Marc Powell (Food Genome) and lived at his house. We pushed so hard on the social cartography revolution.</strong></p>
<p><strong>I did a spinny globe for GeoURL &#8211; a project by some hacker named Joshua Schachter&#8230; We were all friends for years and we had never even met.&#8221;</strong></p>
<h3>&#8220;Can AR researchers harness these new approaches to index reality?&#8221;</h3>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="425" height="344" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/y_LXpqmdk9U&amp;hl=en&amp;fs=1" /><param name="allowfullscreen" value="true" /><embed type="application/x-shockwave-flash" width="425" height="344" src="http://www.youtube.com/v/y_LXpqmdk9U&amp;hl=en&amp;fs=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p>Radiohead&#8217;s laser (as opposed to video) clip, made using <a href="http://www.velodyne.com/lidar/" target="_blank">Lidar</a></p>
<p><a id="t7u3" title="If you have read my interview with Ori Inbar," href="../../2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">If you have read my interview with Ori Inbar,</a> you will know how excited I was to attend the Mobile Reality panel. <a href="http://en.oreilly.com/where2009/public/schedule/detail/7197" target="_blank">The video is up</a>, and it is really awesome to hear <a href="http://en.oreilly.com/where2009/public/schedule/speaker/35457">Raven Zachary</a> (on Twitter @<a href="http://www.twitter.com/ravenme">ravenme</a>) get into the fray with augmented reality.</p>
<p>The main takeaway for me from the Mobile Reality panel was that we shouldn&#8217;t get too hung up on the difficulties of achieving fully immersive visual augmented reality and twiddle our thumbs waiting for the long-anticipated sexy lightweight eyewear, which is still in a coming-soon phase (for more on immersive augmented reality see my upcoming interview with <a href="http://www.cc.gatech.edu/%7Eblair/home.html" target="_blank">Blair MacIntyre</a>). In the meantime, there are plenty of delightful and useful ways to augment our experience of the world, and not all of these augmented realities rely solely on smart phones, as John S. Zelek showed in his presentation on a <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43786" target="_blank">&#8220;Wearable Sensory Substitution Device for Navigation.&#8221;</a> I also had an interesting discussion at lunch with Ori Inbar about the use of audio for augmented reality projects.</p>
<p>Where 2.0 clearly demonstrated that we have an unprecedented amount of information from mapping our world, <a href="http://gamesalfresco.com/2009/05/26/where-2-0-the-world-is-mapped-now-use-it-to-augmented-our-reality/" target="_blank">Ori Inbar noted in his conference roundup</a>. Ori writes:</p>
<p><strong>&#8220;My point is not a shocker: all we need is to tap into this information and bring it, in context, into people&#8217;s field of view.&#8221;</strong></p>
<p>As Ori noted <strong><a href="http://www.earthmine.com/" target="_blank">Earthmine</a></strong> and <strong><a href="http://www.velodyne.com/lidar/" target="_blank">Velodyne&#8217;s Lidar</a></strong> showed off two new approaches to mapping the world that have potential to create new opportunities for augmented reality:</p>
<p><strong><strong><a href="http://www.earthmine.com/" target="_blank">&#8220;Earthmine</a></strong> uses its own camera-based device to index reality, at the street level, one pixel at a time. They have just announced <a href="http://wildstylecity.com/wsc/" target="_blank">Wild Style City</a> an application that allows anyone to create virtual graffitis on top of designated public spaces. However, at this point, you can only experience it on a pc!&#8221;</strong></p>
<p><a href="http://www.velodyne.com/lidar/" target="_blank">Lidar</a>, Ori notes, has also embarked on a mission to map the outdoors. But, the question Ori highlights is:</p>
<p><strong>&#8220;Can AR researchers harness these new approaches to index reality?&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/johnzelekandbradyforrest.jpg"><img class="alignnone size-medium wp-image-3660" title="johnzelekandbradyforrest" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/johnzelekandbradyforrest-300x199.jpg" alt="johnzelekandbradyforrest" width="300" height="199" /></a></p>
<p>Brady Forrest inspects John S. Zelek&#8217;s <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43786" target="_blank">&#8220;Wearable Sensory Substitution Device for Navigation&#8221;</a> at Where Fair, before putting it on and being guided by sensory nudges at the cardinal points in the belt.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/bradyforrestpost.jpg"><img class="alignnone size-medium wp-image-3661" title="bradyforrestpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/bradyforrestpost-199x300.jpg" alt="bradyforrestpost" width="199" height="300" /></a></p>
<h3>Coolest Mobile Locative Media App. at Where Fair</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-61.png"><img class="alignnone size-full wp-image-3682" title="picture-61" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-61.png" alt="picture-61" width="176" height="269" /></a></p>
<p><a href="http://www.sonycsl.co.jp/person/shio.html" target="_blank">Atsushi Shionozaki</a> of <strong><a href="http://www.placeengine.com/en" target="_blank">PlaceEngine</a></strong> &#8211; &#8220;<strong>a core technology that enables a device equipped with Wi-Fi such as a laptop PC or smart phone to determine its current location&#8221;</strong> &#8211; demoed the coolest location-aware mobile app at Where Fair: <a id="uwuf" title="Oedo Yokai" href="http://service.koozyt.com/oedo/" target="_blank">Oedo Yokai</a>. Created with ethnologist Dr. Hiro Kubota and artist Atsushi Morioka, &#8220;Oedo Yokai&#8221; is <a id="gtb2" title="Koozyt's" href="http://www.koozyt.com/" target="_blank">Koozyt&#8217;s</a> <strong>&#8220;first attempt to cross IT (Location Information) and Folkloristics.&#8221;</strong></p>
<p><strong>&#8220;The Japanese &#8220;Yokai&#8221; are known to dwell and appear at specific locations. They can frequently be seen within the grounds of shrines and temples, believed to be the border between this world and the afterlife, or in more common places like on a hill or at a crossroads. If the &#8220;Yokai&#8221; symbolize the mystery, legend, and lore associated with places, as our interests fade from actual locations, the roles they play in modern day society will diminish, and the &#8220;Yokai&#8221; might then cease to appear at all.&#8221;</strong></p>
<p>I love this idea of bringing the ancient spirits of place back into our lives with our new tools of location awareness.</p>
<p>Oedo Yokai also reminds me of Aaron Straup Cope&#8217;s work on <a href="http://www.aaronland.info/weblog/2008/07/27/invisible/#historybox" target="_blank">the idea of every spot being a &#8220;history box&#8221;</a>, which he explained is &#8220;one of the threads behind <a href="http://blog.flickr.net/en/2009/02/24/an-abundant-present/" target="_blank">the &#8216;nearby&#8217; project at Flickr</a>.&#8221;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/oedoyokai.jpg"><img class="alignnone size-medium wp-image-3683" title="oedoyokai" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/oedoyokai-300x199.jpg" alt="oedoyokai" width="300" height="199" /></a></p>
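<p>PlaceEngine determines location from the Wi-Fi access points a device can see. Its actual algorithm isn&#8217;t described here, but a common baseline for this kind of positioning is a signal-strength-weighted centroid over access points with known positions; in the sketch below, the AP survey database and scan readings are invented for illustration.</p>

```python
# Signal-strength-weighted centroid: a common baseline for Wi-Fi
# positioning (not PlaceEngine's actual algorithm). The AP survey
# data and RSSI readings below are invented for illustration.
AP_DB = {
    "00:11:22:33:44:55": (35.6595, 139.7005),  # hypothetical survey points
    "66:77:88:99:aa:bb": (35.6601, 139.7012),
}

def estimate_position(scan, ap_db):
    """scan maps BSSID -> RSSI in dBm; stronger signals get more weight."""
    total = lat = lon = 0.0
    for bssid, rssi in scan.items():
        if bssid not in ap_db:
            continue  # AP not in the survey database
        weight = 10 ** (rssi / 10.0)  # dBm -> linear power
        ap_lat, ap_lon = ap_db[bssid]
        lat += weight * ap_lat
        lon += weight * ap_lon
        total += weight
    return (lat / total, lon / total) if total else None

# A scan that sees the first AP much more strongly than the second
# yields an estimate very close to the first AP's surveyed position:
print(estimate_position({"00:11:22:33:44:55": -40,
                         "66:77:88:99:aa:bb": -70}, AP_DB))
```

The charm of a game like Oedo Yokai is everything layered on top of that one estimate: with position in hand, a client only needs a table of places (shrine gates, crossroads) to decide which Yokai to surface.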
<h3>The Food Genome</h3>
<p>I cannot end this roundup of WhereWeek without a mention of <a href="http://www.foodgenome.com/home" target="_blank">The Food Genome</a>.</p>
<p><strong>&#8220;Food Genome is a big hungry brain that scours the internet, trying to learn everything there is to know about food.&#8221;</strong></p>
<p>Watch out for the upcoming launch of this project &#8211; it stole the show with an exciting presentation at WhereCamp. You can follow <a href="http://twitter.com/foodgenome">@foodgenome on Twitter</a> now.</p>
<p>To get one of the gorgeous Food Genome brochures you had to ask Marc Powell a good question. Notice an eager hand reaching out in the picture below. I asked, &#8220;How would the basic building blocks of the Food Genome be licensed?&#8221; I got my brochure and a rain check on an answer to my question.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/foodgenomepost.jpg"><img class="alignnone size-medium wp-image-3664" title="foodgenomepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/foodgenomepost-199x300.jpg" alt="foodgenomepost" width="199" height="300" /></a></p>
<h3>The Ubiquitous Media Studio</h3>
<p>Another highlight of WhereCamp was hearing from <a id="nfup" title="Gene Becker" href="http://lightninglaboratories.com/about.html" target="_blank">Gene Becker</a> about his new project, the <a id="bs9-" title="Ubiquitous Media Studio" href="http://ubistudio.org/" target="_blank">Ubiquitous Media Studio</a>, which will be located in Palo Alto. The project is still in the early stages of development, but it sounds really exciting. I am looking forward to being involved from the East Coast. If you&#8217;re curious where this is going, <strong><a href="http://twitter.com/ubistudio">follow @ubistudio on Twitter</a></strong> to stay updated.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/gene.jpg"><img class="alignnone size-medium wp-image-3684" title="gene" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/gene-300x300.jpg" alt="gene" width="300" height="300" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/feed/</wfw:commentRss>
		<slash:comments>14</slash:comments>
		</item>
		<item>
		<title>Creating the Information Landscapes of the Future: Locative Media, Loose Interaction Topologies, and The Shape of Alpha</title>
		<link>http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/</link>
		<comments>http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/#comments</comments>
		<pubDate>Sun, 17 May 2009 20:13:49 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[3D mapping for AR]]></category>
		<category><![CDATA[Aaron Straup Cope]]></category>
		<category><![CDATA[augmented reality systems]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[body controllers]]></category>
		<category><![CDATA[community mapping]]></category>
		<category><![CDATA[Etech 2009]]></category>
		<category><![CDATA[experimental human-computer interfaces]]></category>
		<category><![CDATA[flea market mapping]]></category>
		<category><![CDATA[geotagged photos]]></category>
		<category><![CDATA[image recognition]]></category>
		<category><![CDATA[Information Landscapes]]></category>
		<category><![CDATA[information landscapes of the future]]></category>
		<category><![CDATA[information shadows]]></category>
		<category><![CDATA[internet 2.0]]></category>
		<category><![CDATA[ITP Spring Show 2009]]></category>
		<category><![CDATA[jim purbrick]]></category>
		<category><![CDATA[locative media]]></category>
		<category><![CDATA[locative media manifesto]]></category>
		<category><![CDATA[loose interaction topologies]]></category>
		<category><![CDATA[Mike Kuniavsky]]></category>
		<category><![CDATA[mining geotagged photos]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[mud pong]]></category>
		<category><![CDATA[Mud Tub]]></category>
		<category><![CDATA[multi-touch surfaces]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[S Ring]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[shapefiles]]></category>
		<category><![CDATA[smart mud]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[Where 2.0]]></category>
		<category><![CDATA[Where Week 2009]]></category>
		<category><![CDATA[WhereCamp]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3521</guid>
		<description><![CDATA[I am excited about going to Where Week 2009 &#8211; Where 2.0 and WhereCamp this week (for more see Brady Forrest&#8217;s post). Where Week will be total immersion for five days in a think tank with creators of the information landscapes of the future. As you know, if you have read my previous post &#8211; [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/looseinteractionphilosophiespost.jpg"><strong></strong></a><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/shapefiles.jpg"><img class="alignnone size-medium wp-image-3533" title="shapefiles" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/shapefiles-150x300.jpg" alt="shapefiles" width="150" height="300" /></a></strong></p>
<p>I am excited about going to <a href="http://radar.oreilly.com/2009/05/where-week-2009.html" target="_blank">Where Week</a><a href="http://radar.oreilly.com/2009/05/where-week-2009.html" target="_blank"> 2009</a> &#8211; <a href="http://en.oreilly.com/where2009/" target="_blank">Where 2.0 </a>and <a href="http://wherecamp2009.eventbrite.com/" target="_blank">WhereCamp</a> this week (for more <a href="http://radar.oreilly.com/2009/05/where-week-2009.html" target="_blank">see Brady Forrest&#8217;s post</a>). Where Week will be total immersion for five days in a think tank with creators of the information landscapes of the future.</p>
<p>As you know, if you have read <a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">my previous post &#8211; here</a>, I think the <a href="http://en.oreilly.com/where2009/public/schedule/detail/7197" target="_blank">&#8220;Mobile Reality</a>&#8221; panel is a must. And I have been looking forward to hearing more about <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">The Shape of Alpha</a> from <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43824" target="_blank">Aaron Straup Cope</a>, Flickr, since <a href="http://en.oreilly.com/et2009" target="_blank">Etech 2009</a>, when I was introduced to Aaron by <a href="http://www.orangecone.com/" target="_blank">Mike Kuniavsky</a> (see <a href="http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/" target="_blank">my interview with Mike Kuniavsky at Etech here</a> and more on Mike&#8217;s concept of &#8220;information shadows&#8221; <a href="http://www.orangecone.com/archives/2009/03/etech_2009_the.html">in his Etech talk</a>).</p>
<p>The Shape of Alpha is revealing some fascinating possibilities for mining geotagged Flickr images.</p>
<p>As <a href="http://twitter.com/timoreilly/statuses/1777871797" target="_blank">Tim O&#8217;Reilly noted in a tweet</a>, Aaron Straup Cope&#8217;s recent post,<strong> <a href="http://code.flickr.com/blog/2009/05/06/the-absence-and-the-anchor/" target="_blank">The Absence and the Anchor, </a></strong>describes, <strong>&#8220;some of <span class="status-body"><span class="entry-content">the surprising things Flickr is learning about people from geotagged photos.&#8221;</span></span></strong> Aaron&#8217;s post also announces that the &#8220;donut hole shapes&#8221; are available for developers to use with their developer magic via the <a href="http://www.flickr.com/services/api">Flickr API</a>.</p>
<p><strong>&#8220;If the shapefiles themselves are uncharted territory, the donut holes are the fuzzy horizon even further off in the distance. We&#8217;re not really sure where this will take us but we&#8217;re pretty sure there&#8217;s something to it all so we&#8217;re eager to share it with people and see what they can make of it too.&#8221;</strong></p>
<p>For more on shapefiles see Aaron&#8217;s blog post about <strong>&#8220;<a href="http://code.flickr.com/blog/2009/01/12/living-in-the-donut-hole/">some experimental work that I&#8217;d been doing with the shapefile data</a> we derive from geotagged photos.&#8221;</strong></p>
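<p>For developers who want to poke at the shape data themselves, a request might look something like the sketch below. This is a minimal sketch, not official sample code: it assumes you have a Flickr API key, and the <code>flickr.places.getInfo</code> method and <code>woe_id</code> parameter names follow the Flickr REST API of this period &#8211; check the current API documentation before relying on them. The San Francisco WOE ID in the commented-out usage is illustrative.</p>

```python
# Sketch: pulling a place's shape data (donut holes included) from the
# Flickr REST API. Assumes a valid API key; the method and parameter
# names (flickr.places.getInfo, woe_id) should be verified against the
# current Flickr API docs.
import json
import urllib.parse
import urllib.request

REST_ENDPOINT = "https://api.flickr.com/services/rest/"


def build_flickr_url(method, api_key, **params):
    """Build a Flickr REST URL that asks for a plain JSON response."""
    query = {
        "method": method,
        "api_key": api_key,
        "format": "json",
        "nojsoncallback": "1",
        **params,
    }
    return REST_ENDPOINT + "?" + urllib.parse.urlencode(query)


def get_place_shape(api_key, woe_id):
    """Fetch place info for a WOE ID; where Flickr has derived a shapefile
    from geotagged photos, it appears in the response's shape data."""
    url = build_flickr_url("flickr.places.getInfo", api_key, woe_id=woe_id)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


# Example (needs a real key; commented out to avoid a live network call):
# place = get_place_shape("YOUR_API_KEY", "2487956")  # SF's WOE ID, illustrative
# print(place.get("place", {}).get("shapedata"))
```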
<h3>Creating the Information Landscapes of the Future</h3>
<p>I have been thinking and writing a lot about augmented reality lately. And key thought leaders in this space like <a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a>, <a href="http://www.curiousraven.com/" target="_blank">Robert Rice</a> (<a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">see my interview here</a>), and <a href="http://gamesalfresco.com/about/" target="_blank">Ori Inbar</a> (<a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">see my interview here</a>) have clued me in to how vital it is, for a ubiquitous experience, to find ways to let people fill in the stories that can be used for augmented reality.</p>
<p>As Ori noted in conclusion to our recent conversation:</p>
<p><strong>&#8220;in order to have a ubiquitous experience like <a href="http://www.curiousraven.com/" target="_blank">Robert Rice</a> and others are striving for, you&#8217;ll need to 3D map the world. Google Earth-like apps are going to help but it is not going to be sufficient. So let&#8217;s leverage people. Google became successful in part by making people work with them. Each time you create a link from your blog to my blog their search engines learn from it. So let&#8217;s find ways to make people create information that can be used for AR.&#8221;</strong></p>
<p><a href="http://jimpurbrick.com/" target="_blank">Jim Purbrick,</a> another key thinker in this area (interview upcoming), also notes:</p>
<p><strong>&#8220;you can imagine a crowd-sourced set of hints for any location so AR knows roughly where it is and can do Photosynth-style matching to find out exactly what it&#8217;s looking at and get the extra data it needs about that thing (humans are really good image recognition systems, and are also pretty good at interfacing with networks) &#8211; instead of marking up real objects with ids you take pictures of real objects, tag them and then search them based on images from your AR system.&#8221;</strong></p>
<p>Ori Inbar suggested to me an idea that I really liked &#8211; the notion of breadcrumbs where, <strong>&#8220;You don&#8217;t have a constant view of what is happening when you walk but you get images and text and all sorts of things from people who walked there before &#8211; like breadcrumbs.&#8221;</strong> And as <a href="http://www.designundersky.com/dus/2008/10/31/geotagged-photo-cartography.html" target="_blank">Design Under Sky</a> points out about The Shape of Alpha:</p>
<p><strong>&#8220;The truly amazing part of this process is how the &#8220;community&#8221; has the authority to provide areas previously unmapped. By uploading personal photos of areas not covered by mapping software, members have the power of further shrinking our world through greater visual access and understanding of locations one might not be willing or unable to visit.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronmiketod.jpg"><img class="alignnone size-medium wp-image-3536" title="aaronmiketod" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronmiketod-300x265.jpg" alt="aaronmiketod" width="300" height="265" /></a></p>
<p><em>Aaron Straup Cope, Flickr, Todd E. Kurt, <a href="http://thingm.com/" target="_blank">ThingM</a> and Mike Kuniavsky, <a href="http://thingm.com/" target="_blank">ThingM</a></em></p>
<h3>The Locative Media Manifesto</h3>
<p><a href="http://stamen.com/" target="_blank">@stamen&#8217;s</a> tweet brought André Lemos&#8217; brilliant, thought-provoking &#8220;<a href="http://www.andrelemos.info/2009/05/locative-media-manifesto.html" target="_blank">Locative Media Manifesto</a>&#8221; to my attention. I am also looking forward to hearing about how old maps &#8220;can shed light on modern geography when placed in counterpoint to the state of art in modern maps from Google or Microsoft&#8221; from <a href="http://en.oreilly.com/where2009/public/schedule/speaker/3486">Michal Migurski</a>, Stamen Design, who will present <a href="http://en.oreilly.com/where2009/public/schedule/detail/7276" target="_blank">Flea Market Mapping</a> at Where 2.0.</p>
<p>André Lemos writes:</p>
<p><strong>&#8220;After uploading to Matrix up there &#8211; Internet 1.0 &#8211; now is the time to &#8220;download cyberspace,&#8221; information about things down here &#8211; Internet 2.0. We are not dealing with what is virtual up there, but of what to do with all this information about things and places down here! How can we relate to things and places? And now that these things and places are provided with digital information and Internet connections? Do we invoke Heidegger and Lefebvre?&#8221;</strong></p>
<p>I will leave it to people smarter than I to invoke Heidegger and Lefebvre as André Lemos does so eloquently in the Locative Media Manifesto. But by reminding us that artists and activists created the term &#8220;locative media&#8221; to &#8220;question the mass use of LBS (location based services) and LBT (location based technologies),&#8221; the manifesto delivers 30 principles to inspire creators of locative media and explorers of the <strong>&#8220;current dimension of cyberculture, comprising the era of &#8216;cyberspace leaking into the real world&#8217; (Russel, 1999); an era of the &#8216;internet of things.&#8217;&#8221;</strong></p>
<p>I feel well primed for Where Week by my visit to the <a href="http://itp.nyu.edu/sigs/news/itp-spring-show-2009/" target="_blank">ITP Spring Show 2009</a> last Sunday. It was an interaction riot, jam-packed with brilliance and offbeat explorations of locative media, which I experienced through the senses of my 9-year-old. His pick for best of show is below. But he had many favorites, and I have <a href="http://www.flickr.com/photos/ugotrade/sets/72157618216853047/" target="_blank">put some pictures up on my Flickr stream</a> with links to the creators&#8217; sites. One of my favorite projects, Alexander Reeder&#8217;s <a href="http://artandprogram.com/sring/" target="_blank">S Ring</a> &#8211; <a href="http://tishshute.com/seducing-people-by-talking-with-your-hands" target="_blank">&#8220;seducing people by talking with your hands,&#8221; is up on my Posterous blog</a>. You can see a list of the extensive <a href="http://itp.nyu.edu/sigs/news/itp-spring-show-2009/" target="_blank">media coverage the show got here</a>.</p>
<h3>Loose Interaction Topologies</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/mudpongpost.jpg"><img class="alignnone size-medium wp-image-3528" title="mudpongpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/mudpongpost-300x199.jpg" alt="mudpongpost" width="300" height="199" /></a></p>
<p>The picture above is of a game of mud pong in <a href="http://dirtycomputing.com/" target="_blank">Tom Gerhardt&#8217;s Mud Tub</a>. The mud interface &#8211; &#8220;a smart tub with some mud&#8221; &#8211; knows the topology of the mud and where your hand is. Mud Tub takes advantage of a complex material to explore loose interaction topologies, including, as seen above, a game of Mud Pong. Loose interaction topologies are one way we can explore meaning in &#8220;the internet of things.&#8221;</p>
<p>Tom explained his own exploration of the internet of things to me very succinctly:</p>
<p><strong>&#8220;I am not trying to make mud better. I am trying to make computers better with mud.&#8221;</strong></p>
<p>He elaborates on the value of Mud Tub in this regard on his site, <a href="http://dirtycomputing.com/" target="_blank">dirtycomputing</a>:</p>
<p><strong>&#8220;The Mud Tub occupies a space similar to other experimental human-computer interfaces, like multi-touch surfaces, body controllers, augmented reality systems, etc., which push the boundaries of codified interaction models, and drive the development of innovative software applications. Beyond its role as a research topic, the Mud Tub also exists as an open-sourced hardware/software platform on which interactive artists and designers explore new methods for creating and displaying their work.&#8221;</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
