<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; loose interaction topologies</title>
	<atom:link href="http://www.ugotrade.com/tag/loose-interaction-topologies/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Creating the Information Landscapes of the Future: Locative Media, Loose Interaction Topologies, and The Shape of Alpha</title>
		<link>http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/</link>
		<comments>http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/#comments</comments>
		<pubDate>Sun, 17 May 2009 20:13:49 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[3D mapping for AR]]></category>
		<category><![CDATA[Aaron Straup Cope]]></category>
		<category><![CDATA[augmented reality systems]]></category>
		<category><![CDATA[Blair MacIntyre]]></category>
		<category><![CDATA[body controllers]]></category>
		<category><![CDATA[community mapping]]></category>
		<category><![CDATA[Etech 2009]]></category>
		<category><![CDATA[experimental human-computer interfaces]]></category>
		<category><![CDATA[flea market mapping]]></category>
		<category><![CDATA[geotagged photos]]></category>
		<category><![CDATA[image recognition]]></category>
		<category><![CDATA[Information Landscapes]]></category>
		<category><![CDATA[information landscapes of the future]]></category>
		<category><![CDATA[information shadows]]></category>
		<category><![CDATA[internet 2.0]]></category>
		<category><![CDATA[ITP Spring Show 2009]]></category>
		<category><![CDATA[jim purbrick]]></category>
		<category><![CDATA[locative media]]></category>
		<category><![CDATA[locative media manifesto]]></category>
		<category><![CDATA[loose interaction topologies]]></category>
		<category><![CDATA[Mike Kuniavsky]]></category>
		<category><![CDATA[mining geotagged photos]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[mud pong]]></category>
		<category><![CDATA[Mud Tub]]></category>
		<category><![CDATA[multi-touch surfaces]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[S Ring]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[shapefiles]]></category>
		<category><![CDATA[smart mud]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[Where 2.0]]></category>
		<category><![CDATA[Where Week 2009]]></category>
		<category><![CDATA[WhereCamp]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3521</guid>
		<description><![CDATA[I am excited about going to Where Week 2009 &#8211; Where 2.0 and WhereCamp, this week (for more see Brady Forrest&#8217;s post). Where Week will be total immersion for five days in a think tank with creators of the information landscapes of the future. As you know, if you have read my previous post &#8211; [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/shapefiles.jpg"><img class="alignnone size-medium wp-image-3533" title="shapefiles" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/shapefiles-150x300.jpg" alt="shapefiles" width="150" height="300" /></a></strong></p>
<p>I am excited about going to <a href="http://radar.oreilly.com/2009/05/where-week-2009.html" target="_blank">Where Week 2009</a> &#8211; <a href="http://en.oreilly.com/where2009/" target="_blank">Where 2.0</a> and <a href="http://wherecamp2009.eventbrite.com/" target="_blank">WhereCamp</a> &#8211; this week (for more, <a href="http://radar.oreilly.com/2009/05/where-week-2009.html" target="_blank">see Brady Forrest&#8217;s post</a>). Where Week will be total immersion for five days in a think tank with creators of the information landscapes of the future.</p>
<p>As you know if you have read <a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">my previous post</a>, I think the &#8220;<a href="http://en.oreilly.com/where2009/public/schedule/detail/7197" target="_blank">Mobile Reality</a>&#8221; panel is a must. And I have been looking forward to hearing more about <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">The Shape of Alpha</a> from <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43824" target="_blank">Aaron Straup Cope</a> of Flickr since <a href="http://en.oreilly.com/et2009" target="_blank">Etech 2009</a>, when I was introduced to Aaron by <a href="http://www.orangecone.com/" target="_blank">Mike Kuniavsky</a> (see <a href="http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/" target="_blank">my interview with Mike Kuniavsky at Etech here</a>, and more on Mike&#8217;s concept of &#8220;information shadows&#8221; <a href="http://www.orangecone.com/archives/2009/03/etech_2009_the.html">in his Etech talk</a>).</p>
<p>Shape of Alpha is revealing some fascinating possibilities for mining geotagged Flickr images.</p>
<p>As <a href="http://twitter.com/timoreilly/statuses/1777871797" target="_blank">Tim O&#8217;Reilly noted in a tweet</a>, Aaron Straup Cope&#8217;s recent post,<strong> <a href="http://code.flickr.com/blog/2009/05/06/the-absence-and-the-anchor/" target="_blank">The Absence and the Anchor, </a></strong>describes, <strong>&#8220;some of <span class="status-body"><span class="entry-content">the surprising things Flickr is learning about people from geotagged photos.&#8221;</span></span></strong> Aaron&#8217;s post also announces that the &#8220;donut hole shapes&#8221; are available for developers to use with their developer magic via the <a href="http://www.flickr.com/services/api">Flickr API</a>.</p>
<p><strong>&#8220;If the shapefiles themselves are uncharted territory, the donut holes are the fuzzy horizon even further off in the distance. We&#8217;re not really sure where this will take us but we&#8217;re pretty sure there&#8217;s something to it all so we&#8217;re eager to share it with people and see what they can make of it too.&#8221;</strong></p>
<p>For more on shapefiles, see Aaron&#8217;s blog post about <strong>&#8220;<a href="http://code.flickr.com/blog/2009/01/12/living-in-the-donut-hole/">some experimental work that I&#8217;d been doing with the shapefile data</a> we derive from geotagged photos.&#8221;</strong></p>
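<p>For the curious, here is a rough sketch in Python of what playing with those shapes might look like. This is an illustration, not Flickr&#8217;s own code: the <code>flickr.places.getInfo</code> method and the REST endpoint are documented Flickr API features, but the space-separated &#8220;lat,lon&#8221; polyline format and the helper names here are assumptions to verify against the API docs.</p>

```python
import urllib.parse
import urllib.request

FLICKR_REST = "https://api.flickr.com/services/rest/"

def fetch_place_info(api_key, woe_id):
    """Call flickr.places.getInfo; when a shape exists for the place the
    XML response carries a <shapedata> element with <polyline> children."""
    query = urllib.parse.urlencode({
        "method": "flickr.places.getInfo",
        "api_key": api_key,   # your own Flickr API key
        "woe_id": woe_id,     # Where On Earth ID of the place
    })
    with urllib.request.urlopen(FLICKR_REST + "?" + query) as resp:
        return resp.read().decode("utf-8")

def parse_polyline(polyline):
    """Turn one polyline string ("lat,lon lat,lon ...") into (lat, lon)
    float pairs, ready to draw or to test photos against."""
    return [tuple(float(x) for x in pair.split(","))
            for pair in polyline.split()]
```

<p>With the pairs in hand you could render a place&#8217;s &#8220;donut hole&#8221; on a map, or check whether a new geotagged photo falls inside it.</p>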
<h3>Creating the Information Landscapes of the Future</h3>
<p>I have been thinking and writing a lot about augmented reality lately. And key thought leaders in this space like <a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a>, <a href="http://www.curiousraven.com/" target="_blank">Robert Rice</a> (<a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">see my interview here</a>), and <a href="http://gamesalfresco.com/about/" target="_blank">Ori Inbar</a> (<a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">see my interview here</a>) have clued me in to how vital it is, for a ubiquitous experience, to find ways for people to fill in the stories that can be used for augmented reality.</p>
<p>As Ori noted in conclusion to our recent conversation:</p>
<p><strong>&#8220;In order to have a ubiquitous experience like <a href="http://www.curiousraven.com/" target="_blank">Robert Rice</a> and others are striving for, you&#8217;ll need to 3D map the world. Google Earth-like apps are going to help, but it is not going to be sufficient. So let&#8217;s leverage people. Google became successful in part by making people work with them. Each time you create a link from your blog to my blog, their search engines learn from it. So let&#8217;s find ways to make people create information that can be used for AR.&#8221;</strong></p>
<p><a href="http://jimpurbrick.com/" target="_blank">Jim Purbrick,</a> another key thinker in this area (interview upcoming), also notes:</p>
<p><strong>&#8220;You can imagine a crowd-sourced set of hints for any location, so AR knows roughly where it is and can do Photosynth-style matching to find out exactly what it&#8217;s looking at and get the extra data it needs about that thing (humans are really good image recognition systems, and are also pretty good at interfacing with networks). Instead of marking up real objects with ids, you take pictures of real objects, tag them, and then search them based on images from your AR system.&#8221;</strong></p>
<p>Ori Inbar suggested to me an idea that I really liked &#8211; the notion of breadcrumbs, where <strong>&#8220;</strong><span class="ru_50CCC5_tx"><strong>you don&#8217;t have a constant view of what is happening when you walk, but you get images and text and all sorts of things from people who walked there before &#8211; like breadcrumbs.</strong>&#8221; And as </span><a href="http://www.designundersky.com/dus/2008/10/31/geotagged-photo-cartography.html" target="_blank">Design Under Sky</a> points out about Shape of Alpha:</p>
<p><strong>&#8220;The truly amazing part of this process is how the &#8220;community&#8221; has the authority to provide areas previously unmapped. By uploading personal photos of areas not covered by mapping software, members have the power of further shrinking our world through greater visual access and understanding of locations one might not be willing or able to visit.&#8221;</strong></p>
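<p>The breadcrumbs idea lends itself to a small sketch: ask Flickr for geotagged photos taken near where you are standing, then sort them by distance. This is an assumption-laden illustration, not a finished app &#8211; <code>flickr.photos.search</code> and its <code>lat</code>/<code>lon</code>/<code>radius</code> parameters are documented API features, but check the details before relying on them.</p>

```python
import json
import math
import urllib.parse
import urllib.request

FLICKR_REST = "https://api.flickr.com/services/rest/"

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in km, used to sort breadcrumbs
    by how close they are to the walker."""
    rlat1, rlat2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = rlat2 - rlat1, math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(rlat1) * math.cos(rlat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def breadcrumbs(api_key, lat, lon, radius_km=0.5, count=20):
    """Fetch geotagged photos taken near (lat, lon) via flickr.photos.search
    -- the traces left by people who walked there before."""
    query = urllib.parse.urlencode({
        "method": "flickr.photos.search",
        "api_key": api_key,
        "lat": lat, "lon": lon, "radius": radius_km,
        "has_geo": 1, "extras": "geo",
        "per_page": count, "format": "json", "nojsoncallback": 1,
    })
    with urllib.request.urlopen(FLICKR_REST + "?" + query) as resp:
        photos = json.load(resp)["photos"]["photo"]
    crumbs = [(p["title"], float(p["latitude"]), float(p["longitude"]))
              for p in photos]
    return sorted(crumbs, key=lambda c: distance_km(lat, lon, c[1], c[2]))
```

<p>An AR client could surface the nearest few crumbs as overlays, exactly the &#8220;images and text from people who walked there before&#8221; that Ori describes.</p>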
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronmiketod.jpg"><img class="alignnone size-medium wp-image-3536" title="aaronmiketod" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronmiketod-300x265.jpg" alt="aaronmiketod" width="300" height="265" /></a></p>
<p><em>Aaron Straup Cope, Flickr, Todd E. Kurt, <a href="http://thingm.com/" target="_blank">ThingM</a> and Mike Kuniavsky, <a href="http://thingm.com/" target="_blank">ThingM</a></em></p>
<h3>The Locative Media Manifesto</h3>
<p><a href="http://stamen.com/" target="_blank">@stamen&#8217;s</a> tweet brought Andr&#233; Lemos&#8217; brilliant, thought-provoking &#8220;<a href="http://www.andrelemos.info/2009/05/locative-media-manifesto.html" target="_blank">Locative Media Manifesto</a>&#8221; to my attention. I am also looking forward to hearing about how old maps &#8220;can shed light on modern geography when placed in counterpoint to the state of art in modern maps from Google or Microsoft&#8221; from <a href="http://en.oreilly.com/where2009/public/schedule/speaker/3486">Michal Migurski</a>, Stamen Design, who will present <a href="http://en.oreilly.com/where2009/public/schedule/detail/7276" target="_blank">Flea Market Mapping</a> at Where 2.0.</p>
<p>Andr&#233; Lemos writes:</p>
<p><strong>&#8220;After uploading to the Matrix up there &#8211; Internet 1.0 &#8211; now is the time to &#8216;download cyberspace,&#8217; information about things down here &#8211; Internet 2.0. We are not dealing with what is virtual up there, but with what to do with all this information about things and places down here! How can we relate to things and places, now that these things and places are provided with digital information and Internet connections? Do we invoke Heidegger and Lefebvre?&#8221;</strong></p>
<p>I will leave it to people smarter than I to invoke Heidegger and Lefebvre, as Andr&#233; Lemos does so eloquently in the Locative Media Manifesto. But by reminding us that artists and activists created the term &#8220;locative media&#8221; to &#8220;question the mass use of LBS (location based services) and LBT (location based technologies)&#8221;, the manifesto delivers 30 principles to inspire creators of locative media and explorers of the <strong>&#8220;current dimension of cyberculture, comprising the era of &#8216;cyberspace leaking into the real world&#8217; (Russell, 1999); an era of the &#8216;internet of things.&#8217;&#8221;</strong></p>
<p>I feel well primed for Where Week by my visit to the <a href="http://itp.nyu.edu/sigs/news/itp-spring-show-2009/" target="_blank">ITP Spring Show 2009</a> last Sunday. It was an interaction riot, jam-packed with brilliance and offbeat explorations of locative media, which I experienced through the senses of my 9-year-old. His pick for best of show is below. But he had many favorites, and I have <a href="http://www.flickr.com/photos/ugotrade/sets/72157618216853047/" target="_blank">put some pictures up on my Flickr stream</a> with links to the creators&#8217; sites. One of my favorite projects, Alexander Reeder&#8217;s <a href="http://artandprogram.com/sring/" target="_blank">S Ring</a> &#8211; <a href="http://tishshute.com/seducing-people-by-talking-with-your-hands" target="_blank">&#8220;seducing people by talking with your hands&#8221; &#8211; is up on my Posterous blog</a>. You can see a list of the extensive <a href="http://itp.nyu.edu/sigs/news/itp-spring-show-2009/" target="_blank">media coverage the show got here</a>.</p>
<h3>Loose Interaction Topologies</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/mudpongpost.jpg"><img class="alignnone size-medium wp-image-3528" title="mudpongpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/mudpongpost-300x199.jpg" alt="mudpongpost" width="300" height="199" /></a></p>
<p>The picture above is of a game of mud pong in <a href="http://dirtycomputing.com/" target="_blank">Tom Gerhardt&#8217;s Mud Tub</a>. The mud interface &#8211; &#8220;a smart tub with some mud&#8221; &#8211; knows the topology of the mud and where your hand is. Mud Tub takes advantage of a complex material to explore loose interaction topologies, including, as seen above, a game of Mud Pong. Loose interaction topologies are a way we can explore meaning in &#8220;the internet of things.&#8221;</p>
<p>Tom explained his own exploration of the internet of things to me very succinctly:</p>
<p><strong>&#8220;I am not trying to make mud better. I am trying to make computers better with mud.&#8221;</strong></p>
<p>He elaborates on the value of Mud Tub in this regard on his site, <a href="http://dirtycomputing.com/" target="_blank">dirtycomputing</a>:</p>
<p><strong>&#8220;The Mud Tub occupies a space similar to other experimental human-computer interfaces, like multi-touch surfaces, body controllers, augmented reality systems, etc., which push the boundaries of codified interaction models and drive the development of innovative software applications. Beyond its role as a research topic, the Mud Tub also exists as an open-sourced hardware/software platform on which interactive artists and designers explore new methods for creating and displaying their work.&#8221;</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
