<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; linked data</title>
	<atom:link href="http://www.ugotrade.com/tag/linked-data/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Real Time Big Data at Strata 2011: Ambient Findability, Social Search, GeoMessaging, Augmented Data, and New Interfaces</title>
		<link>http://www.ugotrade.com/2011/01/20/real-time-big-data-at-strata-2011-ambient-findability-geomessaging-augmented-data-and-new-interfaces/</link>
		<comments>http://www.ugotrade.com/2011/01/20/real-time-big-data-at-strata-2011-ambient-findability-geomessaging-augmented-data-and-new-interfaces/#comments</comments>
		<pubDate>Thu, 20 Jan 2011 22:48:12 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[New Interfaces]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Alistair Croll]]></category>
		<category><![CDATA[Ambient Findability]]></category>
		<category><![CDATA[Android Tasker]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[attention data]]></category>
		<category><![CDATA[augmented data]]></category>
		<category><![CDATA[augmented reality ecosystem]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[BackType]]></category>
		<category><![CDATA[big data]]></category>
		<category><![CDATA[Big data and new interfaces]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Cassandra]]></category>
		<category><![CDATA[Collecta]]></category>
		<category><![CDATA[content-shifting]]></category>
		<category><![CDATA[curating big data]]></category>
		<category><![CDATA[Data Engineering]]></category>
		<category><![CDATA[data privacy]]></category>
		<category><![CDATA[digital divide]]></category>
		<category><![CDATA[distributed computing]]></category>
		<category><![CDATA[Edd Dumbill]]></category>
		<category><![CDATA[Factual]]></category>
		<category><![CDATA[future of work]]></category>
		<category><![CDATA[geo]]></category>
		<category><![CDATA[geo social aware discovery]]></category>
		<category><![CDATA[geo-search]]></category>
		<category><![CDATA[geodata]]></category>
		<category><![CDATA[geolocation]]></category>
		<category><![CDATA[Geoloqi]]></category>
		<category><![CDATA[GeoMessaging]]></category>
		<category><![CDATA[geosearch]]></category>
		<category><![CDATA[gestural interfaces]]></category>
		<category><![CDATA[Gov2.0.]]></category>
		<category><![CDATA[HBase]]></category>
		<category><![CDATA[Hive]]></category>
		<category><![CDATA[key data trends]]></category>
		<category><![CDATA[linked data]]></category>
		<category><![CDATA[location data]]></category>
		<category><![CDATA[Maneki Neko]]></category>
		<category><![CDATA[MapReduce]]></category>
		<category><![CDATA[mapufacture]]></category>
		<category><![CDATA[Mesos]]></category>
		<category><![CDATA[Michal Avny]]></category>
		<category><![CDATA[mobile local interactions]]></category>
		<category><![CDATA[MongoDB]]></category>
		<category><![CDATA[My6sense]]></category>
		<category><![CDATA[neogeography]]></category>
		<category><![CDATA[NoSQL]]></category>
		<category><![CDATA[OpenGeo]]></category>
		<category><![CDATA[OpenGov]]></category>
		<category><![CDATA[P2P cloud computing]]></category>
		<category><![CDATA[pervasive computing]]></category>
		<category><![CDATA[Q&A]]></category>
		<category><![CDATA[Q&A ecosystems]]></category>
		<category><![CDATA[Q&A platforms]]></category>
		<category><![CDATA[Q&A The New Search Insurgents]]></category>
		<category><![CDATA[Quora]]></category>
		<category><![CDATA[RabbitMQ]]></category>
		<category><![CDATA[real time data analytics]]></category>
		<category><![CDATA[real time data in mobile development]]></category>
		<category><![CDATA[real time search]]></category>
		<category><![CDATA[real time search engines]]></category>
		<category><![CDATA[real time social discovery]]></category>
		<category><![CDATA[semantic web]]></category>
		<category><![CDATA[Simple Geo]]></category>
		<category><![CDATA[social graph]]></category>
		<category><![CDATA[social search]]></category>
		<category><![CDATA[social web]]></category>
		<category><![CDATA[Sophia Parafina]]></category>
		<category><![CDATA[Strata 2011]]></category>
		<category><![CDATA[Swift River]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[Topsy]]></category>
		<category><![CDATA[Web 2.0 Summit]]></category>
		<category><![CDATA[Who owns your data?]]></category>
		<category><![CDATA[XMPP]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=6025</guid>
		<description><![CDATA[We are in the age of unearthing and uncovering data, and only just at the beginning of the age of processing data and dealing with it (see my interview with Anselm Hook, Part 2 upcoming). O&#8217;Reilly&#8217;s Strata Conference 2011 will explore &#8220;the change brought to technology and business by data science, pervasive computing, and new [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/noisedderived31.jpg"><img class="alignnone size-medium wp-image-6034" title="noisedderived3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/noisedderived31-300x163.jpg" alt="" width="300" height="163" /></a></p>
<p>We are in the age of unearthing and uncovering data, and only just at the beginning of the age of processing data and dealing with it (see my interview with <a href="http://www.hook.org/" target="_blank">Anselm Hook</a>, Part 2 upcoming). <a href="http://strataconf.com/strata2011" target="_blank">O&#8217;Reilly&#8217;s Strata Conference 2011</a> will explore &#8220;the change brought to technology and business by data science, pervasive computing, and new interfaces.&#8221; It is, perhaps, one of the most important events of 2011.</p>
<p>Data is driving a revolution much as coal, oil, and steel powered the industrial revolution. And the world-changing insight from Karl Marx that &#8220;the industrial revolution polarized the world into two groups: those who own the means of production and those who work on them&#8221; is taking on new life, as <a href="http://twitter.com/#!/acroll" target="_blank">Alistair Croll</a>, co-chair of <a href="http://strataconf.com/strata2011" target="_blank">Strata 2011</a>, points out in his post <a href="http://mashable.com/2011/01/12/data-ownership/" target="_blank">&#8220;Who Owns Your Data?&#8221;</a></p>
<p><strong>&#8220;The important question isn&#8217;t who owns the data. Ultimately, we all do. A better question is, who owns the means of analysis? Because that&#8217;s how, as Brand suggests, you get the right information in the right place. The digital divide isn&#8217;t about who owns data &#8211; it&#8217;s about who can put that data to work.&#8221;</strong></p>
<p>Strata is where a vanguard will meet, not only to discuss this revolution&#8217;s futures, but to define how to create, handle, and build the platforms and experiences that will harness the data.  My flight is booked! (Also check out <a href="http://www.bigdatacamp.org/">BigDataCamp</a>, which takes place the night before <a title="Strata Conference" href="https://en.oreilly.com/strata2011/public/regwith/str11dnaff" target="_blank">Strata</a>.)</p>
<p>The picture opening this post is from Michael EdgeCumbe&#8217;s <a href="http://garden.neocyde.net/thoughts/2010/12/fall-2010-itp-winter-show-project/">Fall 2010: ITP Winter Show Project</a>, a project exploring ways to intuitively get a feel for what is going on with big data sets using &#8220;the gestural manipulation and stereoscopic visualization of complex data to create a meditative state for data analysis.&#8221; Michael&#8217;s project will be part of the <a href="http://strataconf.com/strata2011/public/schedule/detail/17840" target="_blank">Science Fair at Strata</a>. For more on Michael&#8217;s work see <a href="http://www.neocyde.net/derive/2010/12" target="_blank">Noise Derived</a>. I also have a number of the <a href="http://strataconf.com/strata2011/public/schedule/topic/595 " target="_blank">interesting new interface sessions</a> at Strata in my schedule.</p>
<p>The daily <a href="http://radar.oreilly.com/2010/12/write-your-own-visualizations.html" target="_blank">Strata Gems</a> on O&#8217;Reilly Radar are a great place to get a gestalt of some of the Strata themes, and <a href="http://radar.oreilly.com/2010/12/strata-gems-three-key-data-trends-for-2011.html" target="_blank">this post</a> by <a href="http://strataconf.com/strata2011/profile/1" target="_blank">Edd Dumbill</a>, program chair for Strata, <a href="http://radar.oreilly.com/m/2010/12/strata-gems-three-key-data-trends-for-2011.html" target="_blank">Three key data trends for 2011</a>, looks at the year ahead. This week, I got the chance to ask Edd a few of the questions that I will have on my mind at Strata &#8211; see his responses below.</p>
<p>If you have been reading Ugotrade, you will know I am interested in our mobile social augmented futures and there is no question in my mind that these will be unleashed by our new capacities to work with data (see <a href="http://www.ugotrade.com/2010/10/31/tim-o%E2%80%99reilly%E2%80%99s-four-cylinder-innovation-engine-the-missing-manual-for-the-future/" target="_blank">my post here</a>).</p>
<p><strong><br />
</strong></p>
<h3>Data is the how.</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/backtypediagram.png"><img class="alignnone size-medium wp-image-6045" title="backtypediagram" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/backtypediagram-210x300.png" alt="" width="210" height="300" /></a></p>
<p><em>The pic above is from <a href="http://www.readwriteweb.com/hack/2011/01/secrets-of-backtypes-data-engineers.php" target="_blank">&#8220;Secrets of BackType&#8217;s Data Engineers.&#8221;</a> This post on ReadWriteHack by <a href="http://twitter.com/petewarden">Pete Warden</a>, an ex-Apple engineer and founder of <a href="http://www.openheatmap.com/">OpenHeatMap</a>, really lives up to its title. Check it out if you want to know how <strong>&#8220;three guys (the <a title="opens in new window" href="http://backtype.com/" target="_blank">BackType</a> team) with only seed funding process a hundred million messages a day.&#8221;</strong></em></p>
<p>I asked on Quora, &#8220;<a href="http://www.quora.com/What-will-be-the-most-important-developments-in-augmented-reality-in-2011" target="_blank">What will be the most important developments for Augmented Reality in 2011?</a>&#8221; <a href="https://sites.google.com/site/michalavny/" target="_blank">Michal Avny</a>, strategist &amp; real time search expert, wrote:</p>
<p><strong>&#8220;AR strongly relies on localized personalized real time information.</strong></p>
<p><strong>Having a stream of tweets based on keyword search, location or circle of friends doesn&#8217;t really make the AR experience; it is the processed real time relevant information that will make AR useful and intensify the experience.</strong></p>
<p><strong>In 2011 Real Time search and Social Search will drastically change to provide the infrastructure required.&#8221;</strong></p>
<p>I followed up on Michal&#8217;s Quora answer with some more questions &#8211; see below in this post.<strong><br />
</strong></p>
<p>Also note <a href="http://www.quora.com/What-will-be-the-most-important-developments-in-augmented-reality-in-2011" target="_blank">the response</a> from <a href="http://research.microsoft.com/en-us/people/dmolnar/" target="_blank">David Molnar</a>; here is an excerpt:</p>
<p><strong>&#8220;2. A wave of actionable, important data APIs opened up, enabling useful non-gimmicky AR apps for the first time. Think geoloqi.com, or the work Max Ogden has done with Portland civic data. Plus of course <a href="http://face.com/" target="_blank">face.com</a>, email providers and calendar providers, etc.&#8221;</strong></p>
<p><a href="http://strataconf.com/strata2011/public/schedule/speaker/100889" target="_blank">Amber Case</a>, one of the founders of <a href="http://geoloqi.com/" target="_blank">Geoloqi</a>, is on the programming committee of Strata and will be speaking.  Be sure to catch her session! <a href="http://strataconf.com/strata2011/public/schedule/detail/17748" target="_blank">Posthumans, Big Data and New Interfaces,</a> and if you haven&#8217;t already seen it, <a href="http://www.ted.com/talks/amber_case_we_are_all_cyborgs_now.html" target="_blank">Amber&#8217;s TED talk</a> is a must see.</p>
<p>Geographic proximity is a powerful filter, as are route and time. But clearly social proximity, social relevance, and shared tastes are also key dimensions for location based experiences (see my convo with Schuyler of <a href="http://simplegeo.com/" target="_blank">SimpleGeo</a>, upcoming).</p>
<p>While the whole business of location based search and curation of augmented mobile social experiences is still, for the most part, uncharted terrain, the danger of key points of control being accessible only to elite players looms large.   I asked <a href="http://www.youtube.com/watch?v=C2HcWlu1BS4" target="_blank">Sophia Parafina</a>, a pioneer in the open geo space, for some thoughts on real-time local/geosearch and geomessaging, and the future of openness &amp; big data (see Sophia&#8217;s response below).</p>
<h3><a href="http://www.quora.com/Is-the-market-ready-yet-for-P2P-cloud-computing" target="_blank">Is the market ready yet for P2P cloud computing?</a></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/8a174_invisibles_bigbrother_1210.jpg"><img class="alignnone size-full wp-image-6048" title="8a174_invisibles_bigbrother_1210" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/8a174_invisibles_bigbrother_1210.jpg" alt="" width="150" height="150" /></a></p>
<p>This is another question I&#8217;m following: <a href="http://www.quora.com/Is-the-market-ready-yet-for-P2P-cloud-computing" target="_blank">Is the market ready yet for P2P cloud computing?</a> It is one of those questions that we seem to have been asking in various forms for a very long while now, but without a major shift in sight. The pic above is from <a title="Permanent link to The Cloud Made Open Source " href="http://www.readwriteweb.com/cloud/2010/12/open-source-invisible.php">The Cloud Made Open Source &#8220;Invisible&#8221; This Year</a>. But, perhaps, we are at the point when open p2p clouds will find a place in the market because of their potential importance in real time social search and discovery. <a href="http://distributedsearch.blogspot.com/" target="_blank">Borislav Agapiev</a>, search entrepreneur and founder of <a href="http://www.vast.com" target="_blank">Vast.com</a>, writes on <a href="http://www.quora.com/Is-the-market-ready-yet-for-P2P-cloud-computing?q=p2p+for+a+non+centralized+infrastructure" target="_blank">Quora</a>:</p>
<p><strong>&#8220;I believe a P2P cloud is ideally suited for social &amp; real-time search and discovery.</strong></p>
<p><strong>Consider MapReduce, a very interesting and popular paradigm for distributed computing. MapReduce is very much about bringing computation to data i.e. doing computation at nodes (map) and then aggregating results through network (reduce).</strong></p>
<p><strong>It is very clear now that user attention data (what they click on) is very valuable for search and discovery, yet a centralized model relies upon uploading all that to a single location and then doing a supposed local MapReduce. Clearly, MapReduce could be done  across the network, without any centralized uploads.</strong></p>
<p><strong>In addition to the efficiency argument raised here, it is even more important to consider privacy issues. Uploading massive amounts of user attention data to a centralized location is not something that is going to make users warm and fuzzy <img src="http://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" />   as we are increasingly seeing.</strong></p>
<p><strong>In a P2P cloud, there is no big brother watching over anyone, all computation and data storage is done in the cloud, fragmented in many, many small  encrypted pieces ala BitTorrent.&#8221;</strong></p>
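<p>To make Borislav&#8217;s map/reduce point concrete, here is a toy Python sketch of &#8220;bringing computation to the data&#8221;: each peer counts clicks over its own attention log locally, and only the small per-peer aggregates cross the network to be merged. The names and data are illustrative only, not a real P2P stack.</p>
<pre><code># each peer runs map_local over the raw log it owns; only the compact
# Counter results travel, so no attention data is uploaded wholesale
from collections import Counter
from functools import reduce

def map_local(click_log):
    """Map step, run on the peer that owns the data: count clicks per URL."""
    return Counter(click["url"] for click in click_log)

def merge_counts(a, b):
    """Reduce step: fold two partial counts into one."""
    return a + b

peer_logs = [
    [{"url": "example.com/a"}, {"url": "example.com/b"}],   # peer 1's log
    [{"url": "example.com/a"}],                             # peer 2's log
]
partials = [map_local(log) for log in peer_logs]            # map at the edge
print(reduce(merge_counts, partials))                       # reduce over the network
</code></pre>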
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Screen-shot-2011-01-16-at-2.13.43-PM1.png"><img class="alignnone size-medium wp-image-6066" title="Screen shot 2011-01-16 at 2.13.43 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Screen-shot-2011-01-16-at-2.13.43-PM1-300x223.png" alt="" width="300" height="223" /></a><br />
</strong></p>
<p><em>Picture above from Brynn Marie Evans, <a href="http://brynnevans.com/blog/2010/03/17/it-takes-two-to-tango/">&#8220;It takes two to tango: review of my social search panel&#8221;</a></em></p>
<p><em><br />
</em></p>
<h3>The Delta of Now &#8211; Transforming Search into a Social Democratic Act</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/2538108030_d37d124e44.jpg"><img class="alignnone size-medium wp-image-6049" title="2538108030_d37d124e44" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/2538108030_d37d124e44-300x225.jpg" alt="" width="300" height="225" /></a></p>
<p><em>Picture of Maneki Neko &#8220;beckoning&#8221; cats from <a href="http://www.journeyetc.com/travel-ideas/famous-landmarks-of-cats-and-dogs-around-the-globe/">Journeyetc</a></em></p>
<p>New ecologies of human and machine intelligence are beginning to change basic social structures &#8211; see the <a href="http://www.youtube.com/watch?v=t1J2RXrvPek" target="_blank">Future of Work (Biewald and Chirayath Janah 2010)</a>. And projects like <a href="http://swift.ushahidi.com/" target="_blank">Swift River</a> use search and machine mining to filter streams on topics of interest that can then be curated by human beings. This may be extended to the curation of real-time data streams and the employment of machine learning algorithms based upon these explicit relationships.</p>
<p>Augmented mobile social experiences are a new frontier in which ideas and practices from a number of fields collide, including: ambient findability (Morville 2005), urban psychogeography, narrative structures, ambient games and devices, 4d (time-space), explorations of place and memory, enchanted objects and people (Kuniavsky 2010), and designed animism (Laurel 2010), to mention just a few.</p>
<p>Mobile local interaction presents an opportunity to invert the search pyramid and to transform search into a social, democratic act (see my interview with Anselm Hook upcoming). Up until now search has been predicated on a very narrow revenue model.  Google has an implicit model of a B2C &#8211; business to consumer &#8211; brokerage. We are only just beginning to get a glimpse of the disruptive potential of C2C &#8211; consumer to consumer &#8211; brokerages.  Mobile local C2C brokerages that allow us to transact in a trustworthy way over our local geography in close to real time (Hook 2010) have the potential to enable new forms of social organization.  Bruce Sterling&#8217;s short story about a networked gift economy, <a href="http://tqft.net/wiki/Maneki_Neko" target="_blank">Maneki Neko</a>, is a brilliant glimpse at the disruptive potential of such re-imaginings.</p>
<p>Augmented experiences that shift or change a person&#8217;s situated geolocal experience of social reality, and change our relationship to people and place by augmenting engagement and reputation through socially driven consumer tie-ins and game dynamics, like <a href="http://foursquare.com/" target="_blank">Foursquare</a> &amp; <a href="http://gowalla.com/" target="_blank">Gowalla</a>, are beginning to emerge, as <a href="http://www.web2expo.com/webexny2010/public/schedule/detail/15446" target="_blank">Kati London pointed out in her excellent keynote at Web 2.0 Expo</a>.  And, while the integration of mobile local interaction with an augmented view that shifts our geolocal experience visually will involve creative solutions to some well churned mobile tracking, mapping and registration challenges, the exploration and development of new dimensions through which we can filter and create trusted and meaningful augmented mobile social experiences is vital, whether you are considering a mobile screen, map, camera view, or futuristic HUDs and gestural interfaces.</p>
<h3>Talking with Edd Dumbill</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/edddumbill.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/edddumbillheadshot.png"><img class="alignnone size-full wp-image-6077" title="edddumbillheadshot" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/edddumbillheadshot.png" alt="" width="150" height="150" /></a><br />
Picture from <a href="http://people.oreilly.com/edd" target="_blank">O&#8217;Reilly Community.</a></p>
<p><strong>Tish Shute: </strong>First, congratulations on Strata!   On the Strata homepage there is a quote from Jason Hoffman:</p>
<p><strong>&#8220;My gut feeling is that we&#8217;re going to look back at the upcoming Strata Conference like we do at the Web 2.0 Conference in 2004/2005.&#8221;<br />
&#8211;Jason Hoffman, CTO/Founder, Joyent, Inc.</strong></p>
<p>Why do you think Jason&#8217;s comparison might be prescient?</p>
<p><strong>Edd Dumbill: Web 2.0 is a development that ran through every brand that has a web presence and radically changed the way business is done for many companies and brands.</strong></p>
<p><strong>Strata will have a similar impact: every business has data, every business collects an increasing amount of data. This data is the new oil &#8211; a valuable raw material that when refined or combined creates value and opportunity.</strong></p>
<p><strong>Tish Shute:</strong> The rise of real time was one of your three key data trends for 2011.  Hadoop is bringing the capacity to work with big data to more than just a few elite players.  But the challenge is still real time.  You mention we will be seeing a hybrid approach to real time and batch MapReduce processing.  Will we hear more about these approaches to real time at Strata?  And, what do you see as the most important conversations on real time data analytics emerging at Strata?</p>
<p>You point out &#8220;open source projects and cloud infrastructure means developers can evaluate and learn to love technologies without requiring support or approval from above.&#8221;  What are the most exciting developments on the horizon for open source tools?</p>
<p><strong>Edd Dumbill: </strong><strong>Here are some projects worth watching, in the key areas of real time, cluster management and Hadoop.</strong></p>
<p><strong>* Cassandra and MongoDB &#8211; NoSQL databases that will prove vital for anybody with real time big data needs</strong></p>
<p><strong>* Mesos &#8211; a compute cluster management tool, modeled after that which powers Google</strong></p>
<p><strong>* Hadoop ecosystem&#8217;s continuing maturation, especially HBase and Hive.</strong></p>
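<p>As a small taste of why these stores matter for real time geo work, here is a minimal sketch using MongoDB&#8217;s geospatial indexing through the pymongo driver. It assumes a mongod running on localhost and a recent pymongo; the database, collection, and data are made up for illustration.</p>
<pre><code># index check-ins by location, then ask for the nearest ones in one query
from pymongo import MongoClient, GEO2D

client = MongoClient("localhost", 27017)
checkins = client.strata_demo.checkins
checkins.create_index([("loc", GEO2D)])                        # 2d geo index
checkins.insert_one({"user": "tish", "loc": [-73.98, 40.75]})  # [lon, lat]

# the ten check-ins nearest to Times Square, ordered by distance
for doc in checkins.find({"loc": {"$near": [-73.985, 40.758]}}).limit(10):
    print(doc["user"], doc["loc"])
</code></pre>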
<p><strong>Tish Shute: </strong> Do you think the market is ready for p2p cloud computing?</p>
<p><strong>Edd Dumbill: The market is emerging for decentralized and distributed cloud computing, and P2P technologies are one way of achieving that. The key trends will be moving computation nearer the data sets or nearer the point of user consumption of the result.</strong></p>
<p><strong>P2P is a difficult model for anybody wanting to commercialize a service, so I think it will tend to form part of a hybrid solution.</strong></p>
<p><strong>Tish Shute:</strong> We have seen enormous strides in our ability to work with giant unstructured databases recently.  Do you think, perhaps, that the dream of a web of linked data &#8211; &#8220;a web of data that can be processed directly and indirectly by machines&#8221; &#8211; will be attained through brute force, i.e. through our ability to harness the power of massively parallel processing, as much as by Semantic Web approaches focused on machine readable metadata? [Also see <a href="http://www.quora.com/Is-this-a-good-approach-www-dist-systems-bbn-com-people-krohloff-shard_overview-shtml-to-use-Hadoop-to-build-a-scalable-distributed-triple-store" target="_blank">my question on Quora</a>, &#8220;Is this a good approach (<a rel="nofollow" href="http://www.dist-systems.bbn.com/people/krohloff/shard_overview.shtml" target="_blank">www.dist-systems.bbn.com/people/&#8230;</a>) to use Hadoop to build a scalable, distributed triple store?&#8221;]</p>
<p><strong>Edd Dumbill:  I&#8217;ve been an observer of the SW for over a decade and I tend to believe that on the web, data means to you whatever meaning you give it as the consumer. With that model, the links are made by the consumer rather than sitting out there explicitly. Some links become de facto standards, and some very few become web standards.</strong></p>
<p><strong>I think the actuality will be a mix of both explicitly stated metadata and that which is inferred. The Semantic Web is a great framework for certain operations, especially interoperable exchange of metadata. A great many more private meanings, never intended to be shared, will be created by consuming software.</strong></p>
<p><strong>There&#8217;s no question that machines will learn how to process most of the Web. Furthermore, machines will learn how to process most of the physical world we&#8217;re in. And that by the end of this decade</strong>.</p>
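<p>On the brute force question above, a first step in the SHARD-style approach is just a MapReduce job that co-locates everything known about a subject on one reducer. Here is a minimal sketch, assuming the mrjob library and N-Triples input; the class name and invocation are illustrative, not the SHARD codebase.</p>
<pre><code># group RDF triples by subject across a Hadoop cluster
from mrjob.job import MRJob

class GroupTriplesBySubject(MRJob):
    def mapper(self, _, line):
        # each input line: "&lt;subject&gt; &lt;predicate&gt; &lt;object&gt; ."
        parts = line.rstrip(" .\n").split(None, 2)
        if len(parts) == 3:
            subject, predicate, obj = parts
            yield subject, (predicate, obj)

    def reducer(self, subject, pred_objs):
        # everything known about one subject lands on one reducer
        yield subject, list(pred_objs)

if __name__ == "__main__":
    GroupTriplesBySubject.run()   # e.g. python group_triples.py data.nt
</code></pre>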
<h3>Talking with Sophia Parafina</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/sophiawhere.jpg"><img class="alignnone size-medium wp-image-6062" title="sophiawhere" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/sophiawhere-300x250.jpg" alt="" width="300" height="250" /></a></p>
<p><em>Picture of Sophia at <a href="http://where2conf.com/where2011" target="_blank">Where 2.0</a><a href="http://www.flickr.com/photos/rich_gibson/2509114741/" target="_blank"></a></em></p>
<p><strong>Tish Shute:</strong> Sophia, you have worked in the trenches for a long time now to support the growth of open geo data.  What do you hope to see emerge in 2011 in the field of geo-data?</p>
<p><strong>Sophia Parafina: Better support for displaying and handling location data across multiple apps. Fred Wilson <a href="http://www.avc.com/a_vc/2011/01/content-shifting.html?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+AVc+%28A+VC%29" target="_blank">recently blogged about content-shifting</a>; he talks about overcoming content silos across devices. We&#8217;ve worked very hard to reduce data silos via formats, but devices are creating their own silos. I would like to see a standard method for sending geo data and geo information to mobile devices.</strong></p>
<p><strong>Producing content for mobile is different from producing content for a computer browser. Web 2.0 produced a lot of infrastructure for browser based interfaces, but in mobile devices that gap has been filled by apps, which is fragmenting how data is handled by various devices. What is even more interesting in the mobile space is that devices can push data back that contains location, user updates, photos and even sensor data. If mobile data standardizes, it could lead to browser based applications and stem the continued fragmentation of the mobile application market.</strong></p>
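<p>One already-available candidate for the standard Sophia asks for is GeoJSON. A minimal Python sketch of a location update that any device or browser could emit or consume (the field values are invented for illustration):</p>
<pre><code># a self-describing location update: geometry plus free-form properties
import json

update = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [-122.676, 45.523]},  # [lon, lat]
    "properties": {"user": "sophia", "note": "photo uploaded", "ts": 1295568000},
}
print(json.dumps(update))   # ship it over HTTP, XMPP, or anything else
</code></pre>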
<p><strong>Tish Shute:</strong> <a href="http://simplegeo.com/" target="_blank">Simple Geo</a> and<a href="http://www.factual.com/" target="_blank"> Factual</a> are startups emerging in the geodata space. What do you see on the horizon in terms of both the growth of business opportunities and an open geo data community?</p>
<p><strong>Sophia Parafina: In the near future I think we&#8217;ll see startups providing curated data + API, and in response we will also see companies that provide a single interface across multiple data providers. We saw this when everyone released a mapping API and companies such as <a href="http://mapufacture.com/">Mapufacture</a> provided a single interface across multiple APIs.</strong></p>
<p><strong>We will see a resurgence in data providers repackaging the 2010 US Census data in different ways to respond to market segments; some of this will be open data, but all of it will be provided through an API instead of a file. Additionally, we&#8217;ll see more data from outside the US.</strong></p>
<p><strong>Tish Shute:</strong> What are the biggest obstacles to having the open geodata sets available that we need to enable mobile local interactions and social augmented experiences?</p>
<p><strong>Sophia Parafina: Licensing for both crowd sourced data and private curated open data will become an issue. We recently saw VLC, the open source video player, pulled from the Apple app store because of licensing issues. Also, licensing of content by geography will be problematic, limiting searches by geographical location. In addition, how will licensing of data that is updated by crowd sourcing work?</strong></p>
<p><strong>Multiple APIs for accessing data sources: the current trend for each provider to create an API for their data sets will result in data silos &#8211; there needs to be a single sign-on equivalent for requesting data.</strong></p>
<p><strong>Size of data on the wire: the current models for delivering data are based on broadband connections. However, as mobiles increasingly become the way people use the web, the data needs to be sized accordingly. This also goes for mobile interfaces. Have you tried to shop on a mobile device, or buy a train or plane ticket? It&#8217;s frustrating and error prone. There is a large untapped market of people who only use the Internet on mobile devices.</strong></p>
<p><strong>Tish Shute</strong>: You pointed me to <a href="http://radar.oreilly.com/2010/12/strata-gems-diy-personal-sensi.html" target="_blank">this link in Strata Gems</a> re &#8220;an interesting and pertinent (also a competitor to GeoLoqi)&#8221; &#8211; <a href="http://tasker.dinglisch.net/" target="_blank">the Android Tasker app</a>. What do these emerging services bring to the table in terms of the next generation of location based services?</p>
<p><strong>Sophia Parafina: This app lets your device interact with the environment. I think that this is a great way of using the sensors on existing platforms to increase interaction and to implement ambient findability. The basic premise of Tasker is that some action happens in response to an event in an application, time, date, location, event, or gesture. Tasker has defined 180 actions that can occur based on a number or combination of events. This can provide a basic vocabulary for interaction between the user and the device and, more importantly, between users. Tasker also can use Android script plugins, which lowers the bar to creating your own ambient application.</strong></p>
<p><strong>Programs such as Tasker can provide a way for people to interact with social networks beyond sending messages. People can use their mobile devices to interact with their surroundings without having to interact with the device.</strong></p>
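<p>Not Tasker itself, but the premise Sophia describes boils down to a small event-to-action vocabulary. A toy Python dispatcher in the same spirit (all names here are invented for illustration):</p>
<pre><code># declarative (event type, condition, action) rules, Tasker-style
rules = []

def on(event_type, condition=lambda e: True):
    """Register an action to fire when a matching event arrives."""
    def register(action):
        rules.append((event_type, condition, action))
        return action
    return register

@on("location", condition=lambda e: e["place"] == "grocery_store")
def show_grocery_list(event):
    print("Popping up the grocery list at", event["place"])

def dispatch(event):
    for event_type, condition, action in rules:
        if event["type"] == event_type and condition(event):
            action(event)

dispatch({"type": "location", "place": "grocery_store"})
</code></pre>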
<p><strong>Tish Shute:</strong> We have had many conversations about emerging ideas of geo-search, geo-messaging and geo-fencing. What are the most interesting developments in these areas and what do you see on the horizon for 2011?</p>
<p><strong>Sophia Parafina: The map will fade into the background and become less important. Display of information will be context aware, and that includes location. For example, let&#8217;s say I make a grocery list; when I&#8217;m at the grocery store, the list will just pop up without the need for me to find the app that has the list. Or reminders or offers pop up when you are near a place at a certain time. Let&#8217;s say you need to buy a present for a birthday party for a child: you could send out a request that you are looking for an item, and retailers could offer &#8220;on the spot&#8221; discounts if you are in the area.</strong></p>
<p><strong>Geo-search, geo-messaging, and geo-fencing are geared towards mobile devices, so I expect to see them soon as part of apps. Building generic applications that implement geo* will fail because that sort of information is useful only within a context. Geo* apps are solutions looking for a problem. The killer mobile app will use these functions transparently to reduce the cognitive load of the user who is busy moving around in the world.</strong></p>
<p><strong>User data gathered from multiple web applications will become consolidated profiles that will be used for context aware applications. For example, there could be a service which matches prices of items that you have shopped for on the web: the service would have access to your cookies, know your favorite retailers, things you have shopped for, your location and activity patterns (when you are at home, work, a restaurant). When you are in the vicinity of a brick and mortar retailer with the same or similar items, the service can send you an alert to match the price of the item you found online. So your digital life will become more closely linked with your day to day activities.</strong></p>
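<p>At bottom, the &#8220;list pops up at the grocery store&#8221; behavior reduces to a distance test against a stored point. A back-of-the-envelope geofence check in pure Python (the coordinates and radius are invented):</p>
<pre><code># great-circle distance against a fence center; fire when inside the radius
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance between two lat/lon points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

GROCERY = (45.5230, -122.6760)   # fence center
RADIUS_M = 75                    # fence radius, meters

def inside_fence(lat, lon):
    return haversine_m(lat, lon, *GROCERY) &lt;= RADIUS_M

print(inside_fence(45.5231, -122.6762))   # True: trigger the reminder
</code></pre>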
<p><strong><br />
</strong></p>
<h3>Talking with Michal Avny</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Michal_Pic.jpg"><img class="alignnone size-medium wp-image-6059" title="Michal_Pic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Michal_Pic-300x275.jpg" alt="" width="300" height="275" /></a></p>
<p><strong>Tish Shute: </strong>At <a href="http://www.web2summit.com/web2010" target="_blank">Web 2.0 Summit</a>, one of the highlights for me was the <a href="http://www.web2summit.com/web2010/public/schedule/detail/17101" target="_blank">Q&amp;A: The New Search Insurgents</a> lunch, where Charlie Cheever of <a href="http://www.quora.com/" target="_blank">Quora</a>, IMO, stole the show. I tweeted:</p>
<p><em>&#8220;One of my takeaways from #w2s is that #quora points to future of augmented mobile social experiences &#8211; a search filter for experience! #AR&#8221;</em></p>
<p>In your view what are the biggest challenges for location Q&amp;A to emerge as a search filter for location based experiences?</p>
<p><strong>Michal Avny: The biggest location Q&amp;A challenges yet to be conquered are immediacy (real time dynamic data), relevancy (strong personalized filters) and user experience (simplified interface).</strong></p>
<p><strong>Location Q&amp;A enables different use cases.  The most prominent are Follow (follow places, topics and friends to learn about a location), Interact (meet new people based on common interests), Plan ahead (plan a trip, night out or a shopping day by asking and searching for local information) and On-site (check for recommendations, friends, deals, events and traffic nearby).</strong></p>
<p><strong>Unlike Follow, Interact and Plan ahead, which can be added to existing Q&amp;A platforms (such as Quora) by attending to location specifics as they share similar characteristics, the on-site mode introduces a completely different experience; first and foremost it requires immediate attention.  It is real time based and the nature of the data is dynamic.  Traffic updates, current events, nearby friends, all of that changes constantly.  Posting a location question on-site implies the response should be in real time (e.g. best kid friendly restaurant); the normal Q&amp;A response latency wouldn&#8217;t work.</strong></p>
<p><strong>Strong relevancy filters are required to accommodate the overwhelming flood of information.  Moreover, some of the data should be filtered by user behavior and preferences: check in notifications (type of relation), restaurant recommendations (type of food, price level, etc.), shopping deals (commercial categories) and more.</strong></p>
<p><strong>Mobile experience requires ease of use and simplicity.  A new Q&amp;A interface and query language that allows for posting questions should be defined, as well as a coherent, summarized response interface.  A user on the go should not have to post lengthy questions, browse through tens of results or search for the right service, but instead use a simple, intuitive tool.</strong></p>
<p><strong>Tish Shute: </strong>Real-time location based search is in its infancy.  Real time questions can be answered using different services such as Yelp, TripAdvisor, <a href="http://www.waze.com/homepage/" target="_blank">Waze</a>, <a href="http://foursquare.com/" target="_blank">Foursquare</a>, IMDb and more.  But what are the challenges to moving forward with aggregating these sources and then into &#8220;locals&#8221; that are able to process and deal with vast amounts of information?</p>
<p><strong>Michal Avny: Using some of the leading location services to answer questions is sufficient to start with.</strong></p>
<p><strong>In order to provide broad coverage (worldwide) and reliable information, aggregation of the different services is required, for instance to normalize product and service rank, aggregate classifieds, and more. This is quite challenging as there is no one standard available.</strong></p>
<p><strong>When the location Q&amp;A user base is big enough, I foresee a tendency to rely more on &#8216;locals&#8217; input as the base of information.   As the platform grows, communities will be formed with different cultures, relationships and trust levels, making the information more valuable and customizable.  Some of the challenges I already mentioned are implementing filters, query language and interfaces to enable using the vast amounts of real time data in a mobile environment.  More of the challenges lying ahead are integrating the &#8216;locals&#8217; data with location based services as they are integral components of the Q&amp;A ecosystem.   Merging trust levels and relationships while adhering to different privacy guidelines is a challenge yet to be explored. (This should be discussed in more detail under the protocols topic.)</strong></p>
<p><strong>It is quite evident that Quora is now facing growing pains and is struggling to maintain its character.  Same as with Quora, it will also be a challenge to support and maintain the ecosystem while allowing for massive scale-up.</strong></p>
<p><strong>Tish Shute:</strong> I have been very interested in exploring protocols that will be enablers of micro local interaction and mobile social interaction for AR &#8211; particularly the XMPP extensions and operational transform work of Google Wave (now <a href="http://incubator.apache.org/projects/wave.html" target="_blank">Apache Wave</a>), and PubSub protocols like <a href="http://code.google.com/p/pubsubhubbub/" target="_blank">PubSubHubbub</a> and the Erlang based <a href="http://www.rabbitmq.com/" target="_blank">RabbitMQ</a>.  We are beginning to see protocols emerging that could enable new real time local services.  What do you think are some of the most valuable use cases for &#8220;locals&#8221; that this new generation of real time protocols can enable?</p>
<p><strong>Michal Avny: AR is about interacting with digital information; the AR ecosystem is composed of layers and components such as devices, platforms, browsers, applications and content.  For the different components to interact, new protocols, security guidelines, and privacy policies must be in place.  A standard will enable local vendors and service providers to publish specials, deals, updates and events for any application to broadcast, and to identify people and places by proximity (without having to use the same application or device); local recommendations will be shared by services; devices will be able to interact; location based platforms, such as Q&amp;A, will have access to a vast breadth of information; geo aware devices will provide a consistent experience globally; and much more.</strong></p>
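<p>To ground the protocol talk, here is a minimal geomessaging sketch over RabbitMQ with the pika client. It assumes a broker on localhost, and the exchange and routing keys are invented; location lives in the routing key, so any app can broadcast a deal or follow a neighborhood, in the spirit of the open standard Michal describes.</p>
<pre><code># publish a local deal on a topic exchange; subscribe by geographic pattern
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="geo", exchange_type="topic")

# a nearby client binds a throwaway queue to everything in NYC
q = ch.queue_declare(queue="", exclusive=True).method.queue
ch.queue_bind(exchange="geo", queue=q, routing_key="geo.nyc.*")

# a vendor broadcasts a deal tagged with its neighborhood
ch.basic_publish(exchange="geo", routing_key="geo.nyc.soho",
                 body="2-for-1 espresso until noon")
conn.close()
</code></pre>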
<p><strong>Tish Shute:</strong> What do you think are the biggest challenges to going mainstream for this emerging field of real time social discovery?</p>
<p><strong>Michal Avny: The biggest challenge is building towards real time, geo-aware, localized, personalized ambient data.   Discovery is in its infancy, location social based Best, Top, and Trending lists with some basic filtering options are available, and this is great as people are getting accustomed to information surrounding them.  To some degree it can intensify the AR experience, for instance suggest the most popular dish in a restaurant, or map the best coffee shops nearby, but it is customized at best by friend recommendations and depends on the coverage and broadness of the specific discovery service.</strong></p>
<p><strong>There is a need for the next generation of discovery: customized geo social aware discovery that filters the vast amount of real time data by learning user preferences and behavior (built on top of the much needed local social real time open protocol).</strong></p>
<p><strong>Tish Shute:</strong> Who are your favorite startups/upstarts in the field of real time search and why?</p>
<p><strong>Michal Avny: <a href="http://www.my6sense.com/" target="_blank">My6Sense</a> &#8211; My6sense provides a sharper and better way to experience your information from feeds you subscribe to (social networks, news, RSS feeds, etc.).  It&#8217;s personal &#8211; content is ranked based on what&#8217;s relevant to you. It learns what&#8217;s valuable to you by translating your consumption behavior into a personalized ranking function.<br />
My6Sense &#8211; because it is a personalized prediction filter, a critical foundation for AR</strong></p>
<p><strong><a href="http://topsy.com/" target="_blank">Topsy</a> &#8211; Topsy is realtime search powered by the social web that finds the most relevant conversations happening online. The siteâ€™s underlying technology examines popular links as well as the influence of each person citing a link. Topsy augments traditional search engines by finding information that people are talking about.<br />
Topsy â€“ because its ranking is based on retweets and influencers, a great social experience</strong></p>
<p><strong><a href="http://collecta.com/" target="_blank">Collecta</a> &#8211; Collecta is a real-time search engine for the social web. It monitors the update streams of popular realtime blogs and sites like Twitter, WordPress, and Flickr, and shows results as they happen. Results can be filtered by status updates, comments, stories, or photos. The entire engine is built around the XMPP standard, which pushes out data on a continual basis, so that for every search you end up watching a stream that keeps updating itself.<br />
Collecta â€“ because it is built around XMPP, a real time experience</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2011/01/20/real-time-big-data-at-strata-2011-ambient-findability-geomessaging-augmented-data-and-new-interfaces/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Visual Search, Augmented Reality, and Physical Hyperlinks for Playfulness, Not just Purchases: Talking with Paige Saez about ImageWiki</title>
		<link>http://www.ugotrade.com/2010/03/18/visual-search-augmented-reality-and-physical-hyperlinks-for-playfulness-not-just-purchases-talking-with-paige-saez-about-imagewiki/</link>
		<comments>http://www.ugotrade.com/2010/03/18/visual-search-augmented-reality-and-physical-hyperlinks-for-playfulness-not-just-purchases-talking-with-paige-saez-about-imagewiki/#comments</comments>
		<pubDate>Fri, 19 Mar 2010 03:25:17 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARNY]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[Augmented reality Magician]]></category>
		<category><![CDATA[Augmented Reality Meetup]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Chris Grayson]]></category>
		<category><![CDATA[distributed augmented reality]]></category>
		<category><![CDATA[Gamepocalypse]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[imagewiki]]></category>
		<category><![CDATA[Imawik]]></category>
		<category><![CDATA[interaction design]]></category>
		<category><![CDATA[Jason Kolb]]></category>
		<category><![CDATA[Jesse Schell]]></category>
		<category><![CDATA[linked data]]></category>
		<category><![CDATA[linked data and augmented reality]]></category>
		<category><![CDATA[Makerlab]]></category>
		<category><![CDATA[Marco Tempest]]></category>
		<category><![CDATA[open augmented reality]]></category>
		<category><![CDATA[open Frameworks]]></category>
		<category><![CDATA[open Frameworks and augmented reality]]></category>
		<category><![CDATA[OpenCV]]></category>
		<category><![CDATA[OpenCV and augmented reality]]></category>
		<category><![CDATA[optical character recognition]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[paige saez]]></category>
		<category><![CDATA[physical hyperlinking]]></category>
		<category><![CDATA[physical world platform]]></category>
		<category><![CDATA[point and find]]></category>
		<category><![CDATA[RDF and Augmented Reality Search]]></category>
		<category><![CDATA[semantic web and augmented reality]]></category>
		<category><![CDATA[snaptell]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[social augmented reality]]></category>
		<category><![CDATA[social commons]]></category>
		<category><![CDATA[Social Commons for Augmented Reality]]></category>
		<category><![CDATA[SPARQL]]></category>
		<category><![CDATA[SPARQL and ARWAVE]]></category>
		<category><![CDATA[SPARQL and Wave]]></category>
		<category><![CDATA[SPARQL and XMPP]]></category>
		<category><![CDATA[Steven Feiner]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[ubiquity]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[Where2.0]]></category>
		<category><![CDATA[Will Wright]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5262</guid>
		<description><![CDATA[The video above, The Imawik commercial, is a collaboration between In The Can Productions and Paige Saez for Makerlab &#8220;The Imawik (ImageWiki) is a visual search tool for mobile devices. It allows for the ability to turn images into physical hyperlinks, conflating visual culture with a community-editable universal namespace for images.&#8221; Paige Saez is an [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="225" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=2818525&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /><embed type="application/x-shockwave-flash" width="400" height="225" src="http://vimeo.com/moogaloop.swf?clip_id=2818525&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p><em>The video above, <a href="http://www.vimeo.com/2818525" target="_blank">The Imawik commercial</a>, is a collaboration between <a href="http://www.inthecanllc.com/" target="_blank">In The Can Productions</a> and <a href="http://makerlab.com/who.html" target="_blank">Paige Saez</a> for <a href="http://makerlab.com/projects_show_imagewiki.html" target="_blank">Makerlab</a></em></p>
<p>&#8220;The Imawik (<a href="http://imagewiki.org/" target="_blank">ImageWiki</a>) is a visual search tool for mobile devices. It allows for the  ability to turn images into physical hyperlinks, conflating visual  culture with a community-editable universal namespace for images.&#8221;</p>
<p>Paige Saez is an artist, designer and researcher. In 2007 she founded <a href="http://makerlab.com/projects_show_imagewiki.html" target="_blank">Makerlab</a>, an arts and technology incubator focused on civic and environmental projects, with <a href="http://www.hook.org/" target="_blank">Anselm Hook</a>.</p>
<p>Paige and Anselm (see my interview with Anselm Hook here, <a title="Permanent Link to Visual Search,  Augmented Reality and a Social Commons for the Physical World Platform:  Interview with Anselm Hook" rel="bookmark" href="../../2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/">Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook</a>) have been asking a very important question:</p>
<p><strong>&#8220;Who Will Own Our Augmented Future?&#8221;</strong></p>
<p>But most importantly, they have been actually developing applications (again, <a href="http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/" target="_blank">see my interview with Anselm</a> for more background on this) to allow people to play with, hack, explore and create with the physical world platform, and to imagine new possibilities for physical hyperlinking and augmented realities. This is pretty important stuff, and kudos to Paige and Anselm for beginning this work before the big players &#8211; <a href="http://www.google.com/mobile/goggles/#dc=gh0gg" target="_blank">Google Goggles</a>, <a href="http://pointandfind.nokia.com/" target="_blank">Point and Find</a>, and <a href="http://www.snaptell.com/" target="_blank">SnapTell</a> &#8211; came hurtling into the field of visual search and physical hyperlinking (<a href="http://techblips.dailyradar.com/video/translation-in-google-goggles-prototype/" target="_blank">see this demonstration of translation and optical character recognition</a> in Google Goggles). Also check out Jamey Graham&#8217;s (Ricoh Research) Ignite presentation at Tools of Change, 2010 &#8211; <a href="http://www.toccon.com/toc2010/public/schedule/detail/13370" target="_blank">Visual Search: Connecting Newspapers, Magazines and Books to Digital Information without Barcodes</a>; for more see <a href="http://ricohinnovations.com/betalabs/visualsearch">ricohinnovations.com/betalabs/visualsearch</a>.</p>
<p>We are only just beginning to get a glimpse of how contested the social commons of the physical world platform is going to be &#8211; see the Yelp <a href="http://blogs.wsj.com/digits/2010/03/17/small-businesses-join-lawsuit-against-yelp/" target="_blank">controversy</a>.</p>
<p>As Paige points out:</p>
<p>&#8220;<strong>The lens that you are actually  looking through was as important as what you were looking at. And  democratizing that lens became the most important thing that we could  possibly do.&#8221;</strong></p>
<p>I am in total agreement. One reason I have so much enthusiasm for <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">ARWave</a> (note: if you are interested in following the developer conversations there are several public Waves) is that I see this open framework playing an important role in the democratization of our augmented views, by creating an open, distributed, and universally accessible platform for augmented reality that will allow the creation of augmented reality content and games to be as simple as making an html page, or contributing to a wiki.</p>
<p>Federation, real time collaboration, <a href="http://linkeddata.org/" target="_blank">linked data</a> &#8211; ARBlips that contain metadata usable for semantic searches &#8211; and modified wave servers that can listen and respond properly to <a href="http://www.w3.org/TR/rdf-sparql-query/" target="_blank">SPARQL</a> HTTP requests (see Jason Kolb&#8217;s <a href="http://jasonkolb.com/" target="_blank">many interesting posts</a> on XMPP and Wave): these are just some of the reasons why ARWave could revolutionize augmented reality searches and more! (See <a href="http://www.mobilemonday.nl/talks/tish-shute-the-next-wave-of-ar/" target="_blank">my presentation at MoMo13</a> &#8211; video <a href="http://www.youtube.com/watch?v=Y7iqg8X24mU" target="_blank">here</a>.)</p>
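<p>For a taste of what machine-readable blip metadata buys you, here is a minimal sketch with the rdflib library: store a blip&#8217;s metadata as RDF triples, then query it with SPARQL. The vocabulary is invented for illustration, not the ARWave schema.</p>
<pre><code># a few triples about one AR blip, queried by a lat-bounded SPARQL filter
import rdflib

AR = rdflib.Namespace("http://example.org/ar#")
g = rdflib.Graph()
g.add((AR.blip1, AR.lat, rdflib.Literal(40.75)))
g.add((AR.blip1, AR.label, rdflib.Literal("Coffee here")))

results = g.query("""
    PREFIX ar: &lt;http://example.org/ar#&gt;
    SELECT ?blip ?label WHERE {
        ?blip ar:label ?label ;
              ar:lat   ?lat .
        FILTER(?lat &gt; 40.7)
    }
""")
for blip, label in results:
    print(blip, label)
</code></pre>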
<p>For more on real time social augmented experiences see our panel, <a href="http://en.oreilly.com/where2010/public/schedule/detail/11046" target="_blank">The Next Wave of AR: Exploring Social Augmented Experiences</a> at <a href="http://en.oreilly.com/where2010" target="_blank">Where2.0 2010</a>, and don&#8217;t miss the <a href="http://en.oreilly.com/where2010" target="_blank">Where2.0</a> conference which has been the crucible for the emergence of location technologies.</p>
<p>Augmented realities, proximity-based social networks, mapping &amp; location aware technologies, sensors everywhere, <a href="http://linkeddata.org/" target="_blank">linked data</a>, and human psychology are on a collision course in what <a href="http://www.schellgames.com/" target="_blank">Jesse Schell</a> calls the &#8220;Gamepocalypse.&#8221; See <a href="http://g4tv.com/videos/44277/dice-2010-design-outside-the-box-presentation/" target="_blank">Jesse Schell&#8217;s DICE 2010 talk here</a>, and check out his <a href="http://www.gamepocalypsenow.blogspot.com/" target="_blank">Gamepocalypse Now</a> blog. As Bruce Sterling notes in <a href="http://www.wired.com/beyond_the_beyond/2010/02/jesse-schell-future-of-games-from-dice-2010/" target="_blank">his post here</a>:</p>
<p><strong>*Another precious half hour out of your life. However: if you&#8217;re into interaction design, ubiquity, social networking, and trendspotting, in the gaming biz or out of it, you&#8217;re gonna wanna do yourself a favor and listen to this.</strong></p>
<p>And don&#8217;t forget to <a href="http://augmentedrealityevent.com/register/" target="_blank">register now</a> for the <a href="http://augmentedrealityevent.com/" target="_blank">Augmented Reality Event (ARE2010, June 2&#8211;3, 2010 &#8211; Santa Clara, CA)</a>.</p>
<p><a href="http://www.wired.com/beyond_the_beyond/" target="_blank">Bruce Sterling</a>, <a href="http://www.stupidfunclub.com/" target="_blank">Will Wright</a>, and Jesse Schell <a href="http://augmentedrealityevent.com/speakers/" target="_blank">will be keynoting, and there is a totally awesome line up of AR innovators and industry leaders</a>, including Paige and Anselm!</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/bruce_sterling.jpg"><img class="alignnone size-thumbnail wp-image-5289" title="bruce_sterling" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/bruce_sterling-150x150.jpg" alt="bruce_sterling" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/will_wright.jpg"><img class="alignnone size-thumbnail wp-image-5290" title="will_wright" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/will_wright-150x150.jpg" alt="will_wright" width="150" height="150" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Jesseschellpost.jpg"><img class="alignnone size-thumbnail wp-image-5291" title="Jesseschellpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/Jesseschellpost-150x150.jpg" alt="Jesseschellpost" width="150" height="150" /></a></p>
<h3>And:</h3>
<p>You are in luck!</p>
<p>Here is a discount code for the first 100 folks to register for the event (before the end of March). Go to the <a href="https://register03.exgenex.com/GcmRegister/Index.Aspx?C=70000088&amp;M=50000500" target="_blank">registration page</a>, type in code AR245, and &#8220;you&#8217;ll be asked to pay only $245 for 2 full days of AR goodness.&#8221;</p>
<p>&#8220;Watching AR prophet Bruce Sterling, gaming legend Will Wright, and visionary game designer Jesse Schell deliver keynotes for this price &#8211; is a magnificent steal. And on top, participating in more than 30 talks by AR industry leaders will turn these $245 into your best investment of the year,&#8221; as Ori put it so well on Games Alfresco!</p>
<p>If you want a preview of just how exciting it is to be involved in augmented reality right now, check out <a href="http://gamesalfresco.com/2010/03/17/magic-games-education-and-live-coding-at-the-augmented-reality-meetup-in-nyc/" target="_blank">Ori Inbar&#8217;s great round up</a> of our latest monthly <a href="http://www.meetup.com/ARNY-Augmented-Reality-New-York/" target="_blank">Augmented Reality Meetup NY</a> (or, as Ori notes, as we fondly like to call it, <a href="http://www.meetup.com/ARNY-Augmented-Reality-New-York/" target="_blank">ARNY</a>). There is lots of video up now (much thanks to <a href="http://www.chrisgrayson.com/" target="_blank">Chris Grayson</a>, who <a href="http://armeetup.org/001_arny/video/index.html" target="_blank">live streamed it</a>). <a href="http://www.marcotempest.com/" target="_blank">Augmented Reality Magician Marco Tempest</a> is an absolute <strong>must</strong> see (developers note: this is an awesome use of <a href="http://www.openframeworks.cc/" target="_blank">openFrameworks</a> and <a href="http://opencv.willowgarage.com/wiki/">OpenCV</a>). The video of the show includes a rare explanation of how it all works &#8211; see <a href="http://www.youtube.com/watch?v=6TluCaxz7KM&amp;feature=player_embedded" target="_blank">here</a>.</p>
<h3>Talking with Paige Saez &#8211; &#8220;Software is candy now!&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/paige_headshot_sq135.jpg"><img class="alignnone size-full wp-image-5266" title="paige_headshot_sq135" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/paige_headshot_sq135.jpg" alt="paige_headshot_sq135" width="135" height="135" /></a><br />
<strong> </strong></p>
<p><strong>Tish  Shute:</strong> What interests me about ImageWiki is that you have thought  about physical hyperlinking beyond the obvious of where to get your  next good hamburger and beer, right?</p>
<p><strong>Paige Saez:</strong> Right. It was interesting for me in just thinking about the two things. How do you design a tool to work in a way that people are getting value from it? And also, how do you make it work in a way where people can explore and hack it? I think the most interesting technologies, and this is probably something somebody else said sometime, are the ones that disappear, that we don&#8217;t see, instead we see <em>through</em>. They become just the intermediaries. They don&#8217;t interfere with what we are trying to do.</p>
<p>It&#8217;s a struggle whenever you are developing a new way for people to get information or make something happen, because you are playing with magic a little bit. And you have to make it vanish the way a good magic trick makes an experience a magical one. But at the same time you also need to reveal just enough that you let people in and they can see how to change it and make it their own. That is the interesting tension for this space right now: the idea of augmented reality begins to lead to the idea of a social commons for physical things. The ImageWiki project was a locus of just this tension. Tish, you and I have previously discussed how difficult it was to even get people to understand the two concepts independently.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_515dwxtjnds_b.png"><img class="alignnone size-full wp-image-5269" title="dhj5mk2g_515dwxtjnds_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_515dwxtjnds_b.png" alt="dhj5mk2g_515dwxtjnds_b" width="642" height="163" /></a></p>
<p><strong>Tish Shute:</strong> Right, until  recently most people hadn&#8217;t even heard the term augmented reality and I  am not sure that a particularly high percentage of people would  recognize it now despite the recent interest in smart phone apps.</p>
<p><strong>Paige Saez:</strong> It&#8217;s very difficult to get people to understand the two concepts, and now you are adding in the third level of participation as well. So I don&#8217;t think it is impossible, but I do think it requires narrative. It is interesting that you were talking about the stories you heard this morning from the creatives at the event [Tish mentioned that David Curcurito, Creative Director of Esquire, gave an excellent presentation at a Sobel Media event in NYC], because it&#8217;s narrative and the attention to telling a story that help you walk through all of the ways you can understand how completely expansive this area is right now.</p>
<p>So I think we have to play with it, play with the space and the  tools. I think we need to have an idea of what we want people to use  the tool for, and we need to not only introduce them to the tool and the  technology, but also introduce them to the concepts as well. So I see  it as a three part process.</p>
<p>I&#8217;m really excited to be there with people,  helping them do that. I think we need to do this face to face. I don&#8217;t  think this can be only through a social network. The ImageWiki website  is like one quarter of the entire picture, you know? The website is the  resource center and the place where you can see people adding images,  but what value is it to you to see an added image? It is more valuable  for you to be interacting with the image or interacting with the object  in the real world.</p>
<p>Designing for the experience of using the ImageWiki got very complicated very fast. I was trying to figure out the main thrust of the design for the UI for the ImageWiki, and at a certain point I had to take a step back and say &#8220;Okay, this has to be good enough for now, because we can lay it out and prototype as long as we want on the Web or mobile UI. What we need to be doing is going outside and actually aggregating and putting images into the database in order to see what exactly happens when we are adding.&#8221; It&#8217;s not just like you are taking a picture of something and adding it to Flickr. Using the tool is very context specific and the information is context specific, and you can&#8217;t necessarily make that all happen at the exact same time. I think these are really fascinating spaces to be struggling in and I&#8217;m so glad to be working in this space.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki_2.jpg"><img class="alignnone size-medium wp-image-5300" title="imagewiki_2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki_2-300x225.jpg" alt="imagewiki_2" width="300" height="225" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki1.jpg"><img class="alignnone size-medium  wp-image-5299" title="imagewiki" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/imagewiki1-300x225.jpg" alt="imagewiki" width="300" height="225" /></a></p>
<p><em>Images by Chris Blow of <a href="http://unthinkingly.com/" target="_blank">unthinkingly.com</a></em></p>
<p><strong>Tish Shute:</strong> Could you explain why we need ImageWiki? I mean, I think I have ideas on this, but perhaps you can explain to me from your point of view why we need an ImageWiki, as opposed to, say, extending the image space of Wikimedia or something added on to Flickr. I mean, maybe something leveraging the geotagged photo sets and APIs we already have?</p>
<p><strong>Paige Saez:</strong> Yes, definitely. It&#8217;s a really good question, I mean it really is. Like, do you need an entirely new place to be holding images outside of the places that we are already holding images? That&#8217;s a huge question; enormous. Especially when you take a look at the problems around that. It&#8217;s exhausting for an end user. Who the heck wants to go and reload everything into <em>yet another place</em>, right?</p>
<p><strong>Tish Shute:</strong> Right.</p>
<p><strong>Paige Saez:</strong> Moreover, who is going to really bother? Another problem would be what happens to the existing datasets that people have already committed to? And then of course there is the problem of authority and explanations why&#8230; gaining interest and authority in a space when nobody even understands why that space should exist in the first place. And those are just three, you know, off-the-top-of-my-head problems with that idea.</p>
<p>And yet at the same time, I don&#8217;t actually know how else to go about thinking about the ImageWiki unless I think about it as its own thing. Then you start thinking about models of large independent image databases that exist already, examples of this from a product standpoint &#8211; references to consider. The Getty Foundation comes to mind. There are many other historical centers that have huge resources and images that are licensed out and used. So here we have a working example of people already doing this. But successfully? I don&#8217;t know. We do have a ton of intellectual property rights and copyright issues and ownership and use issues with images currently. As a working artist, these issues were a major red flag for me to consider. Working on the social commons for augmented reality starts paralleling issues found in digital rights management and intellectual property.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_518gpgpr7gd_b.png"><img class="alignnone size-full wp-image-5274" title="dhj5mk2g_518gpgpr7gd_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/03/dhj5mk2g_518gpgpr7gd_b.png" alt="dhj5mk2g_518gpgpr7gd_b" width="441" height="606" /></a></p>
<p><strong>Tish Shute:</strong> But one good thing about Wikimedia, why I focused on Wikimedia, is that Flickr and Wikimedia already use Creative Commons licensing, right?</p>
<p><strong>Paige Saez:</strong> Creative commons, you know they have their own resource center, too. But you know they haven&#8217;t been successful as great databases for images so far.</p>
<p><strong>Tish Shute:</strong> What would you like to see that they don&#8217;t have? Like say maybe start with Wikimedia, right?</p>
<p><strong>Paige Saez:</strong> There&#8217;s just still a lot of issues with how to encourage people to want to contribute. It&#8217;s hard to show the value to someone who doesn&#8217;t already understand the value for some reason. At least for me personally this is something I have run into frequently. I don&#8217;t know if it is necessarily what Wikimedia doesn&#8217;t have, I think it is a lack of understanding of what creative commons really means. And there is still a very strong sense of ownership and concern about creative property rights. Being paid to be creative is a tremendously difficult thing to do. People fear losing their livelihoods. They think this is possible. Is it? I dunno.</p>
<p>For example: look at me. I take a photograph of something; I can sell that. And there&#8217;s a question about whether or not, as an artist, I want to have my photographs in a pool of images that is open and accessible when I could be making money on it instead. Now that is just an example. Me personally, I can see the value. But that is a common concern. The gist of the question being, &#8216;what value does it bring to give something away versus holding on to it?&#8217; A hugely popular discussion right now.</p>
<p>This is the same crux of the problem we are dealing with when we talk about thinking about images in the social commons for the real world. It&#8217;s a conversation about ownership. It&#8217;s about, who does this belong to, really? If I take a photograph of a Levi&#8217;s billboard, does that photograph belong to me or does it belong to Levi&#8217;s? We know the boundaries of that. But when the image becomes a living image, an image capable of transmutation, an image that provokes an action or hyperlinks to a product, experience, information&#8230; where are the boundaries in that?</p>
<p><strong>Tish Shute: </strong>But how is ImageWiki handling that differently from Wikimedia, I suppose is my question.</p>
<p><strong>Paige Saez:</strong> We haven&#8217;t solved the problem.</p>
<p><strong>Tish Shute:</strong> Yes, I suppose it is not like we have fully solved the problem of a creative commons for images on the internet, let alone the issues of a social commons for the real world! So neither one has solved the problem, right?</p>
<p><strong>Paige Saez:</strong> Exactly. To be honest, it made my head spin. I realized we were building a web application and a mobile tool doing augmented reality, real time feedback on the world, and suddenly we weren&#8217;t. Suddenly we were dealing with DNS and talking about physical hyperlinks and ownership and property. And basically at that point you just have to sit and really start catching up on IP issues and figuring out how to deal with that space in a much more holistic way. It became so important that we had to take a step back and go</p>
<p>&#8220;Oh my god, I think we have really uncovered a real problem here.&#8221;</p>
<p>At the point when we were building out the tools, we realized something was really going on with our project. Here we were thinking that this was just a beautiful experience of learning about the world around us. We really&#8230; Anselm and I both just really wanted this tool to exist. It was something that we both just really wanted to happen in the world, something that we felt really just thrilled to make. And we looked at it and used it and realized that instead of it just being a beautiful experience, it was a fundamental shift in how we understood everything. That it impacted our world in the same way the Internet impacted our world. It was a fundamental shift in understanding. A sea-change.</p>
<p>So I put down the prototype and went back to researching, read a ton of books on IP and went and presented to friends, family, schoolmates and co-workers trying to explain the project and then the larger conceptual framework that had emerged from the project. I began using the metaphor of thinking about Magritte&#8217;s &#8220;Ceci n&#8217;est pas une pipe.&#8221; Thinking about a pipe that isn&#8217;t actually a pipe.</p>
<p><strong>Tish Shute:</strong> Oh, yes!</p>
<p><strong>Paige Saez: </strong>&#8230;to try to help explain to people that the image that you see is actually not, you know, it&#8217;s not an image of a thing. It&#8217;s an image. And that image has a tone and that image has a voice, and that image was chosen. And there were decisions that were made through the interface of the camera, specific decisions that defined the view of what you were looking at. And that that wasn&#8217;t being acknowledged, and that that was a fundamental part of what the ImageWiki was aiming to do. The lens that you are actually looking through was as important as what you were looking at. And democratizing that lens became the most important thing that we could possibly do.</p>
<p><strong>Tish Shute:</strong> So the emphasis for you on ImageWiki was in fact the lens, even though you found obstacles to creating the interface, right?</p>
<p><strong>Paige Saez:</strong> Yes. Definitely. That&#8217;s what I fell in love with first. I really wanted to be able to use my phone to learn about what kind of tree this was or to buy tickets for the band on the poster I just saw, or see a hidden secret. For me it was very much a story, a narrative experience that I just thought was magical. And that is how I fell in love with it, which is not where I ended up.  Where I ended up was realizing it was a fundamental shift in not only my own understanding of how to use the world around me, but in our understanding of looking at the world.</p>
<p><strong>Tish Shute: </strong>It would be pretty scary if an image DNS was basically in the hands of one or very few people, right? I mean, even ImageWiki would be stuck with this problem: if you set up a bunch of servers, you are going to be holding a very, very large image database, whatever your motivation, right? I think at the minute that is why I am very into seeing everything through the lens of federation. Unless we have federation, these giant, central databases are inevitable, aren&#8217;t they?</p>
<p><strong>Paige Saez: </strong>Essentially, yes. I mean, I wasn&#8217;t able to walk through it as quickly as that. It kind of just overwhelmed me. Looking back on it, it seems perfectly obvious. I was just like &#8220;Oh my god, what have we done? Like, what is going on?&#8221; Particularly for me, because so much of my life has been spent in art, it was really easy to immediately understand the connection between the view, the viewer, and what&#8217;s being viewed as all just different layers of ownership, and to understand that it is a gaze. Right? We know that we are never able to look at something without passing judgment on it, but to see that become a part of the interface in a real-time fashion just blew my mind.</p>
<p><strong>Tish Shute: </strong>Yes.</p>
<p><strong>Paige Saez:</strong> I think you are right. Getty Images, Flickr images, no matter what you are always holding on to something and you have to be responsible for it. Right? So how do you deal with the responsibility but don&#8217;t take on too much ownership? Where is the boundary with that?</p>
<p><strong>Tish Shute: </strong>And for me, the simple answer to that is loosely connected small parts, distributed systems, and federation. Because the only way to be able to utilize these things is to have them distributed so that no one holds all the cards. Right?</p>
<p><strong>Paige Saez: </strong>Definitely and I personally agree with you wholeheartedly. However, the idea of distributed power is a concept that most people just don&#8217;t know how to deal with.</p>
<p><strong>Tish Shute:</strong> And it&#8217;s easier said than done, because the root problems that you are talking about aren&#8217;t gotten rid of through federation: if someone holds, sort of, all the good image databases, just because they have the potential to be federated, they may not choose to open them up on many levels.</p>
<p><strong>Paige Saez:</strong> And even then you have to think about, sort of, like the next level of it, which is we want it to be all open and accessible, but everything is owned by somebody. Like, what really is public anymore, in general?</p>
<p><strong>Tish Shute:</strong> And what is interesting, though, regardless of what we speculate conceptually on this, is that we have already set off down the road. I mean, we already have several large ones&#8230; they are all in beta, I suppose &#8211; Google Goggles, Point and Find, right? But we have applications that are beginning to implement this. They are beginning to implement search on it, and it is geo-located even if it&#8217;s not in an augmented view, right? So it is proximity based.</p>
<p><strong>Paige Saez: </strong>Right, right. I mean maybe the solution is that if we follow that line of thinking then Flickr will be partnering with Google Goggles. And then my images would stay under my ownership through the authority of Flickr. And I would use Flickr as my place to add images and they would just be responsive via my devices via AR.</p>
<p><strong>Tish Shute:</strong> That&#8217;s very interesting.</p>
<p><strong>Paige Saez:</strong> Definitely I think so. It is also the shortest distance between things.</p>
<p><strong>Tish Shute:</strong> Yes, and as Anselm kept pointing out, basically it is going to happen in the simplest way possible, really, regardless of the implications of that. But OK, getting back to ImageWiki. As you say neither Wikimedia nor Flickr were really designed to take this role, right?</p>
<p><strong>Paige Saez:</strong> Right.</p>
<p><strong>Tish Shute:</strong> With ImageWiki, you&#8217;ve had these ideas and a concern with the social implications of physical hyperlinking in your mind since its inception. Are there any design ideas you&#8217;ve come up with, as opposed to, as you say, connecting Flickr to Point and Find, or who knows, Google Goggles? How is ImageWiki going to be different, do you think? Is that a hard question at this point?</p>
<p><strong>Paige Saez:</strong> It is, and it&#8217;s a great question, and it&#8217;s a question I really love to think about. I think we have to introduce the politics with the tools. It has to be acknowledged that it&#8217;s not just a place to hold information, that&#8217;s what I feel in my heart.</p>
<p>At the same time, is that too much for people to really grasp at one time? In my experience it really has been, so the design of the experience needs to allow for an understanding of the power of the tool and the level of authority that the tool offers, while not getting in the way of it; just using it.  Because ultimately, at the end of the day, nobody will use anything if it isn&#8217;t valuable to them. And so I could talk for miles and miles and miles about how important it is that corporations don&#8217;t own all of the rights to all of the visual things in my life, right? For the rest of my life I could talk about that. The idea that advertising is dominating all of our views of anything in the world around us is horrifying. It doesn&#8217;t matter unless I can show somebody why it matters to them or how it affects them. It&#8217;s just that that is a tremendously difficult thing to explain through a user interface.</p>
<p>And I actually think that it&#8217;s great that tools like Google Goggles and Nokia Point and Find are here to do a lot of the hard work of showing people how it works. Recently somebody explained to me their experience of using Google Goggles. They went through this process of saying how Google Goggles took a picture and then did this really complicated visual scanning thing over the image, and it took a full minute.</p>
<p>And I said, &#8220;Well, of course they did it that way.&#8221; And they said, &#8220;Well, what do you mean?&#8221; I said, &#8220;Well, what they are really doing there when they are doing all these fancy graphics is showing you how it works.&#8221; And even if it isn&#8217;t actually related at all to how it functionally works, algorithmically, that&#8217;s not the point. The point is that this gesture of the time taken to make it look like it&#8217;s scanning an image and going back and forth with pretty colors is giving people the time to process that as an experience. That&#8217;s a metaphor for what&#8217;s really happening. And these kinds of metaphors are crucial in user experience design. We have lots and lots of examples of them and how they work, and many of them aren&#8217;t necessary. Like, for example, the bar that shows you the time it&#8217;s taking for something to process. There is no relationship between that and reality. But it is really important.</p>
<p><strong>Tish Shute:</strong> Yes, those bars often have no relationship to the actual time&#8230;</p>
<p><strong>Paige Saez:</strong> And that&#8217;s the thing. Like the idea of time versus our perceived understanding of time. Right? The length of time it takes for your Firefox browser to open and load your last 30 tabs, versus the reality of what&#8217;s actually happening. When you are doing that sort of research you are actually accessing millions and millions of places and points of interest all over the world, so we need more of that. We need more of the process shown. Anselm and I worked with a film maker named Karl Lind from In the Can Productions here in Portland to try and make a video about the ImageWiki. We made this little video and I can try to show it to you or send it to you if you want.</p>
<p><strong>Tish Shute:</strong> One of the issues with this kind of visual search is that it is inherently dependent on databases that, regardless of how they are federated, are going to be very large. Right? I mean, someone is going to have something big and aggregated there. I suppose someone will figure out the challenges of federated search eventually, but that is quite a big challenge!</p>
<p>So I suppose I am still trying to understand what ImageWiki can offer that we can&#8217;t get with any other existing service? How will there be a social commons, and even a social contract, for the world as a platform for computing and physical hyperlinks?</p>
<p>Eben Moglen brought up something when I talked to him about virtual worlds: he said we need code angels to let us know what is going on in the virtual space &#8211; who is gathering data and how, for example.</p>
<p><strong>Paige Saez:</strong> Tell me more about that, I want to hear more about that.</p>
<p><strong>Tish Shute: </strong>Eben suggested this metaphor when I was asking him about privacy in virtual worlds &#8211; the fact that people just didn&#8217;t know, when they were pushing avatars around virtual worlds, what metrics were being gathered on their behavior. And he basically said that what we need is code angels when we enter these spaces, because having the rules of the game buried in a TOC was ridiculous.</p>
<p><strong>Paige Saez:</strong> That is a really interesting idea.</p>
<p><strong>Tish Shute: </strong> Maybe ImageWiki needs to be our code angel to navigate the augmented world. I mean that&#8217;s what I want to see it as. And when I hear you talk, what I hear is you talking in broad categories about what a code angel might be in the space of images and image links to the physical world. I mean that is what I hear from you.</p>
<p><strong>Paige Saez:</strong> Yeah. No, I definitely agree with that. It is interesting. In that sense, it is kind of a protection layer. Is that what you are thinking?</p>
<p><strong>Tish Shute: </strong>Yes, I suppose because we can&#8217;t be navigating a lot of complicated opt-ins and opt-outs just to get around our neighborhood safely, in terms of privacy (also see Eben Moglen&#8217;s definition of privacy here&#8230;). We will need a code angel that is sort of keeping up with you in real time!</p>
<p><strong>Paige Saez:</strong> Right, right. I wonder how that would work in regards to images, though. That is a really interesting thing to try and put on an image. I guess why I am having such a hard time being specific about it is I am just trying to work it out in my head, thinking of a specific use case, like what would be an example of that?</p>
<p><strong>Tish Shute: </strong>Well, I suppose the example, and this is a crude one, is when you point your Google Goggles at the book jacket, the code angel, and this is very crude, would say: &#8220;You are right now drawing images from the Amazon database &#8211; they are collecting such and such data from your search.&#8221;</p>
<p>And then of course the ability to have crowd-sourced tagging and corrections&#8230;</p>
<p>There was a wonderful book that came out last year on how we can have commercial intelligence &#8211; Dan Goleman&#8217;s new book, &#8220;Ecological Intelligence: How Knowing the Hidden Impacts of What We Buy Can Change Everything&#8221;&#8230;</p>
<p>&#8230;how various stakeholders, including their customers, will drive corporations to do the morally right thing, because they will lose the commercial support of customers who won&#8217;t support them unless they are greener, fairer, and do the things we would like them to do, whatever that happens to be &#8211; physical hyperlinking and tagging, I guess, would be a big part of this.</p>
<p><strong>Paige Saez:</strong> Sort of a transparency issue. And that almost becomes a page rank algorithm in and of itself. I mean, now we are really talking about search more than anything, and what tool becomes the dominant search tool. Anselm and I talked a lot about one platform&#8230; I mean, eventually we will have a unified platform. It will&#8230; no matter what, for the Internet and for physical objects and visual objects in the real world. It will just be a matter of, literally, who can find the best and most valuable, most relevant information on a thing. Currently we just have it very proprietary.</p>
<p><strong>Tish Shute:</strong> Yes.</p>
<p><strong>Paige Saez: </strong>That definitely won&#8217;t last. It just can&#8217;t, because of the exact problem that you are raising. And we already know too much about resources and information as they pertain to products for us to ever go back to a time where we are not considering other ways of getting information about it anyway. Right?</p>
<p>Like I have the same concerns nowadays when I look at fruit. I look at a piece of fruit in the store. I would never just assume that the person who put the sticker on that fruit, anymore, is the ultimate authority necessarily. I would always assume at this point I could go online and go find out more information about a company. Issues about like eco-footprint or how much toxicity, or pesticides or whatnot are now totally accessible already.</p>
<p>So I am thinking, when you look at that piece of fruit and that sticker &#8211; for Google, say, with what you are describing &#8211; do we just go immediately to the company&#8217;s website, or is it even more specific? Do we know that the sticker on that piece of fruit is going to tell us specific information about that? Or are we just getting back the nutritional resources, or are we getting a listing of all of the different options out of a page rank algorithm that shows us, &#8220;Well, this is the website for the fruit. Here is the nutritional information. Here are the last 15 comments on it.&#8221; It&#8217;s basically just a basic search.</p>
<p>Have you heard of GoodSearch?</p>
<p><strong>Tish Shute:</strong> You mean http://en.wikipedia.org/wiki/GoodSearch?</p>
<p><strong>Paige Saez:</strong> Right.</p>
<p><strong>Tish Shute: </strong>A code angel interface would have to give you options on the possible views available, wouldn&#8217;t it?</p>
<p><strong>Paige Saez:</strong> Yes. You are then talking about filtering your view. Then it gets really interesting, of course. I don&#8217;t even know if we have a choice in that. I think we are really kind of hitting a wall with who owns the space and the platform. Is it just a basic search because we are already familiar with search? If you had an option to choose, say, &#8220;I want to look at this apple sticker and I only want to get&#8230; programmatically only looking at my friends&#8217; opinions of this company.&#8221;</p>
<p>Or I have a safety valve on it that only shows me certain information based on what the code angel knows about me, my preferences, my age, things like that. Then that gets really, really interesting, because we are trying to do all that work right now just with social media and the Internet. We are already overwhelmed with too much information. It is already past the point of comprehension. So to think that we would actually drill down into even more specifics is very interesting.</p>
<p><strong>Tish Shute:</strong> That was a point Anselm made about the fact that once you are into this mobile, just in time, one view kind of situation, it is quite different than the Internet where you can bring up all these different screens and go to another website.</p>
<p><strong>Paige Saez: </strong>Well yes, mobile is a different level of engagement. Very contextual. Much less information. Much more about timeliness. I don&#8217;t want to look at an apple and get back a Google search. Oh my God, no. That&#8217;s the last thing I want. I would love to be able to look at an apple and have my phone already know exactly what I want, information-wise, to get back from that apple. But I don&#8217;t know. It&#8217;s all contextual and personal. So I think the code angel concept you are talking about is really interesting, because you still need to think about who is the person adding or creating those filters &#8211; is it you, a filtered friend network, an algorithm? How much work is too much work? Where do we draw the line? How much of this are we willing to let the machine do for us?</p>
<p><strong>Tish Shute: </strong>Right.</p>
<p><strong>Paige Saez: </strong>And then of course once you have those filters in place, you need control over them. You will need to dial them up and dial them down, be able to choose and add new ones, so on and so forth. It becomes very modal at that point. For example, I want to change my view: to walk into a grocery store and, instead of finding out information, see where the hidden Easter egg puzzles were that my friends left last week, because we&#8217;re playing a game.</p>
<p>I&#8217;m still really attracted to the creative opportunities with the ImageWiki. I&#8217;m really attracted to changing this experience from being a one-to-one relationship (from Corporation to Consumer) to an open-ended relationship (from Person to Person). If I look at a book jacket, sure, I can find out where to buy the book, but that&#8217;s boring. Who cares? I&#8217;d like to find out a link to a story or an adventure or a movie or something unthought-of before.</p>
<p>How do we build that in? How do we encourage serendipity? Mystery? I think the ImageWiki is the space for building that in, actually &#8211; that would be the one place, right? That&#8217;s my really big fear: that this relationship just stays one-to-one. Click an image of a consumable object, get back the object&#8217;s retail value. How completely dull. We have to do better than this.</p>
<p>Additionally, what if I want to take a photograph of a book, an apple, or something, and I don&#8217;t want to pull back data. Instead, I want to pull back music, or I want to pull back a video, or a song, or lyrics, or a story, or another image. It&#8217;s just a hyperlink at the end of the day, you know? That&#8217;s all we&#8217;re really doing. Hyperlinks can pull back so many different things.</p>
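<p>[A toy sketch of Paige&#8217;s point, for the developers reading along: once an image is recognized, the &#8220;physical hyperlink&#8221; is just a lookup that can hand back any kind of media. The image IDs and payloads here are entirely made-up examples, not ImageWiki code.]</p>
<pre><code># Toy sketch: a physical hyperlink is just a lookup whose target can be
# any media type. All IDs and URIs below are hypothetical examples.
PHYSICAL_LINKS = {
    "band-poster-42":  {"type": "audio",  "uri": "http://example.org/latest-album.mp3"},
    "book-jacket-07":  {"type": "story",  "uri": "http://example.org/hidden-chapter.html"},
    "graffiti-wall-3": {"type": "social", "uri": "http://example.org/artist-profile"},
}

def resolve(image_id):
    """Return whatever the physical hyperlink points at, falling back to search."""
    return PHYSICAL_LINKS.get(
        image_id,
        {"type": "search", "uri": "http://example.org/search?q=" + image_id})

print(resolve("band-poster-42"))  # music, not just retail data
</code></pre>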
<p><strong>Tish Shute:</strong> And that&#8217;s one of the reasons I&#8217;m into mobile social interaction utility building, because without that, if we don&#8217;t have that way to do that in mobile technology&#8230; that&#8217;s very available on the Internet, as we&#8217;ve seen with Twitter. These applications are very easy to do on the Internet. They&#8217;re not easy to do natively in a mobile application&#8230;</p>
<p>Hey, I&#8217;m just promoting ARWave again. I should shut up.</p>
<p><strong>Paige Saez:</strong> Oh, no. I think it&#8217;s a fascinating concept, I really do. I totally agree. As we&#8217;ve talked about before, it&#8217;s amazing that marketing and advertising are helping push forward AR, and it&#8217;s great. It&#8217;s fantastic.</p>
<p>But it&#8217;s also the worst possible thing that could ever happen, because it is such a singular way of looking at an overall ubiquitous computing experience. There are other ways.</p>
<p>The best experience I ever had was trying to explain physical hyperlinks to people. I had to walk them through it. Good interactive isn&#8217;t something you present or show, it&#8217;s something you do. Nothing beats just walking around and showing people with a device or a tool or something else.</p>
<p>I mean, God forbid it always stays in our computers and our phones. I really hope we don&#8217;t have to be stuck living our entire lives with these horrible interfaces. But for the time being, we will. Having an AR app show you a puzzle, or a mystery, or a game, or an adventure is a magnificent experience, totally overwhelming, and people get it right away. There&#8217;s no question; they totally understand.</p>
<p><strong>Tish Shute:</strong> Yes, I agree.</p>
<p><strong>Paige Saez:</strong> You walk them through the experience with a physical hyperlink and then you say, &#8220;Here, I could use this device and I could show you where to buy this thing, or I could use this device and we could start playing a game.&#8221; Then everybody gets it.</p>
<p><strong>Tish Shute:</strong> So then I have a question, because one of the things Anselm said to me when he wanted me to refer back to you is that he feels the direction for ImageWiki should perhaps be to focus less on the technology and more on just the actual, I suppose, gathering of the images, how they&#8217;re going to be annotated, the metadata, right? But my question to him was: the problem is, if you do that without the platform, there&#8217;s no experience or motivation for people to do it. Right? Is there?</p>
<p><strong>Paige Saez: </strong>Yeah, I agree with you on that one. I&#8217;m curious what his&#8230; I think the reason he wants to do that is he wants to be able to show people examples via the resources. Like to be able to show someone a library, essentially, which I think makes sense with some people. I definitely think that some audiences would really relate to that. For me, it doesn&#8217;t make sense because I&#8217;m just very experiential. I need to do it and I need to show other people how to do it and I need to grow that way. I think that at the end of the day, those are great ways to go about doing it. It&#8217;s just that it&#8217;s a huge thing to do in either direction.</p>
<p>What Anselm&#8217;s really thinking on, I believe, is more about exemplifying how we read and understand images culturally. Then you&#8217;re really getting into Visual Studies and Critical Theory, which is what I did for my Masters at PNCA. I worked on the ImageWiki while I was in grad school; it was something I was doing for fun. Independently of my studies, the project led to issues of democracy and objects and property, and I ended up right smack in the middle of what I was studying: the nature and cultural analysis of images. Questions like &#8216;what exactly do we get out of images?&#8217;, and how all these different things are happening in an image, and how people get tons of totally different things out of an image depending on many factors.</p>
<p>The questions I began to ask myself got very philosophical. Questions like &#8220;Is this apple red? Is this apple red-orange? Is this a small apple? What&#8217;s my understanding of small versus your understanding of small?&#8221;</p>
<p>Say you supposed that you needed a text backup to the search: how would I be able to search for an apple? Because what if my understanding of apple is red and your understanding of apple is green? And so if I&#8217;m looking for a green apple, am I looking for the same green apple as you? It&#8217;s all semantics, sure. But at the same time, it gets bigger and bigger, and it&#8217;s fascinating.</p>
<p><strong>Tish Shute: </strong>Google Goggles seems to work best on book jackets, basically.</p>
<p><strong>Paige Saez: </strong>But book jackets are actually perfect for this. Book jackets are perfect for this problem, because book jackets are specifically designed art. So at the end of the day, we are still talking about creative works, artistic works, that have been designed as a communication tool. But that is not something that people can own. Creative works that are designed are a communication tool, with varying levels of skill to be sure, but still something anybody can do. What we need to do is to be using that language. We don&#8217;t need to be trying to reach as far as facial recognition. We need to develop our own logos, our own brand, our own&#8230; I mean, not brand. Brand is a bad way of saying it. Another way of saying it would be: just use it. Develop a visual language that we can use that is as effective and as well utilized as book jackets or movie posters or something.</p>
<p><strong>Tish Shute:</strong> What are some of the use cases for ImageWiki you would like to develop first?</p>
<p><strong>Paige Saez:</strong> My dream&#8230; I have like four or five use cases that I want to see happen. One of them is I walk down the street and there is a new poster for my favorite band. And I can just go up to the poster and use my device, whatever it looks like, and download the latest album. It&#8217;s transactional. I am able to just plug in my headset and walk down the street and the transaction is done. I saw something I wanted. It was beautiful. I was able to get it and I was able to move on in my life. And that is totally possible.</p>
<p>Another one would be I walk down the street and there is a piece of graffiti. And I am able to use my device to find out who the artist was that made it, to give them props, and to point my other friends to the fact that the piece is there and will most likely be there only for a short period of time &#8211; information retrieval and socialization.</p>
<p>Or, use my device to find an Easter egg, to find a narrative puzzle that ends up going on for weeks, and everybody is involved, and we are all playing this game together. Adventure-based, non-linear experiences. I want playfulness, not just purchases.</p>
<p><strong>Tish Shute: </strong> Did you think of piggybacking on the Flickr API for geo-tagged photos as a way to work with those databases or not?</p>
<p><strong>Paige Saez:</strong> Yeah, we definitely thought about that.</p>
<p><strong>Tish Shute: </strong>And why did you decide not to, for any reason or&#8230;?</p>
<p><strong>Paige Saez:</strong> Ultimately, we just&#8230; we were such a small group, we just had to tackle certain things at a certain time.</p>
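<p>[For the curious: Flickr&#8217;s REST API does expose geo-tagged photo search via <em>flickr.photos.search</em>, so a minimal sketch of the piggybacking we are discussing might look like the Python below. You would need your own API key; the coordinates and radius are just example values.]</p>
<pre><code># Minimal sketch of pulling geo-tagged photos from Flickr's REST API.
# "YOUR_API_KEY" is a placeholder; the response parsing assumes the
# standard Flickr JSON envelope.
import json
import urllib.parse
import urllib.request

params = {
    "method": "flickr.photos.search",
    "api_key": "YOUR_API_KEY",        # placeholder
    "lat": 45.523, "lon": -122.676,   # Portland, OR (example)
    "radius": 1,                      # search radius in km
    "has_geo": 1,
    "format": "json",
    "nojsoncallback": 1,
}
url = "https://api.flickr.com/services/rest/?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
for photo in data["photos"]["photo"][:5]:
    print(photo["id"], photo["title"])
</code></pre>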
<p><strong>Tish Shute:</strong> Right. And you were so prescient; you were working slightly before we had the mediating devices, weren&#8217;t you? You were just before the mobile devices really got adequate for this.</p>
<p><strong>Paige Saez:</strong> Yeah. We started on it&#8230; I believe it was January&#8230; No, December 2007. Basically, the iPhone had just launched maybe six months prior or something like that.</p>
<p><strong>Tish Shute:</strong> But not 3G and not 3GS, right?</p>
<p><strong>Paige Saez: </strong>Not 3GS. It was the first generation iPhone. We built the ImageWiki before the App Store existed.</p>
<p>We knew that the App Store was coming out.  And we knew that the App Store was going to be the biggest thing in the whole world. I remember getting into multiple fights with friends about how revolutionary the iPhone and the App Store were going to be and people thinking I was totally crazy; people just thinking I was absolutely nuts for being so excited about it.</p>
<p>It sucks that it is a closed proprietary system, but the App Store has done something for software that nothing has ever done in the whole world. Software is candy now. It&#8217;s candy. It is like when you are waiting at the grocery store in the checkout line and you are stuck behind somebody, and you have got all these little tchotchkes, candy bars, magazines, nail-clippers and things. That is the equivalent of software now. It&#8217;s become an impulse buy, which is amazing. Nobody would ever have thought&#8230; that is actually revolutionary. That&#8217;s huge.</p>
<p><strong>Tish Shute:</strong> <a href="http://www.cs.columbia.edu/~feiner/" target="_blank">Steven Feiner</a>, who is one of the founding fathers of augmented reality said to me during a conversations at the ARNY meetup that one reason that augmented reality, despite the hype, is manifesting very differently from how virtual reality burst onto the tech scene is that it is about affordable apps on affordable readily available hardware.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2010/03/18/visual-search-augmented-reality-and-physical-hyperlinks-for-playfulness-not-just-purchases-talking-with-paige-saez-about-imagewiki/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>People Meet People Meet Big Data: ScienceSim Explores Collaborative High Performance Computing</title>
		<link>http://www.ugotrade.com/2009/02/11/people-meet-people-meet-big-data-sciencesim-explores-collaborative-high-performance-computing/</link>
		<comments>http://www.ugotrade.com/2009/02/11/people-meet-people-meet-big-data-sciencesim-explores-collaborative-high-performance-computing/#comments</comments>
		<pubDate>Wed, 11 Feb 2009 22:40:02 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[Intel in Virtual Worlds]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[nanotechnology]]></category>
		<category><![CDATA[Open Grid]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open protocols for virtual worlds]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Open Source Virtual Worlds]]></category>
		<category><![CDATA[open standards for virtual worlds]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[science outreach in virtual worlds]]></category>
		<category><![CDATA[scientific simulation in virtual worlds]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[virtual worlds in Japan]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[big data]]></category>
		<category><![CDATA[collaboration and big data]]></category>
		<category><![CDATA[collaborative visualization]]></category>
		<category><![CDATA[haptic interfaces for virtual worlds]]></category>
		<category><![CDATA[Hypergrid]]></category>
		<category><![CDATA[linked data]]></category>
		<category><![CDATA[modelling complex systems]]></category>
		<category><![CDATA[n-body simulation]]></category>
		<category><![CDATA[Piet Hut]]></category>
		<category><![CDATA[rapid data movement in virtual worlds]]></category>
		<category><![CDATA[ScienceSim]]></category>
		<category><![CDATA[scientific simulation]]></category>
		<category><![CDATA[steering big data simulations from virtual worlds]]></category>
		<category><![CDATA[steering virtual worlds with brain waves]]></category>
		<category><![CDATA[super computing conference]]></category>
		<category><![CDATA[supercomputing]]></category>
		<category><![CDATA[Wilf Pinfold]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=2855</guid>
		<description><![CDATA[Wilfred Pinfold, Director, Extreme Scale Programs for Intel, and the Supercomputing Conference general chair, is working with some Intel colleagues to make a project called ScienceSim the centerpiece of a special workshop event at the SC09 conference (see Supercomputing Conference, an ACM and IEEE Computer society sponsored event). Recently, I interviewed Wilf Pinfold (see interview [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gwave_lg.jpg"><img class="alignnone size-full wp-image-2861" title="gwave_lg" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gwave_lg.jpg" alt="gwave_lg" width="540" height="540" /></a></p>
<p>Wilfred Pinfold, Director, Extreme Scale Programs for Intel, and the <em><a href="http://sc08.supercomputing.org/">Supercomputing Conference</a></em> general chair, is working with some Intel colleagues to make a project called <a href="http://www.sciencesim.com/">ScienceSim</a> the centerpiece of a special workshop event at the SC09 conference (<em>see the <a href="http://sc08.supercomputing.org/">Supercomputing Conference</a>, an ACM and IEEE Computer Society sponsored event</em>).</p>
<p>Recently, I interviewed Wilf Pinfold (see interview below), Mic Bowman (also <a href="../../2008/09/15/interview-with-mic-bowman-intel-the-future-of-virtual-worlds/">see my previous interview here</a>), and John A. Hengeveld (see interview below). I wanted to find out: What are the underlying goals of this SC conference program? Why are members of the SC community being encouraged to participate in the ScienceSim environment? What projects are beginning to emerge? And what are Intel&#8217;s goals in giving infrastructure support to further the conversation between high performance computing and collaborative virtual worlds?</p>
<p>The vision of creating new ways to collaborate and interact with big data does seem to be one of the more significant steps we can take at a time when we find many of our most complex systems roiling and threatening total collapse. As Tim O&#8217;Reilly has pointed out &#8211; from financial markets to the climate, the complex systems we depend on for our survival seem to be reaching their limits.</p>
<p>But how can we get from the place we are now &#8211; <a href="http://www.youtube.com/watch?gl=GB&amp;hl=en-GB&amp;v=gM4fmL6dLdY" target="_blank">see this example of an n-body simulation in OpenSim</a> &#8211; to the point where we can collaboratively steer, from our visualizations, big data simulations of climate change, financial markets, or the depths of the universe? The picture opening this post is a:</p>
<blockquote><p><em>Frame from a 3D simulation of gravitational waves produced by merging black holes, representing the largest astrophysical calculation ever performed on a NASA supercomputer. The honeycomb structures are the contours of the strong gravitational field near the black holes. Credit: C. Henze, NASA</em></p></blockquote>
<p>Wilf Pinfold explained to me that part of the reason to begin a dialogue on collaborative visualization at SC &#8217;09 is that supercomputing communities (which tend to be highly skilled and visionary) have played key roles in internet development in the past. Wilf pointed out that key browser technology developed out of these communities in the early days of the internet &#8211; see <a href="http://en.wikipedia.org/wiki/Mosaic_(web_browser)" target="_blank">this Wikipedia entry</a>, which gives background on the role of NCSA (the National Center for Supercomputing Applications).</p>
<p>The hope is that, while there are many obstacles to overcome, the supercomputing community has both the skills and the motivation to find solutions for creating collaborative environments capable of the kind of rapid data movement that scientific/big data visualization needs. Solving the problems of realtime collaborative interaction with big data will have many ramifications for the way we understand virtual reality, the metaverse, and virtual worlds (all terms that are becoming increasingly inadequate for cyberspace in the age of ubiquitous computing &#8211; an argument I will make in another post!).</p>
<p>There have already been a number of blogs on ScienceSim (see <a href="http://www.virtualworldsnews.com/2008/11/intel-creating-sciencesim-on-opensim.html" target="_blank">Virtual World News</a>, <a href="http://nwn.blogs.com/nwn/2009/02/intel-outside-.html" target="_blank">New World Notes</a>, <a href="http://www.vintfalken.com/intel-using-opensim-for-immersive-science-project/" target="_blank">Vint Falken</a>, and <a href="http://daneel-ariantho.blogspot.com/2009/02/sciencesim.html" target="_blank">Daneel Ariantho</a>). There have also been Intel blogs &#8211; <a href="http://blogs.intel.com/research/2009/01/sciencesim.php" target="_blank">see this post</a> by John A. Hengeveld (a senior business strategist working with Intel planners and researchers to accelerate the adoption of Immersive Connected Experiences), and Intel CTO <a href="http://blogs.intel.com/research/2008/11/immersive_science.php" target="_blank">Justin Rattner&#8217;s post</a> announcing the project this November.</p>
<p>But to blow my own horn a little, I think I was the first to blog the encounter between <a href="http://opensimulator.org/">OpenSim</a> and supercomputing (an encounter I to some degree provoked by making the introductions) &#8211; <a href="http://www.ugotrade.com/2008/07/19/astrophysics-in-virtual-worlds-implementing-n-body-simulations-in-opensim/" target="_blank">see this post</a>. So I have been following the ScienceSim initiative with great interest.</p>
<p>Very shortly after n-body astrophysicists Piet Hut and Jun Makino, creators of GRAPE (an acronym for &#8220;gravity pipeline&#8221; and an intended pun on the Apple line of computers) &#8211; a supercomputer that will <a href="http://grape.mtk.nao.ac.jp/grape/news/ABC/ABC-cuttingedge000602.html" target="_blank">become one of the fastest supercomputers in the world (again)</a> &#8211; met <a href="http://www.genkii.com/" target="_blank">Genkii</a>, a Tokyo-based strategic company working with OpenSim, the first n-body simulation appeared in OpenSim. And in a matter of weeks <a href="http://www.youtube.com/watch?v=gM4fmL6dLdY" target="_blank">this video went up on YouTube</a> &#8211; the result of a collaboration between MICA and Genkii. But the nirvana of being able to create visualizations using real time data from supercomputers, steered from a collaborative environment, is still a ways off.</p>
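<p>For readers new to the term: an n-body simulation just integrates Newtonian gravity over every pair of bodies. A minimal sketch of one naive direct-summation step might look like the Python below &#8211; an O(n&#178;) toy with a crude softening term, nothing like the optimized pipelines GRAPE implements in hardware.</p>
<pre><code># Naive direct-summation n-body step in normalized units (G = 1).
# This is an illustrative toy, not a production astrophysics code.
import math

G = 1.0

def step(bodies, dt):
    """bodies: list of dicts with mass 'm', 3-vector 'pos' and 'vel'."""
    # First compute accelerations from current positions and update velocities.
    for b in bodies:
        acc = [0.0, 0.0, 0.0]
        for other in bodies:
            if other is b:
                continue
            d = [o - s for o, s in zip(other["pos"], b["pos"])]
            r = math.sqrt(sum(x * x for x in d)) + 1e-9  # crude softening
            a = G * other["m"] / (r * r)
            acc = [ai + a * di / r for ai, di in zip(acc, d)]
        b["vel"] = [v + ai * dt for v, ai in zip(b["vel"], acc)]
    # Then advance positions with the updated velocities.
    for b in bodies:
        b["pos"] = [p + v * dt for p, v in zip(b["pos"], b["vel"])]

# Example: a light body circling a heavy one.
two = [
    {"m": 1.0,  "pos": [0.0, 0.0, 0.0], "vel": [0.0, 0.0, 0.0]},
    {"m": 1e-3, "pos": [1.0, 0.0, 0.0], "vel": [0.0, 1.0, 0.0]},
]
for _ in range(1000):
    step(two, 0.001)
print(two[1]["pos"])
</code></pre>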
<p>Supercomputing communities tend to be geographically very dispersed, and researchers often find themselves far from simulation facilities, so there are both the motivation and the skills to pioneer new tools for collaborative visualization. I know that astrophysicists certainly see their value (Piet Hut has some profound ideas on this). Astrophysicist Piet Hut and others (<a href="http://www.ugotrade.com/2008/07/19/astrophysics-in-virtual-worlds-implementing-n-body-simulations-in-opensim/" target="_blank">see here for more</a>) have been pioneering the use of VWs for collaboration. There are two virtual world organizations, both founded by Piet Hut and collaborators, that are currently exploring the use of OpenSim for scientific visualizations. One is specifically aimed at astrophysics, MICA, the <a href="http://www.mica-vw.org/" target="_blank">Meta Institute for Computational Astrophysics</a>, and the other is aimed broadly at interdisciplinary collaborations in and beyond science, <a href="http://www.kira.org/" target="_blank">Kira</a>, a 12-year-old organization focused on &#8216;science in context&#8217;. As of last week, there are two weekly workshops sponsored jointly by Kira and MICA that explore the use of OpenSim, ScienceSim, and other virtual worlds. One of them is <a href="http://www.kira.org/index.php?option=com_content&amp;task=view&amp;id=124&amp;Itemid=154" target="_blank">&#8220;Stellar Dynamics in a Virtual Universe Workshop&#8221;</a> and the other is <a href="http://www.kira.org/index.php?option=com_content&amp;task=view&amp;id=119&amp;Itemid=149" target="_blank">&#8220;ReLaM: Relocatable Laboratories in the Metaverse.&#8221;</a></p>
<p>MICA was founded two years ago by Piet Hut within the virtual world of <a href="http://qwaq.com" target="_blank">Qwaq Forums</a> (see the paper <a href="http://arxiv.org/abs/0712.1655" target="_blank">&#8220;Virtual Laboratories and Virtual Worlds&#8221;</a>). The Kira Institute is much older: it was founded in 1997. Later this month, on February 24, Kira will celebrate its 12th anniversary with a presentation of talks, a panel discussion, and a series of workshops. See the <a href="http://www.kira.org/index.php?option=com_content&amp;task=view&amp;id=83&amp;Itemid=113" target="_blank">Kira Calendar</a> for the main event, and the Kira Japan branch for a <a href="http://www.kirajapan.org/event/" target="_blank">special mixed RL/SL event</a> in Tokyo. During both events, Junichi Ushiba will give a talk about his research in which <a href="http://nwn.blogs.com/nwn/2007/10/the-second-life.html" target="_blank">paralyzed patients steer avatars using only brain waves</a>.</p>
<p>Other early adopters of ScienceSim include Tom Murphy, who teaches computer science at Contra Costa College. Prior to teaching, Tom spent 35+ years working for supercomputer manufacturers. Tom said:</p>
<blockquote><p>It is very natural for me to find significantly new ways to visualize and interact with scientific mathematical models via ScienceSim and the OpenSim software behind it. ScienceSim also allows us to interact with each other and teach students in new ways.</p></blockquote>
<p>Charlie Peck, chair of the SC09 Education Program (his day job is teaching computer science at Earlham College in Richmond, IN), is also working with Wilf Pinfold, Tom Murphy, and others &#8220;to explore how 3D Internet/metaverse technology can be used to support science education and outreach.&#8221;</p>
<p><a href="http://www.ics.uci.edu/~lopes/" target="_blank">Cristina Videira Lopes</a>, University of Irvine, is doing very interesting workÂ  on road and pedestrian traffic simulations. Crista is also the creator of <a href="http://opensimulator.org/wiki/Hypergrid" target="_blank">hypergrid in OpenSim</a>,</p>
<h3>People Meet People Meet Data: A Conversation With Mic Bowman</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/sciencesim_002_thumb1.png"><img class="alignnone size-full wp-image-2908" title="sciencesim_002_thumb1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/sciencesim_002_thumb1.png" alt="sciencesim_002_thumb1" width="404" height="239" /></a><em></em><br />
<em>Screenshot of ScienceSim from <a href="http://daneel-ariantho.blogspot.com/2009/02/sciencesim.html" target="_blank">Daneel Ariantho</a></em></p>
<p><strong>Tish:</strong> How does this work on ScienceSim fit into a wider dialogue on linked data? Where people meet people meet data, and where data meets data?</p>
<p><em><strong>Mic:</strong> Yeah&#8230; that&#8217;s hard, by the way. Open integration of data (and, more interestingly, of the functions on data) is very hard when it comes from multiple, independent sources.</em></p>
<p><em>That&#8217;s the people part. For example, Crista builds a model of the UCI campus, somebody else builds accurate models of several cars, and another expert provides the simulation that computes the pollution those cars generate in that environment&#8230; it&#8217;s bringing people together to solve real problems, no matter how far apart they are physically.</em></p>
<p><strong>Tish:</strong> You mention three different simulations here. Could you explain why it is difficult to integrate data from multiple sources?</p>
<p><em><strong>Mic:</strong> Integrating data from multiple sources has always been a problem of understanding &amp; interpreting both the syntax &amp; semantics of the data. Even relatively simple things like multiple date formats require explicit translation. More complex formats, like the many formats in which urban-planning data is represented, are barely computable independently, let alone in conjunction with data from other sources (each with its own representation). It&#8217;s often the expertise &amp; collaboration of bringing people (and their bags of tools) together that solves these problems.</em></p>
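<p>Mic&#8217;s date-format point is easy to make concrete. Below is a minimal Python sketch of the explicit translation he mentions &#8211; the two source formats are hypothetical stand-ins for whatever real feeds would actually emit:</p>
<pre><code># Normalizing timestamps from two hypothetical sources into ISO 8601.
from datetime import datetime

# Each source declares its own format string (illustrative examples only).
SOURCE_FORMATS = {
    "traffic_feed": "%m/%d/%Y %H:%M",       # e.g. "02/11/2009 14:30"
    "pollution_log": "%Y-%m-%dT%H:%M:%S",   # e.g. "2009-02-11T14:30:00"
}

def normalize(source, raw):
    """Parse a source-specific timestamp and return a canonical ISO string."""
    parsed = datetime.strptime(raw, SOURCE_FORMATS[source])
    return parsed.isoformat()

print(normalize("traffic_feed", "02/11/2009 14:30"))      # 2009-02-11T14:30:00
print(normalize("pollution_log", "2009-02-11T14:30:00"))  # 2009-02-11T14:30:00
</code></pre>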
<p><strong>Tish:</strong> And in this case the bag of tools is high-performance modeling&#8230;?</p>
<p><em><strong>Mic:</strong> High-performance modeling, rich visualizations, and data. It&#8217;s those three that matter&#8230; data, function, and interface.</em></p>
<p><strong>Tish:</strong> Some people have a very hard time wrapping their heads around the fact that anything that seems related to Second Life can do this. Can you explain more about the difference between SL and OpenSim?</p>
<p><em><strong>Mic:</strong> OpenSim potentially improves data &amp; function because it can be extended through region modules. Region modules hook directly into the simulator to provide additional functionality. For example, a region module could be implemented to drive the behavior of objects in a virtual world based on a protein-folding model.</em></p>
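<p>To make the region-module idea concrete: the actual OpenSim API is C# and defines its own module interfaces, so what follows is only a conceptual Python sketch with invented names &#8211; a plug-in that polls an external simulation each frame and pushes its state onto in-world objects:</p>
<pre><code># Conceptual sketch of a region-module-style plug-in: an external
# simulation drives the positions of in-world objects on every frame.
# NOT the real OpenSim API; all names here are invented for illustration.

class ExternalSimulation:
    """Stand-in for, e.g., a protein-folding code producing positions."""
    def __init__(self):
        self.t = 0.0

    def current_positions(self):
        self.t += 0.1
        # One "atom" drifting along an axis; a real code would return many.
        return {"atom_1": (self.t, 0.0, 0.0)}

class RegionModule:
    """Hooks into the simulator's per-frame update loop."""
    def __init__(self, scene, sim):
        self.scene = scene  # mapping of object name -> position
        self.sim = sim

    def on_frame(self):
        for name, pos in self.sim.current_positions().items():
            self.scene[name] = pos  # move the in-world object

scene = {}
module = RegionModule(scene, ExternalSimulation())
for _ in range(3):
    module.on_frame()
print(scene)  # {'atom_1': (0.30000000000000004, 0.0, 0.0)}
</code></pre>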
<p><em>We need to work on additional viewer capabilities to address the user interface limitations.</em></p>
<p><strong>Tish:</strong> Yes, Rob Smart&#8217;s (IBM) recent data integrations with OpenSim (<a href="http://robsmart.co.uk/2009/01/22/visualizing-live-shipping-data-in-opensim-isle-of-wight-ferries/" target="_blank">see here</a>) are impressive. Regarding viewers, one of the biggest objections to virtual worlds is the mouse-pushing, PC-tied interface.</p>
<p><em><strong>Mic:</strong> There are great opportunities for improving the interface.</em></p>
<p><strong>Tish:</strong> Yes, I really like where Andy Piper&#8217;s experiments with haptic interfaces for OpenSim lead &#8211; <a href="http://andypiper.wordpress.com/2009/02/06/haptic-user-interfaces/" target="_blank">see Haptic Fantastic</a>! And I think that we will have cyberspace ubiquitous in our environment, not just stuck on a PC screen, sooner than we think.</p>
<p><em><strong>Mic:</strong> Mic&#8217;s opinion (not Intel&#8217;s): until we get souped-up sunglasses with HD screens embedded (or writing directly onto the eye), there will always be a role for the PC/console/TV. But it isn&#8217;t about the device&#8230; it&#8217;s about the services projected through the device&#8230; sometimes you&#8217;ll want a very rich experience&#8230; sometimes you&#8217;ll want an experience NOW, wherever you are.</em></p>
<p><strong>Tish:</strong> I think people are only just realizing that VWs will be a now-and-wherever-you-are experience very soon.</p>
<p><em><strong>Mic:</strong> That&#8217;s the critical observation: the virtual world is not an application you run&#8230; it&#8217;s a &#8220;place&#8221;&#8230; and you interact with it where you are, or maybe interact through it. Speaking for Intel&#8230; it is the spectrum of experiences that is critical to support.</em></p>
<h3>Interview with Wilfred Pinfold</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gustav_h.jpg"><img class="alignnone size-full wp-image-2860" title="gustav_h" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gustav_h.jpg" alt="gustav_h" width="416" height="200" /></a></p>
<p><em>Picture from National Science Foundation &#8211; <a href="http://www.nsf.gov/news/news_summ.jsp?cntn_id=112166" target="_blank">&#8220;Climate Computer Modeling Heats Up.&#8221;</a></em></p>
<p><strong>Tish Shute:</strong> I know your day job at Intel is in high-performance computing. Could you explain a little bit more about what you are working on in this regard &#8211; a mini state of play for high-performance computing from your perspective?</p>
<p><em><strong>Wilfred Pinfold:</strong> My title is Director, Extreme Scale Programs. This program drives a research agenda that will put in place the technologies required to build Exa-scale (10^18) systems by 2015. The current generation of high-performance computers is Peta-scale (10^15), so this is a 1000x increase in performance, and it will require significant improvements in power efficiency, reliability, and scalability, as well as new techniques for dealing with locality and parallelism.</em></p>
<p><strong>Tish:</strong> The nirvana, in terms of linking supercomputers to the collaborative spaces of immersive virtual worlds, is to be able to create visualizations using real-time data from supercomputers in collaborative VW environments, and ultimately for researchers to be able to collaborate on and steer their simulations from their visualizations. Where are we now in terms of scientific data visualization in VWs? And what are the current obstacles to using real-time data from supercomputers?</p>
<p><em><strong>Wilf: </strong>Being able to steer a simulation from a visualization requires both a visualization interface that allows interaction and a simulation that runs fast enough to respond in interactive timeframes. For example, a weather model that predicts the path of a hurricane would need to operate at something close to 1000x real time. This would run through a day in ~1.5 minutes, allowing an operator to run the simulation over several days, multiple times, with different parameters, in a single sitting, to understand the likelihood of certain outcomes.</em></p>
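<p>Wilf&#8217;s arithmetic checks out: at 1000x real time, a simulated day takes 24 &#215; 60 / 1000 = 1.44 minutes of wall-clock time. A trivial Python check (the speedups other than 1000x are hypothetical):</p>
<pre><code># Wall-clock minutes needed to simulate one day at various speedups.
MINUTES_PER_DAY = 24 * 60

for speedup in (100, 1000, 10000):  # 1000x is Wilf's figure; others hypothetical
    wall_minutes = MINUTES_PER_DAY / speedup
    print(f"{speedup:>6}x real time: one simulated day in {wall_minutes:.2f} min")
</code></pre>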
<p><strong>Tish:</strong> Do you see a networked, online, collaborative virtual world being capable of serving as a visualization interface that allows meaningful interaction with the hurricane scenario you describe in the near future (the next 6 to 18 months)?</p>
<p><em><strong>Wilf: </strong>I was using the hurricane example to explain the usage model, not an imminent capability. Hurricane simulation: accurate hurricane simulations require multiscale models able to resolve the global forces working on the storm as well as the microforces that define precipitation. We can build useful weather models today that run faster than real time (anything slower is not useful for prediction), but we are a long way from the ideal.<br />
Visualization: there are excellent visualizations of weather systems, but I have not yet seen a virtual world that can track a simulation and allow the scientist or team of scientists to see what is going on at the macro scale and also zoom in to see precipitation conditions. Today&#8217;s supercomputers are much better at this than they were a few years ago, but they are a long way from ideal.</em></p>
<p><strong>Tish:</strong> Open source virtual world technologies are pretty diverse in their approaches; Croquet, Sun&#8217;s Wonderland, and OpenSim are quite different and have different strengths and weaknesses. As you have become more familiar with OpenSim, what have you found about the technology that particularly lends itself to ScienceSim? (Mic mentioned Crista&#8217;s hypergrid code, for example; modularity is another feature often cited.)</p>
<p><em><strong>Wilf: </strong>We have found OpenSim&#8217;s client-server model well suited to the visualization model, and the ability to put the server next to the supercomputer producing the visualization data is critical. We are, however, very interested in other environments, and we encourage papers, demonstrations, and research on any of these platforms at the conference.</em></p>
<h3>Interview with John A. Hengeveld</h3>
<p><strong>Tish Shute:</strong> OpenSim&#8217;s dependence on Second Life-based viewers is sometimes cited as a limitation, and sometimes as a strength. What are your views on this? What would a strong open viewer project directed at science applications bring to the picture?</p>
<p><em><strong>John Hengeveld:</strong> There may be more than one strong open viewer project required for OpenSim-compatible experiences. The strength of the Hippo viewer, for example, is availability, and its weakness is the size of the client. We would love a ubiquitous client that runs on all platforms, but each hardware platform brings tradeoffs and restrictions of its own. Today, probably all of the folks innovating in the space can deal with the size of a very fat rich-client app&#8230; they have big computers anyway. But as we get into more 3D entertainment and augmented reality applications &#8211; virtual malls, collaboration apps, etc. &#8211; there is a great deal of room to optimize for the specific experience. Balancing visual experience against the available bandwidth and compute performance, tying into standard browsers, etc.&#8230; people have done some of this work, and I think all of it adds to the usefulness of these worlds.</em></p>
<p><strong>Tish:</strong> Integrating high-end game engines and OpenSim opens up new possibilities, but licensing issues have been an obstacle. Could a project like ScienceSim get a non-commercial license for a high-end game engine? What would that bring to the picture?</p>
<p><em><strong>John: </strong>Anything is possible. Game engines can give a great deal of design power for high-value experiences, but the programming of these experiences must be simplified. Mainstream adoption in the enterprise can&#8217;t be premised on the programming model of studio games&#8230; that&#8217;s a big step to get over, I think. There are very interesting possibilities when we take that step, though: simulation, training, agents of various types (I just finished watching &#8220;The Matrix&#8221; for like the billionth time&#8230; I think agents are cool&#8230;).</em></p>
<p><strong>Tish:</strong> Where does Larrabee fit into the picture of ScienceSim and next-generation virtual worlds?</p>
<p><em><strong>John:</strong> We are all very excited about the Larrabee architecture and its application to workloads like next-generation virtual worlds, both in the client &#8211; delivering immersive reality &#8211; and someday potentially in a distributed architecture simulating and producing these worlds. For Intel, CVC is an all-play: Atom will be used in strong mobile clients; Core will be used in enterprise PCs, laptops, and desktops; Xeon will be simulating these environments and handling the data communication; and whatever we brand Larrabee will be enabling compelling visual experiences. Oh, and our software products (Havok, tools, and others) will be building blocks in knitting all this together. Larrabee is a part, but there are a lot of other pieces in our vision&#8230;</em></p>
<p><strong>Tish:</strong> If the kind of rapid data movement that scientific visualization needs is achieved in virtual worlds, this will be quite a game changer for business applications of VWs too. It will also blur the boundaries between what we call virtual worlds and mirror worlds. It seems to me this kind of rapid data movement is a vital step towards what Mic described to me as Intel&#8217;s vision of CVC: &#8220;Connected Visual Computing is the union of three application domains: MMOG, metaverse, and paraverse (or augmented reality).&#8221; It almost seems to me that if you achieve your goals for ScienceSim, you will change how we think about virtual worlds in general. What do you think?</p>
<p><em><strong>John:</strong> I certainly hope so&#8230; Part of our goal is to stimulate innovation in the technology and usage models that will enable broad mainstream adoption of CVC-based applications (what we categorize as immersive connected experiences). By tackling the scientific visualization problem, we hope to find the key technology barriers and encourage the ecosystem to solve them.</em></p>
<p><strong>Tish: </strong>To me, virtual worlds and augmented reality should be complementary and connected experiences. How do you see this connection evolving?</p>
<p><em><strong>John:</strong> We certainly see them as related. In the long term there are many common building blocks, but they aren&#8217;t united per se. It&#8217;s about the user experience, and in some usages the two are almost identical&#8230; in some, they don&#8217;t look or feel at all alike&#8230; the viewer differs by a lot. Our approach is to enable building blocks so that people can quickly build out usages that are robust.</em></p>
<p><strong>Tish: </strong>What is Intel&#8217;s vision for ubiquitous mobile computing and an internet of objects? How can high-performance computing be an enabler for this vision?</p>
<p><em><strong>John: </strong>Mobile computing is a central part of our life, culture, and community in economically enabled economies. It feeds the data of our decisions, it connects us to entertainment, it is the access point to our soapboxes, pulpits, economy, and families. This creates a massive increase in data, and a massive increase in interactions, transactions, and visualizations. While many HPC applications will be behind the scenes (finance, health, energy, visual analytics, and others), HPC will emerge as part of a scale solution to serving some of this increase&#8230; particularly that part where interactions and visualizations are complex or compelling, or where scale enables the usage per se. I talked about my love of agents earlier, and some of that comes in here: compute working behind the scenes to help manage the data complexity and manage some of the base interactions between ourselves and technology. The other thing we talk about internally is the &#8220;Hannah Montana usage,&#8221; where millions of people use their mobile devices to access and participate (using the sensors in the device) in an interactive live concert. When Miley hears the applause of a virtual interactive audience&#8230; and can scream back at them&#8230; we&#8217;re there. Access to ubiquitous compute will be mobile, and interactive experiences will be complex&#8230; and HPC can help make that real. Watch out for the mental trap that HPC is always high-end supercompute clusters, though&#8230; &#8220;mainstream HPC&#8221; &#8211; smaller clusters, high thread counts, etc. &#8211; will play a key part in all of this as well.</em></p>
<p>Interesting that John ended on this point, as this just came in from <a href="http://blog.wired.com/gadgets/2009/02/intel-fights-re.html" target="_blank">Wired</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2009/02/11/people-meet-people-meet-big-data-sciencesim-explores-collaborative-high-performance-computing/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
	</channel>
</rss>
