<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; pervasive computing</title>
	<atom:link href="http://www.ugotrade.com/tag/pervasive-computing/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Real Time Big Data at Strata 2011: Ambient Findability, Social Search, GeoMessaging, Augmented Data, and New Interfaces</title>
		<link>http://www.ugotrade.com/2011/01/20/real-time-big-data-at-strata-2011-ambient-findability-geomessaging-augmented-data-and-new-interfaces/</link>
		<comments>http://www.ugotrade.com/2011/01/20/real-time-big-data-at-strata-2011-ambient-findability-geomessaging-augmented-data-and-new-interfaces/#comments</comments>
		<pubDate>Thu, 20 Jan 2011 22:48:12 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[New Interfaces]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Alistair Croll]]></category>
		<category><![CDATA[Ambient Findability]]></category>
		<category><![CDATA[Android Tasker]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[attention data]]></category>
		<category><![CDATA[augmented data]]></category>
		<category><![CDATA[augmented reality ecosystem]]></category>
		<category><![CDATA[augmented reality search]]></category>
		<category><![CDATA[BackType]]></category>
		<category><![CDATA[big data]]></category>
		<category><![CDATA[Big data and new interfaces]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Cassandra]]></category>
		<category><![CDATA[Collecta]]></category>
		<category><![CDATA[content-shifting]]></category>
		<category><![CDATA[curating big data]]></category>
		<category><![CDATA[Data Engineering]]></category>
		<category><![CDATA[data privacy]]></category>
		<category><![CDATA[digital divide]]></category>
		<category><![CDATA[distributed computing]]></category>
		<category><![CDATA[Edd Dumbill]]></category>
		<category><![CDATA[Factual]]></category>
		<category><![CDATA[future of work]]></category>
		<category><![CDATA[geo]]></category>
		<category><![CDATA[geo social aware discovery]]></category>
		<category><![CDATA[geo-search]]></category>
		<category><![CDATA[geodata]]></category>
		<category><![CDATA[geolocation]]></category>
		<category><![CDATA[Geoloqi]]></category>
		<category><![CDATA[GeoMessaging]]></category>
		<category><![CDATA[geosearch]]></category>
		<category><![CDATA[gestural interfaces]]></category>
		<category><![CDATA[Gov2.0.]]></category>
		<category><![CDATA[HBase]]></category>
		<category><![CDATA[Hive]]></category>
		<category><![CDATA[key data trends]]></category>
		<category><![CDATA[linked data]]></category>
		<category><![CDATA[location data]]></category>
		<category><![CDATA[Maneko Neki]]></category>
		<category><![CDATA[MapReduce]]></category>
		<category><![CDATA[mapufacture]]></category>
		<category><![CDATA[Mesos]]></category>
		<category><![CDATA[Michal Avny]]></category>
		<category><![CDATA[mobile local interactions]]></category>
		<category><![CDATA[MongoDB]]></category>
		<category><![CDATA[My6sense]]></category>
		<category><![CDATA[neogeography]]></category>
		<category><![CDATA[NoSQL]]></category>
		<category><![CDATA[OpenGeo]]></category>
		<category><![CDATA[OpenGov]]></category>
		<category><![CDATA[P2P cloud computing]]></category>
		<category><![CDATA[pervasive computing]]></category>
		<category><![CDATA[Q&A]]></category>
		<category><![CDATA[Q&A ecosystems]]></category>
		<category><![CDATA[Q&A platforms]]></category>
		<category><![CDATA[Q&A The New Search Insurgents]]></category>
		<category><![CDATA[Quora]]></category>
		<category><![CDATA[RabbitMQ]]></category>
		<category><![CDATA[real time data analytics]]></category>
		<category><![CDATA[real time data in mobile development]]></category>
		<category><![CDATA[real time search]]></category>
		<category><![CDATA[real time search engines]]></category>
		<category><![CDATA[real time social discovery]]></category>
		<category><![CDATA[semantic web]]></category>
		<category><![CDATA[Simple Geo]]></category>
		<category><![CDATA[social graph]]></category>
		<category><![CDATA[social search]]></category>
		<category><![CDATA[social web]]></category>
		<category><![CDATA[Sophia Parafina]]></category>
		<category><![CDATA[Strata 2011]]></category>
		<category><![CDATA[Swift River]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[Topsy]]></category>
		<category><![CDATA[Web 2.0 Summit]]></category>
		<category><![CDATA[Who owns your data?]]></category>
		<category><![CDATA[XMPP]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=6025</guid>
		<description><![CDATA[We are in the age of unearthing and uncovering data, and only just at the beginning of the age of processing data and dealing with it (see my interview with Anselm Hook, Part 2 upcoming). O&#8217;Reilly&#8217;s Strata Conference 2011 will explore &#8220;the change brought to technology and business by data science, pervasive computing, and new [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/noisedderived31.jpg"><img class="alignnone size-medium wp-image-6034" title="noisedderived3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/noisedderived31-300x163.jpg" alt="" width="300" height="163" /></a></p>
<p>We are in the age of unearthing and uncovering data, and only just at the beginning of the age of processing data and dealing with it (see my interview with <a href="http://www.hook.org/" target="_blank">Anselm Hook</a>, Part 2 upcoming). <a href="http://strataconf.com/strata2011" target="_blank">O&#8217;Reilly&#8217;s Strata Conference 2011</a> will explore &#8220;the change brought to technology and business by data science, pervasive computing, and new interfaces.&#8221; It is, perhaps, one of the most important events of 2011.</p>
<p>Data is driving a revolution much as coal, oil, and steel powered the industrial revolution. And the world-changing insight from Karl Marx that &#8220;the industrial revolution polarized the world into two groups: those who own the means of production and those who work on them&#8221; is taking on new life, as <a href="http://twitter.com/#!/acroll" target="_blank">Alistair Croll</a>, co-chair of <a href="http://strataconf.com/strata2011" target="_blank">Strata 2011</a>, points out in his post <a href="http://mashable.com/2011/01/12/data-ownership/" target="_blank">&#8220;Who Owns Your Data?&#8221;</a></p>
<p><strong>&#8220;The important question isn&#8217;t who owns the data. Ultimately, we all do. A better question is, who owns the means of analysis? Because that&#8217;s how, as Brand suggests, you get the right information in the right place. The digital divide isn&#8217;t about who owns data &#8211; it&#8217;s about who can put that data to work.&#8221;</strong></p>
<p>Strata is where a vanguard will meet, not only to discuss this revolution&#8217;s futures, but to define how to create, handle, and build the platforms and experiences that will harness the data. My flight is booked! (Also check out <a href="http://www.bigdatacamp.org/">BigDataCamp</a>, which takes place the night before <a title="Strata Conference" href="https://en.oreilly.com/strata2011/public/regwith/str11dnaff" target="_blank">Strata</a>.)</p>
<p>The picture opening this post is from Michael EdgeCumbe&#8217;s <a href="http://garden.neocyde.net/thoughts/2010/12/fall-2010-itp-winter-show-project/">Fall 2010: ITP Winter Show Project</a>, a project exploring ways to intuitively get a feel for what is going on with big data sets using &#8220;the gestural manipulation and stereoscopic visualization of complex data to create a meditative state for data analysis.&#8221; Michael&#8217;s project will be part of the <a href="http://strataconf.com/strata2011/public/schedule/detail/17840" target="_blank">Science Fair at Strata</a>. For more on Michael&#8217;s work, see <a href="http://www.neocyde.net/derive/2010/12" target="_blank">Noise Derived</a>. I also have a number of the <a href="http://strataconf.com/strata2011/public/schedule/topic/595" target="_blank">interesting new interface sessions</a> at Strata in my schedule.</p>
<p>The daily <a href="http://radar.oreilly.com/2010/12/write-your-own-visualizations.html" target="_blank">Strata Gems</a> on O&#8217;Reilly Radar are a great place to get a gestalt of some of the Strata themes, and <a href="http://radar.oreilly.com/2010/12/strata-gems-three-key-data-trends-for-2011.html" target="_blank">this post</a> by <a href="http://strataconf.com/strata2011/profile/1" target="_blank">Edd Dumbill</a>, program chair for Strata, <a href="http://radar.oreilly.com/m/2010/12/strata-gems-three-key-data-trends-for-2011.html" target="_blank">Three key data trends for 2011</a>, looks at the year ahead. This week, I got the chance to ask Edd a few of the questions that I will have on my mind at Strata &#8211; see his responses below.</p>
<p>If you have been reading Ugotrade, you will know I am interested in our mobile social augmented futures and there is no question in my mind that these will be unleashed by our new capacities to work with data (see <a href="http://www.ugotrade.com/2010/10/31/tim-o%E2%80%99reilly%E2%80%99s-four-cylinder-innovation-engine-the-missing-manual-for-the-future/" target="_blank">my post here</a>).</p>
<h3>Data is the how.</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/backtypediagram.png"><img class="alignnone size-medium wp-image-6045" title="backtypediagram" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/backtypediagram-210x300.png" alt="" width="210" height="300" /></a></p>
<p><em>The pic above is from <a href="http://www.readwriteweb.com/hack/2011/01/secrets-of-backtypes-data-engineers.php" target="_blank">&#8220;Secrets of BackType&#8217;s Data Engineers.&#8221;</a> This post on ReadWriteHack by <a href="http://twitter.com/petewarden">Pete Warden</a>, an ex-Apple engineer and founder of <a href="http://www.openheatmap.com/">OpenHeatMap</a>, really lives up to its title. Check it out if you want to know how <strong>&#8220;three guys (the <a title="opens in new window" href="http://backtype.com/" target="_blank">BackType</a> team) with only seed funding process a hundred million messages a day?&#8221;</strong></em></p>
<p>I asked on Quora, &#8220;<a href="http://www.quora.com/What-will-be-the-most-important-developments-in-augmented-reality-in-2011" target="_blank">What will be the most important developments in augmented reality in 2011?</a>&#8221; <a href="https://sites.google.com/site/michalavny/" target="_blank">Michal Avny</a>, Strategist &amp; Real Time search expert, wrote:</p>
<p><strong>&#8220;AR strongly relies on localized personalized real time information.</strong></p>
<p><strong>Having a stream of tweets based on keyword search, location or circle of friends doesn&#8217;t really make the AR experience; it is the processed real time relevant information that will make AR useful and intensify the experience.</strong></p>
<p><strong>In 2011 Real Time search and Social Search will drastically change to provide the infrastructure required.&#8221;</strong></p>
<p>I followed up on Michal&#8217;s Quora answer with some more questions &#8211; see below in this post.</p>
<p>Also note <a href="http://www.quora.com/What-will-be-the-most-important-developments-in-augmented-reality-in-2011" target="_blank">the response</a> from <a href="http://research.microsoft.com/en-us/people/dmolnar/" target="_blank">David Molnar</a>; here is an excerpt:</p>
<p><strong>&#8220;2. A wave of actionable, important data APIs opened up, enabling useful non-gimmicky AR apps for the first time. Think geoloqi.com , or the work Max Ogden has done with Portland civic data. Plus of course <a href="http://face.com/" target="_blank">face.com</a> , email providers and calendar providers, etc.&#8221;</strong></p>
<p><a href="http://strataconf.com/strata2011/public/schedule/speaker/100889" target="_blank">Amber Case</a>, one of the founders of <a href="http://geoloqi.com/" target="_blank">Geoloqi</a>, is on the programming committee of Strata and will be speaking.  Be sure to catch her session! <a href="http://strataconf.com/strata2011/public/schedule/detail/17748" target="_blank">Posthumans, Big Data and New Interfaces,</a> and if you haven&#8217;t already seen it, <a href="http://www.ted.com/talks/amber_case_we_are_all_cyborgs_now.html" target="_blank">Amber&#8217;s TED talk</a> is a must see.</p>
<p>Geographic proximity is a powerful filter, as are route and time. But clearly social proximity, social relevance, and shared tastes are also key dimensions for location based experiences (see my convo with Schuyler of <a href="http://simplegeo.com/" target="_blank">Simple Geo</a>, upcoming).</p>
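<p><em>To make the proximity-filter idea concrete, here is a minimal Python sketch &#8211; a haversine distance plus a sort by that distance. The places and coordinates below are invented for illustration:</em></p>

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def nearby(items, lat, lon, radius_km):
    """Items within radius_km of (lat, lon), nearest first."""
    scored = [(haversine_km(lat, lon, it["lat"], it["lon"]), it) for it in items]
    return [it for d, it in sorted(scored, key=lambda p: p[0]) if d <= radius_km]

# Hypothetical points of interest near downtown Portland, OR.
places = [
    {"name": "coffee", "lat": 45.523, "lon": -122.676},
    {"name": "museum", "lat": 45.516, "lon": -122.683},
    {"name": "airport", "lat": 45.588, "lon": -122.597},
]
print([p["name"] for p in nearby(places, 45.520, -122.680, 2.0)])  # airport filtered out
```

<p><em>A real service would index points spatially (geohashes, an R-tree) rather than scan every item, but the filtering idea is the same.</em></p>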
<p>While the whole business of location based search and curation of augmented mobile social experiences is still, for the most part, uncharted terrain, the danger of key points of control being only really accessible to elite players looms large. I asked <a href="http://www.youtube.com/watch?v=C2HcWlu1BS4" target="_blank">Sophia Parafina</a>, a pioneer in the open geo space, for some thoughts on real-time local/geosearch and geomessaging, and the future of openness &amp; big data (see Sophia&#8217;s response below).</p>
<h3><a href="http://www.quora.com/Is-the-market-ready-yet-for-P2P-cloud-computing" target="_blank">Is the market ready yet for P2P cloud computing?</a></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/8a174_invisibles_bigbrother_1210.jpg"><img class="alignnone size-full wp-image-6048" title="8a174_invisibles_bigbrother_1210" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/8a174_invisibles_bigbrother_1210.jpg" alt="" width="150" height="150" /></a></p>
<p>This is another question I&#8217;m following: <a href="http://www.quora.com/Is-the-market-ready-yet-for-P2P-cloud-computing" target="_blank">Is the market ready yet for P2P cloud computing?</a> It is one of those questions that we seem to have been asking in various forms for a very long while now, but without a major shift in sight. The pic above is from <a title="Permanent link to The Cloud Made Open Source" href="http://www.readwriteweb.com/cloud/2010/12/open-source-invisible.php">The Cloud Made Open Source &#8220;Invisible&#8221; This Year</a>. But, perhaps, we are at the point when open p2p clouds will find a place in the market because of their potential importance in real time social search and discovery. <a href="http://distributedsearch.blogspot.com/" target="_blank">Borislav Agapiev</a>, search entrepreneur and founder of <a href="http://www.vast.com/" target="_blank">Vast.com</a>, writes on <a href="http://www.quora.com/Is-the-market-ready-yet-for-P2P-cloud-computing?q=p2p+for+a+non+centralized+infrastructure" target="_blank">Quora</a>:</p>
<p><strong>&#8220;I believe a P2P cloud is ideally suited for social &amp; real-time search and discovery.</strong></p>
<p><strong>Consider MapReduce, a very interesting and popular paradigm for distributed computing. MapReduce is very much about bringing computation to data i.e. doing computation at nodes (map) and then aggregating results through network (reduce).</strong></p>
<p><strong>It is very clear now that user attention data (what they click on) is very valuable for search and discovery, yet a centralized model relies upon uploading all that to a single location and then doing a supposed local MapReduce. Clearly, MapReduce could be done  across the network, without any centralized uploads.</strong></p>
<p><strong>In addition to the efficiency argument raised here, it is even more important to consider privacy issues. Uploading massive amounts of user attention data to a centralized location is not something that is going to make users warm and fuzzy <img src="http://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" />   as we are increasingly seeing.</strong></p>
<p><strong>In a P2P cloud, there is no big brother watching over anyone, all computation and data storage is done in the cloud, fragmented in many, many small  encrypted pieces ala BitTorrent.&#8221;</strong></p>
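<p><em>Borislav&#8217;s point &#8211; map locally, reduce across the network &#8211; can be sketched in a few lines of Python. The per-peer click logs below are made up; the idea is that only the small per-node aggregates would ever cross the network:</em></p>

```python
from collections import Counter
from functools import reduce

# Hypothetical per-peer attention (click) logs; in a P2P cloud the raw
# logs stay on the node that collected them.
peer_logs = [
    ["ar", "search", "ar", "geo"],   # peer 1
    ["geo", "geo", "search"],        # peer 2
    ["ar", "search", "search"],      # peer 3
]

def local_map(log):
    """Map step, run on the peer that owns the data: summarize in place."""
    return Counter(log)

def network_reduce(a, b):
    """Reduce step: only these small aggregates travel over the network."""
    return a + b

totals = reduce(network_reduce, (local_map(log) for log in peer_logs))
print(totals)
```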
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Screen-shot-2011-01-16-at-2.13.43-PM1.png"><img class="alignnone size-medium wp-image-6066" title="Screen shot 2011-01-16 at 2.13.43 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Screen-shot-2011-01-16-at-2.13.43-PM1-300x223.png" alt="" width="300" height="223" /></a><br />
</strong></p>
<p><em>Picture above from Brynn Marie Evans, <a href="http://brynnevans.com/blog/2010/03/17/it-takes-two-to-tango/">&#8220;It takes two to tango: review of my social search panel&#8221;</a></em></p>
<h3>The Delta of Now &#8211; Transforming Search into a Social Democratic Act</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/2538108030_d37d124e44.jpg"><img class="alignnone size-medium wp-image-6049" title="2538108030_d37d124e44" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/2538108030_d37d124e44-300x225.jpg" alt="" width="300" height="225" /></a></p>
<p><em>Picture of Maneki Neko &#8220;beckoning&#8221; cats from <a href="http://www.journeyetc.com/travel-ideas/famous-landmarks-of-cats-and-dogs-around-the-globe/">Journeyetc</a></em></p>
<p>New ecologies of human and machine intelligence are beginning to change basic social structures &#8211; see the <a href="http://www.youtube.com/watch?v=t1J2RXrvPek" target="_blank">Future of Work (Biewald and Chirayath Janah 2010)</a>. Projects like <a href="http://swift.ushahidi.com/" target="_blank">Swift River</a> use search and machine mining to filter streams on topics of interest that can then be curated by human beings. This may be extended to the curation of real-time data streams and the employment of machine learning algorithms based upon those explicit relationships.</p>
<p>Augmented mobile social experiences are a new frontier in which ideas and practices from a number of fields collide, including: ambient findability (Morville 2005), urban psychogeography, narrative structures, ambient games and devices, 4d (time-space), explorations of place and memory, enchanted objects and people (Kuniavsky 2010), and designed animism (Laurel 2010), to mention just a few.</p>
<p>Mobile local interaction presents an opportunity to invert the search pyramid and to transform search into a social, democratic act (see my interview with Anselm Hook, upcoming). Up until now search has been predicated on a very narrow revenue model. Google has an implicit model of a B2C &#8211; business to consumer &#8211; brokerage. We are only just beginning to get a glimpse of the disruptive potential of C2C &#8211; consumer to consumer &#8211; brokerages. Mobile local C2C brokerages that allow us to transact in a trustworthy way over our local geography in close to real time (Hook 2010) have the potential to enable new forms of social organization. Bruce Sterling&#8217;s short story about a networked gift economy, <a href="http://tqft.net/wiki/Maneki_Neko" target="_blank">Maneki Neko</a>, is a brilliant glimpse at the disruptive potential of such re-imaginings.</p>
<p>Augmented experiences that shift or change a person&#8217;s situated geolocal experience of social reality &#8211; and change our relationship to people and place by augmenting engagement and reputation through socially driven consumer tie-ins and game dynamics, like <a href="http://foursquare.com/" target="_blank">Foursquare</a> and <a href="http://gowalla.com/" target="_blank">Gowalla</a> &#8211; are beginning to emerge, as <a href="http://www.web2expo.com/webexny2010/public/schedule/detail/15446" target="_blank">Kati London pointed out in her excellent keynote at Web 2.0 Expo</a>. The integration of mobile local interaction with an augmented view that shifts our geolocal experience visually will involve creative solutions to some well-churned mobile tracking, mapping, and registration challenges. Even so, exploring and developing new dimensions through which we can filter and create trusted and meaningful augmented mobile social experiences is vital, whether you are considering a mobile screen, map, camera view, or futuristic HUDs and gestural interfaces.</p>
<h3>Talking with Edd Dumbill</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/edddumbill.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/edddumbillheadshot.png"><img class="alignnone size-full wp-image-6077" title="edddumbillheadshot" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/edddumbillheadshot.png" alt="" width="150" height="150" /></a><br />
Picture from <a href="http://people.oreilly.com/edd" target="_blank">O&#8217;Reilly Community.</a></p>
<p><strong>Tish Shute: </strong>First congratulations on Strata! On the Strata homepage there is a quote from Jason Hoffman:</p>
<p><strong>&#8220;My gut feeling is that we&#8217;re going to look back at the upcoming Strata Conference like we do at the Web 2.0 Conference in 2004/2005.&#8221;<br />
&#8211;Jason Hoffman, CTO/Founder, Joyent, Inc.</strong></p>
<p>Why do you think Jason&#8217;s comparison might be prescient?</p>
<p><strong>Edd Dumbill: Web 2.0 is a development that ran through every brand that has a web presence and radically changed the way business is done for many companies and brands.</strong></p>
<p><strong>Strata will have a similar impact: every business has data, every business collects an increasing amount of data. This data is the new oil &#8211; a valuable raw material that when refined or combined creates value and opportunity.</strong></p>
<p><strong>Tish Shute:</strong> The rise of real time was one of your three key data trends for 2011.  Hadoop is bringing the capacity to work with big data to more than just a few elite players.  But the challenge is still real time.  You mention we will be seeing a hybrid approach to real time and batch MapReduce processing.  Will we hear more about these approaches to real time at Strata?  And, what do you see as the most important conversations on real time data analytics emerging at Strata?</p>
<p>You point out &#8220;open source projects and cloud infrastructure means developers can evaluate and learn to love technologies without requiring support or approval from above.&#8221; What are the most exciting developments on the horizon for open source tools?</p>
<p><strong>Edd Dumbill: </strong><strong>Here are some projects worth watching, in the key areas of real time, cluster management and Hadoop.</strong></p>
<p><strong>* Cassandra and MongoDB &#8211; NoSQL databases that will prove vital for anybody with real time big data needs</strong></p>
<p><strong>* Mesos &#8211; a compute cluster management tool, modeled after that which powers Google</strong></p>
<p><strong>* Hadoop ecosystem&#8217;s continuing maturation, especially HBase and Hive.</strong></p>
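<p><em>The hybrid real-time/batch approach mentioned above can be sketched as a query layer that merges a periodically recomputed batch view with a small real-time delta of events seen since the last batch run. The counts below are invented for illustration:</em></p>

```python
# Hypothetical hybrid query layer: a batch view (recomputed periodically,
# e.g. by a Hadoop job) plus a real-time delta for events since that run.
batch_view = {"ar": 1200, "geo": 800}      # counts as of the last batch job
realtime_delta = {"ar": 7, "strata": 3}    # events seen since then

def query(term):
    """Serve reads by merging the batch view with the real-time delta."""
    return batch_view.get(term, 0) + realtime_delta.get(term, 0)

def absorb_batch(new_batch):
    """When a new batch run lands, swap it in and reset the delta."""
    global batch_view, realtime_delta
    batch_view, realtime_delta = new_batch, {}

print(query("ar"))  # prints 1207
```

<p><em>The design choice is that the batch layer stays simple and re-derivable from raw data, while the real-time layer only ever has to hold a small, recent slice.</em></p>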
<p><strong>Tish Shute: </strong> Do you think the market is ready for p2p cloud computing?</p>
<p><strong>Edd Dumbill: The market is emerging for decentralized and distributed cloud computing, and P2P technologies are one way of achieving that. The key trends will be moving computation nearer the data sets or nearer the point of user consumption of the result.</strong></p>
<p><strong>P2P is a difficult model for anybody wanting to commercialize a service, so I think it will tend to form part of a hybrid solution.</strong></p>
<p><strong>Tish Shute:</strong> We have seen enormous strides in our ability to work with giant unstructured databases recently. Do you think, perhaps, that the dream of a web of linked data &#8211; &#8220;a web of data that can be processed directly and indirectly by machines&#8221; &#8211; will be attained through brute force, i.e. through our ability to harness the power of massively parallel processing, as much as by Semantic Web approaches focused on machine readable metadata? [Also see <a href="http://www.quora.com/Is-this-a-good-approach-www-dist-systems-bbn-com-people-krohloff-shard_overview-shtml-to-use-Hadoop-to-build-a-scalable-distributed-triple-store" target="_blank">my question on Quora</a>, &#8220;Is this a good approach (<a rel="nofollow" href="http://www.dist-systems.bbn.com/people/krohloff/shard_overview.shtml" target="_blank">www.dist-systems.bbn.com/people/&#8230;</a>) to use Hadoop to build a scalable, distributed triple store?&#8221;]</p>
<p><strong>Edd Dumbill:  I&#8217;ve been an observer of the SW for over a decade and I tend to believe that on the web, data means to you whatever meaning you give it as the consumer. With that model, the links are made by the consumer rather than sitting out there explicitly. Some links become de facto standards, and some very few become web standards.</strong></p>
<p><strong>I think the actuality will be a mix of both explicitly stated metadata and that which is inferred. The Semantic Web is a great framework for certain operations, especially interoperable exchange of metadata. A great many more private meanings, never intended to be shared, will be created by consuming software.</strong></p>
<p><strong>There&#8217;s no question that machines will learn how to process most of the Web. Furthermore, machines will learn how to process most of the physical world we&#8217;re in. And that by the end of this decade</strong>.</p>
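<p><em>For a feel of the brute-force side of the triple store question, here is a toy in-memory store in the spirit of the SHARD approach (which shards triples across Hadoop and answers queries by scanning). The triples are made up, and <code>None</code> acts as a wildcard:</em></p>

```python
# Toy triple store: brute-force pattern matching over (subject, predicate,
# object) tuples. A system like SHARD distributes the scan across a cluster;
# the matching logic is the same. All data here is invented.
triples = [
    ("strata", "type", "conference"),
    ("strata", "year", "2011"),
    ("edd", "chairs", "strata"),
]

def match(pattern, triple):
    """A triple matches when every non-None slot of the pattern agrees."""
    return all(p is None or p == t for p, t in zip(pattern, triple))

def query(s=None, p=None, o=None):
    """Return every triple matching the (s, p, o) pattern."""
    return [t for t in triples if match((s, p, o), t)]

print(query(s="strata"))
```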
<h3>Talking with Sophia Parafina</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/sophiawhere.jpg"><img class="alignnone size-medium wp-image-6062" title="sophiawhere" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/sophiawhere-300x250.jpg" alt="" width="300" height="250" /></a></p>
<p><em>Picture of Sophia at <a href="http://where2conf.com/where2011" target="_blank">Where 2.0</a><a href="http://www.flickr.com/photos/rich_gibson/2509114741/" target="_blank"></a></em></p>
<p><strong>Tish Shute:</strong> Sophia you have worked in the trenches for a long time now  to support the growth of open geo data.  What do you hope to see emerge in 2011 in the field of geo-data?</p>
<p><strong>Sophia Parafina: Better support for displaying and handling location data across multiple apps. Fred Wilson <a href="http://www.avc.com/a_vc/2011/01/content-shifting.html?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+AVc+%28A+VC%29" target="_blank">recently blogged about content-shifting</a>; he talks about overcoming content silos across devices. We&#8217;ve worked very hard to reduce data silos via formats, but devices are creating their own silos. I would like to see a standard method for sending geo data and geo information to mobile devices.</strong></p>
<p><strong>Producing content for mobile is different from producing content for a computer browser. Web 2.0 produced a lot of infrastructure for browser based interfaces, but in mobile devices that gap has been filled with apps, which is fragmenting how data is handled by various devices. What is even more interesting in the mobile space is that devices can push data back that contains location, user updates, photos and even sensor data. If mobile data standardizes, it could lead to browser based applications and stem the continued fragmentation of the mobile application market.</strong></p>
<p><strong>Tish Shute:</strong> <a href="http://simplegeo.com/" target="_blank">Simple Geo</a> and<a href="http://www.factual.com/" target="_blank"> Factual</a> are startups emerging in the geodata space. What do you see on the horizon in terms of both the growth of business opportunities and an open geo data community?</p>
<p><strong>Sophia Parafina: In the near future I think we&#8217;ll see startups providing curated data + API, and in response we will also see companies that provide a single interface across multiple data providers. We saw this when everyone released a mapping API and companies such as <a href="http://mapufacture.com/">Mapufacture</a> provided a single interface across multiple APIs.</strong></p>
<p><strong>We will see a resurgence in data providers repackaging the 2010 US Census data in different ways to respond to market segments; some of this will be open data, but all of it will be provided through an API instead of a file. Additionally, we&#8217;ll see more data from outside the US.</strong></p>
<p><strong>Tish Shute:</strong> What are the biggest obstacles to having the open geodata sets available that we need to enable mobile local interactions and social augmented experiences?</p>
<p><strong>Sophia Parafina: Licensing for both crowd sourced data and private curated open data will become an issue. We recently saw VLC, the open source video player, pulled from the Apple app store because of licensing issues. Also, licensing of content by geography will be problematic, limiting searches by geographical location. In addition, how will licensing of data that is updated by crowd sourcing work?</strong></p>
<p><strong>Multiple APIs for accessing data sources: the current trend for each provider to create an API for their data sets will result in data silos &#8211; there needs to be a single sign-on equivalent for requesting data.</strong></p>
<p><strong>Size of data on the wire: the current models for delivering data are based on broadband connections. However, as mobiles increasingly become the way people use the web, the data needs to be sized accordingly. This also goes for mobile interfaces. Have you tried to shop on a mobile device, or buy a train or plane ticket? It&#8217;s frustrating and error prone. There is a large untapped market of people who only use the Internet on mobile devices.</strong></p>
<p><strong>Tish Shute</strong>: You pointed me to <a href="http://radar.oreilly.com/2010/12/strata-gems-diy-personal-sensi.html" target="_blank">this link in Strata Gems</a> re &#8220;an interesting and pertinent (also a competitor to GeoLoqi)&#8221; &#8211; <a href="http://tasker.dinglisch.net/" target="_blank">the Android Tasker app</a>. What do these emerging services bring to the table in terms of the next generation of location based services?</p>
<p><strong>Sophia Parafina: This app lets your device interact with the environment. I think that this is a great way of using the sensors on existing platforms to increase interaction and to implement ambient findability. The basic premise of Tasker is that some action happens in response to an event in an application, time, date, location, event, or gesture. Tasker has defined 180 actions that can occur based on a number or combination of events. This can provide a basic vocabulary for interaction between the user and the device and, more importantly, between users. Tasker also can use Android script plugins, which lowers the bar to creating your own ambient application.</strong></p>
<p><strong>Programs such as Tasker can provide a way for people to interact with social networks beyond sending messages. People can use their mobile devices to interact with their surroundings without having to interact with the device.</strong></p>
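The event-to-action vocabulary Parafina describes can be sketched as a tiny dispatcher: actions are registered against event types (location, time, gesture) and fire whenever the event occurs. This is only an illustration of the pattern, not Tasker's actual API; all names below are hypothetical.

```python
# Sketch of an event -> action dispatcher in the style Tasker popularized:
# a registered action runs whenever its event (location, time, gesture...)
# occurs. Illustrative only; names are invented, not Tasker's API.

from typing import Callable, Dict, List

class AmbientDispatcher:
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[dict], None]]] = {}

    def on(self, event_type: str, action: Callable[[dict], None]) -> None:
        """Register an action to run when event_type occurs."""
        self._handlers.setdefault(event_type, []).append(action)

    def fire(self, event_type: str, context: dict) -> None:
        """Deliver an event; every action registered for it runs."""
        for action in self._handlers.get(event_type, []):
            action(context)

dispatcher = AmbientDispatcher()
log: list = []
dispatcher.on("enter_geofence",
              lambda ctx: log.append(f"show list at {ctx['place']}"))
dispatcher.fire("enter_geofence", {"place": "grocery store"})
```

The point of the pattern is that the user never touches the device: the environment fires the events, and combinations of registered actions become the "basic vocabulary" of ambient interaction.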
<p><strong>Tish Shute:</strong> We have had many conversations about emerging ideas of geo-search, geo-messaging and geo-fencing. What are the most interesting developments in these areas and what do you see on the horizon for 2011?</p>
<p><strong>Sophia Parafina: The map will fade into the background and become less important. Display of information will be context aware, and that includes location. For example, let's say I make a grocery list; when I'm at the grocery store, the list will just pop up without the need for me to find the app that has the list. Or reminders or offers pop up when you are near a place at a certain time. Let's say you need to buy a present for a birthday party for a child: you could send out a request that you are looking for an item, and retailers could offer "on the spot" discounts if you are in the area.</strong></p>
<p><strong>Geo-search, geo-messaging, and geo-fencing are geared towards mobile devices, so I expect to see them soon as part of apps. Building generic applications that implement geo* will fail because that sort of information is useful only within a context. Geo* apps are solutions looking for a problem. The killer mobile app will use these functions transparently to reduce the cognitive load of the user who is busy moving around in the world.</strong></p>
<p><strong>User data gathered from multiple web applications will become consolidated profiles that will be used for context aware applications. For example, there could be a service which matches prices of items that you have shopped for on the web. The service would have access to your cookies and know your favorite retailers, things you have shopped for, your location, and activity patterns (when you are at home, work, or a restaurant). When you are in the vicinity of a brick and mortar retailer with the same or similar items, the service can send you an alert to match the price of the item you found online. So your digital life will become more closely linked with your day-to-day activities.</strong></p>
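The "alert when you are near a retailer" behavior above reduces to a geofence test: measure the distance from the device to a point of interest and trigger when it falls inside a radius. A minimal sketch using the haversine great-circle formula; the coordinates and the 100 m radius are made-up examples, not from the interview.

```python
# Minimal geofence check: trigger when a device is within radius_m of a
# point of interest. Coordinates and radius below are illustrative only.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in meters

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two lat/lon points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def inside_geofence(device: tuple, fence_center: tuple, radius_m: float) -> bool:
    """True when the device is within radius_m of the fence center."""
    return haversine_m(*device, *fence_center) <= radius_m

store = (40.7411, -73.9897)  # hypothetical retailer location
assert inside_geofence((40.7413, -73.9894), store, 100)      # a few tens of meters away
assert not inside_geofence((40.7520, -73.9772), store, 100)  # over a kilometer away
```

A production service would get the fix from the platform's location API and debounce entry/exit events, but the core trigger is just this distance-versus-radius comparison.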
<h3>Talking with Michal Avny</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Michal_Pic.jpg"><img class="alignnone size-medium wp-image-6059" title="Michal_Pic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/01/Michal_Pic-300x275.jpg" alt="" width="300" height="275" /></a></p>
<p><strong>Tish Shute: </strong>At the <a href="http://www.web2summit.com/web2010" target="_blank">Web 2.0 Summit</a>, one of the highlights for me was the <a href="http://www.web2summit.com/web2010/public/schedule/detail/17101" target="_blank">Q&amp;A: The New Search Insurgents</a> lunch, where Charlie Cheever of <a href="http://www.quora.com/" target="_blank">Quora</a>, IMO, stole the show. I tweeted:</p>
<p><em>&#8220;One of my takeaways from #w2s is that #quora points to future of augmented mobile social experiences &#8211; a search filter for experience! #AR&#8221;</em></p>
<p>In your view what are the biggest challenges for location Q&amp;A to emerge as a search filter for location based experiences?</p>
<p><strong>Michal Avny: The biggest location Q&amp;A challenges yet to be conquered are immediacy (real time dynamic data), relevancy (strong personalized filters) and user experience (simplified interface).</strong></p>
<p><strong>Location Q&amp;A enables different use cases.  The most prominent are Follow (follow places, topics and friends to learn about a location), Interact (meet new people based on common interests), Plan ahead (plan a trip, night out or a shopping day by asking and searching for local information) and On-site (check for recommendations, friends, deals, events and traffic nearby).</strong></p>
<p><strong>Unlike Follow, Interact, and Plan ahead, which can be added to existing Q&amp;A platforms (such as Quora) by attending to location specifics since they share similar characteristics, the on-site mode introduces a completely different experience. First and foremost, it requires immediate attention. It is real time based, and the nature of the data is dynamic. Traffic updates, current events, nearby friends – all that changes constantly. Posting a location question on-site implies the response should be in real time (e.g. best kid-friendly restaurant); the normal Q&amp;A response latency wouldn't work.</strong></p>
<p><strong>Strong relevancy filters are required to accommodate the overwhelming flood of information. Moreover, some of the data should be filtered by user behavior and preferences: check-in notifications (type of relation), restaurant recommendations (type of food, price level, etc.), shopping deals (commercial categories), and more.</strong></p>
<p><strong>Mobile experience requires ease of use and simplicity. A new Q&amp;A interface and query language that allows for posting questions should be defined, as well as a coherent, summarized response interface. A user on the go should not have to post lengthy questions, browse through tens of results, or search for the right service, but instead use a simple, intuitive tool.</strong></p>
<p><strong>Tish Shute: </strong>Real-time location based search is in its infancy. Real time questions can be answered using different services such as Yelp, TripAdvisor, <a href="http://www.waze.com/homepage/" target="_blank">Waze,</a> <a href="http://foursquare.com/" target="_blank">Foursquare</a>, IMDb and more. But what are the challenges to moving forward with aggregating these sources and then into "locals" that are able to process and deal with vast amounts of information?</p>
<p><strong>Michal Avny: Using some of the leading location services to answer questions is sufficient to start with.</strong></p>
<p><strong>In order to provide broad coverage (worldwide) and reliable information, aggregation of the different services is required – for instance to normalize product and service rankings, aggregate classifieds, and more. This is quite challenging as there is no one standard available.</strong></p>
<p><strong>When the location Q&amp;A user base is big enough, I foresee a tendency to rely more on 'locals' input as the base of information. As the platform grows, communities will be formed with different cultures, relationships, and trust levels, making the information more valuable and customizable. Some of the challenges I already mentioned are implementing filters, a query language, and interfaces to enable using the vast amounts of real time data in a mobile environment. More of the challenges lying ahead are integrating the 'locals' data with location based services, as they are integral components of the Q&amp;A ecosystem. Merging trust levels and relationships while adhering to different privacy guidelines is a challenge yet to be explored. (This should be discussed in more detail under the protocols topic.)</strong></p>
<p><strong>It is quite evident that Quora is now facing growing pains and is struggling to maintain its character.  Same as with Quora, it will also be a challenge to support and maintain the ecosystem while allowing for massive scale-up.</strong></p>
<p><strong>Tish Shute:</strong> I have been very interested in exploring protocols that will be enablers to micro local interaction and mobile social interaction for AR &#8211; particularly the XMPP extensions and operational transform work of Google Wave (now <a href="http://incubator.apache.org/projects/wave.html" target="_blank">Apache Wave</a>), and PubSub protocols like <a href="http://code.google.com/p/pubsubhubbub/" target="_blank">PubSubHubbub</a> and the Erlang based <a href="http://www.rabbitmq.com/" target="_blank">RabbitMQ</a>. We are beginning to see protocols emerging that could enable new real time local services. What do you think are some of the most valuable use cases for "locals" that this new generation of real time protocols can enable?</p>
<p><strong>Michal Avny: AR is about interacting with digital information; the AR ecosystem is composed of layers and components such as devices, platforms, browsers, applications, and content. For the different components to interact, new protocols, security guidelines, and privacy policies must be in place. A standard will enable local vendors and service providers to publish specials, deals, updates, and events for any application to broadcast, and to identify people and places by proximity (without having to use the same application or device). Local recommendations will be shared by services, devices will be able to interact, location based platforms, such as Q&amp;A, will have access to a vast breadth of information, geo aware devices will provide a consistent experience globally, and much more.</strong></p>
<p><strong>Tish Shute:</strong> What do you think are the biggest challenges to going mainstream for this emerging field of real time social discovery?</p>
<p><strong>Michal Avny: The biggest challenge is building towards real time, geo-aware, localized, personalized ambient data.   Discovery is in its infancy, location social based Best, Top, and Trending lists with some basic filtering options are available, and this is great as people are getting accustomed to information surrounding them.  To some degree it can intensify the AR experience, for instance suggest the most popular dish in a restaurant, or map the best coffee shops nearby, but it is customized at best by friend recommendations and depends on the coverage and broadness of the specific discovery service.</strong></p>
<p><strong>There is a need for the next generation of discovery: customized, geo social aware discovery that filters the vast amount of real time data by learning user preferences and behavior (built on top of the much needed local social real time open protocol).</strong></p>
<p><strong>Tish Shute:</strong> Who are your favorite startups/upstarts in the field of real time search and why?</p>
<p><strong>Michal Avny: <a href="http://www.my6sense.com/" target="_blank">My6Sense </a>- My6Sense provides a sharper and better way to experience your information from feeds you subscribe to (social networks, news, RSS feeds, etc.). It's personal &#8211; content is ranked based on what's relevant to you. It learns what's valuable to you by translating your consumption behavior into a personalized ranking function.<br />
My6Sense – because it is a personalized prediction filter, a critical foundation for AR</strong></p>
<p><strong><a href="http://topsy.com/" target="_blank">Topsy</a> &#8211; Topsy is realtime search powered by the social web that finds the most relevant conversations happening online. The site's underlying technology examines popular links as well as the influence of each person citing a link. Topsy augments traditional search engines by finding information that people are talking about.<br />
Topsy – because its ranking is based on retweets and influencers, a great social experience</strong></p>
<p><strong><a href="http://collecta.com/" target="_blank">Collecta</a> &#8211; Collecta is a real-time search engine for the social web. It monitors the update streams of popular realtime blogs and sites like Twitter, WordPress, and Flickr, and shows results as they happen. Results can be filtered by status updates, comments, stories, or photos. The entire engine is built around the XMPP standard, which pushes out data on a continual basis, so that for every search you end up watching a stream that keeps updating itself.<br />
Collecta – because it is built around XMPP, a real time experience</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2011/01/20/real-time-big-data-at-strata-2011-ambient-findability-geomessaging-augmented-data-and-new-interfaces/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Interview with Mitch Kapor</title>
		<link>http://www.ugotrade.com/2008/05/05/interview-with-mitch-kapor/</link>
		<comments>http://www.ugotrade.com/2008/05/05/interview-with-mitch-kapor/#comments</comments>
		<pubDate>Mon, 05 May 2008 15:34:00 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[3D internet]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[avatar 2.0]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[free software]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[Linden Lab]]></category>
		<category><![CDATA[Metarati]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Second Life]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web 3D]]></category>
		<category><![CDATA[Web3.D]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Hands Free 3D]]></category>
		<category><![CDATA[Iron Man]]></category>
		<category><![CDATA[Iron Man in Second Life]]></category>
		<category><![CDATA[Mitch Kapor]]></category>
		<category><![CDATA[pervasive computing]]></category>
		<category><![CDATA[tangible media]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=1444</guid>
<description><![CDATA[Only two weeks after debuting their first Hands Free 3D video showing the possibilities for navigating Second Life &#8220;hands free&#8221; without a mouse or keyboard, Mitch Kapor (MitchK Linden in Second Life) and Philippe Bossut have a new demo out &#8211; Hands Free Object Editing in Second Life. Philippe points out on the Hands [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/05/mitchkaporpost.jpg"><img class="alignnone size-full wp-image-1447" title="mitchkaporpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/05/mitchkaporpost.jpg" alt="" width="225" height="300" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/05/mitchklinden.jpg"><img class="alignnone size-full wp-image-1448" title="mitchklinden" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/05/mitchklinden.jpg" alt="" width="225" height="301" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/05/mitchkaporslpostnew.jpg"> </a></p>
<p>Only two weeks after debuting their <a href="http://www.handsfree3d.com/" target="_blank">first Hands Free 3D video</a> showing the possibilities for navigating <a href="http://www.secondlife.com" target="_blank">Second Life</a> &#8220;hands free&#8221; without a mouse or keyboard, Mitch Kapor (MitchK Linden in Second Life) and Philippe Bossut have a new demo out &#8211; <a href="http://www.handsfree3d.com/videos/" target="_blank">Hands Free Object Editing</a> in Second Life.</p>
<p>Philippe points out on the <a href="http://www.handsfree3d.com/blog/" target="_blank">Hands Free 3D blog</a> that they have already seen a lot of interest in their &#8220;hands free&#8221; project, even from the mainstream press (see <a onclick="javascript:urchinTracker('/outbound/bits.blogs.nytimes.com/2008/04/11/the-coming-of-the-holodeck/?ref=/videos/');" href="http://bits.blogs.nytimes.com/2008/04/11/the-coming-of-the-holodeck/">this article from the NYT</a>). Hands Free 3D, <a href="http://www.kei.com/news.html" target="_blank">a project of </a><a href="http://www.kei.com/" target="_blank">Kapor Enterprises</a>, is creating a prototypical interface using the 3D camera designed by <a onclick="javascript:urchinTracker('/outbound/www.3dvsystems.com/?ref=/');" href="http://www.3dvsystems.com/" target="_blank">3DV Systems</a> to control virtual worlds like Second Life.</p>
<p>Mitch Kapor told me they are now working &#8220;so that avatars can directly mirror body language and facial expression.&#8221;</p>
<p>Mitch very generously gave me an interview in which he not only describes his project to explore how:</p>
<blockquote><p>the camera could be a central device to a whole new kind of interface the way the mouse became the central piece of hardware that enabled the whole graphical user interface and it enabled the transition from character based computing DOS to the GUI.</p></blockquote>
<p>But also, Mitch shares some of his thoughts on the future of Second Life.  A full transcription follows in this post.</p>
<h3>&#8220;Moving From Science Fiction to Science&#8221;</h3>
<p>Mitch explained to me he began to get excited with the idea of Hands Free 3D  when he realized:</p>
<blockquote><p>we had a shot at moving from science fiction to science as it were actually making some of this stuff work that people have been talking about for a long time</p></blockquote>
<p>As <a href="http://gwynethllewelyn.net/2008/04/28/the-intergrid-and-the-second-life-foundation/" target="_blank">Gwyneth Llewelyn points out</a> much of the so called virtual worlds industry has backed off the bigger vision of a unified metaverse and is retreating into a more limited vision of a multitude of closed and controlled virtual worlds (see Digado&#8217;s post <a href="http://digado.nl/" target="_blank">Raising Kids in Virtual Worlds</a> and this video from <a href="http://www.fastcompany.tv/video/disneys-virtual-worlds-raising-kids-social-networks" target="_blank">fastcompany.tv</a> to see how this controlled/controlling vision for virtual worlds plays from Disney&#8217;s point of view).</p>
<p>But while a bigger vision for virtual environments with a revolutionary role in adult life may not be interesting to marketeers at the moment, it has a momentum that cannot be stopped. Mitch Kapor made a prediction during the interview that I wholeheartedly agree with:</p>
<blockquote>
<h4>the big vision of 3D is in the process of happening. It will be very transformative and anybody who is not counting on that happening, is likely to be run over by it.</h4>
</blockquote>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/05/navigation-walkingpost2.jpg"><img class="alignnone size-full wp-image-1452" title="navigation-walkingpost2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/05/navigation-walkingpost2.jpg" alt="" width="450" height="253" /></a></p>
<p>I got very excited when I heard about the Hands Free 3D project because developing a natural interaction between people and virtual environments to me is one of the &#8220;it&#8221; projects for immersive 3D.</p>
<p>The dialogue between science fiction and science is of course the ongoing story of the metaverse. And seeing <a href="http://ironmanmovie.marvel.com/" target="_blank">Iron Man</a>, which is alive with new possibilities for &#8220;seamless interfaces between people, bits, and atoms,&#8221; made me think of how very exciting this new chapter in metaverse development is.</p>
<p><a href="http://tangible.media.mit.edu/projects/" target="_blank">The Tangible Media Group</a>, MIT, founded by <a href="http://tangible.media.mit.edu/people/hiroshi.php" target="_blank">Hiroshi Ishii</a> has pioneered new couplings of the physical and the virtual. And, alumni <a href="http://web.mit.edu/invent/iow/underkoffler.html" target="_blank">John Underkoffler&#8217;s</a> vision is definitely in play in Iron Man. Underkoffler&#8217;s exact credit flew by me too quickly &#8211; but he was clearly a futurist for Iron Man.  <a href="http://www.mitadmissions.org/topics/pulse/notable_alumni/iron_man_mit_87.shtml" target="_blank">Matt McGann</a> points out that there is a very cool article about his work on <em>Minority Report</em> <a href="http://web.mit.edu/newsoffice/2002/underkoffler-0717.html">here</a>.</p>
<p>Oh I cannot mention Iron Man without noting Iron Man in Second Life (see <a href="http://www.massively.com/2008/05/03/cinemassively-iron-man-in-second-life/" target="_blank">Massively</a>) and Annie Ok&#8217;s <a href="http://www.annieok.com/OtherProjects/IronMan" target="_blank">latest great machinima</a>!</p>
<p><em>And, click on the screen shot below or <a title="Hands Free 3D: Second Life Object Editing Demo" href="http://www.youtube.com/watch?v=KqwUn_KgrDQ" target="_blank">here</a> to watch the &#8220;<strong>Hands Free 3D: Second Life Object Editing Demo&#8221;</strong></em></p>
<p><a title="Hands Free 3D: Second Life Object Editing Demo" href="http://www.youtube.com/watch?v=KqwUn_KgrDQ" target="_blank"><img class="alignnone size-full wp-image-1449" title="hands-free-object-editing" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/05/hands-free-object-editing.jpg" alt="" width="450" height="340" /></a></p>
<h3>Interview with Mitch Kapor</h3>
<p><strong>Tish Shute</strong>: How did you get the idea to focus on Hands Free 3D out of all the possible areas you could have begun R&amp;D in?</p>
<p><strong>Mitch Kapor</strong>: You were asking me where did the idea come from? It originated in the fact that this kind of difficulty &#8211; of creating a natural, easier user interface &#8211; that we&#8217;ve had is characteristic of virtual world interactions.</p>
<p>There are things to be done about that at every conceivable level. From fixing all the little bugs to a bigger initiative. I was doing a thought experiment about what would really make a virtual world fundamentally easier to use.</p>
<p>I didn&#8217;t have an answer, but somebody had mentioned to me &#8211;  one of the other investors in Second Life &#8211;  that there are two Israeli companies working on 3D cameras. I had read about and heard about lots of things but this caught my attention.  And I started to ask some questions about it. I had seen the video that Johnny Lee shot with the Wii on YouTube.</p>
<p>That had begun to prepare my mind to think about how you could use new types of input devices to control virtual worlds. So when I heard about the cameras I said this is really interesting and I started to make some phone calls and inquire.  The Idea came to me that you could use the camera &#8230; the camera could be a central device to a whole new kind of interface the way the mouse became the central piece of hardware that enabled the whole graphical user interface and it enabled the transition from character based computing DOS to the GUI.</p>
<p>One of the other things is that I&#8217;ve now been around long enough, 30 years &#8211; active and professional &#8211;  that I&#8217;ve seen many things come and go and I have a feeling for patterns. So I was fortunate in actually being able to get hold of a prototype of one of the cameras to do some experiments with it.</p>
<p><strong>Tish Shute:</strong><br />
They&#8217;re not yet released generally are they, later this Summer, right?</p>
<p><strong>Mitch Kapor:</strong><br />
Well .. it&#8217;s unclear. Sometime in 2008 or 2009. There will be multiple manufacturers. They have somewhat different approaches as to how they&#8217;re going to go to market. I&#8217;d say it&#8217;s all being sorted out  soon. Everybody I&#8217;ve talked to is quite certain that by Christmas season of 2009 at the latest, they&#8217;ll be available in high volume at low cost.</p>
<p><strong>Tish Shute:</strong><br />
I just got so excited when I saw you doing this because I think, basically, in terms of free form 3D programmable space, which is how I&#8217;ve come to see Second Life now, it&#8217;s the future. Everyone&#8217;s been complaining that the problem with free form 3D programmable space for a mass audience is the difficulty of the interface. So there seems to have been this big retreat back into 2.5D, 3D chat rooms &#8211; plugins to Facebook etc. It seems like a step backward to me.</p>
<p><strong>Mitch Kapor:</strong><br />
I think it&#8217;s inevitable that we&#8217;re going to get fully interactive 3D. It&#8217;s all a question of how we&#8217;re going to get there and how long it takes. It&#8217;s understandable why, for commercial reasons, people do more incremental things, but those are only going to get you so far.</p>
<p><strong>Tish Shute:</strong><br />
Well, it seems to me ideas about the evolution of 3D are to some degree being driven by web marketing forces at the minute. I suppose the thinking is that you can get these 3D chatrooms up easily and they are more amenable to marketing than a freeform 3D space like Second Life.</p>
<p>But my question is why  you didn&#8217;t decide to go to game controllers? I suppose this is where a lot of  thinking goes  because all the kids have already a high level of skill with these?</p>
<p><strong>Mitch Kapor:</strong><br />
Well, I&#8217;m not a gamer. It seemed to me that the possibilities with a camera to do the imaging and to be able, in real time, to extract a 3D model of the scene and the objects in it are fundamentally just incredibly powerful. It feels like the right direction if you can develop it. What I was pleasantly surprised by was that actually creating the first demo was pretty straightforward.</p>
<p><strong>Tish Shute:</strong><br />
How did you prevent every random motion being sucked into the program?</p>
<p><strong>Mitch Kapor:</strong><br />
It turns out that the cameras are pretty sensitive. They can detect relatively small motions &#8211; the resolution at a distance of 5 to 10 feet is half a centimeter. That would be one part in several hundred, maybe one part in a thousand. So it can detect slight motions. I don&#8217;t know the details of the software that the camera came with and that Philippe wrote. One of the other advantages is that Philippe, who is the engineer that did the work, has a PhD in computer graphics. And he has been around the block quite a few times, and had a whole bag of tricks. I know that he spent some of the time writing filtering code to filter out noise in the signal and so on.</p>
<p><strong>Tish Shute:</strong><br />
Do you have to be particular about where you stand at the minute?  Can you smoothly go back and forth between when you have to type and things like that?</p>
<p><strong>Mitch Kapor:</strong><br />
No, I&#8217;m not anticipating problems. We have another video coming up very shortly where we show object editing. The object editing isn&#8217;t as sexy as we would like it because it has to use the existing interface. They&#8217;re having to emulate keyboard and mouse. The point is that we have the concept of a control plane, a vertical plane, in front of you, that if you put your hand out so it crosses that imaginary plane, then it interprets what you do as controlling the mouse.</p>
<p>If you push through to the far side than pull it back it doesn&#8217;t. That actually works quite well as a gesture. And you get visual feedback when you&#8217;re in the control plane, it lights something up, so you can see &#8211; OK. It&#8217;s sort of like when you&#8217;re using the mouse to target an object you can tell tell when a mouse is inside a clickable button. Similarly there&#8217;ll be some kind of control zones. When your hand or other body part is in that you&#8217;ll get some feedback in the same way that a button highlights to indicate I&#8217;m clickable, or you&#8217;re over me. It&#8217;ll be a similar kind of thing.</p>
<p><strong>Tish Shute:</strong><br />
But you have to avoid ending up with a mapping that&#8217;s more difficult to learn than the original one, don&#8217;t you?</p>
<p><strong>Mitch Kapor:</strong><br />
I agree with you, but on the navigation and flying, we&#8217;ve had people learn to use this in less than 30 seconds. We just stand them up and say lean forward, lean back, stand up, lean to the side, raise your arms, and they&#8217;re moving, they&#8217;re flying, they&#8217;re walking.</p>
<p><strong>Tish Shute:</strong><br />
And you don&#8217;t get a problem with the casual motion?</p>
<p><strong>Mitch Kapor:</strong><br />
No. And this was just our first shot at this.</p>
<p><strong>Tish Shute:</strong><br />
I know! I was really impressed that you could actually have done that in 3 weeks.</p>
<p><strong>Mitch Kapor:</strong><br />
I think the start to finish time was a couple of months including the fact that Philippe had never seen the Second Life viewer code. So, he started like any other developer, just downloading and building the Second Life client. And, we never had a camera before! Ha!</p>
<p><strong>Tish Shute:</strong><br />
But this is the great beauty of Second Life  &#8211; the power that people have to do so many amazing things so rapidly.</p>
<p><strong>Mitch Kapor:</strong><br />
He&#8217;s already re-written the code once. We&#8217;re totally prepared to give the code to Linden. It&#8217;s a little premature because the cameras aren&#8217;t available, but if the cameras were available, we would just donate the code. The nice thing is it&#8217;s actually pretty clean. It interfaces to the client at just a couple of points. We&#8217;ve isolated the dependencies.</p>
<p><strong>Tish Shute:</strong><br />
But that&#8217;s my other question. If you donate the code will it be open source so that other developers could get involved? I know lots of people &#8230;</p>
<p><strong>Mitch Kapor:</strong><br />
This stuff, the demonstration stuff, absolutely. That&#8217;s the intent. The purpose of this whole phase was just to test what we could do and to promote or evangelize the use of the camera. Get people excited. We&#8217;re thinking about what we might do with it.</p>
<p>I&#8217;m actually incredibly excited about the thing Philippe is working on now which is to use the camera so that avatars can directly mirror body language and facial expression. So that if I&#8217;m sitting in my chair and I have my arms crossed, my avatar will cross it&#8217;s arms. If I tilt my head to the side or smile or frown, the avatar will do the same thing. We&#8217;re quite optimistic that we can do something compelling in pretty short order, like less than a month.</p>
<p><strong>Tish Shute:</strong><br />
Wow! That is really, really exciting. I think that has just been something people have been talking about a lot recently &#8211; to have gestures and expressions transmitted to the avatar &#8230;</p>
<p><strong>Mitch Kapor:</strong><br />
The reason I get so excited is cause when I started believing we had a shot at moving from science fiction to science as it were actually making some of this stuff work that people have been talking about for a long time.</p>
<p><strong>Tish Shute:</strong><br />
So the plan is to make your work part of the open source community and &#8230;</p>
<p><strong>Mitch Kapor:</strong><br />
I don&#8217;t have a plan yet. I would say anything we&#8217;re doing in this phase we&#8217;re happy to give away. At some point I think things are going to become clearer as to the availability of the cameras, what Linden is going to build in, and then businesses that might be built off of what we&#8217;re doing.</p>
<p>But I&#8217;m very confident that the kinds of things we&#8217;re doing now and in the short term are just going to become part of the standard repertoire of things you can do in Second Life in code that&#8217;s available to developers.</p>
<p>I don&#8217;t have the exact road map.</p>
<p><strong>Tish Shute:</strong><br />
I heard your recent talks in Second Life and how you were very interested in seeing how Second Life could become more of a business tool. I&#8217;ve talked about what Second Life and its &#8220;cousins&#8221; offer in comparison to other open source platforms like Sun&#8217;s Project Wonderland and the Croquet-based platform Qwaq. For example, Second Life is a free-form 3D programmable space that&#8217;s really accessible and easy to develop in.</p>
<p>But in Qwaq you can drag and drop documents in from 2D applications easily, and Wonderland has some great telephony/audio development. I&#8217;m totally psyched by what you&#8217;re doing because it has the potential to make the free-form programmable space of Second Life more widely useful, and it could bring much innovation to business communications.</p>
<p>I see a future in interactive data visualization, for example, the idea that Ben Lindquist of <a href="http://www.greenphosphor.com/" target="_blank">Green Phosphor</a> has been developing, i.e., that you can actually model business processes dynamically in a collaborative environment. What are your thoughts on Second Life&#8217;s potential in business applications?</p>
<p><strong>Mitch Kapor:</strong><br />
One thought is that a more general platform, more general purpose, more open, in the long run, all other things being equal, will be superior to more limited, less capable, more closed platforms, for building any kind of application.</p>
<p>And at the moment, Second Life is the most general and most open platform. So all other things being equal, which usually they&#8217;re not, Second Life should be viewed as superior by people who are building a variety of applications.</p>
<p>But there are clearly some things that need to happen. Well, let me put it this way: some of the other platforms have, temporarily at least, moved further ahead in enterprise-related applications by developing collaboration capabilities.</p>
<p>So the imperative is for Second Life to provide comparable capabilities. It has to do that in terms of fundamental stability and reliability, in all respects. If it does that, then it&#8217;s actually going to win on its own merits.</p>
<p><strong>Tish Shute:</strong><br />
I absolutely agree with you because in terms of ease of use, it&#8217;s the only dynamic networked general simulation platform around. There&#8217;s no one else close.</p>
<p><strong>Mitch Kapor:</strong><br />
It&#8217;s also I think highly scalable in ways that some other things aren&#8217;t. Even though it doesn&#8217;t have as many 9&#8242;s in uptime as it needs to have, there have been recent signs of more progress. I guess the HTML on the prim stuff is rolling out finally or at least the first version of it.</p>
<p>I think it&#8217;s in beta now.</p>
<p>It&#8217;s not the full thing. But it&#8217;s a huge step. That&#8217;s going to help a lot.</p>
<p><strong>Tish Shute:</strong><br />
Plus the fact that it seems Linden Lab is moving towards a more heterogeneous idea of a grid, where there&#8217;ll be the potential to connect behind-the-firewall worlds with the main grid.</p>
<p><strong>Mitch Kapor:</strong><br />
I also know that there are some third parties that have done that. They&#8217;ve sworn me temporarily to confidentiality. But they have done some very impressive stuff with integrating the web with Second Life &#8211; in ways that you can, for instance, just go and grab a PowerPoint in a web interface, and the PowerPoint will just show up in your Second Life window. So there is a kind of workaround: using the familiar web to get your collaboration stuff working. There&#8217;s progress. It&#8217;s going to be some time before it all sorts itself out.</p>
<p>But to come back to the camera &#8211; as a more natural interface, I think it&#8217;s important for personal interaction. It&#8217;s going to be a breakthrough.</p>
<p><strong>Tish Shute:</strong><br />
It&#8217;s a huge breakthrough also to have the avatar reflect your real-life gestures. It&#8217;s a huge leap forward. When you introduced it at the metaverse meetup, that really got people&#8217;s attention. I have a question: have you thought about going even further, with the thought-driven game controllers?</p>
<p><strong>Mitch Kapor:</strong><br />
At some point I intend to take another look at that. I have the feeling that they&#8217;re not doing anything highly profound. Kind of a cute hack.</p>
<p><strong>Tish Shute:</strong><br />
Again, they&#8217;re not available; I would guess they would give some to you, though.</p>
<p><strong>Mitch Kapor:</strong><br />
From looking at earlier incarnations of this stuff, I think what they can pick up on is very superficial. So I&#8217;m not sure that they&#8217;re going to be that interesting, because we really don&#8217;t know how to do more without some invasive type of surgery &#8230;</p>
<p><strong>Tish Shute</strong>:<br />
With very, very complicated brain scanning you can do a lot more, but I agree. Although I did see that a Japanese university was using them for severely disabled people. It looked like they were doing some interesting things.</p>
<p>My question is &#8211; this is something you mentioned in one of your talks in Second Life &#8211; you thought some of the steps forward to make Second Life truly a player in the business world would be changes at the server level. Were you thinking more about the moves that are going on towards open source and making a heterogeneous grid?</p>
<p><strong>Mitch Kapor:</strong><br />
Yes. I was thinking about letting people run it behind the firewall. And it&#8217;s not just putting it behind the firewall &#8211; anytime you&#8217;re talking about an enterprise application, the enterprises want to integrate all of their existing IT systems. They already have these very sophisticated systems for managing, say, identity, and having easy integration of those things with Second Life identity management is not glamorous but very important.</p>
<p><strong>Tish Shute:</strong><br />
This brings to mind another question. I know I have some ideas about what Second Life really brings to the table for business. No one else has taken on working with dynamic melded states on the internet in 3D to the degree Second Life has.  That&#8217;s sort of, to me, the essence of it &#8211;  having groups of people working around 3D objects that can be updated on the fly and modeled on the fly.</p>
<p><strong>Mitch Kapor:</strong></p>
<p>If we do things well there will be a good level of interoperability and all of the open source work and the reverse engineered clones will actually be a good thing.</p>
<p>Second Life is, and I&#8217;ve probably used this line, faced with insurmountable opportunities on all sides.</p>
<p>Let me ask you a question. I&#8217;ve read your blog, or some of it, but what do you actually do?</p>
<p><strong>Tish Shute:</strong><br />
I spend a lot of time on my blog at the minute!!!  You can tell I have kids and dogs driving me crazy [dog is barking in the background], which is exactly why I took this up a year ago. I worked in film and special effects for the early part of my career.</p>
<p>When I had my kids and dogs and all of that, it got to be just too much to do 24/7 film production. My son&#8217;s nearly 9 now. I tried academia for a while, then I just said forget it &#8211; too hard to be in a medieval guild as a second career!</p>
<p>And actually, a year ago, when I started looking at this (Second Life), I thought, my goodness, this is what we sat around and talked about every night when we were doing multiple-pass motion control photography in the eighties. And so I started writing about it, and that took on a life of its own. And now it&#8217;s become a little ridiculous because it&#8217;s an excessively time-consuming hobby!</p>
<p><strong>Mitch Kapor:</strong><br />
Are you in New York or the UK?</p>
<p><strong>Tish Shute:</strong><br />
Yes. I&#8217;m in Manhattan.</p>
<p><strong>Mitch Kapor:</strong><br />
The reason Second Life has gotten as far as it&#8217;s gotten is because of people like you who have become inspired, become obsessed, and feel the possibilities &#8211; and feel them to be so utterly compelling as to cause some rearrangement of life priorities.</p>
<p><strong>Tish Shute:</strong><br />
It&#8217;s interesting because it&#8217;s like every week I say, &#8220;Oh, I really can&#8217;t spend all this time writing!&#8221; Then I see something &#8211; like this week I saw the whole new wave of 3D chat rooms coming out &#8211; and it just got me going again! I just can&#8217;t bear not to have a voice, because when you see the big picture you want the really innovative stuff to move forward. That&#8217;s why when I saw your work on hands-free 3D, I said: &#8220;Oh my goodness, someone&#8217;s taking it the next step.&#8221; And as you say, there isn&#8217;t a path that&#8217;s clear. There are no guarantees. But it&#8217;s a path worth traveling, in my view!</p>
<p><strong>Mitch Kapor:</strong><br />
I firmly believe &#8211; I have complete conviction &#8211; that all of the 3D, the big vision of 3D, is in the process of happening. It will be very transformative, and anybody who is not counting on that happening is likely to be run over by it.</p>
<p><strong>Tish Shute:</strong><br />
Right. Of course you&#8217;re much more knowledgeable about this aspect of it, but in terms of business applications, has anything interesting happened in a long, long while?</p>
<p><strong>Mitch Kapor:</strong><br />
There are some interesting things that are happening, I just learned this by accident, that are being kept under very close wraps. There&#8217;s at least one consultancy that is doing extremely well with very large prestigious global corporations. They have done a lot of development of this integration of web with Second Life. Their clients are shy. They do not want public exposure at the moment because of the backlash against the overhyping of Second Life that happened last year. I was very heartened to hear about this. I think it&#8217;s going to start coming out in the next few months what some of these companies are doing.</p>
<p><strong>Tish Shute:</strong><br />
I agree. Many of the interesting things I know about I can&#8217;t write about either, because people developing business applications have no interest in a lot of web publicity in the early stages.</p>
<p><strong>Mitch Kapor:</strong><br />
Right. I think we&#8217;ll be in this phase for a while.  But then we&#8217;ll get out of it.</p>
<p><strong>Tish Shute:</strong><br />
In terms of specifics about business application, do you have any dreams for Second Life?</p>
<p><strong>Mitch Kapor:</strong><br />
I would like to just personally have a really good meeting application. Just simple: like when you and I want to get together and meet in world, I would like that to be easy, bulletproof, convenient, natural. I&#8217;m imagining that we both have cameras, so that we can see each other, and you get body language and a sense something like what you would get in a face-to-face meeting. And I want people to have the ability to easily get more realistic avatars, if that&#8217;s what they want. And actually there&#8217;s a lot of good technology around that now, where you can just basically take a picture or two with an ordinary digital camera, upload it, and get back something that pretty much looks like you.</p>
<p><strong>Tish Shute:</strong><br />
What do you think are the biggest obstacles to this kind of free form 3D programmable space?</p>
<p><strong>Mitch Kapor:</strong><br />
There&#8217;s a lot of software that has to be written to bring out its full potential. And not just by Linden or any one company. It&#8217;s really a collective effort that is the work of a whole generation.</p>
<p>It&#8217;s comparable to all of the work that went into making the ecosystem of the personal computer. Or for that matter the ecosystem of the internet. It requires having the right architecture, it has to stay open. If that can happen I think it&#8217;s mostly just a matter of time and some patience.</p>
<p>It is going to happen. There are lots of individual challenges. Tons of problems to solve. I&#8217;m not a technological determinist, but at this point I don&#8217;t think anything can hold it back.</p>
<p>In a way, though, having lived through the onset of the internet: while it has changed things a lot, and in certain ways it would be very difficult to imagine life without it, it also has left things the same. I mean, people bring all of themselves and their issues into every technological medium. The drama gets played out in different ways, but it&#8217;s going to be neither a good thing nor a bad thing. It&#8217;s going to be some of both. And so the question is, to me, how people of good will who want to make the world a better place are going to use whatever new things get created in a positive way.</p>
<p><strong>Tish Shute:</strong><br />
I know Mark (Zero Linden) heads up a lot of interoperability work in his office hours and other meetings. But I got a couple of emails this week saying that all these groups that are working off of either clones or reverse-engineered versions &#8211; and there are so many of them, and some are under wraps too &#8211; need to actually meet on an even more regular basis.</p>
<p><strong>Mitch Kapor:</strong><br />
That&#8217;s true. I guess it&#8217;s much more desirable for people to meet and talk, and if they don&#8217;t for a while, you get more noise in the system. It just will take longer to put things back together.</p>
<p><strong>Tish Shute:</strong><br />
That&#8217;s what I was thinking &#8211; it&#8217;s become pretty clear to me that cooperation, if it is going to happen, has to happen around the clones and the reverse-engineered versions of Second Life, because other platforms are not prioritizing interoperability at the moment, that I know of.</p>
<p><strong>Mitch Kapor:</strong><br />
People will call for this and that to happen, but my view is that the ecosystem is still sufficiently underdeveloped that there is a risk of attempted premature standardization.</p>
<p>If you look at the history of things, it&#8217;s very important for there to be working instances before anybody attempts to standardize anything.</p>
<p>There&#8217;s a lot to be learned from the early history of the internet &#8211; the pre-history, from the 60&#8242;s up through the 80&#8242;s, when the basic protocols were being developed. There were some very smart people working on that, and a certain amount of looseness is actually quite important now.</p>
<p>There&#8217;ll be people who want to prematurely standardize and get everybody together, and all you&#8217;ll wind up with is a mass of crud.</p>
<p>I think that the power of open systems is so much greater than that of the walled gardens. Also, the open source ethic is so deeply established in large parts of the development community, even in enterprises, that overall I&#8217;m not too worried about it.</p>
<p>When the functionality of whatever it is is that well known and well understood, that&#8217;s the period in which the open source alternatives can really flourish. When there&#8217;s still a lot of evolution in functionality and in the design of the user experience, open source techniques can become too slow.</p>
<p>So it&#8217;s going to be somewhat chaotic. I think we have to embrace or at least make peace with a certain amount of chaos right now and the understanding that it&#8217;s likely to settle down. The chaos is not the last word.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ugotrade.com/2008/05/05/interview-with-mitch-kapor/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
