<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; online privacy</title>
	<atom:link href="https://www.ugotrade.com/category/participatory-culture/online-privacy/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Story Telling &#8211; the Art, Science, and Business of Data: Talking with Edd Dumbill about Strata, NYC, 2011</title>
		<link>https://www.ugotrade.com/2011/08/31/story-telling-the-art-science-and-business-of-data-talking-with-edd-dumbill-about-strata-nyc-2011/</link>
		<comments>https://www.ugotrade.com/2011/08/31/story-telling-the-art-science-and-business-of-data-talking-with-edd-dumbill-about-strata-nyc-2011/#comments</comments>
		<pubDate>Wed, 31 Aug 2011 18:51:52 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Findability]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[Hadoop]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[Linked Data]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[New Interfaces]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Open Data]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[Real Time Big data]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[big data]]></category>
		<category><![CDATA[Bloom]]></category>
		<category><![CDATA[Business in the Age of Big Data]]></category>
		<category><![CDATA[Color]]></category>
		<category><![CDATA[data analytics]]></category>
		<category><![CDATA[data design]]></category>
		<category><![CDATA[data expression]]></category>
		<category><![CDATA[data Science]]></category>
		<category><![CDATA[Data Sift]]></category>
		<category><![CDATA[data story telling]]></category>
		<category><![CDATA[data visualization]]></category>
		<category><![CDATA[Edd Dumbill]]></category>
		<category><![CDATA[Google +]]></category>
		<category><![CDATA[Google Maps]]></category>
		<category><![CDATA[Google Plus]]></category>
		<category><![CDATA[GreenPlum]]></category>
		<category><![CDATA[infographics]]></category>
		<category><![CDATA[Kinect]]></category>
		<category><![CDATA[Media Sift]]></category>
		<category><![CDATA[Mike Kuniavsky]]></category>
		<category><![CDATA[Narrative Science]]></category>
		<category><![CDATA[Natural Language Generation]]></category>
		<category><![CDATA[OKCupid]]></category>
		<category><![CDATA[Quid]]></category>
		<category><![CDATA[Singly]]></category>
		<category><![CDATA[Somatic Data Perception]]></category>
		<category><![CDATA[Strata Conference]]></category>
		<category><![CDATA[Strata Summit]]></category>
		<category><![CDATA[The Locker project]]></category>
		<category><![CDATA[Visual.ly]]></category>
		<category><![CDATA[Visualizing Data]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=6338</guid>
		<description><![CDATA[I&#8217;m really looking forward to the O&#8217;Reilly Strata events that are coming to NYC in a couple of weeks. I&#8217;m fascinated to see where the art, science, and business of data has gone since February, when I attended the first Strata Conference in Santa Clara &#8211; a sold-out event imbued with an awareness that this [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><iframe width="500" height="281" src="http://www.youtube.com/embed/sCmO8YKzv9U?feature=oembed" frameborder="0" allowfullscreen></iframe></p>
<p>I&#8217;m really looking forward to the<a href="http://strataconf.com/stratany2011"> O&#8217;Reilly Strata </a>events that are coming to NYC in a couple of weeks. I&#8217;m fascinated to see where the art, science, and business of data has gone since February, when I <a href="../../2011/01/20/real-time-big-data-at-strata-2011-ambient-findability-geomessaging-augmented-data-and-new-interfaces/">attended the first Strata Conference in Santa Clara</a> &#8211; a sold-out event imbued with an awareness that this was an important gathering of cognoscenti working on the next big thing.</p>
<p>Strata in New York City is a sequence of events: <a href="http://strataconf.com/jumpstart2011/">Strata JumpStart</a>, Sept. 19th; then<a href="http://strataconf.com/summit2011/"> The Strata Summit</a>, &#8220;The Business of Data,&#8221; Sept. 20th &amp; 21st; followed by the <a href="http://strataconf.com/stratany2011/">Strata Conference</a>, &#8220;Making Data Work,&#8221; Sept. 22nd &amp; 23rd.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/08/Screen-shot-2011-08-28-at-7.15.41-PM.png" target="_blank"><img class="alignnone size-medium wp-image-6376" title="Screen shot 2011-08-28 at 7.15.41 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/08/Screen-shot-2011-08-28-at-7.15.41-PM-300x101.png" alt="" width="300" height="101" /></a></p>
<p><em><a href="http://strataconf.com/public/content/landing?_discount=adw&amp;cmp=kn-conf-st11-starta-terms" target="_blank">&#8220;The future belongs to those who understand how to collect and use their data successfully.&#8221;</a></em></p>
<p>Below is a transcript of a conversation I had last Friday with <a href="http://strataconf.com/stratany2011/public/content/about" target="_blank">Strata Program Chair, Edd Dumbill</a> about some of the highlights of the schedule from my perspective. However, I highly recommend taking a good look at <a href="http://strataconf.com/public/content/landing?_discount=adw&amp;cmp=kn-conf-st11-starta-terms" target="_blank">all that is planned through the three events</a> because there is a depth and breadth that could not be covered in one conversation.</p>
<p>The video opening this post is from <a href="http://visual.ly/about" target="_blank">visual.ly</a> &#8211; a start-up making it easier for people to create, explore, share, and promote data visualizations and infographics.</p>
<h3>Talking with Edd Dumbill</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/08/edddumbillheadshot.png"><img class="alignnone size-full wp-image-6391" title="edddumbillheadshot" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/08/edddumbillheadshot.png" alt="" width="150" height="150" /></a></p>
<p><strong>Tish Shute:</strong> It seems a dialogue between the art of data and the science of data is going to be center stage at Strata NYC, and there will be much discussion about story telling with data.</p>
<p>Is that observation correct or is there something else going on there?</p>
<p><strong>Edd Dumbill:</strong> No, I think that&#8217;s a great characterization.  For the <a href="http://strataconf.com/summit2011/" target="_blank">Summit</a>, the core realization for me has been that when you have these tools for getting value from data and when you can drive what you&#8217;re doing by data, then actually, the biggest consequences are human ones, and they are organizational ones, and they are strategic ones once you have the technology in place.</p>
<p>So what the Summit is doing is really looking at how, in a variety of industries and governments, and within disciplines within those, the amount of data, and the ease with which it can be communicated and mined, is changing the way industry is shaped.</p>
<p><strong>Tish Shute: </strong> Also, I noticed that the <a href="http://strataconf.com/summit2011/public/schedule/full" target="_blank">Strata Summit Schedule</a> (Sept 20th &amp; 21st), and even through to the <a href="http://strataconf.com/stratany2011/" target="_blank">Strata Conference</a> (Sept 22nd &amp; 23rd), has more of an emphasis on pop culture; sports &#8211; baseball, dating &#8211; OKCupid, and Narrative Science all have a place on the schedule, for example?</p>
<p>Is this the culture of New York City being reflected &#8211; interests in media and marketing &#8211; or is there something else going on? Has the data tool stack matured since the Strata Conference in Silicon Valley at the beginning of the year?</p>
<p><strong><br />
Edd Dumbill</strong>:  Yes, there&#8217;s certainly a different flavor to the event because we&#8217;re in New York.  And, yes, the tool stack has matured, but it is by no means mature, and the maturity&#8217;s only coming at the lowest level.</p>
<p>I think there&#8217;s many years left in maturing the tool stack.  But one of the beauties of big data is that once you have the data together, the algorithms to get value from it initially are pretty simple.</p>
<p>So, focusing on the stories of success of being data-driven, particularly in the Summit, is important to us because the two questions people are asking are, &#8220;One, I&#8217;ve got data.  Two, what do I do with it?&#8221;  We don&#8217;t need to make the argument that data is important anymore.  But we do need to demonstrate what you can do with it.</p>
<p>The data isn&#8217;t necessarily big; it&#8217;s just there.  It&#8217;s about having an analytical approach to your business that complements your intuition, and complements your vision.</p>
<h3>&#8220;One of the most powerful ways of presenting data to people is in a story,&#8221; Edd Dumbill</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/08/NarrativeScience.png"><img class="alignnone size-full wp-image-6351" title="NarrativeScience" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/08/NarrativeScience.png" alt="" width="260" height="218" /></a></p>
<p><strong>Tish Shute:</strong> Yes, I can see the emphasis in the schedule on how to tell meaningful stories with data. <a href="http://www.narrativescience.com/" target="_blank">Narrative Science</a> seem to be doing something very interesting re turning data into stories?</p>
<p><strong>Edd Dumbill: </strong>Yes. They absolutely fascinate me with what they do.  There&#8217;s this kind of hierarchy and sort of chain of needs right now where business is going, &#8220;We need data scientists.  Find me data scientists.  Train me data scientists.  Hire me data scientists.&#8221;  And the data scientists are all going, &#8220;I need visualization.  I&#8217;ve got this data, I now need to turn it back into a story that&#8217;s going to be useful to people or provide interfaces that are going to help people understand and explore this,&#8221; because it doesn&#8217;t scale to have to have an interpreter all the time between the data and the results.</p>
<p>You need to be able to present it in a way that means something to people.</p>
<p>People can look at a graph and get many things out of it &#8211; or maybe nothing at all, if they are not used to it.  But particularly for digesting certain kinds of high-level summaries and results, if you can put the data back into prose, it makes it very accessible to people.<br />
<strong><br />
Tish Shute:</strong> Natural Language Generation from data really opens up so many possibilities.</p>
<p><strong>Edd Dumbill:</strong> Yes, it&#8217;s interesting. I think it&#8217;s a very novel use.  A lot of people would consider that the end result of their data was a spreadsheet or a graph that they are processing.</p>
<p>But if you turn that back into a story, I think there&#8217;s a lot of potential for helping executives understand what&#8217;s going on. It makes it possible to use language to understand the results.<br />
<strong><br />
Tish Shute:</strong> I am really excited that the emphasis on stories, data design and visualization, and the way we experience data is as much a part of The Strata Summit and The Strata Conference as some of the more hardcore big data challenges and analytics stuff.<br />
<strong><br />
Edd Dumbill: </strong> Yes.  We are definitely ramping up on visualization.  And I think that&#8217;s going to become more important. Having a fundamental grasp of how to use graphics and charts is still incredibly core to what we&#8217;re saying.  But I&#8217;m also interested in ways that go beyond, because at least 50% of the point of visualization is to help people understand the dynamics of the data, to really augment their senses with the results of the computation.</p>
<p>You know, the people who are some of our best leaders, the ones who know how to ask the right questions of the data, have a sort of indefinable fingertip feel that you get for numbers when you live around them for a while.  And anything we can do with interfaces to accelerate this is going to be very beneficial, whether it comes to being visual and flying through the data or hearing it in natural language.</p>
<p><strong>Tish Shute:</strong> Have I missed anything in terms of what you&#8217;ve got on the schedule re visualization?  VisualizingData.com published <a href="http://www.visualisingdata.com/index.php/2011/08/data-viz-schedule-for-oreilly-strata-conference/">an ideal schedule from the visualizing data perspective</a>.  But have you added anything recently?</p>
<p><strong>Edd Dumbill: </strong> Well, there&#8217;s one event which isn&#8217;t actually listed on the schedule yet, which is on Tuesday night.  There&#8217;s a venue called <a href="http://www.eyebeam.org/">Eyebeam in New York</a>; we&#8217;re having a visualization showcase that evening.  So there will be stuff to walk around and then a few talks, really from some of the most interesting companies doing viz and viz approaches.  So that&#8217;s not up on the schedule yet, but that will be in addition.  It gives a nice focus on Tuesday night.</p>
<p><strong>Tish Shute:</strong> Oh, that&#8217;s super awesome.  I&#8217;ll definitely go to that.<br />
<strong><br />
Tish Shute:</strong> I am very interested in mobile social communications and augmented reality &#8211; especially augmented reality that feels different, not just looks different, as Kevin Slavin puts it.</p>
<p>I am excited to see people thinking about data not just in terms of visualization, but in other ways too, so that we can feel it through our secondary senses as well (see <a href="http://orangecone.com/archives/2011/05/somatic_data_pe.html">Mike Kuniavsky&#8217;s talk at ARE2011, &#8220;Somatic Data Perception&#8221;</a>).</p>
<p><strong>Edd Dumbill: </strong> Yes, absolutely.  That is where we view this as going.  I will be incredibly depressed if I&#8217;m still looking at the world through a glowing rectangle in 10 years&#8217; time.</p>
<p><strong>Tish Shute: </strong> Yes, it would be!  I am looking forward to seeing the new data start-ups too.</p>
<p><strong>Edd Dumbill:</strong> Yes, there are a variety of interesting startups that I feel are particularly important in the data space.  <a href="http://mediasift.com/">Media Sift</a> and <a href="http://datasift.com/">Data Sift</a>, for example &#8211; Data Sift is doing a lot of real time processing on the Twitter fire hose.  They provide real time analytics on Twitter, which I think is very important.</p>
<p><strong>Tish Shute:</strong> In terms of using data to provision mobile experiences, real time is massively important, isn&#8217;t it?<br />
<strong><br />
Edd Dumbill:</strong> Absolutely.  Yes.</p>
<p><strong>Tish Shute: </strong> But real time data is still a big challenge, isn&#8217;t it?<br />
<strong><br />
Edd Dumbill: </strong> Yes.  I mean right now, our focus on real time is probably at the technology level.  Looking at real time, people are kind of building out the frameworks, companies like Media Sift and Data Sift creating parts of the experience.</p>
<p>And yes, our <a href="http://conferences.oreillynet.com/">Where 2.0</a> conference will be focused more on the mobile experience.</p>
<p><strong>Tish Shute: </strong>Re mobile experiences,<strong> </strong> I am very excited about <a href="http://www.infochimps.com/" target="_blank">Infochimps</a> and <a href="http://semanticweb.com/infochimps-adds-geo-apis-and-takes-a-shine-to-schema-org-too_b22613" target="_blank">their new geo APIs</a>, and sensor data is becoming such a big part of the picture now too. But the Kinect has also opened up a whole set of possibilities for the future of sensor data!</p>
<p><strong>Edd Dumbill:</strong> Yeah.  I still think Kinect is probably one of the most exciting things going down because of the democratization of that kind of capability.  Interesting things happen when the sensors become cheap, right?</p>
<p>When, alongside the little camera in your iPad, you have a Kinect sensor equivalent, that becomes extremely interesting, because everybody has it with them and can do things based off it.</p>
<p>So the things that always fascinate me are when it becomes cheap and hackable.<br />
<strong><br />
Tish Shute:</strong> And if Kinect went mobile, that would be exciting?</p>
<p><strong>Edd Dumbill:</strong> I think it&#8217;s entirely likely in the next couple of years, yes.</p>
<p>The more sensors we can start instrumenting our mobile and personal devices with, the more it&#8217;s going to result in novel uses that we never dreamed of.</p>
<p><strong>Tish Shute:</strong> There was a lot of hoo-ha about <a href="http://blogs.wsj.com/venturecapital/2011/06/15/after-seeing-green-color-is-black-and-blue/">Color</a> when they launched this year. They were unable to capture a user base, but if they had, issues of privacy might have come to the fore, because they were really collecting more sensor data than any other app, right?</p>
<p>We are still waiting to see a breakthrough app in that area in terms of using all the phone sensors in ways that will really enhance a user experience rather than just the aims of data mining, aren&#8217;t we?</p>
<p><strong>Edd Dumbill:</strong> Yes.  I think this is one of the things where, in parallel, we&#8217;re really learning the social and privacy implications of this kind of technology.  It seems to me the focus has shifted from the tech in the second half of the year too.  Frankly, everybody is getting kind of freaked out about the amount of data that&#8217;s being mined and, you know, what&#8217;s acceptable use for that.</p>
<p>But on a slightly more prosaic level, there are some rather fabulous things being done.  If you look at the Google Maps navigation experience on an Android phone, for instance, there are some very practical applications of sensors collecting data, with traffic and a variety of other augmentations going into that to actually do something useful.</p>
<p>So maybe we&#8217;d like to think we carry our sixth sense around with us in our pocket, and maybe we will.  But we certainly can in our car right now with all the automatic rerouting and so on.  That&#8217;s slightly more prosaic, but I think a lot more significant in terms of a pattern of how that can be applied.<br />
<a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/08/Singly.png"><img class="alignnone size-thumbnail wp-image-6367" title="Singly" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/08/Singly-150x150.png" alt="" width="150" height="150" /></a><br />
<strong><br />
Tish Shute:</strong> One of the startups that really excited me in February at Strata, Santa Clara was <a href="http://singly.com/">Singly</a> and <a href="http://lockerproject.org/">The Locker project.</a> They are really thinking innovatively in the area of putting people at the center of their data.</p>
<p>I am looking forward to seeing the fruition of that work.  And, while I&#8217;m enjoying Google+, it seems we are just sort of holding up our hands and saying, &#8220;Well, there&#8217;s only one business model for data, and that is a centralized Fort Knox,&#8221; isn&#8217;t it?  Or is there something that I&#8217;m missing?</p>
<p><strong>Edd Dumbill:</strong> You&#8217;re right.  I mean, I think Google+, for instance, is not so much a walled garden as a hedged garden.  You know, there is a certain barrier there that I think is more about the fact that you need to put certain barriers up to actually create a decent user experience in the first place.  I think user experience is one of the BIG problems with open data, and private data, to be honest.</p>
<p>There&#8217;s a reason we are not all writing PGP-encrypted emails to each other, right?  Because it&#8217;s so hard to make a UI for encryption that&#8217;s safe.  Most people don&#8217;t use passwords properly.  And I think a lot of the same user experience considerations come into this whole data thing.</p>
<p>Facebook can get away with anything they want to because have you ever tried using their privacy settings?  Google, I think, more than anybody has tried to address this issue using sensible defaults, making the explanations clear.  And they probably succeeded for a geek tech audience.</p>
<p>So I honestly think, probably, Locker&#8217;s biggest challenge, in that kind of approach, is definitely UI and giving the concept to the users so they can understand it.</p>
<p>But there&#8217;s certainly a very useful contribution to this conversation.</p>
<p>I think there are parallels in blogging, actually.  There is a case where people have information they want to disseminate.  And do you choose to do it on your own website &#8211; set everything up, publish for yourself, host for yourself, so you have complete control &#8211; or do you cede control to Blogger or Tumblr for convenience, knowing that you are being monetized somehow and that you&#8217;re playing in somebody else&#8217;s walled garden and don&#8217;t have that control?</p>
<p>So I haven&#8217;t really expanded that thought too much, but I think there&#8217;s something there in following that along and seeing where that actually leads.</p>
<p>But, you know, there is a whole technical challenge as well.</p>
<p>I really like the idea of being able to give permission to people. Being able to say, &#8220;Well, I&#8217;m engaging you to do X, Y, Z in return for such and such. That seems like a good bargain to me. Giving up my data is a decent bargain for the services I&#8217;m getting back.&#8221; I mean, that&#8217;s generally the contract we make in real life with people anyway.</p>
<p>That&#8217;s another thing re Google+ &#8211; why it&#8217;s a promising approach. At least in their rhetoric, they&#8217;re trying to say, &#8220;Well, we&#8217;re trying to model this on the real life economy, the economy of real life interactions.&#8221;</p>
<p><strong>Tish Shute:</strong> Yes. Any movement towards saying, &#8220;Well, I&#8217;m not just collecting your data randomly; I&#8217;m collecting this data because I want to give something back to you that will enhance your interactions,&#8221; definitely feels like an improvement, doesn&#8217;t it?<br />
<strong><br />
Edd Dumbill:</strong> Yes. I think that bargain is clear. I&#8217;m just fascinated by who could be trusted and&#8230; I do actually wonder whether, rather than necessarily everything being decentralized like Locker suggests, there might be an idea of a variety of inter-operating, trusted identity brokers. People who we would actually trust. Banks, right? We do that right now. Banks are pretty much our identity brokers. Who knows?</p>
<p><strong>Tish Shute:</strong> I think that is where the Locker project&#8217;s going with Singly, isn&#8217;t it? Singly is the trusted broker for the Lockers, right?</p>
<p><strong>Edd Dumbill: </strong>Yes. Now the question is whether you trust a startup with that or whether you&#8217;re going to trust&#8230; I mean, who knows? Trust levels are at such all-time lows with everybody right now. People in America won&#8217;t trust the government. I think Google are probably one of the most trusted brokers out there online.<br />
<strong><br />
Tish Shute:</strong> Perhaps. That&#8217;s interesting, isn&#8217;t it?<br />
<strong><br />
Edd Dumbill:</strong> I did write a piece which kind of speculated that Google may become some sort of central broker of social information and kind of a platform.</p>
<p><strong>Tish Shute:</strong> Oh, yes, <a href="http://radar.oreilly.com/2011/07/google-plus-social-backbone.html"><strong>&#8220;Google+ is the social backbone&#8221;</strong></a> &#8211; a very thought-provoking piece! It deserves an interview of its own!</p>
<p>But back to the Strata schedule!  I notice you have DePodesta doing the Moneyball talk, right? What&#8217;s the 2011 twist on Moneyball?</p>
<p><strong>Edd Dumbill: </strong>I think the twist on that is that there are a lot more people who can play now, really, which is why we&#8217;re having Strata in the first place. Ten years ago, the people doing this kind of stuff were McDonald&#8217;s and Walmart and sports teams &#8211; everywhere there was large money, they could afford to gather the data and maybe try making decisions based on it.</p>
<p>Well, we&#8217;re now in a very instrumented society where every business, every person, has instrumented data about their interactions. I think the kind of resistance and dynamics and opinions that Moneyball brought up are the ones that people are going to be facing again right now as they seek to be more data-driven in what they&#8217;re doing.</p>
<p>It&#8217;s also very interesting to know, 10 years on, what do you think? You&#8217;ve had 10 years of this, of sort of sabermetrics and so on. Have you matured in your view? Have you softened?</p>
<p>What I&#8217;m endlessly and ultimately fascinated by is, where does this fit in the decision process and in the organization tree? Where does it mesh with vision?</p>
<p>Steve Jobs achieved it perfectly. He had vision and all kinds of things for his products. But Apple succeeded through a relentless operational efficiency &#8211; absolutely relentless in their suppliers, their supply chain, their manufacturing lines, down to the detail. They are an utterly data-driven, process-driven organization at the same time as melding that with vision, design values, and good quality. That&#8217;s a case where it worked together.</p>
<p>I&#8217;m eager to try and tease it out, figure out how that really works and how those things come together.<br />
<strong><br />
Tish Shute: </strong> And that&#8217;s another thread I see being explored at Strata, NYC.  It&#8217;s not human versus machine, or machine trumps human; it&#8217;s human with machine.  This is another theme, isn&#8217;t it?<br />
<strong><br />
Edd Dumbill: </strong>Exactly. We all operate by feedback loops. Really, what machines are doing enables us to get better quality data in a tighter feedback loop.</p>
<p><strong>Tish Shute: </strong> One feedback loop that we&#8217;re finding machines very useful for is understanding how we feel. I think that&#8217;s really interesting.</p>
<p><strong>Edd Dumbill: </strong>Yes. I&#8217;m very fascinated by all the quantified-self stuff and where that can take us. At the end of the day, we have a very personal little organization to deal with, which is ourselves.<br />
<strong><br />
<a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/08/Quid.png"><img class="alignnone size-medium wp-image-6369" title="Quid" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/08/Quid-300x182.png" alt="" width="300" height="182" /></a><br />
<a href="http://quid.com/" target="_blank"><em>Quid: Building Software and Mathematical Solutions to Simplify Complex Decisions</em></a></strong></p>
<p><strong>Tish Shute:</strong> Yes! But the thing is, we don&#8217;t understand ourselves in isolation, do we?  I am definitely going to attend the session by Sean Gourley, CTO of <a href="http://quid.com/" target="_blank">Quid</a>, on semantic clustering analysis.  It seems like sentiment analysis is going big-time now, isn&#8217;t it?<br />
<strong><br />
Edd Dumbill: </strong>Yes. I mean, sentiment analysis is actually becoming a checkbox feature in databases now. The latest release of <a href="http://www.greenplum.com/">Greenplum</a> has it built in. It&#8217;s the kind of feature that people want as social data becomes so important. Of course, a lot of this is being driven by marketing and advertising.</p>
<p><strong>Tish Shute: </strong> Yes, but even re marketing, data story telling has been taking some interesting and quirky turns, hasn&#8217;t it?<br />
<strong><br />
Edd Dumbill: </strong>Yes, absolutely. I think there&#8217;s a lot of interesting research ahead of us there as well.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/08/OKCupid.png"><img class="alignnone size-medium wp-image-6370" title="OKCupid" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/08/OKCupid-273x300.png" alt="" width="273" height="300" /></a><br />
<em><a href="http://blog.okcupid.com/">OKCupid Trends</a></em></p>
<p><strong>Tish Shute:</strong> <a href="http://www.okcupid.com/">OkCupid</a> is a very interesting example of data story telling that leverages our desire to know ourselves, and ourselves in relation to others.<br />
<strong><br />
Edd Dumbill:</strong> Yes. I mean, they&#8217;re an example of a shift that&#8217;s happening in the PR industry, actually, which is companies understanding that telling marketing stories with data is very, very compelling. OkCupid really used that to hit well above their weight. Of course, they got acquired as a direct result of that and their profile.</p>
<p><strong>Tish Shute:</strong> I know OKCupid got acquired by Match.com, but you were saying they hit above their weight by using this analysis? How did that work?<br />
<strong><br />
Edd Dumbill:</strong> I think a lot of it&#8217;s down to their blog. They analyze these things and publish them on their blog. It got a lot of attention, generated a lot of media stories, which brought them to Match.com&#8217;s attention. There are millions of &#8211; well, a large number of dating sites. But they differentiated themselves through the smart use of their data.</p>
<p><strong>Tish Shute:</strong> Data and games is an area I am very interested in.  Zynga changed the game with game analytics and social games. And now we are seeing Rovio partner with <a href="http://medio.com/">Medio</a> for analytics (see <a href="http://radar.oreilly.com/2011/08/angry-birds-data-hp-daily-dot.html">Green pigs and data</a>).  But I noticed that you don&#8217;t have games as a strong theme on the schedule?</p>
<p><strong> Edd Dumbill: </strong>I think you&#8217;ll see more of that on the West Coast, to be honest. It&#8217;s not that we&#8217;re not interested. I just feel that the center of gravity for that topic is probably back on the West Coast at the moment.</p>
<p><strong>Tish Shute:</strong> So what&#8217;s after Zynga in terms of game analytics? A nice easy question!<br />
<strong><br />
Edd Dumbill:</strong> Sure. Let me predict the future for you.</p>
<p><strong>Tish Shute:</strong> Yes please do!</p>
<p><strong>Edd Dumbill:</strong> I don&#8217;t know, to be honest. One of the very interesting things about games is that they help us understand the real world by modeling and playing around.  I&#8217;m highly fascinated to see some more of those things played out through real life actors.  There have been some examples right out of <a href="http://www.scvngr.com/" target="_blank">SCVNGR</a> and whatnot. But if any of those techniques can really start to make their way into mobile technology, that&#8217;s one interesting thing.</p>
<p>What lessons can we take from what weâ€™ve actually learned in game analytics that are reproducible and useful elsewhere?</p>
<p>Gamification is a bit of a trend right now. I am slightly skeptical&#8230; But I am fascinated by a lot of the systems that are having these game elements added to them.  And so the second question is, if you&#8217;re having games added to things &#8211; like losing weight or saving money or writing a book, I&#8217;ve seen that too &#8211; what can you apply from the analytics world on top of that, to learn about those systems and tweak them?</p>
<p>I don&#8217;t have that good of an answer for you. My own game isn&#8217;t steeped in that. But I am aware that there&#8217;s probably a lot of progress in games that has yet to be applied anywhere else.</p>
<p>Zynga and whatnot is kind of a space race, isn&#8217;t it, to monetize that.  Space races generate technologies that can be applied in a variety of places.</p>
<p>What are the spinouts of game analytics that we can actually use elsewhere?</p>
<h3>&#8220;These Bloom Instruments aren&#8217;t merely games or graphics. They&#8217;re new ways of seeing what&#8217;s important.&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/08/cartagram.png"><img class="alignnone size-medium wp-image-6373" title="cartagram" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/08/cartagram-300x129.png" alt="" width="300" height="129" /></a><br />
<em>Cartagr.am by Bloom</em></p>
<p><strong>Tish Shute:</strong> Last February, at Strata, I was very struck by the new work by Ben Cerveny and <a href="http://bloom.io/">Bloom</a> on &#8220;pop cultural instruments for data expression&#8221; (also see <a href="http://www.youtube.com/watch?v=HWDcc5gNVrE">Ben Cerveny&#8217;s talk at ARE2011</a>).<br />
<strong><br />
Edd Dumbill:</strong> Yeah. I love every time a visualization comes onto a tablet&#8230; there&#8217;s an interesting back channel there.</p>
<p>And Google has done this in the extreme, to their great advantage. There&#8217;s a potential, when you read an e-book or you interact with a visualization on a tablet, that it can learn from your interactions.</p>
<p>If you read an e-book, and the book is instrumented and sends stuff back, then the book can read you at the same time that you&#8217;re reading it. That kind of collective intelligence can then be harnessed.</p>
<p>So what if Bloom&#8217;s pop culture visualizations are instrumented so that they know how people are using them?  What can they learn from that &#8211; about the quality of the visualization, and about what&#8217;s interesting in the data, fed back at the same time?</p>
<p>This is one of the fundamental principles, I think, of Web 2.0, and definitely of this era of big data we&#8217;re in: the secondary signals, the exhaust from any electronic product, can be incredibly valuable.</p>
<p>We know that every time you use Google you are probably part of at least one experiment that they are running to determine an optimum and improve their product. How can you generalize that out?</p>
<p><strong>Tish Shute:</strong> I agree.  This is at the core of the art, science and business of data.  I hear your phone ringing, but do I have time for one more quick question?</p>
<p><strong>Edd Dumbill:</strong> Oh yes.</p>
<p><strong>Tish Shute:</strong> So it sort of follows on from my previous question.  The relationship between crowdsourced intelligence and machine intelligence has played a huge role in making data work and solving real-world problems &#8211; <a href="http://crowdflower.com/" target="_blank">CrowdFlower</a>, for example.</p>
<p>Where are we now with the relationship between the crowdsourcing power of, for example, CrowdFlower and Mechanical Turk, and machine intelligence? Is there anything new going on here?<br />
<strong><br />
Edd Dumbill:</strong> What we&#8217;re actually starting to do is learn where to apply these tools. We&#8217;re reaching a point of understanding what crowdsourcing is for, how to better design crowdsourced tasks, and so on, in innovative uses.</p>
<p>One of the things I am particularly excited about is Natala Menezes, who was at Amazon working on Mechanical Turk. She&#8217;s now moved to a company called <a href="http://gigwalk.com/" target="_blank">GigWalk</a>, which is a Turk platform that&#8217;s mobile.</p>
<p>So if you want to assign tasks that depend on people being in particular places and being able to do particular things, this is a platform for turking that way, which I think is fascinating. That&#8217;s definitely a new approach.<br />
<strong><br />
Tish Shute:</strong> Yes, <a href="http://gigwalk.com/">GigWalk</a> is awesome &#8211; I saw that <a href="http://blogs.msdn.com/b/photosynth/archive/2011/07/19/get-paid-to-shoot-mobile-photosynths.aspx">Photosynth is partnering with GigWalk</a>. That is interesting &#8211; perhaps a step towards strong AR! (See <a href="http://www.wired.com/beyond_the_beyond/2011/05/augmented-reality-readwrite-world-at-are2011/" target="_blank">Read Write World and Blaise Aguera y Arcas&#8217;s work on Photosynth was big news at ARE2011</a>.)</p>
<p><strong> Edd Dumbill:</strong> Natala will be talking about GigWalk.  I think the session is called &#8220;quirky crowdsourcing.&#8221; I want to call it Quirky Turks.</p>
<p><strong>Tish Shute:</strong> [laughs] I like that.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2011/08/31/story-telling-the-art-science-and-business-of-data-talking-with-edd-dumbill-about-strata-nyc-2011/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Jeremie Miller &amp; The Locker Project Give a Data Platform to the People in the Era of Data Everywhere. And Bloom presents Fizz!</title>
		<link>https://www.ugotrade.com/2011/02/10/jeremie-miller-the-locker-project-give-a-data-platform-to-the-people-in-the-era-of-data-everywhere-and-bloom-presents-fizz/</link>
		<comments>https://www.ugotrade.com/2011/02/10/jeremie-miller-the-locker-project-give-a-data-platform-to-the-people-in-the-era-of-data-everywhere-and-bloom-presents-fizz/#comments</comments>
		<pubDate>Thu, 10 Feb 2011 17:10:30 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Ambient Findability]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Data]]></category>
		<category><![CDATA[Big Data]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[GeoFencing]]></category>
		<category><![CDATA[GeoMessaging]]></category>
		<category><![CDATA[gestrural interface]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Linked Data]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[New Interfaces]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Open Data]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[Real Time Big data]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Alistair Croll]]></category>
		<category><![CDATA[Ben Cerveny]]></category>
		<category><![CDATA[Bloom]]></category>
		<category><![CDATA[data visualization]]></category>
		<category><![CDATA[federation]]></category>
		<category><![CDATA[Fizz]]></category>
		<category><![CDATA[Instant Messaging]]></category>
		<category><![CDATA[Introspectr]]></category>
		<category><![CDATA[Jabber]]></category>
		<category><![CDATA[Jason Cavnar]]></category>
		<category><![CDATA[Jeremie Miller]]></category>
		<category><![CDATA[Jesper Sparre Anderson]]></category>
		<category><![CDATA[lifestreaming]]></category>
		<category><![CDATA[Locker Project]]></category>
		<category><![CDATA[Marshall Kirkpatrick]]></category>
		<category><![CDATA[open federated protocol]]></category>
		<category><![CDATA[P2P]]></category>
		<category><![CDATA[peer to peer protocols]]></category>
		<category><![CDATA[real time data]]></category>
		<category><![CDATA[real time data visualization]]></category>
		<category><![CDATA[Roger Magoulas]]></category>
		<category><![CDATA[Simon Murtha-Smith]]></category>
		<category><![CDATA[Singly]]></category>
		<category><![CDATA[social data aggregation]]></category>
		<category><![CDATA[social graph]]></category>
		<category><![CDATA[Sophia Parafina]]></category>
		<category><![CDATA[Strata]]></category>
		<category><![CDATA[Strata 2011]]></category>
		<category><![CDATA[TeleHash]]></category>
		<category><![CDATA[Tim O'Reilly]]></category>
		<category><![CDATA[Tom Carden]]></category>
		<category><![CDATA[XMPP]]></category>
		<category><![CDATA[Zynga]]></category>
		<category><![CDATA[Zyngification]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=6102</guid>
<description><![CDATA[Singly&#8217;s appearance at the startup showcase at Strata 2011 this week has excited thought leaders across the web since the story got out. Singly is a new startup that exists to provide oxygen and commercial support to the open source Locker Project, and new protocol TeleHash. With some wonderful serendipity I met Singly on my [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/Jeremiemiller.jpg"><img class="alignnone size-medium wp-image-6105" title="Jeremiemiller" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/Jeremiemiller-300x223.jpg" alt="" width="300" height="223" /></a></p>
<p><a href="http://sing.ly/" target="_blank">Singly&#8217;s</a> appearance at the <a href="http://strataconf.com/strata2011/public/cfp/148" target="_blank">startup showcase at Strata 2011</a> this week has excited thought leaders across the web since the story got out. Singly is a new startup that exists to provide oxygen and commercial support to the open source <a href="https://github.com/quartzjer/Locker" target="_blank">Locker Project</a>, and the new protocol <a href="http://www.telehash.org/about.html" target="_blank">TeleHash</a>.</p>
<p>With some wonderful serendipity I met Singly on my first night at <a href="http://strataconf.com/strata2011" target="_blank">Strata</a>.  The next day, I talked in depth to <a href="http://en.wikipedia.org/wiki/Jeremie_Miller" target="_blank">Jeremie Miller</a> and <a href="http://twitter.com/#!/smurthasmith" target="_blank">Simon Murtha-Smith</a>, two of the three Singly co-founders (see later in this post).  I also had the opportunity to ask <a href="http://radar.oreilly.com/tim/" target="_blank">Tim O&#8217;Reilly</a>, <a href="http://strataconf.com/strata2011/profile/17816" target="_blank">Alistair Croll</a> and <a href="http://www.oreillynet.com/pub/au/2717" target="_blank">Roger Magoulas</a> for some of their thoughts on the significance of this project (see below for their comments).</p>
<p>It was a real &#8211; pinch myself in case I need to wake up from a dream &#8211; experience for me to stumble across Jeremie Miller with Simon Murtha-Smith sitting behind a hand-written sign demoing Singly at Strata (see my pic opening this post).  As <a href="http://www.readwriteweb.com/archives/creator_of_instant_messaging_protocol_to_launch_ap.php" target="_blank">Marshall Kirkpatrick notes</a>:</p>
<p><strong>&#8220;Jeremie Miller is a revered figure among developers, best known for building XMPP, the open source protocol that powers most of the Instant Messaging apps in the world. Now Miller has raised funds and is building a team that will develop software aimed directly at the future of the web.&#8221;</strong></p>
<p>Singly&#8217;s appearance at Strata began auspiciously when they won the judges&#8217; choice award in the startup showcase.  And following Marshall Kirkpatrick&#8217;s post, <a href="http://www.readwriteweb.com/archives/creator_of_instant_messaging_protocol_to_launch_ap.php#disqus_thread" target="_blank">Creator of Instant Messaging Protocol to Launch App Platform for Your Life</a>, and <a href="http://gigaom.com/2011/02/04/the-locker-project-why-leave-data-tracking-to-others-do-it-yourself/" target="_blank">The Locker Project: Why Leave Data Tracking to Others? Do It Yourself</a>, Singly have been burning up Twitter.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/tweetssingly3.jpg"><img class="alignnone size-medium wp-image-6110" title="tweetssingly3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/tweetssingly3-300x236.jpg" alt="" width="300" height="236" /></a></p>
<p>Singly, by giving people the ability to do things with their own data, has the potential to change our world.  And, as <a href="http://www.readwriteweb.com/archives/creator_of_instant_messaging_protocol_to_launch_ap.php#disqus_thread" target="_blank">Marshall Kirkpatrick notes</a>, this won&#8217;t be the first time Jeremie has done that.</p>
<h3><strong>&#8220;Pop-cultural instruments for data expression and exploration,&#8221; by Bloom</strong></h3>
<p>I was drawn over to the Singly table when an awesome app they were demonstrating caught my eye.  <a href="http://bloom.io/fizz/index.html" target="_blank">Fizz</a>, which is running on a locker with data aggregated from three different places, is a first glimpse of one of <a href="http://bloom.io/" target="_blank">Bloom&#8217;s</a> &#8220;pop-cultural instruments for data expression and exploration.&#8221;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/SimonMurthaSmith.png"><img class="alignnone size-medium wp-image-6116" title="SimonMurthaSmith" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/SimonMurthaSmith-300x224.png" alt="" width="300" height="224" /></a></p>
<p>Fizz is an intriguing early manifestation of capabilities never seen before on the web &#8211; the ability for us to control, aggregate, share and play with our own data streams, and bring together the bits and pieces of our digital selves scattered about the web (for more about Bloom and Singly, see Tim O&#8217;Reilly&#8217;s comments below).  The picture below is my Fizz.  In <a href="http://bloom.io/fizz/index.html" target="_blank">Fizz</a>, large circles represent people and small circles represent their status updates. Bloom says:</p>
<p><strong>&#8220;Clicking a circle will reveal its contents. Typing in the search box will highlight matching statuses.<br />
This is an early preview of our work and we&#8217;ll be adding more features in the next few weeks. <a href="https://spreadsheets.google.com/viewform?hl=en&amp;formkey=dGZINGpDQ3NubVNiMlY3eFZ6MUNGdFE6MQ#gid=0" target="_blank">We&#8217;d love to hear your feedback and suggestions</a>.&#8221;</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/FizzbyBloom.png"><img class="alignnone size-medium wp-image-6117" title="FizzbyBloom" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/FizzbyBloom-300x179.png" alt="" width="300" height="179" /></a><br />
</strong></p>
<p>If you are not already familiar with the Bloom team &#8211; Ben Cerveny, Tom Carden, and Jesper Sparre Andersen &#8211; go directly to <a href="http://bloom.io/about" target="_blank">their about page</a> and you will understand why the match of Bloom and The Locker Project is a cause for great delight.</p>
<h3>The Locker Project &#8211; a whole new way to connect from the protocol up</h3>
<p>As Jeremie began explaining the depth and breadth of what The Locker Project is facilitating, I was utterly gobsmacked. And when the penny dropped and I realized this is the whole nine yards &#8211; bringing awesomeness to people with a whole new way to connect, from the protocol up &#8211; all I could think was, OMG, finally!</p>
<p>Luckily I have had time to catch up with the whole team since then, and recovered my composure enough to ask some coherent questions. But I can still barely contain my enthusiasm for this project.</p>
<p>Singly, The Locker Project and TeleHash take on, and deliver a simple, elegant, and open solution to, some of the holy grails of the next generation of networked communications.  I have written on, and been nibbling at the edges of, some of these grails in various projects myself for quite a while now.  Even if you haven&#8217;t been reading UgoTrade, just a glance at <a href="http://www.ugotrade.com/2011/01/20/real-time-big-data-at-strata-2011-ambient-findability-geomessaging-augmented-data-and-new-interfaces/" target="_blank">the monster mash of my pre-Strata post</a> will give you an idea of how important I think Singly is.</p>
<p>My previous post raised the question of how to invert the search pyramid and to transform search into a social, democratic act.  But if you are really interested in social search, I suggest staying keyed into what Singly is doing with The Locker Project!</p>
<p>One of Singly&#8217;s three founders, Simon Murtha-Smith, was building a company called <a href="https://www.introspectr.com/" target="_blank">Introspectr</a>, a social aggregator and search product. Singly&#8217;s other founder, <a href="http://www.linkedin.com/in/jasoncavnar" target="_blank">Jason Cavnar</a>, was working on another similar project.  They came together as Singly because social aggregation and search is a very hard problem for one company to solve, and they realized that the basic infrastructure needs to be open source and built on an open protocol.</p>
<p>As Jeremie puts it, <strong>&#8220;We shouldn&#8217;t&#8230; (every startup that wants to do something interesting) have to spend this much time aggregating the data, building robust aggregators.&#8221;</strong></p>
<p>To me what is so important about the Locker Project is that it is built on a new open protocol, TeleHash.  And having the Singly team focused on supplying tools and the trust/security layer for the Locker Project will mean that developers have the whole stack they need to do some interesting stuff very soon.</p>
<p>I asked Jeremie to explain the relationship between TeleHash, The Locker Project and Singly.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/TeleHash.png"><img class="alignnone size-medium wp-image-6118" title="TeleHash" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/TeleHash-300x172.png" alt="" width="300" height="172" /></a></p>
<p><strong>Tish Shute:</strong> So <a href="http://www.telehash.org/about.html" target="_blank">TeleHash</a>&#8230;</p>
<p><strong>Jeremie Miller:  It&#8217;s a peer-to-peer protocol to move bits of data around for applications.  Not file sharing, but for actual applications to find each other and connect.  So if you had an app and I had an app, whenever we&#8217;re running that app on our devices, those devices can actually find each other and then connect.  Our applications can connect and do something.</strong></p>
<p><strong>It&#8217;s for the entire edge of the network, basically, out there in the wild, letting those things mesh together.</strong></p>
<p><strong>And TeleHash is actually what has led to the Locker Project itself.</strong></p>
<p><strong>Tish Shute:</strong> So TeleHash led to The Locker Project, and the Locker Project led to Singly?</p>
<p><strong>Jeremie Miller: Singly is a company that is sponsoring the open source Locker Project&#8230; the three of us as founders [left to right in pic below: Jeremie Miller, Jason Cavnar, Simon Murtha-Smith].</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/RRWSingly.png"><img class="alignnone size-medium wp-image-6119" title="RRWSingly" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/RRWSingly-300x220.png" alt="" width="300" height="220" /></a></p>
<p><em>I took the pic above of all three founders being interviewed by Marshall Kirkpatrick of Read Write Web for his post, <a href="http://www.readwriteweb.com/archives/creator_of_instant_messaging_protocol_to_launch_ap.php#disqus_thread" target="_blank">&#8220;Creator of Instant Messaging Protocol to Launch App Platform for Your Life.&#8221;</a>  I think we will look back on this moment and say it was <a href="http://twitter.com/#!/TishShute/status/33403971649544192" target="_blank">an inflection point for the web</a>. At least I tweeted that!</em></p>
<p><strong>Jeremie Miller: TeleHash is a protocol that lets the lockers connect with each other and share things.  The locker is like all of your data.  So it&#8217;s sort of like a digital person&#8230;</strong></p>
<p><strong>Tish Shute:</strong> A locker for the bits and pieces of your digital self?</p>
<p><strong>Jeremie Miller:  Yes. So TeleHash lets the lockers connect directly, peer-to-peer, with each other and share things.  Singly, as a company, is going to be hosting lockers first and foremost.  But the Locker Project is an open source project.  You can have a locker on your machine or you can install it wherever you want&#8230;</strong></p>
<p><strong>Tish Shute:</strong> Yes, it&#8217;s often too difficult for a lot of people to set up something locally&#8230; so Singly makes it easy to have a locker, right?</p>
<p><strong>Jeremie Miller:  A lot of people see this cool app, or this cool thing they want to do that runs in your locker, and they need to be able to turn on a locker somewhere very easily.</strong></p>
<p><strong>Tish Shute:</strong> So Singly will provide the trust layer and hosting?</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/Singly.jpg"><img class="alignnone size-full wp-image-6130" title="Singly" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/Singly.jpg" alt="" width="159" height="80" /></a></p>
<p><strong>Jeremie Miller:  Yeah, Singly is a company that will host lockers, as well as&#8230; when people build applications that run inside your locker or use your data, you need to be able to trust them.  Maybe it&#8217;s social data and you don&#8217;t care that much.  But especially once you start to get any of your transactions in there &#8211; your browsing history, your health data, like your running logs or sleeping&#8230; Fitbit stuff &#8211; then it&#8217;s much more important to be careful about what you&#8217;re running inside your locker and sharing.</strong></p>
<p><strong>So Singly will also look at the applications that are available that you can install and actually run them and look at what data they access, and look at who created them, and be able to come back and either certify or vouch for them.</strong></p>
<p><strong>And I hope, in the long run, as this grows and builds, that power users may actually be able to buy a little device that they can plug into their home network that is their locker.  Wouldn&#8217;t that be cool?  This little hard drive or whatever that you plug in.</strong></p>
<p><strong>Tish Shute: </strong>Wow &#8211; that would be very cool!  Architecturally, are TeleHash and the Locker Project related to your work on XMPP?</p>
<p><strong>Jeremie Miller:  Architecturally, some of the stuff I&#8217;ve learned&#8230; XMPP, in Jabber, was designed for the specific purpose of instant messaging, but it was still a federated model, in that you still had to go through sort of a central point &#8211; a server that lived somewhere.  So it was really optimized for businesses and small groups, teams, as well as big companies out there; ISPs can use it.</strong></p>
<p><strong>So it was designed with that in mind &#8211; for the communication path to be routed through somewhere.  Where I&#8217;ve sort of evolved over the years since then is being really fascinated with truly distributed protocols that are completely decentralized, so that things go peer-to-peer instead of through any server.</strong></p>
<p><strong>Over the last 10 years, peer-to-peer has gotten a pretty bad rap from file sharing.</strong></p>
<p><strong>Tish Shute: </strong> A really bad rap, yes.</p>
<p><strong>Jeremie Miller:  Yeah.  And almost because of that, and because it&#8217;s really hard to do, it hasn&#8217;t gotten&#8230; the potential for it is awesome.  There are so many really good things that can be done peer-to-peer.  And it hasn&#8217;t gotten used very much.</strong></p>
<p><strong>But the other side of the peer-to-peer thing that I think is critically important: look at the explosion of computing devices around a person these days, both in the home and on our person.  We have one, two, three, four even.  And the number of devices that are online for you, that are yours&#8230; I look at my home network router and I&#8217;ve got 30 devices in my house on Wi-Fi.  What the heck?  That&#8217;s a lot of devices.</strong></p>
<p><strong>But right now, all of those devices &#8211; for me to work with them, I&#8217;m almost always going through a server somewhere, through a data center somewhere, which is ridiculous at face value.  You go five, 10 years out from now, there&#8217;s probably going to be 300 devices on me in some form.</strong></p>
<p><strong>Tish Shute:</strong> So we need a peer-to-peer network just to manage our own devices?</p>
<p><strong>Jeremie Miller:  A peer-to-peer network, yes.  You know, my phone should be talking straight to my computer, or to the iPad, or to the washing machine or refrigerator.  The applications in my TV, or whatever &#8211; they should all be talking peer-to-peer.  And it should be easy to do that.  It shouldn&#8217;t be that the only way you can do that is to go through a data center somewhere.</strong></p>
<p>[Our conversation continued, but to sum things up for now, here is the final question I asked Jeremie, which pretty much packs in everything I would like to do with TeleHash, the Locker Project, and the Singly tools/trust layer, all in one!]</p>
<p><strong>Tish Shute:</strong> How can TeleHash, the locker, and Singly help people combine personal data from different sources &#8211; web and mobile, for example &#8211; so the data locked up in our social graph on the web can be integrated with, for example, the location data and &#8220;the data wake&#8221; from our cell phone sensors, to know not only where we have been but to give us more ways to know where we are going?</p>
<p><strong>Jeremie Miller: That&#8217;s a pretty packed question, but here&#8217;s my simple answer, hopefully just seeds the right discussion <img src="https://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /> </strong></p>
<p><strong>TeleHash is the protocol that lets the apps (mobile, sensor, or anywhere) talk to a locker, and lockers talk to each other &#8211; it&#8217;s the chatter, moving the bits around the network.  The locker is the storage for a person&#8217;s data and the crunching ability to analyze it or trigger actions from it. Singly is the company sponsoring the project(s) and helping anyone dev apps atop it.  We&#8217;re going to build the platform, and we&#8217;re looking to the world to create some amazing things on top of it (we have lots of our own personal ideas we already want to create, hah!).</strong></p>
<h3>The Locker Project is not just &#8220;one more rebel army trying to undo these big data aggregations,&#8221; Tim O&#8217;Reilly</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/Screen-shot-2011-02-10-at-12.01.29-AM.png"><img class="alignnone size-full wp-image-6120" title="Screen shot 2011-02-10 at 12.01.29 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/Screen-shot-2011-02-10-at-12.01.29-AM.png" alt="" width="240" height="238" /></a></p>
<p><strong><a href="http://twitter.com/#!/lockerproject" target="_blank">@lockerproject</a>: &#8220;We&#8217;ll be posting updates on the Locker Project (<a rel="nofollow" href="https://github.com/quartzjer/Locker" target="_blank">https://github.com/quartzjer/Locker</a>) here as we make progress, very awesome stuff &#8220;</strong></p>
<p>During the Strata Media Conversation I asked Roger Magoulas about Singly and The Locker Project, because Roger played Yente and brought Singly and Bloom together!  Although there was not much time to discuss it, the relationship of TeleHash, The Locker Project and Singly to the social network incumbents came up, and Roger Magoulas and Tim O&#8217;Reilly gave some very insightful comments on this when I talked to them afterwards (see below).</p>
<p>Roger Magoulas pointed out:</p>
<p><strong>&#8220;I think Singly has Facebook-like aspects, but I think a better description is an app platform that integrates your personal and social network data &#8211; including data from Facebook. Sing.ly is likely to have challenges with some of their data sources, particularly if Sing.ly gains traction with users.</strong></p>
<p><strong>I like the app platform business model, although they face risks getting critical mass and app developer attention, and I like how they plan on using open source connectors to keep up with changing social network platforms. Jeremie has credibility with the open source community and is likely to find cooperating developers. The team seems to bring complementary strengths to the project and you can tell they all work well together.&#8221;</strong></p>
<p>And Tim O&#8217;Reilly went on to elaborate on the awesome potential of this platform to bring something new to the ecosystem, and to comment on just how interesting Bloom&#8217;s insight into &#8220;data visualization as a means of input and control&#8221; is.</p>
<h3>Talking with Tim O&#8217;Reilly</h3>
<p><strong>Tish Shute:</strong> So will the Locker Project be able to break the lock of Facebook&#8217;s and other big sites&#8217; control of everyone&#8217;s data?  Sometimes I feel we are stuck in the era of Zyngification, where you have to do what Zynga did and leverage the system in order to gain traction or do anything with social data.</p>
<p><strong>Tim O&#8217;Reilly:  I don&#8217;t think that is the objective of the Locker Project &#8211; to break the Facebook lock &#8211; because I tend to agree: the value of Facebook is having your data there with other people&#8217;s data.  What Singly may be able to accomplish is to give people better tools for managing their data.  Because if you can actually start to abstract the data from various sites, and you can set it and manage it yourself, then you can potentially make better decisions about what you&#8217;re going to allow and not allow.  Right now, the interfaces on a lot of these sites make it very, very difficult to understand exactly what the implications are.</strong></p>
<p><strong>And I think all this, done right, will create a marketplace where people will build better interfaces that give people more control over their data.  They&#8217;ll still want to put it on those sites &#8211; because why do you put your money in the bank?  Because it&#8217;s more valuable being with other people&#8217;s money.</strong></p>
<p><strong>And I think that to conceive of it as one more rebel army trying to undo these big data aggregations is just the wrong way to frame it.</strong></p>
<p><strong>Tish Shute: </strong> Yes, and framing the question the way you just did &#8211; that this is not just one more rebel army &#8211; might mean that the stage at Strata will be filled with new startups next year!  That&#8217;s what I thought when I found out what The Locker Project and Singly were about &#8211; that we are about to see an explosion of creativity with personal and social data.</p>
<p><strong>Tim O&#8217;Reilly:  Yeah, sure.  I mean, because at the end of the day, if you can start to extract your personal data in ways that make it more useful, you can potentially create the ability for people to build better interfaces.  It&#8217;s not just Facebook.</strong></p>
<p><strong>You know, you think, &#8220;Oh, wow, I&#8217;d really like to have a management console for all my contacts.&#8221;  And you go, &#8220;Well, I&#8217;m stuck with, I can use Facebook, I can use LinkedIn, I can use my address book in Outlook or Gmail or whatever, or on my local machine.&#8221;  The tools are pretty primitive.  And if we get a better set of tools, I think we&#8217;ll see a lot of innovation.</strong></p>
<p><strong>Now, some of those startups might well be acquired by a Facebook or a Google.  But if it moves the ball forward in giving people better visibility and control over their data, that&#8217;s a good thing.</strong></p>
<h3>Bloom&#8217;s insight: &#8220;data visualization will become a means of input and control&#8221;</h3>
<p><strong>Tish Shute:</strong> I loved the marriage with Bloom, which is interesting, because Ben and the Bloom team haven&#8217;t really talked a lot about Bloom yet, but I gather Bloom is moving to consumer-facing work with data?</p>
<p><strong>Tim O&#8217;Reilly:  What&#8217;s really interesting about Bloom is the notion&#8230;  You know, people think of data visualization as output.  And the insight that I think Ben has had with Bloom is that data visualization will become a means of input and control.</strong></p>
<p><strong>Tish Shute: </strong> Right, very cool.</p>
<p><strong>Tim O&#8217;Reilly: So I&#8217;ve started to feel like data visualization as a way of making sense of complex data is kind of a dead-end.  Because what you really want to do is to build these feedback loops where you actually figure something out, some particular atomic action, well enough that you can create an application that lets somebody actually do something with it.  But the idea of visualization as a way of manipulating the data in real-time, data visualization as interface rather than as a report &#8211; it&#8217;s a small but subtle shift that I think becomes kinda cool.</strong></p>
<h3>Talking with Alistair Croll</h3>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="225" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=19738228&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=1&amp;color=00ADEF&amp;fullscreen=1&amp;autoplay=0&amp;loop=0" /><embed type="application/x-shockwave-flash" width="400" height="225" src="http://vimeo.com/moogaloop.swf?clip_id=19738228&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=1&amp;color=00ADEF&amp;fullscreen=1&amp;autoplay=0&amp;loop=0" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p><a href="http://vimeo.com/19738228">Sing.ly &#8211; Join or Die</a> from <a href="http://vimeo.com/user5977233">Singly Inc</a> on <a href="http://vimeo.com">Vimeo</a>.</p>
<p><strong>Alistair Croll:</strong> <strong>So I&#8217;m a big fan of Singly.  They were my choice for the Startup Showcase.  I think it&#8217;s certainly the right time &#8211; the team can execute on it.  But the thing I like the most is I thought back to the early days of Photoshop.</strong></p>
<p><strong>So, Photoshop was a neat application that could take data in the form of an image and manipulate it.  But the real value from Photoshop came from these amazing plugins.  Like, there&#8217;s a company called Kai&#8217;s Power Tools that made these things that would allow you to do manipulations &#8211; today, commonplace things that are built in.  But at the time, they were things like building bubbles, and spheres, and drop shadows and stuff like that, cutouts, in amazing ways.</strong></p>
<p><strong>Another company, I think, called Alien Skin made these things.  There&#8217;s whole ecosystems of plugins.  So you could go and get a plugin and transform that original data in ways you hadn&#8217;t thought of.  And eventually, there was a macro language for scripting how you could do those things, and that found its way into the Photoshop environment.</strong></p>
<p><strong>But you think about the transformation of digital design from Photoshop &#8211; I think if you can take that same pattern, where you create the basic ecosystem of a few tools and then you allow people an open system on top of that, that&#8217;s unprecedented.  I think it really does allow you to take ownership of that.</strong></p>
<p><strong>And then when you allow people the proper tools to federate that information&#8230;  I was actually thinking of starting a company a couple of years ago based on data federation like that.  But what you really want to say is, I&#8217;ve got a pattern&#8230;  It&#8217;s almost like a multi-channel mixer.  You&#8217;ve got a band that is your health, your weight, your blood pressure, family photos, words you&#8217;ve used.  You know, the more data I record when I carry my phone around with a headset or whatever, all of that stuff goes in &#8211; all my searches, everything.</strong></p>
<p><strong>And then I say, &#8220;Ah, I want to federate height, weight, and blood pressure with my doctor.  I want to federate sleep cycles and nutrition with my child&#8217;s teacher,&#8221; and so on and so on.  And you start to create these federated sources of data where now you have a teacher data mining, in a safe manner, the sleep and health habits of all the students along with report card information.  And you suddenly realize that Johnny is borderline diabetic and falls asleep at recess.</strong></p>
<p><strong>That&#8217;s something that never would have happened.  And that happens when you have tools to federate data and then compute on top of them.  So this idea of, like, lifestreaming or life logging &#8211; this is a logical consequence of the whole lifestreaming movement; that whole recorded future stuff.</strong></p>
<p><strong>Tish Shute:</strong> Yes, it really is a wonderful fruition of the visions of the lifestreaming movement [<a href="http://lifestreamblog.com/interview-with-david-galernter-on-the-future-of-lifestreaming-and-my-thoughts/" target="_blank">see this interview with David Gelernter</a>].  And best of all, it sits on a new open protocol, TeleHash, and the open source Locker Project that will give tools to everyone to work with these data streams.</p>
<p><strong>Alistair Croll:  Exactly.  This is the toolset that sits on top of that stuff.  Because once I&#8217;ve life-streamed everything, great, I have this bucket of stuff that I did that I never look at again.  But if I can suddenly unlock that with data mining tools and analyze patterns, all of a sudden that life logging has a reason to have existed.</strong></p>
<p><strong>The biggest problem we have with data right now is we don&#8217;t have a priori knowledge of what will be useful.  We could have been recording crime reports in the city of Chicago, and a year later it turns out that data is really useful for predicting diabetes in the city, but we didn&#8217;t know it was related.</strong></p>
<p><strong>So the problem &#8211; and one of the things I think that distinguishes big data from traditional data &#8211; traditional data is collected with some a priori knowledge of how it will be used.  Big data tends to be collected for the sake of&#8230;  It&#8217;s almost collected on faith that later on it will be useful for something.</strong></p>
<p><strong>Tish Shute:</strong> I am very interested in this idea of federation; I actually went as far as to deep dive into Wave servers because of this&#8230;</p>
<p><strong>Alistair Croll:  Yeah, Wave was a great example of federation, just too complicated.  When it was canceled, both users [and developers] were furious.</strong></p>
<p><strong>Tish Shute:</strong> Yeah, I suppose you could see Google Wave as a bit of an Icarus project, right?  I am so excited by Singly because it is coming sort of bottom-up &#8211; a very different approach&#8230;</p>
<p><strong>Alistair Croll:  And remember, Facebook didn&#8217;t work before Friendster.  The only difference between being wrong and being too early is that too early costs a lot of money.  So it may be that this is an idea that works now, but a couple years ago didn&#8217;t work.</strong></p>
<p><a href="http://twitter.com/#%21/acroll" target="_blank">Alistair Croll</a>, co-chair of <a href="http://strataconf.com/strata2011" target="_blank">Strata 2011</a>, in his post, reframed the question, <a href="http://mashable.com/2011/01/12/data-ownership/" target="_blank">â€œWho Owns Your Data?â€</a> as, â€œItâ€™s not who owns the data, itâ€™s about who can put the data to work.â€</p>
<p>And I am sure there will be many more people able to put data to work, and into play, in a multitude of interesting ways, now that we have TeleHash, the Locker Project, and Singly.</p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/TishStrata.png"><img class="alignnone size-medium wp-image-6127" title="TishStrata" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/TishStrata-300x197.png" alt="" width="300" height="197" /></a><br />
</strong></p>
<p><em>Photo by <a href="http://duncandavidson.com/" target="_blank">Duncan Davidson</a>.<br />
<a href="http://strataconf.com/strata2011" target="_blank">Strata 2011</a> is presented by O&#8217;Reilly Media. Produced by<a href="http://2goodcompany.com/" target="_blank"> Good Company Communications.</a></em></p>
<p>I think the photo above gives a good idea of how I felt on the last day  at the Strata conference.  Yup &#8211; like the cat who got the cream!</p>
<p>And in case you are wondering where AR is in this story &#8211; it is everywhere!  Below is a pic of the AR concept designs that were omnipresent in the media communications at Strata.  The one below I snapped off the job board.  But as <a href="http://sproke.blogspot.com/" target="_blank">Sophia Parafina</a> noted, <strong>&#8220;AR is maturing from displaying last year&#8217;s text bubbles and dinosaurs to big data overlaid on the world.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/Screen-shot-2011-02-10-at-1.39.01-AM.png"><img class="alignnone size-medium wp-image-6137" title="Screen shot 2011-02-10 at 1.39.01 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2011/02/Screen-shot-2011-02-10-at-1.39.01-AM-300x222.png" alt="" width="300" height="222" /></a></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2011/02/10/jeremie-miller-the-locker-project-give-a-data-platform-to-the-people-in-the-era-of-data-everywhere-and-bloom-presents-fizz/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>The Missing Manual for the Future: Tim O&#8217;Reilly&#8217;s Four Cylinder Innovation Engine</title>
		<link>https://www.ugotrade.com/2010/10/31/tim-o%e2%80%99reilly%e2%80%99s-four-cylinder-innovation-engine-the-missing-manual-for-the-future/</link>
		<comments>https://www.ugotrade.com/2010/10/31/tim-o%e2%80%99reilly%e2%80%99s-four-cylinder-innovation-engine-the-missing-manual-for-the-future/#comments</comments>
		<pubDate>Sun, 31 Oct 2010 21:25:02 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Bar Camp]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[#w2e]]></category>
		<category><![CDATA[algorithmic economies]]></category>
		<category><![CDATA[arduino]]></category>
		<category><![CDATA[Area/Code]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[Battlestorm]]></category>
		<category><![CDATA[Chris Arkenberg]]></category>
		<category><![CDATA[Cloudera]]></category>
		<category><![CDATA[counter surveillance]]></category>
		<category><![CDATA[Credit Suisse trading bots]]></category>
		<category><![CDATA[CrowdFlower]]></category>
		<category><![CDATA[data is gasoline]]></category>
		<category><![CDATA[Defeating Big Brother]]></category>
		<category><![CDATA[Dennis Crowley]]></category>
		<category><![CDATA[Dr Alex Kilpatrick]]></category>
		<category><![CDATA[ecologies of human and machine intelligence]]></category>
		<category><![CDATA[Esther Dyson]]></category>
		<category><![CDATA[Facebook for Data]]></category>
		<category><![CDATA[food52]]></category>
		<category><![CDATA[Four Square]]></category>
		<category><![CDATA[Four Square and Dodge Ball]]></category>
		<category><![CDATA[Four Square API]]></category>
		<category><![CDATA[Fred Wilson]]></category>
		<category><![CDATA[Games That Know Where You Live]]></category>
		<category><![CDATA[geopollster]]></category>
		<category><![CDATA[Glympse]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Gov 2.0]]></category>
		<category><![CDATA[Hackett]]></category>
		<category><![CDATA[Hadoop World]]></category>
		<category><![CDATA[high frequency trading]]></category>
		<category><![CDATA[hour.ly]]></category>
		<category><![CDATA[iPad]]></category>
		<category><![CDATA[iphone apps]]></category>
		<category><![CDATA[Jet Packs]]></category>
		<category><![CDATA[jetpack]]></category>
		<category><![CDATA[John Battelle's Points of Control Map]]></category>
		<category><![CDATA[Kevin Slavin]]></category>
		<category><![CDATA[Knight Foundation]]></category>
		<category><![CDATA[Lars Rasmussen]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Loitering on the Motherboard]]></category>
		<category><![CDATA[machine to machine data]]></category>
		<category><![CDATA[machine to machine intelligence]]></category>
		<category><![CDATA[Macon Money]]></category>
		<category><![CDATA[Madagascar Institute]]></category>
		<category><![CDATA[Maker Faire]]></category>
		<category><![CDATA[Mary Haskett]]></category>
		<category><![CDATA[Mike Olsen]]></category>
		<category><![CDATA[mobile social augmented reality]]></category>
		<category><![CDATA[Nanex]]></category>
		<category><![CDATA[Nanex API]]></category>
		<category><![CDATA[Next Jump]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[Pachube API]]></category>
		<category><![CDATA[pathfinder]]></category>
		<category><![CDATA[people are the platform]]></category>
		<category><![CDATA[Platforms for Growth]]></category>
		<category><![CDATA[Points of Control Map]]></category>
		<category><![CDATA[Qualcomm vision based augmented reality SDK]]></category>
		<category><![CDATA[quant trading]]></category>
		<category><![CDATA[quantative analysis]]></category>
		<category><![CDATA[real time data analytics]]></category>
		<category><![CDATA[real time internet]]></category>
		<category><![CDATA[Samasource]]></category>
		<category><![CDATA[sensor platforms]]></category>
		<category><![CDATA[Shazam]]></category>
		<category><![CDATA[Shazam for faces]]></category>
		<category><![CDATA[sousveillance]]></category>
		<category><![CDATA[stock market flash crash]]></category>
		<category><![CDATA[Strata]]></category>
		<category><![CDATA[surveillance bots]]></category>
		<category><![CDATA[The Battle for the Internet Economy]]></category>
		<category><![CDATA[The Battle of the Networks]]></category>
		<category><![CDATA[The Business of Data]]></category>
		<category><![CDATA[The Consequences of Living in a World of Data]]></category>
		<category><![CDATA[The Future: The Missing Manual]]></category>
		<category><![CDATA[The Gartner Hype Cycle]]></category>
		<category><![CDATA[the internet is a data operating system]]></category>
		<category><![CDATA[The Internet Operating System]]></category>
		<category><![CDATA[The Jet Ponies]]></category>
		<category><![CDATA[The Missing Manual For The Future]]></category>
		<category><![CDATA[Tim O'Reilly]]></category>
		<category><![CDATA[Tim O'Reilly's Four Cylinder Engine for Innovation]]></category>
		<category><![CDATA[Tim O'Reilly's Four Cylinder Innovation Engine]]></category>
		<category><![CDATA[trading bots]]></category>
		<category><![CDATA[Twitter for Sensors]]></category>
		<category><![CDATA[Union Square Ventures]]></category>
		<category><![CDATA[Ushahidi]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[Valveless Pulse Jets]]></category>
		<category><![CDATA[WanderID]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[Wave in a Box]]></category>
		<category><![CDATA[Web 2.0 Expo]]></category>
		<category><![CDATA[Web 2.0 Expo start ups]]></category>
		<category><![CDATA[Web 2.0 Summit]]></category>
		<category><![CDATA[Web Squared]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[Where 2.0]]></category>
		<category><![CDATA[William Gibson]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5985</guid>
		<description><![CDATA[The Missing Manual for The Future (or The Future: The Missing Manual) O&#8217;Reilly Media is famous for producing &#8220;missing manuals&#8221; for new technologies, but thinking of O&#8217;Reilly as just a publisher of books would be like saying Facebook is just a website (this came up in the discussion at Media Round Table at Web [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="../wp-content/uploads/2010/10/Screen-shot-2010-10-11-at-11.40.56-AM.png" mce_href="../wp-content/uploads/2010/10/Screen-shot-2010-10-11-at-11.40.56-AM.png"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-11-at-11.40.56-AM.png" mce_href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-11-at-11.40.56-AM.png"><img class="alignnone size-medium wp-image-5786" title="Screen shot 2010-10-11 at 11.40.56 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-11-at-11.40.56-AM-300x198.png" mce_src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-11-at-11.40.56-AM-300x198.png" alt="Screen shot 2010-10-11 at 11.40.56 AM" height="198" width="300"></a><br mce_bogus="1"></p>
<h3>The Missing Manual for The Future (or The Future: The Missing Manual)</h3>
<p>O&#8217;Reilly Media is famous for producing <a href="http://missingmanuals.com/" mce_href="http://missingmanuals.com/" target="_blank">&#8220;missing manuals&#8221;</a> for new technologies, but thinking of O&#8217;Reilly as just a publisher of books would be like saying Facebook is just a website (this came up in the discussion at the Media Round Table at <a href="http://www.web2expo.com/" mce_href="http://www.web2expo.com/">Web 2.0 Expo, NY, 2010</a>).&nbsp; In recent weeks, I managed to catch Tim O&#8217;Reilly at several events &#8211; <a href="http://makerfaire.com/newyork/2010/" mce_href="http://makerfaire.com/newyork/2010/" target="_blank">Maker Faire</a>, <a href="http://www.web2expo.com/" mce_href="http://www.web2expo.com/">Web 2.0 Expo</a>, <a href="http://www.cloudera.com/company/press-center/hadoop-world-nyc/" mce_href="http://www.cloudera.com/company/press-center/hadoop-world-nyc/" target="_blank">Hadoop World</a>, and the free webcast Tim did with John Battelle on <a href="http://radar.oreilly.com/2010/10/the-battle-for-the-internet-ec.html" mce_href="http://radar.oreilly.com/2010/10/the-battle-for-the-internet-ec.html" target="_blank">The Battle for the Internet Economy</a> (although Tim spoke several other times during this period!).</p>
<p>It occurred to me, as I immersed myself in the depth and breadth of innovation showcased and discussed at these events, that Tim O&#8217;Reilly and the O&#8217;Reilly team are creating <b>The Missing Manual for the Future.<br />
</b></p>
<p>As Tim puts it, we are <b>&#8220;changing the world by spreading the knowledge of innovators.&#8221;</b> Tim uses a quote from William Gibson to illuminate what is at the heart of the O&#8217;Reilly project<b>:</b></p>
<p><b>&#8220;The Future is here, it is just not evenly distributed yet.&#8221; (William Gibson)</b></p>
<p>But Tim O&#8217;Reilly makes another point about the future when he speaks.&nbsp; The future unfolds unexpectedly &#8211; so we must invent for an unknown future, not a known future, or as Alex Steffen put it so well in his post, <a href="http://www.worldchanging.com/archives/010959.html" mce_href="http://www.worldchanging.com/archives/010959.html" target="_blank"><span>Why Our Bright Green Futures Will Be Weirder Than We Think</span></a> &#8211; <b>&#8220;The world we need is one we&#8217;ve never yet seen.&#8221;</b> The magic of attending an O&#8217;Reilly event is that it gives you a chance to work on this koan in interesting ways, and to take more responsibility for how things turn out.</p>
<p>Tim O&#8217;Reilly also urges that we think more deeply about what we are doing.&nbsp; His keynote for <a href="http://www.cloudera.com/company/press-center/hadoop-world-nyc/" mce_href="http://www.cloudera.com/company/press-center/hadoop-world-nyc/" target="_blank">Hadoop World</a>, NYC, billed as <b>&#8220;The Business of Data,&#8221;</b> turned towards <b>&#8220;The Consequences of Living in a World of Data.&#8221;</b> The 900-strong crowd at Hadoop World was probably one of the most savvy crowds in the world about the business of data, so this was a nice turn.</p>
<p><a href="http://www.web2expo.com/" mce_href="http://www.web2expo.com/">Web 2.0 Expo</a> with the theme, <b>Platforms for Growth,</b> was a deep dive into the business of innovation.&nbsp; Tim Oâ€™Reillyâ€™s keynote at <a href="http://www.web2expo.com/" mce_href="http://www.web2expo.com/">Web 2.0 Expo</a>,&nbsp; â€œThinking Hard About The Futureâ€ (or rather â€œthinking a little bit creatively or differently about the future)&nbsp; â€“ see<a href="http://www.youtube.com/watch?v=3637xFBvkYg&amp;p=6F97A6F4BA797FB3" mce_href="http://www.youtube.com/watch?v=3637xFBvkYg&amp;p=6F97A6F4BA797FB3" target="_blank"> video here,</a> developed the call he made at Web 2.0 Expo 2008, to <b>â€œwork on stuff that matters,â€</b> into a Four  Cylinder Engine for Innovation. &nbsp; The first of the four  cylinders in the firing order is, <b>â€œHaving Fun!â€</b> But,&nbsp; at Maker Faire, Web 2.0 Expo, and Hadoop World I  got an inside  look at the workings of all four cylinders, and there is more to come, I  am sure, as the other Oâ€™Reilly events unfold over the coming months  including,&nbsp; <a href="http://www.web2summit.com/web2010" mce_href="http://www.web2summit.com/web2010" target="_blank">Web 2.0 Summit</a>, <a href="http://strataconf.com/strata2011" mce_href="http://strataconf.com/strata2011" target="_blank">Strata </a>(a new Oâ€™Reilly conference on The Business of Data), and <a href="http://radar.oreilly.com/2010/10/where-20-2011-cfp-is-open.html" mce_href="http://radar.oreilly.com/2010/10/where-20-2011-cfp-is-open.html" target="_blank">Where 2.0,  2011</a>.</p>
<p>In a free webcast last week (<a href="http://www.youtube.com/oreillymedia#p/c/7/8CEyHSoWJcs" mce_href="http://www.youtube.com/oreillymedia#p/c/7/8CEyHSoWJcs" target="_blank">recording here</a>), previewing <a href="http://www.web2summit.com/web2010" mce_href="http://www.web2summit.com/web2010" target="_blank">Web 2.0 Summit</a>, John Battelle and Tim O&#8217;Reilly discussed the <a href="http://map.web2summit.com/" mce_href="http://map.web2summit.com/" target="_blank">Points of Control Map</a>, which is developing into a fun and useful tool to examine a very serious topic, &#8220;The Battle for the Internet Economy,&#8221; and how the &#8220;increasingly direct conflicts between its major players&#8221; could affect &#8220;people, government and the future of technology innovation.&#8221;&nbsp; In my previous post, <a title="Permanent Link to Platforms for Growth and Points of Control for Augmented Reality: Talking with Chris Arkenberg" rel="bookmark">Platforms for Growth and Points of Control for Augmented Reality</a>, I had a great conversation with <a href="http://www.urbeingrecorded.com/" mce_href="http://www.urbeingrecorded.com/" target="_blank">Chris Arkenberg</a> using this map as a springboard.&nbsp; More on Points of Control later in this post.</p>
<h3>The Four Cylinders of Innovation</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-23-at-7.45.36-PM.png" mce_href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-23-at-7.45.36-PM.png"><img class="alignnone size-medium wp-image-5814" title="Screen shot 2010-10-23 at 7.45.36 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-23-at-7.45.36-PM-300x193.png" mce_src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-23-at-7.45.36-PM-300x193.png" alt="Screen shot 2010-10-23 at 7.45.36 PM" height="193" width="300"></a><br mce_bogus="1"></p>
<p><i>click to enlarge</i></p>
<h3>From Jet Ponies to Jet Packs: The First Cylinder of Innovation &#8211; &#8220;Have Fun&#8221;</h3>
<p>The &#8220;maker&#8221; energy and its spirit of play, and the courage to create, hack, reinvent and re-purpose everything and anything, is a quintessential example of the first cylinder of innovation firing big.&nbsp; Many &#8220;maker&#8221; projects also go on to fire on all four cylinders.&nbsp; But the Maker forte definitely is in the first cylinder zone (and safety third, as some of the rides, including Jet Ponies, warned).&nbsp; The photo opening this post, by Marc de Vinck &#8211; for more pics <a href="http://www.flickr.com/photos/wurx/sets/72157624914508135/with/5027190140/" mce_href="http://www.flickr.com/photos/wurx/sets/72157624914508135/with/5027190140/">see here</a> &#8211; is of <a href="http://blog.makezine.com/archive/2010/09/tim_oreilly_rides_the_jet_ponies.html" mce_href="http://blog.makezine.com/archive/2010/09/tim_oreilly_rides_the_jet_ponies.html" target="_blank">Tim riding The Jet Ponies</a> at <a href="http://makerfaire.com/newyork/2010/" mce_href="http://makerfaire.com/newyork/2010/" target="_blank">Maker Faire</a>, which took the New York Hall of Science by storm in late September &#8211; see <a href="http://cityroom.blogs.nytimes.com/2010/09/24/where-engineering-prowess-meets-burning-man/" mce_href="http://cityroom.blogs.nytimes.com/2010/09/24/where-engineering-prowess-meets-burning-man/" target="_blank">The New York Times coverage here</a>.&nbsp; The ride was <b>&#8220;built by the dastardly danger-hackers at the <a href="http://madagascarinstitute.com/" mce_href="http://madagascarinstitute.com/" target="_blank">Madagascar Institute</a>.&#8221;</b> See this <a href="http://thefastertimes.com/jetpacks/2009/10/09/this-guy-might-build-a-jetpack-or-at-least-a-hovercraft/" mce_href="http://thefastertimes.com/jetpacks/2009/10/09/this-guy-might-build-a-jetpack-or-at-least-a-hovercraft/" target="_blank">wonderful interview</a> with Hackett on his work to design <b>&#8220;our specific jets from a patent that was filed in the 1960s by a Mr. Lockwood, for Valveless Pulse Jets.&#8221;</b> Hackett points out:</p>
<p><b>&#8220;Louder than god, glowing white-hot and looking like the trombone of the Apocalypse, pulse jets are also really shitty, inefficient engines,&#8221;</b></p>
<p>But, he adds:</p>
<p><b>&#8220;I have always wanted a jetpack, and one of the reasons I learned to build these things was to further that goal.&#8221;</b></p>
<p>This grand vision behind the Jet Ponies is a key to firing <b>The Second Cylinder of Innovation: &#8220;Hey, we can change the world!&#8221;</b></p>
<p>But Jet Ponies, as a stepping stone to jet packs, also really struck a  chord for me as I have been devoting a lot of time lately to the  emerging Augmented Reality industry, a technology which was lumped in  the same category of sci fi  chimera  as jet packs until very recently.</p>
<h3><b> Data is the Gasoline</b></h3>
<p><b><a href="../wp-content/uploads/2010/10/data.jpg" mce_href="../wp-content/uploads/2010/10/data.jpg"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/data.jpg" mce_href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/data.jpg"><img class="alignnone size-full wp-image-5862" title="data" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/data.jpg" mce_src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/data.jpg" alt="data" height="212" width="300"></a><br />
</b></p>
<p><b>&#8220;The faces are coming from the sky.&nbsp; The locations are coming from the sky.&nbsp; All these apps depend on something, somewhere up.&nbsp; And that, to me, was always the heart of Web 2.0.&nbsp; And I am so delighted that people are finally getting it.&nbsp; Because for a long time, people thought, &#8216;Oh, Web 2.0, it&#8217;s about lightweight advertising supported in a web start up.&#8217;&nbsp; So I went, &#8216;No, no, no.  It&#8217;s about the fact that we&#8217;re building these giant database subsystems in the sky that are going to drive applications.&#8217;&nbsp; And now, of course, the same application is on your PC, it&#8217;s on your phone, it&#8217;s on your iPad.&nbsp; And clearly, the applications are just sort of an interface to something that is being driven from the cloud, and that is fabulous.&nbsp; That&#8217;s the difference.&nbsp; People get it now.&#8221; </b>(Tim O&#8217;Reilly said this as part of a response to the first questioner at the Media Round Table, Web 2.0 Expo)</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/5036745797_cf544d22cd_z.jpg" mce_href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/5036745797_cf544d22cd_z.jpg"><img class="alignnone size-medium wp-image-5802" title="Media Roundtable" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/5036745797_cf544d22cd_z-300x199.jpg" mce_src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/5036745797_cf544d22cd_z-300x199.jpg" alt="Media Roundtable" height="199" width="300"></a><br mce_bogus="1"></p>
<p><i>Answering questions about the importance of &#8220;Having Fun&#8221; to innovation doesn&#8217;t look quite as fun as riding Jet Ponies!</i> <i>Photo above from <a href="http://www.flickr.com/photos/lucasartoni/5036745797/in/photostream/" mce_href="http://www.flickr.com/photos/lucasartoni/5036745797/in/photostream/" target="_blank">luca.sartoni&#8217;s Flickr stream</a></i></p>
<p><i>&#8220;</i><b>the data that is generated by the sensors and the applications that use that data is going to be where people are going to be innovative.&#8221; (Tim O&#8217;Reilly)<br />
</b></p>
<p>During the media roundtable, I had a chance to ask Tim more about the role of bottom-up innovation in a world where big data is the gasoline for increasingly sophisticated engines – platforms integrating machine-to-machine intelligence and real-time analytics.</p>
<p><b>Tish Shute:</b> You brought up Maker Faire in your keynote, and again now. I was there, which not many people in the audience were [not too many hands went up when Tim asked during his keynote]. But I think one of the things that struck me was the jet ponies – they were just earthshaking to stand near. They made the ground tremble; they made the world shake. Yet most of your keynote, and most of what’s on our minds here at Web 2.0 Expo, is extracting intelligence from the big data [in the sky], and algorithmic intelligences are the jet engines of the internet. And of course, not to be forgotten, we are here in New York City, where the trading markets are creating the air we breathe [although we probably don't realize it until we lose our mortgage or something], and these algorithmic economies, or “robot casinos” as Kevin Slavin put it, are all about speed – it’s not just real-time; issues of latency are so critical that co-location is key to winning the game of the markets. [Kevin Slavin brilliantly unpacks this in his talk, "Loitering on the Motherboard." For more on this, see my conversation with Kevin Slavin below.]</p>
<p>So my question is, who’s making the jet ponies for the algorithmic economies in the sky that you just described? How can we make a play from the bottom up? I always feel <a href="http://www.ushahidi.com/" mce_href="http://www.ushahidi.com/">Ushahidi</a> is one of the jet ponies of the data-algorithmic space [because of their great work bringing human and machine intelligence together to solve problems in crisis situations]. But who do you think is doing exciting work, and how can we ensure that this powerful world of data and algorithmic intelligences does not become hidden in a closed black box [only really accessible to elite players like the NYC trading markets]?</p>
<p><b>Tim O’Reilly: “Well, I think there’s certainly a lot of interesting things happening in, say, the financial services that a lot of the Internet folks are kind of blind to. I think that there are companies like <a href="http://www.nextjump.com/" mce_href="http://www.nextjump.com/" target="_blank">Next Jump</a> which are really good with data and good with algorithms. But speaking specifically to the maker side of this, that whole sensor-enabled world which is going to produce data is in its infancy. What we have that I think is so powerful right now is the first portable sensor platform. I said in my talk the other day, you know, your phone has ears, it has eyes, it has a sense of where it is. And these are all available to application developers. You can compare, say, Dodgeball to Foursquare, and you can see how different… Dodgeball is Foursquare in the tele-type era. Foursquare is now possible because there are so many more capabilities on the phone.</b></p>
<p><b>And I think that we are going to see a lot of other areas that are revolutionized by the sensors in the device. It could well be that some of them will come explicitly out of the maker kind of projects, or it could just be that Make is sort of a proxy for them. So yeah, <a href="http://www.arduino.cc/" mce_href="http://www.arduino.cc/" target="_blank">Arduino</a> is this great maker sensor platform, but hey, here’s a consumer sensor platform [holding up phone]. Maybe we vaulted past the maker stage already and we just didn’t know it.</b></p>
<p><b>And that’s not entirely true, because Arduino is building a whole economy of special-purpose devices. But it feels a little bit like the days when people rolling their own PCs coexisted with the rise of Dell, who was a kid in his college dorm room who made his own PCs and sold them on the net, but figured out how to scale it pretty quickly and get good at it. But there were still a lot of garage shops, you know, ‘I’ll make a PC and sell it to you’ people, for probably a decade before there was really a clue that that was a commodity industry. In fact, I do think the sensor platforms are going to become a commodity industry. And the data that is generated by the sensors and the applications that use that data is going to be where people are going to be innovative.”</b></p>
<h3><b>The internet operating system is a data operating system and it is happening in real time (Tim O’Reilly)<br />
</b></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Hadooppost.jpg" mce_href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Hadooppost.jpg"><img class="alignnone size-medium wp-image-5839" title="Hadooppost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Hadooppost-300x202.jpg" mce_src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Hadooppost-300x202.jpg" alt="Hadooppost" height="202" width="300"></a><br mce_bogus="1"></p>
<p><i>Click to enlarge the image above – a slide from Mike Olson’s (CEO of Cloudera) keynote at <a href="http://www.cloudera.com/company/press-center/hadoop-world-nyc/" mce_href="http://www.cloudera.com/company/press-center/hadoop-world-nyc/" target="_blank">Hadoop World</a></i></p>
<p>Not only do we have a portable sensor platform in our pockets, but developers also have powerful platforms and tools to make sense of the data that fuels our apps. Open-source <a href="http://hadoop.apache.org/" mce_href="http://hadoop.apache.org/" target="_blank">Hadoop</a> makes available, to anyone with some data-munching chops, the power to work with giant unstructured databases and do <a target="_blank" mce_href="http://gigaom.com/2009/09/20/getting-closer-to-real-time-with-hadoop/" href="http://gigaom.com/2009/09/20/getting-closer-to-real-time-with-hadoop/">the kind of real-time analytics</a> previously only available to giants like Google. Big players like Yahoo, Facebook, and Twitter use Hadoop (Jonathan Gray from Facebook noted they add 10TB <i>a day</i>). But, as <a href="http://www.cscyphers.com/blog/2010/10/12/hadoop-world-2010/" mce_href="http://www.cscyphers.com/blog/2010/10/12/hadoop-world-2010/" target="_blank">this great roundup of Hadoop World</a> points out, while Hadoop gets the press for handling petabytes of data, Mike Olson (CEO of Cloudera) noted that the fastest-growing group of users are working with clusters smaller than 10TB, and over half of all Hadoop clusters were under 10TB in size.</p>
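<p>To make the “data munching” concrete: the canonical first Hadoop job is a word count, where a mapper emits (word, 1) pairs and a reducer sums them after Hadoop’s sort phase. Below is a minimal sketch of that pattern in Python, written in the spirit of Hadoop Streaming; note it is an illustration of the MapReduce idea, not production Hadoop code, and here the two phases are simply chained in one process rather than distributed across a cluster.</p>

```python
import sys
from itertools import groupby

def mapper(lines):
    """Map step: emit a (word, 1) pair for every word seen."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce step: sum counts per word. Assumes pairs arrive
    sorted by key, as Hadoop guarantees between map and reduce."""
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Hadoop Streaming would run the phases as separate processes
    # joined by a distributed sort; here we chain them in-process.
    for word, total in reducer(sorted(mapper(sys.stdin))):
        print("%s\t%d" % (word, total))
```

<p>On a real cluster the same two functions would be shipped as the <code>-mapper</code> and <code>-reducer</code> of a streaming job; the point is how little code the programming model itself demands.</p>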
<h3>Foursquare: A Platform for Growth with an ecosystem built on top of data that exists in the real world</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-26-at-2.27.19-AM.png" mce_href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-26-at-2.27.19-AM.png"><img class="alignnone size-medium wp-image-5888" title="Screen shot 2010-10-26 at 2.27.19 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-26-at-2.27.19-AM-300x256.png" mce_src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-26-at-2.27.19-AM-300x256.png" alt="Screen shot 2010-10-26 at 2.27.19 AM" height="256" width="300"></a><br mce_bogus="1"></p>
<p>As an augmented reality enthusiast, it is not hard to guess that one of my favorite platforms for growth is <a href="http://foursquare.com/apps/" mce_href="http://foursquare.com/apps/" target="_blank">Foursquare</a>. See <a href="http://www.web2expo.com/webexny2010/public/schedule/detail/15652" mce_href="http://www.web2expo.com/webexny2010/public/schedule/detail/15652" target="_blank">Dennis Crowley’s keynote at Web 2.0 Expo</a> here. The Foursquare API has been available to developers since November 2009, and there are already a number of interesting applications, with many more to come. The screen shot above is of <a href="http://geopollster.com/" mce_href="http://geopollster.com/" target="_blank">GeoPollster</a> – <a href="http://foursquare.com/apps/" mce_href="http://foursquare.com/apps/" target="_blank">see the gallery of Foursquare apps here</a>.</p>
<p><i><b><b><b>@dens tweeted recently: “Politics + @Foursquare = @GeoPollster” http://geopollster.com &lt;- I love love love that people are using 4SQ to think about election tools</b></b></b></i></p>
<p>As Kati London pointed out in her keynote, Foursquare is the <b>“kind of augmented reality that is aimed at shifting or changing a person’s social reality, e.g. the mayor badges in Foursquare that change my relationship to the people and the place I am in, and augment engagement and reputation through socially driven consumer tie-ins.”</b> We are already seeing augmented reality developers beginning to work with the Foursquare API – see here: <a href="http://recombu.com/apps/iphone/arstreets-app-review_M12590.html" mce_href="http://recombu.com/apps/iphone/arstreets-app-review_M12590.html" target="_blank">Foursquare + Augmented Reality + Virtual Graffiti = ARstreets</a>.</p>
<p>As augmented reality development tools mature, Foursquare will increasingly become an important platform for creative AR developers interested in integrating the power of this platform for augmented engagement and reputation with <b>“device-aided augmented reality that can shift visual experiences of situated geolocal experiences.”</b> With the <a href="http://developer.qualcomm.com/dev/augmented-reality" mce_href="http://developer.qualcomm.com/dev/augmented-reality" target="_blank">Qualcomm vision-based augmented reality SDK</a> now available for download, <a href="http://googlewavedev.blogspot.com/2010/09/wave-open-source-next-steps-wave-in-box.html" mce_href="http://googlewavedev.blogspot.com/2010/09/wave-open-source-next-steps-wave-in-box.html" target="_blank">Wave in a Box</a> soon (?) to be released, and an <a href="http://arwave.org/" mce_href="http://arwave.org/" target="_blank">ARWave</a> client working on Android (almost!), I have been exploring the Foursquare API in my nonexistent spare time!</p>
<p>The Foursquare API also offers some interesting possibilities for exploring games that take the complex economy of Foursquare – not personal data, but aggregates of behavior – as their subject matter (for more on this, see my conversation with Kevin Slavin later in this post and in an upcoming post).</p>
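<p>As a sketch of what working with those “aggregates of behavior” might look like, the function below ranks venues by their aggregate check-in counts from a venues-search response. The payload shape assumed here (<code>response.venues</code>, each venue carrying <code>stats.checkinsCount</code>) is my recollection of the Foursquare v2 JSON, so treat the field names as assumptions and verify them against the API documentation.</p>

```python
def top_venues(api_response, n=3):
    """Rank venues by aggregate check-in count.

    ASSUMPTION: the response dict follows the shape
    {"response": {"venues": [{"name": ..., "stats": {"checkinsCount": ...}}]}}
    as recalled from the Foursquare v2 venues/search payload.
    """
    venues = api_response.get("response", {}).get("venues", [])
    ranked = sorted(
        venues,
        key=lambda v: v.get("stats", {}).get("checkinsCount", 0),
        reverse=True,
    )
    return [(v["name"], v.get("stats", {}).get("checkinsCount", 0))
            for v in ranked[:n]]

# A hand-made response in the assumed shape, for illustration:
sample = {"response": {"venues": [
    {"name": "Cafe A", "stats": {"checkinsCount": 120}},
    {"name": "Park B", "stats": {"checkinsCount": 450}},
]}}
```

<p>Nothing here touches a person’s individual history; it is exactly the kind of aggregate signal a game economy could be built on.</p>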
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/DennisatWhere2009post.jpg" mce_href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/DennisatWhere2009post.jpg"><img class="alignnone size-medium wp-image-5886" title="DennisatWhere2009post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/DennisatWhere2009post-199x300.jpg" mce_src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/DennisatWhere2009post-199x300.jpg" alt="DennisatWhere2009post" height="300" width="199"></a><br mce_bogus="1"></p>
<p><i>I took this picture of Dennis at <a href="http://where2conf.com/where2009/" mce_href="http://where2conf.com/where2009/" target="_blank">Where 2.0, 2009</a>, at the beginning of Foursquare’s phenomenal growth (they are at four million plus users now).</i></p>
<h3><b><b><b>Pachube (Patch-Bay): </b></b></b>a web service for storing and sharing sensor, energy and environmental data</h3>
<p><b><b><b><a href="../wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-7.58.17-PM1.png" mce_href="../wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-7.58.17-PM1.png"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-7.58.17-PM1.png" mce_href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-7.58.17-PM1.png"><img class="alignnone size-medium wp-image-5838" title="Screen shot 2010-10-24 at 7.58.17 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-7.58.17-PM1-300x198.png" mce_src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-7.58.17-PM1-300x198.png" alt="Screen shot 2010-10-24 at 7.58.17 PM" height="198" width="300"></a><br />
</b></b></b></p>
<p>Eighteen months ago, I interviewed Usman Haque, architect, director of <a id="o.td" title="Haque Design + Research" href="http://www.haque.co.uk/" mce_href="http://www.haque.co.uk/" target="_blank">Haque Design + Research</a>, and founder of <a id="cpbp" title="Pachube" href="http://www.pachube.com/" mce_href="http://www.pachube.com/">Pachube</a> – see <a target="_blank">Pachube, Patching the Planet</a>. Usman pointed me to this wonderful, evocative image from <a href="http://www.geog.ubc.ca/%7Etoke/Profile.htm" mce_href="http://www.geog.ubc.ca/%7Etoke/Profile.htm" target="_blank">T.R. Oke’s</a> book, <a href="http://www.amazon.com/Boundary-Layer-Climates-T-Oke/dp/0415043190" mce_href="http://www.amazon.com/Boundary-Layer-Climates-T-Oke/dp/0415043190" target="_blank">“Boundary Layer Climates”</a> (original photo source: Prof. L. E. Mount’s <a href="http://www.alibris.com/booksearch?qwork=1137594&amp;matches=1&amp;author=Mount%2C+Laurence+Edward&amp;browse=1&amp;cm_sp=works*listing*title" mce_href="http://www.alibris.com/booksearch?qwork=1137594&amp;matches=1&amp;author=Mount%2C+Laurence+Edward&amp;browse=1&amp;cm_sp=works*listing*title" target="_blank">The Climatic Physiology of the Pig</a>). “<i>It’s the same piglets, in the same box, but on the right-hand side the temperature has been increased. This small change in how the space is ‘programmed’ has dramatically changed the way the ‘inhabitants’ relate to each other and how they relate to their space.</i>”</p>
<h3><b><b><b><b><b><b>The Challenge of Connecting People and Environments</b></b></b></b></b></b></h3>
<p>At Web 2.0 Expo, I got the opportunity to talk with Usman Haque again. <a href="http://www.pachube.com/" mce_href="http://www.pachube.com/" target="_blank">Pachube</a> is becoming an established platform now, Usman explained. They have a development team of eleven and a robust back end. And they will now be spending some more time on the front end, including a redesign of the website, making <b>“it a lot easier to widgetize the entire website so that you will be able to take almost any element and embed that into your own website.”</b> And, as <a href="http://www.web2expo.com/webexny2010/public/schedule/speaker/43845" mce_href="http://www.web2expo.com/webexny2010/public/schedule/speaker/43845" target="_blank">Usman mentioned in his presentation</a>, they are working on an augmented reality interface, Porthole, for facilities management and, “as a consumer-oriented application that extends the universe of Pachube data into the context of AR – a ‘porthole’ into Pachube’s data environments.” Usman is also contributing to the AR standards discussion and is now on the program committee <a href="http://www.w3.org/2010/06/16-w3car-minutes.html#item02" mce_href="http://www.w3.org/2010/06/16-w3car-minutes.html#item02" target="_blank">for the W3C group on augmented reality</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-26-at-10.22.24-PM.png" mce_href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-26-at-10.22.24-PM.png"><img class="alignnone size-medium wp-image-5912" title="Screen shot 2010-10-26 at 10.22.24 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-26-at-10.22.24-PM-300x134.png" mce_src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-26-at-10.22.24-PM-300x134.png" alt="Screen shot 2010-10-26 at 10.22.24 PM" height="134" width="300"></a><br mce_bogus="1"></p>
<p>Click to enlarge the image above from Chris Burman’s paper for the W3C, <a href="http://www.w3.org/2010/06/w3car/portholes_and_plumbing.pdf" mce_href="http://www.w3.org/2010/06/w3car/portholes_and_plumbing.pdf" target="_blank">Portholes and Plumbing: how AR erases boundaries between “physical” and “virtual”</a><br mce_bogus="1"></p>
<p>Pachube is sometimes described as the Facebook for Data or, an analogy Usman prefers, a Twitter for Sensors. At Web 2.0 Expo, I had an amazing opportunity to hear from Twitter and Facebook about their strategies as platforms for growth. This gave me lots of fuel for questions about Pachube’s approach to developing their platform. Simplicity was a theme that Facebook and Twitter both affirmed as key. One of Pachube’s challenges will be to deliver ease of use, and the equivalent of Facebook’s “like” and Twitter’s “follow,” to gain mass appeal.</p>
<p>Here is a brief excerpt from my upcoming conversation with Usman:</p>
<p><b>Tish Shute</b>: So as a platform you see Pachube as having more in common with Twitter – a Twitter for Sensors. In what ways is Pachube similar to Twitter?</p>
<p><b>Usman Haque: Well, we are the Twitter of sensors, devices &amp; machines in the sense that, really, the API that enables all this communication is important, much more so than the website itself. That is where most of the millions of our hits actually go: to the backend. And we’ve now got dozens of applications built on top of the system, a little bit like Twitter’s applications; you know, all the apps are the important part.</b></p>
<p><b>But we are actually going to be doing some quite exciting things with API keys that we haven’t really spoken that much about in public. We have come up with a pretty innovative solution to make almost every resource have granular privacy options on it, <a href="http://community.pachube.com/node/526" mce_href="http://community.pachube.com/node/526">now discussed here</a>.</b></p>
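<p>For a sense of how a key-scoped Pachube request looks in practice, here is a minimal sketch that builds (but does not send) an authenticated GET for one feed. The <code>/v2/feeds</code> endpoint and the <code>X-PachubeApiKey</code> header name are assumptions from memory of the Pachube docs of the era; the key is the thing that would carry the per-resource permissions Usman describes, so check the current API reference before relying on either.</p>

```python
import urllib.request

# ASSUMPTION: endpoint pattern recalled from the Pachube v2 API docs.
PACHUBE_FEED_URL = "http://api.pachube.com/v2/feeds/%s.json"

def build_feed_request(feed_id, api_key):
    """Construct (but do not send) a GET request for one Pachube feed.

    ASSUMPTION: the X-PachubeApiKey header name is recalled from the
    docs of the time; the API key is what scoped which resources a
    caller could see under the granular privacy scheme.
    """
    req = urllib.request.Request(PACHUBE_FEED_URL % feed_id)
    req.add_header("X-PachubeApiKey", api_key)
    return req

req = build_feed_request("504", "YOUR_KEY_HERE")
# urllib.request.urlopen(req) would then fetch the feed's datapoints.
```

<p>Separating “build the request” from “send it” also makes the authentication logic trivially testable without touching the network.</p>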
<p>At Hadoop World, Tim O’Reilly also raised some interesting broader questions that are very relevant to Pachube’s vision to “patch the planet,” e.g., the problem of digital identity in the age of sensors (smart phones already know their users by the way they walk!) and <b>“How should we think about privacy in a world where data can be triangulated?”</b></p>
<p>Usman talked about Pachube’s approach to both the technical aspects of how to build a massively scalable system and the conceptual aspects of how people connect to each other, and what they might do with these new opportunities to connect environments and sensor data (see my earlier talk with Usman, <a target="_blank">Pachube, Patching the Planet</a>, for a detailed explanation of some of the concepts behind Pachube).</p>
<p>I look forward to posting this conversation.  Pachube is growing, and  Usman always goes beyond the familiar tropes of connecting human and  machine intelligence.</p>
<h3><b>2nd Cylinder of Innovation: “Hey, Can We Change the World!”</b></h3>
<p><b><a href="../wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-5.26.55-PM.png" mce_href="../wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-5.26.55-PM.png"></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-5.26.55-PM.png" mce_href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-5.26.55-PM.png"><img class="alignnone size-medium wp-image-5826" title="Screen shot 2010-10-24 at 5.26.55 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-5.26.55-PM-300x217.png" mce_src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-5.26.55-PM-300x217.png" alt="Screen shot 2010-10-24 at 5.26.55 PM" height="217" width="300"></a><br />
</b></p>
<p>The possibilities for reimagining the role of data in healthcare produced some of the most powerful “Hey, Can We Change the World” moments for me at both Web 2.0 Expo and Hadoop World. The slide above is from Esther Dyson’s brilliant Ignite presentation, <a href="http://www.slideshare.net/ignitenyc/esther-dyson-what-you-can-and-cant-learn-from-your-genes" mce_href="http://www.slideshare.net/ignitenyc/esther-dyson-what-you-can-and-cant-learn-from-your-genes" target="_blank">“What you can and can’t learn from your genes?”</a>. Tim O’Reilly also brought up the powerful role real-time data analytics can play in improving healthcare in his Hadoop World keynote. Also see Alex Howard’s post, <a href="http://radar.oreilly.com/2010/10/top-10-lessons-for-gov-20-from.html" mce_href="http://radar.oreilly.com/2010/10/top-10-lessons-for-gov-20-from.html" target="_self">10 Lessons for Gov 2.0 from Web 2.0</a>, for some more great “hey, we can change the world” moments at Web 2.0 Expo. The keynote from <a href="http://www.web2expo.com/webexny2010/public/schedule/detail/15726" mce_href="http://www.web2expo.com/webexny2010/public/schedule/detail/15726" target="_blank">Lukas Biewald of CrowdFlower and Leila Chirayath Janah of Samasource</a> (screen shot below), in particular, is a provocative exploration of the future of work in the new ecologies of human and machine intelligence.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-25-at-8.21.43-PM.png" mce_href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-25-at-8.21.43-PM.png"><img class="alignnone size-medium wp-image-5870" title="Screen shot 2010-10-25 at 8.21.43 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-25-at-8.21.43-PM-300x184.png" mce_src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-25-at-8.21.43-PM-300x184.png" alt="Screen shot 2010-10-25 at 8.21.43 PM" height="184" width="300"></a><br mce_bogus="1"></p>
<h3><b>Changing the World When Our Lives Are Increasingly Shaped by Forces Invisible To Us?</b></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-11.49.32-PM.png" mce_href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-11.49.32-PM.png"><img class="alignnone size-medium wp-image-5840" title="Screen shot 2010-10-24 at 11.49.32 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-11.49.32-PM-300x152.png" mce_src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-24-at-11.49.32-PM-300x152.png" alt="Screen shot 2010-10-24 at 11.49.32 PM" height="152" width="300"></a><br mce_bogus="1"></p>
<p><i>Click to enlarge</i></p>
<p>Mike Olson, CEO of Cloudera, noted that <b>“the largest area of data growth does not come from humans interacting with machines; rather, it’s from machines interacting with each other”</b> (see here in <a href="http://www.cscyphers.com/blog/2010/10/12/hadoop-world-2010/" mce_href="http://www.cscyphers.com/blog/2010/10/12/hadoop-world-2010/" target="_blank">Minor Technical Difficulties</a>). One of the most interesting presentations at Web 2.0 Expo was <a href="http://www.web2expo.com/webexny2010/public/schedule/speaker/86516" mce_href="http://www.web2expo.com/webexny2010/public/schedule/speaker/86516" target="_blank">Kevin Slavin’s “Loitering on the Motherboard,”</a> which, as Tim O’Reilly pointed out in his keynote at Hadoop World, is a talk that raises all kinds of questions about a system where big players are gaming the data for their own ends.</p>
<p>Kevin Slavin, a founder of <a href="http://areacodeinc.com/" mce_href="http://areacodeinc.com/">Area/Code</a>, notes that the operating system of our mortgages, our life insurance, our currencies, and gold is now governed by machine-to-machine intelligence and algorithmic economies outside of human cognitive processes. The markets are now legible only to bots, in an algorithmic arms race with bots surveilling bots and throwing off false information in a bid for counter-surveillance. He showed some slides of the eerie but beautiful visualizations of traces of the trading bots created from the Nanex API.</p>
<p>The screenshot above is from <a href="http://www.nanex.net/FlashCrash/CCircleDay.html" mce_href="http://www.nanex.net/FlashCrash/CCircleDay.html" target="_blank">Nanex: Crop Circle of the Day – Quote Stuffing and Strange Sequences</a>. <b>“The common theme with the charts shown on this page is they are all generated in code and are algorithmic. Some demonstrate bizarre price or size cycling, some demonstrate large bursts of quotes in extremely short time frames and some will demonstrate both…”</b> This one is a zoom of the NSDQ “Wild Thing,” a wild price/size repeater from NSDQ running at 1,000 quotes per second, affecting the BBO along the way (I love the great names Nanex gives the different patterns and traces produced by the trading bots).</p>
<p>Nanex supplies a <a href="http://www.nanex.net/" mce_href="http://www.nanex.net/">real-time data feed</a> comprising trade and quote data for all US equity, option, and futures exchanges. They have <a href="http://www.nanex.net/historical.html" mce_href="http://www.nanex.net/historical.html">archived this data</a> since 2004 and have created and used numerous tools to “sift through the enormous dataset: approximately 2.5 trillion quotes and trades as of June 2010.” May 6th, 2010 (the day of the flash crash) had approximately 7.6 billion trade, quote, level 2, and depth records.</p>
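<p>The bursts Nanex charts, thousands of quotes packed into milliseconds, can be flagged with nothing fancier than a sliding-window rate check over quote timestamps. The sketch below is illustrative only: it is not Nanex’s actual method, and the window and threshold values are arbitrary placeholders, not their criteria.</p>

```python
from collections import deque

def burst_windows(timestamps, window=0.1, threshold=50):
    """Flag timestamps at which more than `threshold` quotes have
    arrived within the trailing `window` seconds -- a crude
    quote-stuffing detector. Input must be sorted, in seconds.
    Window and threshold are illustrative, not Nanex's criteria."""
    recent = deque()   # timestamps still inside the trailing window
    flagged = []
    for t in timestamps:
        recent.append(t)
        while recent[0] < t - window:   # evict quotes older than window
            recent.popleft()
        if len(recent) > threshold:
            flagged.append(t)
    return flagged
```

<p>Because the deque only ever holds one window’s worth of quotes, the check runs in a single pass, which is the property you need if, like Nanex, you are scanning billions of records per day.</p>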
<p>Kevin points out that our lives are being shaped by criteria invisible to us, and that the old hackneyed tropes of machine-to-machine intelligence, such as robots reading HUDs in English, are long worn out. The latter point is, perhaps, something for us augmented reality geeks absorbed in ideas of “making the invisible visible” to chew on.</p>
<p>Changing a world shaped by forces that are, increasingly, invisible to us presents a huge challenge.</p>
<p>But I had the glimmer of a “Hey, Can We Change the World” moment when I attended Kevin Slavin’s presentation and had a conversation with him after his talk. Could games take these complex economies as their subject matter? The economies of Farmville and games like WoW are not opaque at all, and these are environments with complex economic behavior, <b>“where you can actually have enough data to understand what it is”</b> – <b>“it’s not so much about personal data. It’s more about, like, aggregate behaviors.”</b> <b>“Games that can really model those, and play with those, and take those as the subject the way that Monopoly takes Monopoly as a subject could be really interesting.”</b> Kevin made many fascinating points – more to come on this topic.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/KevinSlavin.jpg" mce_href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/KevinSlavin.jpg"><img class="alignnone size-medium wp-image-5980" title="Kevin Slavin" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/KevinSlavin-300x199.jpg" mce_src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/KevinSlavin-300x199.jpg" alt="Kevin Slavin" height="199" width="300"></a><br mce_bogus="1"></p>
<p>Photo by <a rel="nofollow" href="http://duncandavidson.com/" mce_href="http://duncandavidson.com/">James Duncan Davidson</a> of Kevin Slavin speaking at Web 2.0 Expo NY, 2010, from the <a href="http://www.flickr.com/photos/oreillyconf/5035426532/" mce_href="http://www.flickr.com/photos/oreillyconf/5035426532/" target="_blank">O’Reilly Conferences Flickr stream</a><br mce_bogus="1"></p>
<p>Here is the beginning of our conversation:</p>
<h3>Talking With Kevin Slavin</h3>
<p><b><b>Tish Shute: </b></b>You began your talk today by discussing visibility and where some of the algorithmic masters of disguise went to work after they had solved the math behind stealth bombers. I thought perhaps you were leading into ideas about a reverse surveillance society.</p>
<p>But you surprised me, as I felt you made visibility itself kind of a non-issue by the end of your presentation, and that counter-surveillance became basically a time and speed issue. Now I am not sure quite how to imagine a counter-surveillance society, something I try to think about…</p>
<p><b><b>Kevin Slavin: Well, let’s see. There’s a couple of ways to think about it. I think one point is just that when we talk about counter-surveillance, we usually locate that as something that comes from the bottom up, something that comes from the population. Think about the way the plane spotters discovered the CIA black rendition flights.</b></b></p>
<p><b><b>I  think in general, when people talk about counter  surveillance, or  sousveillance, they imagine it as an inversion of the  traditional  relationship between the people and the state.</b></b></p>
<p><b><b>But that’s what’s interesting. What’s happening now is that there are forms of surveillance and counter-surveillance that are in play beyond any human perceptual horizons. These forms are at their most sophisticated in financial services, in the markets.</b></b></p>
<p><b><b>If you were a bot, and could read the market legibly (which humans cannot), what you would see, effectively, are bots that are surveilling bots. Then you have bots that are throwing off false information in a bid for counter-surveillance. Many of the bots are, themselves, surveilling other bots; each one of them is trying to figure out what all the other ones are going to do. In essence, it’s an algorithmic arms race, and game theory has become concrete, since the theories are code, the code is action, and the action affects, let’s say: your mortgage.</b></b></p>
<p><b><b>And so, basically, what you have is this series of algorithms that are all looking to discern each other, while also trying to prevent themselves from being discerned. I think of the tunnels under the trenches in WWI, tunnels to surveil the trenches, and then, later, tunnels to surveil the tunnels. There’s a few examples of this kind of thing. But it’s especially strange when it’s computer code, and at the magnitude we’re seeing today.</b></b></p>
<p><b><b>All of it, as noted in the talk, accounting for 70% of all the trades in the market. 70% of the market trades are never touched by human hands or even seen by human eyes; they don’t move through a conventional cognitive process. And that’s why you get things like the Credit Suisse algorithm that was buying and selling 200,000 shares of stock to itself over and over and over again. It was a bug, and it slowed the market to a crawl.</b></b></p>
<p><b><b>Credit Suisse was fined, in essence, for failing to control an algorithm. Maybe that’s the first time an algorithm was treated like a human, in a way. As if the algorithm broke the law, and Credit Suisse was responsible for letting it do so. For me, that feels like a threshold event.</b></b></p>
<p><b>It’s not that humans never made mistakes when trading on the market. But when algorithms err, they err with magnitude.</b></p>
<p><b>The idea that we now have bugs in the United States market economy is really worth looking at. If Apple can’t keep code bugs out of the simplest iPhone apps in a closed and regulated ecosystem, I’m pretty certain we’ll see a lot more Credit Suisse-type bugs in the future.</b></p>
<p><b>And that will be pretty interesting. There will be viruses, and the operating system they will operate on will be the operating system of the United States. The operating system of your pension, your house, your life insurance. The operating system of currencies and gold.</b></p>
<p><b>Tish Shute:</b> By the end of your talk I was hard-pressed to think: “Well, what would be the equivalent of a people’s uprising to create a better, fairer society in this kind of world, where the things that most affect the key aspects of our lives are going on beyond human perception, at an algorithmic level?” But you made a pretty radical suggestion at the end…</p>
<p><b>Kevin Slavin: Well, I think increasingly the markets have become delaminated from anything meaningful. First from goods, then from fundamentals, and now finally from homo sapiens. So that’s hard to fight.</b></p>
<p><b>It’s the race towards abstraction that makes it impossible to simply “resist.” The latest version in the long series of fiscal catastrophes was based on Wall Street finding goods that could be rolled up and sold with false valuations, but goods that would take a long time to fail. Mortgages are handy like that. It’s the tradition of extending the abstraction as long as possible, until finally the bill arrives and the banks fail. I don’t know if that’s something to rise up against or not. It’s like a rally against evil.</b></p>
<p><b>But really, I think the point is that it won’t be the people that rise up. It will be the financial services themselves that rise up. They’ll just detach completely.</b></p>
<p><b>That was harder to do with cotton or with wheat, with simple futures; they keep financial services tied to the ground. So what we’re doing is creating increasingly complex financial instruments that are further and further removed from anything you can touch. Like the way a mortgage is abstract. But, of course, the bottom line is that at the end of that mortgage lies someone’s home.</b></p>
<p><b>It’s said that Wall Street is now moving on to life insurance, because that’s going to take even longer to fail. They’re doing the exact same thing. The word is that they are rolling up CDOs made out of crap life insurance policies, the same way they rolled them up with crap mortgages a few years ago.</b></p>
<p><b>And those will probably take, I don’t know, 15 or 20 years to unwrap and unravel. But what you see in the meantime is that they are looking for things that are increasingly abstract, intangible, removed as far as possible from the experience of everyday life.</b></p>
<p><b>So maybe this is good. Maybe that’s financial services rising up. Lifting off. I think the best-case scenario now is that they actually leave humans alone altogether. That, someday, they are just trading completely arbitrary goods (the stocks could be anything at all, maybe crops that no longer exist), and then these bots would no longer affect what we do and what we are. It would just be a robot casino, an invisible paradise in the air.</b></p>
<h3><b>People are the platform: How Games Can Be Engines of Innovation in Our Lives</b></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-25-at-11.34.58-PM.png"><img class="alignnone size-medium wp-image-5872" title="Screen shot 2010-10-25 at 11.34.58 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-25-at-11.34.58-PM-300x204.png" alt="Screen shot 2010-10-25 at 11.34.58 PM" height="204" width="300"></a></p>
<p><i><b>See the video of <a href="http://www.web2expo.com/webexny2010/public/schedule/detail/15446" target="_blank">Games that Know Where We Live</a> here (screenshot above)</b></i></p>
<p>Kati London, Senior Producer, <a href="http://areacodeinc.com/">Area/Code</a>, showed in her keynote how <a href="http://www.web2expo.com/webexny2010/public/schedule/detail/15446" target="_blank">games that know where we live</a> can shift players’ perspectives – from device-aided augmented reality that can shift the visual experience of situated, geolocal encounters, to a kind of augmented reality aimed at shifting or changing a person’s social reality, e.g. the mayor badges in Foursquare that change my relationship to the people and the place I am in, and augment engagement and reputation through socially driven consumer tie-ins.</p>
<p>Area/Code has recently developed <a id="internal-source-marker_0.7281649763651145" href="http://www.knightfoundation.org/news/press_room/knight_press_releases/detail.dot?id=370129">two games for the Knight Foundation</a> that take people as the platform. Macon Money uses very simple game dynamics (for more, <a href="http://www.web2expo.com/webexny2010/public/schedule/detail/15446" target="_blank">see the video</a> of Kati’s keynote) in a game designed to help “Knight’s continuing efforts to support revitalizing Macon and creating a vibrant college town.”</p>
<p>The other game that Area/Code has designed with the support of the Knight Foundation is for the Biloxi and Gulf Coast community, a game called Battlestorm. <a href="http://www.knightfoundation.org/news/press_room/knight_press_releases/detail.dot?id=370129">“The game’s purpose is to increase awareness about natural disasters and change the way people prepare for them.”</a></p>
<h3><b>Third Cylinder of Innovation: Build Products, Business Models and Entire Industries</b></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-23-at-11.06.57-PM.png"><img class="alignnone size-medium wp-image-5822" title="Screen shot 2010-10-23 at 11.06.57 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-23-at-11.06.57-PM-300x151.png" alt="Screen shot 2010-10-23 at 11.06.57 PM" height="151" width="300"></a></p>
<p><a href="http://www.glympse.com/" target="_blank">Glympse</a> – real-time, private location tracking</p>
<p>Julianne Pepitone, Yahoo! Finance, nailed the essence of Web 2.0 Expo, NYC, this year in her post, <a href="http://finance.yahoo.com/news/Web-20-Expo-startups-are-big-cnnm-2700333063.html?x=0&amp;.v=2" target="_blank">Web 2.0 Expo startups are big on neighborhoods, storytelling</a>. She writes:</p>
<p><b>“At the Web 2.0 Expo in New York City this week, executives from big sites like Facebook, Twitter and Pandora all spoke about industry trends. But the showcase of 27 startup tech companies stole the show.”</b></p>
<p>Listen carefully to Tim O’Reilly and Fred Wilson, Union Square Ventures, as they question their picks from the <a href="http://www.web2expo.com/webexny2010/public/schedule/detail/15525" target="_blank">startup showcase</a> at Web 2.0 Expo. Also see <a href="http://www.youtube.com/watch?v=Xbui5_5_NCA&amp;p=6F97A6F4BA797FB3" target="_blank">this video of Fred and Tim discussing their conversations with all the startups</a>. This is one of the clearest public windows onto both how to present your company to VCs, and how to figure out the most important questions for you as an entrepreneur building a business in a world of data.</p>
<p><a href="http://www.glympse.com/">Glympse</a> <a href="http://www.youtube.com/watch?v=EuKScQbPvVc&amp;feature=channel" target="_blank">successfully pitches</a> their “jet pony” strategy for a location-based business, and is Fred’s pick. They hold up well under pressure and answer Tim and Fred’s hard questions about how their startup will not get overtaken by an incumbent player with resources and market share before they can gain traction. <a href="http://www.food52.com/">food52</a> <a href="http://www.youtube.com/watch?v=NZZ0apJTUQA&amp;feature=channel" target="_blank">responds to Tim’s probing about their strategy</a> for business data analytics, which he points out are vital if they want to survive on the small margins of ecommerce. There is a list of all the participants in the startup showcase in Brady’s <a href="http://radar.oreilly.com/2010/09/the-startups-at-the-expo-showc.html" target="_blank">post here.</a> <a href="http://hour.ly/" target="_blank">hour.ly</a> was the audience pick.</p>
<h3><a href="http://www.shazam.com/" target="_blank">Shazam</a> for Faces!</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-26-at-4.14.52-AM.png"><img class="alignnone size-medium wp-image-5897" title="Screen shot 2010-10-26 at 4.14.52 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-26-at-4.14.52-AM-300x134.png" alt="Screen shot 2010-10-26 at 4.14.52 AM" height="134" width="300"></a></p>
<p>My favorite startup was a biometric service doing face, iris, and fingerprint matching, <a href="http://www.tacticalinfosys.com/" target="_blank">Tactical Information Systems</a>.</p>
<p>Tim and Fred also liked them, and they have an interesting discussion about the merits, or not, of approaching your platform through a narrow first application, as Tactical Information Systems are with <a href="http://www.wanderid.org/" target="_blank">WanderID</a>, an application to help identify lost Alzheimer’s patients. As Fred pointed out, they are potentially the <a href="http://www.shazam.com/" target="_blank">Shazam</a> for faces, so why start so small?</p>
<p>I had asked TIS the same question when I met them in the “speed dating” session. This is just their first toe in the water, as they are a two-person company at the moment. Their vision for their platform is big. Mary Haskett and Dr. Alex Kilpatrick, the founders of this quintessential jet pony for the algorithmic economies in the sky, are not only a partnership with the credentials to do a <a href="http://www.shazam.com/" target="_blank">Shazam</a> for faces (<a href="http://www.tacticalinfosys.com/about.html" target="_blank">see their bios here</a>), they are the people I would want to be running a <a href="http://www.shazam.com/" target="_blank">Shazam</a> for faces! They really get the consequences of living in a world of data – check out Dr. Kilpatrick’s absolute killer Ignite talk, <a href="http://ignite.oreilly.com/2010/10/defeating-big-brother-by-dr-alex-kilpatrick-ep-75.html" target="_blank">“Defeating Big Brother”</a> (screenshot below).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-23-at-11.03.11-PM.png"><img class="alignnone size-medium wp-image-5819" title="Screen shot 2010-10-23 at 11.03.11 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-23-at-11.03.11-PM-300x229.png" alt="Screen shot 2010-10-23 at 11.03.11 PM" height="229" width="300"></a></p>
<h3>How Can Augmented Reality Add Value to the Real Time Internet/Data Operating System?</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-26-at-4.12.57-AM.png"><img class="alignnone size-medium wp-image-5896" title="Screen shot 2010-10-26 at 4.12.57 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-26-at-4.12.57-AM-300x199.png" alt="Screen shot 2010-10-26 at 4.12.57 AM" height="199" width="300"></a></p>
<p><i><a href="http://www.planefinder.net/" target="_blank">planefinder.net</a> – an augmented reality app that lets you find information about planes by pointing your phone at the sky, “including flight number, aircraft registration, speed, altitude and how far away it is” (via <a href="http://www.maclife.com/article/news/do_some_plane_scouting_augmented_reality_plane_finder_app">MacLife</a>).</i></p>
<p>The new opportunities in the algorithmic economies in the sky were center stage at Web 2.0 Expo, and some interesting AR apps for the real-time internet/data operating system are emerging, like <a href="http://www.planefinder.net/" target="_blank">planefinder.net</a>. But Augmented Reality was still pretty low profile at Web 2.0 Expo (except that the NVidia augmented reality demo attracted a lot of attention at the sponsors expo). However, everyone working in the emerging industry of AR should recognize that apps big on “neighborhoods and storytelling” are heading right up the AR street, and that platforms like Foursquare and Pachube present an enormous opportunity to explore the possibilities of AR. And if augmented reality enthusiasts are not already paying attention to real-time data analytics and <a href="http://hadoop.apache.org/" target="_blank">Hadoop</a>, they should be (see <a href="http://www.cscyphers.com/blog/2010/10/12/hadoop-world-2010/" target="_blank">this post for an excellent round-up</a> on Hadoop World).</p>
<p>At Hadoop World, Tim O’Reilly referenced the great tagline from the <a href="http://vimeo.com/11742135">IBM commercial</a>:</p>
<p><b>“Would you be willing to cross the street – blindfolded – on data that was five minutes old? Five hours? Five days?”</b></p>
<p>As I have noted in several earlier posts – <a href="../../2010/09/27/urban-games-storytelling-with-augmented-reality-the-big-arny-and-inside-ar-talking-with-thomas-alt-metaio/" target="_blank">see here</a> and <a href="../../2010/08/05/vision-based-augmented-reality-ar-in-smart-phones-qualcomms-ar-sdk-interview-with-jay-wright/" target="_blank">here</a> for starters – we are just seeing the tools for developing near-field, vision-based, mobile social AR become widely available to developers, so a new level of AR apps should emerge through 2011. There is a wonderful discussion in the comments of this post by Mac Slocum, <a href="http://radar.oreilly.com/2010/10/two-ways-augmented-reality-app.html" target="_blank">“How Augmented Reality Apps Can Catch On,”</a> between Mac, Raimo, one of the founders of <a href="http://www.layar.com/" target="_blank">Layar</a>, and <a href="http://www.urbeingrecorded.com/" target="_blank">Chris Arkenberg</a> on what constitutes a platform for growth for augmented reality.</p>
<p>Mac’s post, the comments, and <a href="http://www.urbeingrecorded.com/news/2010/10/13/is-ar-ready-for-the-trough-of-disillusionment/" target="_blank">Chris Arkenberg’s post</a> on the <a href="http://www.gartner.com/it/page.jsp?id=1447613" target="_blank">latest edition of the Gartner Hype Cycle</a>, which rather curiously placed Augmented Reality almost at the peak of inflated expectations, really got me excited about exploring an idea I have been thinking about for a while: getting the AR community to discuss the <a href="http://map.web2summit.com/">Points of Control map</a>. See my discussion with Chris Arkenberg here: <a rel="bookmark" href="http://www.ugotrade.com/2010/10/27/platforms-for-growth-and-points-of-control-for-augmented-reality-talking-with-chris-arkenberg/" target="_blank">Platforms for Growth and Points of Control for Augmented Reality</a>. The recording of John Battelle&#8217;s and Tim O&#8217;Reilly&#8217;s webcast on Points of Control <a href="http://www.youtube.com/oreillymedia#p/c/7/8CEyHSoWJcs" target="_blank">is posted here.</a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-2.01.38-AM.png"><img class="alignnone size-medium wp-image-5932" title="Screen shot 2010-10-27 at 2.01.38 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-2.01.38-AM-300x124.png" alt="Screen shot 2010-10-27 at 2.01.38 AM" height="124" width="300"></a></p>
<p><a href="http://map.web2summit.com/" target="_blank">The interactive Points of Control map</a> is an amazing tool to think with! Check it out in its movements, territory, and acquisition modes. There is a competition for the most interesting comment and most interesting acquisition suggestion. The prize is a ticket to Web 2.0 Summit!</p>
<h3>What is the Future of Social?</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/ARwave_logo_small.png"><img class="alignnone size-full wp-image-5987" title="ARwave_logo_small" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/ARwave_logo_small.png" alt="ARwave_logo_small" height="146" width="208"></a></p>
<p>The recent “defection” from Google to Facebook – see <a title="Lars Rasmussen, Father Of Google Maps And Google Wave, Heads To Facebook" rel="bookmark" href="http://techcrunch.com/2010/10/29/rasmussen-facebook-google/">Lars Rasmussen, Father Of Google Maps And Google Wave, Heads To Facebook</a> – is, as MG Siegler of TechCrunch points out, “the biggest one since Chrome OS lead <a href="http://www.crunchbase.com/person/matthew-papakipos">Matthew Papakipos</a> made <a href="http://techcrunch.com/2010/06/28/closing-in-on-chrome-os-launch-key-architect-matthew-papakipos-jumps-to-facebook/">the same jump in June</a>” (TechCrunch also notes “current Facebook CTO <a href="http://www.crunchbase.com/person/bret-taylor">Bret Taylor</a> was heavily involved in the launch of Google Maps”).</p>
<p>These moves have drawn my particular attention, as did <a href="http://www.youtube.com/watch?v=ZqDYjA5RGCU&amp;p=6F97A6F4BA797FB3" target="_blank">Bret Taylor’s response, in his conversation with Brady Forrest at Web 2.0 Expo</a>, to Brady’s question, <b>“How soon until we get the Facebook firehose?”</b></p>
<p>If you have been reading Ugotrade, you already know how important I think an open, distributed standard for real-time communications such as the very innovative Wave Federation Protocol could be for AR development – see <a href="http://www.arwave.org/" target="_blank">ARWave</a> and <a href="http://www.mobilemonday.nl/talks/tish-shute-the-next-wave-of-ar/" target="_blank">my presentation at MoMo13, Amsterdam</a> last year, <a rel="bookmark" href="http://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/" target="_blank">The Next Wave of AR: Mobile Social Interaction Right Here, Right Now!</a></p>
<p>The anticipated release of <a href="http://googlewavedev.blogspot.com/2010/09/wave-open-source-next-steps-wave-in-box.html" target="_blank">Wave in a Box</a> has raised hopes in the developer community that WFP will soon become easier to work with, and hopefully more widely adopted. Like many others, I wonder what will happen to <a href="http://googlewavedev.blogspot.com/2010/09/wave-open-source-next-steps-wave-in-box.html" target="_blank">Wave in a Box</a> now.</p>
<p>But the innovation of Wave is deep and broad (and, as many have pointed out, hugely ambitious). It is perhaps the boldest attempt yet to innovate both at the low level of architecture (where Google is so powerful) and at the high level of <b>the Mark Zuckerberg “big idea,” which, as Tim O’Reilly notes, is: “What is the future of social?”</b> MG Siegler noted that <a title="Facebook Groups Is Sort Of Like Google Wave For Human Beings" rel="bookmark" href="http://techcrunch.com/2010/10/07/facebook-groups-google-wave/">Facebook Groups Is Sort Of Like Google Wave For Human Beings</a>.</p>
<p>But I deeply hope that the open, distributed standard part of the Wave big idea is not lost in the mix here.</p>
<h3><b>Fourth Cylinder of Innovation: Keep the Ecosystem Going, Create More Value than You Capture</b></h3>
<p><i><b><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-1.56.15-AM.png"><img class="alignnone size-medium wp-image-5931" title="Screen shot 2010-10-27 at 1.56.15 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Screen-shot-2010-10-27-at-1.56.15-AM-300x181.png" alt="Screen shot 2010-10-27 at 1.56.15 AM" height="181" width="300"></a></b></i></p>
<p><i>The Points of Control map is interactive, so please <a href="http://map.web2summit.com/" target="_blank">click here</a> or on the image above for the full experience.</i></p>
<p>Tim O’Reilly points out that there is a worrisome dark side to the Points of Control map – see <a href="http://www.youtube.com/watch?v=3637xFBvkYg&amp;p=6F97A6F4BA797FB3" target="_blank">Tim’s keynote here</a>. To paraphrase some of his points:</p>
<p>There are companies on the map that are forgetting to think about creating a sustainable ecosystem. Rather than growing the pie, they are trying to divide up the pie, and that threatens to cause the fourth cylinder of innovation to misfire. This fourth cylinder is essential to the ecosystem.</p>
<p>Tim O’Reilly looks back to the lessons of the personal computing industry, which was incredibly vital and creative, and in which lots of people made money until a couple of big players <b>“sucked all the air out of the ecosystem”</b> and innovation had to go elsewhere.</p>
<p>The power of platforms is to create value not just for your company but for other people. Create value for yourself by creating value for other people. Tim O’Reilly used the wonderful example of Henry Ford inventing the weekend so that there would be enough people with the time and money to buy his mass-produced cars. Think about building the ecosystem that will support the future you are going to build. Grow the pie rather than cut up the pie. This will be the vital fourth cylinder of innovation in a <a href="http://www.cloudera.com/company/press-center/hadoop-world-nyc/" target="_blank">Web Squared</a> world.</p>
<p>Tim O’Reilly has long proposed that <a href="http://www.oreillynet.com/go/web2">Web 2.0 is all about harnessing collective intelligence</a>. But as Gartner predicts, “By year-end 2012, physical sensors will create 20 percent of non-video internet traffic.” Yet another previously unevenly distributed future is going mainstream, and if you haven’t read it already, now is the time to read this paper by Tim O’Reilly and John Battelle, <a href="http://www.web2summit.com/web2009/public/schedule/detail/10194" target="_blank">Web Squared: Web 2.0 Five Years On</a>.</p>
<h3><b>The Consequences of Living in a World of Data</b></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Dataarmsrace.jpg"><img class="alignnone size-medium wp-image-5817" title="Dataarmsrace" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/10/Dataarmsrace-300x199.jpg" alt="Dataarmsrace" height="199" width="300"></a></p>
<p>To bring this very long post to a close, here are just a few of the key questions regarding the consequences of living in a world of data that Tim O’Reilly raised during his keynote at Hadoop World:</p>
<p><b>“How would we solve the problem of digital identity in the age of sensors?” (Our smartphones are able to know their users by the way they walk – their gait!)</b></p>
<p><b>“How will we input data when our devices are smart enough to listen on their own?”</b></p>
<p><b>“How should we think about privacy in a world where data can be triangulated?”</b></p>
<p><b>“We are moving to a world in which every device generates useful data, in which every action creates information shadows on the net.”</b></p>
<p><b>“Shouldn’t we regulate the misuse of data rather than the possession of it?”</b></p>
<p><b>“How do we avoid a data arms race?”</b></p>
<p><b>“Create more value than you capture.”</b></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2010/10/31/tim-o%e2%80%99reilly%e2%80%99s-four-cylinder-innovation-engine-the-missing-manual-for-the-future/feed/</wfw:commentRss>
		<slash:comments>8</slash:comments>
		</item>
		<item>
		<title>Augmented Twitter at Jeff Pulver&#8217;s #140conf</title>
		<link>https://www.ugotrade.com/2010/04/23/augmented-twitter-at-jeff-pulvers-140conf/</link>
		<comments>https://www.ugotrade.com/2010/04/23/augmented-twitter-at-jeff-pulvers-140conf/#comments</comments>
		<pubDate>Fri, 23 Apr 2010 14:25:03 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[nanotechnology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[#140conf]]></category>
		<category><![CDATA[#ashtag. TEDxVolcano]]></category>
		<category><![CDATA[3D mailbox]]></category>
		<category><![CDATA[Alon Nir]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality games]]></category>
		<category><![CDATA[augmented twitter]]></category>
		<category><![CDATA[Dancing Ink Productions]]></category>
		<category><![CDATA[EComm]]></category>
		<category><![CDATA[Evolutionary Reality]]></category>
		<category><![CDATA[Farmville]]></category>
		<category><![CDATA[federation protocol]]></category>
		<category><![CDATA[Foure Square]]></category>
		<category><![CDATA[Gamepocalypse]]></category>
		<category><![CDATA[gowalla]]></category>
		<category><![CDATA[Jeff Pulver]]></category>
		<category><![CDATA[Jerry Paffendorf]]></category>
		<category><![CDATA[Joshua Fouts]]></category>
		<category><![CDATA[Latitude]]></category>
		<category><![CDATA[Loveland]]></category>
		<category><![CDATA[micro-real estate]]></category>
		<category><![CDATA[mobial social]]></category>
		<category><![CDATA[mobile social augmented reality]]></category>
		<category><![CDATA[mobile social games]]></category>
		<category><![CDATA[Open AR]]></category>
		<category><![CDATA[Open AR Web]]></category>
		<category><![CDATA[open standard federated protocol]]></category>
		<category><![CDATA[Rita J. King]]></category>
		<category><![CDATA[Rouli Nir]]></category>
		<category><![CDATA[social games]]></category>
		<category><![CDATA[The Kotel]]></category>
		<category><![CDATA[Tish Shute]]></category>
		<category><![CDATA[tishshute]]></category>
		<category><![CDATA[wave federation prtocol]]></category>
		<category><![CDATA[WhereCamp]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5406</guid>
		<description><![CDATA[Augmented Twitter &#8211; open, mobile, social augmented reality via ARwaveView more presentations from Tish Shute. Augmented Twitter Presenting Augmented Twitter (see video and slides above) at Jeff Pulver&#8217;s 140 Characters Conference (#140conf ) was super fun, and great video makes this a conference that you can enjoy catching up on after the fact. Jeff Pulver [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ippio.com/view_video.php?viewkey=da6ab5c15dd856998e4b" target="_blank"><img class="alignnone size-full wp-image-5407" title="Screen shot 2010-04-22 at 9.52.22 AM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/04/Screen-shot-2010-04-22-at-9.52.22-AM.png" alt="Screen shot 2010-04-22 at 9.52.22 AM" width="458" height="368" /></a></p>
<div id="__ss_3817428" style="width: 425px;"><strong style="display:block;margin:12px 0 4px"><a title="Augmented twitter - open, mobile social augmented reality via ARwave" href="http://www.slideshare.net/TishShute/augmented-twitter">Augmented Twitter &#8211; open, mobile, social augmented reality via ARwave</a></strong><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="425" height="355" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowScriptAccess" value="always" /><param name="src" value="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=augmentedtwitter-100422085925-phpapp01&amp;stripped_title=augmented-twitter" /><param name="allowfullscreen" value="true" /><embed type="application/x-shockwave-flash" width="425" height="355" src="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=augmentedtwitter-100422085925-phpapp01&amp;stripped_title=augmented-twitter" allowscriptaccess="always" allowfullscreen="true"></embed></object>View more <a href="http://www.slideshare.net/">presentations</a> from <a href="http://www.slideshare.net/TishShute">Tish Shute</a>.</div>
<h3>Augmented Twitter</h3>
<p>Presenting <a href="http://www.ippio.com/view_video.php?viewkey=da6ab5c15dd856998e4b" target="_blank">Augmented Twitter</a> (see video and slides above) at <a href="http://140conf.com/" target="_blank">Jeff Pulver&#8217;s 140 Characters Conference</a> (#140conf) was super fun, and <a href="http://www.ippio.com/140conf" target="_blank">great video</a> makes this a conference that you can enjoy catching up on after the fact. Jeff Pulver does an excellent job of keeping people to a challengingly short format. Even I managed to bring my talk in under 5 minutes!</p>
<p>#140conf is a real time mobile social crowd, and pretty attuned to Augmented Reality. Everyone in the audience had heard of Augmented Reality, and while most had never tried an AR app, nearly everyone used a mobile social app like <a href="http://foursquare.com/" target="_blank">Foursquare</a>, <a href="http://gowalla.com/" target="_blank">Gowalla</a>, or <a href="http://www.google.com/latitude/intro.html" target="_blank">Latitude</a>. As Dan Harple (@dharple) &#8211; Executive Chairman, <a href="http://www.gypsii.com/" target="_blank">GyPSii</a> &#8211; said in his interesting presentation, <a href="http://www.ippio.com/view_video.php?viewkey=44143e1f2f13b2b729ab"><strong>Evolution of Location and Places</strong></a>, &#8220;everyone gets connection, and that connection in real time is the thing if we can get it, and that real time connection is innately mobile.&#8221;</p>
<p><a href="http://www.arwave.org/" target="_blank">ARwave</a> aims to push mobile, social, real time connection even further with augmented reality. As Anselm Hook puts it so brilliantly in his <a href="http://www.slideshare.net/anselm/20100421-ecomm-pressy" target="_blank">presentation at EComm</a>, &#8220;AR is about publishing &#8216;verbs&#8217; &#8211; interactive, actionable, digital agents &#8211; not publishing 3D models.&#8221; I have some mega posts brewing on this topic. Augmented Reality will need to support publishing game-like behavior, and digital agents that can embody a set of actions and reactions.</p>
<p>This need for augmented reality to publish behavior, and to share and integrate multiple real time data streams in one view, is just one of the reasons <a href="http://www.arwave.org/" target="_blank">AR Wave</a> uses <a href="http://www.waveprotocol.org/" target="_blank">an open federated protocol</a>. Federation is also particularly important for augmented reality because, as Anselm pointed out at <a href="http://wherecamp.org/" target="_blank">WhereCamp</a>, AR will certainly demand very efficient distribution of state change at the systems level &#8211; to move the computation to its lowest latency.</p>
<p>The only cloud over our Augmented Reality party at #140conf was that #ashtag kept our co-panelist and panel chair from joining us. Rita J. King, @ritajking, who is Innovator-in-Residence at IBM&#8217;s Analytics Virtual Center, the &#8220;General of the Imagination Age,&#8221; and <a href="http://dancinginkproductions.com/" target="_blank">Dancing Ink Productions</a>, and Joshua Fouts, @josholalia, &#8220;Cultural Attach&#233;,&#8221; and Chief Global Strategist of Dancing Ink, were on a 5 day trek out of #ashcloud, and, sadly, not there for our panel.</p>
<p>But Twitter, once again, was a lifeline in a time of crisis, connecting them to TEDxVolcano, an impromptu unconference with must see presentations from Rita and others; see <a href="http://www.theimaginationage.net/" target="_blank">Rita&#8217;s blog for more</a>.</p>
<p>So the two of us carried the flag for Augmented Twitter: myself and Jerry Paffendorf, futurist, artist, entrepreneur, and swell guy &#8211; the co-inventor of the most famous real time social web system you have never heard of (actually I tried and loved it in alpha testing, before it was, quote, &#8220;shut down by bloodthirsty investors&#8221;).</p>
<p>Now Jerry lives in Detroit, Michigan, where he works on the <a href="http://makeloveland.com/" target="_blank">Loveland micro-real estate project</a>, which is the simplest, cheapest, funnest way to become a land owner. At a dollar a square inch it mixes video games and real estate, like Farmville for urban development.</p>
<p>Joshua and Rita, our very virtual panel mates, are the first and largest inchvestors, and are creating their own micro city within the project. Jerry is one of the most creative and original thinkers on the planet, so treat yourself to a glimpse of what is on his mind in the video above &#8211; <a href="http://makeloveland.com/" target="_blank">Loveland</a>, <a href="http://www.3dmailbox.com/" target="_blank">3D mailbox</a>, canned augmented reality, and the relationship of virtual worlds to the real time social web.</p>
<p>Jerry also hat tipped one of the most captivating projects and presentations of the conference, Alon Nir&#8217;s <a href="http://www.ippio.com/view_video.php?viewkey=510442f2fd40f2100b05"><strong>The Story Behind @TheKotel</strong></a>, &#8220;Tweet Yr Prayers!&#8221; What a great story about the power of Twitter to reach out into the world, and beyond! I got a chance to chat with Alon at #140conf, and I found out he is the brother of augmented reality guru Rouli Nir, @augmented. Rouli is known for his sharp and comprehensive AR commentary on <a href="http://artimes.rouli.net/" target="_blank">Augmented Times</a> and <a href="http://gamesalfresco.com/2010/04/22/the-future-of-ar-browser/" target="_blank">Games Alfresco</a>. Cool family!</p>
<p>Before I close this post, I want to mention @AndyDixn&#8217;s talk on the prison system, <a href="http://www.ippio.com/view_video.php?viewkey=7bc562a711ef96884a38"><strong>A conversation with Andy Dixon: What the prison yard &amp; twitter have in common</strong></a>. This conversation, I think, is a great example of what makes #140conf special. As @nwjerseyliz pointed out, we &#8220;hear few voices from those who&#8217;ve experienced that side of the issue.&#8221;</p>
<p>Thank you @jeffpulver for creating such a cool staging for so many diverse voices.</p>
<p>And before I close, here is what the only slide I didn&#8217;t have time to show said!</p>
<h3><strong>If you liked &#8220;Augmented Twitter&#8221;<br />
Don&#8217;t miss Augmented Reality Event!</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/04/are234x60augmented_w.png"><img class="alignnone size-full wp-image-5424" title="are234x60augmented_w" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/04/are234x60augmented_w.png" alt="are234x60augmented_w" width="234" height="60" /></a></p>
<p><strong>2 days, 3 tracks, 40 AR companies, 76 speakers. Art! Magic! Competitions! Awards! Bruce (the Prophet) Sterling, Will (The Sims) Wright, Jesse (Gamepocalypse) Schell, Blaise Aguera y Arcas (Microsoft Bing), and you! The <a href="http://augmentedrealityevent.com/2010/04/10/sneak-preview-of-are-2010-schedule-packed-with-augmented-reality-goodness/">sneak preview of the schedule is here</a>.</strong><br />
<strong>Register today at <a href="http://augmentedrealityevent.com/" target="_blank">Augmented Reality Event.com</a></strong></p>
<p><strong>Discount code for @140 attendees (and readers of this post!): <a href="https://register03.exgenex.com/GcmRegister/Index.Aspx?C=70000088&amp;M=50000500" target="_blank">TISH245</a> activates the $245 price for the full conference.</strong></p>
<p><strong>See you there!</strong></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2010/04/23/augmented-twitter-at-jeff-pulvers-140conf/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>The Physical World Becomes a Software Construct: Talking with Brady Forrest about Where 2.0, 2010</title>
		<link>https://www.ugotrade.com/2010/02/10/the-physical-world-becomes-a-software-construct-talking-with-brady-forrest-about-where-2-0-2010/</link>
		<comments>https://www.ugotrade.com/2010/02/10/the-physical-world-becomes-a-software-construct-talking-with-brady-forrest-about-where-2-0-2010/#comments</comments>
		<pubDate>Wed, 10 Feb 2010 05:37:24 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Artificial Life]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Phones in Africa]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[ARCommons]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[Brady Forrest]]></category>
		<category><![CDATA[crisis management]]></category>
		<category><![CDATA[Crisis Mappers]]></category>
		<category><![CDATA[CrisisCamp]]></category>
		<category><![CDATA[CrisisMapping]]></category>
		<category><![CDATA[Food Genome]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Google Wave Federation Protocol]]></category>
		<category><![CDATA[H.E.AI.D]]></category>
		<category><![CDATA[human energized artificial intelligence]]></category>
		<category><![CDATA[hyperlocal search]]></category>
		<category><![CDATA[hyperlocal view]]></category>
		<category><![CDATA[image links]]></category>
		<category><![CDATA[iPad]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[local search]]></category>
		<category><![CDATA[location based analysis]]></category>
		<category><![CDATA[location based technologies]]></category>
		<category><![CDATA[Mike Liebhold]]></category>
		<category><![CDATA[Mixer Labs]]></category>
		<category><![CDATA[mobile social]]></category>
		<category><![CDATA[mobile social interaction]]></category>
		<category><![CDATA[Nathan Torkington]]></category>
		<category><![CDATA[O'Reilly Media]]></category>
		<category><![CDATA[Open CV]]></category>
		<category><![CDATA[Open Street Map]]></category>
		<category><![CDATA[OpenAR]]></category>
		<category><![CDATA[Ovi]]></category>
		<category><![CDATA[People Finder]]></category>
		<category><![CDATA[physical hyperlinks]]></category>
		<category><![CDATA[proximity-based social networking]]></category>
		<category><![CDATA[real-time social location aware applications]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[Steve the Robot H.E.AI.D]]></category>
		<category><![CDATA[Twitter and geolocation]]></category>
		<category><![CDATA[Uber Geek]]></category>
		<category><![CDATA[Ushahidi]]></category>
		<category><![CDATA[Vernor Vinge]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[Where2.0]]></category>
		<category><![CDATA[WhereCamp]]></category>
		<category><![CDATA[Yelp Monocle]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5137</guid>
		<description><![CDATA[&#8220;The internet eats everything it touches,&#8221; write Brady Forrest and Nathan Torkington, O&#8217;Reilly Media, Inc., in their must read 2006 companion essay The State of Where 2.0 (PDF). Now in 2010 that statement is more true than ever. Last week, I talked to Brady about what we can look forward to at Where 2.0, 2010, [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://heaid.com/" target="_blank"><img class="alignnone size-medium wp-image-5138" title="Screen shot 2010-02-08 at 11.05.18 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/Screen-shot-2010-02-08-at-11.05.18-PM-300x202.png" alt="Screen shot 2010-02-08 at 11.05.18 PM" width="300" height="202" /></a></p>
<p>&#8220;The internet eats everything it touches,&#8221; write <a href="http://radar.oreilly.com/brady/" target="_blank">Brady Forrest</a> and <a href="http://nathan.torkington.com/" target="_blank">Nathan Torkington</a>, O&#8217;Reilly Media, Inc., in their must read 2006 companion essay <a style="border-width: 0px; margin: 0px; padding: 0px; color: #a43000; text-decoration: none;" title="Opens link in a new browser window." href="http://assets.en.oreilly.com/1/event/4/state_of_where_20.pdf" target="_blank">The State of Where 2.0</a> (PDF). Now in 2010 that statement is more true than ever.</p>
<p>Last week, I talked to Brady about what we can look forward to at <a href="http://en.oreilly.com/where2010" target="_blank">Where 2.0, 2010</a>, and what he thinks will be the &#8220;internet eating&#8221; trends emerging this year. Brady is uniquely positioned to get a glimpse of things to come. His job for O&#8217;Reilly Media is tracking changes in technology and organizing large scale events, including Where 2.0, which he chairs, and Web 2.0 Expo in San Francisco and NYC, which he co-chairs. Brady also runs <a href="http://ignite.oreilly.com/" target="_blank">Ignite</a>, and previously worked at Microsoft on Live Search. And, when not doing his day job, he participates in such Uber Geek activities as <a id="swtp" title="Steve the Robot H.E.AI.D - A Human Energized Artificial Intelligence Device...with lasers and generative sound." href="http://heaid.com/?page_id=5">Steve the Robot H.E.AI.D &#8211; A Human Energized Artificial Intelligence Device&#8230;with lasers and generative sound</a> (click on pic above or see <a id="qvff" title="video here" href="http://vimeo.com/7153320">video here</a>). Look out for <a title="Steve the Robot H.E.AI.D - A Human Energized Artificial Intelligence Device...with lasers and generative sound." href="http://heaid.com/?page_id=5">Steve the Robot H.E.AI.D</a> at <a id="sfnk" title="Augmented Reality Event, June 2nd and 3rd, Santa Clara, CA" href="http://augmentedrealityevent.com/">Augmented Reality Event, June 2nd and 3rd, Santa Clara, CA</a>, and a presentation from Brady.</p>
<p>As <a href="http://en.wikipedia.org/wiki/Vernor_Vinge" target="_blank">Vernor Vinge</a> pointed out in his intro to <a href="http://www.ismar09.org/" target="_blank">ISMAR 2009</a>, the &#8220;possibilities are both scary and wondrous&#8221; as &#8220;the physical world becomes much more like a software construct.&#8221; Brady Forrest has taken a lead role since 2004 &#8211; when &#8220;&#8216;local search&#8217; was interesting but not yet real&#8221; &#8211; in shaping this transformation.</p>
<p><a id="j70w" title="Where 2.0" href="http://en.oreilly.com/where2010">Where 2.0</a>, together with <a id="y46x" title="WhereCamp" href="https://wherecamp.pbworks.com/session/login?return_to_page=FrontPage">WhereCamp</a> (this year at Google), constitutes WhereWeek &#8211; a crucible for emerging trends in web mapping platforms and location based technologies. This year augmented reality, proximity-based social networking, local search, and the rapidly maturing field of Crisis Management are in the mix, along with the huge and long established GIS industry, which has moved rapidly into the Where 2.0 space.</p>
<p>But what business models will oxygenate the system is still a key question &#8211; one Brady discusses in the interview below. Certainly, the usefulness of location based analysis, mapping, new interfaces, and bringing this data to every application is clear.</p>
<p>Crisis management is center stage this year, with <a href="http://en.oreilly.com/where2010/public/schedule/speaker/2345">Jeffrey Johnson</a> (Open Solutions Group), <a href="http://en.oreilly.com/where2010/public/schedule/speaker/67704">John Crowley</a> (Star-Tides), and <a href="http://en.oreilly.com/where2010/public/schedule/speaker/2118">Schuyler Erle</a> (Entropy Free LLC) presenting <a id="d4lf" title="Haiti: CrisisMapping the Earthquake" href="http://en.oreilly.com/where2010/public/schedule/detail/13201">Haiti: CrisisMapping the Earthquake</a>. And Chris Vein &amp; Tim O&#8217;Reilly will &#8220;discuss how cities and application developers will benefit from open data and what these programs will look like in the future&#8221; in the plenary <a id="pv3i" title="City Data" href="http://en.oreilly.com/where2010/public/schedule/detail/14124">City Data</a>.</p>
<p>Mobile social, proximity-based social networking &#8211; which may soon emerge as a challenger to web based social networks &#8211; and augmented reality are the sexy rockstars of Where 2.0&#8217;s 2010 showcase of potentially disruptive technologies. Augmented Reality has had a breakthrough year, and this is reflected in its strong showing on the Where 2.0 schedule. But, as Brady notes, AR awaits the killer app that will drive it to the next level. Of course, we hope to unveil that at <a href="http://augmentedrealityevent.com/" target="_blank">are2010</a>!</p>
<p>At Where 2.0, I am presenting on the <a id="mknx" title="The Next Wave of AR: Exploring Social Augmented Experiences" href="http://en.oreilly.com/where2010/public/schedule/detail/11046">The Next Wave of AR: Exploring Social Augmented Experiences</a> panel. We will look at how social augmented experiences will be key to the next wave of mobile augmented reality. <a href="http://en.oreilly.com/where2010/public/schedule/speaker/2119" target="_blank">Mike Liebhold</a>, in a complementary presentation, looks at <a id="e0_a" title="Truly Open AR." href="http://en.oreilly.com/where2010/public/schedule/detail/11096">Truly Open AR</a>. If you have been reading UgoTrade, you already know I am an advocate for an open, distributed, real time communications framework for AR &#8211; see <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">ARWave</a>. Wave Federation Protocol is an open, fast, compact, federated communications protocol that is a dream come true for AR. And I would hazard a guess that in 2010, real time communications plus location will become oxygen.</p>
<p>But also key to the next wave of AR, as I discussed with <a href="http://www.hook.org/" target="_blank">Anselm Hook</a> in this post on <a id="it3q" title="Visual Search, Augmented Reality and a Social Commons for the Physical World Platform" href="../../2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/">Visual Search, Augmented Reality and a Social Commons for the Physical World Platform</a>, will be a view constructed through complex &#8220;hybrid tracking and sensor fusion techniques&#8221; (Jarell Pair), cooperating cloud data services, powerful search and computer vision algorithms, and apps that learn by context accumulation.</p>
<p>And as Brady notes in the interview below, a key step forward would be <strong>&#8220;to take advantage of your location, but it doesn&#8217;t need to have been mapped before.&#8221;</strong></p>
<p>For some interesting news on the mapping front (<em>and a discount code for Where 2.0 for Radar readers</em>) see Brady&#8217;s post, <a href="http://radar.oreilly.com/brady/" target="_blank">Flickr Photos in Google Street View</a>. These kinds of human built maps have the potential to develop into &#8220;photo-based positioning systems&#8221; that could create new opportunities for augmented reality. Brady asks:</p>
<p><strong>&#8220;how often the Flickr photos get updated, where else these Flickr photos are going to show up in Google&#8217;s services (Google Goggles perhaps?) and will they show up in new search partner <a href="http://www.bing.com/maps/">Bing</a>? I am doubly curious if Facebook will ever let its photos be used in a similar way.&#8221;</strong></p>
<p><a id="ooyl" title="Lior Ron speaking" href="http://en.oreilly.com/where2010/public/schedule/speaker/4743">Lior Ron</a> of Google Goggles will be at Where 2.0 to tell us all about <a id="oy8v" title="Looking into Google Goggles" href="http://en.oreilly.com/where2010/public/schedule/detail/14123">Looking into Google Goggles</a>. And if you want to learn more about how our view of the physical world will be &#8220;rooted in powerful computing, pervasive connectivity, and the cloud,&#8221; don&#8217;t miss this one. I will be there. And I very much hope there is a Q and A with this session.</p>
<p>During our conversation (see the full conversation below) Brady gave me his short list for breakthroughs that he sees as having big significance in 2010:</p>
<p><strong>&#8220;Well, I think Google Goggles is one of the most exciting things to me. Having access to a visual search&#8230; having someone actually release a visual search engine in that way, to consumers, I think is huge. You know, you see stuff like that in the labs. But I don&#8217;t see it&#8230; it&#8217;s rare to see it out.</strong></p>
<p><strong>I think Android is huge. And the way Google is pushing hardware to show off the platform, so the Nexus One being another example, and the fact that it&#8217;s breaking free from the carriers. Because I think when we get away from the carriers we are able to see more innovation; it&#8217;s what&#8217;s going to allow people or developers and companies to really innovate.</strong></p>
<p><strong>And I think Twitter adding geo-location to their APIs and buying <a href="http://www.crunchbase.com/company/mixer-labs" target="_blank">MixerLabs</a> is a huge move. I think Twitter may end up becoming the end-all be-all of location services. They are going to be updated constantly by people; they are going to have a really good grasp, real-time, of what is happening in any one place, at least based on the people.</strong></p>
<p><strong>And then with the addition of the MixerLabs data, they&#8217;re going to have more datasets at the ready, as well as any data that they start to collect from the clients themselves, like from TweetDeck.</strong></p>
<p><strong>So there are global clients that are updating Twitter. I think those are some of the most exciting things. And again, just to come back to Yelp, I think Yelp&#8217;s Monocle is also pretty significant, just because it&#8217;s an AR [augmented reality] app that&#8217;s being pushed into consumers&#8217; hands.</strong></p>
<p><strong>And we&#8217;ll see how useful they find it.&#8221;</strong></p>
<h3><strong><strong>Talking With Brady Forrest</strong></strong></h3>
<p><strong><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/bradyandgenomepost.jpg"><img class="alignnone size-medium wp-image-5141" title="bradyandgenomepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/bradyandgenomepost-300x199.jpg" alt="bradyandgenomepost" width="300" height="199" /></a></strong></strong></p>
<p><em>Pic above from WhereCamp 2009: Brady Forrest, facing camera, checks out Mark Powell&#8217;s <a id="a-:n" title="Food Genome Project. Check it out here" href="http://www.foodgenome.com/home">Food Genome Project</a>. <a title="Food Genome Project. Check it out here" href="http://www.foodgenome.com/home">Check it out here</a> &#8211; it just woke up!</em></p>
<p><strong>Tish Shute:</strong> So last year, when you were <a id="q5wp" title="interviewed for WebMonkey" href="http://www.webmonkey.com/blog/New_Wave_of_Apps_Build__Where__Into_the_Web">interviewed by Michael Calore for WebMonkey</a> before Where 2.0, you said, &#8220;Location is no longer a differentiator; it&#8217;s going to become oxygen.&#8221; And after attending Where Week 2009, I agreed with you and <a id="k.gp" title="wrote about it here" href="../../2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/">wrote about it here</a>. But in what ways did this prediction exceed expectations, and in what ways were you disappointed, now as we get close to Where 2.0, 2010?</p>
<p><strong><strong>Brady Forrest:</strong> Well, it exceeded expectations in that there are now five different mobile OSes where you can load on third party applications that can access users&#8217; locations, which can then be shared out.</strong></p>
<p><strong>And so, what it is making possible is real-time social location aware applications. And this is something that hasn&#8217;t truly been possible in years past. Looking back three years ago when the iPhone launched, it was the first major phone, especially in the US, to be location aware. And a year later, the App Store launched, giving developers full access to location, which previously had been held onto very, very, incredibly tightly by the carriers.</strong></p>
<p><strong>And now, a year and a half later, you have Android, you have Palm Pre, you have Blackberry working on their SDK to make it better, but it still is there. You have Windows Mobile working on their SDK. And, you know, who knows? Maybe even BREW will get into the mix.</strong></p>
<p><strong>And AT&amp;T is opening up their own interactive store. And so, AT&amp;T and Verizon and all their smart phones may now be looking at BREW. </strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Right. It was very exciting <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">last year at Where 2.0,</a> where we had all these new toolsets announced and then the iPhone being location aware.  What were the best implementations of these new capabilities that became available in 2009, do you think? What, in your view, was the most creative, surprising, and disruptive?</p>
<p><strong><strong>Brady Forrest:</strong> Well, I am a huge fan of <a href="http://www.youtube.com/watch?v=jHEcg6FyYUo" target="_blank">Yelp Monocle.</a> I think, you know, that is just a real-life example of using Augmented Reality. You are on a street. You have got a bunch of restaurants. You have got a bunch of businesses. And just to be able to swing through and look for people&#8230;I mean, and look for ratings and reviews. </strong></p>
<p><strong>They have just started to institute check in, so you will be able to know where your friends are and where your friends have gone. And that type of real-time, incredibly useful data is what will make augmented reality a standard part of the landscape. </strong></p>
<p><strong>I think it is that type of data, more so than, say, reference data, that will make people want to have all the possible sensors. So, what do you need for that? You need a camera. You need a compass for orientation. You need a GPS or, at least, a decent location service. And then you need a screen where you can actually see the data, and then you need an Internet connection. </strong></p>
<p><strong>So it is not like any phone can handle this. And so, you are going to need those killer apps to actually drive people to the type of phones that can support this. I don&#8217;t think AR is quite there yet. </strong></p>
<p><strong><strong>Tish Shute:</strong></strong> I agree, for true AR you need more than compass, camera, and GPS. There are some missing pieces for the real-deal experience &#8211; and not just a pair of sexy AR specs. As you mention, hybrid tracking and sensor fusion techniques that can combine computer vision technology with compass and GPS are vital. We need the compass. We need the GPS. We definitely need the camera! But we need this combined with computer vision techniques to get the tracking, mapping, and registration for true AR, or even to deliver a stable experience with the post-it/geonote AR that we see emerging with Layar, Wikitude, and others. At the moment we need to put together the tools for a true AR hyper-local experience.</p>
<p>And, of course, another aspect of this is the kind of physical hyper-links that applications like Google Goggles are building.</p>
<p>Do you have a speaker from Google Goggles at Where 2.0? I would be absolutely fascinated to hear more about their road map.</p>
<p><strong>Brady Forrest: I was loading Google Goggles onto the program yesterday.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Oh, you did? Oh, fantastic. And you have <a id="namh" title="Lior Ron speaking" href="http://en.oreilly.com/where2010/public/schedule/speaker/4743">Lior Ron speaking</a>!</p>
<p><strong><strong>Brady Forrest:</strong> It is actually possible it is not up on the website, but I talked to them and got them to agree to do a talk on it.</strong></p>
<p><strong><strong>Tish Shute: </strong></strong>I very much want to hear more about their road map. Google Goggles is a very, very significant step towards the physical internet and this integration of computer vision with the sensor fusion techniques necessary for true AR.</p>
<p><strong><strong>Brady Forrest:</strong> I mean, that combination with Computer Vision is going to be incredibly valuable. And then the other issue you have there is: is it on the client, or is it on the server? And right now, Google Goggles is definitely on the server, and that is not fast enough for real-time AR. So that is more of a 10-blue-links IO interface. </strong></p>
<p><strong><strong>Tish Shute:</strong></strong> And also, they haven&#8217;t got an open API, have they?</p>
<p><strong><strong>Brady Forrest:</strong> No, not yet.<br />
<strong><br />
Tish Shute:</strong> </strong>Maybe they will announce that. Can you nudge them? For true AR, we need to move forward in several areas &#8211; of course, there are the mediating device issues, like access to the video buffers in the iPhone, and the development of cool AR eyewear would be nirvana!</p>
<p>But my recent obsession has been working on a real-time communications infrastructure for AR, because that is quite doable now, yet we don&#8217;t really have that real-time infrastructure, i.e. a real-time mobile social utility that is really up to the real-time requirements of AR [see more about this <a href="http://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/" target="_blank">here</a> and on the <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">ARWave</a> wiki].</p>
<p>But we certainly don&#8217;t have the integration of computer vision and sensor techniques, or the access to the big image databases we need, let alone the clients we need to put it all together!</p>
<p><strong><strong>Brady Forrest:</strong> Google has done work to help out the community with their support of <a href="http://opencv.willowgarage.com/wiki/" target="_blank">OpenCV</a>. </strong></p>
<p><strong>It is based out of <a href="http://www.willowgarage.com/" target="_blank">Willow Garage</a>, but I believe that Google has done quite a bit of work on it.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Could you talk a bit more about OpenCV?</p>
<p><strong><strong>Brady Forrest: </strong><a href="http://oreilly.com/catalog/9780596516130" target="_blank">O&#8217;Reilly has a 500-page book</a> on it. It came out of the DARPA project, or the DARPA contest, where unmanned vehicles are raced. And that has since become, at least in my mind, the primary computer vision library that people work with. </strong></p>
<p><strong>I actually used it&#8230;or, one of the teammates did, on a project we did this summer. We put together an OpenCV system pretty quickly that detected where people were, and then we would play music based on that. </strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/3185351345_67e3514d36_o.jpg"><img class="alignnone size-medium wp-image-5144" title="3185351345_67e3514d36_o" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/3185351345_67e3514d36_o-300x225.jpg" alt="3185351345_67e3514d36_o" width="300" height="225" /></a></strong></p>
<p><a href="http://www.flickr.com/photos/55361487@N00/3185351345/" target="_blank"><em>Uber Geek Meeting from ShellyShelly&#8217;s photostream</em></a><br />
<strong>Tish Shute:</strong> Is that your Burning Man project? Do you have a link for that, and some pictures, video?</p>
<p><strong>Brady Forrest:</strong> <strong>Yeah. <a id="riim" title="Heaid.com" href="http://heaid.com/">Heaid.com</a>. Human Enhanced Artificial Intelligence Dancing.<br />
</strong></p>
<p><strong>Tish Shute:</strong> Thank you! This year the augmented reality story has been fairly basic &#8211; relying on basic sensors: compass, GPS, accelerometers. But it has also been an exciting year because we hadn&#8217;t even had smart phones with the camera, and GPS, and compass before this.</p>
<p>But now, the big adventure is to hook all these sensor fusion techniques up with computer vision so that we can actually do reverse positioning, for example, from photos of what we are looking at, right?</p>
<p><strong>Brady Forrest:</strong> <strong>Yeah, and start to use it in a more ad-hoc manner so that as you are traveling around, yes, it will take advantage of your location, but it doesn&#8217;t need to have been mapped before.</strong></p>
<p><strong>Tish Shute:</strong> Right &#8211; moving from mapping to context awareness. Could you give a quick explanation of what you did in your Burning Man project and how that relates to this kind of ad-hoc, on-the-fly beginning to know what you are looking at without it having been mapped before? That is fascinating.</p>
<p><strong>Brady Forrest:</strong> <strong>Sure. So we mounted a camera about 30 feet off the ground. And as people would move underneath or dance, they would move from block to block. And we had created kind of a bitmap of the area underneath and set up different sound zones. So as people moved from zone to zone, it would play different music.</strong></p>
<p><strong>And we used Max/MSP, with its OpenCV library, to handle the computer vision part and to handle determining which of the audio to fire off. And then, also, we had a laser that would play at the same time.</strong></p>
<p><strong>And then we used Ableton Live, which is a very popular DJ software, to actually handle the music. So as someone moved from, say, square A to square B, it would fire off various MIDI signals and Ableton would interpret that. And each person who went in&#8230;well, theoretically, up to 4-8 people. But because of how small the stage was and how the sounds are played, realistically, more like 4-6 people.</strong></p>
<p><strong>Each person had their own set of sounds.</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/3921063406_db4fbee6af_b.jpg"><img class="alignnone size-medium wp-image-5145" title="3921063406_db4fbee6af_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/02/3921063406_db4fbee6af_b-300x168.jpg" alt="3921063406_db4fbee6af_b" width="300" height="168" /></a></p>
<p><em>Pic from <a href="http://www.flickr.com/photos/extramatic/"><strong>extramatic</strong></a>&#8216;s Flickr </em><a id="sgdt" title="stream here" href="http://www.flickr.com/photos/extramatic/3921063406/sizes/l/"><em>stream here</em></a></p>
<p><strong>Tish Shute: </strong> Wow! Awesome.</p>
<p><strong>Brady Forrest:</strong> <strong>We would be able to detect different people, assign them a sound, or a set of sounds, so, like, bass, drums, vocals. And then we would have clips that played well together that were 3-5 seconds in length.</strong></p>
<p><strong>Tish Shute:</strong> At what distance could you detect people?</p>
<p><strong>Brady Forrest: </strong> <strong>We had a 22-foot area underneath the camera. That was mostly based on what the lens could capture.</strong></p>
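<p><em>The setup Brady describes &#8211; a bitmap of sound zones under an overhead camera, with zone changes firing MIDI clips &#8211; can be sketched roughly like this. This is a hypothetical illustration, not the project&#8217;s actual code: the grid size, frame size, and MIDI numbering are all invented, and a real version would get its centroids from computer vision (the project used Max/MSP with an OpenCV library) and send the notes on to Ableton Live.</em></p>

```python
# Hypothetical sketch of zone-based sound triggering. A vision stage yields
# dancer centroids in camera pixels; each centroid maps onto a coarse grid
# of sound zones, and crossing into a new zone returns a MIDI note number
# that a sequencer could turn into a clip. All constants are assumptions.

GRID_COLS, GRID_ROWS = 4, 4      # the "bitmap" of the area under the camera
FRAME_W, FRAME_H = 640, 480      # assumed camera frame size in pixels

def zone_for(cx, cy):
    """Map a pixel centroid to a zone index in 0..GRID_COLS*GRID_ROWS-1."""
    col = min(cx * GRID_COLS // FRAME_W, GRID_COLS - 1)
    row = min(cy * GRID_ROWS // FRAME_H, GRID_ROWS - 1)
    return row * GRID_COLS + col

def midi_note(person_id, zone):
    """Give each tracked person their own bank of notes, one per zone."""
    return 36 + person_id * GRID_COLS * GRID_ROWS + zone

class ZoneTracker:
    """Remember each person's last zone; emit a note only on zone changes."""
    def __init__(self):
        self.last_zone = {}          # person_id -> last zone index

    def update(self, person_id, cx, cy):
        z = zone_for(cx, cy)
        if self.last_zone.get(person_id) != z:
            self.last_zone[person_id] = z
            return midi_note(person_id, z)   # new zone: fire a new clip
        return None                          # same zone: keep clip playing
```

<p><em>Per the interview, each person gets their own bank of sounds (bass, drums, vocals), so the note number is offset by person as well as by zone.</em></p>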
<p><strong>Tish Shute:</strong> OMG I love this! This is really the next step for augmented realities &#8211; not just attaching reference data to the world but exploring new shared &#8220;consensual realities&#8221; (see Anselm Hook&#8217;s interview, part 2, upcoming).</p>
<p>I am very interested in something you talk about a lot in your &#8220;State of Where 2.0&#8221; essay: lifestyle coming first for a potentially disruptive technology, not commercial considerations. I still have to post the second half of my interview with Anselm Hook, but Anselm has some brilliant ideas in this area. He is working on a project called <a href="http://makerlab.org/news/21" target="_blank">Angel</a>, where part of the vision is for people to actually find what they need without explicitly having to ask for it.</p>
<p>And this brings me to something that is, to me, very noticeable about Where 2.0 this year, and very exciting. This is that location-aware technology for crisis management has basically matured, hasn&#8217;t it? We are beginning to see really useful stuff in this area now.</p>
<p>What is different this year that has brought crisis management and location-aware technology together &#8211; a world in crisis?</p>
<p><strong>Brady Forrest: </strong> <strong>Well, I think the primary thing that has brought all these technologies together is Haiti. Without Haiti&#8230;A lot of times, future crises benefit from the current one, because people put in a lot of work. And so, there is new infrastructure being laid with things such as <a href="http://www.ushahidi.com/" target="_blank">Ushahidi</a>, which is an open source platform originally for tracking election violence, but now being used to track people and their locations and food requests in Haiti.</strong></p>
<p><strong>Also, Haiti did not have solid, accessible, good maps at the time of the earthquake. And there have been two volunteer projects that have sprung up to help with that. One being headed by the <a href="http://www.harrywood.co.uk/blog/2010/01/21/haiti-earthquake-on-openstreetmap/" target="_blank">OpenStreetMap community</a> and many volunteers. And then the other, Google Map Maker.  And in both cases the activity around Haiti on these programs went up exponentially&#8230;or, I don&#8217;t know about exponentially, but a lot. In the case of Map Maker, it was up 100 times and was the most worked-on country for that week. And one of the most downloaded for that week.</strong></p>
<p><strong>Tish Shute:</strong> Yes, the work being done in <a href="http://crisiscommons.org/" target="_blank">CrisisCamps</a> around the country is very encouraging.</p>
<p><strong>Brady Forrest: And then also, you know, not just Ushahidi or OpenStreetMap, but also the <a href="http://haiticrisis.appspot.com/" target="_blank">People Finder</a>, which had an open API so that different organizations could share their data &#8211; thus learning from Katrina. All these different pieces of technology will be used in the future and hopefully be able to save more lives. I didn&#8217;t see&#8230;there are iPhone apps that were released. But I&#8217;m not aware of any Android apps. I&#8217;m not aware of any AR apps.</strong></p>
<p><strong>Tish Shute:</strong> We don&#8217;t have smart phones distributed widely enough for them to be appropriate, do we, in a lot of areas where crisis strikes.</p>
<p><strong>Brady Forrest:</strong> <strong>Yeah, and there was criticism that they shouldn&#8217;t have been on iPhone. You know, that iPhones were a waste of time. Because they aren&#8217;t&#8230;a lot of on-the-ground agencies aren&#8217;t going to have iPhones. However, a lot of people who are going from the States will, and if the apps are there, then people will start to have them.</strong></p>
<p><strong>But relatively speaking, an iPhone is not that expensive.</strong></p>
<p><strong>Tish Shute:</strong> One thing I noticed &#8211; and actually I discussed this in the second half of the interview I did with Anselm, which I am getting ready to post &#8211; but one aspect of the crisis filter was having people working as curators looking at messages coming out of Haiti, and while integrating the streams that would be useful is still probably a challenge, many curators will be on iPhones because they are based in the US.</p>
<p>We need to work across all platforms probably.<br />
<strong><br />
Brady Forrest:</strong> <strong>Yes. Patrick Meier of Ushahidi, who runs <a href="http://www.crisismappers.net/forum/topics/task-force-haiti-earthquake" target="_blank">Crisis Mappers</a>, ran a 24/7 emergency room out of the Fletcher School in Boston.</strong></p>
<p><strong>They had volunteers all over the States and Canada. They had volunteers in Vancouver that were translating Creole messages in under ten minutes.</strong></p>
<p><strong>Tish Shute:</strong> Yes, and another point that is interesting in terms of the reconstruction and rebuilding of Haiti is the whole idea of leapfrogging &#8211; the idea that, as we&#8217;ve seen in other parts of the world, when you miss pieces of basic infrastructure there&#8217;s an opportunity to skip a whole stage and go on to the next one, like how virtual banking took off in Africa because of the absence of brick-and-mortar infrastructure.</p>
<p><strong>Brady Forrest:</strong> <strong>To skip to a topic that&#8217;s been in my head: I&#8217;m just so bummed that the iPad does not have a camera.</strong></p>
<p><strong>Tish Shute:</strong> &#8220;Bummed&#8221; is barely the word I would use. Particularly as we had just been planning our ground-breaking AR/next-generation ebook in the days leading up to the announcement!</p>
<p>I suppose there is the hope they&#8217;re going to put it in the next one. But I suppose the play for conventional content delivery is so big that everything else is trivial in comparison &#8211; especially, it seems, jump-starting the emerging augmented reality industry!</p>
<p>So we might get thrown a camera and compass in the next round, but will we get access to the video buffers? AR enthusiasts may have to live on table scraps from Apple a bit longer, it seems.</p>
<p>But what blows my mind is why hasn&#8217;t the iPod touch got a camera, been AR-enabled? AR gaming would get an enormous boost from that alone. My son loves even the simple-minded AR games available now on the iPhone, and he loves iPhone games &#8211; he has 110 games downloaded!</p>
<p><strong>Brady Forrest:</strong> <strong>Ridiculous. Yeah. I don&#8217;t know what they don&#8217;t like about cameras. And I plan on getting an iPad, but because of the limitations I plan on using it for basic content and will probably get the bottom-line model. I can&#8217;t imagine&#8230;I don&#8217;t know.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>It is very interesting &#8211; who actually puts together the big enabling mediating device for AR is still an open question, isn&#8217;t it? I mean, that&#8217;s the truth; we have sort-of mediating devices, but we don&#8217;t have the magic brew yet, do we?</p>
<p><strong><strong>Brady Forrest:</strong> No. Not yet.</strong></p>
<p><strong><strong>Tish Shute:</strong></strong> Good enough in some ways, and certainly a start, but not quite the real deal. For me, Where 2.0 this year covers the groundwork for true AR: mobile, proximity-based social networking, visual search, computer vision, and sensor fusion techniques. And because all these things have a chicken-and-egg relationship, laying the groundwork is basically as important as having the mediating device &#8211; otherwise you can&#8217;t do interesting things when we get the mediating device, right?</p>
<p>Is this the year we get the magic brew for AR, i.e., the business model, the killer app, and the mediating device?</p>
<p><strong><strong>Brady Forrest:</strong> This is not the year.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Then I should ask you: are you in the goggles camp? That is, do you think AR needs eyewear to go mainstream?</p>
<p><strong><strong>Brady Forrest:</strong> I think this may be where we get&#8230;we start to see what is going to be the killer app that gets people to buy the hardware that will support AR. You see what I mean? And then from there the apps will come out and the hardware will advance in that direction.</strong></p>
<p><strong>I don&#8217;t think AR has made that leap yet. It hasn&#8217;t, to use almost a clich&#233;, it hasn&#8217;t crossed the chasm yet and it hasn&#8217;t proven that it will. Because I don&#8217;t know if&#8230;I think it&#8217;s difficult to tell right now. Is it going to be games? Is it going to be data layers? What is going to drive people to an AR device, especially one fully dedicated to it?</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>I think in terms of AR games taking off, a bit of help from the mediating device &#8211; e.g. access to the iPhone video buffers &#8211; would probably be enough to stoke AR games into being a hot commodity. But in terms of AR data layers going mainstream, we need some of the other players in the location space to put together the magic brew on the business model, don&#8217;t we?</p>
<p><strong><strong>Brady Forrest:</strong> That&#8217;s why I&#8217;m so curious though&#8230;that&#8217;s why I gave Yelp their own talk. They are&#8230;Those guys are gangbusters, they&#8217;re a consumer company, a very consumer-facing website. They&#8217;ve got amazing data stores. They do a lot of interesting stuff with their data. And I don&#8217;t think people always give them the geek credit they deserve.</strong></p>
<p><strong><strong>Tish Shute: </strong></strong>You began Where 2.0 back in 2004, when, as you point out, &#8220;&#8216;local search&#8217; was interesting but not yet real,&#8221; and you have always stressed something that&#8217;s proven to be absolutely true, which is lifestyle before commerce, right? And that if location-based services were going to be big, it was because they meant something in terms of our lifestyle, not just because they told us where to get another good burger. Right?</p>
<p>I think there&#8217;s been a lot of breakthrough in that area this year in terms of what location-based services and proximity-based social networks are to us now, how they&#8217;re changing our lifestyle. What do you see as the breakthroughs of 2009, and what are you hoping for in 2010?</p>
<p><strong><strong>Brady Forrest:</strong> Well, I think Google Goggles is one of the most exciting things to me. Having access to a visual search&#8230;having someone actually release a visual search engine in that way, to consumers, I think is huge. You know, you see stuff like that in the labs. But I don&#8217;t see it&#8230;it&#8217;s rare to see it out.</strong></p>
<p><strong>I think Android is huge. And the way Google is pushing hardware to show off the platform &#8211; the Nexus One being another example &#8211; and the fact that it&#8217;s breaking free from the carriers. Because I think when we get away from the carriers we are able to</strong><strong> see more innovation; it&#8217;s what&#8217;s going to allow people or developers and companies to really innovate.</strong></p>
<p><strong>And I think Twitter adding geo-location to their APIs and buying MixerLabs is a huge move. I think Twitter may end up becoming the end-all be-all of location services. They are going to be updated constantly by people; they are going to have a really good grasp, real-time, of what is happening in any one place, at least based on the people.</strong></p>
<p><strong>And then with the addition of the MixerLabs data, they&#8217;re going to have more datasets at their ready. As well as any data that they start to collect from the clients themselves, like from TweetDeck.</strong></p>
<p><strong>So there are global clients that are updating Twitter. I think those are some of the most exciting things. And again, just to come back to Yelp, I think Yelp&#8217;s Monocle is also pretty significant, just because it&#8217;s an AR app that&#8217;s being pushed into consumers&#8217; hands.</strong></p>
<p><strong>And we&#8217;ll see how useful they find it.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong><a href="http://en.oreilly.com/where2010/public/schedule/speaker/24907" target="_blank">Gary Gale, Yahoo! Inc.,</a> is going to talk on overcoming the business, social, and technological hurdles so we can reach the long-promised [Laughs] Hyperlocal Nirvana. I think you&#8217;ve outlined some of these obstacles in relation to AR, where the obstacles are in terms of the mediating device, and bringing all the pieces together, including computer vision techniques, in order to have an AR view. That&#8217;s the AR side of it. But the layer below that, which is where actual location-based apps are beginning to go mainstream now &#8211; are these presenting successful business models for location-based services?</p>
<p>So in short, in your view, what are the big hurdles to Hyperlocal Nirvana before we get to AR, even just for these location-based services?</p>
<p><strong><strong>Brady Forrest:</strong> Well, how do you make money?</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yeah, to put it bluntly. I like <a href="http://battellemedia.com/" target="_blank">John Battelle&#8217;s</a> way of putting it [laughs] how do we oxygenate the system!</p>
<p><strong><strong>Brady Forrest:</strong> So are location-based services something that you can make money in, long-term? Nokia bought NavTeq for $8 billion. And then two years later, they&#8217;re giving it away free as part of Ovi Maps.</strong></p>
<p><strong><strong>Tish Shute: </strong></strong>Right.</p>
<p><strong><strong>Brady Forrest: </strong>I&#8217;m assuming that that&#8217;s actually part of the plan. And although their hand may have been forced by Google with their release of Turn-By-Turn&#8230;it&#8217;s still got to be a hard nut to swallow that this huge investment in location ends up becoming a loss leader to sell more phones.</strong></p>
<p><strong>So, can you make money through subscriptions, through selling apps? And I think that is still being proven. The other one is, can you use advertising? And it&#8217;s kind of scary to see that Apple is restricting advertisers&#8217; use of location.</strong></p>
<p><strong>It came out yesterday or two days ago that app developers cannot use location for ads. They can only use location to show something interesting or useful to their customers.</strong></p>
<p><strong>And there&#8217;s a lot of speculation that it&#8217;s because Apple wants to control the location-based ads that go on the iPhone.</strong></p>
<p><strong>Tish Shute</strong>: Yes. I heard a strange rumor. Actually it&#8217;s an un-strange rumor &#8211; a likely rumor, in fact &#8211; that Apple and MS are getting together to replace some of the Google aspects of the iPhone, like search and maps?</p>
<p><strong><strong>Brady Forrest:</strong> Yes, &#8230;. Microsoft employees get 10% off at the Apple store. There&#8217;s a longstanding relationship between those two companies.</strong></p>
<p><strong>And Android is definitely more of a competitive threat than Windows Mobile is. And it&#8217;s well known what the relationship between PCs and Macs is. So I don&#8217;t think&#8230;I don&#8217;t find that to be that surprising of a rumor. I do wonder if it would hurt the iPhone, but it doesn&#8217;t surprise me that they would consider it.</strong></p>
<p><strong><strong>Tish Shute:</strong></strong> I do know, certainly from the AR point of view, Microsoft has recently hired some of the key researchers, including Georg Klein. And they are looking for more people in the image recognition area so it seems currently MS is going to be making a bigger push not just with PhotoSynth, but with image ID.</p>
<p>So it could be a pretty powerful combo between the iPhone, and Microsoft &#8211; they have some of the key computer vision research that would be needed for full AR.</p>
<p><strong><strong>Brady Forrest</strong>: Oh, yeah. Microsoft has amazing research depth. They&#8217;ve got an amazing team.</strong></p>
<p><strong><strong>Tish Shute: </strong></strong>But it is a bit of a mystery to me why Microsoft hasn&#8217;t done more with Photosynth. As I noted in my <a id="jyr:" title="previous post" href="../../2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/">previous post</a>, <a href="http://www.slashgear.com/nokia-image-space-adds-augmented-reality-for-s60-3067185/" target="_blank">Nokia&#8217;s ImageSpace</a> is beginning to do what many thought Microsoft would do with Photosynth two years ago. And &#8220;photo-based positioning systems&#8221; &#8211; 3D models of the environment that cover every possible angle, plus software that can work out in reverse, based on a picture, precisely where you are and where you&#8217;re facing &#8211; could be hugely important to AR. But that brings me to another mystery: why haven&#8217;t we seen more from Nokia in this space yet &#8211; the N900 doesn&#8217;t have a compass?</p>
<p><strong><strong>Brady Forrest:</strong> Yeah, I don&#8217;t know why Nokia hasn&#8217;t made more of a space for themselves in these things. They did a lot of early work in these areas. I think they are trying to&#8230;my guess is that they&#8217;re trying to restructure themselves. They made some pretty big changes on the web &#8211; Ovi made its own division. And they&#8217;ve been doing a lot of location-based acquisitions: Plazes and Gate5 several years ago, Gossler just in the past six months.  And so I think that&#8217;s really been their focus&#8230;</strong><strong>and the research team.</strong></p>
<p><strong>And as a large company, since they haven&#8217;t found a business model &#8211; which is what we&#8217;ve been discussing here &#8211; they are hesitant to launch it, or to&#8230;they don&#8217;t really know if this is a business that they need to launch, or if this is an app that they should have out there for fun.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Yeah. And that&#8217;s back to the oxygenation of the system and location. We really still have some work to do on the business models.</p>
<p>Final question! At the core of many of today&#8217;s business models is the idea of hoarding data &#8211; that&#8217;s an underpinning.</p>
<p>But ultimately, for open AR, we want a situation where we can really share data so that we don&#8217;t have the data all locked inside one particular browser or app. The current crop of AR browsers aren&#8217;t really browsers in the sense that we understand a browser on the web today, because the data&#8217;s locked inside each service &#8211; Wikitude, Layar, Acrossair, etc.</p>
<p>I have become very interested in Federation as a model for solving this, so that we can begin to have an opportunity to build consensual relations around data &#8211; sometimes sharing, sometimes not. Federation is my big dream at the moment. And now we even have something to work with in the Wave Federation Protocol. But how do we get from here to there, where we really have a federated world of data for AR and location-based services? Or do you think people need to solve the question of business models first?<strong><br />
<strong><br />
Brady Forrest:</strong> I think people need&#8230;I think one potential is ads; so, serving up content. And by ads, I also mean coupons, meals, the Foursquare&#8230;what it looks like Foursquare&#8217;s going to do, featured content, which is Layar&#8217;s.</strong></p>
<p><strong>So we need to see, is that the way we&#8217;re going to sell these? The other is to have the best viewer, which in some ways is a race in selling that, but that&#8217;s potentially a race to the bottom, price-wise.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Right. Do you think Google Wave Federation Protocol has a chance of taking off and changing the game for real-time communications, federation, real-time&#8230;<strong><br />
<strong><br />
Brady Forrest:</strong> Quite possibly with the real-time. I think they need to work on the UI.</strong></p>
<p><strong><strong>Tish Shute:</strong> </strong>Oh dear, we can&#8217;t discuss the Wave UI right at the end of the interview &#8211; of course I believe it would do better in an AR view! I know you have to go now, but I have to say Google Wave not standardizing the client/server interface &#8211; so we could see some new UIs for Wave [we are working with PygoWave for ARWave because of this] &#8211; and the iPad&#8217;s lack of a camera were two huge disappointments in recent months.</p>
<p><strong><strong>Brady Forrest: </strong>Yeah. It [the Wave client] is very difficult to use.</strong></p>
<p><strong>Tish Shute: </strong>But the Wave Federation Protocol is an open, fast, compact protocol that is a dream come true for AR. Open, distributed, real-time communications is a very big enabler for AR. I would hazard a guess that in 2010, real-time communications plus location becomes oxygen.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2010/02/10/the-physical-world-becomes-a-software-construct-talking-with-brady-forrest-about-where-2-0-2010/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>Visual Search, Augmented Reality and a Social Commons for the Physical World Platform: Interview with Anselm Hook</title>
		<link>https://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/</link>
		<comments>https://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/#comments</comments>
		<pubDate>Sun, 17 Jan 2010 17:05:01 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR Commons]]></category>
		<category><![CDATA[AR Consortium]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[ardevcamp]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARNY Meetup]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[ARWave Wiki]]></category>
		<category><![CDATA[augmented reality conference]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality goggles]]></category>
		<category><![CDATA[augmented reality social commons]]></category>
		<category><![CDATA[brightkite]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[Davide Carnivale]]></category>
		<category><![CDATA[distributed AR]]></category>
		<category><![CDATA[distributed augmented reality]]></category>
		<category><![CDATA[federated search]]></category>
		<category><![CDATA[FourSquare]]></category>
		<category><![CDATA[Games Alfresco]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[gowalla]]></category>
		<category><![CDATA[graffitigeo]]></category>
		<category><![CDATA[hacking maps]]></category>
		<category><![CDATA[Head Map manifesto]]></category>
		<category><![CDATA[imageDNS]]></category>
		<category><![CDATA[imagemarks]]></category>
		<category><![CDATA[imagewiki]]></category>
		<category><![CDATA[location based services]]></category>
		<category><![CDATA[Map Kiberia]]></category>
		<category><![CDATA[Mikel Maron]]></category>
		<category><![CDATA[mobile internet]]></category>
		<category><![CDATA[mobile social]]></category>
		<category><![CDATA[mobile social interaction utility]]></category>
		<category><![CDATA[Muku]]></category>
		<category><![CDATA[neo-viridian]]></category>
		<category><![CDATA[Nokia's ImageSpace]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[open distributed AR]]></category>
		<category><![CDATA[OpenGeo]]></category>
		<category><![CDATA[paige saez]]></category>
		<category><![CDATA[photo-based positioning systems]]></category>
		<category><![CDATA[physical world platform]]></category>
		<category><![CDATA[placemarks]]></category>
		<category><![CDATA[Planetwork]]></category>
		<category><![CDATA[Platial]]></category>
		<category><![CDATA[point and find]]></category>
		<category><![CDATA[proximity based social networks]]></category>
		<category><![CDATA[snaptell]]></category>
		<category><![CDATA[social cartography]]></category>
		<category><![CDATA[social commons]]></category>
		<category><![CDATA[social search]]></category>
		<category><![CDATA[SpinnyGlobe]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Tonchidot]]></category>
		<category><![CDATA[trust filters]]></category>
		<category><![CDATA[Viridian]]></category>
		<category><![CDATA[viridiandesign]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[Wave]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[WhereCamp]]></category>
		<category><![CDATA[whurley]]></category>
		<category><![CDATA[yelp]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5050</guid>
		<description><![CDATA[Visual search is heating up, and with it a key stage of turning the physical world into a platform is underway as images become hyperlinks to the world in applications like Google Goggles, Point and Find, and SnapTell &#8211; see this post by Katie Boehret. And while there may be no truly game changing augmented [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselmhook.jpg"><img class="alignnone size-medium wp-image-5051" title="anselmhook" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselmhook-300x225.jpg" alt="anselmhook" width="300" height="225" /></a></p>
<p>Visual search is heating up, and with it a key stage of turning the physical world into a platform is underway as images become hyperlinks to the world in applications like <a href="http://www.google.com/mobile/goggles/#dc=gh0gg" target="_blank">Google Goggles</a>, <a href="http://pointandfind.nokia.com/" target="_blank">Point and Find</a>, and <a href="http://www.snaptell.com/" target="_blank">SnapTell</a> &#8211; <a href="http://solution.allthingsd.com/20100112/in-search-of-images-worth-1000-results/" target="_blank">see this post by Katie Boehret</a>. And while there may be no truly game changing augmented reality goggles for a while, make no mistake, key aspects of our augmented view, factors that will have a lot to do with what we will actually see when an augmented vision of the world is a commonplace, are already in the works. And, as Anselm Hook (pic above <a href="http://www.flickr.com/photos/caseorganic/2994952828/" target="_blank">from @caseorganic&#8217;s flickr</a>) notes:</p>
<p><strong>&#8220;There is a real risk of our augmented reality world being owned by interests which are not our own. There is a real question of when you hold up that AR goggle, what are you going to see?&#8221;</strong></p>
<p>Cooperating services, e.g., Google Earth, Maps, Streetview, Google Goggles, and a leader in local search like Yelp (<a href="http://www.huffingtonpost.com/ramon-nuez/google-is-getting-ready-f_b_426493.html" target="_blank">see here</a>) would have an enormous ability to filter and control a mobile, social, context-aware view of the physical world, and Google themselves see an ethical quandary.</p>
<p><strong>&#8220;A Google spokesperson says this app has the ability to use facial recognition with Goggles, but hasn&#8217;t launched this feature because it hasn&#8217;t been built into an app that would provide real value for users. The spokesperson also cites &#8216;some important transparency and consumer-choice issues we need to think through&#8217;&#8221; </strong><strong>(quote from Wall Street Journal column</strong><a href="http://solution.allthingsd.com/20100112/in-search-of-images-worth-1000-results/" target="_blank"> by Katie Boehret)</a>.</p>
<p><a href="http://www.hook.org/" target="_blank">Anselm Hook</a> and <a href="http://paigesaez.org/" target="_blank">Paige Saez</a>, with great prescience, have been advocating a social commons for the placemarks and imagemarks to our physical world platform through a number of pioneering projects, including <a href="http://imagewiki.org/" target="_blank">imagewiki</a>. I have recently interviewed both Anselm and Paige (upcoming) in depth. My talk with Anselm was nearly three hours long! So I am publishing the transcript in two parts.</p>
<p>Understanding what it means to have a social commons for our physical world platform, and for augmented reality, is a key question for all of us to think about, but it is especially important for those of us involved in the emerging industry of augmented reality.</p>
<p>Anselm <a href="http://blog.makerlab.org/2009/11/augmentia-redux/">notes</a>:</p>
<p><strong>&#8220;The placemarks and imagemarks in our reality are about to undergo that same politicization and ownership that already affects DNS and content. Creative Commons, Electronic Frontier Foundation and other organizations try to protect our social commons. When an image becomes a kind of hyperlink &#8211; there&#8217;s really a question of what it will resolve to. Will your heads up display of McDonalds show tasty treats at low prices or will it show alternative nearby places where you can get a local, organic, healthy meal quickly? Clearly there&#8217;s about to be a huge ownership battle for the emerging imageDNS&#8221;</strong></p>
<p>The mobile internet is moving beyond the internet-in-your-pocket phase of mobility, with mobile, social, proximity-based, context-aware networks like <a href="http://www.foursquare.com/">FourSquare</a>, <a href="http://gowalla.com/" target="_blank">Gowalla</a>, <a href="http://brightkite.com/" target="_blank">Brightkite</a> and <a href="http://www.geograffiti.com/">GraffitiGeo</a> (see <a href="http://smartdatacollective.com/Home/23811">Smart Data Collective</a>) likely soon to take precedence over other forms of social network.</p>
<p>Regardless of the timeline for true augmented reality &#8211; 3D images &amp; graphics tightly registered to the physical world &#8211; proximity-based social networking and real-time search are already taking us into a hyper-local mode and the realm of augmented reality, which is <strong><strong>&#8220;inherently about who you are, where you are, what you are doing, and what is around you&#8221; </strong></strong>(<a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> &#8211; see <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">here</a>).<strong><strong> </strong></strong>The ground is being prepared for augmented reality now.</p>
<p>If you have been reading UgoTrade, you will know I have been actively involved in developing an open, distributed AR platform/mobile social interaction utility for geolocated data based on the Wave Federation Protocol &#8211; AR Wave, a.k.a. Muku, &#8220;crest of a wave&#8221; (see my posts <a href="http://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/" target="_blank">here</a>, <a href="http://www.ugotrade.com/2009/12/04/ar-wave-project-an-introduction-and-faq-by-thomas-wrobel/" target="_blank">here</a> and <a href="http://www.ugotrade.com/2009/10/13/ar-wave-layers-and-channels-of-social-augmented-experiences/" target="_blank">here</a> for more on this project, and the <a href="http://arwave.wiki.zoho.com/HomePage.html" target="_blank">AR Wave Wiki</a> here). Federation is, I believe, one vital aspect of developing a social commons for augmented reality and the physical world platform.</p>
<p>Also, a bit of news: I am co-chairing the upcoming <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">Augmented Reality Event (are2010)</a> with <a href="http://gamesalfresco.com/about/" target="_blank">Ori Inbar</a> of <a href="http://gamesalfresco.com/" target="_blank">Games Alfresco</a> and <a href="http://ogmento.com/" target="_blank">Ogmento</a>, and <a href="http://whurley.com/" target="_blank">whurley</a>. Sean Lowery, <a href="http://www.innotechconference.com/pdx/Details/other.php" target="_blank">Prospera</a>, is the event organizer, and <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">are2010</a> has the support of the <a href="http://www.arconsortium.org/" target="_blank">AR Consortium</a>. The <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">are2010</a> web site is live and there is an <a title="Augmented Reality Event (are2010) Opens Call For Speakers" href="http://augmentedrealityevent.com/2010/01/17/augmented-reality-event-2010-opens-call-for-speakers/">Open Call For Speakers</a>. You can submit your proposals and demos for one of the three tracks &#8211; business, technology, or production &#8211; <a href="http://augmentedrealityevent.com/speakers/call-for-proposals/" target="_blank">on the web site here</a>.</p>
<p><a href="http://augmentedrealityevent.com/" target="_blank"><img class="alignnone size-medium wp-image-5101" title="are2010" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/are20101-300x60.png" alt="are2010" width="300" height="60" /></a></p>
<p><a href="http://www.wired.com/beyond_the_beyond/" target="_blank">Bruce Sterling</a>, &#8220;prophet&#8221; of augmented reality and more, &#8220;will deliver the most anticipated <a href="http://augmentedrealityevent.com/speakers/" target="_blank">Augmented Reality keynote</a> of the year.&#8221;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/bruces-brasspost.jpg"><img class="alignnone size-medium wp-image-5105" title="bruces-brasspost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/bruces-brasspost-300x225.jpg" alt="bruces-brasspost" width="300" height="225" /></a></p>
<p>It didn&#8217;t surprise me when Anselm mentioned that Bruce Sterling was a key influence for his work on the geospatial web and augmented reality. Anselm explained:</p>
<p><strong>&#8220;I&#8217;d seen <a href="http://www.viridiandesign.org/notes/151-175/00155_planetwork_speech.html" target="_blank">a talk by Bruce Sterling</a> at an event called Planetwork [May, 2000]. And that event was, for me, a turning point where I decided to focus full time on exactly what I cared about instead of doing things that were kind of similar to what I cared about.</strong> <strong>So, his influence is a pretty significant one to me at that exact moment.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b.png"><img title="dhj5mk2g_490gcp7q6fn_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b-300x80.png" alt="dhj5mk2g_490gcp7q6fn_b" width="300" height="80" /></a></p>
<p>For more see <a id="q2or" title="viridiandesign.org" href="http://www.viridiandesign.org/About.htm">viridiandesign.org</a> &#8211; seems it is time for a &#8220;Neo-Viridian&#8221; revival!</p>
<p>This <a href="http://www.wired.com/beyond_the_beyond/2009/05/spime-watch-pachube-feeds/" target="_blank">post by Bruce Sterling on Pachube Feeds</a>, and Thomas Wrobel&#8217;s <a href="http://www.ugotrade.com/2009/08/19/everything-everywhere-thomas-wrobels-proposal-for-an-open-augmented-reality-network/" target="_blank">prototype design for open distributed augmented reality on IRC</a>, were key inspirations for me when I began thinking about the potential of the Google Wave Federation Protocol for augmented reality. I had been exploring <a href="http://www.pachube.com/" target="_blank">Pachube</a> and was deeply interested in <a href="http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">the vision of Usman Haque</a>, but I had a real <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">aha moment</a> when I read this:</p>
<p><strong>&#8220;(((Extra credit for eager ubicomp hackers: combine this [pachube feeds] with Googlewave, then describe it in microsyntax. Hello, 2015!)))&#8221;</strong></p>
<p>I think the AR Wave group will earn the extra credit and more very soon! <a href="http://need2revolt.wordpress.com/about/" target="_blank">Davide Carnovale, need2revolt</a>, and <a href="http://www.lostagain.nl/" target="_blank">Thomas Wrobel</a> have been leading the coding charge, and there will be a very early AR Wave demo soon, perhaps as soon as the <a href="http://www.meetup.com/arny-Augmented-Reality-New-York/" target="_blank">Feb 16th ARNY Meetup</a>.</p>
<p>Open access to the creation of the view that will eventually find its way into AR goggles will depend on more than the power of an open, distributed platform for collaboration like the AR Wave project. Our augmented reality view will be constructed through complex &#8220;hybrid tracking and sensor fusion techniques&#8221; (Jarell Pair), cooperating cloud data services, powerful search and computer vision algorithms, and apps that learn by context accumulation &#8211; and at the moment these kinds of resources, at least at scale, are for the most part in private hands.</p>
<p>In the interview below, Anselm discusses how trust filters, being able to publicly permission your searches so that other people can respond and can reach out to you, and the democratization of data in general are even more of a concern with augmented reality and hyper-local search. The task of understanding what it means to have a social commons for the outernet remains an open and pressing question.</p>
<p>Anselm explains (see full interview below):</p>
<p><strong>&#8220;as we move towards a physical internet where there&#8217;s no clicking and there&#8217;s no interface and the computer&#8217;s just telling you what it thinks you&#8217;re looking at, translating, you know, an image of a billboard to the name of the rock star who&#8217;s on that billboard, or translating the list of ingredients on a can of soup to the source outlets where it thinks those ingredients came from. When you have that kind of automated mediation, the question of trust definitely arises.</strong></p>
<p><strong>And we haven&#8217;t seen the Clay Shirkys or the Larry Lessigs of the world start to talk about this yet. Although I suspect that in the next four or five years the zero click interface will become the primary interface, that we&#8217;ll have&#8230;we&#8217;ll come to assume that what we see with the extra enhanced data we get projected onto our view is the truth. Yet, at the same time, there is just no structure or mechanism even being considered for a democratic ownership of it.&#8221;</strong></p>
<h3>Augmented Reality will emerge through sensor fusion techniques &amp; cooperating cloud services</h3>
<p>In 2010, sensor fusion techniques and computer vision technology, in conjunction with GPS and compass data, will create data linking that can enable the kind of augmented reality that has been the stuff of imagination for nearly four decades (see <a href="http://laboratory4.com/2010/01/the-reality-of-augmented-reality/" target="_blank">Jarrell Pair&#8217;s post</a>).</p>
<p>Putting stuff in the world in 3D is of course key to the original vision of augmented reality, and one of its biggest challenges. Augmented reality is going to be implicated in a real-time mapping of the world at an unprecedented scale and granularity. We have barely an inkling of the implications of this now.</p>
<p>Anselm and Paige have been working in the heart of the social cartography movement for nearly a decade. The vision and experience of this community is vital to understanding how augmented reality and the world as a physical platform can evolve into something that benefits people and allows them &#8220;to have a better understanding of the opportunities around them.&#8221;</p>
<p>We have been hacking maps for millennia &#8211; &#8220;from conceptual story mapping, to colloquial mapping in European development and the cartographic renaissance created by the global voyages and rediscovery of Ptolemy&#8217;s maps&#8221; (<a href="http://highearthorbit.com/" target="_blank">Andrew Turner</a>). And, recently, initiatives on a public-provided GIS, like <a href="http://opengeo.org/" target="_blank">OpenGeo</a>, have led the way toward more open, interoperable, geospatial data.</p>
<p>Mapping takes on a new and crucial role in augmented reality. <a href="http://www.slashgear.com/nokia-image-space-adds-augmented-reality-for-s60-3067185/" target="_blank">Nokia&#8217;s ImageSpace</a> is beginning to do what many thought Microsoft would do with Photosynth two years ago.</p>
<p>And if we see this kind of project developed into &#8220;photo-based positioning systems&#8221; &#8211; &#8220;3d models of the environment to cover every possible angle, and then software that can work out in reverse, based on a picture, precisely where you are and where you&#8217;re facing&#8221; (Thomas Wrobel) &#8211; we would find augmented reality leap forward overnight.</p>
<p>It is time to take very seriously the vast opportunities and potential pitfalls of an augmented world.</p>
<p><strong>&#8220;when you are mediating the translation layer between the image and the data, then there is an opportunity for you to control it, and that opportunity is hard to resist. It is hard to choose not to own that opportunity. It is an advertising opportunity. It is a revenue opportunity. It is a chance to send a message and a tone.</strong></p>
<p><strong>I know that Google and companies like that are keenly aware of the kinds of roles they don&#8217;t want to hold, but it is sometimes seductive to think about them. And I am afraid that we, as a community, need to assert an ownership, kind of a commons, over how computers will translate what they see to information that we perceive.&#8221;</strong></p>
<p>There are some initiatives emerging. <a href="http://www.tonchidot.com/" target="_blank">Tonchidot</a> (who <a href="http://www.techcrunch.com/2009/12/08/tonchidot-sekai-camera-funding/" target="_blank">closed on $4 million of VC for augmented reality</a> last December) has helped create the <a href="http://translate.google.com/translate?client=tmpg&amp;hl=en&amp;u=http%3A%2F%2Fwww.arcommons.org%2F&amp;langpair=ja%7Cen" target="_blank">AR Commons</a> in Japan. <a href="http://www.tonchidot.com/corporate-profile.html" target="_blank">CFO of Tonchidot</a> <a href="http://www.linkedin.com/ppl/webprofile?action=vmi&amp;id=499984&amp;pvs=pp&amp;authToken=r8TF&amp;authType=name&amp;trk=ppro_viewmore&amp;lnk=vw_pprofile" target="_blank">Ken Inoue</a> explained in <a href="http://www.ugotrade.com/2009/09/17/tonchidot-taking-augmented-reality-beyond-lab-science-with-fearless-creativity-and-business-savvy/" target="_blank">an interview with me in September 2009</a>:</p>
<p>&#8220;<strong>We feel that public data, such as landmarks, government facilities, and public transport should be shared. We see an AR world where people can readily and easily access information by just seeing â€“ quick, easy, and efficient.Â  And because of this ease and intuitiveness, children, the elderly and handicapped will surely benefit.Â  AR could help create a safer society.Â  Warnings, alerts, and safety information could save lives and avoid disasters.Â  These are what we, and <a href="http://translate.google.com/translate?client=tmpg&amp;hl=en&amp;u=http%3A%2F%2Fwww.arcommons.org%2F&amp;langpair=ja%7Cen" target="_blank">AR Commons</a> would like to tackle in the not so distant future.&#8221;</strong></p>
<p>But the task of building a social commons for the physical world platform has only just begun.</p>
<h3>Interview with Anselm Hook</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselm31.jpg"><img class="alignnone size-medium wp-image-5085" title="anselm3" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/anselm31-300x225.jpg" alt="anselm3" width="300" height="225" /></a></p>
<p><em>photo from <a href="http://www.flickr.com/photos/anselmhook/3832691280/in/set-72157621946362509/" target="_blank">Anselm&#8217;s Flickr stream here</a></em></p>
<p><strong>Tish Shute:</strong> We <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">first met last year at Wherecamp</a>. The start of 2009 was, I think, the &#8220;OMG finally&#8221; moment for augmented reality, and in less than a year AR, at least in proto forms, is breaking into the mainstream now! You are one of the founding visionaries/philosophers/hackers of the geo web and you have been thinking about the geo web and AR for a long time &#8211; <a href="http://hook.org/headmap" target="_blank">all the way back to the legendary Head Map Manifesto</a>, and before. Most recently you led the way in the very successful <a href="http://www.ardevcamp.org/wiki/index.php?title=Main_Page" target="_blank">ARDevCamp</a> in Mountain View. Could you start by telling me a little bit about the history of your pioneering work with geolocated data?</p>
<p><strong>Anselm Hook: </strong>I am a long time Geo fanatic. I&#8217;m really interested in social cartography and what some people call public-provided GIS &#8211; that&#8217;s some language that people use. Anyway, my personal interest, when I talk to people who are non-technical (and it&#8217;s been a long term interest in the way I phrase it), is that I want to help people see through walls. So, the goal is very simple. I want people to have a better understanding of the opportunities around them, the landscape around them. I always get frustrated when people make bad decisions because of a lack of information, especially when it&#8217;s related to their community and related to their environment. But, plainly put, I really just want &#8220;to help people see through walls&#8221;. It&#8217;s a very simple goal.</p>
<p><strong>Tish Shute:</strong> I know you worked on <a href="http://platial.com/" target="_blank">Platial</a>, which is really one of my favorite social mapping applications. It really broke new ground. What was the history of that? How did you get involved with Platial?</p>
<p><strong>Anselm Hook:</strong> That&#8217;s an interesting question. It actually started around 2000 when I saw Bruce Sterling talk. I had been writing video games for many years, and I was quite good at it, and I enjoyed it. But the reasons I was doing it diverged from why the industry was doing it. I was making video games because I liked to make shared spaces for my friends to play in and to share experience. I really enjoyed making shared environments. I worked on <a id="jrn-" title="BBS's" href="http://en.wikipedia.org/wiki/Bulletin_board_system">BBS&#8217;s</a> and my friends and I were always making these collaborative shared environments.</p>
<p>Once the video game industry started to take off, I started to do high performance, 3D interactive video games and to make compelling shared spaces, and it was a lot of fun. But the frustration for me was that a huge industry was growing around it and it became very commercial. Although it paid well, it started to diverge from my values, which were more centered around community environments and shared understanding.</p>
<p><strong>Tish Shute:</strong> Yes, very rapidly the big games kind of devolved from the social aspects and became more and more single player really, didn&#8217;t they?</p>
<p><strong>Anselm Hook:</strong> It was that way, actually, because even though you were often in a many-player world, you weren&#8217;t collaborating; everything else became just a target. I liked the idea of deep collaboration that recalls the kind of playful space you see in IRC, or in the real world, where people are solving real world problems.</p>
<p>And I grew up in the Rockies, and I always had a lot of access to the outdoors. So, I saw shared spaces and collaboration as a way to protect our environment. [To step back] I think people use different metrics for measuring their choices in the world, and many people have a value system centered around minimization of harm: making sure that people are not hurt. But my value system is different. I personally believe that protecting the planet is more important: to maximize biodiversity. I feel like protecting people around me comes from protecting the ecosystems they live in.</p>
<p><strong>Tish Shute:</strong> That&#8217;s interesting, isn&#8217;t it, because the history of Keyhole was really that, wasn&#8217;t it? Keyhole later became Google Earth, but I mean it began out of a project to look at what was going on in the ecosystem over Africa at that time, didn&#8217;t it?<br />
<strong><br />
Anselm Hook:</strong> Yes, in fact many people&#8217;s projects stem from an environmental concern. <a id="zxy9" title="Mikel Maron&#8217;s" href="http://brainoff.com/weblog/">Mikel Maron&#8217;s</a> work, for example &#8211; he&#8217;s doing <a id="euvm" title="Map Kibera" href="http://mapkibera.org/">Map Kibera</a>, and he also worked on OpenStreetMap.</p>
<p><strong>Tish Shute:</strong> Map Kibera &#8211; that is the new project?</p>
<p><strong>Anselm Hook:</strong> Oh, yes, his project is called <a id="r7ie" title="Map Kibera" href="http://mapkibera.org/">Map Kibera</a>. He&#8217;s mapping a city in Africa.<br />
[For more see <a id="ngn." title="Map Kibera's YouTube Channel" href="http://www.youtube.com/user/mapkibera">Map Kibera&#8217;s YouTube Channel</a> &#8211; <a id="amqx" title="photo below" href="http://www.flickr.com/photos/junipermarie/4098163856/" target="_blank">photo below</a> from <a href="http://www.flickr.com/photos/junipermarie/">ricajimarie</a>]</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_487qfcv76ft_b.jpg"><img class="alignnone size-medium wp-image-5052" title="dhj5mk2g_487qfcv76ft_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_487qfcv76ft_b-300x199.jpg" alt="dhj5mk2g_487qfcv76ft_b" width="300" height="199" /></a></p>
<p><strong>Tish Shute:</strong> Right, great!</p>
<p><strong>Anselm Hook:</strong> When I started to look at GIS and mapping I started to meet people who had a very similar background. What happened to me is I kind of stepped away from games around the year 2000. I&#8217;d seen a talk by Bruce Sterling at an event called <a id="e8dn" title="PlaNetwork" href="http://www.conferencerecording.com/newevents/pla20.htm">PlaNetwork</a>. And that event was, for me, a turning point where I decided to focus full time on exactly what I cared about instead of doing things that were kind of similar to what I cared about. So, his influence was a pretty significant one for me at that exact moment.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b.png"><img class="alignnone size-medium wp-image-5053" title="dhj5mk2g_490gcp7q6fn_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_490gcp7q6fn_b-300x80.png" alt="dhj5mk2g_490gcp7q6fn_b" width="300" height="80" /></a></p>
<p>[For more see <a id="q2or" title="viridiandesign.org" href="http://www.viridiandesign.org/About.htm">viridiandesign.org</a> &#8211; seems that it is time for a &#8220;Neo-Viridian&#8221; revival.]</p>
<p><strong>Tish Shute:</strong> It&#8217;s interesting because now your paths are crossing again with augmented reality. You are on the same wavelength again.</p>
<p><strong>Anselm Hook:</strong> It&#8217;s funny, actually, I&#8217;ve had a couple of brief overlaps in that way. Well, so in 2000 I<span id="mdsf" title="Click to view full content"> went to see this talk and I did a small project called &#8212; well, I called it <a id="bx3u" title="SpinnyGlobe" href="http://github.com/anselm/SpinnyGlobe">SpinnyGlobe</a>. What I did is I mapped protests from a number of websites onto a globe to show the level of community opposition to the pending war in Iraq. It was the first time there had been a protest before a war. So, it was very interesting to me. [ See <a href="http://hook.org/headmap" target="_blank">http://hook.org/headmap</a> ]<br />
<strong><br />
Tish Shute:</strong> That&#8217;s really fascinating. Do you have any pictures of that you could send me? </span></p>
<p><span id="r0h_" title="Click to view full content"><a href="http://www.flickr.com/photos/anselmhook/1747152617/sizes/m/in/set-72157602696188420/" target="_blank"><img class="alignnone size-medium wp-image-5054" title="dhj5mk2g_492ffct2df4_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_492ffct2df4_b-300x225.jpg" alt="dhj5mk2g_492ffct2df4_b" width="300" height="225" /></a></span></p>
<p><span id="mdsf" title="Click to view full content">photo from <a id="j05v" title="anselm's flickrstream" href="http://www.flickr.com/photos/anselmhook/1747152617/sizes/m/in/set-72157602696188420/">anselm&#8217;s flickrstream</a></span></p>
<p><strong>Tish Shute:</strong> Yes, I&#8217;ll definitely look <a id="ua2l" title="SpinnyGlobe" href="http://github.com/anselm/SpinnyGlobe">SpinnyGlobe</a><span id="m0:j" title="Click to view full content"> up. It sounds very interesting. One of the aspects of your work on geo-located data projects like this and <a id="h.gx" title="Platial" href="http://platial.com/">Platial</a> is that you really started to develop this idea of a culture of place, about how people make place. This was the wake-up call to me regarding the power of networks combined with geo-data. </span></p>
<p><span id="m0:j" title="Click to view full content">We are hoping to extend this idea into augmented reality with an open, distributed platform for AR so that we can collaboratively map our worlds from the perspective of who we are, where we are, and what we are doing. I know you&#8217;ve just done some work recently in augmented reality. I know you put the code up already. </span></p>
<p><span id="m0:j" title="Click to view full content">By the way, I love the way you take your philosophy into the way you make code &#8211; the practice of making some code, trying some things out, making it all public and publishing your findings, you know, your comments on that experience. Perhaps you could recap how you picked up recently on the state of play with augmented reality, what aspects you looked at, and what came out of that experience?</span></p>
<p><strong>Anselm Hook:</strong> So, it&#8217;s a very simple trajectory. Coming out of the work I had done &#8211; <a id="cs18" title="Platial" href="http://platial.com/">Platial</a>, among other projects &#8211; I started to just look at the hyper-local, and I suddenly realized that even those services weren&#8217;t really speaking to living, and to how to really see and solve local problems. What was missing was a sense of context.</p>
<p>The map doesn&#8217;t know how you&#8217;re feeling, it doesn&#8217;t know if you&#8217;re in a hurry, it doesn&#8217;t know what you want; it&#8217;s very static. Even the web maps are very static. And augmented reality I started to recognize as a combination of &#8212; well &#8212; it&#8217;s probably a collision of many forces, many forces that we&#8217;re all a part of. We&#8217;ve also started to realize that the real-time web is really important; it&#8217;s part of<span id="bja1" title="Click to view full content"> what AR is about.</span></p>
<p>We have all started to realize that the context is important. You know, your personal disposition, your needs, if you want to be interrupted or not. That is the kind of thing that the ubiquitous computing crowd has talked about. We started to recognize that there are sensors everywhere, and the ambient sensing communities talked about that. So what is funny for me about augmented reality is I started realizing it is just a collision of many other trends into something bigger.</p>
<p>Everything else we thought was a separate thing is actually just part of this thing. Even things like Google Maps or mapping systems we think are so great are really just kind of an aspect of a hyper-local view. You actually don&#8217;t really care what is happening 10 blocks away or 100 blocks away. If you could satisfy those same interests and needs within a single block, one block away, you would probably be really happy. You really just want to satisfy needs and interests, find ways to contribute, or get yourself fed, or whatever it is you want. And AR seemed to be the playground to really explore the human condition.</p>
<p><strong>Tish Shute:</strong> Anyway, I think one of the things that has been very amazing this year is that we now have good mediating devices that, for the first time, give us compasses, GPS, and accelerometers. But one of the missing pieces with AR at the moment is [tracking, mapping, and registration] &#8211; the kind of thing colloquial mappings of the world could be of great help with.</p>
<p>We have seen mapping coming out of the Flickr data; e.g., the University of Washington put maps together from the geo-tagged Flickr photos. Now if we could have that linked up with AR, then we have the kind of mapping we need to really hook the geo-data onto the world in a way that goes beyond&#8230; you know, what compass and GPS can really deliver is pretty minimal at the moment.</p>
<p><strong>Anselm Hook</strong>: There is a real risk of our augmented reality world being owned by interests which are not our own. There is a real question of, when you hold up that AR goggle, what are you going to see? Are you going to see corporate advertising? Are you going to see your friends&#8217; comments or criticisms? Is it going to be an Iran or a democracy, right? It is unclear.</p>
<p><span id="vix9" title="Click to view full content">Right now there are some disturbing trends I have noticed. I am a big fan of Google Goggles. I think it is a great project. But when you are mediating the translation layer between the image and the data, then there is an opportunity for you to control it, and that opportunity is hard to resist. It is hard to choose not to own that opportunity. It is an advertising opportunity. It is a revenue opportunity. It is a chance to send a message and a tone. </span></p>
<p><span id="vix9" title="Click to view full content">I know that Google and companies like that are keenly aware of the kinds of roles they don&#8217;t want to hold, but it is sometimes seductive to think about them. And I am afraid that we, as a community, need to assert an ownership, kind of a commons, over how computers will translate what they see into information that we perceive.</span></p>
<p><strong>Tish Shute:</strong> Yes. And this is how we met, again, recently [over the project to create an open, distributed platform for AR using the Wave Federation Protocol]&#8230;</p>
<p><span id="e18n" title="Click to view full content">Something I feel really deeply is that, you know, we basically need the physical internet to be as open as the end-to-end internet has been. Or more so, actually, because on the end-to-end internet the trend has been toward walled gardens. Basically Facebook became an enormous walled garden which, despite our predictions about them, [walled gardens] are really the social experience on the web. It&#8217;s very much in walled gardens still, and I really feel that with the physical internet we need to make great efforts for it not to become just a series of small pockets of privately funded walled gardens.</span></p>
<p>There needs to be some kind of communications infrastructure that keeps it open. That was when I got interested in looking at the Wave Federation Protocol, because it was an open, real-time protocol that could possibly be a basis for that. But the point you&#8217;ve just talked to &#8211; the mapping of the world, and who has the &#8220;goggles&#8221;, i.e., the image data, the image databases, that make the world meaningful &#8211; that&#8217;s still a BIG question [i.e. who controls the view?].</p>
<p>When I saw <a id="ewxn" title="ImageWiki" href="http://imagewiki.org/">ImageWiki</a>, [I realized] that is a piece that is vital for augmented reality. We need a huge social effort to be involved in this: linking in and creating the physical internet, creating the image hyperlinks that will make it meaningful.</p>
<p><span title="Click to view full content"><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_493fv23rg33_b.png"><img class="alignnone size-medium wp-image-5055" title="dhj5mk2g_493fv23rg33_b" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/01/dhj5mk2g_493fv23rg33_b-300x219.png" alt="dhj5mk2g_493fv23rg33_b" width="300" height="219" /></a></span></p>
<p><span id="e18n" title="Click to view full content"><strong>Anselm Hook:</strong> I think that&#8217;s a great point. The search interface, the kind of Internet that we&#8217;re used to, the way we talk to the network now, is fundamentally open end to end. Yes, you can have your oligarchies inside of it, as we see with Facebook, but you can always start your own venture up, and you can do a search on something and find that website and join it, or you can put up your own webpage and people can find it. </span></p>
<p><span id="e18n" title="Click to view full content">The translation layer, the idea of text search, and the discovery power and the serendipity and the openness of that discovery &#8211; it&#8217;s pretty open right now. We do have some serious boundaries of language, which is one of the reasons I was working at <a id="xg:8" title="Meedan.org" href="http://www.imug.org/events/past2007.htm#meadan">Meedan.org</a> [hybrid distributed, natural language translation] for a couple of years, trying to bridge that issue.</span></p>
<p>But here, as we move towards a physical internet, there&#8217;s no clicking and there&#8217;s no interface; the computer&#8217;s just telling you what it thinks you&#8217;re looking at &#8211; translating, you know, an image of a billboard to the name of the rock star who&#8217;s on that billboard, or translating the list of ingredients on a can of soup to the source outlets where it thinks those ingredients came from. When you have that kind of automated mediation, the question of trust definitely arises.</p>
<p>And we haven&#8217;t seen the Clay Shirkys or the Larry Lessigs of the world start to talk about this yet. Although I suspect that in the next four or five years the zero-click interface will become the primary interface, that we&#8217;ll come to assume that what we see, with the extra enhanced data projected onto our view, is the truth. Yet, at the same time, there is just no structure or mechanism even being considered for a democratic ownership of it.</p>
<p><span id="fv3x" title="Click to view full content">We have with DNS, for example, the idea that you can register a domain name and people can search for it, and find it, and go to it. There&#8217;s no such thing as an Image DNS, or an image translation to DNS, right now. What does it mean when everything is just &#8220;magic&#8221;, when there&#8217;s no way for you to be a part of the conversation, where you&#8217;re just a consumer of what people tell you &#8211; or of what one company, right now, tells you &#8211; is reality? That&#8217;s a real concern.<br />
<strong><br />
Tish Shute: </strong>This, to me, is the most important question at the moment. I mean, it&#8217;s the big one, and it&#8217;s the place to put energy if you love the Internet [and what it can now become], right? You&#8217;ve got to put a lot of energy into this because this [a democratized view of the physical world as a platform] won&#8217;t just happen; there&#8217;s a lot of momentum already for it to be heavily privatized. One reason is that some of the computer vision algorithms that, say, make sense of things like the geotagged photographs are not open. I mean, for example, the beautiful maps that have been made at the University of Washington [from Flickr geotagged photo sets] &#8211; that isn&#8217;t in the public domain.</span></p>
<p><strong>Anselm Hook:</strong> Right, Tish, and in fact you&#8217;re referring [with the maps from the Flickr photos] to ordinary maps, and we&#8217;ve already seen that maps lie; we&#8217;ve already seen how much maps reflect a certain truth that becomes the normative truth. Google Maps reflects roads, because it is about roads and cars, right? Only recently have they thought about buses and walking. So the normative view that people assume is the reality &#8211; showing off, you know, Starbucks, and roads, and cars &#8211; becomes the default; those prejudices are just assumed to be, you know, the truth. But they&#8217;re not the truth at all.</p>
<p>I was talking to a friend of mine in Montreal, [Renee Sieber], and she said that Indian portage routes are a bridge across land and water; they don&#8217;t think of a piece of land and a piece of water as being different things, they think of them as one thing: a route. It&#8217;s already a different kind of language &#8211; we can&#8217;t even reflect it.</p>
<p>So not only is there this kind of formal, anthropological lie, in a sense, but there&#8217;s this way that we deceive ourselves because of our own prejudices.</p>
<p><strong>Tish Shute:</strong> Yes, I agree, and that&#8217;s why I think some of the things you had written on ImageWiki point clearly to the need to create a social commons. We need a social commons for the real-time physical internet; we need it for the image hyperlinks that make sense of that.</p>
<p>And it&#8217;s a complicated thing in a sense, though, because we don&#8217;t actually have a good distributed infrastructure for AR yet, and I found, exploring AR Wave, that at last we have the suggestion of an open, federated protocol for real-time communication &#8211; the Wave Federation Protocol. [Real-time communications is a very important part of AR]. It isn&#8217;t an actuality yet where lots of people are able to use it and set up their own servers, and there&#8217;s not a standard all the way through [there is not a standard for how data is sent between the client and the server].</p>
<p>But the Wave Federation Protocol does make possible truly distributed social AR. When I saw ImageWiki, I started thinking about bringing ImageWiki together with the social, collaborative power of distributed AR. This really would be the basis of creating a social commons for augmented reality and the physical world as a platform &#8211; the <span id="np6x" title="Click to view full content">start of a bottom-up, deeply social collaboration on how we create augmented reality colloquial maps that can inform a hyper-local view of the world.</span></p>
<p><strong>Anselm Hook:</strong> Yes. When Paige Saez, John Wiseman, and myself, and a few other folks&#8230; You know, Benjamin Foote, Marlin Pohlmann, and a couple other people started to play with this, we quickly found that&#8230; We started to realize, &#8220;Oh, this kind of thing will be at least as popular as IRC. There will be at least as many people doing this as chatting in little virtual spaces. There&#8217;ll be at least as many people decorating the world with augmented reality markup, and maybe using the real world as a kind of barcode for translating what you&#8217;re looking at into an artifact, a digital artifact.&#8221;</p>
<p>And<span id="csy2" title="Click to view full content"> the size of that space was going to be huge, basically. Maybe not quite as commodifiable as Twitter, but certainly very energetic.</span></p>
<p>Many of the projects we did were just looking at these kinds of issues from an artistic, technical, and political point of view. We weren&#8217;t so much posing complete solutions, but simply using a praxis to explore the idea with an implementation, as a foundation for this discussion. So I think we sort of opened that can of worms for sure.</p>
<p><strong>Tish Shute:</strong> Did you actually set ImageWiki up to work as a location-based app yet?</p>
<p><strong>Anselm Hook:</strong> It is a location-based app. It collects your longitude, latitude, and the image, and stores it. And then it uses that as a way to translate that image to anything else. It could be a piece of text or a URL.<br />
<strong><br />
Tish Shute:</strong> So there is a smartphone app, but you didn&#8217;t take it as far as an AR app yet?</p>
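<p>[ The lookup Anselm describes &#8211; a geo-tagged image translating to a piece of text or a URL &#8211; can be sketched roughly as follows. This is a toy illustration, not ImageWiki&#8217;s actual code: it uses an exact content hash where the real service did fuzzy image recognition, and all names are hypothetical. ]</p>

```python
import hashlib


class ImageAnnotationStore:
    """Toy sketch of an ImageWiki-style store: a geo-tagged image
    maps to an annotation (a piece of text or a URL).

    Uses an exact SHA-256 hash as a stand-in fingerprint; a real
    service would need perceptual, fuzzy image matching."""

    def __init__(self):
        # fingerprint -> (latitude, longitude, annotation)
        self._entries = {}

    @staticmethod
    def _fingerprint(image_bytes: bytes) -> str:
        return hashlib.sha256(image_bytes).hexdigest()

    def add(self, lat, lon, image_bytes, annotation):
        """Store a geo-tagged image together with its annotation."""
        self._entries[self._fingerprint(image_bytes)] = (lat, lon, annotation)

    def lookup(self, image_bytes):
        """Translate an image back to its annotation, if known."""
        entry = self._entries.get(self._fingerprint(image_bytes))
        return entry[2] if entry else None
```

<p>[ Usage mirrors the flow described below: the phone submits a photo plus location, and the server returns whatever someone previously said about that image. ]</p>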
<p><strong>Anselm Hook:</strong> No. We didn&#8217;t do a heads-up view. There are apps on the iPhone store that do that, but they don&#8217;t do the brute force image recognition that we were using. We used a third party off-the-shelf algorithm that we found on Wikipedia and downloaded the source code, and threw it on the server. And John Wiseman in LA wrote the scalable database backend so that we could scale the actual&#8230;<br />
<strong><br />
Tish Shute:</strong> So how did you set the iPhone app up to work?</p>
<p><strong>Anselm Hook</strong>: The iPhone side was very simple. You take a picture of something and it tells you what it is. That is all it did. We would take the location, but the client side, the iPhone side, just rendered what was returned to you&#8230; It said, &#8220;Someone said that this picture of a barking dog is an advertisement for a local band.&#8221;</p>
<p><strong>Tish Shute:</strong> Right. So basically it was geo-tagged?</p>
<p><strong>Anselm Hook:</strong> Yes. We are just collecting the geo information. Actually, there were a whole lot of technical challenges. The whole idea of ImageWiki is actually kind of beyond the technical ability of a small team like us. It really does take a group like Google to do this kind of thing in a scalable way.<br />
<strong><br />
Tish Shute:</strong> Why is that?</p>
<p><strong>Anselm Hook:</strong> There are two sides. There is the curating of the images. I think that is the job of groups like us &#8211; open source groups who can curate images <span id="vxty" title="Click to view full content">that are owned by the community. And then there is the searching side, the algorithm side, where you are actually matching the fingerprint of one image to images in your database; that is much more industrial. We did both sides, but ours is not a scalable solution. It was mostly&#8230; proving that it could be done was what was important.<br />
</span><br />
<span id="a3ou" title="Click to view full content"><strong>Tish Shute: </strong>In terms of hooking ImageWiki up to the collaborative possibilities of AR Wave, wouldn&#8217;t federation pose some interesting possibilities for scaling search algorithms and all that?</span></p>
<p><span id="vp27" title="Click to view full content"><strong>Anselm Hook:</strong> Yes. And what is funny also, incidentally, is that we did look for some financial support for it, but we just didn&#8217;t find the investors to scale it. Now, other companies like SnapTell took a shot at it. And they have an app in the iPhone store where you can point at a beer bottle and get back the name of the beer.</span></p>
<p>The classic example everyone uses is a book. Amazon has all the image jackets of all their books. You can point SnapTell at almost any book and get back links to buy that at Amazon, the price of the book, and user comments on the book. So they are treating Amazon as the canonical voice of the book, for better or worse. That is the state of the art so far, up until Google Goggles came out a little while ago, which actually blows it out of the water. But, that is where we are now.</p>
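<p>[ The image-matching side Anselm mentions &#8211; fingerprinting an image and comparing it against a database &#8211; can be sketched with a tiny perceptual &#8220;average hash&#8221;: threshold each pixel of a small grayscale image against the mean, then compare bit strings by Hamming distance. This is a toy stand-in, not the algorithm ImageWiki, SnapTell, or Google Goggles actually used; all names are hypothetical. ]</p>

```python
def average_hash(pixels):
    """Perceptual 'average hash' of a small grayscale image,
    given as a 2D list of intensities: each pixel becomes a bit,
    1 if it is brighter than the image mean. Visually similar
    images yield similar bit strings."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)


def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))


def best_match(query_pixels, database):
    """Brute-force nearest-neighbour search, as in the interview:
    database maps label -> pixels; return the label whose hash is
    closest to the query's."""
    qh = average_hash(query_pixels)
    return min(database, key=lambda label: hamming(qh, average_hash(database[label])))
```

<p>[ A real system would hash down-sampled camera frames and index millions of fingerprints, which is the &#8220;industrial&#8221; scaling problem described above. ]</p>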
<p><strong>Tish Shute: </strong>Right. But the point you raise about how something like Amazon becomes the canonical voice of what a book is &#8211; this is the whole point, isn&#8217;t it?</p>
<p><strong>Anselm Hook:</strong> Is Amazon truth? Itâ€™s not bad. Jeff Bezos seems like a nice guy, but, you know.</p>
<p><strong>Tish Shute:</strong> And this is the point of having these open infrastructures for this. And this should be obvious in a way, but it comes back to what made the Internet great: even though, as you note, you get an oligarchy like Facebook, people always could just go off and do something else, right? Because the fundamental infrastructure was basically open and designed to be available for everyone. And many people have championed that and fought hard [to maintain this openness], haven&#8217;t they? They have devoted their lives to keeping it that way, even if the oligarchies have done their thing.<br />
<strong><br />
Anselm Hook:</strong> Yes. There are really some things underneath all of this that haven&#8217;t been solved yet.</p>
<p>One is that trust in social networks has not been built yet, so we can&#8217;t do peer-based recommendations very well. We can&#8217;t filter noise by peers. Twitter is kind of moving there, but I don&#8217;t just want to listen to my Twitter friends. I want to listen to my friends of friends. If I am getting truth from somebody, I want to get that truth from people my friends say that they trust.</p>
<p>Then the second problem is that there is a search business. My friend Ed Bice, who owns <a id="lir5" title="Meedan" href="http://beta.meedan.net/">Meedan</a>, always says that a search itself, a search request, is a publishing moment. It is an opportunity to say what you think. In the real world, if you are just hanging out with humans and you look somewhere, other people might follow your gaze and look at what you are looking at. Your gaze itself is a public act.</p>
<p>Gaze is a soft act, but it is one that is visible. With Google, the gaze<span id="zuat" title="Click to view full content"> of four billion people is invisible. We don&#8217;t know what people are looking at; there is no opportunity to participate. Let me give you a real example. I have taken an image of something &#8211; the bust of a figure, or a statue. Why can&#8217;t the museum in Cairo look at my request and tell me, oh yeah, that is Tutankhamen, or that is Nefertiti, right? Why can&#8217;t they have a chance to participate in the search and respond to me?</span></p>
<p><span id="zuat" title="Click to view full content"> Right now the only one that responds when I do a search is Google. We need to invert the search pyramid and open up search, so that search is a democratic act, so that you can publicly permission your searches, so that other people can respond and can reach out to you &#8211; not just you having to drive the dialogue. </span></p>
<p><span id="zuat" title="Click to view full content">The common example of this &#8211; and we see this everywhere &#8211; is: I am looking for a slice of pizza, right? I am hungry, I want some pizza. I have to ask Google, find twelve websites, call twelve phone numbers, talk to each of the twelve stores, and ask them: are they open late, is the food organic, is the food any good, do my friends like it?</span></p>
<p>Whereas what I should be able to do is just say it&#8217;s a search moment and I am interested in pizza. If those pizza places match my criteria &#8211; you know, my friends like them, they are organic, they are open &#8211; then that pizza place can call me. I have the money; why should I do the search? So the whole business of search, the whole structure of search, is predicated on a revenue model, but it&#8217;s a really short-sighted revenue model; it&#8217;s not a brokerage.</p>
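<p>[ The &#8220;inverted search&#8221; Anselm sketches &#8211; publish your intent once and let the providers who match it reach out to you &#8211; could look roughly like this toy broker. All names are hypothetical; this illustrates the idea, not any real system. ]</p>

```python
class SearchBroker:
    """Toy sketch of inverted search: instead of the seeker querying
    each provider, the seeker publishes an intent and every provider
    whose advertised attributes satisfy it gets to respond."""

    def __init__(self):
        # list of (attributes dict, callback taking the criteria)
        self._providers = []

    def register_provider(self, attributes, callback):
        """A provider advertises its attributes and how to reach it."""
        self._providers.append((attributes, callback))

    def publish_intent(self, criteria):
        """Broadcast a search moment; collect responses from every
        provider whose attributes match all the stated criteria."""
        responses = []
        for attributes, callback in self._providers:
            if all(attributes.get(k) == v for k, v in criteria.items()):
                responses.append(callback(criteria))
        return responses
```

<p>[ In the pizza example, each pizza place registers &#8220;organic&#8221;, &#8220;open late&#8221;, and so on, and only the places matching your published criteria call you back &#8211; the brokerage model rather than the query model. ]</p>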
<p>Search isn&#8217;t search; search is hand waving. These should be moments for us to have a discourse. So the problem we are seeing in AR with communicating the right information is actually underneath AR, at the level of the whole infrastructure.</p>
<p>Search needs to be inverted, and trust filters need to be built. We need to democratically own our data institutions. We don&#8217;t right now. That will be more of a concern, especially with AR.</p>
<p><strong>Tish Shute: </strong>Yes, especially with AR, which is why I got all excited about federation. Do you think federation has the potential, an opportunity, to create [the new infrastructure you describe]?</p>
<p><strong>Anselm Hook:</strong> Absolutely, it&#8217;s absolutely what we must do. It is much harder to do, but it is absolutely critical.</p>
<p><span id="lwzk" title="Click to view full content"><strong>Tish Shute:</strong> And why is it much harder to do? Could you explain that?</span></p>
<p><strong>Anselm Hook:</strong> Well, it&#8217;s very easy for a bunch of hackers to build a service that you log into and fetch some data from; it&#8217;s a single thing. They don&#8217;t have to talk to anybody, they can use their own protocols, they can hack it; it&#8217;s a big black box behind the scenes. There&#8217;s someone running back and forth in a giant Chinese room delivering manuscripts and scrolls to you. Whatever is behind the black box, you don&#8217;t care; it just works. But when you federate, you need to actually publish and have standards, and then you&#8217;re talking about semantics, and everyone starts getting really excited and waving their hands. It becomes a disaster. It&#8217;s at least an order of magnitude more difficult than DIY, build-it-yourself.</p>
<p><strong>Tish Shute:</strong> So, in terms of what Google Wave has done with its approach to federation, what do you think their achievements have been, and what are their obstacles? What do you think are the failings of Wave? Because it&#8217;s the first big, public, major-player-backed approach to something federated in real time, isn&#8217;t it?</p>
<p><strong>Anselm Hook:</strong> Yes. I think the most important non-federated service on the planet today is Twitter. <a id="uhg3" title="Identi.ca" href="http://identi.ca/group/identica">Identi.ca</a> isn&#8217;t getting any traction with respect to Twitter, [even though] Identi.ca is a federated version of Twitter and is very good. [ Identi.ca is now <a id="w05j" title="Status.net" href="http://status.net/">Status.net</a> ]. So we see already there that small players aren&#8217;t being competitive. Then look at other services like IRC. IRC is the secret backbone of the Net. All the open source projects, all the teams, all the people that work on open source projects are on IRC. It&#8217;s the only way they get anything done.</p>
<p>With Google Wave, and the protocols underneath Google Wave, we see an attempt to build a similar kind of real-time but distributed protocol. I think it&#8217;s the right direction. I think people should pick up the offering and make their own servers. I think that protocol is really great. The fact that it is compressed, high-performance, <span id="md2h" title="Click to view full content">and small &#8211; real-time blobs of data flying around &#8211; is all exactly the way it should be done. It is getting close to the kind of rewrite of the Internet that people keep talking about, because, you know, the net protocols are so bad; it is starting to treat intermittent exchanges as transitory, volatile, and not heavy.</span></p>
<p><strong>&#8230;to be continued. Part 2 coming soon!<br />
</strong></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/feed/</wfw:commentRss>
		<slash:comments>17</slash:comments>
		</item>
		<item>
		<title>The Next Wave of AR: Mobile Social Interaction Right Here, Right Now!</title>
		<link>https://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/</link>
		<comments>https://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/#comments</comments>
		<pubDate>Fri, 20 Nov 2009 04:53:07 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Artificial general Intelligence]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[AR browsers]]></category>
		<category><![CDATA[AR Dev camp]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[calo]]></category>
		<category><![CDATA[mobile social]]></category>
		<category><![CDATA[mobile social interaction utility]]></category>
		<category><![CDATA[open distributed augmented reality]]></category>
		<category><![CDATA[pygowave]]></category>
		<category><![CDATA[real time internet]]></category>
		<category><![CDATA[siri]]></category>
		<category><![CDATA[smart things]]></category>
		<category><![CDATA[social augmented experiences]]></category>
		<category><![CDATA[social augmented reality]]></category>
		<category><![CDATA[The Copenhagen Wheel]]></category>
		<category><![CDATA[the internet of things]]></category>
		<category><![CDATA[the outernet]]></category>
		<category><![CDATA[the sentient city]]></category>
		<category><![CDATA[Wave Federation Protocol]]></category>
		<category><![CDATA[Web Squared]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4869</guid>
		<description><![CDATA[The Next Wave of AR: Mobile Social Interaction, Right Here, Right Now! View more presentations from Tish Shute. Click on the image below or here to watch this presentation and others from Momo13]]></description>
				<content:encoded><![CDATA[<div id="__ss_2542526" style="width: 425px; text-align: left;"><a style="font:14px Helvetica,Arial,Sans-serif;display:block;margin:12px 0 3px 0;text-decoration:underline;" title="The Next Wave of AR: Mobile Social Interaction, Right Here, Right Now!" href="http://www.slideshare.net/TishShute/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now-2542526">The Next Wave of AR: Mobile Social Interaction, Right Here, Right Now!</a><object style="margin:0px" classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="425" height="355" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowScriptAccess" value="always" /><param name="src" value="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=thenextwaveofar2-091120000046-phpapp01&amp;stripped_title=the-next-wave-of-ar-mobile-social-interaction-right-here-right-now-2542526" /><param name="allowfullscreen" value="true" /><embed style="margin:0px" type="application/x-shockwave-flash" width="425" height="355" src="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=thenextwaveofar2-091120000046-phpapp01&amp;stripped_title=the-next-wave-of-ar-mobile-social-interaction-right-here-right-now-2542526" allowscriptaccess="always" allowfullscreen="true"></embed></object>
<div style="font-size: 11px; font-family: tahoma,arial; height: 26px; padding-top: 2px;">View more <a style="text-decoration:underline;" href="http://www.slideshare.net/">presentations</a> from <a style="text-decoration:underline;" href="http://www.slideshare.net/TishShute">Tish Shute</a>.</div>
<p>Click on the image below or <a href="http://www.mobilemonday.nl/talks/tish-shute-the-next-wave-of-ar/" target="_blank">here to watch</a> this presentation and others from <a href="http://www.mobilemonday.nl/">Momo13</a></p></div>
<p><a href="http://www.mobilemonday.nl/talks/tish-shute-the-next-wave-of-ar/" target="_blank"><img class="alignnone size-medium wp-image-4876" title="Screen shot 2009-11-20 at 1.32.24 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/11/Screen-shot-2009-11-20-at-1.32.24-PM-300x167.png" alt="Screen shot 2009-11-20 at 1.32.24 PM" width="300" height="167" /></a></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2009/11/19/the-next-wave-of-ar-mobile-social-interaction-right-here-right-now/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Augmented Reality &#8211; Bigger than the Web: Second Interview with Robert Rice from Neogence Enterprises</title>
		<link>https://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/</link>
		<comments>https://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/#comments</comments>
		<pubDate>Mon, 03 Aug 2009 23:24:12 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[privacy and online identity]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[websquared]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[AMEE]]></category>
		<category><![CDATA[AR]]></category>
		<category><![CDATA[AR Platform for Platforms]]></category>
		<category><![CDATA[ARConsortium]]></category>
		<category><![CDATA[ARToolkit]]></category>
		<category><![CDATA[Augmented Reality Browsers]]></category>
		<category><![CDATA[augmented reality platforms]]></category>
		<category><![CDATA[augmented reality SDKs]]></category>
		<category><![CDATA[augmented reality toolsets]]></category>
		<category><![CDATA[Dr Chevalier]]></category>
		<category><![CDATA[Gavin Starks]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[Green Tech AR]]></category>
		<category><![CDATA[Imagination AR Engine]]></category>
		<category><![CDATA[iphone and augmented reality]]></category>
		<category><![CDATA[iphone augmented reality]]></category>
		<category><![CDATA[iphone Video API and augmented reality]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Lumus]]></category>
		<category><![CDATA[markerless AR]]></category>
		<category><![CDATA[markers and Webcam AR]]></category>
		<category><![CDATA[Mobile AR]]></category>
		<category><![CDATA[MoMo]]></category>
		<category><![CDATA[nathan freitas]]></category>
		<category><![CDATA[Neogence Enterprises]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[Robert Rice]]></category>
		<category><![CDATA[Unifeye Augmented Reality]]></category>
		<category><![CDATA[wearable displays for augmented reality]]></category>
		<category><![CDATA[Web Squared]]></category>
		<category><![CDATA[Wikitude]]></category>
		<category><![CDATA[World as a Platform]]></category>
		<category><![CDATA[World Browsers]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=4184</guid>
		<description><![CDATA[I first started talking to Robert Rice, CEO of Neogence Enterprises, Chairman of the AR Consortium, in 2008.Â  Robert was already actively working on creating the worldâ€™s first global augmented reality network.Â  But it took a few months before what Robert had said to me about impending explosion ofÂ  augmented reality into our lives really [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/whowhowhere.jpg"><img class="alignnone size-medium wp-image-4186" title="Questions and Answers signpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/whowhowhere-300x199.jpg" alt="Questions and Answers signpost" width="300" height="199" /></a></p>
<p>I first started talking to <a href="http://www.curiousraven.com/about-me/" target="_blank">Robert Rice</a>, CEO of <a href="http://www.neogence.com/#/home" target="_blank">Neogence Enterprises</a>, Chairman of the <a href="http://docs.google.com/AR%20Consortium"><span>AR Consortium</span></a><span>, in 2008. Robert was already actively working on creating the world&#8217;s first global augmented reality network. But it took a few months before what Robert had said to me about the impending explosion of augmented reality into our lives really sunk in &#8211; &#8220;this is going to be much bigger than the Web</span>!&#8221; he extolled.</p>
<p>By January, 2009 I was convinced and I posted my first interview with Robert, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it OMG Finally for Augmented Reality?..&#8221;</a> As I mentioned in the intro, I had recently tried out <a href="http://www.wikitude.org/" target="_blank">Wikitude</a> and <a title="Nat Mobile Meets Social DeFreitas" href="http://openideals.com/" target="_blank">Nathan Freitas&#8217;s</a> graffiti app on the streets of New York City and I was impressed. Now, 7 months later, Augmented Reality has not disappointed: there is an explosion of new applications, and the arrival of some of the first commercial and practical toolsets, SDKs, and APIs for aspiring developers.</p>
<p>For more on this see my previous post, <a title="Permanent Link to Augmented Reality&#8217;s Growth is Exponential: Ogmento &#8211; &#8220;Reality Reinvented,&#8221; talking with Ori Inbar" rel="bookmark" href="../../2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/">Augmented Reality&#8217;s Growth is Exponential: Ogmento &#8211; &#8220;Reality Reinvented,&#8221; talking with Ori Inbar,</a> which is an introduction to my series of interviews with the key players in augmented reality and founding members of the <a href="http://www.arconsortium.org/" target="_blank">ARConsortium</a> &#8211; <a href="http://www.int13.net/en/" target="_blank">Int13</a>, <a href="http://www.metaio.com/" target="_blank">Metaio</a>, <a href="http://www.mobilizy.com/" target="_blank">Mobilizy</a>, <a href="http://www.neogence.com/" target="_blank">Neogence Enterprises</a>, <a href="http://ogmento.com/">Ogmento</a>, <a href="http://www.sprxmobile.com/" target="_blank">SPRXmobile</a>, <a href="http://www.tonchidot.com/" target="_blank">Tonchidot</a>, and <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a>.</p>
<p>As I mentioned before<span>, </span><a href="http://www.sprxmobile.com/about-us/" target="_blank"><span>Maarten Lens-FitzGerald</span></a><span> of </span><a href="http://www.sprxmobile.com/" target="_blank"><span>SPRXmobile</span></a><span> told me the other day that my first </span><a href="http://docs.google.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank"><span>Interview with Robert Rice</span></a><span>, in January of this year, was a key inspiration for SPRXmobile to get started on the development of </span><a href="http://layar.eu/" target="_blank"><span>Layar &#8211; a Mobile Augmented Reality Browser</span></a><span>. Much more on Layar and </span><span>Wikitude</span><span> &#8211; world browser in my upcoming interviews with </span><a href="http://www.sprxmobile.com/about-us/" target="_blank"><span>Maarten Lens-FitzGerald</span></a><span> and <a href="http://www.mamk.net/" target="_blank">Mark A. M. Kramer</a>, respectively</span>.</p>
<p>Recently, both Layar and Wikitude earned a mention in the white paper by Tim O&#8217;Reilly and John Battelle, <a href="http://www.web2summit.com/web2009/public/schedule/detail/10194" target="_blank">Web Squared: Web 2.0 Five Years On</a>. Web Squared is essential reading not only because it covers the underlying technological shifts of &#8220;Web Meets World,&#8221; of which augmented reality is a vital part; but, crucially, Web Squared focuses on how there is a new opportunity for us all:</p>
<p><strong>&#8220;The new direction for the Web, its collision course with the physical world, opens enormous new possibilities for business, and enormous new possibilities to make a difference on the worldâ€™s most pressing problems.&#8221;</strong></p>
<p>I am currently working on a post on Green Tech AR, one of the areas where augmented reality can play an important role &#8220;in solving the world&#8217;s most pressing problems.&#8221; Augmented Reality has a lot to offer Green Tech development. As <a href="http://twitter.com/AgentGav" target="_blank">Gavin Starks</a> of <a href="http://www.amee.com/" target="_blank">AMEE</a> said at <a href="http://wiki.oreillynet.com/eurofoo06/index.cgi" target="_blank">Euro Foo in 2006</a>, &#8220;climate change would be much easier to solve if you could see CO2.&#8221;</p>
<p>But really useful Green Tech AR requires markerless object recognition, which is still hard to do (going beyond feature tracking and modified marker recognition), and a tight alignment of media/graphics with physical objects, in addition to quite a high level of instrumentation of the physical world. And for Green Tech AR to really shine, we are going to need innovators like Robert Rice who are working on, and solving, multiple really hard problems like:</p>
<p><strong> &#8220;</strong><strong>privacy, media persistence, spam, creating UI conventions, security, tagging and annotation standards, contextual search, intelligent agents, seamless integration and access of external sensors or data sources, telecom fragmentation, privilege and trust systems, and a variety of others</strong><strong>.&#8221;</strong></p>
<p>Recently Robert Rice <a id="ph56" title="presented" href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>presented</span></a><span> at </span><a href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>MoMo</span></a><span> Amsterdam. </span> Here is a drawing of him in action (<a href="http://www.flickr.com/photos/wilgengebroed/3591060729/" target="_blank">picture below</a> from <a title="Link to wilgengebroed's photostream" rel="dc:creator cc:attributionURL" href="http://www.flickr.com/photos/wilgengebroed/"><strong>wilgengebroed</strong></a>&#8216;s Flickr Stream).</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRiceMoMOdrawing.jpg"><img class="alignnone size-medium wp-image-4185" title="RobertRiceMoMOdrawing" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRiceMoMOdrawing-300x184.jpg" alt="RobertRiceMoMOdrawing" width="300" height="184" /></a></p>
<p>In his Twitter feed Robert Rice ( <a href="http://twitter.com/robertrice" target="_blank">@RobertRice</a> ) Robert reminds us: &#8220;<span><span>By the way folks, what you see out there now as &#8220;augmented reality&#8221; is not what it is going to be in two years.&#8221;Â Â  Robert plans to show the first public demo of his &#8220;platform for platforms&#8221; atÂ  <a href="http://gamesalfresco.com/ismar-2009/ismar-08/" target="_blank">ISMAR 2009</a>. </span></span></p>
<p>Robert is currently writing up a series of White Papers. I got a preview of the first, &#8220;The Future of Mobile &#8211; Ubiquitous Computing and Augmented Reality.&#8221; Robert points out, <strong>&#8220;AR through the lens of the mobile industry and ubiquitous computing is almost overwhelming compared to AR as marker based marketing campaign.&#8221;</strong></p>
<p>I asked Robert, &#8220;What are the key take-aways for investors interested in the augmented reality field at the moment?&#8221;</p>
<p><strong><span>&#8220;First, Mobile AR is going to be bigger than the web. Second, it is going to affect nearly every industry and aspect of life. Third, the emerging sector needs aggressive investment with long term returns. Get-rich-quick startups in this space will blow through money and ultimately fail. We need smart VCs to jump in now and do it right. Fourth, AR has the potential to create a few hundred thousand jobs and entirely new professions. You want to kick start the economy or relive the golden days of 1990s innovation? Mobile AR is it.</span></strong></p>
<p><strong><span> Don&#8217;t be misguided by the gimmicky marketing applications now. Look ahead, and pay attention to what the visionaries are talking about right now. Find the right idea, help build the team, fund them, and then sit back and watch the world change. Also, AR has long term implications for smart cities, green tech, education, entertainment, and global industry. This is serious business, but it has to be done right. I&#8217;m more than happy to talk to any venture capitalist, angel investor, or company executive that wants to get a handle on what is out there, what is coming, and what the potential is. Understanding these is the first step to leveraging them for a competitive edge and building a new industry. Lastly, AR is not the same as last decade&#8217;s VR.&#8221;</span></strong></p>
<h3>Talking with Robert Rice</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRicepic.jpg"><img class="alignnone size-medium wp-image-4195" title="RobertRicepic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/08/RobertRicepic-201x300.jpg" alt="RobertRicepic" width="201" height="300" /></a></p>
<p><em><a href="http://www.flickr.com/photos/vannispen/3586765514/in/set-72157619022379089/" target="_blank">Picture of Robert Rice</a> at <a href="http://www.mobilemonday.nl/talks/robert-rice-augmented-reality/" target="_blank"><span>MoMo</span></a> from <a href="http://www.flickr.com/photos/vannispen/"><strong>Guido van Nispen</strong></a>&#8216;s Flickr Stream</em></p>
<p><strong>Tish Shute:</strong> So perhaps we better start with an update on the state of play with Neogence?</p>
<p><strong>Robert Rice:</strong> Neogence is doing well actually. We don&#8217;t talk much about the fact that we are still a small startup and we face a lot of the usual obstacles related to that and being a small team. Fundraising has been extra difficult, mostly because people are just now beginning to see the potential in AR, but that is still colored by perceptions based on a lot of the gimmicky AR ad campaigns out there. Still, it is better than it was two years ago, when the idea of an AR startup was a bit of a joke to a lot of VCs we talked to. However, we do have an agreement from a new venture fund in Europe (which we can&#8217;t talk about yet) for our first round of funding, but we don&#8217;t expect to close that for several months.</p>
<p>If all goes well, we hope to debut our first public demo at ISMAR 2009 in Orlando to select individuals and a few press folks. We might release a few viral videos before then that are conceptual and about what we are building in the long run, <span>but that depends on how things go over the next several weeks</span>.</p>
<p>We are also very active in looking for and building strategic partnerships and relationships with other companies, and this is not restricted to the augmented reality or mobile sector. As I have said before, we are looking at this as a long term business venture and the industry as something that will be bigger than the web itself within ten years. We are doing typical contract work and custom AR solutions to keep the cash flow going and build up the corporate resume a bit. So, if you want something done, and better than the stuff you are seeing now with all of the generic &#8220;look at our brand in AR with markers and a webcam&#8221; you should definitely give us a call.</p>
<p style="margin-left: 0pt; margin-right: 0pt;"><strong>Tish Shute:</strong> Just to clarify, because most of the recent press has been about browser-type AR like Wikitude and Layar, which are not AR in the purist sense because they do not have graphics tightly linked to the physical world. Neogence, if I am correct, is focused on building a true AR platform in the sense I just described?</p>
<p><strong>Robert Rice: </strong>Hrm, I have argued with a few others about the actual definition of AR. Some people prefer a narrow and limiting view (3D overlaid on video), but I think in terms of the market and the end-user, it is better to have a wider definition. In that sense, AR is purely the blend of real and virtual, with or without full 3D overlaid on video. If we go with that, then Wikitude, Layar, Sekai, NRU, and others all fit into the AR definition.</p>
<p>Anyway, you are correct. We are building a true <span>platform for AR, and this is quite different from what others are marketing as AR browser &#8220;platforms.&#8221;</span></p>
<p><span>There are a few problems with the &#8220;AR Browsers&#8221; approach that no one seems to be noticing. </span>One is that they are all trying to get people to build new applications for their browsers, when they should be trying to get people to create content that they can share and browse.</p>
<p>Second, someone using Layar is not going to see anything that is designed for Sekai or Wikitude.</p>
<p>Third, the experiences are generally for one user. While I love all of these guys and think each of the teams has some real talent on it, the model is flawed until someone using Wikitude can see the same thing that someone using Layar or Sekai Camera is seeing (provided they are in the same physical location).</p>
<p><span>While we are working on our own client side technologies that we hope will be useful and integrated with every mobile device and AR browser out there, our core focus is on connecting everything and everyone together, and facilitating the growth of the industry with the tools to create content, applications, and so forth. We want to solve the really difficult technical problems (some of which most people haven&#8217;t even considered yet, because of the perspective from which they are looking at the potential of AR), and make it easy for everyone else to do the cool stuff. We want to be the facilitators.</span></p>
<p>If you really want an idea of where we are going or some of what has inspired us, you have GOT to read Dream Park, Rainbows End, and The Diamond Age. If you have heard me speak anywhere or read my blog, you know that I am continually suggesting these and others.</p>
<p>Anyway, short answer, yes, we are building a true <span>platform for </span><span>ubiquitous mobile augmented reality, and we are absolutely the first to be doing so</span>.<span> I hope to demo some of this in October at ISMAR, with a full commercial launch next year (10/10/10 at 1010am Hehe, seriously). We will probably launch a website soon for people to start signing up and building a community now (especially if you want in on the beta testing of the whole kibosh).</span></p>
<p><strong>Tish:</strong> So just to clarify, how will Neogence&#8217;s approach differ from and fit into the growing world of Augmented Reality tools that we have now, e.g., <a href="http://www.hitl.washington.edu/artoolkit/" target="_blank">ARToolkit</a>, <a href="http://www.imagination.at/en/?Projects:Scientific_Projects:MARQ_-_Mobile_Augmented_Reality_Quest" target="_blank">Imagination</a>, <a href="http://www.metaio.com/products/" target="_blank">Unifeye</a>?</p>
<p><strong>Robert:</strong> I guess you could say that we are trying to build the infrastructure for the global augmented reality network. This could be viewed as a service, or even a platform for platforms. If Neogence does its job right, anything you create using ARtoolkit, Unifeye, or Imagination would be applications you could <span>ultimately link to, integrate with, or deploy on or through</span>, what we are building, and not be tied to a specific set of hardware, browser, or walled garden.</p>
<p><strong>Tish: </strong><span>You mention Neogence is going to provide a platform for platforms. Without knowing the details that sounds like a lot of centralization, which prompts the inevitable question: &#8220;Who owns the data?&#8221; Do you think other AR applications or providers</span> would resist a &#8220;Platform for Platforms?&#8221; I know the potential centralization power of Google Wave has already got people talking about these issues (one of the comments in my recent blog post was about how the Google Wave protocol may be interesting for at least some parts of augmented reality communication).</p>
<p><strong>Robert:</strong> It really depends on perception and how we end up <span>building it. We aren&#8217;t talking about creating a closed system. As far as who owns the data, it depends on what data we are talking about. For the most part, I think that if the end-user creates something, they should own it and have control over it. They should also be able to do what they want with it, independent of everything else. </span></p>
<p><span>This is one thing that proponents of the smart cloud and the thin/dumb client don&#8217;t like to talk about. It sounds great on paper, but when you start thinking about it, all that does is strip away power from the end user. Case in point: Amazon recently wiped every copy of George Orwell&#8217;s 1984 from all Kindle devices. They claimed they didn&#8217;t have the rights to distribute/publish it and it was available by accident. The scary thing, though, is that they literally went into every Kindle out there, found copies, and deleted them.</span></p>
<p><span> How would you like it if Microsoft suddenly decided to delete every copy of Microsoft Office? Or every file that had a .doc extension? That is a huge violation&#8230;we feel like we own what is on our computers. But with the whole cloud thing, your data is at the mercy of whoever is running the cloud servers. No privacy, no ownership, no control. And if the system breaks, all you will have is a pretty dumb device that can&#8217;t do much on its own. Now, that isn&#8217;t to say that the technical merits and benefits of a cloud model aren&#8217;t worth pursuing; they are.</span></p>
<p><span> But I think there needs to be some hybrid model. Don&#8217;t dumb down my computer or my smart phone; let&#8217;s keep pushing how much these devices can do. We should take full advantage of centralized and distributed systems, but in a hybrid mashup sense. That is what we are pursuing with our AR platform, while trying to protect the ownership and intellectual property rights of the end user.</span></p>
<p><strong>Tish: </strong>Earlier today I was telling you how impressed I was by Google Wave &#8211; it is quite mind blowing to experience massively multiplayer real time interaction on what will be an open internet wide platform &#8211; Wave is breaking new ground here and more than one person has mentioned its potential role in AR to me (see <a href="http://www.ugotrade.com/2009/07/28/augmented-realitys-growth-is-exponential-ogmento-reality-reinvented-talking-with-ori-inbar/" target="_blank">the comments to my recent post on Ogmento</a>).</p>
<p>I know you are a strong advocate of this kind of real time shared experience being part of AR.Â  But we are only just beginning to see it emerge via Wave on the existing web &#8211; what will it take to have this kind of real time shared experience in AR!Â  We got briefly into the thick client, thin client, cloud versus P2P discussions &#8211; what is your approach to delivering a massively shared real time experience that is like Wave not confined to a walled garden?</p>
<p><strong>Robert:</strong> I&#8217;<span>m not a fan of any of those models as being stand alone or mutually exclusive. Again, the hybrid model with the best of both worlds is key. In the early stages of the emerging industry, you are likely to see some walled gardens (or perhaps a walled garden of walled gardens&#8230;). </span></p>
<p><span>No one knows how things are going to turn out in the next five to ten years and few people are thinking about it actively. For us though, I favor Alan Kay&#8217;s quote (pardon the paraphrasing): &#8220;To accurately predict the future, invent it.&#8221; That&#8217;s what we are doing. In the short term, there will be plenty of experimentation in the industry and a lot of model testing.</span></p>
<p><strong>Tish: </strong>Do you think, though, Wave protocols might be useful as at least part of the picture for AR standards? As you point out, open standards and open protocols are going to be vital for shared experiences of AR. Is it important to build off existing protocols to get the ball rolling, and what do you see as being the important early protocols for AR?</p>
<p><strong>Robert:</strong> I think for now, we will use a lot of existing protocols for communications and whatnot, as well as the usual standards for things like 3D models, animation, and so forth. This is only natural. However, as the industry and technology evolves, we will need entirely new ones. As far as I know there is no existing market standard for anything like the Holographic Doctor from Star Trek Voyager, and that type of thing is definitely in the pipeline for the future (sooner than you would think).</p>
<p><strong>Tish:</strong> All the excitement at the arrival of the browser-like mobile reality developments has been really great &#8211; I feel people are getting a taste for what it means to compute with anyone/anything, anywhere, and anytime.</p>
<p>Wikitude started the ball rolling. And with Wikitude.me it is the first to support user generated content. Now there are Layar and Sekai Camera also. But as you mentioned to me in an earlier chat, with Layar and Wikitude opening up, &#8220;there are probably a half dozen other apps coming out in short order with similar functionality (even the AR twitter thing has some similarities).&#8221;</p>
<p>What has been most exciting to you about these developments up to this point? What will these apps/platforms need to do to stand out in a crowd? Up to now, these browser-like AR experiences do nothing with close-by objects. Do you see &#8220;world browsers&#8221; with near object recognition coming out in the near future? Could Wikitude do this with an integration of SRengine or Imagination?</p>
<p><strong>Robert:</strong> Yes, Wikitude<span> or Layar could do this (integrate with something else for &#8220;near&#8221; AR) and it would be a step in the right direction. Tagging things in the real world is the basic functionality that will grow from text tags to photos, videos, 3D objects, and all sorts of other types of data and meta data. This gets really fun when that data is generated by the object itself. First is just giving people the ability to tag something and share that tag with their friends, everything else grows from that. This sort of functionality is probably the most exciting in terms of near future advancement.</span></p>
<p><span>However, I think the idea of a stand-alone</span> browser platform is a bit awkward&#8230;unless you also consider Firefox a website browser platform. After all, you can create widgets (applications) for it. Anyway, the point is having access to the same data&#8230;if you put three people in a room, one for each browser, they should see and experience the same content, although the interface might be different (based on which browser and of course which hardware they are using). This means there needs to be some communication between whatever servers they are storing their data on (meaning, user tags) and some standard for how those tags are created.</p>
<p>Of course, if all they are doing is grabbing the GPS coordinates of the nearest subway station and telling you how far it is and in what direction, then they should all be able to see the same thing, regardless of the platform. But then, that isn&#8217;t really interesting, is it? I could get the same info on a laptop with Google Maps.</p>
<p>This is part of the problem right now though&#8230;no one seems to be thinking about the bigger picture much. All of the effort is either on making the next cool ad campaign for a car or a movie, or creating a tool to tell you where the nearest thingamajig is, but in a really cool fashion on a mobile device.</p>
<p>No one is talking much about filtering data, privilege systems, standards, third party tools, interoperability, and so on. There is also little conversation about where hardware is going. Right now everyone is developing software based on what hardware is available. This needs to change so that hardware is developed to take advantage of new software coming out (this happened in the PC industry a while back, and growth accelerated dramatically).</p>
<p>These are some of the reasons why I led the effort to start the AR Consortium. We brought CEOs from 8 different AR companies and startups together to start talking about these issues. We are still getting organized and have plans to expand the membership to other companies, but we want to do this right and we aren&#8217;t rushing things. The important thing is that we have started and there is at least a line of communication open now, where there wasn&#8217;t before.</p>
<p>I would expect to see the early movers expanding what they offer very soon, and they will probably lead the way in the short term. Definitely keep an eye on the companies involved in the AR Consortium. There are lots of very smart and motivated people there, and they are far ahead of all the experimental dabbling in AR we are beginning to see on YouTube, Twitter, and elsewhere.</p>
<p><strong>Tish: </strong>When we had a discussion earlier about the basics of an AR platform and an AR browser, you talked about the difference between tools, a platform, and an AR browser &#8211; like Wikitude and Layar &#8211; which should be about features/functionality, e.g. to create treasure hunts, AR geocaching, invisible AR yellow sticky notes you can leave at restaurants you don&#8217;t like, etc. You also noted it should let you explore (browse) multiple formats and open content for AR &#8211; any data, information, or media that is linked to something in the real world, and the visualization of/interaction with the same.</p>
<p>Wikitude<span> is a stepping stone to a true browser by your definition. But are we also seeing what you would define as an AR platform emerging &#8211; Unifeye, Wikitude (you can recap your definition if you like too)?</span></p>
<p>I think Wikitude hopes to provide the Lego blocks for augmented reality readers, browsers, applications, tools, and platforms?</p>
<p><strong>Robert:</strong> I expect some segmentation among the various AR companies that are out now, as they find their individual strengths and focus on them. Some will emphasize the client software (the browser), others will develop robust tools for creating content, SDKs/APIs will advance and facilitate rapid development of applications, etc. Neogence is ultimately working on the glue in the middle that ties everything together, makes it massively multiuser, persistent, and ubiquitous. Things like Unity3D have the potential to fill a need in the middleware space.</p>
<p><strong>Tish:</strong> I know <a href="http://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/" target="_blank">Blair MacIntyre</a> (see my interview with Blair here) and others are using Unity3D as an AR client. Could Unity3D become increasingly important?</p>
<p><strong>Robert:</strong> It has the potential to become a favored middleware for providing the rendering layer. It already works nicely in regular browsers, and on several mobile platforms. Why code all the graphics rendering stuff from scratch when you can just license something and extend its features with AR functionality?</p>
<p><strong>Tish:</strong> Now to ask your own question back to you! There seems to be a lot of reason to think that, eventually, there will be the kind of access to the iphone video API that augmented reality really requires, and by that I mean more than we will get with OS 3.1, which is rumored to deliver only about half of what we really need for AR on the iphone &#8211; &#8220;not truly useful when you want to align video with graphics.&#8221; So:</p>
<p><em>&#8220;The iphone&#8230;future or failure? Seemingly anti-developer stance regarding augmented reality, and only a sliver of the global market share. Are we letting the short term glitz of Apple and the iPhone fad pull us in the wrong direction? Shouldn&#8217;t we be focusing on symbian devices that have the lion&#8217;s share of the market? Or should we be looking more at either other OSs (winmobile, android), or not at all, and trying to create a new platform that is more MID and less smart phone with a hardware partner?&#8221;</em></p>
<p><strong>Robert:</strong> Apple and the iphone are a bit problematic right now. There is no way I can go to a venture capitalist (at least in North America) and say, hey, we are building awesome AR applications for winmobile or symbian&#8230;they would either laugh or they simply wouldn&#8217;t get it. There is this false perception that the iphone is the ultimate mobile device, that it is the sexiest, and the only thing that people want. Everyone wants a demo on the iphone, the media is mostly interested in iphone developments, and the apple fanatic market couldn&#8217;t give a fig about other devices. Other devices may have a larger market share or even better hardware, but we have to focus on the iphone right now, at least in the demo stage, to get any market attention and traction worth the time and effort.</p>
<p>In the future though, unless Apple changes its stance with their SDK and APIs, and starts adding hardware that is key for mobile AR (beyond what is there now), the market will move on without them. <span>This is a really easy decision to make given Apple&#8217;s draconian policies and the fact that their percentage of the global market is miniscule. The smart companies are looking at the whole picture and not putting all of their eggs in the Apple basket.</span></p>
<p>Of course, once the wearable displays are commercially viable everything changes. Wearable computers with small screens or even no screens are going to be what everyone wants. The interface will go from handheld touch screens to virtual holographic interfaces that you interact with using your bare hands.</p>
<p>So for now, <span>(the immediate short term), </span>it&#8217;s all about the iphone. Taking mobile ubiquitous AR to the global market and building for the future will be based on something else. Hardware risks becoming a commodity or a closed platform. Do you really want to buy the Apple iGlasses and only see AR content that is compatible, where your best friend has a pair of WinGlasses and sees something entirely different? No. The hardware, and the client software (what people are calling the ar browser now) will become common and it won&#8217;t matter what brand you use, they will all be accessing the same content.</p>
<p>But at least for the foreseeable future, we are building software for specific hardware, and the sexiest mobile on the block is the iphone. The second someone comes out with something much better and the paradigm shifts (software driving hardware instead of vice versa), everything changes.</p>
<p><strong>Tish:</strong> How is the quest for sexy AR eyewear going? I know we were checking out <a href="http://www.masunaga1905.jp/brand/teleglass/" target="_blank">the Japanese eyewear</a> with Adam Johnson from <a href="http://genkii.com/" target="_blank">Genkii</a> just now. For the Neogence project &#8211; as you are going for a fully developed model of AR, doesn&#8217;t this necessitate going beyond the iphone and getting the hardware companies moving on the eyewear?</p>
<p><strong>Robert:</strong> The guys making wearable displays really need to get off the pot and stop paying lip service to mobile AR. If they don&#8217;t do something quick, I,<span> and others, are</span> going to be scouring the planet looking for someone capable of building the lightweight, stylish wearable displays with transparent lenses we are begging for. We aren&#8217;t going to be waiting around for hardware anymore. The AR Pandora&#8217;s box has been opened. I should note that many of us (AR Consortium members) have had less than pleasant experiences or communications with the half dozen or so companies that are making wearable displays. Either their visual design is terrible, the materials feel flimsy, the field of view is limited, or the companies are preoccupied with other business and government contracts. Any attention to the growing AR market is an afterthought and in a few cases condescending. AR is going to be a billion dollar industry in a very short time, and these guys are just leaving money on the table. If they were smart, they would be begging the CEOs from the AR Consortium to fly out to their offices and collaborate on building a pair of wicked sick glasses. The smart phone manufacturers should be doing the same thing, but I have to say that they at least seem to have some ambition and zeal to create better devices, so I can&#8217;t really complain too much there.</p>
<p>Anyway, to answer the rest of your question, we have to assume that the hardware guys, especially regarding the eyewear, are going to take a long time to develop and release the things we need for the ultimate AR experience. So, our goal is to start building things now for what is available. That means scaling things down and handicapping what AR can do, so it works on the &#8220;sexy&#8221; iphone. The important thing though is to start creating applications -now- so when the glasses are commercially available, there will be a wealth of content for people to access and use on day one.</p>
<p>As long as Apple isn&#8217;t playing nice,<span> </span>it is going to hurt everyone. <span>Is it any surprise that they shut down Google Voice? </span> There is a huge opportunity for someone to step up and leapfrog the rest of the industry. Give us the hardware and we will create amazing software for it. Don&#8217;t compete with the iphone, surpass it.</p>
<p><strong>Tish: </strong>What is the state of play of current AR technology and toolkits?</p>
<p><strong>Robert:</strong> The current crop of AR technology and toolkits is absolutely critical for this stage of the industry, and everyone should be leveraging it as much as possible. I talk down marker and image based tracking a lot, but I also like to point out that it is the necessary baseline that the industry is going to be built on. The problem is that there is only so much you can do with marker driven apps, and as creative people and marketing types start conceptualizing about all sorts of cool stuff for the future, they risk setting the expectations too high. It is one thing to show someone the future, it is another to say this is the future and it&#8217;s happening right now. This is why I cringe every time I see a conceptual video presented as &#8220;our product DOES this&#8221; instead of &#8220;our product WILL DO this.&#8221; <span>Something that simple can still cause the butterfly effect of raising expectations too high and contribute to overhyping.</span></p>
<p><strong>Tish: </strong>One of the things that seems very exciting about the new <a href="http://ogmento.com/" target="_blank">Ogmento</a> partnership is that experienced content producers <a id="squu" title="Brad Foxhoven" href="http://www.blockade.com.nyud.net:8080/about/about-blockade" target="_blank">Brad Foxhoven</a> and <a id="odvk" title="Brian Seizer" href="http://brianselzer.com/">Brian Selzer</a> from <a id="xow_" title="Blockade" href="http://www.blockade.com/" target="_blank">Blockade</a> are now taking a leading role in AR. What are the most exciting directions for content that you see emerging for AR in the next 12 months?</p>
<p><strong>Robert:</strong> Virtual (well, augmented) pets, and multiuser mobile AR games (2-4 people) are probably going to lead in the next 12 months for content. Easy, accessible, engaging.</p>
<p><strong>Tish: </strong>And are you at Neogence also involved in content partnerships?</p>
<p><strong>Robert:</strong> Yes, we are in the process of finalizing some content partnerships with an eye for long term relationships. We are specifically looking for partners that want to find substantive ways to leverage AR technology, and not use it as a superficial gimmick or attraction that wears off after five minutes. I&#8217;m still cringing over the Procter &amp; Gamble Always campaign with AR.</p>
<p><strong>Tish:</strong> So back to your observation about some of the tricky problems regarding creating a true global, massively multiuser, ubiquitous, mobile AR platform &#8211; what are some of the main obstacles to this mission, in your view (aside from getting investment!)?</p>
<p><strong>Robert:</strong> Trying to explain it to people. The technical problems we can handle or have already solved. But trying to communicate what exactly we are doing is still tough. Not because it is overly complicated, but rather because it is so new and different. People are having a hard time grasping augmented reality beyond marker/webcam.</p>
<p><strong>Tish: </strong>Which AR tools are most important right now?</p>
<p><strong>Robert:</strong> Content is critical right now to show what the technology is capable of and to continue building the presence of augmented reality in the public mind. The big benefit of integrated/unified platforms now is speed of development for content. I think that the Flash ARToolKit + Papervision combination is rocking the planet right now. It is accessible, easy to learn, and lets people create something very quickly. More tools and middleware are coming out, and this increases options for designers and developers.</p>
<p><strong>Tish: </strong>What are your favorite papervision apps?</p>
<p><strong>Robert: </strong>Hrm, I don&#8217;t have a favorite papervision app just yet, although I think the tech is solid. I expect to see a lot of stuff built on that platform in the near future, especially as more ad agencies get on the bandwagon and start telling their IT guys to learn how to program flash so they can make something. Have you seen www.ronaldchevalier.com? Not so much for the actual AR stuff, but because the whole thing is just brilliant. It&#8217;s exactly like something a cult-figure spiritual guru would do with AR. I wish I had thought of it first, actually. This is probably one of the best -seamless- implementations of AR in marketing, where it fits&#8230;it isn&#8217;t just jammed in there for the sake of saying they used AR.</p>
<p><strong>Tish:</strong> Do you think Apple is going to open the iphone to the full potential of augmented reality anytime soon? A lot of expectations have been raised.</p>
<p><strong>Robert:</strong> Apple is like that guy who has a party at his house and owns this really awesome state of the art home theater in his basement, but makes everyone watch a movie in the living room on a regular TV with a VCR.</p>
<p>They need to get over themselves and quit being a wet blanket. Otherwise, we are taking the beer and pizza we brought, and going to someone else&#8217;s house. <span>Sorry, the Apple thing is a bit of a sore point with me.</span></p>
<p><strong>Tish:</strong> But will people leave all that candy and soda at the appstore?</p>
<p><strong>Robert:</strong> I tell you what though, there is an opportunity for certain mobile phone manufacturers to give me a call and start talking to Neogence and the other members of the Consortium. We have some ideas and specs that could have a radical impact on the mobile market and stuff the iphone in a box. Hint hint.</p>
<p><strong>Tish:</strong> So what is your vision for the AR Consortium? I know it kicked off with a letter to Apple about the video API. What is the next step? There was a lot of hope that this year would be big for MIDs, but this really hasn&#8217;t happened yet &#8211; do you think there is hope for a MID take-off despite the lousy economy?</p>
<p><strong>Robert: </strong>MIDs? No, not yet. Smart phones are too lucrative and too hot. It isn&#8217;t time yet for the MID to go mainstream. For that to happen, there needs to be a driving need (cough, ubiquitous AR, cough).</p>
<p>The AR consortium is mostly an informal affiliation. I expect that representatives from each member will probably meet at every significant conference to catch up over drinks. We are also going to be planning for our own members conference at least once a year. That will happen after we expand the membership though.</p>
<p>The main idea behind the consortium though was to open up a channel of communication between the CEOs so we could work together on standards, solving problems, collaborating, forming some partnerships, and using the collective to bang on the doors of companies like Apple and others. There is power in a group.</p>
<p><strong>Tish:</strong> You mentioned there is a whole long conversation we can have about getting the eyewear. As you point out, true AR eyewear changes everything. Can you give a little road map of where this has to go?</p>
<p><strong>Robert: </strong>There are essentially four or five main approaches, depending on whether you make the lenses special or leave them just plain. You would normally want them to be plain so people with prescription lenses wouldn&#8217;t have problems and would have the option to switch them out. Some types use a more prismatic approach for top-down projection, or a corner piece mounts lasers and bounces them off the lens into the eye. Another approach is embedding OLEDs or something else into the lenses themselves.</p>
<p>I really like the <a href="http://www.lumus-optical.com/" target="_blank">Lumus</a> approach, but their product design isn&#8217;t quite there yet. If the wearables don&#8217;t look cool, people won&#8217;t use them. To be honest, if I had the money, I&#8217;d probably ask the Art Lebedev guys to design them based on someone else&#8217;s optical engineering. They designed the <a href="http://www.artlebedev.com/everything/optimus/" target="_blank">optimus maximus</a> OLED keyboard&#8230; brilliant industrial designers, loaded with engineers too. If these guys couldn&#8217;t build the glasses and make them look damn bad ass, I&#8217;d be shocked. Heck, I bet they could build the next gen MID while they were at it.</p>
<p><strong>Tish: </strong>Getting the hardware innovation and software innovation feeding into each other would be really great.</p>
<p><strong>Robert</strong>: Absolutely.</p>
<p><strong>Tish</strong>: That would push the eyewear forward too wouldn&#8217;t it?</p>
<p><strong>Robert:</strong> All it takes is one, and then the competitive landscape would fire right up.</p>
<p><strong>Tish:</strong> What applications would the accurate gps enable?</p>
<p><strong>Robert:</strong> Everything. For example, if you know exactly where the phone is and where it is facing, that means you can put it on a table and hit a button, then move it somewhere else and do the same thing. In a few minutes, you have a nearly accurate &#8220;mental&#8221; model of the whole place. Now you go back and start dropping virtual flower pots everywhere.</p>
<p>This is one area where I think the smart phone guys are missing the boat and taking the cheap route. It is possible to have very accurate GPS (down to a six inch area) with better chips and firmware, but it is cheaper to stick in old tech. Most apps today don&#8217;t need that hyper accuracy, so they aren&#8217;t bothering. Mobile AR though, that&#8217;s a different story.</p>
<p>With that level of accuracy, you would know exactly where the mobile device is, so all you would need to know is the direction it is facing (orientation), and you could solve one of the problems with registering exactly where 3D objects and augmented media are (it is more complicated than I am describing it, but we don&#8217;t need to get into that much detail here). You wouldn&#8217;t need markers anymore.</p>
<p><strong>Tish: </strong>Isn&#8217;t Wikitude doing this with Wikitude.me, their tagging app?</p>
<p><strong>Robert:</strong> Not really. That type of approach is on a very large scale, using the accelerometers, compass, and GPS to determine where you are and what is in the distance. They (and others like Layar) don&#8217;t handle &#8220;near&#8221; AR. They effectively poll your GPS, then check a database to see what is nearby and at what degree/distance it is, and then they draw a representation on the screen. They don&#8217;t even need a mobile device&#8217;s camera at all.</p>
<p>Even if they did things up close, its still based on finding landmarks or on things that are broadcasting their location. For example, if they were standing near me, they might get &#8220;robert, 37 degrees, 15 meters away&#8221; but they wouldn&#8217;t be tracking me exactly as I walk around or have the ability to overlay graphics on ME.</p>
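<p>The poll-GPS-then-draw approach described above boils down to a distance-and-bearing computation between the device and each tagged point. A minimal sketch (illustrative only, not any browser&#8217;s actual code):</p>

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (meters) and initial bearing (degrees, 0-360)
    from the device at point 1 to a tagged point 2, via the haversine formula."""
    R = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    y = math.sin(dlam) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing

def overlay_offset(device_heading, bearing):
    """Signed angle (degrees, -180..180) between where the device points and
    the tag; the browser draws the tag this far left/right of screen center."""
    return ((bearing - device_heading + 180) % 360) - 180
```

<p>A tag whose offset falls within the camera&#8217;s horizontal field of view gets drawn at the corresponding on-screen position, labelled with its distance; everything else is culled &#8211; which is why no camera input is actually required.</p>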
<p><strong>Tish:</strong> I retweeted your <a title="#ar" href="http://twitter.com/search?q=%23ar">#ar</a> marketing using ARToolkit + flash (markers/webcams) = Photoshop pagecurl &lt;six months. Bad design kills innovation. I know you like <a href="http://ronaldchevalier.com/" target="_blank">Dr Chevalier</a> though! What are some of the other AR marketing projects that you like? What would you like to see in terms of innovation in the next 6 months?</p>
<p><strong>Robert:</strong> The marker/webcam approach is already becoming overused and clich&#233; (tremendously fast). Older readers will remember the ubiquitous Photoshop page curl that adorned nearly every website and graphic on the internet back in the day. It was horrible. Yes, the Dr. Chevalier stuff cracks me up.</p>
<p>I want to see some big companies or ad agencies really try to do something different with AR, preferably mobile. Take some risks, do something different. Don&#8217;t follow the crowd. Innovation? I want to see some wearable displays with transparent lenses, I want a mobile device specifically designed for ubiquitous AR, I want to see some experimenting with AR in the green tech sector, and I&#8217;d like to see someone get that GiFi wireless technology from that researcher in Australia and jam it into a smart mobile. I would also like my flying car and lunar vacation now, thank you. It is almost 2010 and no one has found that black obelisk yet.</p>
<p><strong>Tish:</strong> So a few closing thoughts! What do you see as the next big thing? Hopes for the AR Consortium? Biggest obstacle for commercial AR? And what is the coolest thing you have seen this year?!</p>
<p><strong>Robert:</strong> The next big thing is what I&#8217;m working on hahaha. I hope the AR Consortium will grow and be the active catalyst in making AR mainstream, practical, and world changing.</p>
<p>The biggest obstacle is making sure that the right funding finds the right developers to develop the right technology and create kick ass applications.</p>
<p>The coolest thing I&#8217;ve seen this year would probably be <a href="http://vimeo.com/5595869" target="_blank">the facade projection stuff</a> (see below). Now, imagine that, but without the projector. That&#8217;s part of what I envision for AR in the future.</p>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="400" height="225" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=5595869&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /><embed type="application/x-shockwave-flash" width="400" height="225" src="http://vimeo.com/moogaloop.swf?clip_id=5595869&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2009/08/03/augmented-reality-bigger-than-the-web-second-interview-with-robert-rice-from-neogence-enterprises/feed/</wfw:commentRss>
		<slash:comments>20</slash:comments>
		</item>
		<item>
		<title>Mobile Augmented Reality and Mirror Worlds: Talking with Blair MacIntyre</title>
		<link>https://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/</link>
		<comments>https://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/#comments</comments>
		<pubDate>Fri, 12 Jun 2009 05:07:01 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[3D mirror world]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Android and augmented reality]]></category>
		<category><![CDATA[ARhrrrr]]></category>
		<category><![CDATA[Art of Defense]]></category>
		<category><![CDATA[augmented reality on the gphone]]></category>
		<category><![CDATA[augmented reality on the iphone]]></category>
		<category><![CDATA[augmented reality shooter games]]></category>
		<category><![CDATA[Aware Home Research]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bragfish]]></category>
		<category><![CDATA[Dark Star]]></category>
		<category><![CDATA[geolocation]]></category>
		<category><![CDATA[geotagging]]></category>
		<category><![CDATA[google earth]]></category>
		<category><![CDATA[handheld AR games]]></category>
		<category><![CDATA[handheld augmented reality]]></category>
		<category><![CDATA[Immersive augmented reality]]></category>
		<category><![CDATA[Information Landscapes]]></category>
		<category><![CDATA[instrumented homes]]></category>
		<category><![CDATA[instrumented world]]></category>
		<category><![CDATA[iphone 3Gs]]></category>
		<category><![CDATA[iphone games]]></category>
		<category><![CDATA[ISMAR]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[location aware applications]]></category>
		<category><![CDATA[minimally immersive augmented reality]]></category>
		<category><![CDATA[MMO of the real world]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[MS Virtual Earth]]></category>
		<category><![CDATA[NVidia Tegra devkits]]></category>
		<category><![CDATA[Open Sim]]></category>
		<category><![CDATA[OpenSim and Augmented Reality]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[outdoor tracking and markerless AR]]></category>
		<category><![CDATA[parallel mirror worlds]]></category>
		<category><![CDATA[persistent immersive mirror worlds]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[Sun's Wonderland]]></category>
		<category><![CDATA[Texas Instrument's OMAP3 devkits]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[Unity3D]]></category>
		<category><![CDATA[Unity3D and Augmented Reality]]></category>
		<category><![CDATA[virtual pets]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3691</guid>
		<description><![CDATA[Blair MacIntyre is one of the original pioneers of augmented reality and an extraordinary amount of creative work is coming out of his Augmented Environments Laboratory at Georgia Tech &#8211; see YouTube videos here. The screenshot below is from ARhrrrr, a very impressive augmented reality shooter game created at Georgia Tech Augmented Environments Lab and [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf.jpg"></a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf2.jpg"><img class="alignnone size-full wp-image-3732" title="arf2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf2.jpg" alt="arf2" width="259" height="239" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/droppedimage1.jpg"><img class="alignnone size-full wp-image-3725" title="droppedimage1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/droppedimage1.jpg" alt="droppedimage1" width="271" height="240" /></a></p>
<p><a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> is one of the original pioneers of augmented reality, and an extraordinary amount of creative work is coming out of his <a href="http://www.cc.gatech.edu/ael/" target="_blank">Augmented Environments Laboratory</a> at Georgia Tech &#8211; see <a href="http://www.youtube.com/user/AELatGT" target="_blank">YouTube videos here</a>. The screenshot below is from <strong>ARhrrrr</strong>, a very impressive augmented reality shooter game created at the Georgia Tech <span class="description">Augmented Environments Lab </span>and <span class="description"> Savannah College of Art and Design </span>(SCAD-Atlanta), and produced on the <strong>NVidia Tegra devkits</strong> &#8211; <a href="http://www.youtube.com/watch?v=cNu4CluFOcw" target="_blank">watch the demo here</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-63.png"><img class="alignnone size-medium wp-image-3799" title="picture-63" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-63-300x169.png" alt="picture-63" width="300" height="169" /></a></p>
<p>Blair has spent much of his career working on immersive augmented reality and more recently the integration of augmented reality with mirror worlds. Blair explains:</p>
<p><strong>&#8220;</strong><strong>I am interested in the intersection of mobile devices &#8211; whether they are head mounts or handhelds &#8211; and parallel mirror worlds&#8230;I think that parallel mirror worlds are a direct manifestation of the intersection of the virtual world we now live in (the web) and geotagging. As more and more information is tied to place, and as more of our searching becomes place-based, we will want to do those searches about places we are not at. A 3D mirror world may provide one interface to that data. Want to plan your trip to London? Go there virtually and look around, see what is there (both physically and virtually), teleport between areas you want to learn about, and so on. More interestingly, talk to people who are there now, and retrieve your location-based notes when you are on your trip.&#8221;</strong></p>
<p>But, at a time when many augmented reality developers are focusing on AR apps for smart phones, including Blair (the picture on the left opening this post is Blair&#8217;s augmented reality <a href="http://www.youtube.com/watch?v=_0bitKDKdg0&amp;feature=channel_page" target="_blank">iPhone app ARf</a>), I was interested in finding out from Blair what the state of play was for the real deal, Rainbow&#8217;s End style AR, as well as the potential he sees in smart phones to mediate meaningful AR experiences.</p>
<p>There is an enormous amount of innovation in mapping our world; see my post, <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">&#8220;Location Becomes Oxygen at Where 2.0 and WhereCamp,&#8221;</a> and <a href="http://gamesalfresco.com/2009/05/26/where-2-0-the-world-is-mapped-now-use-it-to-augmented-our-reality/" target="_blank">Ori Inbar&#8217;s Where 2.0 conference roundup.</a> But as Ori notes, to move augmented reality forward:</p>
<p><strong>My point is not a shocker: all we need is to tap into this information and bring it, in context, into people&#8217;s field of view.</strong></p>
<p>And this is what Blair MacIntyre&#8217;s work is all about.</p>
<h3>Talking With Blair MacIntyre</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-62.png"><img class="alignnone size-medium wp-image-3728" title="picture-62" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-62-300x257.png" alt="picture-62" width="300" height="257" /></a></p>
<p><strong>Tish Shute:</strong> There do seem to be broader implications to augmented reality today than when this term was first coined. I am interested to have your perspective on how augmented reality may go beyond some of our early definitions?</p>
<p><strong>Blair MacIntyre: I still think the original definition of the term is useful: media (typically graphics) tightly registered (aligned) with the physical world, in real time. Many people talk about many things that relate virtual worlds to places, spaces, objects and people. There is room for many of them, and they don&#8217;t all have to &#8220;be&#8221; augmented reality. I like using Milgram&#8217;s definition of Mixed Reality as everything from the physical world (at one end) to the virtual world at the other; it&#8217;s a spectrum, and augmented reality just sits at one point.</strong></p>
<p><strong>The reason I like the old definition is I believe there is something special about graphics that are tightly, rigidly aligned with the physical world. When things appear to stick to the world in an obviously identifiable location, people can start leveraging their natural perceptual, physical and social abilities and interact with the mixed world as they do the physical world. We&#8217;ve found this with the two studies we&#8217;ve done of tabletop AR games (<a href="http://www.augmentedenvironments.org/lab/research/handheld-ar/artofdefense/" target="_blank">Art of Defense</a> and <a href="http://www.youtube.com/watch?v=w3iBrj_zfTM&amp;feature=channel_page" target="_blank">Bragfish</a>); one key to those games is that the graphics were tightly aligned with identifiable landmarks in the physical world (gameboard).</strong></p>
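The tight registration Blair describes ultimately reduces to re-estimating the camera pose every frame (from the gameboard, in these games) and re-projecting the graphics through it, so virtual content lands on the same physical landmark each frame. A minimal sketch of the projection step, in Python with hypothetical names (this is my own illustration, not code from the lab):

```python
def project_point(pt_cam, fx, fy, cx, cy):
    """Project a 3D point, already transformed into camera coordinates
    (metres, z pointing forward), onto the image plane of a pinhole
    camera with focal lengths fx, fy and principal point (cx, cy).
    Returns pixel coordinates, or None if the point is behind the camera."""
    x, y, z = pt_cam
    if z <= 0:
        return None  # behind the camera: nothing to draw
    return (cx + fx * x / z, cy + fy * y / z)

# A gameboard corner 2 m straight ahead lands at the principal point;
# re-running this every frame with the freshly tracked pose is what
# makes the overlay appear "stuck" to the board.
```

The AR part is entirely in keeping the camera-space coordinates current: if pose estimation lags or drifts, the projected graphics slide off the landmark and the illusion breaks.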
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/aod-sandbox-video-15.png"><img class="alignnone size-medium wp-image-3729" title="aod-sandbox-video-15" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/aod-sandbox-video-15-300x225.png" alt="aod-sandbox-video-15" width="300" height="225" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/imgp0782-2.jpg"><img class="alignnone size-medium wp-image-3782" title="imgp0782-2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/imgp0782-2-300x225.jpg" alt="imgp0782-2" width="300" height="225" /></a></p>
<p><em><a href="http://www.augmentedenvironments.org/lab/research/handheld-ar/artofdefense/" target="_blank">Art of Defense</a> (pic on left) <a href="http://www.youtube.com/watch?v=w3iBrj_zfTM&amp;feature=channel_page" target="_blank">Bragfish</a> (pic on right)<br />
</em></p>
<p><strong>Tish:</strong> I know that you are involved with <a id="b-c6" title="ISMAR 2009" href="http://www.ismar09.org/" target="_blank">ISMAR 2009</a>, which is the key US augmented reality conference. What do you think will be the hot themes, applications, innovations at this year&#8217;s conference? Do you think this will be the year that AR really breaks out of eye candy into truly useful and sustained experiences?</p>
<p><strong>Blair: Unfortunately, I won&#8217;t be involved this year. I was supposed to be helping run the technical program, as well as the art/media program, but sickness in my family prevented me from having the time, so I am not helping this year.</strong></p>
<p><strong>First, I would not agree with the implication of the last question &#8212; I don&#8217;t think AR has just been eye candy up to now. I do agree that the &#8220;high profile&#8221; uses of it have largely been that, which is mostly because of the limits of the technology. I don&#8217;t think we&#8217;ll see huge changes in that regard by ISMAR this year. However, we will hopefully see a mixing of communities that hasn&#8217;t happened at ISMAR before, and I do believe that this year (independent of ISMAR) we will see more and more AR apps. Whether they go beyond eye candy is still a question. I&#8217;m hoping that some folks (including myself and other ISMAR folks!) will help push AR in new directions. But I also expect many folks new to ISMAR and AR to play a big role, because it is this new blood, especially those folks with real problems to solve, new art and game ideas, and a fresh perspective, that will open new doors.</strong></p>
<p><strong>Tish:</strong> You have been working on integrating augmented reality with virtual worlds. You mentioned that the way you use <a href="https://lg3d-wonderland.dev.java.net/" target="_blank">Sun&#8217;s Wonderland</a> is really about pulling the virtual world into the real world, i.e., Wonderland &#8220;is just a place to put data.&#8221; How is your use of the persistent virtual space different from what we have become accustomed to call virtual worlds?</p>
<p><strong>Blair: The approach we are taking in our project at Georgia Tech is to use the virtual world as the central hub of the information space, and allow the virtual world to be the element that enables distributed workers to collaborate more smoothly. This is work we are doing with Sun and Steelcase (and the NSF), and is an outgrowth of a project (the InSpace project) that&#8217;s been going on for a few years.</strong></p>
<p><strong>What we are trying to do is use mixed reality and ubicomp techniques to pull as much of the physical activity into the virtual world, and then reflect that activity back out to the different participants as best suits their situation. So, folks in highly instrumented team rooms will collaborate in one way, and their activity will be reflected in the virtual world; remote participants (e.g., those at home, or in a cafe or hotel) may control their virtual presence in different ways, but the presence of all participants will be reflected back out to the other sides in analogous ways. We may see ghosts of participants at the interactive displays, or hear their voices in 3D space around us; everyone will hopefully be able to manipulate content on all displays and tell who is making those changes.</strong></p>
<p><strong>A secondary benefit, I hope, is that by putting the data in the virtual world and making that the place that gives you more powerful and flexible access to the data (e.g., by leveraging space and giving access to history), distributed teams will begin to have the virtual space become a place they go to work, bump into each other, and have those casual contacts co-located workers take for granted.</strong></p>
<p><strong><br />
</strong></p>
<h3><strong>Creating the Information Landscape of the Future</strong></h3>
<p><strong></strong></p>
<p><strong>Tish: </strong>At the end of <a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">my interview with Ori Inbar</a> he said, in order to have a ubiquitous experience <em>&#8220;you&#8217;ll need to 3D map the world. Google Earth like apps are going to help but it is not going to be sufficient. So let&#8217;s leverage people. Google became successful in part by making people work with them. Each time you create a link from your blog to my blog their search engines learn from it. So let&#8217;s find ways to make people create information that can be used for AR.&#8221;</em> What ways do you think people can create information that can be used for AR?</p>
<p><strong>Blair: I think the big part of that is the creation of models and environments, the necessary &#8220;baseline&#8221; for specifying experiences. Google and Microsoft are clearly working toward this; recent videos from Microsoft show them starting to move the Photosynth work toward Virtual Earth. Similarly, I came across a page where people are finally starting to mine geotagged Flickr images to create models [see my post, <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">&#8220;Location Becomes Oxygen,&#8221;</a> and <a href="http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/" target="_blank">here</a> for more on the <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a> project from Flickr]. It&#8217;s that kind of thing that will be useful first; using the data we all create to enable modeling and (eventually) vision-based tracking in the real world.</strong></p>
<p><strong>After that, it&#8217;s a matter of time till more of what we &#8220;create&#8221; (e.g., Tweets and blog posts and so on) is all geo-referenced; these will become the information landscape of the future, the kinds of things people think about when they read &#8220;Rainbow&#8217;s End&#8221;. The big problem will be filtering, searching and sorting. And, of course, safety and security.</strong></p>
<p><strong>Tish: </strong>You are working with <a href="http://unity3d.com/" target="_blank">Unity3D</a> to research the integration of mobile location based AR with persistent mirror world like spaces. What has attracted you to Unity? What is the difference between this and your Wonderland project? I know you mentioned you will be using head-mounted displays as part of this Unity project. What are your goals for this project?</p>
<p><strong>Blair:</strong> <strong>We started to use <a href="http://unity3d.com/" target="_blank">Unity3D</a> because it gave us what we wanted in a game engine. Most importantly, it&#8217;s very open and let us trivially expose AR technologies into the editor. Similarly, it can target the iPhone, so we can begin to work with it on that platform, too. The biggest problem with creating compelling experiences is content; and a show stopper for creating content is not getting it into your engine. Unity has a nice content workflow.</strong></p>
<p><strong>Unity3D is a front end engine, for creating the game; Wonderland is both a front end and a backend. We are actually looking into using the Wonderland backend with Unity as well. Wonderland also has growing support for doing &#8220;real work&#8221; in a virtual world, which is key to our other projects.</strong></p>
<p><strong>Eventually, we&#8217;ll be using HMDs. The goal for the Unity3D project, initially, was to explore what you can do with an AR/VR mirror-world; this is a project we are working on with Alcatel-Lucent, and demo&#8217;d at CTIA this year. It&#8217;s continuing to grow, though, and now includes a number of our projects, including some work on mobile social AR and soon, some performance and experience design projects in the area of AR ARGs. It&#8217;s really quite interesting to imagine what you can do when you have an &#8220;MMO of the real world&#8221; (which we now have for part of campus) that supports both VR-style desktop access simultaneously with mobile AR access.</strong></p>
<p><strong>Tish: </strong>Have you taken another look at <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a> as a possible backend for augmented reality? Recently I talked to David Levine at IBM, and he is thinking about some possibilities to optimize OpenSim to dynamically load a large number of objects at once (i.e., how fast OpenSim can bulk load into an existing sim) and make it better suited to augmented reality/mirror world type projects.</p>
<p><strong>Blair: I haven&#8217;t looked at OpenSim recently. We will probably look at it this summer.</strong></p>
<p><strong>Tish:</strong> Why did you select Unity as a good client for augmented reality?</p>
<p><strong>Blair: Unity is a 3D game authoring environment, so at some level it is no different from using Ogre, if all the associated stuff were just as well done. It has integrated physics, scripting, debugging, etc. &#8211; you can write code in JavaScript or C# or whatever. It has a good content pipeline, as well, and supports a range of platforms.</strong></p>
<p><strong>It has simple networking built in, so multiple Unity engines can talk to each other, but it is not a virtual world platform out of the box &#8211; there is no back end&#8230;</strong></p>
<p><strong>Tish: </strong>Someone described Unity to me as a great client waiting for a great backend? So what are you going to use as a back end?</p>
<p><strong>Blair: There is no real processing except in the client right now. We will eventually have to create a back end. We are thinking of using Darkstar, because someone on the Sun Wonderland community forums has already built a set of scripts connecting Unity to Darkstar.</strong></p>
<p><strong>But for us, we are not proposing right now to build a real product. This is research to demonstrate what you could do if you actually had the back end.</strong></p>
<p><strong>Tish:</strong> What are the most important aspects of the backend from your POV?</p>
<p><strong>Blair: We want to simulate a variety of the interesting aspects of the back end. So I very much care about notions of privacy and security and how these sorts of AR/VR Mirror Worlds would work in practice. But I care about those things as they impact user experience, not really about how we would implement them.</strong></p>
<p><strong>Tish:</strong> So you are looking at some of the big problems from the perspective of user experience? Are we going to go through the same growing pains that the web and VWs have seen? For example, will we have to type in passwords to get into everyone&#8217;s little worlds&#8230;</p>
<p><strong>Blair: Well you know the SciFi background to this; you&#8217;ve mentioned it in other posts on your blog. Because when you look at the Rainbow&#8217;s End model, where you have security certificates flying around, that is in effect what cookies and so on are now. You can authenticate yourself once and then have those certificates hang around. So you can easily imagine how it could be done. But the big question is how does that change user experience. There are all kinds of things that start coming into play &#8211; like what happens if nearby people see different things &#8211; it goes on and on!</strong></p>
<p><strong>Tish:</strong> Sounds like this is very valuable research. It seems to me that there will be a lot of investment soon in putting the pieces together to do location based markerless AR, and it would be nice if we knew more about it from the user experience POV.</p>
<p>Isn&#8217;t it vital for a productive intersection between mobile AR and persistent mirror world spaces for us to have markerless AR? Aren&#8217;t we right at the beginning of people really saying, yeah, markerless AR is doable now? But it seems to me not many people are researching or working on fully immersive AR and its integration with mirror worlds?</p>
<p><strong>Blair: I think some of the AR community is thinking about this. There are probably people doing stuff in some other non-technical communities. It wouldn&#8217;t surprise me to find out that people in the digital performance or Ars Electronica world are thinking a little bit about these sorts of things, although not necessarily at the level of actually trying to build it, because they probably can&#8217;t right now. But experimenting with the precursors. My colleagues in digital media like to point out that this is often the purpose of digital art, to point out new directions and push the boundaries.</strong></p>
<p><strong>Obviously Science Fiction has explored the possibilities because that is what Rainbow&#8217;s End and the Matrix were all about.</strong></p>
<p><strong>Tish:</strong> and <a href="http://en.wikipedia.org/wiki/Denn%C5%8D_Coil" target="_blank">Denno Coil</a>&#8230;</p>
<p><strong>Blair: There has been some research &#8211; people like my adviser Steve Feiner up at Columbia, Mark Billinghurst in New Zealand, myself, and people at Graz University in Austria. But partly it has been so hard to do mobile AR up to now &#8211; so many people mock head-worn displays and can&#8217;t get past current technology &#8211; you have had to be willing to ignore the bulky backpacks and cables and batteries and so on. That is changing, which is good.</strong></p>
<p><strong>My current response to the anti-head-mounted-display people is: if 5 years ago you told me that fabulously dressed people who care about their looks and wear stylish clothes would have big things hanging from their ears that blink bright blue light, so they could talk on the phone, many of us would have said you were crazy, because it would be ugly and so on. But because there is an intersection of demonstrable need and benefit&#8230; Bluetooth headsets are really useful, and the sort of early gestalt feeling that grew up around them &#8211; that people who use them are so important that they always have to be in touch &#8211; means people accept them.</strong></p>
<p><strong>It will likely be a similar thing with head mounted displays. And I don&#8217;t know if it will be that people wear them so that they can read their mail while driving, god forbid. But it will be something. And when we get the 2nd generation of the wrap glasses that look more like sunglasses and are not bulky and so on, we will have the potential for them catching on, because you will look at them and you will think that the person is wearing them because they are doing x&#8230;</strong></p>
<p><strong>X might be surfing a virtual world or reading their email or keeping in touch, or being aware. It will happen. But they have to get unbulky enough, and there has to be more than one important application, not just watching TV.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/karmablair-fix.jpg"><img class="alignnone size-medium wp-image-3787" title="karmablair-fix" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/karmablair-fix-300x227.jpg" alt="karmablair-fix" width="300" height="227" /></a><br />
</strong></p>
<p><em>Picture above shows an outside view of the KARMA AR system; the knowledge-based maintenance system Blair built in his first year of grad school (<strong>&#8220;first AR system Steve Feiner, Doree Seligmann, and I worked on&#8221;</strong>). Blair noted, &#8220;<strong>The Communications of the ACM paper on it (from 1993) is a pretty widely cited AR paper.&#8221;</strong></em></p>
<p><strong>Tish:</strong> I think the need for full-on transparent, immersive, wraparound, Gucci-stylish eyewear with a decent field of view is the elephant in the room in terms of realizing the full potential of augmented reality. There are a few new players in the field &#8211; <a href="http://www.sbglabs.com/" target="_blank">Digilens</a>, <a href="http://www.vuzix.com/home/index.html" target="_blank">Vuzix</a>, others? What is the progress in this area and what do you hope for in terms of near term solutions?</p>
<p><strong>Blair: I agree with that sentiment. I think that, in the near term, there is a lot we can do with handhelds, as we&#8217;ve been doing in the lab. However, because it&#8217;s awkward and tiring to hold up a device, even a small one, for any length of time, handhelds will only be good for &#8220;focused&#8221; uses of AR, such as the table-top games we&#8217;ve been doing, or the constellation viewing app that I heard came out recently for the Android G1. I don&#8217;t even see something like Wikitude as that compelling (beyond the &#8220;gee whiz&#8221; factor) for a handheld form factor. Many proposed AR apps only really become compelling when users have constant awareness of them, and that requires a see-through head-worn display.</strong></p>
<p><strong>I&#8217;ve seen the mockups of the Vuzix ones; they seem pretty interesting, and are getting to where early adopters could use them (they will be cheap enough, and will hopefully be good enough). Microvision&#8217;s virtual retinal display is also promising; the contact lens displays will be the most interesting, if anyone can ever make them work. I don&#8217;t know of anything else out there.</strong></p>
<p><strong><br />
</strong></p>
<h3><strong>&#8220;it&#8217;s not really a killer app you care about, it is the killer existence that all of the technology and small applications taken together facilitate&#8221;</strong></h3>
<p><strong></strong></p>
<p><strong>Tish:</strong> While location based services are accepted now and people are understanding that it is something that opens up a new relationship to everything, we still haven&#8217;t found the experience that will get everyone holding up their mobile devices?</p>
<p><strong>Blair: Well that is actually the killer problem. Gregory Abowd is one of my colleagues who does ubiquitous computing research here at Tech. Way back when we started the Aware Home project (<a href="http://www.awarehome.gatech.edu/">Aware Home Research Institute at Georgia Tech</a>), when I first got here about ten years ago, there was always this question of what is the killer app. So Gregory commented in a meeting once that it&#8217;s not really a killer app you care about, it is the killer existence that all of the technology and small applications taken together facilitate. It is not that any one of these AR demos we see, whether it is seeing your photos in the world or whatever, is important. It&#8217;s that, when taken together, there is enough of a benefit that you would use the whole environment.</strong></p>
<p><strong>In the original context we were talking about an instrumented home, but it is the same thing here with AR.</strong></p>
<p><strong>The problem with the mobile phone as an AR device is that problem of awareness. If I have a head mount on and I walk down the street and there is a bunch of probably-not-useful-but-potentially-useful information floating by me, that&#8217;s a good thing, because I may see something that is useful or makes me think of something else. But if I have to hold up my phone to see if something might be interesting nearby, I will never hold up my phone, because at the time there is a high probability that there won&#8217;t be anything particularly important there. You might imagine you can get around this by using alerts or something like that, but then you overload whatever alert channel you use. For example, I forward maybe 5 or 6 people&#8217;s updates from Facebook to my phone &#8211; it started with my wife, a few friends, my brother &#8211; and the net result is that I never get SMSs anymore, because when my phone buzzes, I usually ignore it; it is probably just somebody&#8217;s random Facebook update. So if we start overloading channels like that with &#8220;oh, there might be something useful here in the real world; if you pick up the phone and look through it you will see it &#8230; and I will buzz you,&#8221; people just start ignoring the buzzes.</strong></p>
<p><strong>So it is a very hard problem if you think about the kinds of applications that people always imagine with global AR &#8212; names over people&#8217;s heads and other random information floating in the world &#8212; until you have a head mount and all that information is around you all the time. That is when those sorts of applications will actually happen.</strong></p>
<p><strong>Tish:</strong> <a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> notes: <strong>&#8220;AR is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.&#8221;</strong> (see my interview with Robert, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it &#8216;OMG Finally&#8217; for Augmented Reality?&#8221;</a>). And I think the iPhone experience has laid the foundation for the increasing desire to experience the network wherever we are &#8211; and not be stuck behind a PC. We cannot perhaps do all we want to do yet. But even in the range of things we can do now, we are not even sure exactly what it is we want to do where yet, are we?</p>
<p><strong><br />
</strong></p>
<h3><strong>&#8220;imagine your iPhone Facebook client supports AR and that all data on Facebook might be georeferenced &#8211; pictures, status updates etc.&#8230;&#8221;</strong></h3>
<p><strong></strong></p>
<p><strong>Blair: Yes, that is a huge problem. I have been lucky to be able to teach two fun classes this year that let the students and me start to explore some of the potential that handheld AR might bring. Last fall I taught a handheld AR game design class &#8212; coordinated with a class at the Savannah College of Art and Design&#8217;s Atlanta campus &#8212; and we had the students build a sequence of prototype handheld AR games, which was a lot of fun. This spring I taught a mixed reality/augmented reality design class with Jay Bolter (a professor in the School of Literature, Communication, and Culture here at GT). Jay and I have been teaching this class off and on for about 9 years; this semester we decided to say to the students &#8220;imagine your iPhone Facebook client supports AR and that all data on Facebook might be georeferenced &#8211; pictures, status updates etc.&#8230;&#8221; and have them do projects aimed at such an environment.</strong></p>
<p><strong>Tish: </strong>Not many of our favorite social media today have much sense of location, do they? But Flickr is utilizing geo-referenced pictures to create vernacular maps&#8230; The Shape of Alpha.</p>
<p><strong>Blair: Yes, that is because lots of cameras put geolocation data into the EXIF data, so they can extract it&#8230;</strong></p>
<p><strong>Some mobile Twitter clients, like the one I use on my iPhone, will let you add your location. But in general Facebook and other sites don&#8217;t have any notion of location. But if you look at all the things people do in Facebook, such as sending gifts and other games, it&#8217;s easy to imagine what these might look like with geo-reference data. So, the high-level project for the class is the groups have to design experiences people might have using mobile AR Facebook. We told them to assume Facebook as it stands now, but add geolocation and AR to the client. The class boiled down to &#8220;What would you imagine people doing?&#8221; So it has been kind of fun.</strong></p>
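To make the mechanics concrete: the EXIF geotags Blair mentions are stored as degree/minute/second rationals plus a hemisphere letter, and a geo-aware client spends most of its time converting those and asking "what is near me?". A minimal Python sketch of both steps, with hypothetical function names and item format (my own illustration, not the class's code):

```python
import math

def exif_gps_to_decimal(dms, ref):
    """Convert EXIF-style GPS (degrees, minutes, seconds) plus a
    hemisphere reference ('N'/'S'/'E'/'W') to signed decimal degrees."""
    deg, minutes, seconds = dms
    value = deg + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby(items, lat, lon, radius_m=100.0):
    """Return the geotagged items (dicts with 'lat'/'lon') within
    radius_m of the viewer's position."""
    return [it for it in items
            if haversine_m(lat, lon, it["lat"], it["lon"]) <= radius_m]
```

A mobile AR Facebook client of the kind the class imagined would run a query like `nearby(status_updates, my_lat, my_lon)` on every position update, then hand the survivors to the renderer.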
<p><strong>And we are using Unity for the class too &#8211; the same infrastructure I am working on in my research linking mobile AR to persistent immersive mirror world type spaces &#8211; and we are having the students mock up what a mobile AR Facebook experience would be like.</strong></p>
<p><strong>Tish: </strong>Can you describe some of the ideas your class came up with that you think have potential? I know Ori mentioned that from the games class he liked <a href="http://www.youtube.com/watch?v=Rqcp8hngdBw&amp;feature=channel_page" target="_blank">Candy Wars.</a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/candywars-6.png"><img class="alignnone size-medium wp-image-3693" title="candywars-6" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/candywars-6-300x225.png" alt="candywars-6" width="300" height="225" /></a></p>
<p><em>Candy Wars</em></p>
<p><strong>Blair: In the end, they had a nice range of projects in the Spring class. One created tag clouds out of status messages over spaces, others looked at analogies to virtual pets and gift giving out in the world, one looked at leveraging geolocation to help with crowd-sourced cultural translation, and three groups did straight-up social games.</strong></p>
<p><strong>[See <a href="http://www.youtube.com/user/AELatGT" target="_blank">all of the projects from the handheld AR games class on YouTube here</a>]</strong></p>
<p><strong><br />
</strong></p>
<h3><strong>iPhone, Android, NVidia Tegra devkits, or Texas Instruments&#8217; OMAP3 devkits?</strong></h3>
<p><strong>Tish:</strong> Is anyone in the class working on Android?</p>
<p><strong>Blair: Nobody is using Android because no one in the class has the phones. We have AT&amp;T microcell infrastructure on campus. Some AT&amp;T people joke that we are better off than they are, because we have a head office on campus, so we can build in the network applications which people even at AT&amp;T research can&#8217;t do. But because we have this infrastructure on campus, and a great relationship with AT&amp;T and the other sponsors, we have the ability to provision our own phones without having to pay for long-term contracts, which is vital for research and teaching.</strong></p>
<p><strong>Tish:</strong> So does this lock you into the iPhone?</p>
<p><strong>Blair: Well the G1 is of course not AT&amp;T, but it is GSM, so we could probably buy them unlocked and put them on our AT&amp;T network. But the students I work with are much more interested in the iPhone right now.</strong></p>
<p><strong>Tish:</strong> Is that because the iPhone has the market?</p>
<p><strong>Blair: For me the reason I am not interested in the G1 is because you can&#8217;t do AR on it &#8211; there is <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> and a few other apps, but it is all hideously slow. Worse, because the Java code isn&#8217;t compiled like it would be on the desktop, you can&#8217;t do computer vision with it, so you can&#8217;t do anything particularly interesting on the current commercial G1s. We could probably take the NVidia Tegra devkits or Texas Instruments&#8217; OMAP3 devkits (both are chipsets for next gen phones &#8212; high end graphics, fast processing), and install Android on those, and we may actually do that yet. But it seems like a lot of work right now, for not much benefit.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/pastedgraphic.jpg"><img class="alignnone size-medium wp-image-3730" title="pastedgraphic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/pastedgraphic-300x166.jpg" alt="pastedgraphic" width="300" height="166" /></a><br />
</strong></p>
<p><em>Augmented Reality shooter game <strong>ARrrrr</strong> from<strong> </strong></em><em>Georgia Tech and SCAD Atlanta on the <strong>NVidia Tegra devkits</strong></em><em> &#8211; <a href="http://www.youtube.com/watch?v=cNu4CluFOcw" target="_blank">watch the demo on YouTube here</a></em><em>. </em><strong> </strong></p>
<p><strong>Tish: </strong>Everyone seems very excited about the iPhone OS 3.0 and the addition of a compass. A compass is pretty essential for AR, right?</p>
<p><strong>Blair: It is necessary if you can&#8217;t do other forms of outdoor tracking, but the problem is that the compass on the G1 isn&#8217;t very good, relatively speaking, and the iPhone one probably won&#8217;t be much better. It does not have very high accuracy, nor is it very fast (compared to, say, the high end 3D orientation sensors we use, from Intersense and MotionNode). As far as I can tell, it doesn&#8217;t even give full 3D orientation. I don&#8217;t have a G1 (although I have pre-ordered an iPhone 3Gs), but people have told me it only has absolute 2D orientation, so you can only line things up if you are careful. You can&#8217;t look around arbitrarily&#8230;</strong></p>
<p><strong>Tish: </strong>You can&#8217;t sweep your phone?</p>
<p><strong>Blair: You can look left and right, but if it doesn&#8217;t have full 3D orientation, you can&#8217;t go up and down. You can&#8217;t tilt it in weird directions. It is not fast enough that you could look around quickly. So it is a nice demo. And it is good for what the Android people use it for, which is letting you browse Google Street View by looking around, which is actually really useful.</strong></p>
<p><strong>I think there are lots of really useful things you can do with such a compass.</strong></p>
<p><strong>And, it is clear that compass is a necessary feature if we want to do AR. Â It&#8217;s just not sufficient.</strong></p>
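<p><em>As a rough illustration of the 2D-versus-3D orientation point above: a compass alone only yields a usable heading while the device is held level, whereas adding accelerometer-derived pitch and roll lets you tilt-compensate the magnetometer reading. This is a minimal sketch under one common axis convention (x forward, y right, z down) and a hypothetical device frame &#8211; not any phone&#8217;s actual API.</em></p>

```python
import math

def tilt_compensated_heading(ax, ay, az, mx, my, mz):
    """Heading in degrees clockwise from magnetic north.

    Assumes a hypothetical body frame: x forward, y right, z down.
    (ax, ay, az) is the accelerometer reading (pure gravity when static);
    (mx, my, mz) is the magnetometer reading.
    """
    # Estimate roll and pitch from the gravity vector.
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Rotate the magnetic field vector back into the horizontal plane.
    mx_h = mx * math.cos(pitch) + mz * math.sin(pitch)
    my_h = (mx * math.sin(roll) * math.sin(pitch)
            + my * math.cos(roll)
            - mz * math.sin(roll) * math.cos(pitch))
    return math.degrees(math.atan2(-my_h, mx_h)) % 360.0
```

<p><em>Held level and pointed at magnetic north this returns 0&#176;. A 2D-only compass has no pitch/roll terms at all, which is why sweeping the phone up and down breaks the alignment.</em></p>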
<h3><strong>Outdoor Tracking and Markerless AR</strong></h3>
<p><strong>Tish:</strong> Isn&#8217;t it essential for markerless AR? I guess not; I just saw this post about <a href="http://artimes.rouli.net/2009/04/srengine-in-english.html" target="_blank">SREngine on Augmented Times</a>!</p>
<p>This wasn&#8217;t up when we spoke, so perhaps you have some comments about what it brings to the table?</p>
<p><strong>Blair: Maybe. The folks at Nokia are working on outdoor tracking; they demoed some stuff at ISMAR last year on the N95 handsets that is all image based. We are trying to do some work with them; one of my students is working on it. And probably Microsoft is going to do more on this as well; they had a video up showing that they are also working on vision based techniques. If you give the phone the equivalent of those panoramic Google Street View images (assuming they are up-to-date) and you are standing at the right place, you don&#8217;t really need a compass; you can figure out which way you are looking by looking at the camera video. Ulrich Neumann (USC) did some work on tracking from panoramas years ago; I don&#8217;t know whatever became of it.</strong></p>
<p><strong>Regarding SREngine, that project appears to be a pretty simple first step, but is probably just a demo at this point, and limitations like &#8220;only works on static scenes&#8221; and &#8220;doesn&#8217;t work for simple scenes&#8221; mean it&#8217;s probably extracting some simple features out of the image and then matching those to some database. The trick would be getting this to work on a large scale, where the world changes a lot. It&#8217;s not obvious how to get there.</strong></p>
<p><strong>Tish:</strong> So forget RFID for AR&#8230;</p>
<p><strong>Blair: RFID is not really useful.</strong></p>
<p><strong>Tish:</strong> Not at all?</p>
<p><strong>Blair: RFID is useful for telling you what things are near you. The problem is it doesn&#8217;t give you any directional information &#8211; it just tells you you&#8217;re in range of the tag. So you can use it to tell you when you are near a certain product, for example. It is useful in terms of telling you what thing you are near, and then you can load up a vision system or something else that will recognize that thing.</strong></p>
<p><strong>In that way, it could be useful as a good starting point.</strong></p>
<p><strong>Similarly for computer vision, the compass and the GPS are very useful for giving you an initial guess at what you may be looking at, which can then speed up the rest of the process. But computer vision by itself will not be a complete solution, because if I have my panoramic Google Street View (or whatever image database I use for tracking) and you are standing between me and the building &#8211; I am not going to see what I expect to see, I am going to see you.</strong></p>
<p><strong>So I think it is all going to be part of one big package &#8211; you are going to see accelerometers, digital compasses, and GPS, and then combine that with computer vision and other sensors, and then maybe we are going to start getting the things that we have always dreamed about. I like to show <a href="http://mi.eng.cam.ac.uk/~gr281/outdoortracking.html" target="_blank">this video </a>from the U. of Cambridge (work done by Gerhard Reitmayr and Tom Drummond) of an outdoor tracking demo because it gives a sense of what will be possible. Techniques like this will be an ingredient in the future of things. It becomes especially interesting when you have these highly detailed mirror worlds. It is one of those chicken and egg problems: if I have a highly detailed model of the world, then techniques like theirs can be used to track. But that mirror world needs to be accurate or you can&#8217;t use it for tracking, and why would you create the mirror world if you couldn&#8217;t track?</strong></p>
<p><strong>Tish:</strong> I noticed in your comment on <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;my interview with Robert Rice&#8221;</a> that you said you thought it was important not to collapse AR into ubicomp &#8211; &#8220;forgetting what originally inspired us about AR&#8221; is, if I remember correctly, the suggestion you made. But aren&#8217;t ubiquitous computing and AR basically coextensive?</p>
<p>The <a href="http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/" target="_blank">vision of ubicomp Mike Kuniavsky describes</a> &#8211; &#8220;sharing data through open APIs and the promise of embedded information processing and networking distributed through the environment&#8221; &#8211; demonstrates how much can be done with very little processing power. In its most immersive form augmented reality requires a lot of processing power, and I think we have all become very conscious about trying to minimize levels of consumption. Can you explain why you think people shouldn&#8217;t see AR as the Hummer (energy-squandering indulgence) of ubiquitous computing?</p>
<p><strong>Blair: I think there will be a hierarchy of interfaces. You are going to have the rich Rainbow&#8217;s End-like experience &#8211; you are totally submerged in a mixed environment if you have a head mount on (it&#8217;s not going to be Rainbow&#8217;s End for a while) &#8211; but if you don&#8217;t have the headmount on, that information might be available to you in other ways, whether it is a 3D overlay using your handheld or just a 2D mashup with Google Maps. But there will be some circumstances and people who will want to get the compelling experience you can only get with the headmount.</strong></p>
<p><strong>Tish:</strong> Are you doing any research on how all these hierarchies of experiences will fit together &#8211; what aspects of this are you looking at?</p>
<p><strong>Blair: The thing that really needs to happen is you need to have this backend architecture that allows you to collect your data from different sources and aggregate it, much like the web. Right now Google Earth and Microsoft&#8217;s Virtual Earth are much like the old pre-web hypertext systems that were all centralized. What we really need is the web equivalent, where Georgia Tech can publish their building models and IBM can publish their building models and their campus models, and your client can aggregate them &#8211; as opposed to Microsoft or IBM putting their building models into Google Earth and then somehow you get them out with Google&#8217;s Google Earth browser. That&#8217;s just not going to fly.</strong></p>
<p><strong>Tish:</strong> So what does it take then to get us to this backend architecture? Because I&#8217;m in total agreement.</p>
<p><strong>Blair: The nice thing about augmented reality versus virtual reality is that you don&#8217;t need everything modeled. You can do interesting AR apps like <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> with absolutely no world model.</strong></p>
<p><strong>Tish:</strong> So that means we can start with what we have &#8211; utilize cloud services without a full blown backend architecture?</p>
<p><strong>Blair: It may very well be that Google Earth and MS Virtual Earth act as a portal, because people go and build models and link them with KML, and they can see them in Google Earth but they can also download the KML files through some other channel. So it may be that those things end up being something that feeds some of this along. Then people start seeing a benefit to having these highly accurate models, so then you start integrating the Microsoft Photosynth stuff and leveraging photographs to generate models.</strong></p>
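<p><em>To make the &#8220;aggregate KML from many publishers&#8221; idea concrete, here is a minimal sketch of a client that merges placemarks from several independently published KML documents. The placemark names and coordinates used are hypothetical; the KML 2.2 namespace and the Placemark/Point/coordinates structure are standard.</em></p>

```python
import xml.etree.ElementTree as ET

# Standard KML 2.2 namespace, brace-wrapped for ElementTree lookups.
KML_NS = "{http://www.opengis.net/kml/2.2}"

def placemarks(kml_text):
    """Yield (name, lon, lat) for every Placemark with a Point geometry."""
    root = ET.fromstring(kml_text)
    for pm in root.iter(KML_NS + "Placemark"):
        name = pm.findtext(KML_NS + "name", default="(unnamed)")
        coords = pm.findtext(KML_NS + "Point/" + KML_NS + "coordinates")
        if coords:
            # KML coordinates are "lon,lat[,alt]".
            lon, lat = (float(v) for v in coords.strip().split(",")[:2])
            yield name, lon, lat

def aggregate(kml_documents):
    """Merge placemarks from many publishers into one flat layer, the way
    a browser-like AR client might, rather than requiring everyone to
    upload into a single central database."""
    merged = []
    for doc in kml_documents:
        merged.extend(placemarks(doc))
    return merged
```

<p><em>An AR browser would then render the merged layer over the camera view; the point of the sketch is only that aggregation needs nothing more than a shared format and independently hosted files.</em></p>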
<p><strong>The challenge is just keeping up with it and building it in real time. A lot of folks think it will be tourist applications &#8211; models of Times Square and models of Central Park and models of Notre Dame and the big square around that area in Paris and along the river and so on, or the models of Italian and Greek history sites &#8211; the virtual Rome. As those things start happening and people start building onto the edges, and when Microsoft Photosynth and similar technologies become more pervasive, you can start building the models of the world in a semi-automated way from photographs and more structured, intentional drive-bys and so on. So I think it&#8217;ll just sort of happen, as long as there&#8217;s a way to have the equivalent of Mosaic for AR &#8211; the early web browser &#8211; that allows you to aggregate all these things. It&#8217;s not going to be a Wikitude. It&#8217;s not going to be this thing that lets you get a certain kind of data from a specific source; rather, it&#8217;s the browser that allows you to link through into these data sources.</strong></p>
<p><strong>So it&#8217;s that end that interests me. It&#8217;s questions like &#8220;what is the user experience?&#8221; &#8211; how do we create an interface that allows us to layer all these different kinds of information together such that I can use it for all my things? I imagine that I open up my future iPhone and I look through it. The background of the iPhone, my screen, is just the camera, and it&#8217;s always AR.</strong></p>
<p><strong>I want the camera on my phone to always be on, so it&#8217;s not just that when I hold it a certain way it switches to camera mode, but literally it&#8217;s always in video mode so whenever there&#8217;s an AR thing it&#8217;s just there in the background.</strong></p>
<p><strong>When we can do that, I can have little alerts, so when I have my phone open I can look around and see them independent of the buttons and things that I&#8217;m tapping and pushing to use the phone. That&#8217;ll be a really different kind of experience.</strong></p>
<p><strong>Of course it is not known yet if the next-gen iPhone will have an open video API. And of course, the current camera is pretty low quality, so why would they give it an open API until they put in a better camera? I am not expecting anything one way or the other until the 3Gs comes out and people start using it.</strong></p>
<p><strong>But there are many things about the iPhone 3.0 OS that are hugely important, like the discovery API that allows people to play games with other people nearby, that don&#8217;t have much to do with AR.</strong></p>
<p><strong>Tish:</strong> You have an iPhone AR virtual pet application, ARf.</p>
<p><a href="http://www.macrumors.com/2009/04/08/video-in-and-magnetometers-could-introduce-interesting-iphone-app-possibilites/" target="_blank">Macrumors wrote it up</a> and suggested that the next-gen iPhone will have a compass and an open video API. What are your plans for ARf?</p>
<p><strong>Blair: ARf is just a demo right now. I know what we&#8217;d like to do with it, but it would require tons of work; imagine what it would take to do a multiplayer, social version of Nintendogs? It&#8217;s not clear what we&#8217;d really learn by doing that, but there are lots of other game ideas we have that we want to explore.</strong></p>
<p><strong>Tish:</strong> I think it was on Twitter where Tim O&#8217;Reilly said, &#8220;saying everything must have a RFID tag is like saying we can&#8217;t recognize each other unless we wear name tags. Look at what&#8217;s happening with speech recognition, image recognition et al. and tell me you really think we need embedded metadata.&#8221; What would you say to that?</p>
<p><strong>Blair: I think that whatever extra data is there will be used. So if we put machine readable labels on some objects then they&#8217;ll be used if they make the identification and tracking problem easier. But it&#8217;s pretty clear that people are already working on tracking and so on.</strong></p>
<p><strong>A lot of these mobile AR apps are clearly putting ideas in people&#8217;s minds about things that won&#8217;t really be doable in the near future &#8211; like being able to look down the aisle of the store and have it recognize all of the products. Given the distances and complexity of the scene, the number of pixels devoted to each of those objects, and so on, you just can&#8217;t recognize things in that context. But if I&#8217;m standing in front of a small set of objects, or looking at one thing, or I&#8217;m standing in front of a building &#8211; or if I&#8217;m in the store, then imagine an enhanced location API that can tell me within a few feet where I am, combined with some use of the discovery API that allows the store to tell your device you&#8217;re in the toothpaste section. Now you only have to look for different brands of toothpaste. So now you can recognize the big letters &#8220;Crest&#8221; or whatever. It&#8217;s all about constraining the problem.</strong></p>
<p><strong>That&#8217;s why I like that particular piece of Drummond&#8217;s work, the tracking web site I mentioned above. The general tracking problem of looking around and recognizing objects and tracking is still impossible. But if I know roughly what direction I&#8217;m looking in and I have a good estimate of my position, and I have models of what I should be seeing when I look in that direction, then it becomes a tractable problem. And so it&#8217;s not that a compass and a GPS are 100% necessary. But if you have them it certainly makes things possible that you wouldn&#8217;t otherwise be able to do.</strong></p>
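<p><em>The &#8220;constraining the problem&#8221; idea can be sketched in a few lines: if context (a hypothetical location or discovery API) tells you which store section you are in, the recognizer only has to match against that section&#8217;s candidates instead of every product in existence. The product database and section names below are invented for illustration.</em></p>

```python
# Hypothetical database: store section -> candidate brand labels.
# In a real system this would come from the retailer, keyed off
# whatever an enhanced location/discovery API reports.
PRODUCTS = {
    "toothpaste": ["Crest", "Colgate", "Sensodyne"],
    "cereal": ["Cheerios", "Corn Flakes", "Granola"],
}

def candidates_for(section=None):
    """With a section prior, search a short list; without one,
    fall back to every known label."""
    if section in PRODUCTS:
        return PRODUCTS[section]
    return [brand for brands in PRODUCTS.values() for brand in brands]

def recognize(detected_text, section=None):
    """Match text pulled off a label (e.g. by OCR) against the
    constrained candidate set; None means no confident match."""
    text = detected_text.lower()
    for brand in candidates_for(section):
        if brand.lower() in text:
            return brand
    return None
```

<p><em>The same shape applies to vision-based tracking: GPS plus compass shrink the set of panoramas or building models worth matching against, which is exactly what turns an otherwise intractable recognition problem into a tractable one.</em></p>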
<p><strong>Imagine, for example, if there&#8217;s a new version of GPS &#8211; I just noticed that some of the new satellites going up have this new L5 channel. There are the L1 &amp; L2 signals that the military and civilian receivers use, and they added this civilian L5 signal, which should make GPS more accurate. I haven&#8217;t found anything online that says how much more accurate.</strong></p>
<p><strong>But someday, hopefully, all GPS will get to be the quality of survey-grade GPS. Right now, if you get an RTK GPS from one of these companies that make survey-grade GPS systems, they give you position estimates in the range of two centimeters, updated 10 to 20 times a second. When you have that kind of positional accuracy combined with the kind of orientation accuracy you get from the sensors we use in the lab from Intersense and MotionNode, everything is easier because you&#8217;ve pretty much got absolute position. You put that into a phone, and now when I look up it&#8217;s still not perfectly aligned, because there will still be errors (especially in orientation, since the compasses are affected by metal and other magnetic noise). But it does mean that if you and I are standing 5 feet apart from each other and look at each other, I can pretty much put a little smiley face above your head. Whereas now, with GPS, if I look at you and we&#8217;re 5 feet apart, our GPSes might place each of us on the opposite side of the other, because they&#8217;re only accurate to two to five meters.</strong></p>
<p><strong>And that&#8217;s depending on the time of day and the weather!</strong></p>
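<p><em>A quick back-of-the-envelope version of the smiley-face example: annotating a friend 5 feet (about 1.5 m) away means computing the bearing from my reported position to theirs, and a 2&#8211;5 m GPS error is larger than the separation itself, so the computed bearing can point anywhere &#8211; including directly behind me. The sketch below uses the standard initial great-circle bearing formula; the positions are made up.</em></p>

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2,
    in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

# Near the equator, one meter of longitude is roughly 1/111320 degree.
DEG_PER_M = 1.0 / 111320.0

# Friend truly 1.5 m due east of me: bearing ~90 degrees.
true_bearing = bearing_deg(0.0, 0.0, 0.0, 1.5 * DEG_PER_M)

# A 3 m westward error on the friend's fix flips the computed
# bearing to ~270 degrees -- the label lands behind me.
wrong_bearing = bearing_deg(0.0, 0.0, 0.0, (1.5 - 3.0) * DEG_PER_M)
```

<p><em>With 2 cm RTK-grade error, the worst-case bearing error at 1.5 m separation is under one degree, which is why centimeter positioning plus good orientation sensing would make person-anchored labels feasible.</em></p>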
<p><strong>Putting RFID tags everywhere is easy; the problem is the readers &#8211; they currently require lots of power and they have a limited range. Sprinkling RFID tags everywhere is fine, but you have to be able to activate those tags and read back the signal. In certain contexts it works.</strong></p>
<p><strong>Tish:</strong> And one final question! What do you think can be done to begin thinking about standards for AR? Is there a meaningful discussion going on yet? Thomas Wrobel left this comment on my blog recently, and I was wondering what your position was on some of the ideas he raises.</p>
<p>Wrobel wrote, <em>&#8220;The AR has to come to the users; they can&#8217;t keep needing to download unique bits of software for every bit of content! We need an AR browsing standard that lets users log into and out of channels (like IRC) and toggle them as layers on their visual view (like Photoshop). Channels need to be public or private, hosted online (making them shared spaces) or offline (private spaces). They need to be able to be both open (chat channel) or closed (city map channel) as needed, created by anyone anywhere. Really, IRC itself provides a great starting point. Most data doesn&#8217;t need to be persistent, after all. I look forward to seeing the world through new eyes. I only hope I will be toggling layers rather than alt-tabbing and only seeing one &#8220;reality addition&#8221; at a time.&#8221;<br />
</em></p>
<p><strong>Blair: I agree with him, in principle. But I&#8217;m not sure there&#8217;s a point yet. It can&#8217;t hurt to try, of course, from a research perspective, and I&#8217;m interested in the experience such an infrastructure would enable (as we&#8217;ve talked about already).</strong></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>Location Becomes Oxygen at Where 2.0 &amp; WhereCamp</title>
		<link>https://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/</link>
		<comments>https://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/#comments</comments>
		<pubDate>Tue, 02 Jun 2009 21:43:49 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Aaron Straup Cope]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[bottom up urban informatics]]></category>
		<category><![CDATA[Brady Forrest]]></category>
		<category><![CDATA[Bruce Sterling]]></category>
		<category><![CDATA[community sensing]]></category>
		<category><![CDATA[curating big data]]></category>
		<category><![CDATA[Dan Catt]]></category>
		<category><![CDATA[Eric Horvitz]]></category>
		<category><![CDATA[everyware]]></category>
		<category><![CDATA[FireEagle]]></category>
		<category><![CDATA[Flickr Corrections]]></category>
		<category><![CDATA[Flickr Nearby]]></category>
		<category><![CDATA[Food Genome]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[geo platform]]></category>
		<category><![CDATA[geo platforms]]></category>
		<category><![CDATA[geoblogging]]></category>
		<category><![CDATA[geoplanet]]></category>
		<category><![CDATA[geotagging]]></category>
		<category><![CDATA[geowanking]]></category>
		<category><![CDATA[GigaPan]]></category>
		<category><![CDATA[gigapanning]]></category>
		<category><![CDATA[Google Wave]]></category>
		<category><![CDATA[googlewave]]></category>
		<category><![CDATA[headmap manifesto]]></category>
		<category><![CDATA[J.G. Ballard]]></category>
		<category><![CDATA[Jo Walsh]]></category>
		<category><![CDATA[Joshua Schachter]]></category>
		<category><![CDATA[location awaeness]]></category>
		<category><![CDATA[location versus place]]></category>
		<category><![CDATA[locative media]]></category>
		<category><![CDATA[machine intelligence and human intelligence]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[magic words and microsyntax]]></category>
		<category><![CDATA[Mapping Hacks]]></category>
		<category><![CDATA[Marc Powell]]></category>
		<category><![CDATA[Microsyntax]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[neogeography]]></category>
		<category><![CDATA[Odeo Yokai]]></category>
		<category><![CDATA[OpenGeo]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[paleogeography]]></category>
		<category><![CDATA[Papernet]]></category>
		<category><![CDATA[personal informatics]]></category>
		<category><![CDATA[Placemaker]]></category>
		<category><![CDATA[privacy and community sensing]]></category>
		<category><![CDATA[privacy and sensor networks]]></category>
		<category><![CDATA[psychogeography]]></category>
		<category><![CDATA[psychosynthography]]></category>
		<category><![CDATA[Raven Zachary]]></category>
		<category><![CDATA[real time web based visualization and mapping]]></category>
		<category><![CDATA[reality mining]]></category>
		<category><![CDATA[Rich Gibson]]></category>
		<category><![CDATA[Schuyler Erie]]></category>
		<category><![CDATA[sensor networks]]></category>
		<category><![CDATA[shape files]]></category>
		<category><![CDATA[shapefiles]]></category>
		<category><![CDATA[smart cities]]></category>
		<category><![CDATA[smart phones]]></category>
		<category><![CDATA[social geography]]></category>
		<category><![CDATA[social networks]]></category>
		<category><![CDATA[social reality mining]]></category>
		<category><![CDATA[Sophia Parafina]]></category>
		<category><![CDATA[Stamen Design]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[The Ubiquitous Media Studio]]></category>
		<category><![CDATA[the web in the world]]></category>
		<category><![CDATA[Tom Carden]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[ubicomp hackers]]></category>
		<category><![CDATA[Usman Haque]]></category>
		<category><![CDATA[wearable sensory substitution devices for navigation]]></category>
		<category><![CDATA[Where2.0]]></category>
		<category><![CDATA[WhereCamp]]></category>
		<category><![CDATA[WOEID]]></category>
		<category><![CDATA[yahoo! geotechnologies group]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3567</guid>
		<description><![CDATA[curatingbigdatapost]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/anselmcircletime.jpg"><img class="alignnone size-medium wp-image-3578" title="anselmcircletime" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/anselmcircletime-300x199.jpg" alt="anselmcircletime" width="300" height="199" /></a></p>
<p>The biggest news at <a href="http://en.oreilly.com/where2009/" target="_blank">Where 2.0, 2009</a> came from the <a href="http://developer.yahoo.com/geo/" target="_blank">Yahoo! Geo Technologies Group</a>. Tyler Bell announced Yahoo! <a href="http://developer.yahoo.com/geo/placemaker">Placemaker</a> and the opening up of the <a href="http://developer.yahoo.com/geo/geoplanet/" target="_blank">GeoPlanet</a> data set: &#8220;all of the WOEIDs [<a href="http://developer.yahoo.com/geo/">Where On Earth (WOE)</a> IDs] available as a free download under Creative Commons in June&#8221; (see <a href="http://radar.oreilly.com/brady/" target="_blank">Brady Forrest&#8217;s post</a> for more details).</p>
<p><a id="qa9y" title="WhereCamp 2009" href="http://wherecamp.pbworks.com/WhereCamp2009" target="_blank">WhereCamp 2009</a> was held immediately after <a href="http://en.oreilly.com/where2009/" target="_blank">Where 2.0</a> and was a great place to chew on the events and ideas of Where 2.0. In the picture above, Anselm Hook addresses the WhereCamp morning circle in the courtyard outside the <a id="i:ij" title="Social Text" href="http://www.socialtext.com/" target="_blank">Social Text</a> offices in Palo Alto. Anselm pointed out to me:</p>
<p><strong>&#8220;there are interesting implications of placemaker in combination with other yahoo assets &#8211; in particular <a href="http://developer.yahoo.com/yql/" target="_blank">YQL</a> &#8211; placemaker by itself is neat &#8211; but placemaker combined with everything else is a natural missing piece that is a big enabler. Yahoo has been impressive.&#8221;</strong></p>
<p>With all the geo platform power available to us now (also see <a href="http://radar.oreilly.com/2009/05/new-geo-for-devs-from-google-i.html" target="_blank">New Geo for Devs from Google I/O</a>), there isn&#8217;t a shadow of a doubt in my mind that Brady was right when he said, just before the Where 2009 conference: &#8220;<strong>Location is no longer a differentiator; it&#8217;s going to become oxygen&#8221;</strong> <a href="http://www.webmonkey.com/blog/New_Wave_of_Apps_Build__Where__Into_the_Web" target="_blank">(quote from WebMonkey)</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/spatialjunkies1.jpg"><img class="alignnone size-medium wp-image-3612" title="spatialjunkies1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/spatialjunkies1-300x199.jpg" alt="spatialjunkies1" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/yahoogeo41.jpg"><img class="alignnone size-medium wp-image-3614" title="yahoogeo41" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/yahoogeo41-300x199.jpg" alt="yahoogeo41" width="300" height="199" /></a></p>
<p><em>The Yahoo! GeoPlanet team at WhereCamp &#8211; Tyler Bell (talking to Brady Forrest in the picture on the left) is sporting his spatial junkies T-shirt. Photo on right: Aaron Cope, Tyler Bell, Martin Barnes, Gary Gale.</em></p>
<p>WhereCamp was alive with key figures from the social geography movement who knew the power of these new tools (see <a href="http://www.flickr.com/photos/ugotrade/sets/72157618662411286/" target="_blank">some of my photos of WhereCamp on Flickr here</a>).</p>
<p>The importance of the Yahoo! announcement really became clear to me at <a href="http://www.socialtext.net/wherecamp/index.cgi" target="_blank">WhereCamp</a>, where I attended sessions all day Saturday, including the Curating Big Data session led by <a href="http://stamen.com/studio/tom" target="_blank">Tom Carden, Stamen Design</a> and <a href="http://www.aaronstraupcope.com/" target="_blank">Aaron Straup Cope</a>, Flickr (see Aaron&#8217;s slides from his <a href="http://en.oreilly.com/where2009/public/schedule/detail/7212" target="_blank">Where 2.0 presentation on &#8220;The Shape of Alpha&#8221; here</a> and video <a href="http://where.blip.tv/file/2167471/" target="_blank">here</a>).</p>
<p>Anselm Hook, a prime mover for WhereCamp, is a leading philosopher of place making and a veteran software developer who led <a href="http://platial.com/" target="_blank">Platial</a> engineering and is now at web consultancy <a rel="nofollow" href="http://makerlab.com/">http://makerlab.com</a><span class="bio">. If you missed Anselm at WhereCamp, he will be presenting on <a href="http://opensourcebridge.org/sessions/246" target="_blank">Ubiquitous Angels</a> at <a href="http://opensourcebridge.org/users/288" target="_blank">Open Source Bridge</a>, Portland, Oregon, June 17th&#8211;19th, 2009.</span></p>
<p>Anselm describes where he thinks the challenges are:</p>
<p><strong>&#8220;We should be mapping information that in some ways has been historically unmappable because it is 1) not valued, or is 2) actively seen as threatening, or is 3) simply too hard to map using traditional tools.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/wherecampschedul.jpg"><img class="alignnone size-medium wp-image-3680" title="wherecampschedul" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/wherecampschedul-300x199.jpg" alt="wherecampschedul" width="300" height="199" /></a></p>
<p><em>The WhereCamp Schedule</em></p>
<p><strong><span style="font-size: medium;">The Shape of Alpha</span></strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-57.png"><img class="alignnone size-medium wp-image-3647" title="picture-57" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-57-300x220.png" alt="picture-57" width="300" height="220" /></a></p>
<p><em>Screen capture from Aaron&#8217;s <a href="http://en.oreilly.com/where2009/public/schedule/detail/7212" target="_blank">Where 2.0 presentation on &#8220;The Shape of Alpha.&#8221;</a> Original photo from Flickr user <a href="http://www.flickr.com/photos/nickisconfused/3291840240/" target="_blank">&#8220;NickIsConfused&#8221;</a>.</em></p>
<p>Aaron Straup Cope&#8217;s work on <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a> puts key questions about curating big data center stage.</p>
<p>First, there is the exploration of what it means to curate and collaborate over meaning drawn from &#8220;the abundance of data produced in the precise but distant language of machines&#8221; (also see <a href="http://www.archimuse.com/mw2009/abstracts/prg_335001944.html" target="_blank">The Interpretation of Bias (and the bias of interpretation)</a>). The Shape of Alpha uses a process of <a href="http://code.flickr.com/blog/2008/09/04/whos-on-first/">reverse-geocoding</a> to translate machine-generated geographic data into place names that people can understand and relate to.</p>
<p>The <a href="http://en.wikipedia.org/wiki/Shapefile" target="_blank">shapefiles</a> are built with nothing but geotagged photos and some code called clustr (written by the brilliant <a href="http://iconocla.st/cv.html" target="_blank">Schuyler Erle</a> &#8211; co-author of <a href="http://search.barnesandnoble.com/Mapping-Hacks/Schuyler-Erie/e/9780596007034" target="_blank">Mapping Hacks</a>). Anyone can make these <a href="http://en.wikipedia.org/wiki/Shapefile" target="_blank">shapefiles</a>. You can get the shapefiles out of the <a href="http://www.flickr.com/services/api">Flickr API</a>. Aaron has been keying off WOEIDs (<a href="http://developer.yahoo.com/geo/">Where On Earth (WOE)</a> IDs), but as Aaron noted, you can key off anything you like &#8211; tags are an obvious choice.</p>
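<p><em>The underlying idea &#8211; derive a place&#8217;s footprint purely from where people took geotagged photos &#8211; can be sketched simply. clustr itself computes alpha shapes, which hug concave outlines; the illustration below substitutes a plain convex hull (Andrew&#8217;s monotone chain) to keep the code short, and the photo data is invented.</em></p>

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull; returns hull vertices in
    counter-clockwise order. (clustr uses alpha shapes instead, which
    can follow concave boundaries; a hull is the crude version.)"""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def footprints(photos):
    """Group geotagged photos (woeid, lon, lat) by the place they were
    tagged with, then outline each group -- keying off WOEIDs here,
    though you could key off tags or anything else."""
    groups = {}
    for woeid, lon, lat in photos:
        groups.setdefault(woeid, []).append((lon, lat))
    return {woeid: convex_hull(pts) for woeid, pts in groups.items()}
```

<p><em>Feed in enough photos and the outlines start to look like the places themselves, corrections and disputes included &#8211; which is what makes the approach a kind of crowd-sourced cartography.</em></p>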
<p>Wow! You can reinvent mapping with this stuff.</p>
<p>Very importantly, <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a> tells us something about how we relate to place versus location. The emotions, disputes and behavior related to place also emerge through crowdsourced corrections. For more, <a href="http://www.aaronland.info/weblog/2008/07/27/invisible/#corrections" target="_blank">see this very evocative post by Aaron about corrections and treating airports as cities</a>. There is a glorious thread/riff and ode to the genius of J. G. Ballard pursued by Aaron and Dan Catt in their posts (see Dan Catt&#8217;s <a title="J.G. Ballard, Flickr, naked singularities and 3-letter airport codes" rel="bookmark" href="http://geobloggers.com/2009/05/11/j-g-ballard-flickr-naked-singularities-and-3-letter-airports-code/">J.G. Ballard, Flickr, naked singularities and 3-letter airport codes</a>; Aaron also pointed me to <a href="http://www.ballardian.com/the-real-concrete-island" target="_blank">this brilliant &#8220;geo-detective work&#8221;</a> by Mike Bonsall on <a href="http://www.ballardian.com/biblio-concrete-island">Concrete Island</a>).</p>
<p>Dan Catt created <a href="http://geobloggers.com/" target="_blank">geobloggers</a> and &#8220;seeded the geotagging community around the Web.&#8221; I met Reverend Dan Catt (Twitter @revdancatt) at Where 2.0 when he was kind enough to share part of his seat so I could join a very interesting discussion with Aaron on The Shape of Alpha.</p>
<p>As <a href="http://www.aaronland.info/weblog/2008/07/27/invisible/#corrections" target="_blank">Aaron points out</a>, they decided to treat &#8220;the airport itself <em>as</em> the town&#8230;&#8221; not (only) because they admired the work of <a href="http://www.jgballard.com/airports.htm">J.G. Ballard</a>, &#8220;but because it is the right thing to do.&#8221;</p>
<p>Dan Catt has excellent <a href="http://blog.flickr.net/en/2008/08/08/introducing-a-new-way-to-geotag/">blog posts</a> &#8220;describing the nuts and bolts of how &#8216;corrections&#8217; works.&#8221; Aaron points out, &#8220;in <a href="http://code.flickr.com/blog/2008/08/08/location-keeping-it-real-on-the-streets-yo/">the nerdier of the two</a> Dan sums it up nicely by saying&#8221;:</p>
<blockquote class="hier"><p><strong>&#8220;On a slightly more philosophical level, it&#8217;s a never ending process. We&#8217;ll never reach a point where we can say &#8220;Right that&#8217;s in, all borders between places have been decided.&#8221; But what we should end up with are boundaries as defined by Flickr users.</strong></p>
<p><strong>&#8230;</strong></p>
<p><strong>For us, it&#8217;s a first small step into an experiment, and actually a pretty big experiment as we&#8217;re potentially accepting &#8220;corrections&#8221; from our millions and millions of users. We&#8217;re not quite sure how it&#8217;ll all turn out, but we&#8217;re armed with Maths, Algorithms and kitten photos.&#8221;</strong></p></blockquote>
<h3>Psychosynthography &#8211; &#8220;Wearing Geography as a Perfume&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-59.png"><img class="alignnone size-medium wp-image-3649" title="picture-59" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/picture-59-300x224.png" alt="picture-59" width="300" height="224" /></a></p>
<p><em>Psychosynthography screen capture from Aaron Straup Cope&#8217;s <a href="http://en.oreilly.com/where2009/public/schedule/detail/7212" target="_blank">Where 2.0 presentation</a>. Original photo from Flickr user <a href="http://www.flickr.com/photos/nitelynx/44189973/" target="_blank">&#8220;NiteLynx.&#8221;</a></em></p>
<p>As I mentioned before, many of the ideas raised at Where 2.0 were unpacked and worked through at WhereCamp. For example, Aaron introduced the word <strong>psychosynthography</strong> in the last 24 seconds of his talk at Where 2.0.</p>
<p>So I spent as much time as I could listening to Aaron at WhereCamp, and asking him about psychosynthography and more (post of this interview upcoming).</p>
<p>Aaron urged the Where 2.0 audience to pay attention to the Psychogeography movement seeded by <a title="Guy Debord" href="http://en.wikipedia.org/wiki/Guy_Debord">Guy Debord</a>, and <strong>&#8220;to wear geography like a perfume.&#8221;</strong></p>
<p>Joseph Hart, writing in <a href="http://www.utne.com/2004-07-01/a-new-way-of-walking.aspx" target="_blank">&#8220;A New Way of Walking,&#8221;</a> describes psychogeography as:</p>
<p><strong>&#8220;a whole toy box full of playful, inventive strategies for exploring cities&#8230; just about anything that takes pedestrians off their predictable paths and jolts them into a new awareness of the urban landscape.&#8221;</strong></p>
<h3>Curating Big Data</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/tomcarden.jpg"><img class="alignnone size-medium wp-image-3625" title="tomcarden" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/tomcarden-300x199.jpg" alt="tomcarden" width="300" height="199" /></a></p>
<p><em><a href="http://stamen.com/studio/tom" target="_blank">Tom Carden, Stamen</a>, (picture above) paired with Aaron for the Curating Big Data session. Tom noted: </em></p>
<p><strong>&#8220;The Curating Big Data session for me was an attempt to learn from other attendees (as opposed to teach/lead, as with the Stamen session, &#8220;Real Time Web-Based Visualization and Mapping&#8221;). Also, it was an excuse to get Aaron to recap parts of the Flickr Shapefile story for WhereCamp folks, and to get *input* on how to do more things like it. I was a bit disappointed that nobody had really good examples for us, but I was happy with Brad Stenger&#8217;s suggestion to look into the upcoming census data as a relevant area.&#8221;</strong></p>
<p>Aaron&#8217;s work on The Shape of Alpha and The Corrections project shows, as Tom noted:</p>
<p><strong>&#8220;what you can do once you have 150 million geotagged photos, and millions of users who are willing to say I took this thing here and my name for that place is&#8230;&#8221;</strong></p>
<p>And part of the significance of opening up the GeoPlanet data set is that now:</p>
<p><strong>&#8220;we can try and start talking about the same places, as far as, [for example], these shape files go. So if you are interested in what comes out of the Flickr shape files project but you also have your own opinion about what shape those places are, the IDs have to be open &#8211; you have to be sure that you are talking about the same thing in the first place.&#8221;</strong></p>
<p>And, as Tom pointed out, collaborating over geo data informs us about curating any big dataset:</p>
<p><strong>&#8220;it should lead to an overarching discussion about any kind of dataset, geo or otherwise, and ways in which we can talk about it, and think about patterns for improving that data, for collaborating, even on things like cleanup.&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/realtimewebbased-visualizationandmapping.jpg"><img class="alignnone size-medium wp-image-3681" title="realtimewebbased-visualizationandmapping" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/realtimewebbased-visualizationandmapping-300x199.jpg" alt="realtimewebbased-visualizationandmapping" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/curatingbigdatapost.jpg"><img class="alignnone size-medium wp-image-3739" title="curatingbigdatapost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/curatingbigdatapost-300x199.jpg" alt="curatingbigdatapost" width="300" height="199" /></a></p>
<p><em>Warp speed geo-genius Andrew Turner, <a href="http://www.fortiusone.com/" target="_blank">Fortius One</a>, took these excellent notes for &#8220;Real Time Web-Based Visualization and Mapping&#8221; (on the left) and &#8220;Curating Big Data&#8221; (on the right).</em></p>
<p>On my way to Where 2.0 I took the train from SFO to San Jose, which was a delight but a little slower than I imagined. So, unfortunately, I arrived on Tuesday just after <a href="http://en.oreilly.com/et2009/public/schedule/speaker/3486">Michal Migurski</a> and <a href="http://en.oreilly.com/et2009/public/schedule/speaker/40013">Shawn Allen</a> (both Stamen Design) presented <a class="attach" href="http://assets.en.oreilly.com/1/event/20/Maps%20from%20Scratch_%20Online%20Maps%20from%20the%20Ground%20Up%20Presentation.pdf">Maps from Scratch: Online Maps from the Ground Up</a>. This was on my MUST attend list, and it was a wonderful opportunity to get into &#8220;Real Time Web-Based Visualization and Mapping.&#8221; I did get a chance to talk to Michal and Shawn a bit later in the conference, but I will try to catch up with them soon for an in-depth story. Below is Shawn Allen&#8217;s map of overlapping data sets from <a href="http://www.flickr.com/photos/shazbot/3282821808/" target="_blank">&#8220;Trees, cabs and crime in San Francisco&#8221;:</a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/treescrimecabs.png"><img class="alignnone size-medium wp-image-3743" title="treescrimecabs" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/treescrimecabs-300x273.png" alt="treescrimecabs" width="300" height="273" /></a></p>
<p>Another follow up I am really looking forward to making is with <a href="http://lizbarry.com/s+em/contact.htm" target="_blank">Liz Barry</a> and her work on <a href="http://lizbarry.com/s+em/about.htm" target="_blank">S+EM</a>, &#8220;an environmental mapping and social networking design project that links New York City trees with the people who care for them&#8221; (also see <a href="http://fuf.net/" target="_blank">Creating a Greener San Francisco Tree by Tree</a>). I also got a chance to talk to another fellow New Yorker (we have to travel to the West Coast to find time to chat!), <a href="http://radar.oreilly.com/jgeraci/" target="_blank">John Geraci</a> of <a href="http://diycity.org/" target="_blank">DIY City</a>, who presented <a class="attach" href="http://assets.en.oreilly.com/1/event/25/DIY%20City_%20An%20Operating%20System%20for%20Cities%20Presentation.zip">DIY City: An Operating System for Cities</a>.</p>
<h3>Machine Intelligence and Human Intelligence</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronandandrew.jpg"><img class="alignnone size-medium wp-image-3622" title="aaronandandrew" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/aaronandandrew-300x199.jpg" alt="aaronandandrew" width="300" height="199" /></a></p>
<p><em>Aaron Cope of Flickr (on the left) talking to Andrew Turner (on the right), CTO of FortiusOne (see Andrew&#8217;s Where 2.0 presentation, <a href="http://blip.tv/file/2167650" target="_blank">&#8220;Your Own Private Geo Cloud&#8221;</a>)</em></p>
<p>Many of the most interesting conversations happened in between sessions at WhereCamp and Where 2.0.</p>
<p>I caught this one, in which Aaron Cope and Andrew Turner were discussing some of the ideas Aaron raised in his presentation, <a href="http://www.slideshare.net/straup/capacity-planning-for-meaning-presentation-637370?type=powerpoint" target="_blank">&#8220;Capacity planning for meaning in the age of personal informatics&#8221;</a> (see Aaron&#8217;s blog post, <a href="http://www.aaronland.info/weblog/2008/10/08/tree/" target="_blank">Tree planting and tree hugging in the age of personal informatics</a>). The core question they were discussing was what happens when you wire the world at the scale people are talking about and it breaks&#8230; Aaron argues that you already have a whole class of people in systems operations who can tell us a lot about how to answer this question.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/rossmayfieldsocialtextpost.jpg"><img class="alignnone size-medium wp-image-3594" title="rossmayfieldsocialtextpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/rossmayfieldsocialtextpost-300x199.jpg" alt="rossmayfieldsocialtextpost" width="300" height="199" /></a></p>
<p><em><span class="bio">Ryan and Anselm shared the pulpit for the morning circle with <a href="http://ross.typepad.com/" target="_blank">Ross Mayfield</a> of <a href="http://www.socialtext.com/" target="_blank">Social Text</a>, who was the generous host of WhereCamp.</span></em></p>
<h3>Social Reality Mining</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/benjaminbratton1.jpg"><img class="alignnone size-medium wp-image-3651" title="benjaminbratton1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/benjaminbratton1-300x199.jpg" alt="benjaminbratton1" width="300" height="199" /></a></p>
<p><strong>&#8220;As it stands today, we have no idea what terms and limits of a cloud based citizenship of the Google Caliphate will entail and curtail. Some amalgam of post-secular cosmopolitanism, agonistic radical democracy, and post-rational actor microeconomics, largely driven by intersecting petabyte at-hand datasets and mutant strains of Abrahamic monotheism. But specifically, what is governance (let alone government) within this?&#8221;</strong> from <a href="http://bratton.info/" target="_blank">Benjamin Bratton&#8217;s</a> talk at ETech 2009 (pictured above), <a href="http://www.bratton.info/emergency.html" target="_blank">Undesigning the Emergency: Against Prophylactic Urban Membranes</a>.</p>
<p>The other big takeaway from WhereWeek &#8211; Where 2.0 and WhereCamp &#8211; was not so much news as confirmation of something that has been pretty clear for a while now. (Check out <a href="http://radar.oreilly.com/2008/05/the-results-of-reality-mining.html" target="_blank">Brady&#8217;s posts on reality mining at Where 2.0 last year</a>.)</p>
<p>We are moving headlong into the era of reality mining, with all its myriad possibilities: &#8220;hedonistic optimization&#8221; (this term came from <a href="http://brainofstig.ai/" target="_blank">Stig Hackvan</a> when I asked him about some of the ideas central to the <a href="http://docs.google.com/tecfa.unige.ch/%7Enova/headmap-manifesto.PDF" target="_blank">HeadMap Manifesto</a> &#8211; more about HeadMap later in this post); new forms of marketing (social reality mining to predict if someone is going to trade business cards in the next 120 seconds &#8211; <a href="http://en.oreilly.com/where2009/public/schedule/speaker/46016" target="_blank">Alex &#8220;Sandy&#8221; Pentland, MIT, Where 2.0</a>); stuff that matters to save us from mass extinction, like distributed sustainability &#8211; greening production, consumption and our cities; open government; empowering indigenous communities (see Rebecca Moore&#8217;s <a class="attach" href="http://assets.en.oreilly.com/1/event/25/Indigenous%20Mapping_%20Emerging%20Cultures%20on%20the%20Geoweb%20Presentation.ppt">Indigenous Mapping: Emerging Cultures on the Geoweb Presentation</a>); and, not to be forgotten, the troubling possibility of new forms of social control.</p>
<h3>Smart phones are powerful networked sensor devices in the palm of our hand</h3>
<p>As Sandy Pentland of MIT pointed out in his Where 2.0 keynote, <a href="http://en.oreilly.com/where2009/public/schedule/detail/7956" target="_blank">&#8220;Reality Mining for Companies, or, How Social Networks Network Best,&#8221;</a> mobile phones have created a ubiquitous instrumented reality that goes way deeper than location awareness. Smart phones are powerful networked sensor devices in the palm of our hand that know a lot more about us than location. With proximity, motion (accelerometers), voice, images, call logs and email, what is enabled is not just knowing where people are but knowing more about them.</p>
<p>Many of the issues raised by <a href="http://speedbird.wordpress.com/" target="_blank">Adam Greenfield</a> in <a href="http://speedbird.wordpress.com/my-book-everyware-the-dawning-age-of-ubiquitous-computing/" target="_blank">Everyware</a> and in <a href="../../2009/02/27/towards-a-newer-urbanism-talking-cities-networks-and-publics-with-adam-greenfield/" target="_blank">my interview with Adam</a> were on my mind during WhereWeek, as were questions distilled and explored last year in Matt Jones&#8217;s presentation, <a href="http://www.slideshare.net/blackbeltjones/polite-pertinent-and-pretty-designing-for-the-newwave-of-personal-informatics-493301" target="_blank">Polite, Pertinent, and&#8230; Pretty: Designing for the New-wave of Personal Informatics</a>, and <a href="http://www.slideshare.net/tmo/the-web-in-the-world-presentation" target="_blank">Timo Arnall&#8217;s presentation, The Web in the World</a>.</p>
<h3>Google Wave, Pachube Feeds, Sensor Networks and Microsyntax!</h3>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="560" height="340" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/pi4MhQgGNqI&amp;hl=en&amp;fs=1" /><param name="allowfullscreen" value="true" /><embed type="application/x-shockwave-flash" width="560" height="340" src="http://www.youtube.com/v/pi4MhQgGNqI&amp;hl=en&amp;fs=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p><em><a id="o_ok" title="Visualizing 24 hours of @pachube" href="http://is.gd/IYOj" target="_blank">Visualizing 24 hours of Pachube</a> logs &#8211; feeds all around the world &#8211; built with Processing.</em></p>
<p>I found myself really wishing <a href="http://www.pachube.com/" target="_blank">Pachube</a> founder Usman Haque had been able to come to Where 2.0 this year &#8211; Usman was originally on the Where 2.0 schedule but had to drop out. My small contribution to WhereCamp was to discuss <a href="http://www.pachube.com/" target="_blank">Pachube</a>, <a href="http://www.haque.co.uk/naturalfuse.php" target="_blank">Natural Fuse</a> and <a href="http://www.shaspa.com/" target="_blank">OpenShaspa</a> in the Urban Eco-Management session (<a href="../../2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/" target="_blank">see my interview with Pachube founder Usman Haque here</a>).</p>
<p>Just before Where 2.0, Pachube announced <a id="du7_" title="mapping mobile feeds in realtime" href="http://is.gd/BjJT" target="_blank">mapping mobile feeds in realtime</a>, with 3D time- and location-based graphing of datastream values.</p>
<p>And, as I was writing up this post, I was delighted to see <a href="http://www.wired.com/beyond_the_beyond/2009/05/spime-watch-pachube-feeds/" target="_blank">this post by Bruce Sterling on Pachube Feeds</a> and his challenge, offering:</p>
<p><strong>&#8220;(((Extra credit for eager ubicomp hackers: combine this [pachube feeds] with Googlewave, then describe it in microsyntax. Hello, 2015!)))&#8221;</strong></p>
<p>Also Anselm Hook, who has an extensive background in video game development, made an interesting point about Google Wave to me:</p>
<p><strong>&#8220;btw &#8211; there is a preexisting metaphor for the wave &#8211; the wave is notable in that it is making the web like a videogame &#8211; its bringing real time many participant shared interaction to the web&#8221;</strong></p>
<div id="a9iz" style="text-align: left;">And see <a href="http://radar.oreilly.com/2009/05/google-wave-what-might-email-l.html" target="_blank">Tim O&#8217;Reilly&#8217;s post</a> for more on the significance of Wave, which <a href="http://www.techcrunch.com/2009/05/28/google-wave-drips-with-ambition-can-it-fulfill-googles-grand-web-vision/">Google previewed for developers at its I/O conference</a>:</div>
<p><strong>&#8220;Jens, Lars, and team re-imagined email and instant-messaging in a connected world, a world in which messages no longer need to be sent from one place to another, but could become a conversation in the cloud. Effectively, a message (a wave) is a shared communications space with elements drawn from email, instant messaging, social networking, and even wikis.&#8221;</strong></p>
<p>For more on microsyntax, see <a href="http://www.microsyntax.org/" target="_blank">microsyntax.org</a>.</p>
<p>Aaron pointed out to me regarding microsyntax:</p>
<p><strong>&#8220;This is ultimately the &#8220;magic word&#8221; problem, which is essentially the semweb vs. google-is-smarter-than-you problem.&#8221;</strong></p>
<p>I will have some more questions for Aaron on the &#8220;magic word&#8221; problem in my upcoming interview post. At the moment I am busy studying some of the thoughts in these links:</p>
<p><a href="http://delicious.com/straup/magicwords" target="_blank">http://delicious.com/straup/magicwords</a></p>
<p><a href="http://www.slideshare.net/straup/the-papernet/22" target="_blank">http://www.slideshare.net/straup/the-papernet/22</a></p>
<p><a href="http://www.xml.com/pub/a/2005/02/16/edfg.html" target="_blank">http://www.xml.com/pub/a/2005/02/16/edfg.html</a></p>
<p><a href="http://xtech06.usefulinc.com/schedule/paper/135" target="_blank">http://xtech06.usefulinc.com/schedule/paper/135</a></p>
<h3>Privacy: Towards a Win-Win and Community Sensing</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/erichorvitz21.jpg"><img class="alignnone size-medium wp-image-3659" title="erichorvitz21" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/erichorvitz21-300x199.jpg" alt="erichorvitz21" width="300" height="199" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/communitysensing.jpg"><img class="alignnone size-medium wp-image-3655" title="communitysensing" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/communitysensing-300x199.jpg" alt="communitysensing" width="300" height="199" /></a></p>
<p>While <a href="http://fireeagle.yahoo.net/" target="_blank">FireEagle</a>, a key element of the Yahoo! Geo Technologies portfolio of platforms, not only gives people an important set of tools to &#8220;share their location with sites and services through the Web or a mobile device&#8221; but also offers up some vital privacy tools, the community sensing work of Eric Horvitz takes privacy and data sharing into new terrain.</p>
<p>Eric didn&#8217;t have time to discuss his privacy work in his Where 2.0 presentation, <a href="http://en.oreilly.com/where2009/public/schedule/detail/8911" target="_blank">Where, When, Why, and How: Directions in Machine Learning and Reasoning about Location</a> &#8211; it came up in his very last slide. But I ran up after his talk with my trusty old iPod recorder in hand, and got the part we missed! Fascinating stuff that will be the subject of an upcoming interview post. Here&#8217;s a little taste of what is to come. Eric describes one of the directions his team will be exploring:</p>
<p><strong>&#8220;One thing I want to do, on our research team, I&#8217;d like to develop something very simple for people to use. A challenging problem with privacy is usability and controls. Aunt Polly and Uncle Herbie just don&#8217;t get all these authentication controls and sliders, nor do they want to invest in figuring them out. They also don&#8217;t get why they&#8217;re being asked with pop up windows to say yes or no to various questions and so on. One idea is having a useable privacy lens that you can hold up anywhere, and it tells you what you&#8217;re showing anybody or any organization &#8211; what does the world know about you. And you would like to have buttons to turn sharing off for some items. You&#8217;d also like to have a way to go back in time and view prior sharing and logging over periods of time, and to have buttons to push to say erase that segment of your logs.&#8221;</strong></p>
<p>Understanding the social implications of what it means to live in an instrumented world is a topic that we cannot afford not to think about. But luckily there are a lot of people who have been thinking pretty deeply about this for a while now.</p>
<p>And I did my best at both Where 2.0 and WhereCamp to seek out as many of these geothinkers as I could, and do interviews wherever possible (I have not had time to mention everyone I talked to in this post, but hopefully all the interviews will get on Ugotrade soon!)</p>
<h3>HeadMap Manifesto</h3>
<p>In the bar of The Fairmont on the last night of Where 2.0, I heard some of the history of Where 2.0, <a href="http://geowanking.org/mailman/listinfo/geowanking_geowanking.org" target="_blank">GeoWanking</a>, and <a href="http://docs.google.com/tecfa.unige.ch/%7Enova/headmap-manifesto.PDF" target="_blank">The HeadMap Manifesto</a> from Sophia Parafina, Director of Operations for <a href="http://opengeo.org/" target="_blank">OpenGeo</a>, and <a href="http://testingrange.com/" target="_blank">Rich Gibson</a>, programmer, <a href="http://geowanking.org/mailman/listinfo/geowanking_geowanking.org" target="_blank">GeoWanker</a>, <a href="http://gigapan.org/index.php" target="_blank">Gigapanner</a> and co-author of <a href="http://mappinghacks.com/" target="_blank">Mapping Hacks</a> with <a href="http://iconocla.st/cv.html" target="_blank">Schuyler Erle</a> and <a href="http://frot.org/" target="_blank">Jo Walsh</a> (Jo did a lot of <a href="http://frot.org/s/semantic_city.html" target="_blank">key early work on bottom up urban informatics</a> but unfortunately couldn&#8217;t make it to WhereWeek this year).</p>
<p>Check <a id="zaq4" title="Gigapan.org" href="http://www.gigapan.org/index.php" target="_blank">Gigapan.org</a> out! <strong>&#8220;The GigaPan<span class="trademark">SM</span> process allows users to upload, share, and explore brilliant gigapixel+ panoramas from around the globe.&#8221;</strong></p>
<p>I also interviewed Paul Ramsey, Senior Consultant at OpenGeo, so more on OpenGeo is upcoming (see Paul&#8217;s <a href="http://blog.cleverelephant.ca/2009/05/where-re-cap.html" target="_blank">Where ReCap</a>). <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43773">Justin Deoliveira</a> (OpenGeo) and <a href="http://en.oreilly.com/where2009/public/schedule/speaker/59688">Sophia Parafina</a> did a session, <a class="url uid" name="session7165" href="http://en.oreilly.com/where2009/public/schedule/detail/7165">GeoServer, GeoWebCache + OpenLayers: The OpenGeo Stack</a>, which unfortunately I missed as it was before I arrived Tuesday.</p>
<p><span class="bio"><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/sophiaandrich.jpg"><img class="alignnone size-medium wp-image-3631" title="sophiaandrich" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/sophiaandrich-300x199.jpg" alt="sophiaandrich" width="300" height="199" /></a></span></p>
<p>I met Rich Gibson <a href="http://www.flickr.com/photos/ugotrade/sets/72157615022689427/" target="_blank">at Etech 2009 playing Werewolf</a>, and Rich introduced me to his co-author on <a href="http://search.barnesandnoble.com/Mapping-Hacks/Schuyler-Erie/e/9780596007034" target="_blank">Mapping Hacks</a> and alpha geek supreme, Schuyler Erle, who also wrote the clustr code that The Shape of Alpha uses.</p>
<p><a href="http://joshua.schachter.org/" target="_blank">Joshua Schachter</a>, founder of Delicious and the <a href="http://geowanking.org/mailman/listinfo/geowanking_geowanking.org" target="_blank">GeoWanking mailing list</a> [and <a href="http://geourl.org/" target="_blank">GeoURL</a> and <a href="http://memepool.com/" target="_blank">MemePool</a>!], now at Google, came to WhereCamp and was mobbed by a small crowd eager to get their hands on one of the developer G Phones he was handing out from a large box.</p>
<p>GeoWanking, which is now run by O&#8217;Reilly Media, has been the incubator for all things location aware and &#8220;neogeography&#8221; discussions since 2003 &#8211; check out <a href="http://sproke.blogspot.com/2009/05/paleogeography-vs-neography.html" target="_blank">sproke</a> for a <a href="http://sproke.blogspot.com/2009/05/paleogeography-vs-neography.html">Paleogeography vs Neogeography</a> smackdown (which, as Sophia notes, was a common topic of discussion at Where 2.0) in which geowanking rules in the form of a list traffic comparison.</p>
<p>Sophia and Rich shared some of their perspective on the early days of GeoWanking and the creation of the HeadMap Manifesto with me, and pointed me to many other people to talk to. The prime mover of the HeadMap Manifesto, Ben Russell, has retired from the scene &#8211; perhaps bored by seeing a radical vision go thoroughly mainstream, or exhausted by the rigors of carrying an idea through the early blue sky years, or just simply doing something else? I don&#8217;t know.</p>
<p><a href="http://docs.google.com/tecfa.unige.ch/%7Enova/headmap-manifesto.PDF" target="_blank">The HeadMap Manifesto</a> is still vibrant today even as much of what it envisaged has already been realized. HeadMap assembled the future in a poetry of fragments:</p>
<p><strong>&#8220;you can search for sadness in new york. people within a mile of each other who have never met stop what they are doing and organize spontaneously to help with some task or other.&#8221;</strong></p>
<p>Anselm explained to me that what powered this social cartography revolution, from his POV, was actually IRC.</p>
<p><strong>&#8220;We had a channel on IRC called &#8220;#geo&#8221;, and many of us met there. I met Ben Russell at MathEngine in the UK. Ben and I were fascinated by the future of maps. Ben, Jo and I met Schuyler, Dav, Dan Brickley (who worked for Tim Berners-Lee, who invented the Web), Rich Gibson, Joshua Schachter (who was just a geek at Morgan Stanley at the time)&#8230; and the snowball took off&#8230; many others.</strong></p>
<p><strong>We stormed ETECH (Schuyler met Jo there). We got invited to FooCamp. Schuyler was married to Jo by Marc Powell (Food Genome) and lived at his house. We pushed so hard on the social cartography revolution.</strong></p>
<p><strong>I did a spinny globe for geourl &#8211; a project by some hacker named Joshua Schachter&#8230; we were all friends for years and we had never even met.&#8221;</strong></p>
<h3>&#8220;Can AR researchers harness these new approaches to index reality?&#8221;</h3>
<p><object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="425" height="344" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0"><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://www.youtube.com/v/y_LXpqmdk9U&amp;hl=en&amp;fs=1" /><param name="allowfullscreen" value="true" /><embed type="application/x-shockwave-flash" width="425" height="344" src="http://www.youtube.com/v/y_LXpqmdk9U&amp;hl=en&amp;fs=1" allowscriptaccess="always" allowfullscreen="true"></embed></object></p>
<p>Radiohead&#8217;s laser (as opposed to video) clip, made using <a href="http://www.velodyne.com/lidar/" target="_blank">Lidar</a></p>
<p><a id="t7u3" title="If you have read my interview with Ori Inbar," href="../../2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">If you have read my interview with Ori Inbar,</a> you will know how excited I was to attend The Mobile Reality panel. <a href="http://en.oreilly.com/where2009/public/schedule/detail/7197" target="_blank">The video is up</a> and it is really awesome to hear <a href="http://en.oreilly.com/where2009/public/schedule/speaker/35457">Raven Zachary</a> (on twitter @<a href="http://www.twitter.com/ravenme">ravenme</a>) get into the fray with augmented reality.</p>
<p>The main takeaway for me from the Mobile Reality panel was that we shouldn&#8217;t get too hung up on the difficulties of achieving fully immersive visual augmented reality, twiddling our thumbs while we wait for the long-anticipated sexy lightweight eyewear &#8211; which is still in a coming-soon phase (for more on immersive augmented reality, see my upcoming interview with <a href="http://www.cc.gatech.edu/%7Eblair/home.html" target="_blank">Blair MacIntyre</a>). In the meantime, there are plenty of delightful and useful ways to augment our experience of the world &#8211; and not all of these augmented realities rely solely on smart phones, as John S. Zelek showed in his presentation on <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43786" target="_blank">&#8220;Wearable Sensory Substitution Device for Navigation.&#8221;</a> I also had an interesting discussion at lunch with Ori Inbar about the use of audio for augmented reality projects.</p>
<p>Where 2.0 clearly demonstrated that we now have an unprecedented amount of information from mapping our world, <a href="http://gamesalfresco.com/2009/05/26/where-2-0-the-world-is-mapped-now-use-it-to-augmented-our-reality/" target="_blank">as Ori Inbar noted in his conference roundup.</a> Ori writes:</p>
<p><strong>&#8220;My point is not a shocker: all we need is to tap into this information and bring it, in context, into people&#8217;s field of view.&#8221;</strong></p>
<p>As Ori noted <strong><a href="http://www.earthmine.com/" target="_blank">Earthmine</a></strong> and <strong><a href="http://www.velodyne.com/lidar/" target="_blank">Velodyne&#8217;s Lidar</a></strong> showed off two new approaches to mapping the world that have potential to create new opportunities for augmented reality:</p>
<p><strong>&#8220;<a href="http://www.earthmine.com/" target="_blank">Earthmine</a> uses its own camera-based device to index reality, at the street level, one pixel at a time. They have just announced <a href="http://wildstylecity.com/wsc/" target="_blank">Wild Style City</a>, an application that allows anyone to create virtual graffiti on top of designated public spaces. However, at this point, you can only experience it on a PC!&#8221;</strong></p>
<p><a href="http://www.velodyne.com/lidar/" target="_blank">Lidar</a>, Ori notes, has also embarked on a mission to map the outdoors. But, the question Ori highlights is:</p>
<p><strong>&#8220;Can AR researchers harness these new approaches to index reality?&#8221;</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/johnzelekandbradyforrest.jpg"><img class="alignnone size-medium wp-image-3660" title="johnzelekandbradyforrest" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/johnzelekandbradyforrest-300x199.jpg" alt="johnzelekandbradyforrest" width="300" height="199" /></a></p>
<p>Brady Forrest inspects John S. Zelek&#8217;s <a href="http://en.oreilly.com/where2009/public/schedule/speaker/43786" target="_blank">&#8220;Wearable Sensory Substitution Device for Navigation&#8221;</a> at Where Fair before putting it on and being guided by sensory nudges at the cardinal points in the belt.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/bradyforrestpost.jpg"><img class="alignnone size-medium wp-image-3661" title="bradyforrestpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/bradyforrestpost-199x300.jpg" alt="bradyforrestpost" width="199" height="300" /></a></p>
<h3>Coolest Mobile Locative Media App. at Where Fair</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-61.png"><img class="alignnone size-full wp-image-3682" title="picture-61" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-61.png" alt="picture-61" width="176" height="269" /></a></p>
<p><a href="http://www.sonycsl.co.jp/person/shio.html" target="_blank">Atsushi Shionozaki </a>of<strong> <a href="http://www.placeengine.com/en" target="_blank">Place Engine</a></strong> &#8211; &#8220;<strong>a core technology that enables a device equipped with Wi-Fi such as a laptop PC or smart phone to determine its current location,&#8221; </strong>demoed the coolest location aware mobile app in Where Fair &#8211; <a id="uwuf" title="Oedo Yokai" href="http://service.koozyt.com/oedo/" target="_blank">Oedo Yokai</a>. Working with ethnologist, Dr. Hiro Kubota and artist Atsushi Morioka, &#8220;Oedo Yokai&#8221; is <a id="gtb2" title="Koozyt's" href="http://www.koozyt.com/" target="_blank">Koozyt&#8217;s</a> <strong>&#8220;first attempt to cross IT (Location Information) and Folkloristics.&#8221; </strong></p>
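<p>Place Engine&#8217;s actual technique is proprietary, but the general idea behind this kind of Wi-Fi positioning &#8211; fingerprinting, matching the signal strengths a device currently sees against a database of previously surveyed spots &#8211; is simple enough to sketch. Everything below (the access point names, the RSSI values, the spot names) is made up for illustration:</p>

```python
import math

# Hypothetical fingerprint database: surveyed spots and the Wi-Fi signal
# strengths (RSSI, in dBm) observed from nearby access points at each spot.
FINGERPRINTS = {
    "shrine_gate": {"ap_a": -40, "ap_b": -70, "ap_c": -85},
    "crossroads":  {"ap_a": -75, "ap_b": -45, "ap_c": -60},
    "hilltop":     {"ap_a": -90, "ap_b": -80, "ap_c": -50},
}

def distance(scan, fingerprint):
    """Euclidean distance between two RSSI vectors over their shared APs."""
    shared = set(scan) & set(fingerprint)
    if not shared:
        return float("inf")  # no APs in common: this spot can't match
    return math.sqrt(sum((scan[ap] - fingerprint[ap]) ** 2 for ap in shared))

def locate(scan):
    """Return the surveyed spot whose fingerprint best matches the scan."""
    return min(FINGERPRINTS, key=lambda spot: distance(scan, FINGERPRINTS[spot]))

# A device near the shrine gate sees strong ap_a and weak ap_c:
print(locate({"ap_a": -42, "ap_b": -68, "ap_c": -88}))  # shrine_gate
```

<p>A real system deals with noisy signals, moving access points, and far larger databases, but the nearest-neighbor matching above is the core of the fingerprinting approach.</p>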
<p><strong>&#8220;The Japanese &#8220;Yokai&#8221; are known to dwell and appear at specific locations. They can frequently be seen within the grounds of shrines and temples, believed to be the border between this world and the afterlife, or in more common places like on a hill or at a crossroads. If the &#8220;Yokai&#8221; symbolize the mystery, legend, and lore associated with places, as our interests fade from actual locations, the roles they play in modern day society will diminish, and the &#8220;Yokai&#8221; might then cease to appear at all.&#8221;</strong></p>
<p>I love this idea of bringing the ancient spirits of place back into our lives with our new tools of location awareness.</p>
<p>Oedo Yokai also reminds me of Aaron Straup Cope&#8217;s work on &#8220;<a href="http://www.aaronland.info/weblog/2008/07/27/invisible/#historybox" target="_blank">the idea of every spot being a &#8216;history box&#8217;</a>,&#8221; which he explained is &#8220;one of the threads behind <a href="http://blog.flickr.net/en/2009/02/24/an-abundant-present/" target="_blank">the &#8216;nearby&#8217; project at Flickr</a>.&#8221;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/oedoyokai.jpg"><img class="alignnone size-medium wp-image-3683" title="oedoyokai" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/oedoyokai-300x199.jpg" alt="oedoyokai" width="300" height="199" /></a></p>
<h3>The Food Genome</h3>
<p>I cannot end this roundup of WhereWeek without a mention of <a href="http://www.foodgenome.com/home" target="_blank">The Food Genome</a>.</p>
<p><strong>&#8220;Food Genome is a big hungry brain that scours the internet, trying to learn everything there is to know about food.&#8221;</strong></p>
<p>Watch out for the upcoming launch of this project, it stole the show with an exciting presentation at WhereCamp. You can follow <a href="http://twitter.com/foodgenome">@foodgenome on Twitter</a> now.</p>
<p>To get one of the gorgeous Food Genome brochures you had to ask Mark Powell a good question. Notice an eager hand reaching out in the picture below. I asked, &#8220;how would the basic building blocks of the food genome be licensed?&#8221; I got my brochure and a rain check on an answer to my question.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/foodgenomepost.jpg"><img class="alignnone size-medium wp-image-3664" title="foodgenomepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/05/foodgenomepost-199x300.jpg" alt="foodgenomepost" width="199" height="300" /></a></p>
<h3>The Ubiquitous Media Studio</h3>
<p>Another highlight of WhereCamp was hearing from <a id="nfup" title="Gene Becker" href="http://lightninglaboratories.com/about.html" target="_blank">Gene Becker</a> about his new project, the <a id="bs9-" title="Ubiquitous Media Studio" href="http://ubistudio.org/" target="_blank">Ubiquitous Media Studio</a>, which will be located in Palo Alto. The project is still in the early stages of development, but it sounds really exciting. I am looking forward to being involved from the East Coast. If you&#8217;re curious where this is going, <strong><a href="http://twitter.com/ubistudio">follow @ubistudio on Twitter</a></strong> to stay updated.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/gene.jpg"><img class="alignnone size-medium wp-image-3684" title="gene" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/gene-300x300.jpg" alt="gene" width="300" height="300" /></a></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/feed/</wfw:commentRss>
		<slash:comments>14</slash:comments>
		</item>
	</channel>
</rss>
