<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; Unity3D</title>
	<atom:link href="https://www.ugotrade.com/tag/unity3d/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Vision Based Augmented Reality (AR) in Smart Phones &#8211; Qualcomm&#8217;s AR SDK: Interview with Jay Wright</title>
		<link>https://www.ugotrade.com/2010/08/05/vision-based-augmented-reality-ar-in-smart-phones-qualcomms-ar-sdk-interview-with-jay-wright/</link>
		<comments>https://www.ugotrade.com/2010/08/05/vision-based-augmented-reality-ar-in-smart-phones-qualcomms-ar-sdk-interview-with-jay-wright/#comments</comments>
		<pubDate>Thu, 05 Aug 2010 22:56:11 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Android]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[Mobile Reality]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Anselm Hook]]></category>
		<category><![CDATA[AR eyewear]]></category>
		<category><![CDATA[AR HMDs]]></category>
		<category><![CDATA[AR standards]]></category>
		<category><![CDATA[AR version of Rock'em Sock'em]]></category>
		<category><![CDATA[AR Wave]]></category>
		<category><![CDATA[are2010]]></category>
		<category><![CDATA[ARWave]]></category>
		<category><![CDATA[augmented reality event]]></category>
		<category><![CDATA[augmented reality standards]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Chokkan Nabi]]></category>
		<category><![CDATA[Christian Doppler Handheld AR LAB in Graz]]></category>
		<category><![CDATA[Davide Carnovale]]></category>
		<category><![CDATA[Gene Becker]]></category>
		<category><![CDATA[going beyond compass/gps based AR]]></category>
		<category><![CDATA[google goggles]]></category>
		<category><![CDATA[InsideAR]]></category>
		<category><![CDATA[Junaio]]></category>
		<category><![CDATA[Junaio glue]]></category>
		<category><![CDATA[Karma Augmented Reality Mobile Architecture]]></category>
		<category><![CDATA[Kooaba]]></category>
		<category><![CDATA[Layar]]></category>
		<category><![CDATA[Maarten Lens-FitzGerald]]></category>
		<category><![CDATA[markerless tracking]]></category>
		<category><![CDATA[Markus Strickler]]></category>
		<category><![CDATA[Metaio]]></category>
		<category><![CDATA[Ogmento]]></category>
		<category><![CDATA[open Android JPCT 3D engine]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[Patrick O'Shaughnessey]]></category>
		<category><![CDATA[point and find]]></category>
		<category><![CDATA[Qualcomm]]></category>
		<category><![CDATA[Qualcomm AR Competition]]></category>
		<category><![CDATA[Qualcomm Augmented Reality Competition]]></category>
		<category><![CDATA[Qualcomm Augmented Reality Developer Challenge]]></category>
		<category><![CDATA[Qualcomm Augmented reality SDK]]></category>
		<category><![CDATA[Qualcomm Developer Challenge]]></category>
		<category><![CDATA[Simulation3D]]></category>
		<category><![CDATA[Snapdragon]]></category>
		<category><![CDATA[Thomas Alt]]></category>
		<category><![CDATA[Thomas Wrobel]]></category>
		<category><![CDATA[Total Immersion]]></category>
		<category><![CDATA[Unifeye Mobile SDK]]></category>
		<category><![CDATA[Unifeye SDK]]></category>
		<category><![CDATA[Unity for AR]]></category>
		<category><![CDATA[Unity for augmented reality]]></category>
		<category><![CDATA[Unity3D]]></category>
		<category><![CDATA[Uplinq 2010]]></category>
		<category><![CDATA[vision based AR]]></category>
		<category><![CDATA[vision based augmented reality]]></category>
		<category><![CDATA[visual search]]></category>
		<category><![CDATA[Yohan Baillot]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=5593</guid>
		<description><![CDATA[Recently, Qualcomm announced an SDK for vision based augmented reality &#8211; currently in private beta and open to the public this fall. The Qualcomm augmented reality (AR) bonanza will launch with a $200,000 developer challenge and an SDK that will put vision based augmented reality into the hands of developers without licensing fees. This is [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.qualcomm.com/videos/explore?search=mattel&amp;sort=&amp;channel=All" target="_blank"><img class="alignnone size-medium wp-image-5616" title="Screen shot 2010-08-05 at 6.07.36 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/Screen-shot-2010-08-05-at-6.07.36-PM-300x212.png" alt="Screen shot 2010-08-05 at 6.07.36 PM" width="300" height="212" /></a></p>
<p>Recently, <a href="http://www.qualcomm.com/" target="_blank">Qualcomm</a> announced <a href="http://qdevnet.com/ar" target="_blank">an SDK for vision based augmented reality</a> &#8211; currently in <a href="http://qdevnet.com/dev/augmented-reality/private-beta-program" target="_blank">private beta</a> and open to the public this fall.  The Qualcomm augmented reality (AR) bonanza will launch with a <a href="http://qdevnet.com/dev/augmented-reality/developer-challenge" target="_blank">$200,000 developer challenge</a> and an SDK that will put vision based augmented reality into the hands of developers without licensing fees.</p>
<p>This is a big step forward for augmented reality and a very important move made by an industry giant to support the rapidly evolving AR industry.  Innovation at all levels of the AR stack, particularly at the hardware level (CPU/GPU optimization), is vital for the full vision of augmented reality &#8211; media tightly registered to physical space &#8211; to take center stage.  Vision based AR takes mobile AR beyond compass/GPS based AR post-its, which are only loosely connected to the world (but are the staple of most current AR apps), towards the holy grail of AR &#8211; markerless tracking with the whole world as the platform.</p>
<p>Click on the image above or <a href="http://www.qualcomm.com/videos/explore?search=mattel&amp;sort=&amp;channel=All" target="_blank">see here</a> for a video demo of an AR version of the Rock&#8217;em Sock&#8217;em Robots game.  <a href="http://www.mattel.com/">Mattel</a>, one of the first companies working with the SDK, demoed AR Rock&#8217;em Sock&#8217;em at the <a href="http://uplinq.com/">Uplinq 2010</a> conference (see <a href="http://www.readwriteweb.com/archives/qualcomm_launching_mobile_sdk_for_vision-based_ar_on_android_this_fall.php" target="_blank">Chris Cameron&#8217;s ReadWriteWeb write-up</a> on <a href="http://uplinq.com/">Uplinq 2010</a>).</p>
<p>The Qualcomm AR stack, which reaches from the metal to developer APIs, will give Android developers an important edge in AR development.   And, when vision based AR starts getting integrated with visual search capabilities, and combined with cool tools like <a href="http://unity3d.com/" target="_blank">Unity</a>, we will start to see the augmented world get really interesting.</p>
<p>Visual search is already an area of AR getting a lot of attention, with <a href="http://www.google.com/mobile/goggles/#text" target="_blank">Google Goggles</a>, <a href="http://europe.nokia.com/services-and-apps/nokia-point-and-find" target="_blank">Point and Find</a>, <a href="http://www.cnet.com.au/augmented-reality-taking-off-on-japanese-smartphones-339304998.htm" target="_blank">Japan&#8217;s NTT DoCoMo set to launch &#8220;chokkan nabi,&#8221;</a> or &#8220;intuitive navigation,&#8221; in September, and the <a href="http://www.layarnews.com/2010/07/kooaba-meets-layar.html" target="_blank">recent partnership between Layar and Kooaba</a>.  <a href="http://www.metaio.com/" target="_blank">Metaio</a>&#8217;s mobile augmented reality platform <a href="http://www.metaio.com/products/junaio/" target="_blank">Junaio</a> is already integrated with <a href="http://www.kooaba.com/" target="_blank">Kooaba&#8217;s</a> computer vision capabilities.</p>
<p>And, of course, I am particularly excited about including open distributed real time communications for AR in this stack, which is why I asked a group of developers who have been inputting into the <a href="http://arwave.org/" target="_blank">ARWave</a> project if they had questions for Jay Wright, Qualcomm.  Thank you <a href="http://www.linkedin.com/in/yohanbaillot" target="_blank">Yohan Baillot</a>, <a href="http://lightninglaboratories.com/" target="_blank">Gene Becker</a>, <a href="http://www.hook.org/" target="_blank">Anselm Hook</a>, <a href="http://patchedreality.com/about/" target="_blank">Patrick O&#8217;Shaughnessey</a>, <a href="http://www.lostagain.nl/" target="_blank">Thomas Wrobel</a>, <a href="http://twitter.com/kusako" target="_blank">Markus Strickler</a>, and <a href="http://twitter.com/need2revolt" target="_blank">Davide Carnovale</a> for your input.  [Note: see my upcoming post about the future of <a href="http://arwave.org/">ARWave</a> and real time distributed communications for AR following <a href="http://googleblog.blogspot.com/2010/08/update-on-google-wave.html" target="_blank">this Google announcement</a>.]</p>
<p><a href="http://www.linkedin.com/in/jaywright" target="_blank">Jay Wright</a>, â€œis responsible for developing and driving Qualcommâ€™s augmented reality commercialization strategy.â€ He â€œhandles partnerships with leading innovators in industry and academia and leads Qualcommâ€™s efforts in enabling augmented reality within the mobile ecosystem.â€  In the interview below, Jay very generously answers our questions in detail.</p>
<p>A key contributor of questions for this interview is Yohan Baillot.  Yohan is working on a full vision of AR &#8211; integrating computer vision, visual search, open distributed real time communications and AR eyewear.  Yohan Baillot is founder of <a href="http://www.simulation3d.biz/" target="_blank">Simulation3D</a>, a consulting and system integration company specializing in interactive visualization systems and eyewear-based AR systems.  (I hope to bring you an interview with Yohan soon!).</p>
<p>Qualcomm was the title sponsor for <a href="http://augmentedrealityevent.com/" target="_blank">are2010, Augmented Reality Event</a>, and  played a vital role in making this event an historic gathering of the talent and creative minds at the heart of the emerging AR industry.  Watch out for the videos of the are2010 sessions to be posted at the end of August.  My are2010 co-chair, <a href="http://ogmento.com/team" target="_blank">Ori Inbar</a>, is preparing them to go online while kicking his newly funded start up, <a href="http://ogmento.com/" target="_blank">Ogmento</a>, into high gear! Ogmento is also one of the start ups pioneering vision based AR.</p>
<p><a href="http://www.metaio.com/" target="_blank">Metaio</a>, (with <a href="http://www.t-immersion.com/" target="_blank">Total Immersion</a>, they are one of the first augmented reality companies), has played a key role in bringing a vision component to smart phone augmented reality apps with their <a href="http://www.metaio.com/products/" target="_blank">Unifeye mobile SDK</a>.Â  Junaio, Metaioâ€™s own mobile augmented reality platform has gone beyond location based AR with â€œjunaio glueâ€ &#8211; â€œthe camera&#8217;s eye is now able to identify objects and &#8220;glue&#8221; object specific real-time, dynamic, social and 3D information onto the object itself,â€Â (see my upcoming interview with Metaio founder, Thomas Alt).Â   Also, recently, Layar &#8211; who continue to innovate at a breathtaking pace, announced a partnership with the computer vision company Kooaba.</p>
<p>Both Maarten Lens-FitzGerald of Layar and Thomas Alt of Metaio, when I spoke to them recently, saw the Qualcomm SDK as a very positive development for AR, and they look forward to exploring its capabilities and integrating it where appropriate with their AR tools.  See more about <a href="http://site.layar.com/company/blog/layar-will-visit-the-us/" target="_blank">Layar&#8217;s upcoming visit to the US here &#8211; August 10th NYC, and August 12th SF</a>.  Also save the date, Sept 27th in Munich, for <a href="http://www.metaio.com/index.php?id=1103" target="_blank">InsideAR</a>, Metaio&#8217;s upcoming conference.</p>
<p>It is clear that vision based AR will be driving the next wave of AR apps.  And, as Maarten and Thomas both pointed out, it will be interesting to see which use cases capture the imagination of users the most.  Having more tools freely available to AR developers will certainly be a boost to creativity.  And Qualcomm&#8217;s SDK is going to give Android developers, in particular, a big opportunity to take the lead.</p>
<h3>Interview with Jay Wright, Director, Business Development, Qualcomm</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/JayWright.jpg"><img class="alignnone size-medium wp-image-5598" title="JayWright" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/JayWright-300x255.jpg" alt="JayWright" width="300" height="255" /></a></p>
<p><strong>Tish Shute:</strong> Before I start with questions on the new Qualcomm vision based augmented reality SDK, I want to briefly look ahead to what many people feel is vital for the full realization of augmented reality &#8211; head mounted displays, or more specifically, comfortable, sexy AR eyewear.  Is Qualcomm going to be involved in the development of augmented eyewear and wearable displays?</p>
<p><strong>Jay Wright:   I think there&#8217;s some core technology that needs to come together so we can have what we think needs to be a see-through head mounted display with a decent field of view.  And that looks like something that is quite possibly further than a three to five year horizon.</strong></p>
<p><strong>Tish Shute:</strong> Gene Becker asked some interesting general questions about the Qualcomm AR initiatives.  He said, &#8220;I&#8217;m unclear exactly what Qualcomm&#8217;s goal is.&#8221;  It would be interesting to hear from you the Qualcomm view, from the top down.</p>
<p><strong>Jay Wright:</strong> <strong> Our largest revenue stream comes from sales of chipsets.    And we see augmented reality as a technology that drives demand for increasing amounts of processing power.  So we want to create demand for chips, higher-end chips, and augmented reality does that.  Specifically vision based augmented reality because it is so computationally intensive.</strong></p>
<p><strong>Tish Shute:</strong> Yes.  And I think that is why people are very excited by the Qualcomm SDK.  It is the first free toolkit for developers to build vision apps from, isn&#8217;t it?  There&#8217;s been nothing freely available before this, has there?  But Qualcomm is also paying attention to the complete AR stack to support vision based AR development, from the chips to game/app development tools like Unity.</p>
<p><strong>Jay Wright:  That&#8217;s really the goal.  We&#8217;re not here to be in the augmented reality applications business.  Qualcomm&#8217;s role in the ecosystem has been to serve as an enabler.  And that&#8217;s what we want to do with augmented reality: provide the enabling technology that allows the entire ecosystem to flourish.</strong></p>
<h3>&#8220;Augmented Reality has a number of attributes that make it a  great fit for Qualcomm&#8217;s core competencies&#8221;</h3>
<p><strong>Augmented Reality has a number of attributes that make it a great fit for Qualcomm&#8217;s core competencies.  It&#8217;s very computationally intensive, algorithmically complex, requires tight integration of hardware and software, and benefits from tight integration of multiple hardware components.  And that&#8217;s the kind of problem we like here, where we can apply our core competence of really optimizing complex systems for performance, while at the same time minimizing power consumption.</strong></p>
<p><strong>And as you know Tish, mobile AR is really extremely power sensitive.  We sometimes talk about it as a battery&#8217;s worst nightmare.  It&#8217;s roughly equivalent to playing a 3D game and recording a video all at the same time.</strong></p>
<p><strong>Whenever there is something that takes a lot of power, that&#8217;s a definite opportunity for us to optimize it.</strong></p>
<p><strong>Tish Shute:</strong> Right.  One of the core businesses is chips, right, but for Qualcomm there&#8217;s basically a lot of profit in licensing.  When I talked to the developer community about the Qualcomm SDK, developers&#8217; first question was, &#8220;What&#8217;s the licensing?  What&#8217;s this going to cost us in the long run to develop on this SDK, licensing-wise?&#8221;  And they had all different takes on this.  So everyone had different ideas about what your approach to licensing might or might not be.  Could you clarify the approach to licensing, as I think this is a core concern for developers.</p>
<p><strong>Jay Wright:   Anytime you see something for free, you kind of say, &#8220;Hey, what&#8217;s the hook?&#8221;  So yes, it&#8217;s definitely a logical question.  Our intent is not to generate licensing revenue from application developers using the SDK.  So the SDK will be made available free of charge for development, and it will also be free of charge for developers to deploy applications.</strong></p>
<p><strong>Tish Shute:</strong> Now, this is another question.  You also include not just image recognition capabilities but Unity in the package you are offering developers.  Unity products usually involve a license.  They do have some free products too, I think.  But how does this work?  And how do you separate your part from their part, or don&#8217;t you?</p>
<p><strong>Jay Wright:  That&#8217;s a good question.  What we&#8217;re trying to do with the platform is incorporate it into tools that people already know how to use.  So we&#8217;re actually going to have the SDK support two different tool chains.  One of them is the Android SDK and NDK.  And then the other one is Unity.</strong></p>
<p><strong>We&#8217;re working with Unity to create an extension to the Unity environment that will be available as part of the Unity installer when you install Unity from the Unity website.  Developers will still be paying whatever license fees are associated with Unity&#8217;s products on their existing pricing schedule.</strong></p>
<p><strong>Tish Shute:</strong> One of Thomas Wrobel&#8217;s questions is whether developers can just use the image recognition without Unity.  Your answer is yes, you can work with the computer vision component of the SDK separate from Unity?</p>
<p><strong>Jay Wright:  Yes, you can.</strong></p>
<p><strong>Tish Shute:</strong> Good because we would like to build a completely open Android client for ARWave, and not tie it to Unity unless people choose to.  He&#8217;s using the <a href="http://www.jpct.net/" target="_blank">open Android JPCT 3D engine</a>, which he&#8217;s adapting for AR.  So he could actually use the part of the SDK that does image recognition and association with that, right?</p>
<p><strong>Jay Wright:  That&#8217;s correct.  You are not required to use Unity.  Unity is just one option for building the application.</strong></p>
<p><strong>Tish Shute:</strong> Great! That&#8217;s very good.  But I&#8217;m sure many developers are going to jump on the chance to use Unity.  But I mean it&#8217;s nice to be flexible because it&#8217;s so early for AR that people have different ideas and new use cases coming up all the time.  I think it&#8217;s excellent you&#8217;ve divided that.</p>
<p>Another of Thomas&#8217;s questions was, &#8220;Can developers use their own positioning data sharing solution?&#8221;  He&#8217;s really talking about AR blips.</p>
<p><strong>Jay Wright:  With data sharing solutions, I am assuming that by data he means augmentation data or graphics?</strong></p>
<p><strong>Tish Shute:</strong> Yes, and I&#8217;ll ask him to elaborate.  But, at the moment, everyone is using different ideas for POI, aren&#8217;t they?</p>
<h3>&#8220;The goal with our platform is to make it just as easy for a  developer to create 3D content for the real world as it is for a game  world or a virtual world.&#8221;</h3>
<p><strong>Jay Wright:  Yes.  So let me answer it this way, Tish.  The goal with our platform is to make it just as easy for a developer to create 3D content for the real world as it is for a game world or a virtual world.  So all we&#8217;re really trying to do is provide the computer vision piece that makes the real world look like a bunch of geometric surfaces, and potentially some meta data that is associated with them so you know what you are looking at.</strong></p>
<p><strong>So that means from a developer&#8217;s perspective, you are still doing all of the 3D content, all of the animations, all of the game logic, all of the rendering.  You are still doing that all yourself.  So if you think about doing an AR game, you are doing everything you used to do, except you are not creating a virtual terrain.  You are just going to map it in the real world.</strong></p>
<p><strong>So if you want to do a browser that is doing POIs, your POI data, or augmentation, or meta data, or whatever it is, that can be in your application, it can be in the cloud, it can be wherever you want to put it.  We&#8217;re not putting any constraints on what that content is or where it&#8217;s stored.</strong></p>
<p><strong>Tish Shute:</strong> Right, and that&#8217;s what I hoped for.  And I think that does answer the question.  People are interested to know how far Qualcomm is going with this.  For instance, Gene Becker asked: &#8220;do they see a business at a certain level in the AR stack?&#8221;  As you said AR development basically feeds into the core business of chip development, right?  But does Qualcomm also see some new business models developing?</p>
<p><strong>Jay Wright:   I think it&#8217;s foreseeable that Qualcomm could identify other business opportunities down the line.  But we&#8217;re certainly not there today.  Today, our motivation for the investment in AR is to create technology that is going to advance the chipset business.</strong></p>
<p><strong>Tish Shute:</strong> When the news came out about Qualcomm&#8217;s support of a game development studio at Georgia Tech at the same time as the SDK, I wondered what the scope of Qualcomm&#8217;s interest was [for more on using Unity for AR development see the <a href="http://www.qualcomm.com/partials/service/video/14230?primary=0x319cb5&amp;secondary=0xffffff&amp;simple_endScreen=true&amp;disable_embed=false&amp;disable_send=false&amp;send_mailto=http://www.uplinq.com/&amp;disable_embedViewMore=true&amp;simple_infoPanel=true" target="_blank">Vision-Based Augmented Reality Technical Super Session video</a> from <a href="http://uplinq.com/">Uplinq 2010</a>].  For example, I am interested to know how the Qualcomm initiative in developing an AR stack connects to the effort to introduce an AR browser based on web standards, i.e., <a href="https://research.cc.gatech.edu/polaris/content/home" target="_blank">KHARMA, the KML/HTML Augmented Reality Mobile Architecture from Blair MacIntyre and the Georgia Tech team</a> (image below).  Are you supporting the open standards based browser development too?</p>
<p><strong>Jay Wright:   Blair is going to continue to work on the browser effort.  And it&#8217;s our expectation that he will use our SDK and technologies for vision pieces of the browser effort where appropriate.  So they are certainly not mutually exclusive.  I would just think about our technology as one element of what may be used in that browser, as I expect it would be an element of what any other app developer would put in their application, whether it be browser, or game, or whatever.</strong></p>
<p><strong>Tish Shute:</strong> Yes.  Now, this is an interesting question, which is sort of connected&#8230; I&#8217;m trying to keep some form of narrative for this!  It follows from the question about Blair&#8217;s web standards based browser.  A few people have asked me why we haven&#8217;t heard more from Qualcomm in all these various standards discussions that are starting to come up.  I mean is it just too early, or are you too busy, or what?</p>
<p><strong>Jay Wright:  No, let me explain.  The type of standards that have come up so far have been around how HTML should be extended for geo-browser type applications.  And while that&#8217;s interesting, I think the standards efforts that Qualcomm would be more likely to be associated with in the near term are those related to APIs that are hardware accelerated.</strong></p>
<p><strong>So one of the things that we are in the process of doing right now, Tish &#8211; because as you know, Qualcomm is a company that adheres to standards and strives to produce a leading implementation of those standards on our hardware and software &#8211; is determining what API set within the existing SDK should be standardized.</strong></p>
<p><strong>Tish Shute:</strong> Right.</p>
<p>Now, my next question is, &#8220;Who are the other players at this level of the AR stack in the standards conversation? Who else is working at that level?&#8221;  Obviously, the AR Lab in Graz was, but now they are Qualcomm, right?</p>
<p><strong>Jay Wright:   They are still independent.  Qualcomm is the exclusive industrial partner of the Christian Doppler Handheld AR LAB in Graz.</strong></p>
<p><strong>Tish Shute:</strong> Does this compete with, say, the work that other AR start ups are doing?</p>
<p><strong>Jay Wright:  Our intent is not to compete with companies that have done augmented reality technology.  Our intent is to enable the entire ecosystem.  So we would like to work with both Metaio and Total Immersion to find ways that they can benefit from our technology.  That would be the hope &#8211; that our technology can kind of lift and float all boats in the ecosystem.</strong></p>
<p><strong>Tish Shute: </strong>There are not many implementations of vision based AR right now?  I mean obviously Microsoft is doing stuff because they have <a href="http://www.robots.ox.ac.uk/~gk/" target="_blank">Georg Klein</a> now, right, and there is Google Goggles, Total Immersion, Metaio, and it will be interesting to see where Layar&#8217;s partnership with Kooaba will lead?</p>
<p><strong>Jay Wright:  Yes.  I think there are relatively few commercial implementations of vision based AR stacks.</strong></p>
<p><strong>Tish Shute:</strong> One of Patrick O&#8217;Shaughnessey&#8217;s questions is that he wants to understand, very specifically, what features are going to be in the vision component.  Patrick O&#8217;Shaughnessey of <a href="http://patchedreality.com/" target="_blank">Patched Reality</a>, working with <a title="Circ.us" href="http://circ.us/" target="_blank">Circ.us</a>, <a title="Edelman" href="http://edelman.com/" target="_blank">Edelman</a>, and <a title="metaio" href="http://metaio.com/" target="_blank">Metaio</a>, used the Unifeye SDK to do <a href="http://mashable.com/2010/07/09/ben-and-jerrys-iphone-app/" target="_blank">a vision based AR app for Ben and Jerry&#8217;s</a> that&#8217;s been getting all the attention lately.  He was a speaker at are2010.</p>
<p>He very specifically wants to know what features will be included in the computer vision component.  He says, &#8220;I&#8217;m most interested in understanding what features are going to be in the vision component.  Is it marker based?&#8221;  Well, I know it&#8217;s more than marker based.  I saw some of it in <a href="http://www.readwriteweb.com/archives/qualcomm_launching_mobile_sdk_for_vision-based_ar_on_android_this_fall.php" target="_blank">Chris Cameron&#8217;s ReadWriteWeb write-up</a> on <a href="http://uplinq.com/">Uplinq 2010</a>.  Is it &#8220;NFT?  PTAM?  Other?  Also, are you integrating any backend services?&#8221;  That is an interesting question!</p>
<p><strong>Jay Wright:  So let&#8217;s get to the features on the client side, the vision based features.  There&#8217;s support for what AR aficionados would know as natural feature targets, or image based targets.  And we use those to represent, obviously, 2D planar surfaces.</strong></p>
<p><strong>The other thing that we are trying to do to set expectations, Tish, about where these can be used is to let people know that they work best in what we&#8217;re calling near-field environments.  So the idea isn&#8217;t that you use the system to create a large scale AR system that can recognize buildings indoors and outdoors.  It&#8217;s the idea where I can recreate 3D experiences that take place on surfaces that are in my immediate field of view, whether that be on the table in front of me, or on the floor, or on the wall, or on the shelf.</strong></p>
<p><strong>Also, when you talk about near field experiences, there are some other constraints that are implied.  Like, if it&#8217;s in front of me in my immediate field of view, it is probably going to be pretty well lit.  And lighting, of course, is an important requirement.</strong></p>
<p><strong>So we&#8217;ll support these natural feature targets, or image targets.  And we also have support for sort of a hybrid marker image type.  It&#8217;s something called a frame marker, which has kind of a black border with some dots on it.</strong></p>
<p><strong><a href="http://www.qualcomm.com/partials/service/video/14230?primary=0x319cb5&amp;secondary=0xffffff&amp;simple_endScreen=true&amp;disable_embed=false&amp;disable_send=false&amp;send_mailto=http://www.uplinq.com/&amp;disable_embedViewMore=true&amp;simple_infoPanel=true" target="_blank"><img class="alignnone size-medium wp-image-5610" title="Screen shot 2010-08-05 at 5.13.50 PM" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2010/08/Screen-shot-2010-08-05-at-5.13.50-PM-300x166.png" alt="Screen shot 2010-08-05 at 5.13.50 PM" width="300" height="166" /></a><br />
</strong></p>
<p>Click on the image above or <a href="http://www.qualcomm.com/partials/service/video/14230?primary=0x319cb5&amp;secondary=0xffffff&amp;simple_endScreen=true&amp;disable_embed=false&amp;disable_send=false&amp;send_mailto=http://www.uplinq.com/&amp;disable_embedViewMore=true&amp;simple_infoPanel=true" target="_blank">here to view the Vision-Based Augmented Reality Technical Super Session video</a> from <a href="http://uplinq.com/">Uplinq 2010</a>.</p>
<p><strong>Jay Wright:  So there&#8217;s this additional type.  And the reason for this additional hybrid marker type is it has a lower computational requirement than a natural feature target.  So the idea is these things can be used as game pieces or elements of play where I want to have a large number of them detected and tracked simultaneously.</strong></p>
<p><strong>So you can have, for example, one big natural feature target that serves as a game board or game surface, and you can use these other things as smaller game pieces.  And when you put them out, different types of content can appear on them and do different things.</strong></p>
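<p>[To make the board-plus-pieces pattern Jay describes concrete, here is a rough per-frame sketch in Java.  The type and method names are hypothetical stand-ins, not the actual Qualcomm SDK API: they only illustrate one natural feature target anchoring the board while many cheap frame markers act as pieces.]</p>
<pre><code>// Hypothetical sketch of the "board plus pieces" pattern.
// All names are illustrative, not the actual Qualcomm SDK API.
import java.util.List;

final class TrackedTarget {
    enum Kind { IMAGE_TARGET, FRAME_MARKER }
    final Kind kind;
    final int id;
    final float[] pose; // pose matrix reported by the tracker

    TrackedTarget(Kind kind, int id, float[] pose) {
        this.kind = kind;
        this.id = id;
        this.pose = pose;
    }
}

final class GameRenderer {
    // Called once per camera frame with whatever the tracker detected.
    void onFrame(List&lt;TrackedTarget&gt; targets) {
        for (TrackedTarget t : targets) {
            switch (t.kind) {
                case IMAGE_TARGET:   // the one large game board
                    drawGameBoard(t.pose);
                    break;
                case FRAME_MARKER:   // many cheap-to-track game pieces
                    drawGamePiece(t.id, t.pose);
                    break;
            }
        }
    }

    private void drawGameBoard(float[] pose) { /* render board content */ }

    private void drawGamePiece(int id, float[] pose) { /* render piece content */ }
}</code></pre>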
<p><strong>Tish Shute:</strong> Yes, that&#8217;s nice!  And the other thing I noticed was the virtual buttons.  How well developed is that?</p>
<p><strong>Jay Wright:  The idea behind virtual buttons is, in addition to supporting augmentation, we want to support interaction.  And we think there are going to be different types of user interaction with augmented reality content.  It may be hand tracking and finger tracking, but another compelling form we&#8217;ve identified so far is the ability for me to touch particular surfaces and have an event fire within the application.</strong></p>
<p><strong>So virtual buttons are rectangular areas on image targets that a developer can define, and they serve as buttons.  So you can create a target that is a game board, for example, and define certain regions.  And when the user covers that region with his hand, like pushing a button, your application can detect that event and take some action.</strong></p>
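<p>[As a rough illustration of the event model Jay describes, the sketch below shows how an application might react to a virtual button being covered and uncovered.  The listener interface and names are hypothetical, not the actual SDK API.]</p>
<pre><code>// Hypothetical, self-contained sketch of the virtual-button idea:
// the tracker reports when a rectangular region defined on an image
// target is occluded, and the application reacts.  These names are
// illustrative only, not the actual Qualcomm SDK API.

// Callback interface a tracker would invoke.
interface VirtualButtonListener {
    void onButtonPressed(String buttonName);   // region covered by a hand
    void onButtonReleased(String buttonName);  // region visible again
}

// Example game-board controller reacting to button events.
public class GameBoardController implements VirtualButtonListener {
    @Override
    public void onButtonPressed(String buttonName) {
        if ("attack".equals(buttonName)) {
            System.out.println("Attack region covered: fire game event");
        }
    }

    @Override
    public void onButtonReleased(String buttonName) {
        System.out.println(buttonName + " uncovered: stop the action");
    }
}</code></pre>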
<p><strong>Tish Shute:</strong> Nice!  And what is the documentation on these capabilities that is offered by Qualcomm&#8230; For example, Yohan Baillot, who is interested in integrating eyewear-based AR systems with smartphones, asked: How deep does this go?  Will there be full documentation on <a href="http://www.qualcomm.com/products_services/chipsets/snapdragon.html" target="_blank">Snapdragon</a> for people who want to work at that level?  Is there a chip SDK?</p>
<p><strong>Jay Wright:   Qualcomm&#8217;s model is to work with providers of the operating systems and deliver functionality of the chip through the operating system.  So many operating system APIs will take advantage of functionality that&#8217;s in the chip.  But there is no separate chip SDK per se.</strong></p>
<p><strong>Tish Shute:</strong> I suppose that does come up a little bit with one of Anselm Hook&#8217;s questions, because there is some overlap with Google Goggles here, isn&#8217;t there, in terms of what you&#8217;re doing, right?  Are you going to work closely with Google Goggles?</p>
<p><strong>Jay Wright: Google Goggles is performing what we&#8217;ve described as &#8216;visual search&#8217;.  So the idea is you take a picture, send it to the cloud to be identified, and the results come back.  I think if we see Google Goggles go in a direction where there&#8217;s an AR experience, that would be a good area for us to collaborate with Google.</strong></p>
<p><strong>Tish Shute:</strong> <a href="http://www.ugotrade.com/2010/01/17/visual-search-augmented-reality-and-a-social-commons-for-the-physical-world-platform-interview-with-anselm-hook/" target="_blank">Anselm Hook</a> is very interested in having some kind of open standard around this physical tagging of the world, right, &#8211; the physical world as a platform. But I suppose thatâ€™s down the road but is there a plan to start talking about open standards here &#8211; visual search with image recognition? Thatâ€™s a very powerful combination. (see my interview with Anselm Hook here).</p>
<p><strong>Jay Wright:    I think it is.  And we&#8217;re very interested to hear from developers and others that have ideas about how they would want to integrate with the functionality that we have to best enable those kinds of combined experiences.</strong></p>
<p><strong>Tish Shute:</strong> Well, I know Anselm has a lot of very important ideas on that.</p>
<p><strong>Jay Wright: I&#8217;d be very interested in hearing those because we want to do everything we can to enable the maximum number of applications and best user experience for anything that people want to do.</strong></p>
<p><strong>Tish Shute:</strong> Let&#8217;s go back to some specific questions about the platform, right?  For example, Yohan Baillot asked, &#8220;Is arbitrary image/tag recognition supported?  Is the tag/image specifiable by the user?  Is face recognition supported?&#8221;  Not yet, face recognition, right?</p>
<p><strong>Jay Wright:    Not yet.</strong></p>
<p><strong>Tish Shute:</strong> What are the plans with that?</p>
<p><strong>Jay Wright:    I think we&#8217;ve identified it as an interesting area and something that there&#8217;s some interest in, but have not made a decision on a particular technology direction.</strong></p>
<p><strong>Tish Shute:</strong> You&#8217;ve answered some of these, but 3D model based vision tracking.  Yohan&#8217;s question was, &#8220;Is 3D model based vision tracking supported (that is, recovering the pose of the camera using a known 3D model and a 2D camera view of this model)?&#8221;</p>
<p><strong>Jay Wright:    That&#8217;s something we&#8217;re looking at very closely, but again, we don&#8217;t have a plan, or a future date, for it.</strong></p>
<p><strong>Tish Shute:</strong> And you said that natural landmark tracking is not supported, right?</p>
<p><strong>Jay Wright:    I don&#8217;t know if I know what that means, Tish.  But we don&#8217;t have any APIs that provide compass or GPS functionality other than what already exists in the operating system.  So if you want to take advantage of the compass or other sensors, you can absolutely do that, but the SDK does not currently provide anything different or anything more than what already exists in the OS.</strong></p>
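<p>[Since Jay notes the SDK adds nothing beyond what the OS already exposes for compass and GPS, a standard Android sensor listener is all a developer would combine with the vision tracking.  A minimal compass-heading sketch using the stock Android sensor APIs:]</p>
<pre><code>// Minimal compass heading via the standard Android sensor APIs;
// the AR SDK is not involved, exactly as described above.
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class CompassActivity extends Activity implements SensorEventListener {
    private SensorManager sensorManager;
    private final float[] gravity = new float[3];
    private final float[] geomagnetic = new float[3];

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        sensorManager.registerListener(this,
                sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                SensorManager.SENSOR_DELAY_UI);
        sensorManager.registerListener(this,
                sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
                SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            System.arraycopy(event.values, 0, gravity, 0, 3);
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            System.arraycopy(event.values, 0, geomagnetic, 0, 3);
        }
        float[] rotation = new float[9];
        float[] orientation = new float[3];
        if (SensorManager.getRotationMatrix(rotation, null, gravity, geomagnetic)) {
            SensorManager.getOrientation(rotation, orientation);
            // Azimuth in degrees: the heading a compass/GPS based AR app would use.
            float azimuth = (float) Math.toDegrees(orientation[0]);
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}</code></pre>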
<p><strong>Tish Shute:</strong> This is an interesting question, &#8220;Is Snapdragon offloading some processing to the GPU, if any?&#8221;</p>
<p><strong>Jay Wright:    Certainly rendering functionality that utilizes OpenGL is being offloaded to the GPU.  We&#8217;re currently in the process of determining multiple methods for offloading functionality between both symmetric and heterogeneous cores on Snapdragon, which would include the GPU, the apps processor, and DSPs.</strong></p>
<p><strong>Tish Shute: </strong> No one has truly solved optimizing the GPU/CPU for mobile AR yet, have they?</p>
<p><strong>Jay Wright:    That really gets to the heart of the optimization here.  Which pieces ought to be operating on which cores and when, and why?  And that&#8217;s something that we&#8217;re looking at very closely.</strong></p>
<p><strong>Tish Shute: </strong> Right.  The only AR that is truly 3D media tightly registered to the physical world has been done for military and medical applications (and that has often been with a locked-off camera!).  But to take mobile AR to the next level, I think many developers would like access to the CPU/GPU &#8211; for example, a developer interested in the future of eyewear, like Yohan?</p>
<p><strong>Jay Wright:     We&#8217;re very interested in hearing what kinds of tools developers would like to see.</strong></p>
<p><strong>Tish Shute:</strong> What is the best forum for discussing feature specifics?</p>
<p><strong>Jay Wright:    To provide feature requests to us?</strong></p>
<p><strong>Tish Shute:</strong> Yes. And discuss them.</p>
<p><strong>Jay Wright:    If people go to <a href="http://qdevnet.com/ar" target="_blank">qdevnet.com/ar</a> there&#8217;s an application up there for the private beta program.  So if people do have ideas about features or other things they would like to see, they&#8217;re welcome to submit [their requests and ideas] there.</strong></p>
<p><strong>Tish Shute:</strong> I also have some questions about the specifics of the competition.  Some people are a little confused about some things.  Yohan asked, &#8220;What is the expected form of the project?  Lab demonstration?  Specific capability?  Complete end to end system?&#8221;</p>
<p><strong>Jay Wright:  The only requirement is that they submit an Android application that we can then get running on a device.  So if it has a backend component or backend server that it works against, great.  If it does, it does.  If it doesn&#8217;t, it doesn&#8217;t.  But that&#8217;s really it.  There&#8217;s no limit to the application category.  It can be a game, it can be a museum tour, it can be a children&#8217;s learning game or learning experience.  It can really be anything.  The idea is we want to find experiences for which AR delivers some unique value.  We&#8217;ll be announcing more specifics about the competition in the near future.</strong></p>
<p><strong>Tish Shute:</strong> Right, because some people weren&#8217;t sure, with Unity being separate, whether the competition was biased towards games.  And it&#8217;s not really, is it?</p>
<p><strong>Jay Wright:  Unity is a bias toward just rapid development for 3D, I think.  It&#8217;s most commonly associated with games, but there are also a lot of Unity customers that use it for medical simulations and other types of applications that aren&#8217;t really games at all.</strong></p>
<p><strong>Tish Shute:</strong> Yes.  It&#8217;s very flexible, I know.  You did bring up the backend services again.  Are you thinking of offering any of that?</p>
<p><strong>Jay Wright:  There is a backend tool that we offer.  And the backend tool is what you use to generate your targets.  So if you want to create or use a particular image for a target in your application, you upload it to our target management application, and then it will evaluate that target and tell you how well it will work.  So as you know, certain images are more likely to be recognizable than others.  And so there are metrics in that application that will give you some feedback.</strong></p>
<p><strong>And then you can download your target resource from the website that you can then incorporate into your application project.</strong></p>
<p><strong>Tish Shute:</strong> So this &#8211; all of this information and documentation &#8211; is available at the moment to people who are in the private beta, and not to anyone else, right?</p>
<p><strong>Jay Wright:  That&#8217;s correct.</strong></p>
<p><strong>Tish Shute: </strong>So that&#8217;s an incentive to encourage people to apply to the private beta.  Now, the other thing that people seem confused about: in one part you say 25 developers.  And some people have thought that meant it was limited to 25 individuals.  And some people have maybe four people on their team, so they were going, &#8220;Well, are we going to be accepted because we have four developers, or do we count as one because we are all working on the same project?&#8221;</p>
<p><strong>Jay Wright:   It&#8217;s just 25 companies.</strong></p>
<p><strong>Tish Shute: </strong> OK.  I think we&#8217;ve gone through the questions.  Just to clarify and maybe give some incentive for people to apply to the private beta&#8230; the big advantage of getting in the private beta, aside from getting a month&#8217;s start on the competition, is that you get a chance to input, right?</p>
<p><strong>Jay Wright:  Yes.  A chance to provide feedback, get early access to the technology.  And then we are also providing a free HTC phone.</strong></p>
<p><strong>Tish Shute:</strong> Oh, yes.  I forgot the phone.  Yes, right.  In the requirements, though, you basically seem to be asking for sort of a full app&#8230;some people get reticent about delivering their full application plan, right?</p>
<p><strong>Jay Wright:  Yes.  I understand that.  People should just reveal what they are comfortable talking about.  Just so you understand the constraint on this end, this is early technology and we&#8217;re trying to understand exactly what the support requirement is going to be.  And we have limited support resources at this time, so we want to make sure that we can focus the resources that we have on folks that are really going to use the technology and have a sound plan to actually build something.  So that&#8217;s really the motivation behind limiting the size of the private beta.</strong></p>
<p><strong>Tish Shute:</strong> OK.  Yes, it&#8217;s good to reiterate that.  We&#8217;re down to the last question that I have, and then I&#8217;ll ask you if there is anything that I missed.  You say you are partnering with Mattel.  Who are the developers?  Because I mean Mattel isn&#8217;t an augmented reality development team.</p>
<p><strong>Jay Wright:  Mattel used a subcontractor, <a href="http://www.aura.net.au/">Aura Interactive</a>.</strong></p>
<p><strong>Tish Shute: </strong> Nice.  But that&#8217;s your only partner that I saw, right?  Why Mattel?</p>
<p><strong>Jay Wright:  Well, to launch a new technology, companies will often find showcase partners to demonstrate compelling uses of it.  And we thought Mattel and the Rock&#8217;em Sock&#8217;em&#8482; toy was a great example of combining augmented reality with an existing toy.</strong></p>
<p><strong>Tish Shute:</strong> And I think people agree with you on Rock&#8217;em Sock&#8217;em (see <a href="http://www.readwriteweb.com/archives/qualcomm_launching_mobile_sdk_for_vision-based_ar_on_android_this_fall.php" target="_blank">Chris Cameron&#8217;s RWW post</a>).</p>
<p><strong>Jay Wright:  And there are other showcase partners and applications that we will continue to work on to kind of spur the ecosystem and show what is possible.</strong></p>
<p><strong>Tish Shute: </strong>OK.  Now, is there anything I&#8217;ve left out that you think is important?  What&#8217;s the core of this narrative that we need to get across?  Have I left out a key piece?</p>
<p><strong>Jay Wright:  I think you&#8217;ve done an excellent job of covering all the bases, Tish.</strong></p>
<p><strong>Tish Shute: </strong> [laughs]</p>
<p><strong>Jay Wright:  I think the important overriding message to get across is that we really see ourselves in an enablement role here, and that we are trying to provide&#8230; we&#8217;d like to provide fundamental technology that helps all developers build content for the real world.</strong></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2010/08/05/vision-based-augmented-reality-ar-in-smart-phones-qualcomms-ar-sdk-interview-with-jay-wright/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Mobile Augmented Reality and Mirror Worlds: Talking with Blair MacIntyre</title>
		<link>https://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/</link>
		<comments>https://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/#comments</comments>
		<pubDate>Fri, 12 Jun 2009 05:07:01 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[3D mirror world]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Android and augmented reality]]></category>
		<category><![CDATA[ARhrrrr]]></category>
		<category><![CDATA[Art of Defense]]></category>
		<category><![CDATA[augmented reality on the gphone]]></category>
		<category><![CDATA[augmented reality on the iphone]]></category>
		<category><![CDATA[augmented reality shooter games]]></category>
		<category><![CDATA[Aware Home Research]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bragfish]]></category>
		<category><![CDATA[Dark Star]]></category>
		<category><![CDATA[geolocation]]></category>
		<category><![CDATA[geotagging]]></category>
		<category><![CDATA[google earth]]></category>
		<category><![CDATA[handheld AR games]]></category>
		<category><![CDATA[handheld augmented reality]]></category>
		<category><![CDATA[Immersive augmented reality]]></category>
		<category><![CDATA[Information Landscapes]]></category>
		<category><![CDATA[instrumented homes]]></category>
		<category><![CDATA[instrumented world]]></category>
		<category><![CDATA[iphone 3Gs]]></category>
		<category><![CDATA[iphone games]]></category>
		<category><![CDATA[ISMAR]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[location aware applications]]></category>
		<category><![CDATA[minimally immersive augmented reality]]></category>
		<category><![CDATA[MMO of the real world]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[MS Virtual Earth]]></category>
		<category><![CDATA[NVidia Tegra devkits]]></category>
		<category><![CDATA[Open Sim]]></category>
		<category><![CDATA[OpenSim and Augmented Reality]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[outdoor tracking and markerless AR]]></category>
		<category><![CDATA[parallel mirror worlds]]></category>
		<category><![CDATA[persistent immersive mirror worlds]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[Sun's Wonderland]]></category>
		<category><![CDATA[Texas Instrument's OMAP3 devkits]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[Unity3D]]></category>
		<category><![CDATA[Unity3D and Augmented Reality]]></category>
		<category><![CDATA[virtual pets]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3691</guid>
		<description><![CDATA[Blair MacIntyre is one of the original pioneers of augmented reality and an extraordinary amount of creative work is coming out of his Augmented Environments Laboratory at Georgia Tech &#8211; see YouTube videos here. The screenshot below is from ARhrrrr, a very impressive augmented reality shooter game created at Georgia Tech Augmented Environments Lab and [&#8230;]]]></description>
				<content:encoded><![CDATA[
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf2.jpg"><img class="alignnone size-full wp-image-3732" title="arf2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf2.jpg" alt="arf2" width="259" height="239" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/droppedimage1.jpg"><img class="alignnone size-full wp-image-3725" title="droppedimage1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/droppedimage1.jpg" alt="droppedimage1" width="271" height="240" /></a></p>
<p><a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> is one of the original pioneers ofÂ  augmented reality and an extraordinary amount of creative work is coming out of his <a href="http://www.cc.gatech.edu/ael/" target="_blank">Augmented Environments Laboratory</a> at Georgia Tech &#8211; see <a href="http://www.youtube.com/user/AELatGT" target="_blank">YouTube videos here</a>.Â  The screenshot below is from, <strong>ARhrrrr</strong>, a very impressive augmented reality shooter game created at Georgia Tech <span class="description">Augmented Environments Lab </span>and <span class="description"> Savannah College of Art and Design, </span>(SCAD- Atlanta), and produced  on the <strong>NVidia Tegra devkits</strong> &#8211; <a href="http://www.youtube.com/watch?v=cNu4CluFOcw" target="_blank">watch the demo here</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-63.png"><img class="alignnone size-medium wp-image-3799" title="picture-63" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-63-300x169.png" alt="picture-63" width="300" height="169" /></a></p>
<p>Blair has spent much of his career working on immersive augmented reality and more recently the integration of augmented reality with mirror worlds. Blair explains:</p>
<p><strong>&#8220;I am interested in the intersection of mobile devices &#8211; whether they are head mounts or handhelds &#8211; and parallel mirror worlds&#8230; I think that parallel mirror worlds are a direct manifestation of the intersection of the virtual world we now live in (the web) and geotagging.  As more and more information is tied to place, and as more of our searching becomes place-based, we will want to do those searches about places we are not at.  A 3D mirror world may provide one interface to that data.  Want to plan your trip to London?  Go there virtually and look around, see what is there (both physically and virtually), teleport between areas you want to learn about, and so on.  More interestingly, talk to people who are there now, and retrieve your location-based notes when you are on your trip.&#8221;</strong></p>
<p>But, at a time when many augmented reality developers are focusing on AR apps for smart phones, including Blair (the picture on the left opening this post is from Blair&#8217;s augmented reality <a href="http://www.youtube.com/watch?v=_0bitKDKdg0&amp;feature=channel_page" target="_blank">iPhone app ARf</a>), I was interested in finding out from Blair what the state of play was for the real deal Rainbow&#8217;s End style AR, as well as the potential he sees in smart phones to mediate meaningful AR experiences.</p>
<p>There is an enormous amount of innovation in mapping our world; see my post, <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">&#8220;Location Becomes Oxygen at Where 2.0 and WhereCamp,&#8221;</a> and <a href="http://gamesalfresco.com/2009/05/26/where-2-0-the-world-is-mapped-now-use-it-to-augmented-our-reality/" target="_blank">Ori Inbar&#8217;s Where 2.0 conference roundup</a>.  But as Ori notes, to move augmented reality forward:</p>
<p><strong>My point is not a shocker: all we need is to tap into this information and bring it, in context, into people&#8217;s field of view.</strong></p>
<p>And this is what Blair MacIntyre&#8217;s work is all about.</p>
<h3>Talking With Blair MacIntyre</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-62.png"><img class="alignnone size-medium wp-image-3728" title="picture-62" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-62-300x257.png" alt="picture-62" width="300" height="257" /></a></p>
<p><strong>Tish Shute:</strong> There do seem to be broader implications to augmented reality today than when this term was first coined.  I am interested in your perspective on how augmented reality may go beyond some of our early definitions.</p>
<p><strong>Blair MacIntyre: I still think the original definition of the term is useful: media (typically graphics) tightly registered (aligned) with the physical world, in real time.  Many people talk about many things that relate virtual worlds to places, spaces, objects and people.  There is room for many of them, and they don&#8217;t all have to &#8220;be&#8221; augmented reality.  I like using Milgram&#8217;s definition of Mixed Reality as everything from the physical world (at one end) to the virtual world at the other; it&#8217;s a spectrum, and augmented reality just sits at one point.</strong></p>
<p><strong>The reason I like the old definition is I believe there is something special about graphics that are tightly, rigidly aligned with the physical world.  When things appear to stick to the world, at an obviously identifiable location, people can start leveraging their natural perceptual, physical and social abilities and interact with the mixed world as they do the physical world.  We&#8217;ve found this with the two studies we&#8217;ve done of tabletop AR games (<a href="http://www.augmentedenvironments.org/lab/research/handheld-ar/artofdefense/" target="_blank">Art of Defense</a> and <a href="http://www.youtube.com/watch?v=w3iBrj_zfTM&amp;feature=channel_page" target="_blank">Bragfish</a>); one key to those games is that the graphics were tightly aligned with identifiable landmarks in the physical world (the gameboard).</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/aod-sandbox-video-15.png"><img class="alignnone size-medium wp-image-3729" title="aod-sandbox-video-15" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/aod-sandbox-video-15-300x225.png" alt="aod-sandbox-video-15" width="300" height="225" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/imgp0782-2.jpg"><img class="alignnone size-medium wp-image-3782" title="imgp0782-2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/imgp0782-2-300x225.jpg" alt="imgp0782-2" width="300" height="225" /></a></p>
<p><em><a href="http://www.augmentedenvironments.org/lab/research/handheld-ar/artofdefense/" target="_blank">Art of Defense</a> (pic on left) <a href="http://www.youtube.com/watch?v=w3iBrj_zfTM&amp;feature=channel_page" target="_blank">Bragfish</a> (pic on right)<br />
</em></p>
<p><strong>Tish:</strong> I know that you are involved with <a id="b-c6" title="ISMAR 2009" href="http://www.ismar09.org/" target="_blank">ISMAR 2009</a>, which is the key US augmented reality conference. What do you think will be the hot themes, applications, innovations at this year&#8217;s conference? Do you think this will be the year that AR really breaks out of eye candy into truly useful and sustained experiences?</p>
<p><strong>Blair: Unfortunately, I won&#8217;t be involved this year. I was supposed to be helping run the technical program, as well as the art/media program, but sickness in my family prevented me from having the time, so I am not helping this year.</strong></p>
<p><strong>First, I would not agree with the implication of the last question &#8212; I don&#8217;t think AR has just been eye candy up to now. I do agree that the &#8220;high profile&#8221; uses of it have largely been that, which is mostly because of the limits of the technology. I don&#8217;t think we&#8217;ll see huge changes in that regard by ISMAR this year. However, we will hopefully see a mixing of communities that hasn&#8217;t happened at ISMAR before, and I do believe that this year (independent of ISMAR) we will see more and more AR apps. Whether they go beyond eye candy is still a question. I&#8217;m hoping that some folks (including myself and other ISMAR folks!) will help push AR in new directions. But I also expect many folks new to ISMAR and AR to play a big role, because it is this new blood, especially those folks with real problems to solve, new art and game ideas, and a fresh perspective, that will open new doors.</strong></p>
<p><strong>Tish:</strong> You have been working on integrating augmented reality with virtual worlds. You mentioned that the way you use <a href="https://lg3d-wonderland.dev.java.net/" target="_blank">Sun&#8217;s Wonderland</a> is really about pulling the virtual world into the real world, i.e., Wonderland &#8220;is just a place to put data.&#8221; How is your use of the persistent virtual space different from what we have become accustomed to call virtual worlds?</p>
<p><strong>Blair: The approach we are taking in our project at Georgia Tech is to use the virtual world as the central hub of the information space, and allow the virtual world to be the element that enables distributed workers to collaborate more smoothly. This is work we are doing with Sun and Steelcase (and the NSF), and is an outgrowth of a project (the InSpace project) that&#8217;s been going on for a few years.</strong></p>
<p><strong>What we are trying to do is use mixed reality and ubicomp techniques to pull as much of the physical activity into the virtual world, and then reflect that activity back out to the different participants as best suits their situation. So, folks in highly instrumented team rooms will collaborate in one way, and their activity will be reflected in the virtual world; remote participants (e.g., those at home, or in a cafe or hotel) may control their virtual presence in different ways, but the presence of all participants will be reflected back out to the other sides in analogous ways. We may see ghosts of participants at the interactive displays, or hear their voices in 3D space around us; everyone will hopefully be able to manipulate content on all displays and tell who is making those changes.</strong></p>
<p><strong>A secondary benefit, I hope, is that by putting the data in the virtual world and making that the place that gives you more powerful and flexible access to the data (e.g., by leveraging space and giving access to history), distributed teams will begin to have the virtual space become a place they go to work, bump into each other and have those casual contacts co-located workers take for granted.</strong></p>
<h3><strong>Creating the Information Landscape of the Future</strong></h3>
<p><strong>Tish: </strong>At the end of <a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">my interview with Ori Inbar</a> he said, in order to have a ubiquitous experience <em>&#8220;you&#8217;ll need to 3d map the world. Google earth like apps are going to help but it is not going to be sufficient. So let&#8217;s leverage people. Google became successful in part by making people work with them. Each time you create a link from your blog to my blog their search engines learn from it. So let&#8217;s find ways to make people create information that can be used for AR.&#8221;</em> What ways do you think people can create information that can be used for AR?</p>
<p><strong>Blair: I think the big part of that is the creation of models and environments, the necessary &#8220;baseline&#8221; for specifying experiences. Google and Microsoft are clearly working toward this; recent videos from Microsoft show them starting to move the photosynth work toward Virtual Earth. Similarly, I came across a page where people are finally starting to mine geotagged Flickr images to create models [see my post, <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">&#8220;Location Becomes Oxygen,&#8221;</a> and <a href="http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/" target="_blank">here</a> for more on the <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a> project from Flickr]. It&#8217;s that kind of thing that will be useful first; using the data we all create to enable modeling and (eventually) vision-based tracking in the real world.</strong></p>
<p><strong>After that, it&#8217;s a matter of time till more of what we &#8220;create&#8221; (e.g., Tweets and blog posts and so on) are all geo-referenced; these will become the information landscape of the future, the kinds of things people think about when they read &#8220;Rainbow&#8217;s End&#8221;. The big problem will be filtering, searching and sorting. And, of course, safety and security.</strong></p>
<p><strong>Tish: </strong>You are working with <a href="http://unity3d.com/" target="_blank">Unity3D</a> to research the integration of mobile location based AR with persistent mirror world like spaces. What has attracted you to Unity? What is the difference between this and your Wonderland project? I know you mentioned you will be using head-mounted displays as part of this Unity project. What are your goals for this project?</p>
<p><strong>Blair:</strong> <strong>We started to use <a href="http://unity3d.com/" target="_blank">Unity3D</a> because it gave us what we wanted in a game engine. Most importantly, it&#8217;s very open and let us trivially expose AR technologies into the editor. Similarly, it can target the iPhone, so we can begin to work with it on that platform, too. The biggest problem with creating compelling experiences is content; and a show stopper for creating content is not being able to get it into your engine. Unity has a nice content workflow.</strong></p>
<p><strong>Unity3D is a front end engine, for creating the game; Wonderland is both a front end and a backend. We are actually looking into using the Wonderland backend with Unity as well. Wonderland also has growing support for doing &#8220;real work&#8221; in a virtual world, which is key to our other projects.</strong></p>
<p><strong>Eventually, we&#8217;ll be using HMDs. The goal for the Unity3D project, initially, was to explore what you can do with an AR/VR mirror-world; this is a project we are working on with Alcatel-Lucent, and demoed at CTIA this year. It&#8217;s continuing to grow, though, and now includes a number of our projects, including some work on mobile social AR and soon, some performance and experience design projects in the area of AR ARGs. It&#8217;s really quite interesting to imagine what you can do when you have an &#8220;MMO of the real world&#8221; (which we now have for part of campus) that supports both VR-style desktop access simultaneously with mobile AR access.</strong></p>
<p><strong>Tish: </strong>Have you taken another look at <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a> as a possible backend for augmented reality? Recently I talked to David Levine of IBM, and he is thinking about some possibilities to optimize OpenSim to dynamically load a large number of objects at once (i.e., how fast OpenSim can bulk load into an existing sim) and make it better suited to augmented reality/mirror world type projects.</p>
<p><strong>Blair: I haven&#8217;t looked at OpenSim recently. We will probably look at it this summer.</strong></p>
<p><strong>Tish:</strong> Why did you select Unity as a good client for augmented reality?</p>
<p><strong>Blair: Unity is a 3D game authoring environment, so at some level it is no different from using Ogre, if all the associated stuff was just as well done. It has integrated physics, scripting, debugging, etc. &#8211; you can write code in javascript or C# or whatever. It has a good content pipeline, as well, and supports a range of platforms.</strong></p>
<p><strong>It has simple networking built in, so multiple Unity engines can talk to each other, but it is not a virtual world platform out of the box &#8211; there is no back end &#8230;</strong></p>
<p><strong>Tish: </strong>Someone described Unity to me as a great client waiting for a great backend. So what are you going to use as a back end?</p>
<p><strong>Blair: There is no real processing except in the client right now. We will eventually have to create a back end. We are thinking of using Darkstar, because someone on the Sun Wonderland community forums has already built a set of scripts connecting Unity to Darkstar.</strong></p>
<p><strong>But for us, we are not proposing right now to build a real product. This is research to demonstrate what you could do if you actually had the back end.</strong></p>
<p><strong>Tish:</strong> What are the most important aspects of the backend from your POV?</p>
<p><strong>Blair: We want to simulate a variety of the interesting aspects of the back end. So I very much care about notions of privacy and security and how these sorts of AR/VR Mirror Worlds would work in practice. But I care about those things as they impact user experience, not really about how we would actually implement them.</strong></p>
<p><strong>Tish:</strong> So looking at some of the big problems from the perspective of user experience? Are we going to go through the same growing pains that the web and VWs have seen &#8211; for example, will we have to type in passwords to get into everyone&#8217;s little worlds&#8230;.</p>
<p><strong>Blair: Well you know the SciFi background to this; you&#8217;ve mentioned it in other posts on your blog. Because when you look at the Rainbow&#8217;s End model where you have security certificates flying around, that is in effect what cookies and so on are now. You can authenticate yourself once and then have those certificates hang around. So you can easily imagine how it could be done. But the big question is how does that change user experience. There are all kinds of things that start coming into play &#8211; like what happens if nearby people see different things &#8211; it goes on and on!</strong></p>
<p><strong>Tish:</strong> Sounds like this is very valuable research. It seems to me that there will be a lot of investment soon in putting the pieces together to do location based markerless AR, and it would be nice if we knew more about it from the user experience POV.</p>
<p>Isn&#8217;t it vital for a productive intersection between mobile AR and persistent mirror world spaces for us to have markerless AR? Aren&#8217;t we right at the beginning of people really saying, yeah, markerless AR is doable now? But it seems to me not many people are researching or working on fully immersive AR and its integration with mirror worlds?</p>
<p><strong>Blair: I think some of the AR community is thinking about this. There are probably people doing stuff in some other, non-technical communities. It wouldn&#8217;t surprise me to find out that people in the digital performance or Ars Electronica world are thinking a little bit about these sorts of things &#8211; although not necessarily at the level of actually trying to build it, because they probably can&#8217;t right now, but experimenting with the precursors. My colleagues in digital media like to point out that this is often the purpose of digital art, to point out new directions and push the boundaries.</strong></p>
<p><strong>Obviously Science Fiction has explored the possibilities because that is what Rainbow&#8217;s End and the Matrix were all about.</strong></p>
<p><strong>Tish:</strong> and <a href="http://en.wikipedia.org/wiki/Denn%C5%8D_Coil" target="_blank">Denno Coil</a>&#8230;</p>
<p><strong>Blair: There has been some research &#8211; people like my adviser Steve Feiner up at Columbia, Mark Billinghurst in New Zealand, myself and people at Graz University in Austria. But partly it has been so hard to do mobile AR up to now &#8211; so many people mock head worn displays and can&#8217;t get past current technology &#8211; you have had to be willing to ignore the bulky back packs and cables and batteries and so on. That is changing, which is good.</strong></p>
<p><strong>My current response to the anti-head-mounted display people is: if 5 years ago you told me that fabulously dressed people who care about their looks and wear stylish clothes would have big things hanging from their ears that blink bright blue light, so they could talk on the phone, many of us would have said you were crazy, because it would be ugly and so on. But there is an intersection of demonstrable need and benefit &#8211; Bluetooth headsets are really useful &#8211; and a sort of early gestalt feeling grew up around them &#8211; that people who use them are so important that they always have to be in touch, so they wear these things &#8211; so people accept them.</strong></p>
<p><strong>It will likely be a similar thing with head mounted displays. And I don&#8217;t know if it will be people wearing them so that they can read their mail while driving, god forbid. But it will be something. And when we get the 2nd generation of the wrap glasses that look more like sun glasses and are not bulky and so on, we will have the potential for them catching on, because you will look at them and you will think that the person is wearing them because they are doing x&#8230;</strong></p>
<p><strong>X might be surfing a virtual world or reading their email or keeping in touch, or being aware. It will happen. But they have to get unbulky enough and there has to be more than one important application, not just watching TV.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/karmablair-fix.jpg"><img class="alignnone size-medium wp-image-3787" title="karmablair-fix" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/karmablair-fix-300x227.jpg" alt="karmablair-fix" width="300" height="227" /></a><br />
</strong></p>
<p><em>Picture above shows an outside view of the KARMA AR system, the knowledge based maintenance system Blair built in his first year of grad school (<strong>&#8220;first AR system Steve Feiner, Doree Seligmann, and I worked on&#8221;</strong>). Blair noted, &#8220;<strong>The Communications of the ACM paper on it (from 1993) is a pretty widely cited AR paper.&#8221;</strong></em></p>
<p><strong>Tish:</strong> I think the need for full on transparent, immersive, wraparound, Gucci stylish eyewear with a decent field of view is the elephant in the room in terms of realizing the full potential of augmented reality. There are a few new players in the field &#8211; <a href="http://www.sbglabs.com/" target="_blank">Digilens</a>, <a href="http://www.vuzix.com/home/index.html" target="_blank">Vuzix</a>, others? What is the progress in this area and what do you hope for in terms of near term solutions?</p>
<p><strong>Blair: I agree with that sentiment. I think that, in the near term, there is a lot we can do with handhelds, as we&#8217;ve been doing in the lab. However, because it&#8217;s awkward and tiring to hold up a device, even a small one, for any length of time, handhelds will only be good for &#8220;focused&#8221; uses of AR, such as the table-top games we&#8217;ve been doing, or the constellation viewing app that I heard came out recently for the Android G1. I don&#8217;t even see something like Wikitude as that compelling (beyond the &#8220;gee whiz&#8221; factor) for a handheld form factor. Many proposed AR apps only really become compelling when users have constant awareness of them, and that requires a see-through head-worn display.</strong></p>
<p><strong>I&#8217;ve seen the mockups of the Vuzix ones; they seem pretty interesting, and are getting to where early adopters could use them (they will be cheap enough, and will hopefully be good enough). Microvision&#8217;s virtual retinal display is also promising; the contact lens displays will be the most interesting, if anyone can ever make them work. I don&#8217;t know of anything else out there.</strong></p>
<h3><strong>&#8220;it&#8217;s not really a killer app you care about, it is the killer existence that all of the technology and small applications taken together facilitate&#8221;</strong></h3>
<p><strong>Tish:</strong> While location based services are accepted now and people are understanding that it is something that opens up a new relationship to everything, we still haven&#8217;t found the experience that will get everyone holding up their mobile devices?</p>
<p><strong>Blair: Well that is actually the killer problem. Gregory Abowd is one of my colleagues who does ubiquitous computing research here at Tech. Way back when we started the Aware Home project (<a href="http://www.awarehome.gatech.edu/">Aware Home Research Institute at Georgia Tech</a>) when I first got here about ten years ago, there was always this question of what is the killer app. So Gregory commented in a meeting once that it&#8217;s not really a killer app you care about, it is the killer existence that all of the technology and small applications taken together facilitate. It is not that any one of these AR demos we see right now, whether it is seeing your photos in the world or whatever, is important. It&#8217;s that when taken together, there is enough of a benefit that you would use the whole environment.</strong></p>
<p><strong>In the original context we were talking about an instrumented home, but it is the same thing here with AR.</strong></p>
<p><strong>The problem with the mobile phone as an AR device is that problem of awareness. If I have a head mount on and I walk down the street and there is a bunch of probably-not-useful-but-potentially-useful information floating by me, that&#8217;s a good thing, because I may see something that is useful or makes me think of something else. But if I have to hold up my phone to see if something might be interesting nearby, I will never hold up my phone, because at any given time there is a high probability that there won&#8217;t be anything particularly important there. You might imagine you can get around this by using alerts or something like that, but then you overload whatever alert channel you use. For example, I forward maybe 5 or 6 people&#8217;s updates from Facebook to my phone &#8211; it started with my wife, a few friends, my brother &#8211; and the net result of that is I never get SMSes anymore, because when my phone buzzes, usually I ignore it because it is probably just somebody&#8217;s random Facebook update. So if we start overloading channels like that with &#8220;oh there might be something useful here in the real world, if you pick up the phone and look through it you will see it &#8230; and I will buzz you,&#8221; people just start ignoring the buzzes.</strong></p>
<p><strong>So it is a very hard problem if you think about the kinds of applications that people always imagine with global AR &#8212; names over people&#8217;s heads and other random information floating in the world &#8212; until you have a head mount and all that information is around you all the time. That is when those sorts of applications will actually happen.</strong></p>
<p><strong>Tish:</strong> <a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> notes: <strong>&#8220;AR is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, et</strong><em><strong>c.&#8221; </strong></em>(see my interview with Robert,<em> </em><a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it &#8216;OMG Finally&#8217; for Augmented Reality?</a>)<em>. </em>And I think the iphone experience has laid the foundation for the increasing desire to experience the network wherever we are &#8211; and not be stuck behind a pc.Â  We cannot perhaps do all we want to do yet. But even in the range of things we can do know, we are not even sure exactly what it is we want to do where yet is it?</p>
<h3><strong>&#8220;imagine your iphone Facebook client supports AR and that all data on Facebook might be georeferenced &#8211; pictures, status updates etc&#8230;&#8230;.&#8221;</strong></h3>
<p><strong>Blair: Yes that is a huge problem. I have been lucky to be able to teach two fun classes this year that let the students and I start to explore some of the potential that handheld AR might bring. Last fall I taught a handheld AR game design class &#8212; coordinated with a class at the Savannah College of Art and Design&#8217;s Atlanta campus &#8212; and we had the students build a sequence of prototype handheld AR games, which was a lot of fun. This spring I taught a mixed reality/augmented reality design class with Jay Bolter (a professor in the School of Literature, Communication, and Culture here at GT). Jay and I have been teaching this class off and on for about 9 years; this semester we decided to say to the students &#8220;imagine your iphone Facebook client supports AR and that all data on Facebook might be georeferenced &#8211; pictures, status updates etc&#8230;&#8221; and have them do projects aimed at such an environment.</strong></p>
<p><strong>Tish: </strong>Not many of our favorite social media today have much sense of location, do they? But Flickr is utilizing geo-referenced pictures to create vernacular maps&#8230; The Shape of Alpha.</p>
<p><strong>Blair: Yes, that is because lots of cameras put geolocation data into the EXIF data, so they can extract it&#8230;</strong></p>
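<p><em>A minimal sketch of what Blair describes here &#8211; pulling the GPS coordinates a camera wrote into a photo&#8217;s EXIF block. This assumes the Python Pillow library; the tag numbers are the standard EXIF GPSInfo IDs, but the filename and the example output are illustrative only:</em></p>
<pre><code># Sketch: read GPS coordinates out of a JPEG's EXIF data with Pillow.
# Tag 34853 is the standard GPSInfo IFD; sub-tags 1-4 hold the
# hemisphere references and the degree/minute/second values.
from PIL import Image

def gps_from_exif(path):
    exif = Image.open(path)._getexif() or {}   # common Pillow EXIF helper
    gps = exif.get(34853)                      # GPSInfo IFD
    if not gps:
        return None

    def to_degrees(dms, ref):
        d, m, s = (float(v) for v in dms)      # rational values in newer Pillow
        deg = d + m / 60.0 + s / 3600.0
        return -deg if ref in ('S', 'W') else deg

    lat = to_degrees(gps[2], gps[1])           # GPSLatitude, GPSLatitudeRef
    lon = to_degrees(gps[4], gps[3])           # GPSLongitude, GPSLongitudeRef
    return lat, lon

print(gps_from_exif('photo.jpg'))              # e.g. (33.7756, -84.3963)
</code></pre>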
<p><strong>Some mobile Twitter clients, like the one I use on my iphone, will let you add your location. But in general Facebook and other sites don&#8217;t have any notion of location. But if you look at all the things people do in Facebook, such as sending gifts and other games, it&#8217;s easy to imagine what these might look like with geo-reference data. So, the high level project for the class is the groups have to design experiences people might have using mobile AR Facebook. We told them to assume Facebook as it stands now, but add geolocation and AR to the client. The class boiled down to &#8220;What would you imagine people doing?&#8221; So it has been kind of fun.</strong></p>
<p><strong>And we are using Unity for the class too &#8211; the same infrastructure I am working on in my research linking mobile AR to persistent immersive mirror world type spaces &#8211; and we are having the students mock up what a mobile AR Facebook experience would be like.</strong></p>
<p><strong>Tish: </strong>Can you describe some of the ideas your class came up with that you think have potential? I know Ori mentioned that from the games class he liked <a href="http://www.youtube.com/watch?v=Rqcp8hngdBw&amp;feature=channel_page" target="_blank">Candy Wars.</a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/candywars-6.png"><img class="alignnone size-medium wp-image-3693" title="candywars-6" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/candywars-6-300x225.png" alt="candywars-6" width="300" height="225" /></a></p>
<p><em>Candy Wars</em></p>
<p><strong>Blair: In the end, they had a nice range of projects in the Spring class. One created tag clouds out of status messages over spaces, others looked at analogies to virtual pets and gift giving out in the world, one looked at leveraging geolocation to help with crowd-sourced cultural translation, and three groups did straight-up social games.</strong></p>
<p><strong>[See <a href="http://www.youtube.com/user/AELatGT" target="_blank">all of the projects from the handheld AR games class on YouTube here</a>]</strong></p>
<h3><strong>iphone, Android, NVidia Tegra devkits, or Texas Instruments&#8217; OMAP3 devkits?</strong></h3>
<p><strong>Tish:</strong> Is anyone in the class working on Android?</p>
<p><strong>Blair: Nobody is using Android because no-one in the class has the phones. We have AT&amp;T microcell infrastructure on campus. Some AT&amp;T people joke that we are better off than they are, because we have a head office on campus so we can build applications in the network, which people even at AT&amp;T research can&#8217;t do. But because we have this infrastructure on campus, and a great relationship with AT&amp;T and the other sponsors, we have the ability to provision our own phones without having to pay for long-term contracts, which is vital for research and teaching.</strong></p>
<p><strong>Tish:</strong> So does this lock you into the iphone?</p>
<p><strong>Blair: Well the G1 is of course not AT&amp;T but it is GSM so we could probably buy them unlocked and put them on our AT&amp;T network. But the students I work with are much more interested in the iphone right now.</strong></p>
<p><strong>Tish:</strong> Is that because the iphone has the market?</p>
<p><strong>Blair: For me the reason I am not interested in the G1 is because you can&#8217;t do AR on it &#8211; there is <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> and a few other apps, but it is all hideously slow. Worse, because the Java code isn&#8217;t compiled like it would be on the desktop, you can&#8217;t do computer vision with it, so you can&#8217;t do anything particularly interesting on the current commercial G1s. We could probably take the NVidia Tegra devkits or the Texas Instruments OMAP3 devkits (both are chipsets for next gen phones &#8212; high end graphics, fast processing), and install Android on those, and we may actually do that yet. But it seems like a lot of work right now, for not much benefit.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/pastedgraphic.jpg"><img class="alignnone size-medium wp-image-3730" title="pastedgraphic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/pastedgraphic-300x166.jpg" alt="pastedgraphic" width="300" height="166" /></a><br />
</strong></p>
<p><em>Augmented Reality shooter game <strong>ARrrrr</strong> from Georgia Tech and SCAD Atlanta on the <strong>NVidia Tegra devkits</strong> &#8211; <a href="http://www.youtube.com/watch?v=cNu4CluFOcw" target="_blank">watch the demo on YouTube here</a>.</em></p>
<p><strong>Tish: </strong>Everyone seems very excited about the iphone OS 3.0 and the addition of compass. Compass is pretty essential for AR, right?</p>
<p><strong>Blair: It is necessary if you can&#8217;t do other forms of outdoor tracking, but the problem is that the compass on the G1 isn&#8217;t very good, relatively speaking, and the iPhone one probably won&#8217;t be much better. It does not have very high accuracy, nor is it very fast (compared to, say, the high end 3D orientation sensors we use, from Intersense and MotionNode). As far as I can tell, it doesn&#8217;t even give full 3D orientation. I don&#8217;t have a G1 (although I have pre-ordered an iPhone 3Gs), but people have told me it only has absolute 2D orientation, so you can only line things up if you are careful. You can&#8217;t look around arbitrarily&#8230;</strong></p>
<p><strong>Tish: </strong>You can&#8217;t sweep your phone?</p>
<p><strong>Blair: You can look left and right, but if it doesn&#8217;t have full 3D orientation, you can&#8217;t go up and down. You can&#8217;t tilt it in weird directions. It is not fast in the form that you would want to look around quickly. So it is a nice demo. And it is good for what the Android people use it for, which is to let you do your Google Street View by looking around, which is actually really useful.</strong></p>
<p><strong>I think there are lots of really useful things you can do with such a compass.</strong></p>
<p><strong>And, it is clear that compass is a necessary feature if we want to do AR. It&#8217;s just not sufficient.</strong></p>
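<p><em>A rough illustration of why a 2D compass plus GPS is necessary but not sufficient: with just a heading and two positions you can slide a label left and right across the camera view &#8211; roughly what Wikitude-style apps do &#8211; but nothing here handles tilt, roll, or fast motion. The field of view, screen width, and coordinates below are made-up values:</em></p>
<pre><code># Sketch: place a point-of-interest label horizontally on screen using
# only GPS positions and a 2D compass heading (no tilt, no roll).
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    # Initial great-circle bearing from (lat1, lon1) to (lat2, lon2).
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def screen_x(poi_bearing, heading, fov_deg=55.0, width_px=480):
    # Horizontal pixel position of the POI, or None if it is off screen.
    offset = (poi_bearing - heading + 180.0) % 360.0 - 180.0
    if abs(offset) > fov_deg / 2:
        return None
    return width_px * (0.5 + offset / fov_deg)

me, poi = (33.7756, -84.3963), (33.7772, -84.3976)   # illustrative points
print(screen_x(bearing_deg(*me, *poi), heading=320.0))
</code></pre>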
<h3><strong>Outdoor Tracking and Markerless AR</strong></h3>
<p><strong>Tish:</strong> Isn&#8217;t it essential for markerless AR? I guess not &#8211; I just saw this post about <a href="http://artimes.rouli.net/2009/04/srengine-in-english.html" target="_blank">SREngine on Augmented Times</a>!</p>
<p>This wasn&#8217;t up when we spoke so perhaps you have some comments about what it brings to the table?</p>
<p><strong>Blair: Maybe. The folks at Nokia are working on outdoor tracking; they demoed some stuff at ISMAR last year on the N95 handsets that is all image based. We are trying to do some work with them; one of my students is working on it. And probably Microsoft is going to do more on this as well; they had a video up showing that they are also working on vision based techniques. If you give the phone the equivalent of those panoramic Google Street View images (assuming they are up-to-date) and you are standing at the right place, you don&#8217;t really need a compass; you can figure out which way you are looking by looking at the camera video. Ulrich Neumann (USC) did some work on tracking from panoramas years ago; I don&#8217;t know whatever became of it.</strong></p>
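<p><em>The panorama idea Blair mentions can be sketched very simply: if you already have a 360&#176; panorama captured at (roughly) your position, you can estimate heading by sliding the live camera frame around the panorama and keeping the best normalized-correlation match. This toy version (plain NumPy, grayscale arrays, no warping or lighting compensation) only shows the principle, not any of the systems discussed here:</em></p>
<pre><code># Sketch: estimate camera heading by matching the live frame against a
# 360-degree cylindrical panorama of the same scene.
import numpy as np

def heading_from_panorama(frame, pano):
    # frame: HxW grayscale array; pano: H x W_pano panorama, 360 degrees wide.
    h, w = frame.shape
    f = (frame - frame.mean()) / (frame.std() + 1e-9)
    best_score, best_col = -np.inf, 0
    for col in range(pano.shape[1]):
        # wrap-around crop of the panorama at this candidate heading
        idx = np.arange(col, col + w) % pano.shape[1]
        patch = pano[:h, idx]
        p = (patch - patch.mean()) / (patch.std() + 1e-9)
        score = float((f * p).mean())        # normalized cross-correlation
        if score > best_score:
            best_score, best_col = score, col
    return 360.0 * best_col / pano.shape[1]  # degrees from panorama origin
</code></pre>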
<p><strong>Regarding SREngine, that project appears to be a pretty simple first step, but is probably just a demo at this point, and limitations like &#8220;only works on static scenes&#8221; and &#8220;doesn&#8217;t work for simple scenes&#8221; mean it&#8217;s probably extracting some simple features out of the image and then matching those to some database. The trick would be getting this to work on a large scale, where the world changes a lot. It&#8217;s not obvious how to get there.</strong></p>
<p><strong>Tish:</strong> So forget RFID for AR&#8230;</p>
<p><strong>Blair: RFID is not really useful.</strong></p>
<p><strong>Tish:</strong> not at all?</p>
<p><strong>Blair: RFID is useful for telling you what things are near you. The problem is it doesn&#8217;t give you any directional information &#8211; it just tells you you&#8217;re in range of the tag. So you can use it to tell you when you are near a certain product, for example. So it is useful in terms of telling you what thing you are near, and then you can load up a vision system or something else that will recognize that thing.</strong></p>
<p><strong>In that way, it could be useful as a good starting point.</strong></p>
<p><strong>Similarly for computer vision, the compass and the gps are very useful for giving you an initial guess at what you may be looking at, which can then speed up the rest of the process. But computer vision by itself will not be a complete solution, because if I have my panoramic Google Street View (or whatever image database I use for tracking) and you are standing between me and the building, I am not going to see what I expect to see &#8211; I am going to see you.</strong></p>
<p><strong>So I think it is all going to be part of one big package &#8211; you are going to see accelerometers, digital compasses, and gps, and then combine that with computer vision and other sensors, and then maybe we are going to start getting the things that we have always dreamed about. I like to show <a href="http://mi.eng.cam.ac.uk/~gr281/outdoortracking.html" target="_blank">this video</a> from the U. of Cambridge (work done by Gerhard Reitmayr and Tom Drummond) of an outdoor tracking demo, because it gives a sense of what will be possible. Techniques like this will be an ingredient in the future of things. It becomes especially interesting when you have these highly detailed mirror worlds. It is sort of one of those chicken and egg problems: if I have a highly detailed model of the world then techniques like theirs can be used to track. But that mirror world needs to be accurate or you can&#8217;t use it for tracking, and why would you create the mirror world if you couldn&#8217;t track?</strong></p>
<p><strong>Tish:</strong> I noticed in your comment to <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;my interview with Robert Rice&#8221;</a> that you said you thought it was important not to collapse AR into ubicomp &#8211; &#8220;forgetting what originally inspired us about AR&#8221; is, I think if I remember correctly, the suggestion you made. But aren&#8217;t ubiquitous computing and AR basically coextensive?</p>
<p>The <a href="http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/" target="_blank">vision of ubicomp Mike Kuniavsky describes</a> &#8211; &#8220;sharing data through open APIs and the promise of embedded information processing and networking distributed through the environment&#8221; demonstrates how much can be done with very little processing power.&#8221; In its most immersive form augmented reality requires a lot of processing power. I think we have all become very conscious about trying minimize levels of consumption.Â  Can you explain why you think people shouldn&#8217;t see AR as the Hummer (energy squandering indulgence) of Ubiquitous Computing?</p>
<p><strong>Blair: I think there will be a hierarchy of interfaces. You are going to have the rich Rainbow&#8217;s End like experience &#8211; you are totally submerged in a mixed environment &#8211; if you have a head mount on (it&#8217;s not going to be Rainbow&#8217;s End for a while), but if you don&#8217;t have the headmount on, that information might be available to you in other ways, whether it is a 3D overlay using your handheld or just a 2D mashup with Google maps. But there will be some circumstances and people who will want to get the compelling experience you can only get with the headmount.</strong></p>
<p><strong>Tish:</strong> Are you doing any research on how all these hierarchies of experiences will fit together &#8211; what aspects of this are you looking at?</p>
<p><strong>Blair: The thing that really needs to happen is you need to have this backend architecture that allows you to collect your data from different sources and aggregate it, much like the web. Right now Google Earth and Microsoft&#8217;s Virtual Earth are much like the old pre-web hyper-text systems that were all centralized. And what we really need is the web equivalent, where Georgia Tech can publish their building models and I.B.M. can publish their building models and their campus models, and your client can aggregate them &#8211; as opposed to Microsoft or I.B.M. putting their building models into Google Earth and then somehow you getting them out with Google&#8217;s Google Earth browser. That&#8217;s just not going to fly.</strong></p>
<p><strong>Tish:</strong> So what does it take then to get us to this backend architecture, because I&#8217;m in total agreement?</p>
<p><strong>Blair: The nice thing about augmented reality versus virtual reality is that you don&#8217;t need everything modeled. You can do interesting AR apps like <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> with absolutely no world model.</strong></p>
<p><strong>Tish:</strong> So that means we can start with what we have &#8211; utilize cloud services without a full blown backend architecture?</p>
<p><strong>Blair: It may very well be that Google Earth and MS Virtual Earth act as a portal, because people go and build models and link them with KML, and they can see them in Google Earth but they can also download the KMLs through some other channel. So it may be that those things end up being something that feeds some of this along. Then people start seeing a benefit to having these highly accurate models, so then you start integrating the Microsoft photosynth stuff and leveraging photographs to generate models.</strong></p>
<p><strong>It&#8217;s just that keeping up with it and building it in real time is the challenge. A lot of folks think it will be tourist applications, where there are models of Times Square and models of Central Park and models of Notre Dame and the big square around that area in Paris and along the river and so on, or the models of Italian and Greek history sites &#8211; the virtual Rome. As those things start happening and people start building onto the edges, and when Microsoft Photosynth and similar technologies become more pervasive, you can start building the models of the world in a semi-automated way from photographs and more structured, intentional drive-by&#8217;s and so on. So I think it&#8217;ll just sort of happen. And as long as there&#8217;s a way to have the equivalent of Mosaic for AR, the original open source web browser, that allows you to aggregate all these things. It&#8217;s not going to be a Wikitude. It&#8217;s not going to be this thing that lets you get a certain kind of data from a specific source; rather it&#8217;s the browser that allows you to link through into these data sources.</strong></p>
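<p><em>The aggregation Blair describes &#8211; many independent publishers, one client &#8211; is easy to sketch with KML, since each source can publish placemarks that a client merges into one layer. A toy Python version follows; the URLs are placeholders, and a real &#8220;Mosaic for AR&#8221; would obviously also need geometry, caching, and trust handling:</em></p>
<pre><code># Sketch: merge Placemarks from several independently published KML
# files into a single layer, the way an aggregating AR client might.
import xml.etree.ElementTree as ET
from urllib.request import urlopen

KML_NS = {'kml': 'http://www.opengis.net/kml/2.2'}

def placemarks(kml_bytes):
    root = ET.fromstring(kml_bytes)
    for pm in root.iter('{http://www.opengis.net/kml/2.2}Placemark'):
        name = pm.findtext('kml:name', default='(unnamed)', namespaces=KML_NS)
        coords = pm.findtext('.//kml:coordinates', default='', namespaces=KML_NS)
        yield name, coords.strip()

sources = [
    'http://example.edu/campus-models.kml',    # one publisher's layer
    'http://example.com/building-models.kml',  # another publisher's layer
]
layer = []
for url in sources:
    with urlopen(url) as resp:                 # a real client would cache
        layer.extend(placemarks(resp.read()))
print(len(layer), 'placemarks aggregated')
</code></pre>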
<p><strong>So it&#8217;s that end that interests me. It&#8217;s questions like &#8220;what is the user experience&#8221;, how do we create an interface that allows us to layer all these different kinds of information together such that I can use it for all my things. I imagine that I open up my future iphone and I look through it. The background of the iphone, my screen, is just the camera and it&#8217;s always AR.</strong></p>
<p><strong>I want the camera on my phone to always be on, so it&#8217;s not just that when I hold it a certain way it switches to camera mode, but literally it&#8217;s always in video mode so whenever there&#8217;s an AR thing it&#8217;s just there in the background.</strong></p>
<p><strong>When we can do that I can have little alerts, so when I have my phone open I can look around and see it independent of the buttons and things that I&#8217;m tapping and pushing to use the phone. That&#8217;ll be a really different kind of experience.</strong></p>
<p><strong>Of course it is not known yet if the next gen iphone will have an open video API. And of course, the current camera is pretty low quality, so why would they give it an open API until they put in a better camera? I am not expecting anything one way or the other until the 3Gs comes out and people start using it.</strong></p>
<p><strong>But there are many things about the iphone 3.0 OS that are hugely important, like the discovery API that allows people to play games with other people nearby, that don&#8217;t have much to do with AR.</strong></p>
<p><strong>Tish:</strong> You have an iphone AR virtual pet application, ARf.</p>
<p><a href="http://www.macrumors.com/2009/04/08/video-in-and-magnetometers-could-introduce-interesting-iphone-app-possibilites/" target="_blank">Macrumors wrote it up</a> and suggested that the next gen iphone will have compass and an open video API. What are your plans for ARf?</p>
<p><strong>Blair: ARf is just a demo right now. I know what we&#8217;d like to do with it, but it would require tons of work; imagine what it would take to do a multiplayer, social version of Nintendogs? It&#8217;s not clear what we&#8217;d really learn by doing that, but there are lots of other game ideas we have that we want to explore.</strong></p>
<p><strong>Tish:</strong> I think it was on Twitter where Tim O&#8217;Reilly said, &#8220;saying everything must have a RFID tag is like saying we can&#8217;t recognize each other unless we wear name tags. Look at what&#8217;s happening with speech recognition, image recognition et al. and tell me you really think we need embedded metadata.&#8221; What would you say to that?</p>
<p><strong>Blair: I think that whatever extra data is there will be used. So if we put machine readable labels on some objects then they&#8217;ll be used if they make the identification and tracking problem easier. But it&#8217;s pretty clear that people are already working on tracking and so on.</strong></p>
<p><strong>A lot of these mobile AR apps are clearly putting ideas in people&#8217;s minds &#8211; things that won&#8217;t really be doable in the near future. Like being able to look down the aisle of the store and have it recognize all of the products. Given the distances and complexity of the scene, the number of pixels devoted to each of those objects, and so on, you just can&#8217;t recognize things in that context. But if I&#8217;m standing in front of a small set of objects, or looking at one thing, or I&#8217;m standing in front of a building, or if I&#8217;m in the store &#8212; imagine an enhanced location API that can tell me within a few feet where I am, combined with some use of the discovery API that allows the store to tell your device you&#8217;re in the toothpaste section &#8212; now you only have to look for different brands of toothpaste. So now you can recognize the big letters &#8220;Crest&#8221; or whatever. It&#8217;s all about constraining the problem.</strong></p>
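<p><em>Blair&#8217;s &#8220;constraining the problem&#8221; point in miniature: use a coarse location fix to shrink the candidate set before any vision code runs. The store database, section names, and the stand-in matcher below are entirely made up for illustration:</em></p>
<pre><code># Sketch: a location prior turns "recognize anything in the store" into
# "match against a handful of candidates".

PRODUCTS_BY_SECTION = {
    'toothpaste': ['Crest', 'Colgate', 'Sensodyne'],
    'cereal':     ['Cheerios', 'Corn Flakes'],
}

def recognize(image_text, candidates):
    # Stand-in for a real matcher (here the "image" is just a caption
    # string): with few candidates, even naive matching is tractable.
    return [c for c in candidates if c.lower() in image_text.lower()]

def identify(image_text, section):
    # Only consider products the location API says could be in view.
    candidates = PRODUCTS_BY_SECTION.get(section, [])
    return recognize(image_text, candidates)

print(identify('shelf photo showing CREST boxes', 'toothpaste'))
# ['Crest']
</code></pre>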
<p><strong>That&#8217;s why I like that particular piece of Drummond&#8217;s work, the tracking web site I mentioned above. The general tracking problem of looking around and recognizing objects and tracking is still impossible. But if I know roughly what direction I&#8217;m looking in and I have a good estimate of my position, and I have models of what I should be seeing when I look in that direction, then it becomes a tractable problem. And so it&#8217;s not that a compass and a GPS are 100% necessary. But if you have them it certainly makes things possible that you wouldn&#8217;t otherwise be able to do.</strong></p>
<p><strong>Imagine, for example, if there&#8217;s a new version of GPS &#8211; I just noticed that some of the new satellites going up have this new L5 channel. There are the L1 &amp; L2 signals that the military and civilian ones use, and they added this civilian L5 signal, which should make GPS more accurate. I haven&#8217;t found anything online that says how much more accurate.</strong></p>
<p><strong>But someday, hopefully, all GPS will get to be the quality of survey-grade GPS. Right now, if you get an RTK GPS from one of these companies that make the survey grade GPS systems, they give you position estimates in the range of two centimeters, and update 10 to 20 times a second. When you have that kind of positional accuracy combined with the kind of orientational accuracy you get from the orientation sensors we use in the lab from Intersense and MotionNode, everything is easier because you&#8217;ve pretty much got absolute position. You put that into a phone and now when I look up, it&#8217;s still not perfectly aligned because there will still be errors (especially in orientation, since the compasses are affected by metal and other magnetic noise). But it does mean if you and I are standing 5 feet apart from each other and look at each other, I can pretty much put a little smiley face above your head. Whereas now, with GPS, if I look at you and we&#8217;re 5 feet apart our GPS&#8217;s might think we&#8217;re on the opposite side of each other because they&#8217;re only accurate to two to five meters.</strong></p>
<p><strong>And that&#8217;s depending on the time of day and weather!</strong></p>
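<p><em>A back-of-envelope check on Blair&#8217;s numbers: at a 5-foot separation, how far does a &#8220;smiley over your head&#8221; label swing, given centimeter-grade RTK error versus the 2&#8211;5 meter error of consumer GPS? The angular error is roughly atan(position error / distance):</em></p>
<pre><code># Sketch: angular drift of a world-anchored label, given GPS error.
import math

def label_error_deg(position_error_m, distance_m):
    # Worst-case angular offset of a label anchored at the true position.
    return math.degrees(math.atan2(position_error_m, distance_m))

dist = 5 * 0.3048                    # 5 feet in meters
for err in (0.02, 2.0, 5.0):         # RTK (2 cm) vs. consumer GPS (2-5 m)
    print('%5.2f m error -> %5.1f degrees' % (err, label_error_deg(err, dist)))
# ~0.8 degrees with 2 cm RTK; roughly 53-73 degrees with 2-5 m GPS,
# i.e. the label may not even land on the right person.
</code></pre>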
<p><strong>Putting RFID tags everywhere is easy; the problem is the readers &#8211; they currently require lots of power and they have a limited range. Sprinkling RFID tags everywhere is fine. But you have to be able to activate those tags and read back the signal. In certain contexts it works.</strong></p>
<p><strong>Tish:</strong> And one final question! What do you think can be done to begin thinking about standards for AR? Is there a meaningful discussion going on yet? Thomas Wrobel left this comment on my blog recently and I was wondering what your position was on some of the ideas he raises.</p>
<p>Wrobel wrote, <em>&#8220;The AR has to come to the users, they can&#8217;t keep needing to download unique bits of software for every bit of content! We need an AR Browsing standard that lets users log into and out of channels (like IRC) and toggle them as layers on their visual view (like Photoshop). Channels need to be public or private, hosted online (making them shared spaces) or offline (private spaces). They need to be able to be both open (chat channel) or closed (city map channel) as needed. Created by anyone anywhere. Really IRC itself provides a great starting point. Most data doesn&#8217;t need to be persistent, after all. I look forward to seeing the world through new eyes. I only hope I will be toggling layers rather than alt+tabbing and only seeing one &#8220;reality addition&#8221; at a time.&#8221;</em></p>
<p><strong>Blair: I agree with him, in principle. But, I&#8217;m not sure there&#8217;s a point yet. It can&#8217;t hurt to try, of course, from a research perspective, and I&#8217;m interested in the experience such an infrastructure would enable (as we&#8217;ve talked about already).</strong></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
	</channel>
</rss>
