<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>UgoTrade &#187; Virtual Worlds</title>
	<atom:link href="https://www.ugotrade.com/category/virtual-realities/open-source-virtual-worlds/virtual-worlds/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.ugotrade.com</link>
	<description>Augmented Realities at the Edge of the Network</description>
	<lastBuildDate>Wed, 25 May 2016 15:59:56 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=3.9.40</generator>
	<item>
		<title>Mobile Augmented Reality and Mirror Worlds: Talking with Blair MacIntyre</title>
		<link>https://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/</link>
		<comments>https://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/#comments</comments>
		<pubDate>Fri, 12 Jun 2009 05:07:01 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[mobile meets social]]></category>
		<category><![CDATA[new urbanism]]></category>
		<category><![CDATA[online privacy]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[ubiquitous computing]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[3D mirror world]]></category>
		<category><![CDATA[Android]]></category>
		<category><![CDATA[Android and augmented reality]]></category>
		<category><![CDATA[ARhrrrr]]></category>
		<category><![CDATA[Art of Defense]]></category>
		<category><![CDATA[augmented reality on the gphone]]></category>
		<category><![CDATA[augmented reality on the iphone]]></category>
		<category><![CDATA[augmented reality shooter games]]></category>
		<category><![CDATA[Aware Home Research]]></category>
		<category><![CDATA[Blair Macintyre]]></category>
		<category><![CDATA[Bragfish]]></category>
		<category><![CDATA[Dark Star]]></category>
		<category><![CDATA[geolocation]]></category>
		<category><![CDATA[geotagging]]></category>
		<category><![CDATA[google earth]]></category>
		<category><![CDATA[handheld AR games]]></category>
		<category><![CDATA[handheld augmented reality]]></category>
		<category><![CDATA[Immersive augmented reality]]></category>
		<category><![CDATA[Information Landscapes]]></category>
		<category><![CDATA[instrumented homes]]></category>
		<category><![CDATA[instrumented world]]></category>
		<category><![CDATA[iphone 3Gs]]></category>
		<category><![CDATA[iphone games]]></category>
		<category><![CDATA[ISMAR]]></category>
		<category><![CDATA[ISMAR 2009]]></category>
		<category><![CDATA[location aware applications]]></category>
		<category><![CDATA[minimally immersive augmented reality]]></category>
		<category><![CDATA[MMO of the real world]]></category>
		<category><![CDATA[mobile augmented reality]]></category>
		<category><![CDATA[MS Virtual Earth]]></category>
		<category><![CDATA[NVidia Tegra devkits]]></category>
		<category><![CDATA[Open Sim]]></category>
		<category><![CDATA[OpenSim and Augmented Reality]]></category>
		<category><![CDATA[Ori Inbar]]></category>
		<category><![CDATA[outdoor tracking and markerless AR]]></category>
		<category><![CDATA[parallel mirror worlds]]></category>
		<category><![CDATA[persistent immersive mirror worlds]]></category>
		<category><![CDATA[photosynth]]></category>
		<category><![CDATA[Sun's Wonderland]]></category>
		<category><![CDATA[Texas Instrument's OMAP3 devkits]]></category>
		<category><![CDATA[the shape of alpha]]></category>
		<category><![CDATA[ubicomp]]></category>
		<category><![CDATA[Unity3D]]></category>
		<category><![CDATA[Unity3D and Augmented Reality]]></category>
		<category><![CDATA[virtual pets]]></category>
		<category><![CDATA[Wikitude]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3691</guid>
		<description><![CDATA[Blair MacIntyre is one of the original pioneers of augmented reality, and an extraordinary amount of creative work is coming out of his Augmented Environments Laboratory at Georgia Tech &#8211; see YouTube videos here. The screenshot below is from ARhrrrr, a very impressive augmented reality shooter game created at the Georgia Tech Augmented Environments Lab and [&#8230;]]]></description>
				<content:encoded><![CDATA[
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf2.jpg"><img class="alignnone size-full wp-image-3732" title="arf2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/arf2.jpg" alt="arf2" width="259" height="239" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/droppedimage1.jpg"><img class="alignnone size-full wp-image-3725" title="droppedimage1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/droppedimage1.jpg" alt="droppedimage1" width="271" height="240" /></a></p>
<p><a href="http://www.cc.gatech.edu/~blair/home.html" target="_blank">Blair MacIntyre</a> is one of the original pioneers of augmented reality, and an extraordinary amount of creative work is coming out of his <a href="http://www.cc.gatech.edu/ael/" target="_blank">Augmented Environments Laboratory</a> at Georgia Tech &#8211; see <a href="http://www.youtube.com/user/AELatGT" target="_blank">YouTube videos here</a>. The screenshot below is from <strong>ARhrrrr</strong>, a very impressive augmented reality shooter game created at the Georgia Tech <span class="description">Augmented Environments Lab</span> and the <span class="description">Savannah College of Art and Design</span> (SCAD-Atlanta), and produced on the <strong>NVidia Tegra devkits</strong> &#8211; <a href="http://www.youtube.com/watch?v=cNu4CluFOcw" target="_blank">watch the demo here</a>.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-63.png"><img class="alignnone size-medium wp-image-3799" title="picture-63" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-63-300x169.png" alt="picture-63" width="300" height="169" /></a></p>
<p>Blair has spent much of his career working on immersive augmented reality and more recently the integration of augmented reality with mirror worlds. Blair explains:</p>
<p><strong>&#8220;I am interested in the intersection of mobile devices &#8211; whether they are head mounts or handhelds &#8211; and parallel mirror worlds&#8230;I think that parallel mirror worlds are a direct manifestation of the intersection of the virtual world we now live in (the web) and geotagging. As more and more information is tied to place, and as more of our searching becomes place-based, we will want to do those searches about places we are not at. A 3D mirror world may provide one interface to that data. Want to plan your trip to London? Go there virtually and look around, see what is there (both physically and virtually), teleport between areas you want to learn about, and so on. More interestingly, talk to people who are there now, and retrieve your location-based notes when you are on your trip.&#8221;</strong></p>
<p>But, at a time when many augmented reality developers are focusing on AR apps for smart phones, including Blair (the picture on the left opening this post is from Blair&#8217;s augmented reality <a href="http://www.youtube.com/watch?v=_0bitKDKdg0&amp;feature=channel_page" target="_blank">iphone app ARf</a>), I was interested in finding out from Blair what the state of play is for real-deal, Rainbow&#8217;s End-style AR, as well as the potential he sees in smart phones to mediate meaningful AR experiences.</p>
<p>There is an enormous amount of innovation in mapping our world &#8211; see my post, <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">&#8220;Location Becomes Oxygen at Where 2.0 and WhereCamp,</a>&#8221; and <a href="http://gamesalfresco.com/2009/05/26/where-2-0-the-world-is-mapped-now-use-it-to-augmented-our-reality/" target="_blank">Ori Inbar&#8217;s Where 2.0 conference roundup</a>. But as Ori notes, to move augmented reality forward:</p>
<p><strong>My point is not a shocker: all we need is to tap into this information and bring it, in context, into people&#8217;s field of view.</strong></p>
<p>And this is what Blair MacIntyre&#8217;s work is all about.</p>
<h3>Talking With Blair MacIntyre</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-62.png"><img class="alignnone size-medium wp-image-3728" title="picture-62" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/picture-62-300x257.png" alt="picture-62" width="300" height="257" /></a></p>
<p><strong>Tish Shute:</strong> There do seem to be broader implications to augmented reality today than when the term was first coined. I am interested in your perspective on how augmented reality may go beyond some of our early definitions.</p>
<p><strong>Blair MacIntyre: I still think the original definition of the term is useful: media (typically graphics) tightly registered (aligned) with the physical world, in real time. Many people talk about many things that relate virtual worlds to places, spaces, objects and people. There is room for many of them, and they don&#8217;t all have to &#8220;be&#8221; augmented reality. I like using Milgram&#8217;s definition of Mixed Reality as everything from the physical world (at one end) to the virtual world at the other; it&#8217;s a spectrum, and augmented reality just sits at one point.</strong></p>
<p><strong>The reason I like the old definition is I believe there is something special about graphics that are tightly, rigidly aligned with the physical world. When things appear to stick to the world at an obviously identifiable location, people can start leveraging their natural perceptual, physical and social abilities and interact with the mixed world as they do the physical world. We&#8217;ve found this with the two studies we&#8217;ve done of tabletop AR games (<a href="http://www.augmentedenvironments.org/lab/research/handheld-ar/artofdefense/" target="_blank">Art of Defense</a> and <a href="http://www.youtube.com/watch?v=w3iBrj_zfTM&amp;feature=channel_page" target="_blank">Bragfish</a>); one key to those games is that the graphics were tightly aligned with identifiable landmarks in the physical world (the gameboard).</strong></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/aod-sandbox-video-15.png"><img class="alignnone size-medium wp-image-3729" title="aod-sandbox-video-15" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/aod-sandbox-video-15-300x225.png" alt="aod-sandbox-video-15" width="300" height="225" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/imgp0782-2.jpg"><img class="alignnone size-medium wp-image-3782" title="imgp0782-2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/imgp0782-2-300x225.jpg" alt="imgp0782-2" width="300" height="225" /></a></p>
<p><em><a href="http://www.augmentedenvironments.org/lab/research/handheld-ar/artofdefense/" target="_blank">Art of Defense</a> (pic on left) <a href="http://www.youtube.com/watch?v=w3iBrj_zfTM&amp;feature=channel_page" target="_blank">Bragfish</a> (pic on right)<br />
</em></p>
<p><strong>Tish:</strong> I know that you are involved with <a id="b-c6" title="ISMAR 2009" href="http://www.ismar09.org/" target="_blank">ISMAR 2009</a>, which is the key US augmented reality conference. What do you think will be the hot themes, applications, innovations at this year&#8217;s conference? Do you think this will be the year that AR really breaks out of eye candy into truly useful and sustained experiences?</p>
<p><strong>Blair: Unfortunately, I won&#8217;t be involved this year. I was supposed to be helping run the technical program, as well as the art/media program, but sickness in my family prevented me from having the time, so I am not helping this year.</strong></p>
<p><strong>First, I would not agree with the implication of the last question &#8212; I don&#8217;t think AR has just been eye candy up to now. I do agree that the &#8220;high profile&#8221; uses of it have largely been that, which is mostly because of the limits of the technology. I don&#8217;t think we&#8217;ll see huge changes in that regard by ISMAR this year. However, we will hopefully see a mixing of communities that hasn&#8217;t happened at ISMAR before, and I do believe that this year (independent of ISMAR) we will see more and more AR apps. Whether they go beyond eye candy is still a question. I&#8217;m hoping that some folks (including myself and other ISMAR folks!) will help push AR in new directions. But I also expect many folks new to ISMAR and AR to play a big role, because it is this new blood, especially those folks with real problems to solve, new art and game ideas, and a fresh perspective, that will open new doors.</strong></p>
<p><strong>Tish:</strong> You have been working on integrating augmented reality with virtual worlds. You mentioned that the way you use <a href="https://lg3d-wonderland.dev.java.net/" target="_blank">Sun&#8217;s Wonderland</a> is really about pulling the virtual world into the real world, i.e., Wonderland &#8220;is just a place to put data.&#8221; How is your use of the persistent virtual space different from what we have become accustomed to call virtual worlds?</p>
<p><strong>Blair: The approach we are taking in our project at Georgia Tech is to use the virtual world as the central hub of the information space, and allow the virtual world to be the element that enables distributed workers to collaborate more smoothly. This is work we are doing with Sun and Steelcase (and the NSF), and is an outgrowth of a project (the InSpace project) that&#8217;s been going on for a few years.</strong></p>
<p><strong>What we are trying to do is use mixed reality and ubicomp techniques to pull as much of the physical activity as possible into the virtual world, and then reflect that activity back out to the different participants as best suits their situation. So, folks in highly instrumented team rooms will collaborate in one way, and their activity will be reflected in the virtual world; remote participants (e.g., those at home, or in a cafe or hotel) may control their virtual presence in different ways, but the presence of all participants will be reflected back out to the other sides in analogous ways. We may see ghosts of participants at the interactive displays, or hear their voices in 3D space around us; everyone will hopefully be able to manipulate content on all displays and tell who is making those changes.</strong></p>
<p><strong>A secondary benefit, I hope, is that by putting the data in the virtual world and making that the place that gives you more powerful and flexible access to the data (e.g., by leveraging space and giving access to history), distributed teams will begin to have the virtual space become a place they go to work, bump into each other, and have those casual contacts co-located workers take for granted.</strong></p>
<h3><strong>Creating the Information Landscape of the Future</strong></h3>
<p><strong>Tish: </strong>At the end of <a href="http://www.ugotrade.com/2009/05/06/composing-reality-and-bringing-games-into-life-talking-with-ori-inbar-about-mobile-augmented-reality/" target="_blank">my interview with Ori Inbar</a> he said, in order to have a ubiquitous experience <em>&#8220;you&#8217;ll need to 3D map the world. Google Earth-like apps are going to help but it is not going to be sufficient. So let&#8217;s leverage people. Google became successful in part by making people work with them. Each time you create a link from your blog to my blog their search engines learn from it. So let&#8217;s find ways to make people create information that can be used for AR.&#8221;</em> What ways do you think people can create information that can be used for AR?</p>
<p><strong>Blair: I think the big part of that is the creation of models and environments, the necessary &#8220;baseline&#8221; for specifying experiences. Google and Microsoft are clearly working toward this; recent videos from Microsoft show them starting to move the photosynth work toward Virtual Earth. Similarly, I came across a page where people are finally starting to mine geotagged Flickr images to create models [see my post, <a href="http://www.ugotrade.com/2009/06/02/location-becomes-oxygen-at-where-20-wherecamp/" target="_blank">&#8220;Location Becomes Oxygen,&#8221;</a> and <a href="http://www.ugotrade.com/2009/05/17/creating-the-information-landscapes-of-the-future-locative-media-and-the-shape-of-alpha/" target="_blank">here</a> for more on the <a href="http://code.flickr.com/blog/2008/10/30/the-shape-of-alpha/" target="_blank">&#8220;The Shape of Alpha&#8221;</a> project from Flickr]. It&#8217;s that kind of thing that will be useful first; using the data we all create to enable modeling and (eventually) vision-based tracking in the real world.</strong></p>
<p><strong>After that, it&#8217;s a matter of time till more of what we &#8220;create&#8221; (e.g., Tweets and blog posts and so on) is geo-referenced; these will become the information landscape of the future, the kinds of things people think about when they read &#8220;Rainbow&#8217;s End&#8221;. The big problem will be filtering, searching and sorting. And, of course, safety and security.</strong></p>
<p><strong>Tish: </strong>You are working with <a href="http://unity3d.com/" target="_blank">Unity3D</a> to research the integration of mobile location-based AR with persistent mirror-world-like spaces. What has attracted you to Unity? What is the difference between this and your Wonderland project? I know you mentioned you will be using head-mounted displays as part of this Unity project. What are your goals for this project?</p>
<p><strong>Blair:</strong> <strong>We started to use <a href="http://unity3d.com/" target="_blank">Unity3D</a> because it gave us what we wanted in a game engine. Most importantly, it&#8217;s very open and let us trivially expose AR technologies into the editor. Similarly, it can target the iPhone, so we can begin to work with it on that platform, too. The biggest problem with creating compelling experiences is content; and a show stopper for creating content is not being able to get it into your engine. Unity has a nice content workflow.</strong></p>
<p><strong>Unity3D is a front end engine, for creating the game; Wonderland is both a front end and a backend. We are actually looking into using the Wonderland backend with Unity as well. Wonderland also has growing support for doing &#8220;real work&#8221; in a virtual world, which is key to our other projects.</strong></p>
<p><strong>Eventually, we&#8217;ll be using HMD&#8217;s. The goal for the Unity3D project, initially, was to explore what you can do with an AR/VR mirror-world; this is a project we are working on with Alcatel-Lucent, and demo&#8217;d at CTIA this year. It&#8217;s continuing to grow, though, and now includes a number of our projects, including some work on mobile social AR and soon, some performance and experience design projects in the area of AR ARG&#8217;s. It&#8217;s really quite interesting to imagine what you can do when you have an &#8220;MMO of the real world&#8221; (which we now have for part of campus) that supports both VR-style desktop access simultaneously with mobile AR access.</strong></p>
<p><strong>Tish: </strong>Have you taken another look at <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a> as a possible backend for augmented reality? Recently I talked to David Levine of IBM, and he is thinking about some possibilities to optimize OpenSim to dynamically load a large number of objects at once (i.e., how fast OpenSim can bulk load into an existing sim) and make it better suited to augmented reality/mirror world type projects.</p>
<p><strong>Blair: I haven&#8217;t looked at OpenSim recently. We will probably look at it this summer.</strong></p>
<p><strong>Tish:</strong> Why did you select Unity as a good client for augmented reality?</p>
<p><strong>Blair: Unity is a 3D game authoring environment so at some level it is no different from using Ogre, if all the associated stuff was just as well done. It has integrated physics, scripting, debugging, etc. &#8211; you can write code in javascript or C# or whatever. It has a good content pipeline, as well, and supports a range of platforms.</strong></p>
<p><strong>It has simple networking built in, so multiple unity engines can talk to each other but it is not a virtual world platform out of the box &#8211; there is no back end &#8230;</strong></p>
<p><strong>Tish: </strong>Someone described Unity to me as a great client waiting for a great backend. So what are you going to use as a back end?</p>
<p><strong>Blair: There is no real processing except in the client right now. We will eventually have to create a back end. We are thinking of using Darkstar, because someone on the Sun Wonderland community forums has already built a set of scripts connecting Unity to Darkstar.</strong></p>
<p><strong>But for us, we are not proposing right now to build a real product. This is research to demonstrate what you could do if you actually had the back end.</strong></p>
<p><strong>Tish:</strong> What are the most important aspects of the backend from your POV?</p>
<p><strong>Blair: We want to simulate a variety of the interesting aspects of the back end. So I very much care about notions of privacy and security and how these sorts of AR/VR Mirror Worlds would work in practice. But I care about those things as they impact user experience, not really about how we would actually implement them.</strong></p>
<p><strong>Tish:</strong> So looking at some of the big problems from the perspective of user experience &#8211; are we going to go through the same growing pains that the web and VWs have seen? For example, will we have to type in passwords to get into everyone&#8217;s little worlds&#8230;</p>
<p><strong>Blair: Well you know the SciFi background to this, you&#8217;ve mentioned it in other posts on your blog. Because when you look at the Rainbow&#8217;s End model where you have security certificates flying around, that is in effect what cookies and so on are now. You can authenticate yourself once and then have those certificates hang around. So you can easily imagine how it could be done. But the big question is how does that change user experience. There are all kinds of things that start coming into play &#8211; like what happens if nearby people see different things &#8211; it goes on and on!</strong></p>
<p><strong>Tish:</strong> Sounds like this is very valuable research. It seems to me that there will be a lot of investment soon in putting the pieces together to do location-based markerless AR, and it would be nice if we knew more about it from the user experience POV.</p>
<p>Isn&#8217;t it vital for a productive intersection between mobile AR and persistent mirror world spaces for us to have markerless AR? Aren&#8217;t we right at the beginning of people really saying, yeah, markerless AR is doable now? But it seems to me not many people are researching or working on fully immersive AR and its integration with mirror worlds.</p>
<p><strong>Blair: I think some of the AR community is thinking about this. There are probably people doing stuff in some other non-technical communities. It wouldn&#8217;t surprise me to find out that people in the digital performance or Ars Electronica world are thinking a little bit about these sorts of things, although not necessarily at the level of actually trying to build it, because they probably can&#8217;t right now. But they are experimenting with the precursors. My colleagues in digital media like to point out that this is often the purpose of digital art, to point out new directions and push the boundaries.</strong></p>
<p><strong>Obviously Science Fiction has explored the possibilities because that is what Rainbow&#8217;s End and the Matrix were all about.</strong></p>
<p><strong>Tish:</strong> and <a href="http://en.wikipedia.org/wiki/Denn%C5%8D_Coil" target="_blank">Denno Coil</a>&#8230;</p>
<p><strong>Blair: There has been some research &#8211; people like my adviser Steve Feiner up at Columbia, Mark Billinghurst in New Zealand, myself, and people at Graz University in Austria. But partly it has been so hard to do mobile AR up to now &#8211; so many people mock head-worn displays and can&#8217;t get past current technology &#8211; you have had to be willing to ignore the bulky backpacks and cables and batteries and so on. That is changing, which is good.</strong></p>
<p><strong>My current response to the anti-head-mounted-display people is: if 5 years ago you had told me that fabulously dressed people who care about their looks and wear stylish clothes would have big things hanging from their ears that blink bright blue light, so they could talk on the phone, many of us would have said you were crazy, because it would be ugly and so on. But because there is an intersection of demonstrable need and benefit&#8230;Bluetooth headsets are really useful, and the sort of early gestalt feeling that grew up around them &#8211; that people who use them are so important that they always have to be in touch, so they wear these things &#8211; means people accept them.</strong></p>
<p><strong>It will likely be a similar thing with head mounted displays. And I don&#8217;t know if it will be that people wear them so that they can read their mail while driving, god forbid. But it will be something. And when we get the 2nd generation of the wrap glasses that look more like sunglasses and are not bulky and so on, we will have the potential for them catching on, because you will look at them and you will think that the person is wearing them because they are doing x&#8230;</strong></p>
<p><strong>X might be surfing a virtual world or reading their email or keeping in touch, or being aware. It will happen. But they have to get unbulky enough and there has to be more than one important application, not just watching TV.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/karmablair-fix.jpg"><img class="alignnone size-medium wp-image-3787" title="karmablair-fix" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/karmablair-fix-300x227.jpg" alt="karmablair-fix" width="300" height="227" /></a><br />
</strong></p>
<p><em>Picture above shows an outside view of the KARMA AR system, the knowledge-based maintenance system Blair built in his first year of grad school (<strong>&#8220;first AR system Steve Feiner, Doree Seligmann, and I worked on&#8221;</strong>). Blair noted, &#8220;<strong>The Communications of the ACM paper on it (from 1993) is a pretty widely cited AR paper.&#8221;</strong></em></p>
<p><strong>Tish:</strong> I think the need for full-on transparent, immersive, wraparound, Gucci-stylish eyewear with a decent field of view is the elephant in the room in terms of realizing the full potential of augmented reality. There are a few new players in the field &#8211; <a href="http://www.sbglabs.com/" target="_blank">Digilens</a>, <a href="http://www.vuzix.com/home/index.html" target="_blank">Vuzix</a>, others? What is the progress in this area and what do you hope for in terms of near term solutions?</p>
<p><strong>Blair: I agree with that sentiment. I think that, in the near term, there is a lot we can do with handhelds, as we&#8217;ve been doing in the lab. However, because it&#8217;s awkward and tiring to hold up a device, even a small one, for any length of time, handhelds will only be good for &#8220;focused&#8221; uses of AR &#8211; such as the table-top games we&#8217;ve been doing, or the constellation viewing app that I heard came out recently for the Android G1. I don&#8217;t even see something like Wikitude as that compelling (beyond the &#8220;gee whiz&#8221; factor) for a handheld form factor. Many proposed AR apps only really become compelling when users have constant awareness of them, and that requires a see-through head-worn display.</strong></p>
<p><strong>I&#8217;ve seen the mockups of the Vuzix ones; they seem pretty interesting, and are getting to where early adopters could use them (they will be cheap enough, and will hopefully be good enough). Microvision&#8217;s virtual retinal display is also promising; the contact lens displays will be the most interesting, if anyone can ever make them work. I don&#8217;t know of anything else out there.</strong></p>
<h3><strong>&#8220;it&#8217;s not really a killer app you care about, it is the killer existence that all of the technology and small applications taken together facilitate&#8221;</strong></h3>
<p><strong>Tish:</strong> While location based services are accepted now and people are understanding that it is something that opens up a new relationship to everything, we still haven&#8217;t found the experience that will get everyone holding up their mobile devices?</p>
<p><strong>Blair: Well that is actually the killer problem. Gregory Abowd is one of my colleagues who does ubiquitous computing research here at Tech. Way back when we started the Aware Home project (<a href="http://www.awarehome.gatech.edu/">Aware Home Research Institute at Georgia Tech</a>), when I first got here about ten years ago, there was always this question of what is the killer app. So Gregory commented in a meeting once that it&#8217;s not really a killer app you care about, it is the killer existence that all of the technology and small applications taken together facilitate. It is not that any one of these AR demos we see, whether it is seeing your photos in the world or whatever, is important. It&#8217;s that when taken together, there is enough of a benefit that you would use the whole environment.</strong></p>
<p><strong>In the original context we were talking about an instrumented home, but it is the same thing here with AR.</strong></p>
<p><strong>The problem with the mobile phone as an AR device is that problem of awareness. If I have a head mount on and I walk down the street and there is a bunch of probably-not-useful-but-potentially-useful information floating by me, that&#8217;s a good thing, because I may see something that is useful or makes me think of something else. But if I have to hold up my phone to see if something might be interesting nearby, I will never hold up my phone, because at the time there is a high probability that there won&#8217;t be anything particularly important there. You might imagine you can get around this by using alerts or something like that, but then you overload whatever alert channel you use. For example, I forward maybe 5 or 6 people&#8217;s updates from Facebook to my phone &#8211; it started with my wife, a few friends, my brother &#8211; and the net result of that is I never get SMSs anymore, because when my phone buzzes, usually I ignore it because it is probably just somebody&#8217;s random Facebook update. So if we start overloading channels like that with &#8220;oh there might be something useful here in the real world, if you pick up the phone and look through it you will see it &#8230; and I will buzz you,&#8221; people just start ignoring the buzzes.</strong></p>
<p><strong>So it is a very hard problem if you think about the kinds of applications that people always imagine with global AR &#8212; names over people&#8217;s heads and other random information floating in the world &#8212; until you have a head mount and all that information is around you all the time. That is when those sorts of applications will actually happen.</strong></p>
<p><strong>Tish:</strong> <a href="http://curiousraven.squarespace.com/" target="_blank">Robert Rice</a> notes: <strong>&#8220;AR is inherently about who YOU are, WHERE you are, WHAT you are doing, WHAT is around you, etc.&#8221;</strong> (see my interview with Robert, <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;Is it &#8216;OMG Finally&#8217; for Augmented Reality?&#8221;</a>). And I think the iphone experience has laid the foundation for the increasing desire to experience the network wherever we are &#8211; and not be stuck behind a PC. We cannot perhaps do all we want to do yet. But even within the range of things we can do now, we are not even sure exactly what it is we want to do where yet, are we?</p>
<h3><strong>&#8220;imagine your iphone Facebook client supports AR and that all data on Facebook might be georeferenced &#8211; pictures, status updates etc&#8230;&#8230;.&#8221;</strong></h3>
<p><strong>Blair: Yes, that is a huge problem. I have been lucky to be able to teach two fun classes this year that let the students and me start to explore some of the potential that handheld AR might bring. Last fall I taught a handheld AR game design class &#8212; coordinated with a class at the Savannah College of Art and Design&#8217;s Atlanta campus &#8212; and we had the students build a sequence of prototype handheld AR games, which was a lot of fun. This spring I taught a mixed reality/augmented reality design class with Jay Bolter (a professor in the School of Literature, Communication, and Culture here at GT). Jay and I have been teaching this class off and on for about 9 years; this semester we decided to say to the students &#8220;imagine your iphone Facebook client supports AR and that all data on Facebook might be georeferenced &#8211; pictures, status updates etc.&#8221; and have them do projects aimed at such an environment.</strong></p>
<p><strong>Tish: </strong>Not many of our favorite social media today have much sense of location, do they? But Flickr is utilizing geo-referenced pictures to create vernacular maps &#8211; see The Shape of Alpha.</p>
<p><strong>Blair: Yes, that is because lots of cameras put geolocation data into the EXIF data, so they can extract it&#8230;</strong></p>
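<p><em>[The EXIF extraction Blair describes comes down to decoding the degree/minute/second rationals that cameras write into the GPS block. A minimal Python sketch of that decoding step &#8211; the sample coordinates below are invented for illustration, and no particular EXIF library is assumed:]</em></p>

```python
# EXIF stores GPS coordinates as three (numerator, denominator) rationals
# (degrees, minutes, seconds) plus a hemisphere reference such as 'N' or 'W'.
# This converts that encoding to the signed decimal degrees a mapping
# service like Flickr would index.
def dms_to_decimal(dms, ref):
    """dms: three (numerator, denominator) pairs; ref: 'N'/'S'/'E'/'W'."""
    degrees, minutes, seconds = (num / den for num, den in dms)
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ('S', 'W') else decimal

# Invented sample values, shaped like a photo's EXIF GPS tags
lat = dms_to_decimal(((40, 1), (45, 1), (3600, 100)), 'N')
lon = dms_to_decimal(((73, 1), (59, 1), (900, 100)), 'W')
print(round(lat, 2))   # 40.76
print(round(lon, 4))   # -73.9858
```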
<p><strong>Some mobile Twitter clients, like the one I use on my iphone, will let you add your location. But in general Facebook and other sites don&#8217;t have any notion of location. But if you look at all the things people do in Facebook, such as sending gifts and other games, it&#8217;s easy to imagine what these might look like with geo-referenced data. So, the high level project for the class is that the groups have to design experiences people might have using mobile AR Facebook. We told them to assume Facebook as it stands now, but add geolocation and AR to the client. The class boiled down to &#8220;What would you imagine people doing?&#8221; So it has been kind of fun.</strong></p>
<p><strong>And we are using Unity for the class too &#8211; the same infrastructure I am working on in my research linking mobile AR to persistent immersive mirror world type spaces &#8211; and we are having the students mock up what a mobile AR Facebook experience would be like.</strong></p>
<p><strong>Tish: </strong>Can you describe some of the ideas your class came up with that you think have potential? I know Ori mentioned that from the games class he liked <a href="http://www.youtube.com/watch?v=Rqcp8hngdBw&amp;feature=channel_page" target="_blank">Candy Wars.</a></p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/candywars-6.png"><img class="alignnone size-medium wp-image-3693" title="candywars-6" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/candywars-6-300x225.png" alt="candywars-6" width="300" height="225" /></a></p>
<p><em>Candy Wars</em></p>
<p><strong>Blair: In the end, they had a nice range of projects in the Spring class. One created tag clouds out of status messages over spaces, others looked at analogies to virtual pets and gift giving out in the world, one looked at leveraging geolocation to help with crowd-sourced cultural translation, and three groups did straight-up social games.</strong></p>
<p><strong>[See <a href="http://www.youtube.com/user/AELatGT" target="_blank">all of the projects from the handheld AR games class on YouTube here</a>]</strong></p>
<h3><strong>iphone, Android, NVidia Tegra devkits, or Texas Instruments&#8217; OMAP3 devkits?</strong></h3>
<p><strong>Tish:</strong> Is anyone in the class working on Android?</p>
<p><strong>Blair: Nobody is using Android because no-one in the class has the phones. We have AT&amp;T microcell infrastructure on campus. Some AT&amp;T people joke that we are better off than them, because we have a head office on campus so we can build applications in the network, which even people at AT&amp;T research can&#8217;t do. And because we have this infrastructure on campus, and a great relationship with AT&amp;T and the other sponsors, we have the ability to provision our own phones without having to pay for long-term contracts, which is vital for research and teaching.</strong></p>
<p><strong>Tish:</strong> So does this lock you into the iphone?</p>
<p><strong>Blair: Well the G1 is of course not AT&amp;T but it is GSM so we could probably buy them unlocked and put them on our AT&amp;T network. But the students I work with are much more interested in the iphone right now.</strong></p>
<p><strong>Tish:</strong> Is that because the iphone has the market?</p>
<p><strong>Blair: For me, the reason I am not interested in the G1 is that you can&#8217;t do AR on it &#8211; there is <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> and a few other apps, but it is all hideously slow. Worse, because the Java code isn&#8217;t compiled like it would be on the desktop, you can&#8217;t do computer vision with it, so you can&#8217;t do anything particularly interesting on the current commercial G1s. We could probably take the NVidia Tegra devkits or Texas Instruments&#8217; OMAP3 devkits (both are chipsets for next gen phones &#8212; high end graphics, fast processing) and install Android on those, and we may actually do that yet. But it seems like a lot of work right now, for not much benefit.</strong></p>
<p><strong><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/pastedgraphic.jpg"><img class="alignnone size-medium wp-image-3730" title="pastedgraphic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/06/pastedgraphic-300x166.jpg" alt="pastedgraphic" width="300" height="166" /></a><br />
</strong></p>
<p><em>Augmented Reality shooter game <strong>ARrrrr</strong> from Georgia Tech and SCAD Atlanta on the <strong>NVidia Tegra devkits</strong> &#8211; <a href="http://www.youtube.com/watch?v=cNu4CluFOcw" target="_blank">watch the demo on YouTube here</a>.</em></p>
<p><strong>Tish: </strong>Everyone seems very excited about the iphone OS 3.0 and the addition of a compass. A compass is pretty essential for AR, right?</p>
<p><strong>Blair: It is necessary if you can&#8217;t do other forms of outdoor tracking, but the problem is that the compass on the G1 isn&#8217;t very good, relatively speaking, and the iPhone one probably won&#8217;t be much better. It does not have very high accuracy, nor is it very fast (compared to, say, the high end 3D orientation sensors we use, from Intersense and MotionNode). As far as I can tell, it doesn&#8217;t even give full 3D orientation. I don&#8217;t have a G1 (although I have pre-ordered an iPhone 3Gs), but people have told me it only has absolute 2D orientation, so you can only line things up if you are careful. You can&#8217;t look around arbitrarily&#8230;</strong></p>
<p><strong>Tish: </strong>You can&#8217;t sweep your phone?</p>
<p><strong>Blair: You can look left and right, but if it doesn&#8217;t have full 3D orientation, you can&#8217;t go up and down. You can&#8217;t tilt it in weird directions. It is not fast in the way you would want if you were looking around quickly. So it is a nice demo. And it is good for what the Android people use it for, which is to let you browse Google Street View by looking around, which is actually really useful.</strong></p>
<p><strong>I think there are lots of really useful things you can do with such a compass.</strong></p>
<p><strong>And, it is clear that compass is a necessary feature if we want to do AR. Â It&#8217;s just not sufficient.</strong></p>
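<p><em>[A rough sketch of why full 3D orientation matters: given a gravity estimate from the accelerometer, a 3-axis magnetometer reading can be tilt-compensated into a heading, so the phone need not be held level. Axis and sign conventions vary by device, so treat this Python as an illustration of the math, not any particular phone&#8217;s API:]</em></p>

```python
import math

# With a 2D-only compass you must hold the device level; combining the
# accelerometer's gravity vector with the magnetometer lets you recover a
# heading at arbitrary tilt. One common formulation (conventions assumed):
def tilt_compensated_heading(ax, ay, az, mx, my, mz):
    """Accelerometer (ax..az) senses gravity; magnetometer (mx..mz) senses
    the Earth's field. Returns heading in degrees in [0, 360)."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, ay * math.sin(roll) + az * math.cos(roll))
    # Rotate the magnetic vector back into the horizontal plane
    mx_h = (mx * math.cos(pitch)
            + my * math.sin(pitch) * math.sin(roll)
            + mz * math.sin(pitch) * math.cos(roll))
    my_h = my * math.cos(roll) - mz * math.sin(roll)
    return math.degrees(math.atan2(-my_h, mx_h)) % 360

# Held level (gravity along z), field straight along the device's x axis:
print(tilt_compensated_heading(0, 0, 1, 1, 0, 0.4))  # 0.0
```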
<h3><strong>Outdoor Tracking and Markerless AR<br />
</strong></h3>
<p><strong>Tish:</strong> Isn&#8217;t it essential for markerless AR? I guess not &#8211; I just saw this post about <a href="http://artimes.rouli.net/2009/04/srengine-in-english.html" target="_blank">SREngine on Augmented Times</a>!</p>
<p>This wasn&#8217;t up when we spoke so perhaps you have some comments about what it brings to the table?</p>
<p><strong>Blair: Maybe. The folks at Nokia are working on outdoor tracking; they demoed some stuff at ISMAR last year on the N95 handsets that is all image-based. We are trying to do some work with them; one of my students is working on it. And probably Microsoft is going to do more on this as well &#8211; they had a video up showing that they are also working on vision-based techniques. If you give the phone the equivalent of those panoramic Google Street View images (assuming they are up-to-date) and you are standing at the right place, you don&#8217;t really need a compass; you can figure out which way you are looking by looking at the camera video. Ulrich Neumann (USC) did some work on tracking from panoramas years ago; I don&#8217;t know whatever became of it.</strong></p>
<p><strong>Regarding SREngine, that project appears to be a pretty simple first step, but is probably just a demo at this point, and limitations like &#8220;only works on static scenes&#8221; and &#8220;doesn&#8217;t work for simple scenes&#8221; mean it&#8217;s probably extracting some simple features out of the image and then matching those to some database. The trick would be getting this to work on a large scale, where the world changes a lot. It&#8217;s not obvious how to get there.</strong></p>
<p><strong>Tish:</strong> So forget RFID for AR&#8230;</p>
<p><strong>Blair: RFID is not really useful.</strong></p>
<p><strong>Tish:</strong> not at all?</p>
<p><strong>Blair: RFID is useful for telling you what things are near you. The problem is it doesn&#8217;t give you any directional information &#8211; it just tells you you&#8217;re in range of the tag. So you can use it to tell you when you are near a certain product, for example. It is useful in terms of telling you what thing you are near, and then you can load up a vision system or something else that will recognize that thing.</strong></p>
<p><strong>In that way, it could be useful as a good starting point.</strong></p>
<p><strong>Similarly for computer vision, the compass and the GPS are very useful for giving you an initial guess at what you may be looking at, which can then speed up the rest of the process. But computer vision by itself will not be a complete solution, because if I have my panoramic Google Street View (or whatever image database I use for tracking) and you are standing between me and the building, I am not going to see what I expect to see &#8211; I am going to see you.</strong></p>
<p><strong>So I think it is all going to be part of one big package &#8211; you are going to see accelerometers, digital compasses, and GPS, and then combine that with computer vision and other sensors, and then maybe we are going to start getting the things that we have always dreamed about. I like to show <a href="http://mi.eng.cam.ac.uk/~gr281/outdoortracking.html" target="_blank">this video</a> from the U. of Cambridge (work done by Gerhard Reitmayr and Tom Drummond) of an outdoor tracking demo because it gives a sense of what will be possible. Techniques like this will be an ingredient in the future of things. It becomes especially interesting when you have these highly detailed mirror worlds. It is sort of one of those chicken and egg problems: if I have a highly detailed model of the world, then techniques like theirs can be used to track. But that mirror world needs to be accurate or you can&#8217;t use it for tracking, and why would you create the mirror world if you couldn&#8217;t track?</strong></p>
<p><strong>Tish:</strong> I noticed in your comment to <a href="http://www.ugotrade.com/2009/01/17/is-it-%E2%80%9Comg-finally%E2%80%9D-for-augmented-reality-interview-with-robert-rice/" target="_blank">&#8220;my interview with Robert Rice&#8221;</a> that you said you thought it was important not to collapse AR into ubicomp &#8211; &#8220;forgetting what originally inspired us about AR&#8221; was, if I remember correctly, the suggestion you made. But aren&#8217;t ubiquitous computing and AR basically coextensive?</p>
<p>The <a href="http://www.ugotrade.com/2009/03/18/dematerializing-the-world-shadows-subscriptions-and-things-as-services-talking-with-mike-kuniavsky-at-etech-2009/" target="_blank">vision of ubicomp Mike Kuniavsky describes</a> &#8211; &#8220;sharing data through open APIs and the promise of embedded information processing and networking distributed through the environment&#8221; &#8211; demonstrates how much can be done with very little processing power. In its most immersive form, augmented reality requires a lot of processing power. I think we have all become very conscious about trying to minimize levels of consumption. Can you explain why you think people shouldn&#8217;t see AR as the Hummer (energy squandering indulgence) of ubiquitous computing?</p>
<p><strong>Blair: I think there will be a hierarchy of interfaces. You are going to have the rich Rainbow&#8217;s End-like experience &#8211; you are totally submerged in a mixed environment if you have a head mount on (it&#8217;s not going to be Rainbow&#8217;s End for a while) &#8211; but if you don&#8217;t have the head mount on, that information might be available to you in other ways, whether it is a 3D overlay using your handheld or just a 2D mashup with Google Maps. But there will be some circumstances and people who will want the compelling experience you can only get with the head mount.</strong></p>
<p><strong>Tish:</strong> Are you doing any research on how all these hierarchies of experiences will fit together &#8211; what aspects of this are you looking at?</p>
<p><strong>Blair: The thing that really needs to happen is you need to have this backend architecture that allows you to collect your data from different sources and aggregate it, much like the web. Right now Google Earth and Microsoft&#8217;s Virtual Earth are much like the old pre-web hypertext systems that were all centralized. And what we really need is the web equivalent, where Georgia Tech can publish their building models and IBM can publish their building models and their campus models, and your client can aggregate them &#8211; as opposed to Microsoft or IBM putting their building models into Google Earth and then somehow you get them out with Google&#8217;s Google Earth browser. That&#8217;s just not going to fly.</strong></p>
<p><strong>Tish:</strong> So what does it take then to get us to this backend architecture? Because I&#8217;m in total agreement.</p>
<p><strong>Blair: The nice thing about augmented reality versus virtual reality is that you don&#8217;t need everything modeled. You can do interesting AR apps like <a href="http://www.mobilizy.com/wikitude.php" target="_blank">Wikitude</a> with absolutely no world model.</strong></p>
<p><strong>Tish:</strong> So that means we can start with what we have &#8211; utilize cloud services without a full blown backend architecture?</p>
<p><strong>Blair: It may very well be that Google Earth and MS Virtual Earth act as a portal, because people go and build models and link them with KML, and they can see them in Google Earth but they can also download the KML files through some other channel. So it may be that those things end up being something that feeds some of this along. Then people start seeing a benefit to having these highly accurate models, so then you start integrating the Microsoft Photosynth stuff and leveraging photographs to generate models.</strong></p>
<p><strong>It&#8217;s just that keeping up with it and building it in real time is the challenge. A lot of folks think it will be tourist applications, where there are models of Times Square and models of Central Park and models of Notre Dame and the big square around that area in Paris and along the river and so on, or models of Italian and Greek history sites &#8211; the virtual Rome. As those things start happening and people start building onto the edges, and when Microsoft Photosynth and similar technologies become more pervasive, you can start building the models of the world in a semi-automated way from photographs and more structured, intentional drive-bys and so on. So I think it&#8217;ll just sort of happen, as long as there&#8217;s a way to have the equivalent of Mosaic for AR &#8211; the original open source web browser &#8211; that allows you to aggregate all these things. It&#8217;s not going to be a Wikitude. It&#8217;s not going to be a thing that lets you get a certain kind of data from a specific source; rather, it&#8217;s the browser that allows you to link through into these data sources.</strong></p>
<p><strong>So it&#8217;s that end that interests me. It&#8217;s questions like &#8220;What is the user experience?&#8221; How do we create an interface that allows us to layer all these different kinds of information together, such that I can use it for all my things? I imagine that I open up my future iphone and I look through it. The background of the iphone, my screen, is just the camera, and it&#8217;s always AR.</strong></p>
<p><strong>I want the camera on my phone to always be on, so it&#8217;s not just that when I hold it a certain way it switches to camera mode, but literally it&#8217;s always in video mode so whenever there&#8217;s an AR thing it&#8217;s just there in the background.</strong></p>
<p><strong>When we can do that, I can have little alerts, so when I have my phone open I can look around and see it independent of the buttons and things that I&#8217;m tapping and pushing to use the phone. That&#8217;ll be a really different kind of experience.</strong></p>
<p><strong>Of course, it is not known yet if the next gen iphone will have an open video API. And of course, the current camera is pretty low quality, so why would they give it an open API until they put in a better camera? I am not expecting anything one way or the other until the 3Gs comes out and people start using it.</strong></p>
<p><strong>But there are many things about the iphone 3.0 OS that are hugely important, like the discovery API that allows people to play games with other people nearby, that don&#8217;t have much to do with AR.</strong></p>
<p><strong>Tish:</strong> You have an iphone AR virtual pet application, ARf.</p>
<p><a href="http://www.macrumors.com/2009/04/08/video-in-and-magnetometers-could-introduce-interesting-iphone-app-possibilites/" target="_blank">Macrumors wrote it up</a> and suggested that the neg gen iphone will have compass and open video API.Â  What are your plans for ARf?</p>
<p><strong>Blair: ARf is just a demo right now. I know what we&#8217;d like to do with it, but it would require tons of work; imagine what it would take to do a multiplayer, social version of Nintendogs. It&#8217;s not clear what we&#8217;d really learn by doing that, but there are lots of other game ideas we have that we want to explore.</strong></p>
<p><strong>Tish:</strong> I think it was on Twitter where Tim O&#8217;Reilly said, &#8220;saying everything must have a RFID tag is like saying we can&#8217;t recognize each other unless we wear name tags. Look at what&#8217;s happening with speech recognition, image recognition et al. and tell me you really think we need embedded metadata.&#8221; What would you say to that?</p>
<p><strong>Blair: I think that whatever extra data is there will be used. So if we put machine readable labels on some objects then they&#8217;ll be used if they make the identification and tracking problem easier. But it&#8217;s pretty clear that people are already working on tracking and so on.</strong></p>
<p><strong>A lot of these mobile AR apps are clearly putting ideas in people&#8217;s minds about things that won&#8217;t really be doable in the near future &#8211; like being able to look down the aisle of a store and have it recognize all of the products. Given the distances and complexity of the scene, the number of pixels devoted to each of those objects, and so on, you just can&#8217;t recognize things in that context. But suppose I&#8217;m standing in front of a small set of objects, or looking at one thing, or standing in front of a building. Or suppose I&#8217;m in the store: imagine an enhanced location API that can tell me within a few feet where I am, combined with some use of the discovery API that allows the store to tell your device you&#8217;re in the toothpaste section. Now you only have to look for different brands of toothpaste, so you can recognize the big letters &#8220;Crest&#8221; or whatever. It&#8217;s all about constraining the problem.</strong></p>
<p><strong>That&#8217;s why I like that particular piece of Drummond&#8217;s work, the tracking web site I mentioned above. The general tracking problem of looking around and recognizing objects and tracking is still impossible. But if I know roughly what direction I&#8217;m looking in and I have a good estimate of my position, and I have models of what I should be seeing when I look in that direction, then it becomes a tractable problem. And so it&#8217;s not that a compass and a GPS are 100% necessary. But if you have them it certainly makes things possible that you wouldn&#8217;t otherwise be able to do.</strong></p>
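<p><em>[The &#8220;constraining the problem&#8221; idea can be sketched in a few lines: coarse location prunes the recognition database before any vision work runs. All names and data in this Python sketch are invented for illustration; the &#8220;OCR&#8221; step is stubbed out as plain text matching:]</em></p>

```python
# Instead of matching a camera frame against every product the store sells,
# use coarse location (e.g. "you are in the toothpaste aisle", delivered by
# something like the store-to-device channel Blair describes) to shrink the
# candidate set the vision system must consider.
PRODUCT_DB = {
    'toothpaste': ['Crest', 'Colgate', 'Aquafresh'],
    'cereal': ['Cheerios', 'Corn Flakes'],
}

def candidates_for(section):
    """Return only the recognition templates relevant to this aisle."""
    return PRODUCT_DB.get(section, [])

def recognize(ocr_text, section):
    """Stand-in for a vision stage: match big package lettering, but only
    against the brands plausible in the current aisle."""
    return [brand for brand in candidates_for(section)
            if brand.lower() in ocr_text.lower()]

print(recognize('...CREST whitening...', 'toothpaste'))  # ['Crest']
```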
<p><strong>Imagine, for example, if there&#8217;s a new version of GPS &#8211; I just noticed that some of the new satellites going up have this new L5 channel. There are the L1 &amp; L2 signals that the military and civilian receivers use, and they added this civilian L5 signal, which should make GPS more accurate. I haven&#8217;t found anything online that says how much more accurate.</strong></p>
<p><strong>But someday, hopefully, all GPS will get to be the quality of survey-grade GPS. Right now, if you get an RTK GPS from one of the companies that make survey-grade GPS systems, they give you position estimates in the range of two centimeters, and update 10 to 20 times a second. When you have that kind of positional accuracy, combined with the kind of orientational accuracy you get from the orientation sensors we use in the lab from Intersense and MotionNode, everything is easier because you&#8217;ve pretty much got absolute position. You put that into a phone, and now when I look up it&#8217;s still not perfectly aligned, because there will still be errors (especially in orientation, since the compasses are affected by metal and other magnetic noise). But it does mean that if you and I are standing 5 feet apart from each other and look at each other, I can pretty much put a little smiley face above your head. Whereas now, with GPS, if I look at you and we&#8217;re 5 feet apart, our GPS receivers might think we&#8217;re on opposite sides of each other, because they&#8217;re only accurate to two to five meters.</strong></p>
<p><strong>And that&#8217;s depending on the time of day and weather!</strong></p>
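<p><em>[Some back-of-envelope arithmetic on the smiley-face example: with each receiver off by a few meters and the two people only 5 feet (about 1.5 m) apart, the worst-case bearing error is huge, while at 2 cm RTK accuracy it stays under two degrees. A quick Python check of that rough model:]</em></p>

```python
import math

# Rough worst case: both position estimates pushed sideways by `err`
# metres in opposite directions, for two people standing `dist` metres
# apart. The resulting angular error of an overlay anchored to the other
# person is atan(2*err / dist).
def worst_bearing_error_deg(err, dist):
    return math.degrees(math.atan2(2 * err, dist))

feet5 = 1.52  # 5 feet in metres
print(worst_bearing_error_deg(3.0, feet5))   # consumer GPS, ~3 m error: huge
print(worst_bearing_error_deg(0.02, feet5))  # RTK GPS, ~2 cm error: tiny
```

With ~3 m consumer-GPS error the overlay can point more than 75 degrees off (and in practice the positions can swap entirely, as Blair says), while with 2 cm RTK error it is off by well under two degrees.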
<p><strong>Putting RFID tags everywhere is easy; the problem is the readers &#8211; they currently require lots of power and they have a limited range. Sprinkling RFID tags everywhere is fine, but you have to be able to activate those tags and read back the signal. In certain contexts it works.</strong></p>
<p><strong>Tish:</strong> And one final question! What do you think can be done about beginning to think about standards for AR? Is there a meaningful discussion going on yet? Thomas Wrobel left this comment on my blog recently, and I was wondering what your position was on some of the ideas he raises.</p>
<p>Wrobel wrote, <em>&#8220;The AR has to come to the users, they can&#8217;t keep needing to download unique bits of software for every bit of content! We need an AR Browsing standard that lets users log into and out of channels (like IRC) and toggle them as layers on their visual view (like Photoshop). Channels need to be public or private, hosted online (making them shared spaces) or offline (private spaces). They need to be able to be both open (chat channel) or closed (city map channel) as needed. Created by anyone anywhere. Really IRC itself provides a great starting point. Most data doesn&#8217;t need to be persistent, after all. I look forward to seeing the world through new eyes. I only hope I will be toggling layers rather than alt+tabbing and only seeing one &#8220;reality addition&#8221; at a time.&#8221;</em></p>
<p><strong>Blair: I agree with him, in principle. But I&#8217;m not sure there&#8217;s a point yet. It can&#8217;t hurt to try, of course, from a research perspective, and I&#8217;m interested in the experience such an infrastructure would enable (as we&#8217;ve talked about already).</strong></p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2009/06/12/mobile-augmented-reality-and-mirror-worlds-talking-with-blair-macintyre/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>HomeCamp 2: Home Energy Management and Distributed Sustainability</title>
		<link>https://www.ugotrade.com/2009/04/24/homecamp-2-home-energy-management-and-distributed-sustainability/</link>
		<comments>https://www.ugotrade.com/2009/04/24/homecamp-2-home-energy-management-and-distributed-sustainability/#comments</comments>
		<pubDate>Fri, 24 Apr 2009 19:14:16 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[Bar Camp]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[CurrentCost]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[HomeCamp]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[internet of things]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[Virtual HomeCamp]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[Add new tag]]></category>
		<category><![CDATA[distributed sustainability]]></category>
		<category><![CDATA[electricity 2.0.]]></category>
		<category><![CDATA[green technology]]></category>
		<category><![CDATA[home energy management]]></category>
		<category><![CDATA[intelligent energy management]]></category>
		<category><![CDATA[living greener]]></category>
		<category><![CDATA[Pachube]]></category>
		<category><![CDATA[sustainable interaction design]]></category>
		<category><![CDATA[TweetaWatt]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=3423</guid>
		<description><![CDATA[HomeCamp is a home hacking, automation and green technology community that will be gathering in London tomorrow, Saturday 25th April 2009, 10am until 6pm BST (GMT + 1), and in an OpenSim event running alongside for virtual participation, to brainstorm new possibilities for distributed sustainability, creative smart meters, monitoring, graphing and visualizing energy usage. More [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-31.png"><img class="alignnone size-medium wp-image-3424" title="picture-31" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-31-299x300.png" alt="picture-31" width="299" height="300" /></a></p>
<p><a rel="nofollow" href="http://homecamp.org.uk/">HomeCamp</a> is a home hacking, automation and green technology community that will be <a href="http://maps.google.co.uk/maps?f=q&amp;source=s_q&amp;hl=en&amp;geocode=&amp;q=65+-+71+Scrutton+Street,+London,+EC2A+4PJ&amp;sll=51.509912,-0.129361&amp;sspn=0.100214,0.30899&amp;ie=UTF8&amp;ll=51.524379,-0.080895&amp;spn=0.006582,0.019312&amp;z=16&amp;iwloc=addr" target="_blank">gathering in London</a> tomorrow, Saturday 25th April 2009, 10am until 6pm BST (GMT + 1), and in an <a href="http://homecamp.pbwiki.com/Virtual-Home-Camp">OpenSim event running alongside for virtual participation</a>, to brainstorm new possibilities for distributed sustainability, creative smart meters, monitoring, graphing and visualizing energy usage.</p>
<p class="MsoNormal">More details and videos on the <a href="http://homecamp.org.uk" target="_blank">blog.</a> <a href="http://homecamp.pbwiki.com/" target="_blank">The wiki, which includes signup</a>, is the main portal to all the online activity.<a href="http://homecamp.pbwiki.com/"></a></p>
<p>As James Governor notes <a href="http://www.redmonk.com/jgovernor/2009/04/24/homecamp-returns/" target="_blank">here</a>:</p>
<blockquote><p><span lang="EN-GB">there has been a huge amount of code and applications released focused purely on using technology for home energy monitoring and automation. We have an active Google group and quite a few videos and content showcasing the various applications and hardware currently being used by geeks to save money and live greener.</span></p></blockquote>
<p><span lang="EN-GB">Now the challenge is to see how this seedling home energy management movement</span><span lang="EN-GB"> can </span><span lang="EN-GB">really grow into widely adopted distributed sustainability solutions that </span><span lang="EN-GB">everyone can use, and participate in.</span></p>
<p>Both <a href="http://www.yellowpark.net/cdalby/index.php/about/" target="_blank">Chris Dalby</a> (<a href="http://www.yellowpark.net/cdalby/index.php/2009/04/23/homecamp-2-is-this-saturday/" target="_blank">see here</a>), <a href="http://andypiper.wordpress.com/2009/04/24/home-camp-mark-2/" target="_blank">Andy Piper</a>, James Governor of <a href="http://www.redmonk.com/jgovernor/" target="_blank">Monkchips</a> (<a href="http://www.redmonk.com/jgovernor/2009/04/24/homecamp-returns/" target="_blank">see here</a>), and Tom Raftery of <a href="http://greenmonk.net/" target="_blank">GreenMonk</a> (<a href="http://greenmonk.net/homecamp-ii/" target="_blank">see here</a>), have posted on tomorrow&#8217;s <a href="http://homecamp.pbwiki.com/" target="_blank">HomeCamp</a> event. So I am just going to add some quick notes, especially to highlight some of what will be going on virtually for those of you, like me, who can&#8217;t make it to London.</p>
<p>You can tune in either on the live video ustream, or sign up on <a href="http://reactiongrid.com/">ReactionGrid </a>and join the <a href="http://homecamp.pbwiki.com/Virtual-Home-Camp">OpenSim event</a>. Also, you can keep up on what is happening on Twitter #homecamp. I highly recommend that you catch Tom Raftery&#8217;s talk which will be streamed from Spain live into the London meeting, the OpenSim event on ReactionGrid, and Ustream. Tom Raftery, a leading Green technology analyst at <a href="http://redmonk.com/" target="_blank">RedMonk</a> <a href="http://greenmonk.net/" target="_blank">(see also GreenMonk</a>), will be picking up, in depth, on some themes raised in his brilliant ETech 2009 presentation, <a href="http://en.oreilly.com/et2009/public/schedule/detail/5655" target="_blank">&#8220;Electricity 2.0: Applying the Lessons of the Web to Our Energy Networks.&#8221;</a></p>
<p class="MsoNormal"><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/tweetawatt.jpg"><img class="alignnone size-medium wp-image-3425" title="tweetawatt" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/tweetawatt-300x162.jpg" alt="tweetawatt" width="300" height="162" /></a></p>
<p class="MsoNormal">There will be homecampers dropping in to virtual homecamp in ReactionGrid throughout the day, including <a href="http://blogs.ipona.com/chris/" target="_blank">Chris Hart (the awesome &#8220;girl-geek&#8221;@dstrawberrygirl)</a>, <a href="http://mikethebee.mevio.com/" target="_blank">MiketheBee</a>, and <a href="http://www.cminion.com/wordpress/" target="_blank">Cminion</a>, who has a number of cool projects to demo, including <a href="http://www.cminion.com/wordpress/?p=43" target="_blank">his energy turbines</a>.Â  <a href="http://www.gomaya.com/glyph/" target="_blank">Dave Pentecost</a> (pictured above with his <a href="http://twitter.com/tweetawatt" target="_blank">Tweetawatt</a>, <a href="http://www.pachube.com/" target="_blank">Pachube</a> Orb) and I (<a href="http://docs.google.com/Presentation?id=dhj5mk2g_214g48q37hj" target="_blank">see our presentation for EarthWeek SL here</a>) plan to be at Virtual Homecamp on ReactionGrid between 9am and 10.30am EST. Dave has done a number of cool energy monitoring hacks including a <a href="http://www.pachube.com/" target="_blank">Pachube</a> link to and from <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a>.</p>
<p><span class="title">Also keep your eye on Dave&#8217;s blog, <a href="http://www.gomaya.com/glyph/" target="_blank">The Daily Glyph</a>, for what&#8217;s new in distributed sustainability. Dave just posted some great links on Sustainable Interaction, design</span> and work by ITP researchers and others in sustainable use of technology.</p>
<p><a title="Sustainable Interaction | Main / Papers" href="http://itp.nyu.edu/sustainability/interaction/Main/Papers">Sustainable Interaction | Main / Papers</a></p>
<p><a title="Sustainable interaction design | Sustainable Minds" href="http://www.sustainableminds.com/category/categories/sustainable-interaction-design">Sustainable interaction design | Sustainable Minds</a></p>
<p><a title="Design For the Other 90% | Cooper-Hewitt, National Design Museum" href="http://other90.cooperhewitt.org/">Design For the Other 90% | Cooper-Hewitt, National Design Museum</a></p>
<p class="MsoNormal">If you are in London, look out for Oliver Goh of <a href="http://www.shaspa.com/" target="_blank">Shaspa</a> as Oliver will be at Homecamp in London. As I mentioned in <a href="http://www.ugotrade.com/2009/04/19/sensor-networks-and-sustainability-connecting-real-virtual-mobile-and-augmented-reality/" target="_blank">my previous post</a>, Oliver will soon be launching both Shaspa commmunity and enterprise hardware and software packages for &#8220;Intelligent Energy Management.&#8221;</p>
<p class="MsoNormal"><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-35.png"><img class="alignnone size-medium wp-image-3428" title="picture-35" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-35-300x229.png" alt="picture-35" width="300" height="229" /></a></p>
<p>For a bit of homecamp history, James Governor (picture below from <a href="http://chinposin.com/home/monkchips" target="_blank">Chinposin)</a>, recapsÂ  some of the successes ofÂ  the first HomeCamp <a href="http://www.redmonk.com/jgovernor/2009/04/24/homecamp-returns/" target="_blank">here</a>.</p>
<p>And last but not least, a big thanks to sponsors, <a href="http://currentcost.co.uk/">CurrentCost</a>, <a href="http://greenmonk.net/">Greenmonk</a>, <a href="http://www.pachube.com/">Pachube</a>, <a href="http://www.onzo.co.uk/" target="_blank">Onzo</a>, and <a href="http://reactiongrid.com/">ReactionGrid</a>,Â  and media partner <a href="http://theattick.tv/" target="_blank">theattick.tv</a> who are making the London and virtual homecamp events possible.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-33.png"><img class="alignnone size-medium wp-image-3426" title="picture-33" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/picture-33-294x300.png" alt="picture-33" width="294" height="300" /></a></p>
<p class="MsoNormal"><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/04/tweetawatt.jpg"></a></p>
<p class="MsoNormal"><a href="http://homecamp.pbwiki.com/"></a></p>
<p class="MsoNormal">
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2009/04/24/homecamp-2-home-energy-management-and-distributed-sustainability/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>People Meet People Meet Big Data: ScienceSim Explores Collaborative High Performance Computing</title>
		<link>https://www.ugotrade.com/2009/02/11/people-meet-people-meet-big-data-sciencesim-explores-collaborative-high-performance-computing/</link>
		<comments>https://www.ugotrade.com/2009/02/11/people-meet-people-meet-big-data-sciencesim-explores-collaborative-high-performance-computing/#comments</comments>
		<pubDate>Wed, 11 Feb 2009 22:40:02 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[Intel in Virtual Worlds]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[nanotechnology]]></category>
		<category><![CDATA[Open Grid]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open protocols for virtual worlds]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Open Source Virtual Worlds]]></category>
		<category><![CDATA[open standards for virtual worlds]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Participatory Culture]]></category>
		<category><![CDATA[science outreach in virtual worlds]]></category>
		<category><![CDATA[scientific simulation in virtual worlds]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[Virtual Realities]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[virtual worlds in Japan]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[big data]]></category>
		<category><![CDATA[collaboration and big data]]></category>
		<category><![CDATA[collaborative visualization]]></category>
		<category><![CDATA[haptic interfaces for virtual worlds]]></category>
		<category><![CDATA[Hypergrid]]></category>
		<category><![CDATA[linked data]]></category>
		<category><![CDATA[modelling complex systems]]></category>
		<category><![CDATA[n-body simulation]]></category>
		<category><![CDATA[Piet Hut]]></category>
		<category><![CDATA[rapid data movement in virtual worlds]]></category>
		<category><![CDATA[ScienceSim]]></category>
		<category><![CDATA[scientific simulation]]></category>
		<category><![CDATA[steering big data simulations from virtual worlds]]></category>
		<category><![CDATA[steering virtual worlds with brain waves]]></category>
		<category><![CDATA[super computing conference]]></category>
		<category><![CDATA[supercomputing]]></category>
		<category><![CDATA[Wilf Pinfold]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=2855</guid>
		<description><![CDATA[Wilfred Pinfold, Director, Extreme Scale Programs for Intel, and the Supercomputing Conference general chair, is working with some Intel colleagues to make a project called ScienceSim the centerpiece of a special workshop event at the SC09 conference (see Supercomputing Conference, an ACM and IEEE Computer society sponsored event). Recently, I interviewed Wilf Pinfold (see interview [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gwave_lg.jpg"><img class="alignnone size-full wp-image-2861" title="gwave_lg" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gwave_lg.jpg" alt="gwave_lg" width="540" height="540" /></a></p>
<p>Wilfred Pinfold, Director, Extreme Scale Programs for Intel, and the<em> </em><em><a href="http://sc08.supercomputing.org/">Supercomputing Conference</a></em> general chair, is working with some Intel colleagues to make a project called <a href="http://www.sciencesim.com/">ScienceSim</a> the centerpiece of a special workshop event at the SC09 conference (<em>see </em><em><a href="http://sc08.supercomputing.org/">Supercomputing Conference</a>, an ACM and IEEE Computer society sponsored event)</em>.</p>
<p>Recently, I interviewed Wilf Pinfold (see interview below), Mic Bowman (also <a href="../../2008/09/15/interview-with-mic-bowman-intel-the-future-of-virtual-worlds/">see my previous interview here</a>), and John A. Hengeveld (see interview below). I wanted to find out the underlying goals of this SC conference program. Why are members of the SC community being encouraged to participate in the ScienceSim environment? What projects are beginning to emerge? And what are Intel&#8217;s goals in giving infrastructure support to further the conversation between high performance computing and collaborative virtual worlds?</p>
<p>The vision of creating new ways to collaborate and interact with big data does seem to be one of the more significant steps we can take at a time when we find many of our most complex systems roiling and threatening total collapse. As Tim O&#8217;Reilly has pointed out &#8211; from financial markets to the climate, the complex systems we depend on for our survival seem to be reaching their limits.</p>
<p>But how can we get from the place we are now &#8211; <a href="http://www.youtube.com/watch?gl=GB&amp;hl=en-GB&amp;v=gM4fmL6dLdY" target="_blank">see this example of an n-body simulation in OpenSim</a> &#8211; to the point where we can collaboratively steer big data simulations of climate change, financial markets, or the depths of the universe from our visualizations? The picture opening this post is a:</p>
<blockquote><p><em>Frame from a 3D simulation of gravitational waves produced by merging black holes, representing the largest astrophysical calculation ever performed on a NASA supercomputer. The honeycomb structures are the contours of the strong gravitational field near the black holes. Credit: C. Henze, NASA</em></p></blockquote>
<p>Wilf Pinfold explained to me that part of the reason to begin a dialogue on collaborative visualization at SC &#8217;09 is that supercomputing communities (which tend to be highly skilled and visionary) have played key roles in internet development in the past. Wilf pointed out that key browser technology developed out of these communities in the early days of the internet &#8211; see <a href="http://en.wikipedia.org/wiki/Mosaic_(web_browser)" target="_blank">this Wikipedia entry</a>, which gives background on the role of NCSA (the National Center for Supercomputing Applications).</p>
<p>The hope is that, while there are many obstacles to overcome, the supercomputing community has both the skills and the motivation to create collaborative environments capable of the kind of rapid data movement that scientific/big data visualization needs. Solving the problems of realtime collaborative interaction with big data will have many ramifications for the way we understand virtual reality, the metaverse, and virtual worlds (all these terms are becoming increasingly inadequate for cyberspace in the age of ubiquitous computing, an argument I will make in another post!).</p>
<p>There have already been a number of blogs on ScienceSim (see <a href="http://www.virtualworldsnews.com/2008/11/intel-creating-sciencesim-on-opensim.html" target="_blank">Virtual World News</a>, <a href="http://nwn.blogs.com/nwn/2009/02/intel-outside-.html" target="_blank">New World Notes</a>, <a href="http://www.vintfalken.com/intel-using-opensim-for-immersive-science-project/" target="_blank">Vint Falken</a>, and <a href="http://daneel-ariantho.blogspot.com/2009/02/sciencesim.html" target="_blank">Daneel Ariantho</a>). There have also been Intel blogs &#8211; <a href="http://blogs.intel.com/research/2009/01/sciencesim.php" target="_blank">see this post</a> by John A. Hengeveld (a senior business strategist working with Intel planners and researchers to accelerate the adoption of Immersive Connected Experiences). And Intel CTO <a href="http://blogs.intel.com/research/2008/11/immersive_science.php" target="_blank">Justin Rattner&#8217;s pos</a>t announcing the project this November.</p>
<p>But to blow my own horn a little, I think I was the first to blog the encounter between <a href="http://opensimulator.org/">OpenSim</a> and supercomputing (an encounter I to some degree provoked by making the introductions) &#8211; <a href="http://www.ugotrade.com/2008/07/19/astrophysics-in-virtual-worlds-implementing-n-body-simulations-in-opensim/" target="_blank">see this post</a>. So I have been following the ScienceSim initiative with great interest.</p>
<p>Very shortly after N-body astrophysicists Piet Hut and Jun Makino &#8211; creators of GRAPE (an acronym for &#8220;gravity pipeline,&#8221; and an intended pun on the Apple line of computers), a supercomputer that will <a href="http://grape.mtk.nao.ac.jp/grape/news/ABC/ABC-cuttingedge000602.html" target="_blank">become one of the fastest supercomputers in the world (again)</a> &#8211; met <a href="http://www.genkii.com/" target="_blank">Genkii</a>, a Tokyo-based strategic company working with OpenSim, the first N-body simulation appeared in OpenSim. And in a matter of weeks <a href="http://www.youtube.com/watch?v=gM4fmL6dLdY" target="_blank">this video went up on YouTube</a> &#8211; the result of a collaboration between MICA and Genkii. But the nirvana of being able to create visualizations using real time data from supercomputers that can be steered from a collaborative environment is still a ways off.</p>
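<p>For readers curious what GRAPE-class hardware actually accelerates in a run like this: the core of direct N-body integration is the O(N^2) sum of pairwise gravitational forces, stepped with a symplectic integrator. A minimal sketch in simulation units (illustrative only &#8211; this is not the MICA/Genkii code):</p>

```python
# Minimal direct-summation N-body step. GRAPE-class hardware
# accelerates exactly this O(N^2) pairwise force loop.
import math

G = 1.0      # gravitational constant in simulation units
EPS = 1e-3   # softening length, avoids singular close encounters

def accelerations(pos, mass):
    """Pairwise gravitational accelerations, O(N^2)."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + EPS * EPS
            inv_r3 = 1.0 / (math.sqrt(r2) * r2)
            for k in range(3):
                acc[i][k] += G * mass[j] * dx[k] * inv_r3
    return acc

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick leapfrog step (symplectic, good for orbits)."""
    acc = accelerations(pos, mass)
    for i in range(len(pos)):
        for k in range(3):
            vel[i][k] += 0.5 * dt * acc[i][k]   # half kick
            pos[i][k] += dt * vel[i][k]          # full drift
    acc = accelerations(pos, mass)
    for i in range(len(pos)):
        for k in range(3):
            vel[i][k] += 0.5 * dt * acc[i][k]   # half kick
    return pos, vel
```

<p>Because every particle interacts with every other particle each step, the cost grows quadratically with N &#8211; which is why special-purpose pipelines like GRAPE, and later supercomputers, matter for large star clusters.</p>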
<p>Supercomputing communities tend to be geographically very dispersed, and researchers often find themselves far from simulation facilities, so there are both the motivation and the skills to pioneer new tools for collaborative visualization. I know that astrophysicists certainly see their value (Piet Hut has some profound ideas on this). Astrophysicist Piet Hut and others (<a href="http://www.ugotrade.com/2008/07/19/astrophysics-in-virtual-worlds-implementing-n-body-simulations-in-opensim/b" target="_blank">see here for more</a>) have been pioneering the use of VWs for collaboration. There are two virtual world organizations, both founded by <span class="nfakPe">Piet</span> Hut and collaborators, that are currently exploring the use of OpenSim for scientific visualizations. One is specifically aimed at astrophysics: MICA, the <a href="http://www.mica-vw.org/" target="_blank">Meta Institute for Computational Astrophysics</a>. The other is aimed broadly at interdisciplinary collaborations in and beyond science: <a href="http://www.kira.org/" target="_blank">Kira</a>, a 12-year-old organization focused on &#8216;science in context.&#8217; As of last week, there are two weekly workshops sponsored jointly by Kira and MICA that explore the use of OpenSim, ScienceSim, and other virtual worlds. One of them is <a href="http://www.kira.org/index.php?option=com_content&amp;task=view&amp;id=124&amp;Itemid=154" target="_blank">&#8220;Stellar Dynamics in a Virtual Universe Workshop&#8221;</a> and the other is <a href="http://www.kira.org/index.php?option=com_content&amp;task=view&amp;id=119&amp;Itemid=149" target="_blank">&#8220;ReLaM: Relocatable Laboratories in the Metaverse.&#8221;</a></p>
<p>MICA was founded two years ago by <span class="nfakPe">Piet</span> Hut within the virtual world of <a href="http://qwaq.com" target="_blank">Qwaq Forums</a> (see the paper <a href="http://arxiv.org/abs/0712.1655" target="_blank">&#8220;Virtual Laboratories and Virtual Worlds&#8221;</a>). The Kira Institute is much older: it was founded in 1997. Later this month, on February 24, Kira will celebrate its 12th anniversary with a presentation of talks, a panel discussion, and a series of workshops. See the <a href="http://www.kira.org/index.php?option=com_content&amp;task=view&amp;id=83&amp;Itemid=113" target="_blank">Kira Calendar</a> for the main event, and the Kira Japan branch for a <a href="http://www.kirajapan.org/event/" target="_blank">special mixed RL/SL</a> event in Tokyo. During both events, Junichi Ushiba will give a talk about his research in which <a href="http://nwn.blogs.com/nwn/2007/10/the-second-life.html" target="_blank">he let paralyzed patients steer avatars using only brain waves</a>.</p>
<p>Other early adopters of ScienceSim include Tom Murphy, who teaches computer science at Contra Costa College. Prior to teaching, Tom spent 35+ years working for supercomputer manufacturers. Tom said:</p>
<blockquote><p>it is very natural for me to find significantly new ways to visualize and interact with scientific mathematical models via ScienceSim and the OpenSim software behind it. ScienceSim also allows us to interact with each other and teach students in new ways.</p></blockquote>
<p>Also Charlie Peck, chair of the SC09 Education Program, (his day job is teaching computer science at Earlham College in Richmond, IN), is working with Wilf Pinfold, Tom Murphy and others &#8220;to explore how 3D Internet/metaverse technology can be used to support science education and outreach.&#8221;</p>
<p><a href="http://www.ics.uci.edu/~lopes/" target="_blank">Cristina Videira Lopes</a>, University of Irvine, is doing very interesting workÂ  on road and pedestrian traffic simulations. Crista is also the creator of <a href="http://opensimulator.org/wiki/Hypergrid" target="_blank">hypergrid in OpenSim</a>,</p>
<h3>People Meet People Meet Data: A Conversation With Mic Bowman</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/sciencesim_002_thumb1.png"><img class="alignnone size-full wp-image-2908" title="sciencesim_002_thumb1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/sciencesim_002_thumb1.png" alt="sciencesim_002_thumb1" width="404" height="239" /></a><em></em><br />
<em>Screenshot of ScienceSim from <a href="http://daneel-ariantho.blogspot.com/2009/02/sciencesim.html" target="_blank">Daneel Ariantho</a></em></p>
<p><strong>Tish:</strong> How does this work on ScienceSim fit into a wider dialogue on linked data? Where people meet people meet data, and where data meets data?</p>
<p><em><strong>Mic:</strong> Yeah&#8230; that&#8217;s hard, by the way. Open integration of data (and more interestingly the functions on data) is very hard if it comes from multiple, independent sources.</em></p>
<p><em>That&#8217;s the people part. For example, if Crista can build a model of the UCI campus, somebody else builds an accurate model of several cars, and another expert provides the simulation that computes the pollution generated by those cars in that environment&#8230; it&#8217;s bringing people together to solve real problems, no matter how far apart they are physically.</em></p>
<p><strong>Tish:</strong> You mention three different simulations here. Could you explain why it is difficult to integrate data from multiple sources?</p>
<p><em><strong>Mic:</strong> Integrating data from multiple sources has always been a problem of understanding &amp; interpreting both the syntax &amp; semantics of the data. Even relatively simple things like multiple date formats require explicit translation. More complex formats, like the many formats in which data is represented for urban planning, are barely computable independently, let alone in conjunction with data from other sources (each with its own representation for data). It&#8217;s often the expertise &amp; the collaboration of bringing people (and their bag of tools) together that solves these problems.</em></p>
<p><strong>Tish:</strong> And in this case the bag of tools is high performance modeling?</p>
<p><em><strong>Mic:</strong> High performance modeling, rich visualizations, and data. It&#8217;s the three that matter&#8230; data, function, and interface.</em></p>
<p><strong>Tish:</strong> Some people have a very hard time wrapping their head around the fact that anything that seems related to Second Life can do this. Can you explain more about the difference between SL and OpenSim?</p>
<p><em><strong>Mic:</strong> OpenSim potentially improves data &amp; function because it can be extended through region modules. Region modules hook directly into the simulator to provide additional functionality. For example, a region module could be implemented to drive the behavior of objects in a virtual world based on a protein folding model.</em></p>
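<p>Mic&#8217;s region-module idea is essentially a plugin hook: the simulator calls into registered modules, and a module can map an external model&#8217;s state onto in-world objects. A schematic sketch of that pattern (in Python for brevity; OpenSim&#8217;s real region modules are C# classes, and every name below is hypothetical, not OpenSim&#8217;s actual API):</p>

```python
# Schematic of the region-module pattern: the simulator ticks
# registered modules each frame, and a module pushes external
# simulation state (e.g. a protein-folding model) onto scene objects.
# All names are hypothetical, not OpenSim's actual C# API.

class Scene:
    def __init__(self):
        self.objects = {}    # object id -> position tuple
        self.modules = []

    def add_module(self, module):
        module.initialise(self)
        self.modules.append(module)

    def tick(self, dt):
        # The simulator's frame loop calls every registered module.
        for m in self.modules:
            m.on_frame(dt)

class ProteinFoldingModule:
    """Drives scene objects from an external model's state."""
    def __init__(self, model):
        self.model = model   # e.g. {atom id: (x, y, z)}
        self.scene = None

    def initialise(self, scene):
        self.scene = scene
        for atom_id in self.model:
            scene.objects[atom_id] = (0.0, 0.0, 0.0)

    def on_frame(self, dt):
        # Pull the latest positions from the external model and
        # push them into the in-world objects.
        for atom_id, pos in self.model.items():
            self.scene.objects[atom_id] = pos
```

<p>The point of the pattern is that the &#8220;data &amp; function&#8221; live outside the world; the module is just the bridge that keeps the in-world representation in sync each frame.</p>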
<p><em>We need to work on additional viewer capabilities to address the user interface limitations.</em></p>
<p><strong>Tish:</strong> Yes, Rob Smart&#8217;s (IBM) recent data integrations with OpenSim (<a href="http://robsmart.co.uk/2009/01/22/visualizing-live-shipping-data-in-opensim-isle-of-wight-ferries/" target="_blank">see here</a>) are impressive. Re viewers, one of the biggest objections to virtual worlds is the mouse-pushing and PC-tied interface.</p>
<p><em><strong>Mic:</strong> There are great opportunities for improving the interface.</em></p>
<p><strong>Tish:</strong> Yes, I really like where Andy Piper&#8217;s experiments with haptic interfaces for OpenSim lead &#8211; <a href="http://andypiper.wordpress.com/2009/02/06/haptic-user-interfaces/" target="_blank">see Haptic Fantastic</a>! And I think that we will have cyberspace ubiquitous in our environment, not just stuck on a PC screen, sooner than we think.</p>
<p><em><strong>Mic:</strong> Mic&#8217;s opinion (not Intel&#8217;s): until we get souped-up sunglasses with HD screens embedded (or writing directly onto the eye) there will always be a role for the PC/console/TV. But it isn&#8217;t about the device&#8230; it&#8217;s about the services projected through the device&#8230; sometimes you&#8217;ll want a very rich experience&#8230; sometimes you&#8217;ll want an experience NOW, wherever you are.</em></p>
<p><strong>Tish:</strong> I think people are only just realizing that VWs will be a now and wherever you are experience very soon.</p>
<p><em><strong>Mic:</strong> That&#8217;s the critical observation: the virtual world is not an application you run&#8230; it&#8217;s a &#8220;place&#8221;&#8230; and you interact with it where you are, or maybe interact through it. Speaking for Intel&#8230; it is the spectrum of experiences that is critical to support.</em></p>
<h3>Interview with Wilfred Pinfold</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gustav_h.jpg"><img class="alignnone size-full wp-image-2860" title="gustav_h" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2009/02/gustav_h.jpg" alt="gustav_h" width="416" height="200" /></a></p>
<p><em>Picture from National Science Foundation &#8211; <a href="http://www.nsf.gov/news/news_summ.jsp?cntn_id=112166" target="_blank">&#8220;Climate Computer Modeling Heats Up.&#8221;</a></em></p>
<p><strong>Tish Shute:</strong> I know your day job for Intel is in high performance computing. Could you explain to me a little bit more about what you are working on in this regard &#8211; a mini state of play for high performance computing from your perspective?</p>
<p><em><strong>Wilfred Pinfold:</strong> My title is Director, Extreme Scale Programs. This program drives a research agenda that will put in place the technologies required to make Exa (10^18) scale systems by 2015. The current generation of high performance computers is Peta (10^15) scale, so this is a 1000x increase in performance, and this increase will require significant improvements in power efficiency, reliability, and scalability, and new techniques for dealing with locality and parallelism.</em></p>
<p><strong>Tish:</strong> The nirvana in terms of linking supercomputers to the collaborative spaces of immersive virtual worlds is to be able to create visualizations using real time data from supercomputers in collaborative VW environments, and ultimately for researchers to be able to collaborate and steer their simulations from their visualizations. Where are we at now in terms of scientific data visualization in VWs? And what are the current obstacles to using realtime data from supercomputers?</p>
<p><em><strong>Wilf: </strong>Being able to steer a simulation from a visualization requires both a visualization interface that allows interaction and a simulation that operates at a speed that is responsive in interactive timeframes. For example, a weather model that predicts the path of a hurricane would need to operate at something close to 1000x real time. This would run through a day in ~1.5 minutes, allowing an operator to run the simulation over several days multiple times with different parameters in a single sitting to understand the likelihood of certain outcomes.</em></p>
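<p>Wilf&#8217;s figure is easy to check: at 1000x real time, 24 hours of simulated weather take 24 &#215; 60 / 1000 = 1.44 minutes of wall-clock time, which is his &#8220;~1.5 minutes.&#8221; A quick sketch of the arithmetic (the 5-day forecast swept over 8 parameter sets is my own illustrative assumption, not a number from the interview):</p>

```python
# Sanity-check the speedup arithmetic in Wilf's hurricane example.
SPEEDUP = 1000                        # simulation runs 1000x real time

day_minutes = 24 * 60                 # minutes of weather in one day
wall_minutes = day_minutes / SPEEDUP  # wall-clock time per simulated day
print(wall_minutes)                   # 1.44 minutes, i.e. "~1.5 minutes"

# A "single sitting": sweep a 5-day forecast over, say, 8 parameter
# sets (illustrative assumption) -- still under an hour of wall time.
sitting = 5 * day_minutes / SPEEDUP * 8
print(sitting)
```

<p>This is why the 1000x threshold matters: anything much slower pushes a multi-run parameter sweep out of a single working session.</p>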
<p><strong>Tish:</strong> Do you see a networked online collaborative virtual world being capable of serving as a visualization interface that allows meaningful interaction with the hurricane scenario you describe in the near future (next 6 to 18 months)?</p>
<p><em><strong>Wilf: </strong>I was using the hurricane example to explain the usage model, not an imminent capability. Hurricane simulation: accurate hurricane simulations require multiscale models able to resolve the global forces working on the storm as well as the microforces that define precipitation. We can build useful weather models today that run faster than real time (anything slower is not useful for prediction) but we are a long way from the ideal.<br />
Visualization: There are excellent visualizations of weather systems, but I have not yet seen a virtual world that can track a simulation and allow the scientist or team of scientists to see what is going on at the macro scale and also zoom in to see precipitation conditions. Today&#8217;s supercomputers are much better at this than they were a few years ago, but they are a long way from ideal.</em></p>
<p><strong>Tish:</strong> Open source virtual world technologies are pretty diverse in their approaches; Croquet, Sun&#8217;s Wonderland, and OpenSim are quite different and have different strengths and weaknesses. As you have become more familiar with OpenSim, what have you found about the technology that particularly lends itself to this project, ScienceSim? (Mic mentioned Crista&#8217;s hypergrid code, for example; modularity is another feature often cited.)</p>
<p><em><strong>Wilf: </strong>We have found OpenSim&#8217;s client-server model is well suited to the visualization model, and the ability to put the server next to the supercomputer producing the visualization data is critical. We are, however, very interested in other environments and encourage papers, demonstrations, and research on any of these platforms at the conference.</em></p>
<h3>Interview with John A. Hengeveld</h3>
<p><strong>Tish Shute:</strong> OpenSim&#8217;s dependence on Second Life based viewers is sometimes cited as a limitation, and sometimes as a strength. What are your views on this? What would a strong open viewer project directed at science applications bring to the picture?</p>
<p><em><strong>John Hengeveld:</strong> There may be more than one strong open viewer project required for OpenSim-compatible experiences. The strength of the Hippo viewer, for example, is availability, and its weakness is the size of the client. We would love a ubiquitous client that runs on all platforms, but each hardware platform brings tradeoffs and restrictions of its own. Today, probably all of the folks innovating in the space can deal with the size of a very fat rich client app&#8230; they have big computers anyway. But as we get into more 3D entertainment and augmented reality applications&#8230; virtual malls, collaboration apps, etc.&#8230; there is a great deal of room to optimize for the specific experience. Balancing visual experience with the bandwidth and compute performance available, tying into standard browsers, etc.&#8230; people have done some of this work, and I think all of it adds to the usefulness of these worlds.</em></p>
<p><strong>Tish:</strong> Integrating high-end game engines and OpenSim opens up new possibilities. But licensing issues have been an obstacle. Could a project like ScienceSim get a non-commercial license on a high-end game engine? What would that bring to the picture?</p>
<p><em><strong>John: </strong>Anything is possible. Game engines can give a great deal of design power for high value experiences, but the programming of these experiences must be simplified. Mainstream adoption in the enterprise can&#8217;t be premised on the programming model of studio games&#8230; that&#8217;s a big step to get over, I think. There are very interesting possibilities when we take that step, though. Simulation, training, agents of various types (I just finished watching &#8220;The Matrix&#8221; for like the billionth time&#8230; I think agents are cool&#8230;).</em></p>
<p><strong>Tish:</strong> Where does Larrabee fit into the picture of ScienceSim and next generation virtual worlds?</p>
<p><em><strong>John:</strong> We are all very excited about the Larrabee architecture and its application to workloads like next generation virtual worlds, both in the client, delivering immersive reality, and someday potentially in a distributed architecture simulating and producing these worlds. For Intel, CVC is an all play. Atom will be used in strong mobile clients. Core will be used in enterprise PCs, laptops, and desktops. Xeon will be simulating these environments and handling the data communication. And whatever we brand Larrabee&#8230; will be enabling compelling visual experiences. Oh, and our software products (Havok, tools, and others) will be building blocks in knitting all this together. Larrabee is a part, but there are a lot of other pieces in our vision&#8230;</em></p>
<p><strong>Tish:</strong> If the kind of rapid data movement that scientific visualization needs is achieved in virtual worlds, this will be quite a game changer for business applications of VWs too. Also, it will blur the boundaries between what we call virtual worlds and mirror worlds. It seems to me this kind of rapid data movement is a vital step towards what Mic described to me as Intel&#8217;s vision of CVC: &#8220;Connected Visual Computing is the union of three application domains: MMOG, metaverse, and paraverse (or augmented reality).&#8221; It almost seems to me that if you achieve your goals for ScienceSim you will change how we think about virtual worlds in general. What do you think?</p>
<p><em><strong>John:</strong> I certainly hope so. Part of our goal is to stimulate innovation in the technology and usage models that will enable broad mainstream adoption of CVC-based applications (what we categorize as immersive connected experiences). By tackling the scientific visualization problem, we hope to find the key technology barriers and encourage the ecosystem to solve them.</em></p>
<p><strong>Tish: </strong>To me virtual worlds and augmented reality should be complementary and connected experiences. How do you see this connection evolving?</p>
<p><em><strong>John:</strong> We certainly see them as related. In the long term, there are many common building blocks, but they aren&#8217;t united per se. It&#8217;s about the user experience: in some usages these two are almost identical&#8230; in some, they don&#8217;t look or feel at all alike &#8211; the viewer is distinct by a lot. Our approach is to enable building blocks from which people can quickly build out usages that are robust.</em></p>
<p><strong>Tish: </strong>What is Intel&#8217;s vision for ubiquitous mobile computing and an internet of objects? How can high performance computing be an enabler for this vision?</p>
<p><em><strong>John: </strong>Mobile computing is a central part of our life, culture and community in economically enabled economies. It feeds the data of our decisions; it connects us to entertainment; it is the access point to our soapboxes, pulpits, economy and families. This creates a massive increase in data, and a massive increase in interactions, transactions and visualizations. While many HPC applications will be behind the scenes (finance, health, energy, visual analytics and others), HPC will emerge as part of a scale solution to serving some of this increase &#8211; particularly where interactions and visualizations are complex or compelling, or where scale enables the usage per se. I talked about my love of agents earlier, and some of that comes in here: compute working behind the scenes to help manage the data complexity, and to manage some of the base interactions between ourselves and technology. The other thing we talk about internally is the &#8220;Hannah Montana usage,&#8221; where millions of people use their mobile devices to access and participate (using the sensors in the device) with an interactive live concert. When Miley hears the applause of a virtual interactive audience&#8230; and can scream back at them&#8230; we&#8217;re there. Access to ubiquitous compute will be mobile, and interactive experiences will be complex, and HPC can help make that real. Watch out for the mental trap that HPC is always high-end super compute clusters, though &#8211; the &#8220;mainstream HPC&#8221; of smaller clusters, high thread counts, etc. will play a key part in all of this as well.</em></p>
<p>Interesting that John ended on this point, as this just came in from <a href="http://blog.wired.com/gadgets/2009/02/intel-fights-re.html" target="_blank">Wired</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2009/02/11/people-meet-people-meet-big-data-sciencesim-explores-collaborative-high-performance-computing/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Hacking the World in 2009: Google Street View, &#8220;Smart Stuff,&#8221; and Wikiculture.</title>
		<link>https://www.ugotrade.com/2008/12/29/hacking-the-world-in-2009-google-street-view-smart-stuff-and-wikiculture/</link>
		<comments>https://www.ugotrade.com/2008/12/29/hacking-the-world-in-2009-google-street-view-smart-stuff-and-wikiculture/#comments</comments>
		<pubDate>Mon, 29 Dec 2008 19:20:11 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[3D internet]]></category>
		<category><![CDATA[Ambient Devices]]></category>
		<category><![CDATA[Ambient Displays]]></category>
		<category><![CDATA[architecture of participation]]></category>
		<category><![CDATA[Architecture Working Group]]></category>
		<category><![CDATA[Carbon Footprint Reduction]]></category>
		<category><![CDATA[culture of participation]]></category>
		<category><![CDATA[CurrentCost]]></category>
		<category><![CDATA[Ecological Intelligence]]></category>
		<category><![CDATA[Energy Awareness]]></category>
		<category><![CDATA[Energy Saving]]></category>
		<category><![CDATA[home automation]]></category>
		<category><![CDATA[home energy monitoring]]></category>
		<category><![CDATA[home energy monitors]]></category>
		<category><![CDATA[HomeCamp]]></category>
		<category><![CDATA[Instrumenting the World]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[Linden Lab]]></category>
		<category><![CDATA[message brokers and sensors]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[mirror worlds]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MQTT and RSMB]]></category>
		<category><![CDATA[Open Grid]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open protocols for virtual worlds]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[Open Source Virtual Worlds]]></category>
		<category><![CDATA[open standards for virtual worlds]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Paticipatory Culture]]></category>
		<category><![CDATA[smart appliances]]></category>
		<category><![CDATA[Smart Devices]]></category>
		<category><![CDATA[Smart Planet]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[sustainable living]]></category>
		<category><![CDATA[sustainable mobility]]></category>
		<category><![CDATA[virtual communities]]></category>
		<category><![CDATA[Virtual HomeCamp]]></category>
		<category><![CDATA[Virtual Meters]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web Meets World]]></category>
		<category><![CDATA[World 2.0]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=2463</guid>
<description><![CDATA[Google Street View Hacking This Google Street View Hack (via @timoreilly) will get my nomination for a Hacking the World Award this year, if there is such an award. A parade (the screenshot opening this post), a marathon, a mad scientists&#8217; laboratory, a sword fight, and more (see The Infonaut Blog) were staged all along the route [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/sampsoniawaypost.jpg"><img class="alignnone size-full wp-image-2475" title="sampsoniawaypost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/sampsoniawaypost.jpg" alt="" width="450" height="274" /></a></p>
<h3>Google Street View Hacking</h3>
<p><a href="http://www.wikio.com/video/576734" target="_blank">This Google Street View Hack</a> (via<a href="http://twitter.com/timoreilly" target="_blank"> @timoreilly</a>) will get my nomination for a Hacking the World Award this year, if there is such an award.</p>
<p><a href="http://maps.google.com/maps?cbp=1,262.96388206761037,,0,16.58444579096093&amp;cbll=40.456878,-80.01196&amp;layer=c&amp;ie=UTF8&amp;ll=40.458499,-80.009319&amp;spn=0.00569,0.012918&amp;z=17&amp;panoid=zHdES6mj-vBrH2nF-K9ROQ" target="_blank">A parade</a> (the screenshot opening this post), <a href="http://maps.google.com/maps?cbp=1,260.87215088682916,,0,8.64102186979147&amp;cbll=40.457046,-80.011085&amp;layer=c&amp;ie=UTF8&amp;ll=40.458671,-80.00845&amp;spn=0.00569,0.012918&amp;z=17&amp;panoid=81ALq0NpV6uyLEF5S5ENhw" target="_blank">a marathon</a>, <a href="http://maps.google.com/maps?cbp=1,160.10914016686365,,0,33.949139944215034&amp;cbll=40.456949,-80.011593&amp;layer=c&amp;ie=UTF8&amp;ll=40.458573,-80.008954&amp;spn=0.00569,0.012918&amp;z=17&amp;panoid=C4I-QLkZJoT1SHXslK5f7Q" target="_blank">a mad scientists&#8217; laboratory</a>, <a href="http://maps.google.com/maps?cbp=1,9.995045624107206,,0,10.698194796922357&amp;cbll=40.457636,-80.00767&amp;layer=c&amp;ie=UTF8&amp;ll=40.459103,-80.006486&amp;spn=0.00569,0.012918&amp;z=17&amp;panoid=W_ox0QPcWyPqWGNPiK91Nw" target="_blank">a sword fight</a>, and more (see <a href="http://www.infonaut.ca/blog/?p=290" target="_blank">The Infonaut Blog</a>) were staged all along the route of the Google Street View truck by artists Robin Hewlett and Ben Kinsley, working in conjunction with the local community and Google Street View.</p>
<p>The Google Street View Hack suggests a myriad of possibilities for anyone with their eye on the prize for a great world hack for 2009. In my mind&#8217;s eye, I imagine the Google Street View truck&#8217;s trek across the planet triggering local environmental street action carnivals wherever it goes.</p>
<p>Local energy conservationists,<a href="http://www.nytimes.com/2008/12/27/world/europe/27house.html?_r=1&amp;pagewanted=all" target="_blank"> &#8220;passive house&#8221; architects</a>, and retrofitters could turn the arrival of Google Street View into an occasion to create projects for a sustainable future &#8211; a traveling StreetCamp (see <a href="http://www.ugotrade.com/2008/12/15/smart-planetinterview-with-andy-stanford-clark/" target="_blank">my post on HomeCamp &#8217;08 here</a>). As Google Street View intends, surely, to go everywhere, this would be a global hack for sustainable living that crossed the bounds of the physical and the virtual. And the vast public record of Google Street View would become a generative engine and global resource for sustainable living.</p>
<h3>Working together on the noble aim of sustainable living</h3>
<p>&#8211; this is my (and many other people&#8217;s) big theme for 2009.</p>
<p>A Hacking the World award should also go to <a href="http://www.pachube.com/">Pachube</a> &#8211; &#8220;patching the planet&#8221; &#8211; for demonstrating that instrumenting the world is no longer merely a sci-fi fantasy. By facilitating &#8220;interaction between remote environments, both physical and virtual,&#8221; Pachube demonstrates (see <a href="http://community.pachube.com/?q=node/1" target="_blank">diagram here</a>) how we have only just begun to dip our toes into the many new opportunities we have to work together to save energy, rethink our culture of consumption, and reboot our failing economy under a new sustainable operating system.</p>
<p>Energy awareness, unlike entertainment and games with their glut of information, suffers from a dearth of data. We really have very little idea of what we are consuming and the waste we are producing. So more Hacking the World Awards should go to projects like <a href="http://www.amee.com/" target="_blank">AMEE</a> &#8211; creating the world&#8217;s energy meter &#8211; and <a href="http://www.wattzon.com/" target="_blank">Wattzon</a> &#8211; your personal energy meter &#8211; for giving us new ways to understand and work with energy data.</p>
<p>Many people and organizations, given the information, will change their behaviours. But the cultural changes necessary for sustainable living are deep and old habits die hard (see <a href="http://www.nytimes.com/2008/12/27/opinion/27sat1.html" target="_blank">this disturbing report</a> on the recent return to SUV buying in November as soon as gas prices fell!).</p>
<h3>A Small Community of Volunteers Can Bring Change on a Global Scale</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/homecampthethrongpost.jpg"><img class="alignnone size-full wp-image-2535" title="homecampthethrongpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/homecampthethrongpost.jpg" alt="" width="450" height="153" /></a></p>
<p>Picture above by <a href="http://benjaminellis.co.uk/" target="_blank">Benjamin Ellis</a>, &#8220;HomeCamp &#8211; The Throng,&#8221; from his <a href="http://www.flickr.com/photos/tags/homecamp08/" target="_blank">Flickr</a><a href="http://www.flickr.com/search/?q=homecamp&amp;w=29034542%40N00" target="_blank"> stream.</a></p>
<p>One of my favorite &#8220;instrumenting the world&#8221; projects to date, and another top contender for a Hacking the World Award, is <span class="entry-content"><a id="h4a0" title="HomeCamp '08" href="http://homecamp.pbwiki.com/homecamp08" target="_blank">HomeCamp &#8217;08</a></span> (see my <a href="http://www.ugotrade.com/2008/12/15/smart-planetinterview-with-andy-stanford-clark/" target="_blank">previous post</a>). HomeCamp brings together a community of creators and enthusiasts of &#8220;smart stuff,&#8221; creating <a href="http://meta.wikimedia.org/wiki/Wikiculture" target="_blank">a wikiculture</a> for the noble cause of sustainable living.</p>
<p>The key to whether &#8220;instrumenting the world&#8221; empowers people and changes our lives for the better will be the capacity our systems of instrumentation have for what Jonathan Zittrain, in &#8220;<a href="http://futureoftheinternet.org/" target="_blank">The Future of the Internet: And How to Stop It</a>,&#8221; defines as generativity, i.e. &#8220;the system&#8217;s capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences&#8221; (Zittrain, 2008).</p>
<p>Generativity is the &#8220;secret sauce&#8221; that makes the difference between, for example, <a href="http://www.wikipedia.org/" target="_blank">Wikipedia</a> and its all but forgotten predecessor &#8211; the &#8220;written by experts&#8221; <a href="http://en.wikipedia.org/wiki/Nupedia" target="_blank">Nupedia</a>.</p>
<p>Jonathan Zittrain writes:</p>
<p><em><strong>Wikipedia stands for more than the ability of people to craft their own knowledge and culture. It stands for the idea that people of diverse backgrounds can work together on a common project with, whatever its other weaknesses, a noble aim &#8211; bringing such knowledge to the world. (p. 147)</strong></em></p>
<p>At <a href="http://en.oreilly.com/web2008/public/content/home" target="_blank">Web 2.0 Summit</a>, Jonathan Hochman (<em><strong><a href="http://en.oreilly.com/web2008/public/schedule/detail/6952" target="_blank">Known as </a><a href="http://en.wikipedia.org/wiki/User:Jehochman">Jehochman</a> on Wikipedia</strong></em>), shared with me his insider perspective as a Wikipedia administrator. The <a href="http://www.ugotrade.com/2008/12/26/wikipedia-houdini-google-street-view-instrumenting-sustainable-living#link_1">full interview</a> with Jonathan is later in this post.</p>
<p>Jonathan comments on the role of wikiculture in sustainable living:</p>
<p><em><strong>&#8220;Sustainable Living requires everything to become more efficient. Incentives need to line up with conservation priorities. This requires a radical change to the way we govern ourselves. Command economies, whether commanded by politicians or capital, lead to huge inefficiencies.&#8221;</strong></em></p>
<p>And surely, if we have learned anything in 2008, we have learned that very bad things happen when the complex systems of modern life are left in the hands of a few people motivated solely by the urge to make profit.</p>
<h3>Hacking Design and Planning Processes for Real Estate and Transportation with Virtual Worlds</h3>
<p><object width="400" height="302" data="http://vimeo.com/moogaloop.swf?clip_id=2326434&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" type="application/x-shockwave-flash"><param name="allowfullscreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="src" value="http://vimeo.com/moogaloop.swf?clip_id=2326434&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" /></object></p>
<p>This great machinima by Azwaldo Vilotta shows the progress so far on the <a href="http://studiowikitecture.wordpress.com/2008/12/12/now-is-an-ideal-time-to-join-wikitecture-40/" target="_blank">Wikitecture 4.0 project</a>, &#8216;Re-Inventing the Virtual Classroom&#8217; for the University of Alabama.</p>
<p>Though still a niche market, virtual worlds are growing at a steady pace. As I mentioned in my previous post, energy-hungry avatars themselves will be a target for optimization in 2009. But as my personal power usage breakdown from <a href="http://www.wattzon.com/" target="_blank">Wattzon</a> shows, cutting down the amount of flying I do in 2009 would be far more effective in reducing my carbon footprint than deciding not to log into virtual worlds!</p>
<p>Note: Read Write Web&#8217;s recent post, &#8220;<a href="http://www.readwriteweb.com/archives/enterprise_virtual_worlds.php" target="_blank">Report: Enterprise Virtual Worlds More Effective Than Web Conferencing</a>.&#8221; Also check out <a href="http://www.projectchainsaw.com/" target="_blank">Web.Alive</a> and <a href="http://immersivespaces.com/" target="_blank">Immersive WorkSpaces</a>, and Dusan Writer&#8217;s post, &#8220;<a href="http://dusanwriter.com/index.php/2008/12/20/thinkbalm-the-immersive-internet-and-collaborative-culture/" target="_blank">ThinkBalm, The Immersive Internet and Collaborative Culture</a>.&#8221;</p>
<p>My friend Melanie Swan points out in her Top Ten Computing Trends for 2009 that virtual worlds not only have the power of the 3 Cs (communication, collaboration and commerce) but are fast expanding into <a href="http://www.3pointd.com/20070406/rapid-architectural-prototyping-in-second-life/">rapid prototyping</a>, <a href="http://your2ndplace.com/node/926">simulation</a> and <a href="http://sldataviz.pbwiki.com/">data visualization</a>.</p>
<p>My Hacking the World, 2008, Awards for Virtual World innovation would go to three potentially world changing projects for sustainable living:</p>
<p>1) <a href="http://studiowikitecture.wordpress.com/" target="_blank">Studio Wikitecture</a>, (see <a href="http://studiowikitecture.wordpress.com/" target="_blank">&#8220;Reinventing the Virtual Classroom&#8221;</a> for The University of Alabama).</p>
<p>2) Oliver Goh&#8217;s work on &#8220;<a href="http://www.shaspa.com/cms/website.php" target="_blank">The Path to Sustainable Real Estate.&#8221;</a></p>
<p>3) Encitra, a company recently co-founded by <a href="http://www.ics.uci.edu/informatics/research/research_highlight_view.php?id=52" target="_blank">Crista Lopes</a> and <a href="http://www.podcar.org/uppsalaconference/christerlindstrom.htm" target="_blank">Christer Lindstrom</a>, focused on improving urban planning processes, starting with transportation, using virtual worlds (<a href="http://www.ugotrade.com/2008/11/25/web-meets-world-participatory-culture-and-sustainable-living/" target="_blank">see my previous post here for more</a>).</p>
<p>The latter two projects are being developed in <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a> &#8211; the open source project that should also get a Hacking The World Award for creating an open, modular architecture for virtual worlds that is unleashing all these new possibilities for integrating physical and virtual worlds.</p>
<p>The 2008 code contributions to OpenSim of special note for world hacking are Crista Lopes&#8217;<a href="http://opensimulator.org/wiki/Hypergrid"> OpenSim Hypergrid</a> &#8211; see Justin CC&#8217;s blog for full details in <a href="http://justincc.wordpress.com/2008/12/19/what-is-the-hypergrid/" target="_blank">&#8220;What is the hypergrid?&#8221;</a> &#8211; and David Levine&#8217;s (IBM) work, in collaboration with Linden Lab (see<a href="http://wiki.secondlife.com/wiki/Architecture_Working_Group" target="_blank"> Architecture Working Group</a>), on interoperability (see <a href="http://www.ugotrade.com/2008/07/" target="_blank">my earlier post here</a>).</p>
<p>Both these projects expand the frontiers of interoperability for virtual worlds, although they &#8220;slice the problem from different ends,&#8221; as David Levine put it. The emphasis in the LL/IBM approach is on security, so assets are not moving yet. In Crista&#8217;s solution you can take assets with you, but the security issues are not addressed yet. This work is vital to expanding the usefulness of virtual worlds, and both projects should get Hacking the World Awards IMHO.</p>
<p>I asked <a href="http://archsl.wordpress.com/" target="_blank">Jon Brouchoud </a>(full interview upcoming) what he thought were Studio Wikitecture&#8217;s most important successes to date:</p>
<p><strong><em>&#8220;I think the greatest success has been in proving, on some level, that everyone has important knowledge that can inform and improve the design of a building, not just architects. If we can continue building on that success, I hope we can eventually start to hack the traditional design process, and find ways to harness the wealth of knowledge held by the general public, instead of ignoring or avoiding it, as is most often the case.&#8221;</em></strong></p>
<h3>Harnessing the &#8220;Smart Stuff&#8221; to the Noble Cause of Sustainable Living</h3>
<p>Robert Scoble&#8217;s <a href="http://scobleizer.com/2008/12/27/the-interview-of-the-year-tim-oreilly/" target="_blank">The Interview of the Year: Tim O&#8217;Reilly</a> is not to be missed. Tim O&#8217;Reilly discusses the key trends for 2009 that are bubbling up at O&#8217;Reilly Media. And, yes, Tim O&#8217;Reilly, as the guru of Hacking the World, gets the &#8220;Distinguished Thinker &#8211; Hacking The World Award of 2008!&#8221;</p>
<p>Tim O&#8217;Reilly&#8217;s trend list includes:</p>
<p>1) Big data &#8211; vast peer-produced databases in the cloud, accessible by mobile devices</p>
<p>2) &#8220;Smart stuff&#8221; &#8211; sensors, robotics, and hacking on stuff for fun and not for profit</p>
<p>3) Green Tech</p>
<p>4) Advances in Biological/Life Sciences.</p>
<p>And, in Robert Scoble&#8217;s interview, there is a nice titbit of history regarding his attendance of early <a href="http://en.wikipedia.org/wiki/Foo_Camp" target="_blank">Foo Camps</a>. Foo Camp is the wiki of O&#8217;Reilly conferences and an ancestor in the lineage of my favorite Hacking the World event of 2008, <span class="entry-content"><a id="h4a0" title="HomeCamp '08" href="http://homecamp.pbwiki.com/homecamp08" target="_blank">HomeCamp &#8217;08</a></span>.</p>
<p>But what will be the &#8220;secret sauce&#8221; for these big ideas &#8211; the generative engines that harness these vast peer-produced databases, and all the creative &#8220;smart stuff&#8221; hackers across the globe are creating, to the noble cause of sustainable living? What will motivate the mass adoption of Green Tech and sustainable living?</p>
<p>What can Wikipedia teach us about how generative systems and bottom up approaches can change the world?</p>
<p>Jimmy Wales (interview coming soon!) writes in his recent <a href="http://wikimediafoundation.org/wiki/Donate/Letter/en?utm_source=2008_jimmy_letter_r&amp;utm_medium=sitenotice&amp;utm_campaign=fundraiser2008#appeal" target="_blank">personal appeal</a> for support for Wikipedia:</p>
<p><em><strong>At its core, Wikipedia is driven by a global community of more than 150,000 volunteers &#8211; all dedicated to sharing knowledge freely. Over almost eight years, these volunteers have contributed more than 11 million articles in 265 languages. More than 275 million people come to our website every month to access information, free of charge and free of advertising.</strong></em></p>
<p>To answer questions on how to create a successful wikiculture for sustainable living, an insider&#8217;s view of Wikipedia may be a good place to start.</p>
<h3>Interview With Jonathan Hochman on Wikipedia.</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/gammapostjon.jpg"><img class="alignnone size-full wp-image-2477" title="gammapostjon" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/gammapostjon.jpg" alt="" width="223" height="158" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/jonathanwikikpost.jpg"><img class="alignnone size-full wp-image-2473" title="jonathanwikikpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/12/jonathanwikikpost.jpg" alt="" width="224" height="158" /></a></p>
<p>The picture on the left is from the Wikipedia article <a href="http://en.wikipedia.org/wiki/Gamma-ray_burst" target="_blank">Gamma-ray Burst</a>, which Jonathan Hochman is currently working on. It is a drawing of a massive <a title="Star" href="http://en.wikipedia.org/wiki/Star">star</a> collapsing to form a <a title="Black hole" href="http://en.wikipedia.org/wiki/Black_hole">black hole</a>. Energy released as jets along the axis of rotation forms a gamma-ray burst. <em>Credit: Nicolle Rager Fuller/NSF</em></p>
<p>The picture on the right, Jonathan at Web 2.0 Summit, was taken by me. Jonathan was part of the <em><a href="http://en.oreilly.com/web2008/public/schedule/detail/6952" target="_blank">Defending Web 2.0 from Virtual Blight</a></em> panel.</p>
<p><em><strong><a href="http://en.oreilly.com/web2008/public/schedule/detail/6952" target="_blank">Known as </a><a href="http://en.wikipedia.org/wiki/User:Jehochman">Jehochman</a> on Wikipedia, he serves as an administrator and as a leader in addressing online harassment, disruption and sock puppetry. He is also the founder of <a href="http://www.hochmanconsultants.com/">Hochman Consultants</a>, an Internet marketing consultancy, and the director of <a href="http://www.semne.org/">Search Engine Marketing New England</a>, a regional conference series.</strong></em></p>
<p><strong>Tish:</strong> Second Life and Wikipedia are the two great experiments in collaborative co-creation &#8211; what do they have to teach us about the future of the internet?</p>
<p><strong>Jonathan:</strong> Yes, Wikipedia and Second Life are key social spaces. Some people have been seeing Second Life as the beginning of Web 3.0 &#8211; a wrap-around environment where you can almost experience another life. Wikipedia is sort of another example of this.</p>
<p>All the problems that exist in the real world are mirrored right into that little universe. For example, the Armenians and the Turks are at each other&#8217;s throats, the Japanese and the Koreans are going at it, the Palestinians and the Israelis, and the &#8220;Troubles&#8221;&#8230; all the conflicts are imported into Wikipedia. People are fighting over the content of these articles. They want to have it their way because these articles are ranked first in Google and they have a big impact on public opinion.</p>
<p>There was a huge fight on the waterboarding article a while back. Some guys from Little Green Footballs &#8211; a very conservative, reactionary type of media &#8211; were trying to change the article to say that waterboarding might not be torture, that it is probably not so bad. Crazy stuff. They were trying to water it down. And it is very clear, from every source out there, that waterboarding is torture. We did a study and there are 115 sources that say waterboarding is torture. You simulate drowning &#8211; you simulate killing someone &#8211; that is a violation of the Geneva Convention and everything else. People were fighting, fighting, fighting!</p>
<p>One of the things I did was to try to clear out people who were being disruptive. We actually had to go to arbitration over that article. It is like the supreme court of Wikipedia. There is a panel of 15 arbitrators. They hear the case. There is evidence, arguments and decisions. It is really like a simulated lawsuit &#8211; you get all the experience of a simulated lawsuit with the real threat that you could be banned. If they don&#8217;t like what you are doing they can actually ban you or restrict you from topics.</p>
<p>So it is really fascinating how this social space, Wikipedia, becomes a very real platform &#8211; though it is in a virtual world &#8211; for real-world disputes. Most disputes are over the definition of things. If you have a lawsuit, most disputes are about how things are defined. And Wikipedia has become the de facto definition of things in the real world. People want to know what &#8220;The Troubles&#8221; are. If you go to Wikipedia you find out The Troubles are a dispute over Northern Ireland. What the article says has a profound impact on public opinion.</p>
<p><strong>Tish:</strong> So who is on the court of Wikipedia?</p>
<p><strong>Jonathan:</strong> They are volunteers. These people work two or three hours a day to run this court. There are all kinds of projects. There is a WikiProject Spam, which has people who can write computer programs to statistically analyze wiki projects &#8211; not only Wikipedia. They are looking at the links, reporting them, and banning those people who are abusing or gaming the system.</p>
<p><strong>Tish:</strong> You were on the Stopping Virtual Blight Panel at Web 2.0 Summit &#8211; what are the most important things to think about on this topic?</p>
<p><strong>Jonathan:</strong> Yes, we were talking about how to defend the web against virtual blight. The thing I find interesting about Wikipedia is that it is the eighth largest web site, and possibly the second largest web site comprised of user-generated content after YouTube. The problems that exist in Wikipedia are larger and more detailed than on any other site. Whatever problem someone has on their social media site or their Web 2.0 site, these problems already exist in Wikipedia, and the solutions are there and they are transparent. You can actually see the history of what&#8217;s been done.</p>
<p>If there is, for example, a problem on Digg &#8211; some problem with sock puppetry or vote stacking &#8211; it happens, it goes away. You don&#8217;t get full disclosure. With Wikipedia you can actually go in and look at a dispute and watch it unfold. You can watch the arbitration cases that are filed, the arguments, the decisions, the logic, the rationale. You can see the successes and the failures and the different things people have tried to control blight. For example, we tried to resolve a dispute one way and it was a disaster, so we tried something else and that worked.</p>
<p>Wikipedia is a large laboratory for social media &#8211; Wikipedia and the large universe around it of wiki and Wikimedia projects that individuals and enterprises put together, like Commons. Wikimedia Commons is a repository of publicly licensed images that anyone can take and reuse. They have sound and they have video, and all of this stuff is being stitched together now.</p>
<p>So if you go to the article on Obama, you can probably now hear his acceptance speech because that is public domain &#8211; it&#8217;s been stitched into the article. If you go to the article on Richard Nixon &#8211; his resignation speech &#8211; you may even hear his conversation with the astronauts when they landed on the moon. So this becomes a giant repository of all our culture and knowledge. When I design a website, a lot of times I go to Commons to find images I can use for free. I don&#8217;t want to pay for an image I can get for free.</p>
<p><strong>Tish: </strong>And the Commons images get contextualized in Wikipedia too.</p>
<p><strong>Jonathan:</strong> Some of these articles are fascinatingly detailed. If you want a quick summary of Dr. Strangelove, the article is fantastic. It is enjoyable, a pleasure to read. I was reading about S.A. Andree&#8217;s North Pole balloon expedition of 1897. Some guys from Sweden decided to fly a balloon over the North Pole. They managed to get aloft, then they flew over the icepack for 24 hours, then they crashed.</p>
<p>They unloaded their stuff and hiked back across the ice toward the island they had launched from. They ended up being on the ice pack for three months before they finally holed up in an ice cave and starved to death. They weren&#8217;t found until thirty years later! The men had a camera with them, holding the frozen pictures taken thirty years earlier. The film was developed, and those pictures are now on Wikipedia. It is just a fascinating thing!</p>
<p><strong>Tish: </strong>Do you see real-time collaboration beginning to play more of a role in Wikipedia &#8211; whether through virtual worlds or just voice/IM? How could real-time collaboration change the Wikipedia editing process?</p>
<p><strong>Jonathan:</strong> The Presidential candidate articles were being edited very rapidly yesterday. There are certain real-time problems. Some of the more interesting problems are when you get two administrators who &#8220;get into it.&#8221; One administrator says, I am blocking this user, and the other one says, I am unblocking him, and the first one, &#8220;NO, I am blocking him!&#8221; And so on&#8230; And everyone says, &#8220;Stop fighting. You are not allowed to do that!&#8221; And they both get their powers stripped. People do get very heated over the silliest things. Wikipedia does have some mailing lists attached and there are some IRC channels. So there are some real-time elements.</p>
<p><strong>Tish: </strong>What is the role of avatars in Wikipedia?<br />
<br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Jonathan:</strong> In Wikipedia you have a user page and many users are anonymous. They create an avatar, personalize it, and show themselves in the ways they want to through that avatar. In many ways it is a lot like Second Life.</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;">Some users have created second accounts &#8211; sometimes a humorous second account. Bishzilla, for example &#8211; a Swedish lady who is in tremendous command of the English language and has a razor-sharp wit. She has created this secondary account that writes almost in a baby language. Her avatar is a dinosaur that is not very bright and goes around frying people. Bizarre what people do! People may be editing a topic related to an interest they have &#8211; e.g. Pokemon &#8211; that they don&#8217;t want associated with their professional avatar. Or people may be editing a topic about hot political issues. There have actually been some death threats issued to people over stuff they have been putting into the encyclopedia. 
</span><strong><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /></strong><span style="background-color: #ffffff;"><strong>Tish: </strong>So avatars are important in Wikipedia.</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Jonathan:</strong> Absolutely, because people may be going in and editing articles that they may not want their friends and family to know they are editing. One editor may say to another, &#8220;Stop putting stuff in or I will come and kill you!&#8221; Well, then we have to ban them. We have to call the police.</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Tish:</strong> Can you build reputations on multiple avatars?</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Jonathan: </strong>You are allowed to use multiple avatars as long as they don&#8217;t cross paths. You can&#8217;t have two avatars editing in the same area because you would be giving yourself double weight commenting on a discussion. 
</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Tish:</strong> How do you know when this is happening?</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Jonathan:</strong> You can watch the style of a user&#8217;s editing. You have to watch behavior. And if you have enough behavioral evidence suggesting that accounts are controlled by one person, you can go and request a technical check.</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;">There are some users, called Checkusers, who are able to access information from the server logs and check the technical characteristics of these accounts to see if they are using the same IP address.</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Tish:</strong> So if you want to understand avatar interaction on the web it helps to understand Wikipedia. </span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Jonathan:</strong> Yes, it is a fantastic way to understand how avatars work in some respects, and also how to deal with community dynamics. We have some very strong-willed people &#8211; people in their 40s, 50s, and 60s &#8211; who are very successful in business. They have plenty of money and spare time and they are doing this as a hobby. And some of these people can really butt heads. You can have a problem when you have an editor who has been writing fantastic articles but also happens to be rude and chew other people out and tell them to f**k off if they are not behaving. 
What do you do?</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Tish:</strong> Sounds a bit like Second Life!</span><br style="background-color: #ffffff;" /><strong><br style="background-color: #ffffff;" /></strong><span style="background-color: #ffffff;"><strong>Jonathan:</strong> The person is a great contributor to the community but they are telling noobies to f**k off, so you can&#8217;t allow that.</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;">What do you do? Vested contributors are a major problem for some of these sites. They are vested in the community but they start misbehaving. You can&#8217;t block them, because if you block them there is a huge uproar from all their friends and it causes a cataclysm. It requires very careful diplomacy to deal with some of these situations. </span><br style="background-color: #ffffff;" /><strong><br style="background-color: #ffffff;" /></strong><span style="background-color: #ffffff;"><strong>Tish:</strong> How many Wikipedia volunteers are there now?</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Jonathan:</strong> Think of a Venn diagram &#8211; a big circle. In total, about one million different people have contributed. But there are probably about 5,000 active editors who consistently and regularly contribute. And within that kernel there are fifteen hundred people who have administrator access, and probably only eight hundred of them are active. People have a natural life span with the community. People come and typically stay for six months to three years. Usually after that they become bored, disillusioned, or get into a conflict with someone. There is a natural tendency for people to stay for a while and move on. 
A few people stay longer, but the majority will move on at some point. So there are a lot of fresh faces moving in.</span><br style="background-color: #ffffff;" /><strong><br style="background-color: #ffffff;" /></strong><span style="background-color: #ffffff;"><strong>Tish:</strong> What lessons about trust does Wikipedia have to teach new projects like AMEE, which aims to aggregate the world&#8217;s energy data?</span><br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;"><strong>Jonathan:</strong> Well, you have to know who is releasing the data. Who is creating the data? The beauty of Wikipedia is that you have an edit history, so you can see exactly who has done what. So you can judge whether this person is trustworthy or not. That&#8217;s a huge problem on the web today. We don&#8217;t have enough identification information. When you see a web page you don&#8217;t necessarily know when that page was created and by whom, or how many revisions it has had. Sometimes you can glean information by checking it. If you see typos and errors you may decide that that page probably didn&#8217;t receive as much attention as it should have, and probably it is not that good.</span> <br style="background-color: #ffffff;" /><br style="background-color: #ffffff;" /><span style="background-color: #ffffff;">Typos are an interesting thing. People always try to figure out how Google ranks web pages. 
</span><a id="uy3s" style="background-color: #ffffff;" title="Matt Cutts" href="http://www.mattcutts.com/">Matt Cutts</a><span style="background-color: #ffffff;"> was here from Google today, and he was talking about spam. But Matt also did a <a id="e4lo" title="blog post" href="http://www.mattcutts.com/blog/2006-pubcon-in-vegas-getting-there-and-back/">blog post</a> about how he was in an airport once, and how he has a policy: when you are reading a document, as soon as you come to the first error, just stop, because if the author hasn&#8217;t taken the care to make everything correct, you don&#8217;t need to read it. So he was in the airport, there was a sign, he came to a typo and stopped reading it. Somehow he got in trouble for not reading the sign and not having the information. But it is interesting to wonder whether Google is looking for typos, misspellings, and broken links and using those as a signal of quality to rank pages.</span><br style="background-color: #ffffff;" /></p>
<p><strong>Tish:</strong> Aaaagh, typos might bring down your page rank! That certainly is a scary thought for a blogger like me who likes to write impossibly long posts that are hard to check&#8230;</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2008/12/29/hacking-the-world-in-2009-google-street-view-smart-stuff-and-wikiculture/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>Doing Something Useful With Virtual Worlds</title>
		<link>https://www.ugotrade.com/2008/10/28/doing-something-useful-with-virtual-worlds/</link>
		<comments>https://www.ugotrade.com/2008/10/28/doing-something-useful-with-virtual-worlds/#comments</comments>
		<pubDate>Tue, 28 Oct 2008 08:52:37 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[3D internet]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[avatar 2.0]]></category>
		<category><![CDATA[free software]]></category>
		<category><![CDATA[Intel in Virtual Worlds]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[Linden Lab]]></category>
		<category><![CDATA[Metarati]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[Open Grid]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open protocols for virtual worlds]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[open standards for virtual worlds]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Second Life]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[vapor standards]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web 3D]]></category>
		<category><![CDATA[Web3.D]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[collaboration in virtual worlds]]></category>
		<category><![CDATA[connecting-the-physical-world-to-the-digital-world]]></category>
		<category><![CDATA[doing-something-useful-with-the-internet]]></category>
		<category><![CDATA[enterprise virtual worlds]]></category>
		<category><![CDATA[enterprise-applications-for-virtual-worlds]]></category>
		<category><![CDATA[extended-internet]]></category>
		<category><![CDATA[green-it]]></category>
		<category><![CDATA[integrating-virtual-worlds-into-web-2.0]]></category>
		<category><![CDATA[alternative-reality-games]]></category>
		<category><![CDATA[soa]]></category>
		<category><![CDATA[social-computing]]></category>
		<category><![CDATA[Tim O'Reilly]]></category>
		<category><![CDATA[virtual-conferences]]></category>
		<category><![CDATA[virtual-worlds-for-green-conferencing]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=1962</guid>
		<description><![CDATA[I have just got back from attending two conferences in the UK, the Head Conference, and Virtual Worlds London. I was on a mission at both events to ask questions about how Virtual World technology will answer the call Tim O&#8217;Reilly made at the Web 2.0 Expo in New York City to &#8220;create more [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/images/VirtualWorldRoadMapupload.jpg" target="_blank"><img class="alignnone size-full wp-image-1964" title="virtualworldroadmapuploadpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/virtualworldroadmapuploadpost.jpg" alt="" width="311" height="207" /></a><a href="http://www.ugotrade.com/images/BruceDamerupload.jpg" target="_blank"><img class="alignnone size-full wp-image-1963" title="brucedameruploadpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/brucedameruploadpost.jpg" alt="" width="138" height="207" /></a></p>
<p>I have just got back from attending two conferences in the UK, the <a href="http://www.headconference.com/" target="_blank">Head Conference</a>, and <a href="http://www.virtualworldslondon.com/" target="_blank">Virtual Worlds London</a>. I was on a mission at both events to ask questions about how Virtual World technology will answer the call Tim O&#8217;Reilly made at the Web 2.0 Expo in New York City to &#8220;create more value than you extract&#8221; and do something worthy and useful with the internet.</p>
<p>The <a href="http://www.headconference.com/">Head Conference</a> was an ambitious, timely, and much needed creative exploration of the potential for &#8220;green&#8221; conferencing using Adobe Connect Pro, Second Life and <a href="http://www.headconference.com/hubs/">local conference hubs</a> in various cities. For more on the conference organization see <a href="http://www.digital-web.com/articles/head_conference_aral_balkan/" target="_blank">this pre-conference interview</a> with Aral Balkan.</p>
<p>Head will be the focus of my next post, so more on Head soon! One of my main goals in attending the <a href="http://www.headconference.com/hubs/london-uk/" target="_blank">London Hub</a> of Head was to interview the CEO and founder of <a href="http://www.amee.cc/" target="_blank">AMEE</a>, the &#8220;Avoiding Mass Extinctions Engine,&#8221; <a href="http://www.headconference.com/speakers/gavin-starks/" target="_blank">Gavin Starks</a>. AMEE aims to be &#8220;the energy meter of the world.&#8221;</p>
<blockquote><p>AMEE is a neutral aggregation platform designed to measure and track all the energy data on Earth.</p></blockquote>
<p>AMEE is a project with the kind of big goals that O&#8217;Reilly talked about in his keynote at the Web 2.0 Expo, NYC. Tim O&#8217;Reilly is an investor in AMEE. He announced at Head that the O&#8217;Reilly VC company has just closed a deal with AMEE.</p>
<p>I had an extraordinary opportunity to spend some time talking with Tim O&#8217;Reilly while looking for a sandwich in Euston Square. More on this sandwich adventure, my interview with Tim O&#8217;Reilly, and my long talk with Gavin Starks about AMEE, in my next post!</p>
<p>Tim kept saying in London that he doesn&#8217;t like predicting the future. But the future comes to Tim O&#8217;Reilly!</p>
<p>And, after talking with Tim and Gavin, I felt I had a very exciting glimpse of what is emerging from the tech industry&#8217;s burning issues. George F. Colony of Forrester summarized these issues nicely in his post, <a href="http://blogs.forrester.com/colony/2008/10/my-take-on-the.html" target="_blank">&#8220;Why This Tech Recession Will Be Different.&#8221;</a> Colony noted, &#8220;Virtualization, social computing, mobile computing, Green IT, SOA, extended Internet (connecting the physical world to the digital world) are front and center on the agendas of large companies.&#8221;</p>
<p>And, yes, this is supposed to be a little bit of a teaser for my next post on AMEE!</p>
<h3>Virtual Worlds Road Map</h3>
<p>The final keynote at Virtual Worlds London was what Ian Hughes, in <a href="http://eightbar.co.uk/2008/10/23/virtual-worlds-london-metarati-and-moving-coffee-day-1-part-1/" target="_blank">his post on the conference for Eightbar</a>, aptly described as a call to arms for the <a href="http://www.virtualworldsroadmap.org/" target="_blank">Virtual Worlds Roadmap</a>. As Ian pointed out: &#8220;This needs a post in its own right as we all need to get on board with this across the industry and help.&#8221; Ian Hughes&#8217; (IBM) own presentation on &#8220;Business Process Management&#8221; was one of the best I attended at the conference. Yes, amazingly, he made this topic very interesting and fresh!</p>
<p>The pictures opening this post show the Virtual Worlds Road Map presenters: Victoria Coleman (Samsung) &#8211; seated at center, Sibley Verbeck (<a href="http://www.electricsheepcompany.com/">Electric Sheep Company</a>) &#8211; in trademark hat, <a href="http://www.virtualworldslondon.com/speakers/jeffreypope.html">Jeffrey Pope</a> (3Di) &#8211; far left, and <a href="http://www.damer.com/">Bruce Damer</a> &#8211; close up in the picture on the right.</p>
<p>I am delighted to join Bruce Damer, later today, for a <a href="http://www.fastcompany.com/node/1052129" target="_blank">FastCompany.com Technology Group Call-in</a>: <strong>&#8220;Next Generation Interaction: Are Virtual Worlds Waiting in the Wings?&#8221; </strong>with <a title="Donald Schwartz" href="http://www.fastcompany.com/user/donald-schwartz" target="_blank">Donald Schwartz</a> (October 28th at 4:00 PM EST).</p>
<p>I will also be in Second Life <a href="http://slurl.com/secondlife/Wolpertinger/173/87/51" target="_blank">at Train 4 Success (SLURL)</a> on Thursday, October 30 (starting at 9AM PST) with <a href="http://peterquirk.wordpress.com/" target="_blank">Peter Quirk, EMC</a>, and Jani Pirkola, <a href="http://www.realxtend.org/" target="_blank">realXtend</a> talking about <a href="http://www.opensimulator.org" target="_blank">OpenSim</a> and <a href="http://www.realxtend.org/" target="_blank">realXtend</a> for an event organized by Eilif Trondsen of the <a href="http://www.sri.com/" target="_blank">Stanford Research Institute</a> and the Gronstedt Group.</p>
<p>John Hengeveld (Intel) was off screen for this group picture (above). But Intel is doing some very interesting work in Virtual Worlds (<a href="http://www.ugotrade.com/2008/09/15/interview-with-mic-bowman-intel-the-future-of-virtual-worlds/" target="_blank">see my earlier post here</a>). And John is &#8220;helping <a href="http://www.digitalspace.com/projects/b612movies.html">NASA work out how to deflect extinction-level-event asteroids from Earth</a>.&#8221;</p>
<p>As Ian noted, the main aim of Virtual Worlds Road Map, &#8220;is to gather together and cut through use cases to understand and help people come to terms with which applications need to be built for which case.&#8221;</p>
<p>For more great coverage of Virtual Worlds London check out <a href="http://eightbar.co.uk/2008/10/23/virtual-worlds-london-metarati-and-moving-coffee-day-1-part-1/" target="_blank">Ian&#8217;s post</a> on Eightbar. And check out Roo Reynolds&#8217; <a href="http://rooreynolds.com/2008/10/21/virtual-worlds-london-liveblogging-day-2/" target="_blank">live blogging here</a> and <a href="http://rooreynolds.com/2008/10/20/virtual-worlds-london-liveblogging/" target="_blank">here</a>. Also see Roo&#8217;s post on his panel on <a href="http://rooreynolds.com/2008/10/24/arg-panel-at-virtual-worlds-london-2/" target="_blank">&#8220;ARGs [Alternative Reality Games] and Virtual Worlds,&#8221;</a> which includes slides and audio. The picture below is Roo in action live blogging. Roo is Portfolio Executive for Social Media at BBC Vision.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/rooreynoldslivebloggin.jpg"><img class="alignnone size-full wp-image-1987" title="rooreynoldslivebloggin" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/rooreynoldslivebloggin.jpg" alt="" width="450" height="299" /></a></p>
<h3>Tribal Media: A Teacher Training Intranet For The Swedish Government on OpenSim</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/darrenpost.jpg"><img class="alignnone size-full wp-image-1980" title="darrenpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/darrenpost.jpg" alt="" width="350" height="368" /></a></p>
<p>One of the more interesting developments I saw at Virtual Worlds London was a highly customized training intranet for 50,000 teachers being developed for the Swedish Government by <a href="http://tribalnet.se/About/TribalMedia/tabid/78/Default.aspx" target="_blank">Tribal Media</a>. The flexibility of <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a> to provide cost effective custom intranet solutions was nicely demoed by Darren Guard, Tribal Media R&amp;D (pictured above). Darren is one of the more reclusive founders and phenom developers of OpenSim.</p>
<h3>Virtual Worlds and Web 2.0</h3>
<p>In my earlier interviews with Rob Smart <a href="http://www.ugotrade.com/2008/09/29/rob-smart-ibm-web-20-to-opensim-made-easy/" target="_blank">here</a>, and Teravus Ousley <a href="http://www.ugotrade.com/2008/10/06/putting-opensim-into-the-heart-of-web-20/" target="_blank">here</a>, we discussed the work to integrate OpenSim with Web 2.0.</p>
<p>To meet the O&#8217;Reilly challenge &#8211; to do something useful with the internet and help solve some of the world&#8217;s big problems, in my view, Virtual World technologies must engage more fully with the power of the internet-as-a-platform &#8211; <span id="intelliTxt">&#8220;a system without an owner, tied together by a set of protocols, open standards and agreements for cooperation.&#8221; (see O&#8217;Reilly, </span> <a href="http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html" target="_blank">&#8220;What Is Web 2.0?&#8221;</a> ).</p>
<p>Unfortunately the worst presentation at Virtual Worlds London was purportedly on standards for virtual worlds. I do not want to waste energy rehashing the misinformed and misguided presentation on MPEG-V&#8217;s archaic blunderbuss approach to standards in this post. I completely concur with Jim Purbrick of Linden Lab&#8217;s characterization of this talk as <a href="http://jimpurbrick.com/2008/10/23/second-life/" target="_blank">&#8220;the worst talk I&#8217;ve heard in a long time</a>.&#8221; (Also, see Jim&#8217;s post for an <a href="http://jimpurbrick.com/2008/10/23/second-life/" target="_blank">astute commentary</a> on other aspects of Virtual Worlds London.) Luckily, there is much productive work from many quarters aimed at developing standards for Virtual Worlds, and <span id="intelliTxt">some of these efforts I have blogged about here on UgoTrade.</span></p>
<p><span id="intelliTxt">Because there is sometimes confusion in Virtual World discussions about how business models work on a &#8220;system without an owner,&#8221; here is the concluding quote from &#8220;What Is Web 2.0?&#8221;</span></p>
<blockquote><p><span id="intelliTxt">This is not to say that there are not opportunities for lock-in and competitive advantage, but we believe they are not to be found via control over software APIs and protocols. There is a new game afoot. The companies that succeed in the Web 2.0 era will be those that understand the rules of that game, rather than trying to go back to the rules of the PC software era.</span></p></blockquote>
<h3>What Is the Killer App for Virtual Worlds?</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/robsmartpost.jpg"><img class="alignnone size-full wp-image-1971" title="robsmartpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/robsmartpost.jpg" alt="" width="450" height="299" /></a></p>
<p><strong>&#8220;The killer is that any app you do create is automatically presence enabled.<br />
The people with you can view the changing states of that application or context as and when you do.&#8221; &#8211; Rob Smart, IBM.</strong></p>
<p>The picture above shows the presenters for the <strong>&#8220;Platform Integration Considerations for Enterprise Virtual Worlds&#8221;</strong> panel. From left to right: <a href="http://www.virtualworldslondon.com/speakers/jeanmiller.html">Jean Miller, German Market Development Manager, Linden Lab</a>, <a href="http://www.virtualworldslondon.com/speakers/mattfurman.html">Matt Furman, Software Engineer, Northrop Grumman</a>, and <a href="http://www.virtualworldslondon.com/speakers/robsmart.html">Rob Smart, Emerging Technology Specialist, IBM Hursley</a>.</p>
<h3>Interview with Rob Smart, IBM: Part 2.</h3>
<p><span style="font-size: small;"><strong>Tish Shute:</strong> Up to now, Virtual Worlds have been relatively isolated from Web 2.0, living somewhere between the gaming world and the Web 2.0 world. How are the curtains lifting, with virtual worlds becoming the linking space between social media and online gaming?</span></p>
<p><span style="font-size: small;"><strong>Rob Smart: </strong>Virtual Worlds that allow user-created content, and the association of behaviour with that content via scripting, put themselves forward as the ideal platform to combine realtime social interaction with existing Web 2.0 tools. The data and function currently out there on Web sites can serve to augment the real-time social interactions &#8211; for example, enhancing/enabling cross-cultural communication with chat translation (such as my translation HUD from way back in 2006).</span></p>
<p><span style="font-size: small;">Another example is augmenting personal spaces with flickr images, video, etc. In many Flash room-based Virtual Worlds this level of integration exists. However, without the ability of users to create their own gadgets and gizmos, the pressure is on the development team to innovate and give users what they want &#8211; tough to do in the long term. A blended approach is to open APIs and content creation to registered developers.</span></p>
<div class="Ih2E3d">
<p><strong>Tish Shute:</strong> Many developers have not been interested in taking part in virtual world development yet, as they haven&#8217;t yet seen a killer app. How are open source, open protocols, and the use of web standards where possible enabling an environment of innovation from which killer apps may emerge?</p></div>
<p><span style="font-size: small;"><strong>Rob Smart:</strong> When you&#8217;re integrating any system with another, it becomes so much simpler if the creators have provided services and APIs for external systems to interact with. It becomes even easier if those entry points accept and return inputs and outputs in a common way, e.g. XML/JSON. The same goes for both data and media.</span></p>
<p><span style="font-size: small;">By using common existing standards we shorten development time, because if a standard is widely adopted there will be a multitude of programming-language libraries for it. Their existence means the developer can get straight onto the important task of creating the logic for their application or gadget, rather than messing around trying to understand some weird data encoding method someone has invented.</span></p>
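<p>A tiny sketch of Rob&#8217;s point about common formats: because a service speaks JSON here, the standard library handles all the decoding and the gadget author writes only application logic. The message shape and field names below are hypothetical, not any real virtual-world API.</p>

```python
import json

# Hypothetical gadget logic: decode a chat message a web service
# sends in a common format (JSON). Only the application logic is
# hand-written; the parsing comes for free from the standard library.
def decode_gadget_message(payload: str) -> str:
    data = json.loads(payload)
    return f'{data["user"]} says: {data["text"]}'

message = decode_gadget_message('{"user": "Tish", "text": "hello"}')
print(message)
```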
<p><span style="font-size: small;">Having an Open Source platform spreads the workload around. As long as the method under which the OS software is licensed isn&#8217;t too prohibitive, developers from all walks of life will contribute. Spreading that workload also leads to an increase in innovative features, as people always bring their experience and interests to bear; the features they create can be shared back, and others can build on top of them.</span></p>
<p><span style="font-size: small;">If a company chooses to implement a feature they specialize in, or to integrate with their existing products, they can sell this as an add-on. This creates a market where the base product can improve through contributions from companies making a living off the OS product; it also introduces some competition and a financial incentive tied to the platform&#8217;s well-being.</span></p>
<p><span style="font-size: small;">People keep talking about killer apps within Virtual Worlds. The killer is that any app you do create is automatically presence enabled. The people with you can view the changing states of that application or context as and when you do.</span></p>
<div class="Ih2E3d"><span style="font-size: small;"><strong>Tish Shute:</strong> Have Virtual Worlds outgrown this name? The term Virtual Worlds has connotations of separateness from &#8220;real&#8221; worlds. What might be a better term? (I have seen a number of other terms cropping up: Virtual Universe is what IBMers wore on their t-shirts here in London, Immersive Work Spaces has been trademarked by RRR, and many people prefer the terms virtual environments or virtual spaces.)</span></div>
<p><span style="font-size: small;"><strong>Rob Smart: </strong>I still think Virtual Worlds is a good term, though it is very fuzzy. If we&#8217;re talking about VWs that can be extended and integrated with Web 2.0, then maybe we need to talk about Immersive Application Platforms. Yep, not very catchy, but probably something more people in the enterprise world would say out loud in front of their boss <img src="https://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_wink.gif" alt=";)" class="wp-smiley" /> Another term that could be used is 3D Internet; it conjures more of a picture of integration between the different parts of what is a vast networked system.<br />
<strong><strong><br />
Tish Shute:</strong> </strong>The original Metaverse Roadmap had four distinct segments: Augmented Reality and Life Logging at the pole of augmentation, and Mirror Worlds and Virtual Worlds at the pole of simulation. How are these areas coming together?<br />
</span><strong><br />
<span style="font-size: small;"><strong>Rob Smart: </strong></span></strong><span style="font-size: small;">There&#8217;s no reason these need to be separated; it all comes down to the use of the VW platform. These four segments are just applications of a virtual world platform, and a platform like OpenSim can merge several of them together if necessary. For example, the Publish Subscribe messaging module I created, written about on Eightbar, lets me do things like bring in realtime flight data and show planes&#8217; positions across a region. I could at the same time call an API that gives me more details on a flight. I could even search for blogs that mention that flight number and bring them into the same space. And I could add additional script functions to the plane objects so that when a visitor clicks on a plane, it thereafter sends them messages about its position. </span><br />
<strong><span style="font-size: small;"><br />
<strong>Tish Shute:</strong> </span></strong><span style="font-size: small;">Virtual worlds are being broken down into open source basic building blocks and modules that can be mixed, matched, and mashed up with Web 2.0 to create a new ecosystem that enriches both what has been known as virtual worlds and traditional web environments. What kind of innovation do you see coming out of these new opportunities to mash up virtual worlds with Web 2.0?</span></p>
<p><span style="font-size: small;"><strong><strong>Rob Smart: </strong></strong>My number one hope is an increase in accessibility. Despite a number of people saying that browser-based virtual worlds aren&#8217;t worth the effort, they certainly are. The ability to just send a friend a URL or instant message and pull them in with you is an important step toward adoption, as are simplified interfaces that don&#8217;t scare off those unfamiliar with gaming. An example of this is the Lotus Sametime 3D work with OpenSim that lets you invite a friend or colleague in via an instant message.</span></p>
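The publish/subscribe mashup Rob describes (real-time flight data driving objects in a virtual-world region, with visitors subscribing to a flight by clicking a plane) can be sketched in a few lines. This is an illustrative sketch only: the class names (`PubSubBus`, `Region`), the topic name, and the message shape are invented here and are not the actual eightbar module or any OpenSim API.

```python
# Illustrative sketch of the publish/subscribe pattern described above.
# All names and message fields are hypothetical, not real OpenSim APIs.
from collections import defaultdict

class PubSubBus:
    """Minimal in-process topic-based publish/subscribe bus."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

class Region:
    """Stands in for a virtual-world region that renders plane objects."""
    def __init__(self, bus):
        self.planes = {}  # flight number -> (lat, lon, altitude)
        bus.subscribe("flights", self.on_flight_update)

    def on_flight_update(self, msg):
        # Move (or create) the in-world object for this flight.
        self.planes[msg["flight"]] = (msg["lat"], msg["lon"], msg["alt"])

bus = PubSubBus()
region = Region(bus)

# A feed adapter would publish real-time flight positions like this:
bus.publish("flights", {"flight": "BA117", "lat": 51.47, "lon": -0.45, "alt": 11000})

# A visitor clicking a plane could add their own subscription to the topic:
bus.subscribe("flights", lambda m: print(m["flight"], "now at", m["alt"], "ft"))
```

In the real module the bus would be a networked message broker and the region callback would move actual in-world objects, but the decoupled publisher/subscriber shape is the same.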
<h3>Virtual Worlds For Enterprise: A Coming of Age Party?</h3>
<p>As Ian mentioned, I did think that the London conference was a coming of age party for enterprise virtual worlds. In the picture below are just some of the Lindens who were there, many to promote the Linden Lab collaboration with Rivers Run Red on <a href="http://immersivespaces.com/" target="_blank">&#8220;Immersive Work Spaces&#8221; </a>which was <a href="http://blogs.wsj.com/biztech/2008/10/20/linking-the-real-web-with-virtual-worlds/" target="_blank">written up in the Wall Street Journal.</a> Also see this post yesterday on Silicon.com, <a href="http://www.silicon.com/silicon/networks/webwatch/0,39024667,39285821,00.htm" target="_blank">&#8220;Virtual Worlds Set For Second Coming.&#8221; </a></p>
<p>Someone please help me with all the names of the Lindens in the picture below! <a href="http://www.virtualworldslondon.com/speakers/mattfurman.html">Matt Furman</a> from Northrop Grumman is in the center and Joey Seiler from <a href="http://www.virtualworldsnews.com/" target="_blank">Virtual World News</a> is on the right.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/lindens.jpg"><img class="alignnone size-full wp-image-1988" title="lindens" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/lindens.jpg" alt="" width="450" height="299" /></a></p>
<p>Justin Bovington said to me that this conference was in his view: &#8220;the enterprise virtual worlds coming out party &#8211; an acceptance that this is a tangible solution- about selling relevant tools and relevant ROI &#8211; rather than talk about virtual worlds it is about relevant tool sets.&#8221;</p>
<p>And, while the conference was small, I think the engagement level of the enterprise attendees did back up Justin&#8217;s assertion. <a href="http://www.virtualworldslondon.com/speakers/mattfurman.html">Matt Furman, Software Engineer, Northrop Grumman</a> was asked by more than one attendee how he was dealing with scaling up the behind-the-firewall virtual world he is developing for Northrop Grumman with Linden Lab to meet a big internal demand for virtual world collaboration. Apparently some attendees were seeing so much interest in virtual world solutions for internal collaboration in their own companies that they were concerned about meeting the needs of thousands of employees in short order.</p>
<h3>Immersive Work Spaces</h3>
<p>I asked Justin a few questions about Immersive Work Spaces while waiting for an elevator!</p>
<p><strong>Tish Shute:</strong> And what are the relevant tool sets from your point of view?</p>
<p><strong>Justin Bovington:</strong> Collaboration, sharing, integration of existing backend systems and applications. For example, we have developed seamless ways to share PowerPoint or share screens. And, also going back to the ROI models as well, tangible ROI based on a subscription-based system where basically in four or five usages it has paid for itself. We have never had that with Virtual Worlds. It has always been in the bounds of experimentation or the bounds of isn&#8217;t-it-cool technology. Now we are seeing this become a serious collaboration tool.</p>
<p>And as I have said before, arguably the twentieth century ended two weeks ago and the twenty-first century is now with us. And that is about companies reengineering their thinking; particularly in the financial sector, they have to restart again. And that is going to be about using additional tools and additional guidelines to do that. This is the changeover, and I have said this in the panel as well. This show in particular is the enterprise virtual worlds coming out party.</p>
<p>And again we see a massive change between the last three shows &#8211; there is a level of interest we have never seen before and also an acceptance that this is a tangible solution not just something that is cool&#8230;</p>
<p>We have hundreds of users in our product and it will go to thousands and tens of thousands in the next year.</p>
<p>And we know where it is going &#8211; data visualization is going to be the next big thing, getting this 10,000 ft view of your company. We are using this term &#8220;snow globing&#8221;: you pick up a snow globe and shake it and see exactly what a company is about, and this is exactly what virtual worlds are about.</p>
<p>It&#8217;s about having a ten thousand foot view of your company, because that&#8217;s when it becomes powerful &#8211; it becomes a broadcast medium. And I think it will change people&#8217;s perception of data. It is also moving beyond just having the avatar as the main presence. The environment itself becomes an essence, a kind of dynamic level inside there. We are working on stuff at the moment that allows you to have direct influence on data or the environment you are in, which on a massive collaboration scale could actually give you a huge amount of input and ideas around the company. And there is a genuine need for this kind of collective intelligence.</p>
<h3>Sine Wave Dinner!</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/sinewavedinnerpost.jpg"><img class="alignnone size-full wp-image-1990" title="sinewavedinnerpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/sinewavedinnerpost.jpg" alt="" width="450" height="299" /></a></p>
<p>The grand finale for me was the excellent Indian meal very generously hosted by Rohan Freeman of <a href="http://www.sinewavecompany.com/" target="_blank">Sine Wave Company</a>. Standing on the left is Chris Collins, Linden Lab; seated front left is Steve Spangaro, bigpipemedia; and on the right is Ren Reynolds of the Virtual Policy Network. Many other metarati were there, including Bruce Joy, Vast Park; Corey Bridges, Multiverse; Dave Taylor, Imperial College; Gia Rossini, Sloodle; Peter Haik, Metaversality; Adam Frisby, OpenSim; Mal Burns; and many more &#8211; please help me out with the name tagging!</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2008/10/28/doing-something-useful-with-virtual-worlds/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Mashing Up Virtual Worlds With Web 2.0 and Online Gaming</title>
		<link>https://www.ugotrade.com/2008/10/16/mashing-up-virtual-worlds-with-web-20-and-online-gaming/</link>
		<comments>https://www.ugotrade.com/2008/10/16/mashing-up-virtual-worlds-with-web-20-and-online-gaming/#comments</comments>
		<pubDate>Fri, 17 Oct 2008 02:53:51 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[3D internet]]></category>
		<category><![CDATA[Architecture Working Group]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[crossing digital divides]]></category>
		<category><![CDATA[digital public space]]></category>
		<category><![CDATA[free software]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[Linden Lab]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[MMOGs]]></category>
		<category><![CDATA[Mobile Technology]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open protocols for virtual worlds]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[open standards for virtual worlds]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[realXtend]]></category>
		<category><![CDATA[Second Life]]></category>
		<category><![CDATA[social gaming]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[vapor standards]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web 3D]]></category>
		<category><![CDATA[Web3.D]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[a smart world]]></category>
		<category><![CDATA[Engage]]></category>
		<category><![CDATA[gartner hype cycle]]></category>
		<category><![CDATA[Google's Lively]]></category>
		<category><![CDATA[internet standards]]></category>
		<category><![CDATA[Joe The Plumber]]></category>
		<category><![CDATA[Mashing Up Virtual Worlds With Web 2.0]]></category>
		<category><![CDATA[Mashups]]></category>
		<category><![CDATA[Metaverse1]]></category>
		<category><![CDATA[MPEG-V]]></category>
		<category><![CDATA[OpenGrid Protocol]]></category>
		<category><![CDATA[shared spaces]]></category>
		<category><![CDATA[SHASPA]]></category>
		<category><![CDATA[Sinewave Pub Quiz]]></category>
		<category><![CDATA[smart buildings]]></category>
		<category><![CDATA[smart spaces]]></category>
		<category><![CDATA[standards]]></category>
		<category><![CDATA[the problem with top down standards]]></category>
		<category><![CDATA[top down standards]]></category>
		<category><![CDATA[Virtual Worlds and Online Gaming]]></category>
		<category><![CDATA[virtual worlds for performance optimization]]></category>
		<category><![CDATA[virtual worlds for product life cycle management]]></category>
		<category><![CDATA[Web 2.0 Standards]]></category>
		<category><![CDATA[Wikitecture]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=1887</guid>
		<description><![CDATA[The curtains are lifting between Virtual Worlds, Web 2.0, and online gaming. There are many indications of this in the news including, the rebranding of the Virtual Worlds Conf. and Expo in New York City as &#8220;Engage! Expo &#8211; 3D Web, Virtual Worlds, and Virtual Goods,&#8221; and Google&#8217;s Lively opening up an API for game [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/modular_rex.jpg"><img class="alignnone size-full wp-image-1903" title="modular_rex" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/modular_rex.jpg" alt="" width="450" height="306" /></a></p>
<p>The curtains are lifting between Virtual Worlds, Web 2.0, and online gaming. There are many indications of this in the news, including the rebranding of the Virtual Worlds Conference and Expo in New York City as <a href="http://www.engageexpo.com/expo/index.html" target="_blank">&#8220;Engage! Expo</a> &#8211; 3D Web, Virtual Worlds, and Virtual Goods,&#8221; and <a href="http://www.lively.com/html/landing.html" target="_blank">Google&#8217;s Lively</a> opening up <a href="http://www.virtualworldsnews.com/2008/09/will-lively-be.html" target="_blank">an API for game development</a>.</p>
<p>And, if you have been reading Ugotrade recently, you will know I have been up late several nights trying to keep up with the pace of the <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a>, <a href="http://wiki.secondlife.com/wiki/Architecture_Working_Group" target="_blank">Architecture Working Group</a>, and <a href="http://secondlifegrid.net.s3.amazonaws.com/docs/specs/SLGOGP-draft-1.html" target="_blank">OpenGrid Protocol</a> teams, which are proceeding at a fast clip with their work on Web 2.0 integration for immersive Virtual Worlds (and there is still much more to write on this!).</p>
<p>Also, this week, there was the preview launch (the full launch is scheduled for November) of a new collaboration, &#8220;SHASPA,&#8221; between EOLUS<sup>&#174;</sup> One and <a href="http://www.seriousgamesinstitute.co.uk/" target="_blank">The Serious Games Institute</a>. SHASPA was unveiled to a select audience of business decision makers at the <a href="http://www.shakespeares-globe.org/" target="_blank">Globe Theater, London</a> on Wednesday.</p>
<p><a href="http://www.ugotrade.com/images/OliverShaspa.jpg" target="_blank"><img class="alignnone size-thumbnail wp-image-1910" title="olivershaspapost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/olivershaspapost-150x150.jpg" alt="" width="150" height="150" /></a><a href="http://www.ugotrade.com/images/Shaspa1.jpg" target="_blank"><img class="alignnone size-thumbnail wp-image-1911" title="shaspa1post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/shaspa1post-150x150.jpg" alt="" width="150" height="150" /></a><a href="http://www.ugotrade.com/images/DavidWortley.jpg" target="_blank"><img class="alignnone size-thumbnail wp-image-1917" title="davidwortleypost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/davidwortleypost-150x150.jpg" alt="" width="150" height="150" /></a></p>
<h3>SHASPA &#8211; making a &#8220;smart&#8221; world with &#8220;shared spaces&#8221;</h3>
<p>Oliver Goh said, &#8220;SHASPA is a unique cooperation to bring together Virtual Worlds (OpenSim, Second Life<sup>&#174;</sup>), Web 2.0 Applications and the world of mobile applications&#8230;.&#8221;</p>
<p>The pictures above show Oliver speaking (far left), David Wortley, Director of the Serious Games Institute (SGI) at Coventry (far right), and some of the audience (center). I recognize several very important virtual world pioneers and innovators in the midst of the fray &#8211; Rohan Freeman (CEO of the <a href="http://www.sinewavecompany.com/" target="_blank">Sine Wave Company</a>), Prof Jeremy Watson (<a href="http://www.arup.com/">Arup</a>), Dr Anthony Dennis (<a href="http://www.infoterra.co.uk/" target="_blank">infoterra</a>), and <a href="http://www.nanodave.com/" target="_blank">Dave Taylor</a>, Programme Lead, Virtual Worlds and Medical Media at Imperial College London.</p>
<p>EOLUS<sup>&#174;</sup> One initially started as an innovation project focused on developing new service-oriented offerings for the real estate industry. Oliver explained to me:</p>
<blockquote><p>The focus is on sustainable real estate, enhancing the structural and technical performance of properties which will be the first use case for SHASPA. SHASPA uses the combined power of the work done by the SGI and EOLUS One to create Smart Shared Spaces for various industries. The first use case will be in the Real Estate industry to revolutionize approaches to facilities/property performance optimization and energy management.</p></blockquote>
<h3>RealXtend harnesses OpenSim as the engine for their server-side development</h3>
<p>Adam Frisby sent me the picture opening this post today. Adam pointed out it shows:</p>
<blockquote><p>&#8230;<a href="http://www.realxtend.org/" target="_blank">Realxtend</a> just running under OpenSim rather than the forked version of OpenSim realXtend did. It&#8217;s been converted to a set of OpenSim plugins &#8211; we&#8217;re still at a semi-preliminary stage, however, we&#8217;ve got Rex avatars and a few other features now working.</p></blockquote>
<p><a href="http://www.realxtend.org/" target="_blank">Realxtend</a> is now able to leverage the OpenSim core, and OpenSim developers can work with realXtend innovation as plugins. For more details on this modular integration, <a href="http://www.ugotrade.com/2008/07/02/new-release-from-realxtend-and-modular-integration-into-opensim/" target="_blank">see my previous post</a>. This modular architecture will create many new opportunities for mashups. And the Web 2.0 integration and interoperability work that is central to the OpenSim vision will be aligned with the advanced 3D Internet layer realXtend has been building on top of it.</p>
<p>On their diverse and multifaceted team, RealXtend has a number of world-class game developers who have in a very short time progressed rapidly towards the goals Tony Manninen, Ludocraft, <a href="http://www.ugotrade.com/2008/02/27/realxtends-vision-for-avatar-20/" target="_blank">described to me back in February</a>: &#8220;making sure the realXtend development reaches the required quality and performance standards you would expect from MMOGs.&#8221;</p>
<p>And as Jani Pirkola, Project Manager for RealXtend, points out:</p>
<blockquote><p>For realXtend it means that we can have all the OpenSim development directly benefit realXtend, whether it is Web 2.0 or other features.</p></blockquote>
<h3>Web Friendly Standards for Virtual Worlds</h3>
<p>I am off to London next week to the <a href="http://www.virtualworlds2007.com/" target="_blank">Virtual Worlds Conference and Expo</a>. But, while I am very excited to meet old and new friends there, it is disappointing to note that the open source developer communities and the interoperability and open protocol efforts of OpenSim and Linden Lab are sadly unrepresented in London.</p>
<p>Making virtual worlds part of the fabric of the internet and everyday computing will not happen because some arbitrary standards body pontificates on elaborate requirements and then tries to get the backing of big business to implement its standards from the top down. There are many white papers on why this old-fashioned way of developing standards is not applicable to the fast-moving internet environment. As David Levine, IBM, so nicely put it a while back, interoperability and standards for virtual worlds:</p>
<blockquote><p>will emerge battered byte by battered byte from the hands of grubbie techies each with an agenda. Except on Second Life some of us are blonde, with a pert smile but yeah&#8230;</p></blockquote>
<p>It is, in my view, unfortunate that Dr Yesha Y Sivian, <a href="http://www.metaverse1.org/" target="_blank">Metaverse1</a>, in <a href="http://www.metaverse1.org/2008/08/virtual-worlds-sos-state-of-standards.html" target="_blank">his talk</a> &#8220;Virtual Worlds State of Standards (SOS): MPEG-V, Metaverse1, Open-SIM and more&#8221; has put OpenSim in his title (and Architecture Working Group in the body of his abstract) when he does not seem to have (yet) invited anyone from OpenSim or Architecture Working Group or OGP to represent their own work. Again, unfortunately, a panel including key industry leaders and representatives from OpenSim and Architecture Working Group did not get the opportunity to present in London because Dr Sivian&#8217;s proposal gave the conference organizers the impression there was already a &#8220;similar panel.&#8221;</p>
<p>MPEG-V and Metaverse1 are Dr Yesha Sivian&#8217;s projects and they are at a very early stage of development (basically an effort to define a set of requirements and garner business support for the notion of creating so-called MPEG-V standards). To have credibility, in my view, these projects need to engage with other groups that are working on standards and actually have working code, as the Architecture Working Group and OpenGrid Protocol (OGP) do.</p>
<p>There are some common misunderstandings about the approach of the Architecture Working Group that should be cleared up.</p>
<p>As key architects of the OpenGrid Protocol (OGP) and the Architecture Working Group frequently stress, OGP is a point of departure. While it is focused on the existing code of OpenSim and Second Life, the overall framework is as broad as, or broader than, the metaverse work. The goal is to create a fully described set of web-based protocols and formats which will do anything MPEG-V wants to do, but meshed far more fully into the web.</p>
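As a rough illustration of what a &#8220;fully described set of web-based protocols and formats&#8221; means in practice: a client posts a structured document to a well-known endpoint and receives back capability URLs it uses for subsequent operations. The sketch below is a toy, in-process imitation of that shape only; the endpoint behavior, field names, and URLs are invented for illustration and do not reproduce the actual OGP draft wire format.

```python
# Toy illustration of a capability-style web protocol.
# Field names, URLs, and behavior are invented, not the real OGP draft.
import json

def agent_login(request_body):
    """Pretend server endpoint: accepts a login document, returns capabilities."""
    doc = json.loads(request_body)
    if "agent_name" not in doc:
        return {"error": "missing agent_name"}
    # Hand back URLs the client would use for subsequent operations,
    # rather than baking every operation into one monolithic protocol.
    return {
        "capabilities": {
            "place_avatar": "https://example.org/cap/place/abc123",
            "event_queue": "https://example.org/cap/events/abc123",
        }
    }

response = agent_login(json.dumps({"agent_name": "Tish Shute"}))
caps = response["capabilities"]
```

The point of the pattern is that each returned capability is just another web resource, which is what lets this kind of protocol mesh into ordinary web infrastructure.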
<p>Metaverse1 needs to be in dialogue with the standards work that has already produced code, if they are serious about creating good standards.</p>
<h3>Out of the Trough of Disillusionment onto the Slope of Enlightenment</h3>
<p>It seems Virtual Worlds may have started up the Slope of Enlightenment (see the <a href="http://en.wikipedia.org/wiki/Hype_cycle" target="_blank">Gartner Hype Cycle</a>). Virtual Worlds, and immersive Virtual Worlds in particular (<a href="http://secondlife.com" target="_blank">Second Life<sup>&#174;</sup> </a>and <a href="http://opensimulator.org" target="_blank">OpenSim</a>), continue to garner broad consumer interest. And the ability of Virtual Worlds to deliver added value in key areas of collaboration and energy conservation is fueling a lot of interest from education and enterprise.</p>
<p>Worries of depression and recession in the global economy abound, and the internet is abuzz with discussion of Joe the Plumber (as Mitch Kapor noted on Twitter: &#8220;<span class="entry-content">Not since Nixon have we heard so much about plumbers&#8221;).</span></p>
<p><span class="entry-content">Nevertheless</span>, there has been quite a steady flow of positive news from Virtual Worlds. See <a href="http://www.calebbooker.com/blog/2008/10/12/business-in-virtual-worlds-news-roundup-oct-6-12-2008/" target="_blank">Caleb Booker&#8217;s roundup</a> and <a href="http://www.virtualworldsnews.com/" target="_blank">Virtual World News</a> and check for yourself. And just in is the Forbes.com post <a href="http://www.forbes.com/technology/ebusiness/2008/10/09/virtual-world-economy-tech-ebiz-cx_mji_1010virtual.html" target="_blank">&#8220;A &#8216;virtual&#8217; Escape from Economic Pain.&#8221;</a> After scanning my reader, I checked my perception on <a href="http://twitter.com" target="_blank">Twitter</a> and quickly got replies from <a href="http://wagner.typepad.com/wagner/links_to_my_informationweek_content/" target="_blank">Mitch Wagner of Information Week</a>.</p>
<p><a class="url" href="http://twitter.com/Ugotrade"><img id="profile-image" class="photo fn" src="http://s3.amazonaws.com/twitter_production/profile_images/56220741/TishheadshotNYC3twitter_normal.jpg" alt="Tish Shute" /></a></p>
<div class="status-body"><strong><a title="Tish Shute" href="http://twitter.com/Ugotrade">Ugotrade</a></strong> <span class="entry-content"> Anyone seen ANY negative stories about Second Life lately? Seems there&#8217;s negative news everywhere else but immersive VWs r gold again? </span> <span class="meta entry-meta"> <a class="entry-date" rel="bookmark" href="http://twitter.com/Ugotrade/statuses/957104815"><span class="published" title="2008-10-13T02:07:55+00:00">about 13 hours</span> ago</a> from web </span></div>
<p><a class="url" href="http://twitter.com/MitchWagner"><img id="profile-image" class="photo fn" src="http://s3.amazonaws.com/twitter_production/profile_images/57644893/Mitch_Wagner_business_mug_shot_normal.jpg" alt="Mitch Wagner" /></a></p>
<div class="status-body"><strong><a title="Mitch Wagner" href="http://twitter.com/MitchWagner">MitchWagner</a></strong> <span class="entry-content"> @<a href="http://twitter.com/Ugotrade">Ugotrade</a> I looked for negative stories about SL a few weeks ago, couldn&#8217;t find any recent ones. </span> <span class="meta entry-meta"> <a class="entry-date" rel="bookmark" href="http://twitter.com/MitchWagner/statuses/957109943"><span class="published" title="2008-10-13T02:13:49+00:00">about 13 hours</span> ago</a> from <a href="http://www.tweetdeck.com/">TweetDeck</a> <a href="http://twitter.com/Ugotrade/statuses/957104815">in reply to Ugotrade</a> </span></div>
<p><a class="url" href="http://twitter.com/Ugotrade"><img id="profile-image" class="photo fn" src="http://s3.amazonaws.com/twitter_production/profile_images/56220741/TishheadshotNYC3twitter_normal.jpg" alt="Tish Shute" /></a></p>
<p><span class="entry-content">@<a href="http://twitter.com/MitchWagner">MitchWagner</a> &#8211; while you didn&#8217;t find any negative stories have you seen an increase in positive stories in mainstream media? </span> <span class="meta entry-meta"> <a class="entry-date" rel="bookmark" href="http://twitter.com/Ugotrade/statuses/957131133"><span class="published" title="2008-10-13T02:36:38+00:00">about 13 hours</span> ago</a> from web                   <a href="http://twitter.com/MitchWagner/statuses/957109943">in reply to MitchWagner</a> </span></p>
<p><a href="http://twitter.com/MitchWagner"><img id="profile-image" src="http://s3.amazonaws.com/twitter_production/profile_images/57644893/Mitch_Wagner_business_mug_shot_normal.jpg" alt="Mitch_wagner_business_mug_shot_normal" /></a></p>
<div class="status-body"><strong><a href="http://twitter.com/MitchWagner">MitchWagner</a></strong> <span class="entry-content"> Sure. I&#8217;d say I saw only positive news in the MSM. [mainstream media]</span></div>
<h3>Recent News Events of Note</h3>
<p>The coming of age of open source virtual worlds is attracting some mainstream attention now. One of the leading authorities on open source software and services, <a href="http://ostatic.com/" target="_blank">OStatic</a>, has several recent posts on OpenSim and Open Source Virtual Worlds. And, of course, I was thrilled that Ugotrade got a mention in the most recent one, <a href="http://ostatic.com/173728-blog/open-source-virtual-reality-spreads-out" target="_blank">Open Source Virtual Reality Spreads Out.</a></p>
<div class="status-body">
<h3>Wikitecture on O&#8217;Reilly Radar</h3>
<p><a href="http://radar.oreilly.com/josh/" target="_blank">Joshua-Michele Ross</a> gave <a href="http://radar.oreilly.com/2008/10/wikitecture-radical-collabor.html" target="_blank">an excellent write-up</a> today of Wikitecture, a project I have followed from its inception to proof of concept in <a href="http://secondlife.com" target="_blank">Second Life<sup>&#174;</sup></a>. The mainstream recognition of the value of Wikitecture is really exciting. Recently Studio Wikitecture won <a href="http://www.architectureforhumanity.org/">Architecture for Humanity&#8217;s</a> Founders Award for their submission, a health facility in Nepal. And Ross of O&#8217;Reilly Radar offers high praise:</p>
<blockquote><p>Wikitecture is the first sophisticated tool I have seen in 3D where programmed logic provides a clear structure to facilitate collaboration. Are there other radical examples of collaboration taking place that we should be looking at?</p></blockquote>
<h3>The Inaugural Sinewave Pub Quiz on OSGrid.org</h3>
<p>This was a really fun event. Read all about it on <a href="http://www.adamfrisby.com/blog/2008/10/osgrid-pub-quiz-summary/#comments" target="_blank">Adam Frisby&#8217;s blog</a>, including a technical write-up and more on the most excellent bot-in-residence Chinzy Quizmaster running the <a href="http://www.sinewaverobots.com/home/auth.php">Sinewave Quizbot</a> code. But, most importantly, don&#8217;t miss the next one while you still have a really good shot at the $500 prize! The Pub Quiz is a load testing event for OpenSim and <a href="http://osgrid.org/" target="_blank">OSGrid</a>. And, as I know OpenSim has ambitions for big concurrencies in the future, try to be an early bird on this one! Next Pub Quiz: <strong>Sunday, 26th of October at 9PM GMT (1PM PST)</strong> with a Halloween theme.</p>
<h3>&#8220;Fashion Goes 3D&#8221;</h3>
<p>A recent post in <a href="http://bigtech.blogs.fortune.cnn.com/2008/09/26/fashion-goes-3d/?source=yahoo_quote" target="_blank">Fortune</a> foregrounded Shenlei Winkler&#8217;s (CEO, <a href="http://www.fashionresearchinstitute.com/" target="_blank">Fashion Research Institute</a>) collaboration with IBM in OpenSim and Second Life (Shenlei Winkler is Shenlei Flasheart in Second Life and OpenSim). <a href="http://www.marketwatch.com/news/story/ibm-signs-services-agreement-fashion/story.aspx?guid={6626C1FE-26A8-423B-9DA3-CD70B349932D}&amp;dist=hppr" target="_blank">MarketWatch</a> also featured a piece on the &#8220;multi-million IBM Global Business Services agreement with the Fashion Research Institute (FRI) to implement a first-of-a-kind Virtual World Product Lifecycle Management (PLM) Enterprise System&#8221; in OpenSim. See <a href="http://www.fashionresearchinstitute.com/media/news5.html" target="_blank">the press release here</a> and <a href="http://fashiontech.wordpress.com/2008/10/12/ibm-fri-update-virtual-fashion-for-real-world-production/" target="_blank">this article</a> from <a href="http://fashiontech.wordpress.com/about/" target="_blank">Elaine Polvinen</a> for more.</p></div>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2008/10/16/mashing-up-virtual-worlds-with-web-20-and-online-gaming/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Putting OpenSim Into The Heart of Web 2.0</title>
		<link>https://www.ugotrade.com/2008/10/06/putting-opensim-into-the-heart-of-web-20/</link>
		<comments>https://www.ugotrade.com/2008/10/06/putting-opensim-into-the-heart-of-web-20/#comments</comments>
		<pubDate>Mon, 06 Oct 2008 18:36:56 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[3D internet]]></category>
		<category><![CDATA[Architecture Working Group]]></category>
		<category><![CDATA[BSD versus GPL]]></category>
		<category><![CDATA[free software]]></category>
		<category><![CDATA[GPL]]></category>
		<category><![CDATA[Intel in Virtual Worlds]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[Open Grid]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open protocols for virtual worlds]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[open standards for virtual worlds]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[realXtend]]></category>
		<category><![CDATA[Second Life]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[virtual worlds in china]]></category>
		<category><![CDATA[virtual worlds in Japan]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web 3D]]></category>
		<category><![CDATA[Web3.D]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[3Di OpenSim Standards]]></category>
		<category><![CDATA[Asian virtual Worlds]]></category>
		<category><![CDATA[ChinaQ]]></category>
		<category><![CDATA[communication protocols for virtual worlds]]></category>
		<category><![CDATA[immersive virtual worlds and Web 2.0]]></category>
		<category><![CDATA[Immersive Worlds and Web 2.0]]></category>
		<category><![CDATA[Integration of OpenSim into Web 2.0]]></category>
		<category><![CDATA[Integration of Virtual Worlds in Web 2.0]]></category>
		<category><![CDATA[licensing of open virual worlds]]></category>
		<category><![CDATA[MPEG-V]]></category>
		<category><![CDATA[Open Grid Protocol]]></category>
		<category><![CDATA[OpenSim in the Architecture of Web 2.0]]></category>
		<category><![CDATA[OpenSim Standards]]></category>
		<category><![CDATA[small architecture versus big architecture virtual worlds]]></category>
		<category><![CDATA[standardization of virtual worlds]]></category>
		<category><![CDATA[virtual world protocols]]></category>
		<category><![CDATA[virtual worlds and consumer adoption]]></category>
		<category><![CDATA[Web 2.0 Architecture]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=1853</guid>
		<description><![CDATA[This post, and my previous post about integration of OpenSim into Web 2.0, explore how immersive virtual worlds, through a full architectural integration into Web 2.0, will become part of the fabric of everyday computing. The diagram above shows where OpenSim sits in Web 2.0 (click on the diagram to see a readable enlarged version!). [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/images/Teravus2copy.jpg" target="_blank"><img class="alignnone size-full wp-image-1857" title="teravus2copypostnew1" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/teravus2copypostnew1.jpg" alt="" width="450" height="255" /></a></p>
<p>This post, and <a href="http://www.ugotrade.com/2008/09/29/rob-smart-ibm-web-20-to-opensim-made-easy/">my previous post </a>about integration of <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a> into Web 2.0, explore how immersive virtual worlds, through a full architectural integration into Web 2.0, will become part of the fabric of everyday computing.</p>
<p>The diagram above shows where OpenSim sits in Web 2.0 (click on the diagram to see a readable enlarged version!). The following interview with OpenSim developer, Teravus Ousley, describes some of the work being done to create documented protocols that will make OpenSim fit seamlessly into Web 2.0 architecture.</p>
<p>OpenSim is in the news a lot these days: explicitly, as in last week&#8217;s announcement by <a href="http://3di.jp/" target="_blank">3Di</a> of their <a href="http://3di-opensim.com/">&#8220;3Di OpenSim&#8221; Standard</a> (for more see <a href="http://www.virtualworldsnews.com/2008/10/3di-begins-sell.html" target="_blank">here</a> and <a href="http://blog.mindblizzard.com/2008/10/3di-moves-opensim-into-enterprise-mode.html#links" target="_blank">here</a>), and <a href="http://www.chinaq.com/web/" target="_blank">implicitly with the launch of ChinaQ</a>. <a href="http://www.adamfrisby.com/blog/" target="_blank">Adam Frisby</a>, OpenSim, pointed out to me that if you download the ChinaQ client it is based on OpenSim; it connects nicely to <a href="http://osgrid.org/" target="_blank">OSGrid</a> too. There is speculation that the client is a rebranded version of the <a href="http://www.realxtend.org/" target="_blank">realXtend</a> viewer (which is based on the open source <a href="http://www.secondlife.com" target="_blank">Second Life</a> viewer), as all the version numbers are the same.</p>
<p>So OpenSim is not only attracting the interest of business giants like IBM, Microsoft and Intel, it is becoming the architecture of choice for virtual world initiatives from Chinese and Japanese telecoms (see <a href="http://parksassociates.blogspot.com/2008/09/chinaq-based-on-opensim.html" target="_blank">here</a> and <a href="http://www.virtualworldsnews.com/2008/06/ntt-investing-1.html" target="_blank">here</a> for more). Also, <a href="http://www.realxtend.org/page.php?pg=news&amp;s=20080929" target="_blank">see the press release</a> about Nokia and the <a href="http://www.businessoulu.com/">City of Oulu</a>, Finland, joining as supporters of  <a href="http://www.realxtend.org/">realXtend</a>.</p>
<p>But, as Raph Koster notes in <a href="http://www.raphkoster.com/2008/10/03/enterprise-vws-do-they-suck/" target="_blank">his post commenting on 3Di&#8217;s OpenSim announcement</a>, the question of how immersive virtual worlds can go from strong niche or enterprise markets to mass adoption in consumer markets must be answered. As Raph points out, <em>Lively</em>, <em>Whirled, SmallWorlds, Vivaty</em>, and yes, <a href="http://www.metaplace.com/"><em>Metaplace</em></a> have a very different architecture that they hope will attract broad consumer markets. (I did a long interview with Raph on this at <a href="http://www.virtualworldsexpo.com/" target="_blank">The Virtual Worlds Conference and Expo in LA</a>, which I will post as soon as it is transcribed, so more on this soon!)</p>
<p>Architectural integration into the heart of Web 2.0, I would argue, is the key to mass adoption for immersive virtual worlds. While architecture alone will not guarantee the necessary breakthroughs in usability for widespread consumer adoption, it will create the ideal conditions for the innovation through which usability obstacles will be overcome, and the enormous potential for immersive, real time interaction over the internet will be realized.</p>
<h3><strong>Interview with Teravus Ousley</strong></h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/teravus_ousley_pic.jpg"><img class="alignnone size-full wp-image-1869" title="teravus_ousley_pic" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/teravus_ousley_pic.jpg" alt="" width="314" height="271" /></a></p>
<p><strong>Tish: </strong>What has been the most fundamental problem with virtual world architecture that has kept immersive virtual worlds isolated from Web 2.0 to date?</p>
<p><strong>Teravus</strong>: A lack of standardization, licensing issues, and the difficulty of entry into the industry.</p>
<h3>1) Standardization</h3>
<p><strong>Tish: </strong>In order of importance what in your view are the priorities for standardization?</p>
<p><strong>Teravus:</strong> Probably the same order that OpenSimulator tackled them in: basic connect (the current state of OGP &#8211; the <a href="http://wiki.secondlife.com/wiki/SLGOGP_Draft_1" target="_blank">Open Grid Protocol</a>), basic service (interaction standards), then advanced connect/mashup/aggregate extensions. Preferably people will have working code in each of those spaces to use freely under various licenses.</p>
<p><strong>Tish:</strong> Can you show me where OpenSim will fit in this drawing of Web 2.0 architecture? [Teravus makes some modifications to the drawing I sent him from <a href="http://hinchcliffeandcompany.com/" target="_blank">Dion Hinchcliffe&#8217;s</a> presentation from his Web 2.0 Expo workshop; <a href="http://www.ugotrade.com/images/Hinchcliffe.jpg" target="_blank">see the original here</a>.]</p>
<p><strong>Teravus:</strong> The modified diagram [now opening this post] is a great view of how it will look.</p>
<p><strong>Tish</strong>: Why is the TCP stream left out of the original drawing? [<strong>Transmission Control Protocol (TCP)</strong> is one of the core protocols of the <a title="Internet Protocol Suite" href="http://en.wikipedia.org/wiki/Internet_Protocol_Suite">Internet Protocol Suite</a>; for more see <a href="http://en.wikipedia.org/wiki/Transmission_Control_Protocol" target="_blank">here</a>.]</p>
<p><strong>Teravus:</strong> It is left out because the person who made this diagram had web pages in mind: static large files, or small changing files. In the drawing, the fact that the TCP streams are smaller than HTTP is on purpose.</p>
<p><strong>Tish:</strong> I have heard different opinions on the percentage of virtual world communications that can be done over HTTP.</p>
<p><strong>Teravus:</strong> The fact is that the biggest use of communications in virtual worlds is transmitting images &#8211; that&#8217;s the number one bandwidth usage. So, if we&#8217;re counting by &#8216;usage&#8217; I say 91%. If we&#8217;re counting by services that use HTTP, I say probably 75%. I definitely think that HTTP should be evaluated for use on new things &#8216;first&#8217;. But there are a few places where HTTP doesn&#8217;t shine.</p>
<p>I am skeptical about replacing things in UDP with HTTP thinking that they&#8217;ll &#8216;perform better&#8217;. [For more about the <strong>User Datagram Protocol</strong> (<strong>UDP</strong>), another of the core protocols of the <a title="Internet Protocol Suite" href="http://en.wikipedia.org/wiki/Internet_Protocol_Suite">Internet Protocol Suite</a>, see <a href="http://en.wikipedia.org/wiki/User_Datagram_Protocol" target="_blank">here</a>.]</p>
<p>I think there&#8217;s been a huge test going on now and for the last five or six years with regard to the UDP protocol, and it really has performed admirably. In the last year and a half, I&#8217;ve seen attempts to convert several things to HTTP that have failed, and failed somewhat spectacularly sometimes. In the end the items get reverted back to the UDP protocol. One such item that sticks out in my mind is CAPS (HTTP) based inventory retrieval. The capability to do that in the client has been available since before February. And it&#8217;s been turned on and off on &#8216;Agni&#8217; at least once in the process. Additionally, we (OpenSimulator) enabled HTTP inventory, and the inventory failures rose pretty steeply.</p>
<p>I think some services are really just not &#8216;right&#8217; for HTTP &#8211; particularly where a &#8216;poll&#8217; methodology is used, or the data is dynamic enough that it makes caching useless.</p>
<p>Anyway, as far as the future is concerned, I do want to see some services over HTTP. For other services, it would be more appropriate to have a TCP stream. Stock market data, for example, uses a TCP stream. The scalability of the stock market is just one example of a scalable TCP stream.</p>
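<p>A TCP stream delivers bytes, not messages, so a stock-feed-style service has to add its own message boundaries on top of the stream. A minimal sketch of one common approach &#8211; length-prefixed framing &#8211; is below; the function names and tick format are purely illustrative, not any Second Life or OpenSim protocol.</p>

```python
import struct

# Length-prefixed framing over a byte stream: each message is preceded by a
# 4-byte big-endian length, so the reader can recover message boundaries
# that TCP itself does not preserve.
def frame(msg: bytes) -> bytes:
    return struct.pack(">I", len(msg)) + msg

def unframe(stream: bytes):
    """Split a buffer of concatenated frames back into whole messages."""
    msgs, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        msgs.append(stream[offset:offset + length])
        offset += length
    return msgs

# Two ticks arriving back-to-back on one stream still come apart cleanly:
wire = frame(b"IBM 116.25") + frame(b"LNDN 3.10")
print(unframe(wire))  # [b'IBM 116.25', b'LNDN 3.10']
```

<p>The same framing works no matter how TCP slices the bytes into segments, which is exactly the property a continuous market-data feed relies on.</p>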
<p><strong>Tish:</strong> So you see TCP as the communications protocol that would do the work for the parts of virtual worlds not suitable for HTTP. At least that is how you have shown it in our Web 2.0 architecture drawing. But should there also be a UDP stream?</p>
<p><strong>Teravus</strong>: For the virtual world of tomorrow? Probably not.</p>
<p><strong>Tish:</strong> Why not?</p>
<p><strong>Teravus:</strong> You have less control over the quality of service when it&#8217;s delivered over UDP than TCP.</p>
<p><strong>Tish</strong>: What is the exact relation between TCP and UDP? My understanding is that UDP is a lower level protocol.</p>
<p><strong>Teravus:</strong> TCP offers guaranteed delivery through flow control, while UDP does not. One of the failures of UDP is the &#8216;resend&#8217; technology we&#8217;ve put on top of it to try and make it reliable. TCP does this automatically, and better than we could, at a lower level &#8211; but it can also cost up to twice the bandwidth depending on what is being sent. HTTP is a layer on top of TCP.</p>
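<p>The contrast Teravus draws can be seen directly at the socket level. The following Python sketch (loopback only, ports picked by the OS; none of this is OpenSim code) shows that with TCP the kernel supplies ordering, acknowledgement and retransmission, while a UDP datagram is fire-and-forget &#8211; any reliability has to be rebuilt by the application, which is exactly the &#8216;resend&#8217; layer he describes.</p>

```python
import socket
import threading

# --- TCP: the kernel provides ordering, acknowledgement, retransmission ---
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def echo_once():
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024))  # echo back on the same reliable stream
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"object-update")
tcp_reply = cli.recv(1024)        # arrives intact and in order, or recv errors
cli.close(); t.join(); srv.close()

# --- UDP: a single unacknowledged datagram ---
# Any resend/ordering logic is the application's job -- the layer bolted on
# top of UDP in the SL/OpenSim stack. (Loopback delivery is dependable
# enough for a demo; a real network may silently drop the packet.)
udp_srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_srv.bind(("127.0.0.1", 0))
udp_port = udp_srv.getsockname()[1]
udp_cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_cli.sendto(b"object-update", ("127.0.0.1", udp_port))  # fire and forget
udp_reply, _ = udp_srv.recvfrom(1024)
udp_cli.close(); udp_srv.close()

print(tcp_reply, udp_reply)
```

<p>The bandwidth cost he mentions comes from the TCP side of this picture: acknowledgements and retransmitted segments are traffic the UDP path simply never sends.</p>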
<p><strong>Tish:</strong> So just like the HTTP/TCP discussion there has to be a TCP/UDP boundary discussion &#8230; so it is HTTP, then TCP, then UDP, and the boundaries have to be worked out.</p>
<p><strong>Teravus: </strong>Those are the orderings in my mind&#8230; probably, if UDP gets any use at all, it should carry less than 0.5%.</p>
<p><strong>Tish:</strong> And the current Second Life architecture &#8211; what does it use if it isn&#8217;t using HTTP? [See the work of the <a href="http://wiki.secondlife.com/wiki/Architecture_Working_Group" target="_blank">Architecture Working Group</a> on the future <a href="http://www.secondlife.com" target="_blank">Second Life</a> architecture.]</p>
<p><strong>Teravus:</strong> UDP or HTTP</p>
<p><strong>Tish:</strong> and TCP?</p>
<p><strong>Teravus:</strong> Well, TCP is a layer under HTTP. As far as I know, SL doesn&#8217;t use TCP streams directly.</p>
<p><strong>Teravus: </strong>Instead, it uses HTTP polling. This is one of the places I&#8217;ve highlighted where it doesn&#8217;t shine.</p>
<p><strong>Tish: </strong>Polling does sound slow?</p>
<p><strong>Teravus:</strong> Polling is essentially: (connect) Got any data for me? No? (disconnect), (connect) Got any data for me? No? (disconnect).</p>
<p><strong>Tish:</strong> So what is the path to standards for this then?<strong></strong></p>
<p><strong>Teravus:</strong> Distilling what we know works and what we actually intend on supporting as far as adoption under these standards.</p>
<p><strong>Tish:</strong> Where does <a href="http://www.metaverse1.org/" target="_blank">MPEG-V</a> fit in? Have you read their document yet?</p>
<p><strong>Teravus:</strong> MPEG-V is interesting reading&#8230; but is there any working example? I have just the overview, but I&#8217;ll read it over to get a better sense of how to &#8216;keep it in mind&#8217; for the future. It looks like they&#8217;ve only really defined the requirements of the MPEG-V spec. The MPEG-V spec looks quite far reaching&#8230; but the documents so far are requirements and marketing talk aimed at business people &#8211; obviously intended to get more people interested in working on them.</p>
<p>But I have a feeling that any format with MPEG before it will be onerous to support&#8230; for me it&#8217;s too early to tell. It&#8217;s quite far reaching&#8230; it isn&#8217;t anything like &#8216;signal processing&#8217;, which the MPEG group is most famous for.</p>
<p><strong> Tish:</strong> The whole top down approach of the MPEG-V initiative seems counter to Web 2.0 principles to me.</p>
<p><strong>Teravus:</strong> Well, remember&#8230; that even if there&#8217;s a virtual world format war (a reference to HD DVD vs. Blu-ray) we still need to win over the rest of the web.</p>
<p><strong>Tish:</strong> Yes, and don&#8217;t you think the way to win over the web is to use as many existing standards as possible?</p>
<p><strong>Teravus:</strong> Well, it&#8217;s to use as many existing standards as &#8216;fit&#8217;, though. KISS, as always &#8211; (K)eep (I)t (S)imple (S)tupid. If we have 30 different internet standards&#8230; people looking at it will @.@</p>
<p><strong>Tish:</strong> But is it just the lack of documented protocols that has created isolation from Web 2.0? And really, doesn&#8217;t it boil down to standardizing that small percentage that is outside HTTP &#8211; the TCP and UDP streams we talked about earlier, where the real time stuff that virtual worlds bring to the web happens?</p>
<p><strong>Teravus:</strong> No&#8230; actually the HTTP standardization is just as important.</p>
<p><strong>Tish:</strong> You mean even though SL used HTTP it isn&#8217;t standardized?</p>
<p><strong>Teravus:</strong> Not documented specifically.</p>
<p><strong>Tish:</strong> And in OpenSim, is that documented?</p>
<p><strong>Teravus:</strong> Not well enough probably to define a standard.</p>
<p><strong>Tish:</strong> Is AWG (<a href="http://wiki.secondlife.com/wiki/Architecture_Working_Group" target="_blank">Architecture Working Group</a>) doing the documentation?</p>
<p><strong>Teravus:</strong> working on it..</p>
<h3>2)<strong> Licensing Issues</strong></h3>
<p><strong>Tish:</strong> It sounds like some of this work has to go on across client and server.Â  Are we running into the issue of <a href="http://en.wikipedia.org/wiki/Berkeley_Software_Distribution" target="_blank">BSD</a> for OpenSim and <a href="http://en.wikipedia.org/wiki/GNU_General_Public_License" target="_blank">GPL</a> for the Second Life viewer?</p>
<p><strong>Teravus:</strong> Well, some of the issue here is license choice. One of the reasons that libOMV was able to achieve what they did is that they did it <em>before</em> the client was open sourced.</p>
<p><strong>Tish:</strong> So open sourcing the client actually became an obstacle!!???</p>
<p><strong>Teravus</strong>: I don&#8217;t think so, on the whole. I think it was great for the community. I do, however, think that C++ UDP stacks will be scrutinized more for GPL license violations because, of course, the client is GPL and C++.</p>
<p><strong>Tish:</strong> It is my understanding that Linden Lab is open to discussions on making the licensing more efficient for the open source community?</p>
<p><strong>Teravus</strong>: Well, the client as a whole should not be changed as far as the license goes. JUST the things that they expect people to adopt should be made more open. If they expect people to adopt prims, then there should be an efficient implementation available for anyone to use&#8230; at the very least, under the <a href="http://en.wikipedia.org/wiki/GNU_Lesser_General_Public_License" target="_blank">LGPL</a>. Otherwise, the die-hards are forced to re-implement them from scratch, and most people will just choose something more open.</p>
<p><strong>Tish: </strong>Has anyone ever put together a list of the parts that need to be <a href="http://en.wikipedia.org/wiki/GNU_Lesser_General_Public_License" target="_blank">LGPL</a>ed?</p>
<p><strong>Teravus</strong>: Well, I think it&#8217;s there in a few places. There is at least one jira open on it.</p>
<p><strong>Teravus:</strong> A few that come to mind for me&#8230; are the UDP stack and the prim-to-mesh/UV code. I think there are some things that can definitely be improved about the UDP stack. There are some things (images come to mind) that would be better over HTTP.</p>
<p><strong>Tish: </strong>Do you think if the UDP stack were LGPLed that would be a significant help in integrating OpenSim better with the web?</p>
<p><strong>Teravus:</strong> Well, it would certainly be adopted by more clients. GPL + (your own code) = GPL-licensed client. LGPL-linked library + (your own code) = your own license.<br />
You still need to mention that you used LL&#8217;s UDP stack, and provide the source code for it on request.</p>
<p>The general client itself should remain GPL &#8211; it&#8217;s better for LL that way. Just the items that they want people to &#8216;standardize&#8217; on. It would help&#8230; if those were at least LGPL.</p>
<p><strong>Tish:</strong> And the value to LL of LGPLing these parts is that it would help spread their basic technology while protecting the rest of their viewer?</p>
<p><strong>Teravus:</strong> It furthers their goal of standardization on their systems because it allows more people to adopt it for their own uses without worrying about GPL-ing their own client.</p>
<p><strong>Tish:</strong> It is hard to standardize without access to the low level parts of the client right?</p>
<p><strong>Teravus:</strong> The general population of developers&#8230; will want a libX that they can plug into their application for communicating&#8230; a libY to deal with object data&#8230;</p>
<p><strong>Tish:</strong> Hence your requests for LGPL were the UDP stack and the prim-&gt;mesh/UV code.</p>
<p><strong>Teravus nods</strong></p>
<p><strong>Tish: </strong>And at the moment they only have libOMV?</p>
<p><strong>Teravus</strong>: That&#8217;s the only &#8216;truly&#8217; open standard right now as far as the LL technology is concerned. OpenSimulator&#8217;s use of that data&#8230; could also be seen as a standard&#8230;</p>
<p><strong>Teravus:</strong> But we have not published anything beyond code&#8230; neither have they, really&#8230; technically&#8230; but their organization of the way things work is very, very clear.</p>
<p><strong>Tish:</strong> What are the most significant limitations of libOMV?</p>
<p><strong>Teravus:</strong> Probably&#8230; just it not being in C++. C++ has its benefits and its pitfalls. Changes in C++ usually take longer than ones in C#. But, of course, C++ is always faster. With libOMV it isn&#8217;t always clear which packet is used when. However, with some experimentation, you can figure it out in 30 minutes or less.</p>
<h3><strong>Usability</strong></h3>
<p>We didn&#8217;t spend much time discussing some of the innovation in usability that this architectural integration into Web 2.0 will enable (more to come on that!). But Teravus mentioned one interesting use case he is working on.</p>
<p><strong>Teravus:</strong> You might also stick a &#8216;cloud renderer&#8217; into the graphic. [Teravus was looking at the diagram (from <a href="http://hinchcliffeandcompany.com/" target="_blank">Dion Hinchcliffe</a>) that opened my previous post, &#8220;Web 2.0 to OpenSim Made Easy&#8221; &#8211; click on the thumbnail below.]</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/teravus1the-moving-pieces-modified-twice.jpg"><img class="alignnone size-medium wp-image-1865" title="teravus1the-moving-pieces-modified-twice" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/10/teravus1the-moving-pieces-modified-twice-300x186.jpg" alt="" width="300" height="186" /></a></p>
<p>Some people have discussed having a &#8216;video stream&#8217; that&#8217;s rendered in the cloud; providing that to Flash clients would be the best solution for them.</p>
<p>The cloud renderer is for organizations that have large pools of servers with GPUs, so it would allow for very powerful rendering. The servers can render the scenes and stream them to low end browsers. It would allow extremely high quality rendering for really low end clients&#8230; such as cell phones.</p>
<p><strong>Tish:</strong> Is that possible now on OpenSim?</p>
<p><strong>Teravus</strong>: Nope. But it&#8217;s something that I intend to work on in the future. It would essentially be a video [streamed to low end browsers].</p>
<p><strong>Tish:</strong> Is that different from what <a href="http://blog.newsweek.com/blogs/levelup/archive/2008/04/21/second-life-on-your-mobile-phone-yes-says-vollee.aspx" target="_blank">Vollee</a> is doing? The mobile client for SL?</p>
<p><strong>Teravus</strong>: It appears that they are, indeed, pre-rendering the client&#8217;s view and streaming it to the mobile device.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2008/10/06/putting-opensim-into-the-heart-of-web-20/feed/</wfw:commentRss>
		<slash:comments>9</slash:comments>
		</item>
		<item>
		<title>Rob Smart, IBM: &#8216;Web 2.0 to OpenSim Made Easy&#8217;</title>
		<link>https://www.ugotrade.com/2008/09/29/rob-smart-ibm-web-20-to-opensim-made-easy/</link>
		<comments>https://www.ugotrade.com/2008/09/29/rob-smart-ibm-web-20-to-opensim-made-easy/#comments</comments>
		<pubDate>Mon, 29 Sep 2008 23:56:14 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[3D internet]]></category>
		<category><![CDATA[Architectural Working Group]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[free software]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[Linden Lab]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[Open Grid]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open protocols for virtual worlds]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[open standards for virtual worlds]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Second Life]]></category>
		<category><![CDATA[vapor standards]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web 3D]]></category>
		<category><![CDATA[Web3.D]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[innovative communication devices for virtual worlds]]></category>
		<category><![CDATA[integrating virtual worlds with Web 2.0]]></category>
		<category><![CDATA[integrating virtual worlds into the architecture of Web 2.0]]></category>
		<category><![CDATA[JSON and OpenSim]]></category>
		<category><![CDATA[leveraging network effects with virtual worlds]]></category>
		<category><![CDATA[Microsoft ESP]]></category>
		<category><![CDATA[outeroperability]]></category>
		<category><![CDATA[paraverse]]></category>
		<category><![CDATA[taking virtual worlds mainstream]]></category>
		<category><![CDATA[web 2.0 and OpenSim]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=1788</guid>
		<description><![CDATA[Web 2.0 surpasses all previous technologies in its ability to &#8220;explicitly leverage network effects&#8221; (a definition of Web 2.0 from Tim O&#8217;Reilly). But, while virtual worlds pass another classic litmus test of Web 2.0 &#8211; two way participation, they have been, up to this point, largely cut off from Web 2.0 power/network effects. Persistent immersive [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/images/Web20Opensimfull.jpg" target="_blank"><img class="alignnone size-full wp-image-1801" title="web20opensimlgsm" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/web20opensimlgsm.jpg" alt="" width="450" height="293" /></a></p>
<p>Web 2.0 surpasses all previous technologies in its ability to &#8220;explicitly leverage <a href="http://en.wikipedia.org/wiki/Network_effect" target="_blank">network effects</a>&#8221; (a definition of Web 2.0 from Tim O&#8217;Reilly). But, while virtual worlds pass another classic litmus test of Web 2.0 &#8211; two way participation &#8211; they have been, up to this point, largely cut off from Web 2.0 power/network effects.</p>
<p>Persistent immersive virtual worlds, led by Second Life, have done well in niche markets, but they remain relatively isolated from Web 2.0, even though they bring something vital and new to the internet &#8211; real time interaction and dynamic melded states &#8211; in contrast to the current web&#8217;s large static files, or small changing files.</p>
<p>The slide opening this post is a modification of a slide from <a href="http://hinchcliffeandcompany.com/" target="_blank">Dion Hinchcliffe&#8217;s</a> presentation from his Web 2.0 Expo workshop &#8211; Building Successful Next Generation Web 2.0 Applications. Virtual worlds are nowhere to be found on the original. So I asked Rob Smart, IBM, who has just added JSON support to <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a>, to draw <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSimulator</a> into this picture. In my interview with Rob, later in this post, he discusses the relationship between virtual worlds and Web 2.0 and how JSON is an important step towards virtual worlds taking up a place in Web 2.0 architecture.</p>
<p>When people think of the current architecture of Web 2.0, virtual worlds do not come to mind. But we are on the cusp of a big change in this regard. Linden Lab and OpenSim, in the <a href="http://wiki.secondlife.com/wiki/Architecture_Working_Group" target="_blank">Architectural Working Group</a>, AWG, have been working on trust negotiations and the standardization, documentation and use of HTTP (REST enabling). But more work remains on standardizing and documenting where TCP and UDP streams have to be used to create the immersive real time interactions that are the heart of what virtual worlds bring to today&#8217;s web (see my upcoming interview with Teravus Ousley, OpenSim, for more on this).</p>
<p><a href="http://www.ugotrade.com/images/1stand2ndlifelarge.jpg" target="_blank"><img class="alignnone size-full wp-image-1793" title="1stand2ndlife" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/1stand2ndlife.jpg" alt="" width="450" height="333" /></a></p>
<p>There is a complex network of connections through identity (1st and 2nd life) that has enabled virtual worlds to implicitly leverage the social networks of Web 2.0 (see <a href="http://botgirl.blogspot.com/" target="_blank">botgirl&#8217;s</a> lovely illustration of this above). The slide above is from <a href="http://nwn.blogs.com/nwn/2006/02/nwn_tips.html" target="_blank">W. James Au&#8217;s</a> <a href="http://webexny2008.crowdvine.com/talks/show/1051">&#8220;The Post-Hype State of Virtual World Marketing: What Works, What Doesn&#8217;t and Why.&#8221;</a></p>
<p><a href="http://www.myrl.com/" target="_blank">Myrl</a> (beta) is an application hoping to streamline these linkages with a social gateway for virtual worlds that will provide what <a href="http://www.kzero.co.uk/blog/?cat=82" target="_blank">KZero terms &#8220;outeroperability&#8221;</a>. In this vein, Second Life developers have produced a number of interesting high level communications applications, including <a href="http://www.intersectionunlimited.com/ourproducts.html" target="_blank">Chatbridge from Intersection Unlimited</a>, to link Second Life better with the web. I will moderate a panel for <a href="http://www.orange-island.com/?p=901" target="_blank">Orange Island Innovation Week</a>, Wednesday, Oct 1st, 12 noon PDT, <strong>Innovative Communications Devices</strong>, with Beyers Sellers, Chase Marellan (Chatbridge), and Kevni Koolhaven (Learning Tree International).</p>
<p>But it is the low level architectural integration of virtual worlds into Web 2.0 (along with improved usability and new user interfaces) that will weave virtual worlds into the fabric of Web 2.0 and our everyday lives. <a href="http://www.techcrunch.com/2008/07/08/ibm-and-second-life-announce-interoperability-project-but-bridging-virtual-worlds-is-the-wrong-answer/" target="_blank">Unlike Eric Schonfeld of TechCrunch</a>, I see interoperability work (see <a href="http://wiki.secondlife.com/wiki/Open_Grid_Public_Beta/" target="_blank">OpenGrid Beta</a>), and the production of standard protocols (see <a href="https://wiki.secondlife.com/wiki/Open_Grid_Protocol" target="_blank">Open Grid Protocol, OGP</a>) that interoperability work helps negotiate, as an important part of the process.</p>
<p>Immersive virtual worlds are still a long way from mainstream. I attended the <a href="http://blogs.forrester.com/information_management/2008/09/attracting-and.html" target="_blank">Forrester Business and Technology Leadership Forum in Orlando last week</a> to help <strong>Oliver Goh</strong>, business development executive at Implenia, talk about delivering results with virtual worlds. We found that the audience, while familiar with many aspects of Web 2.0 and its business value, had relatively little direct experience with virtual worlds. But the interest in, and excitement about, this technology was very apparent.</p>
<p>Architectural integration of virtual worlds into Web 2.0 and the standardization of protocols (using existing web standards where possible) will change the picture, creating new opportunities to improve usability, create specific clients for particular needs, facilitate mashups, leverage network effects, and more! And JSON support for OpenSim is an important step, as it allows virtual worlds to explicitly begin talking the language of Web 2.0.</p>
<p><em>Rob Smart is an Emerging Technology Specialist located at IBM Hursley where he works as part of the IBM CIO office Metaverse Initiative. In Second Life he is known as Yossarian Seattle and became known to some as the inventor of the translation HUD, which was his second foray into integrating virtual worlds with web applications. The first project was enabling some of IBM&#8217;s messaging products to publish events into Second Life, including creation of an RSS viewer for Second Life. Recently, Rob has been working with clients integrating their internal IT services with various virtual world platforms. His interests now extend to the OpenSim project, with a focus on integration of enterprise data and common web APIs into OpenSim.</em></p>
<h3>Interview with Rob Smart, IBM</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/yossarianseattlepost.jpg"><img class="alignnone size-full wp-image-1814" title="yossarianseattlepost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/yossarianseattlepost.jpg" alt="" width="450" height="450" /></a></p>
<p><em>Tara5 Oh &#8211; on the right (me, Tish Shute) &#8211; interviewed Yossarian Seattle (Rob Smart, IBM) in Second Life outside Andy Stanford-Clark&#8217;s remote control house on Hursley island (for more <a href="http://www.ugotrade.com/2007/06/05/extreme-life-logging-3d-experience-architects-digging-it-with-destroy-tv/" target="_blank">see here</a>)</em></p>
<p><strong>Tara5 Oh:</strong> I am interviewing you from the media lounge at the Web 2.0 Expo, and coincidentally, it seems, JSON is the hot standard here &#8211; in fact, the hottest other than RSS for its ubiquity.</p>
<p><strong>Yossarian Seattle:</strong> Yes, well, the popularity of JSON stems from the rise of AJAX-enabled websites that need to frequently pass data between server and web browser and have the JavaScript in a web page understand that data. It provides a simple, lightweight way of serialising your server-side objects and doesn&#8217;t require lots of extra coding in the browser like XML data does.</p>
<p><strong>Tara5 Oh</strong>: As virtual worlds are still isolated from many of the network effects of Web 2.0 at the moment, could you explain how adding JSON support to OpenSim is &#8220;Web 2.0 made easy for OpenSim?&#8221;</p>
<p><strong>Yossarian Seattle:</strong> JSON was created to make data exchange from browser to server easy. We want that same exchange between VWs and web servers to be just as simple. However, JSON was written with JavaScript in mind: you can call a simple eval() function and that&#8217;s it &#8211; you&#8217;re done, and you have a nice object to use in the browser. As a result, lots of the nice service APIs out there in Web 2.0 land talk JSON &#8211; e.g., the Google Translation service, Flickr&#8217;s image querying, etc. Our internal IBM Web 2.0 systems talk JSON too.</p>
<p>But Second Life and OpenSim have so far had poor string-handling functions, which meant that, in LSL in particular, parsing anything more than a simple piece of JSON was just not an option.</p>
<p>Lots of coders and developers in Second Life have to run PHP and other scripts on external web servers to act as an intermediary stage in calling these Web 2.0 APIs.</p>
<p>That&#8217;s a real pain, and it means you need to have a server somewhere, up full time, if others are to use your scripts. Whereas now, with this osParseJSON function, you can forget all that hassle and go straight to the source from OpenSim.</p>
<p>It&#8217;s a simple but powerful enabler of Web 2.0 technology. I expect it will take people a while to find it and start using it, but it widens accessibility for people who get into scripting in OpenSim.</p>
<p>I&#8217;m planning to do a similar thing for XML parsing, but it&#8217;s a bit lower on my priorities at the moment. JSON parsing gives a good quick win, so to speak <img src="https://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_wink.gif" alt=";)" class="wp-smiley" /> </p>
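<p><em>For readers who haven&#8217;t seen it, the kind of one-call parsing Rob describes can be sketched in Python&#8217;s standard library. The payload and field names below are hypothetical, just illustrating the shape of a Web 2.0 API response:</em></p>

```python
import json

# Hypothetical JSON reply of the kind a Web 2.0 API
# (e.g. a translation service) might send back.
payload = '{"status": "ok", "translation": {"text": "bonjour", "lang": "fr"}}'

# One call turns the raw string into native data structures --
# the same convenience osParseJSON brings to OpenSim scripts,
# with no intermediary PHP server needed.
data = json.loads(payload)
print(data["translation"]["text"])  # -> bonjour
```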
<p><strong>Tara5 Oh:</strong> I just sent you a couple of slides &#8216;cos one thing I have noticed here at the Web 2.0 Expo is that the understanding of where OpenSim might fit into the architecture of Web 2.0 is vague to nonexistent.</p>
<p>Can you sketch something that relates OpenSim to current understandings of Web 2.0 architecture?</p>
<p><strong>Yossarian Seattle:</strong> Really, in that first diagram with the APIs etc., OpenSim just fits on the level of the web servers. And actually that diagram is a bit wrong, as there should be arrows between the web servers, since sites should really be connected to each other. I&#8217;ll stick it in here <img src="https://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /> </p>
<p>OpenSim is being REST enabled. At the moment it&#8217;s access to assets, clothes, objects, etc. from the asset servers. But there is no reason that REST interface cannot give access to people logged on, object positions, sim layouts, etc.</p>
<p><strong>Tara5 Oh</strong>: Could you explain the difference between the power of REST for virtual world technology in relation to the power of JSON?</p>
<p><strong>Yossarian Seattle:</strong> So REST is really just calling a web URL. You use the tree structure of the URL to indicate you&#8217;re asking for different data, whereas JSON is an encoding for the actual data that&#8217;s returned to you. So they are complementary, really. But there has already been some discussion within the OpenSim community about introducing new APIs to OpenSim that allow different clients to connect.</p>
<p>I personally think that VWs are too siloed currently. At the moment in VWs it&#8217;s pretty much one world, one client. Providing REST or other interfaces to the world data opens up the possibility of a wider range of clients accessing those worlds. And when I say clients, I&#8217;m talking about Flash interfaces, browser interfaces, or other 3D interfaces such as Unity3d clients.</p>
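<p><em>The division of labour Rob describes &#8211; the URL&#8217;s tree structure names the resource, JSON encodes the reply &#8211; can be sketched as follows. The endpoint paths and fields here are hypothetical, standing in for an asset server, not an actual OpenSim API:</em></p>

```python
import json

# REST: the tree structure of the URL says what you're asking for.
# A tiny in-memory stand-in for an asset server's routes (hypothetical).
resources = {
    "/assets/clothes/42": {"id": 42, "name": "red jacket", "type": "clothing"},
    "/assets/objects/7": {"id": 7, "name": "chair", "type": "object"},
}

def rest_get(path):
    """Answer a GET on `path` with a JSON-encoded body."""
    return json.dumps(resources[path])

# JSON: the encoding of the data that comes back.
body = rest_get("/assets/clothes/42")
asset = json.loads(body)
print(asset["name"])  # -> red jacket
```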
<p><strong>Tara5 Oh:</strong> Could you tell me more about Unity3d?</p>
<p><strong>Yossarian Seattle:</strong> <a href="http://unity3d.com" target="_blank">Unity3d</a> is a game engine. It&#8217;s a very flexible engine and adheres to a lot of the 3D modeling standards. One of its most interesting features is the ability to deploy the games/applications you make as web browser plugins (as well as standalone Windows/Mac builds). I&#8217;ve been messing around with it for a while now; I can show you some demos while you&#8217;re over at the VW conference in London.</p>
<p><strong>Tara5 Oh:</strong> Another theme at this conference, raised by O&#8217;Reilly in his keynote, is that the future is &#8220;world to web,&#8221; e.g., sensor projects.</p>
<p><strong>Yossarian Seattle:</strong> Ah well that&#8217;s another favourite topic of mine with regards to VWs <img src="https://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" />  Hursley is the home of realtime messaging technology.</p>
<p>At the moment, as I say, it&#8217;s pretty much one VW client to one VW server, and because the only library to access SL and OpenSim is openmv (formerly libsl), that restricts new clients to being written in C#. There isn&#8217;t a Java library, a Flash library, a PHP library, a Ruby library, etc.</p>
<p>So if in OpenSim we add new connectors &#8211; REST ones, talking JSON or XML &#8211; then we enable lots of new client types, and VWs become another mashable service in the Web 2.0 world.</p>
<p>It&#8217;s about making it easy to get information in and information out. Web 2.0 sites don&#8217;t do realtime very well, whereas VWs do. VWs are the real time space that the web often tries to provide but kind of half fudges. Web servers aren&#8217;t built to deal with realtime asynchronous data.</p>
<p>It&#8217;s interesting that you mention Web 2.0 not really acknowledging virtual worlds, as when I read the terms of service for a lot of the APIs they&#8217;re very specific about use from other web sites but often don&#8217;t cover the use of the API from other applications.</p>
<p><strong>Tara5 Oh</strong>: Really?</p>
<p><strong>Yossarian Seattle:</strong> Yes.</p>
<p><strong>Tara5 Oh: </strong>What does this mean?</p>
<p><strong>Yossarian Seattle</strong>: It doesn&#8217;t necessarily have any significance for some services. But there is often specific text saying, for example, that you must use a particular piece of HTML on a page and show the API owner&#8217;s logo, etc.</p>
<p>I think as time goes on though and more people connect to Web 2.0 services from within VWs then they will be acknowledged as a valid service consumer, after all VWs are platforms that provide novel ways for people to display and interact with data.</p>
<p><strong>Tara5 Oh: </strong>I know Hursley and other IBMers have done some nice use cases of RL data integration in OpenSim and Second Life. What is your favorite for illustrating the power of virtual worlds to bring realtime world-to-web experiences to Web 2.0?</p>
<p><strong>Yossarian Seattle:</strong> Andy Stanford-Clark&#8217;s remote control house on Hursley island is still a favourite.</p>
<p>I did a hook-up ages back between a messaging product, MQTT, and Second Life. I&#8217;d like to revisit that work and extend it. I&#8217;m interested in propagating events between platforms, whether they be VWs or web sites.</p>
<p><strong>Tara5 Oh</strong>: I am amazed how little play virtual worlds have here at the  Web 2.0 Expo.</p>
<p><strong>Yossarian Seattle:</strong> Virtual worlds live somewhere between the gaming world and the Web 2.0 world. We see it with the Flash social worlds too; they edge more towards gaming.</p>
<p><strong>Tara5 Oh:</strong> What do you think are the gains of virtual worlds getting more integrated with Web 2.0?</p>
<p><strong>Yossarian Seattle:</strong> Virtual worlds are a platform, and it&#8217;s often said by some that they&#8217;re not interested in taking part yet as they haven&#8217;t seen a killer app for virtual worlds. Some of that view stems from the fact that VWs are very isolated: it&#8217;s hard to get content in and hard to get it back out again.</p>
<p>Virtual worlds are the shared realtime spaces of the Internet; up until now this position has been filled by IRC chat rooms and instant messaging apps. Neither of those forms lends itself particularly well to group interaction. VWs are streets ahead in terms of rich social interaction and sharing of content and experiences.</p>
<p><strong>Tara5 Oh</strong>: You mentioned you just started working on OpenSim development and becoming part of this growing effort.</p>
<p><strong>Yossarian Seattle:</strong> Yep, that&#8217;s right. There is a very vibrant community around OpenSim.</p>
<p><strong>Tara5 Oh:</strong> Why did you decide to put your energy into OpenSim at this time?</p>
<p><strong>Yossarian Seattle:</strong> I&#8217;m now working for IBM&#8217;s CIO office Metaverse initiative, and investigating all of the relevant VWs is one of our remits. OpenSim is my chosen focus.</p>
<p><strong>Tara5 Oh:</strong> What is CIO?</p>
<p><strong>Yossarian Seattle: </strong>One of the IBM CIO office&#8217;s responsibilities is to look at and provide technologies and tools that improve the productivity of IBMers worldwide. As you know, IBM has several people working on the OpenSim project, and there is an interested community internally. I&#8217;m looking at how we can hook up OpenSim to our existing Web 2.0 services internally.</p>
<p><strong>Tara5 Oh: </strong> What kind of internal Web Services?</p>
<p><strong>Yossarian Seattle: </strong>We have a number of internal Web 2.0 based systems that provide APIs for data sharing: things like Blogcentral, our internal blogging platform; Fringe, which contains customizable profile information; and Beehive, a social networking platform that helps people share their interests and track and schedule events within IBM. We also have a platform called TAP (Technology Adoption Program), where people can share services and applications they have created with other IBMers. In addition we have Cattail, a file repository that allows easy sharing and tagging of all types of files. There are many more useful internal services besides, all of which could be integrated with OpenSim.</p>
<p>The nice thing, though, is that OpenSim affords the flexibility to integrate it with our products and with existing web systems, and to provide value back to the community at the same time.</p>
<p><strong>Tara5 Oh:</strong> So do you have any thoughts about the path to standards for virtual worlds?</p>
<p><strong>Yossarian Seattle</strong>: In terms of standards, I think it&#8217;s a case of looking at what&#8217;s out there and successful at the current time &#8211; not just in terms of 3D models, but in terms of realtime chat protocols like XMPP, and things like JSON and REST as well &#8211; and picking those for the relevant components.</p>
<p>The reason for this is that every time you introduce a new standard, you have to wait for the communities to catch up and write language-specific APIs for that standard.</p>
<p>Better to use existing ones where possible and give the communities that will build the tools and the extensions a head start.</p>
<p><strong>Tara5 Oh</strong>: This is also some of why top down standards like MPEG-V have issues?</p>
<p><strong>Yossarian Seattle:</strong> Yep, standards often work best when they&#8217;re bottom up, like JSON.</p>
<p>As I mentioned before, because the messaging structure for OpenSim and Second Life is currently proprietary (although open) and the only library is libsl (openmv), that has stopped a lot of potential innovation by restricting client/bot development to the C# language.</p>
<p><strong>Tara5 Oh:</strong> Why is client/bot development restricted to C#?</p>
<p><strong>Yossarian Seattle: </strong>Because currently the only library you can use to talk to Second Life, libsl (openmv), is written in C#.</p>
<p><strong>Tara5 Oh: </strong>What do you see as the way through this obstacle?</p>
<p><strong>Yossarian Seattle: </strong>If, for example, the messages that went between your Second Life client and the OpenSim/Second Life servers were a standard protocol which had a bunch of libraries for a variety of languages, then you could start logging into VW servers from all kinds of clients.</p>
<p><strong>Tara5 Oh:</strong> Aren&#8217;t there plenty of standard messaging protocols to use?</p>
<p><strong>Yossarian Seattle:</strong> Yep, but at the moment they&#8217;re not being used. There are some technical reasons for that, like reducing the amount of data to be downloaded. But there&#8217;s a balance to be had somewhere.</p>
<p><strong>Tara5 Oh:</strong> But in a modular architecture like OpenSim what is to stop them being implemented?</p>
<p><strong>Yossarian Seattle:</strong> There isn&#8217;t anything to stop them being implemented in OpenSim <img src="https://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" />  Which is why I like it <img src="https://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /> </p>
<p><strong>Tara5 Oh</strong>: I hear a lot about people wanting to change the physics in OpenSim/Second Life (the linking to the physics simulation in particular). Do you have thoughts on this or is it not on your agenda currently?</p>
<p><strong>Yossarian Seattle</strong>: There are a few different physics modules already. Though, to be honest, I don&#8217;t think it&#8217;s the most important area to focus on, for me at least.</p>
<p>But obviously a high end physics engine is going to benefit anyone who wants to do any kind of simulation.</p>
<p>And that&#8217;s the beauty of Open Source, someone else will have that as their priority.</p>
<p><strong>Yossarian Seattle:</strong> I think there&#8217;s a lot of work that needs to be done around ease of adoption still. I&#8217;d like it to be easy for people to write new clients for OpenSim.</p>
<p>When we get to that stage, people can produce simplified, cut-down clients to suit their precise needs &#8211; so if you&#8217;re a retailer and just want to showcase products and let people shop, you have a UI to reflect that.</p>
<p><strong>Tara5 Oh:</strong> What about the OpenViewer project?</p>
<p><strong>Yossarian Seattle:</strong> It&#8217;s a wider problem and piece of work. You&#8217;ll notice that OpenViewer is written in C# <img src="https://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_wink.gif" alt=";)" class="wp-smiley" />   That&#8217;s because they&#8217;re limited, again, to using libsl (openmv). libsl recently changed its name, by the way, which is why I&#8217;m bracketing it.</p>
<p><strong>Tara5 Oh:</strong> So it doesn&#8217;t address the underlying issue of messaging and open APIs for OpenSim.</p>
<p><strong>Yossarian Seattle:</strong> Not really. But they have made the wise choice of releasing it under a free BSD license, which will enable more people to work on the project.</p>
<p><strong>Tara5 Oh:</strong> Intel is working on breaking out openmv into smaller building blocks and basic types. How will this contribute to efforts to integrate OpenSim with Web 2.0?</p>
<p><strong>Yossarian Seattle:</strong> Yes, they recently hired John Hurliman, who wrote a lot of it. I&#8217;m following what they do with interest.</p>
<p><strong>Tara5 Oh: </strong>John wrote the original openmv?</p>
<p><strong>Yossarian Seattle:</strong> He started the project back in 2006.</p>
<p><strong>Tara5 Oh</strong>: How will the work he is doing on openmv now help with the goal of making it easy to write new clients?</p>
<p><strong>Yossarian Seattle:</strong> Well, if they provide libraries in different languages, that would be a good start, and breaking it into chunks would allow anyone writing a client to pick and choose the functions they enable in their custom client.</p>
<p>However, I&#8217;m still not sure that&#8217;s tackling the root of the problem.</p>
<p><strong>Tara5 Oh</strong>: &#8216;cos the root of the problem is the messaging protocols which restrict you at the minute to C# for the client?</p>
<p><strong>Yossarian Seattle:</strong> The standards need to be applied at the server end, to make it truly accessible.</p>
<p><strong>Tara5 Oh:</strong> And these messaging standards need to allow for more than C# development?</p>
<p><strong>Yossarian Seattle:</strong> Exactly.</p>
<p><strong>Tara5 Oh:</strong> Well, it seems like something quite doable, just time?</p>
<p><strong>Yossarian Seattle:</strong> and careful thought <img src="https://www.ugotrade.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /> </p>
<p><strong>Yossarian Seattle:</strong> A lot of people are focusing on issues such as object portability in VWs, but I&#8217;m not sure those are the ones to be concerned about right now; the games industry seems to have settled on Collada as a standard for that. These VW platforms are complex beasts, and the games industry has already solved a number of the problems. However, in terms of social interactions the VW industry is ahead &#8211; a blend of games and social media.</p>
<p><strong>Tara5 Oh:</strong> But games platforms have not solved either the Web 2.0 effects or the web-to-world piece, have they &#8211; where things get most interesting now?</p>
<p><strong>Yossarian Seattle:</strong> No and the games industry is playing catch up in that sense.</p>
<p><strong>Yossarian Seattle</strong>: <a href="http://www.littlebigplanet.com/" target="_blank">Little Big Planet</a> will be the game that brings user created content into the mainstream for 3D worlds.</p>
<p><strong>Yossarian Seattle:</strong> Did you read <a href="http://eightbar.co.uk/2008/09/10/moving-cubes-from-world-to-world/" target="_blank">this article</a>? There&#8217;s no hack or anything in there.</p>
<p>That&#8217;s a full publish subscribe messaging client embedded in unity3d, realtime events across worlds.</p>
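<p><em>The publish/subscribe pattern behind that demo can be reduced to a few lines. This is a toy in-memory broker illustrating the pattern only &#8211; not the MQTT protocol or the eightbar code &#8211; and the topic names are made up:</em></p>

```python
from collections import defaultdict

class Broker:
    """Toy in-memory publish/subscribe broker: subscribers register a
    callback per topic; a publish fans the message out to all of them."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
received = []

# Two "worlds" subscribe to the same topic...
broker.subscribe("world/cube/moved", lambda m: received.append(("world-a", m)))
broker.subscribe("world/cube/moved", lambda m: received.append(("world-b", m)))

# ...so a single event is propagated to both in real time.
broker.publish("world/cube/moved", "x=1,y=2,z=0")
print(received)  # -> [('world-a', 'x=1,y=2,z=0'), ('world-b', 'x=1,y=2,z=0')]
```

<p><em>A real deployment would put a network broker (such as an MQTT server) in the middle, but the topic/subscriber/publish shape is the same.</em></p>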
<p><strong>Tara5 Oh:</strong> What do you think are the most interesting world-to-web ideas that Andy&#8217;s house points to?</p>
<p><strong>Yossarian Seattle:</strong> Well, the fact that the communication is two-way, both in and out of world, and also that it&#8217;s real time. When something happens in Andy&#8217;s real house, it happens here too.</p>
<p><strong>Tara5 Oh:</strong> Yes, I am very interested in the development of the paraverse!</p>
<p><strong>Yossarian Seattle:</strong> There is a personal project Peter Finn has been looking at in IBM, which is actually called Paraverse; it takes real world data, including geospatial mapping information, and applies it in OpenSim.</p>
<p>Unfortunately our interview ended here, at a very interesting point (I had to go to a panel at the Web 2.0 Expo, NYC). But <a href="http://www.redmonk.com/jgovernor/2008/09/25/living-in-de-material-world-on-microsoft-train-sim-and-the-virtual-everything/" target="_blank">James Governor&#8217;s post/essay &#8211; a superlative ode to the paraverse</a> &#8211; prompted by his first look at the <a href="http://www.microsoft.com/esp/" target="_blank">Microsoft ESP visual simulation platform</a> &#8211; produced an interesting debate on the potential of the paraverse in the comments, including a response by Rob. So check it out!</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2008/09/29/rob-smart-ibm-web-20-to-opensim-made-easy/feed/</wfw:commentRss>
		<slash:comments>9</slash:comments>
		</item>
		<item>
		<title>Philip Rosedale: Open Source, Interoperable Virtual Worlds</title>
		<link>https://www.ugotrade.com/2008/09/26/philip-rosedale-open-source-virtual-worlds/</link>
		<comments>https://www.ugotrade.com/2008/09/26/philip-rosedale-open-source-virtual-worlds/#comments</comments>
		<pubDate>Fri, 26 Sep 2008 06:08:46 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[3D internet]]></category>
		<category><![CDATA[Architectural Working Group]]></category>
		<category><![CDATA[free software]]></category>
		<category><![CDATA[Intel in Virtual Worlds]]></category>
		<category><![CDATA[interoperability of virtual worlds]]></category>
		<category><![CDATA[Linden Lab]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[Open Grid]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open protocols for virtual worlds]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[open standards for virtual worlds]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Philip Rosedale]]></category>
		<category><![CDATA[realXtend]]></category>
		<category><![CDATA[Second Life]]></category>
		<category><![CDATA[vapor standards]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web 3D]]></category>
		<category><![CDATA[Web3.D]]></category>
		<category><![CDATA[World 2.0]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=1750</guid>
		<description><![CDATA[Metanomics host Robert Bloomfield interviewed Second Life founder and Chairman of the Board, Philip Rosedale, at the Second Life Community Convention in Tampa, Florida.Â  The Rosedale interview is available here (pictures above are Philip Rosedale and his avatar). Rosedale talked about Linden Lab&#8217;s long standing commitment to open source and open protocols in one segment [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/philip_linden_2.jpg"><img class="alignnone size-full wp-image-1751" title="philip_linden_2" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/philip_linden_2.jpg" alt="" width="156" height="176" /></a><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/philippost.jpg"><img class="alignnone size-full wp-image-1752" title="philippost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/philippost.jpg" alt="" width="156" height="176" /></a></p>
<p><a href="http://metanomics.net/19-sep-2008/philip-rosedale-interview-and-expert-reactions">Metanomics</a> host Robert Bloomfield interviewed Second Life founder and Chairman of the Board, Philip Rosedale, at the Second Life Community Convention in Tampa, Florida. <a onclick="javascript:urchinTracker ('/outbound/article/www.metanomics.net');" href="http://www.metanomics.net/19-sep-2008/philip-rosedale-interview-and-expert-reactions">The Rosedale interview is available here</a> (pictures above are Philip Rosedale and his avatar).</p>
<p>Rosedale talked about Linden Lab&#8217;s long standing commitment to open source and open protocols in one segment of this interview and Robert asked me to post a brief reaction. The full interview covers a wide range of topics and Robert has gotten responses on different parts of the interview from <a href="http://nwn.blogs.com/nwn/2008/09/philip-linden-o.html#more" target="_blank">Wagner James Au</a>, <a href="http://www.christianrenaud.com/weblog/2008/09/metanomics-and-rosedales-future-vision.html#more" target="_blank">Christian Renaud</a>, <a href="http://npirl.blogspot.com/2008/09/reacting-to-rosedale-on-ll-press.html" target="_blank">&#8216;Bettina Tizzy,&#8217;</a> <a href="http://www.kzero.co.uk/blog/?p=2501" target="_blank">Nic Mitham</a> and <a href="http://dusanwriter.com/?p=941" target="_blank">&#8216;Dusan Writer,&#8217;</a> and <a href="http://virtuallyblind.com/2008/09/22/rosedale-interview-reaction/" target="_blank">Benjamin Duranske</a> as well.</p>
<h3>A System Without an Owner Is a Beautiful Thing</h3>
<p>While Philip Rosedale&#8217;s comments may not, at first glance, appear to be saying anything new, they are in fact a very cogent summary of the important and crucial role Linden Lab has played, and continues to play, in moving virtual worlds out of their walled gardens and bringing them closer to that beautiful thing &#8211; a system without an owner.</p>
<p>Only a system without an owner can unleash, for virtual world technology, the kind of creative, world-changing power that we have seen on the 2D web from HTTP and HTML. Anyone with even a vague idea of the history of the internet understands that it is only through openness &#8211; open source, open protocols, open standards, and open APIs &#8211; that we will get from here, the alpha days of virtual world technology, to their coming of age as a mainstream phenomenon.</p>
<p>It is very much to the credit of Linden Lab that, as Rosedale says, they have never been afraid of openness: &#8220;I don&#8217;t think that the open grid will impact our revenues any more than open sourcing the client,&#8221; he says. While there have been criticisms of licensing choices and of the ways Linden Lab handles contributions back to their viewer from the community, I think that overall Linden Lab has made very important and visionary moves, first to open source, and now to open protocols.</p>
<p>Open sourcing the viewer at a relatively early point in Second Life&#8217;s development created an enormous opportunity for the rapid development of an open source re-engineering of the server side, <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a>. OpenSim with the Second Life viewer is the most complete, open implementation of a persistent virtual world. Without the head start from the open source Second Life viewer, and the connection to the thriving developer community of Second Life, the light speed progress of OpenSim would have been considerably more difficult.</p>
<p>Now OpenSim is getting closer to breaking free from the Second Life viewer. And, standard messaging protocols between client and server are, perhaps, the next step. Rob Smart, IBM, discussed this with me recently (see my upcoming interview with Rob Smart, &#8220;Web 2.0 Made Easy in OpenSim,&#8221; and see <a href="http://tinyurl.com/3ekl2d" target="_blank">his post by this title</a> for more).</p>
<p>As Rob Smart, IBM, notes, &#8220;If, for example, the messages that went between your Second Life client and the OpenSim/Second Life servers were a standard protocol which had a bunch of libraries for a variety of languages, then you could start logging into VW servers from all kinds of clients.&#8221; (For more see my upcoming post, &#8220;Interview with Rob Smart, IBM: Web 2.0 Made Easy for OpenSim.&#8221;)</p>
<h3>Open Standards Will Emerge From Rough Consensus and Working Code</h3>
<p>There are some who subscribe to the view that standards will arise in a virgin birth from an ivory tower &#8211; i.e., professors and captains of industry, removed from open source developer communities, will produce long documents that describe all of the fields, every one of the messages, and all the APIs in detail prior to implementation.</p>
<p>But as David Levine (IBM), Mike Mazur (3Di), Mic Bowman (Intel), <a href="http://justincc.wordpress.com/">Justin Clark-Casey</a>, and <a href="http://www.adamfrisby.com/blog/">Adam Frisby</a> (Deep Think/<a href="http://www.sinewavecompany.com/" target="_blank">Sine Wave</a>) cogently argued on the &#8220;Open Source and Interoperable Virtual Worlds&#8221; panel at the Virtual Worlds Conference and Expo in LA, this top-down approach to standards (or &#8220;vapor standards&#8221;) does not typically produce good results. For more on the virtues of creating standards from &#8220;rough consensus and working code&#8221; as opposed to top down, there is a full recording of the LA panel <a href="http://www.ugotrade.com/2008/09/09/open-source-and-interoperability-will-take-virtual-worlds-mainstream/" target="_blank">here</a>.</p>
<p>Thus, in my view, Linden Lab&#8217;s current focus on open protocols, <a href="http://www.ugotrade.com/2008/07/31/the-open-grid-beta-the-first-step-to-interoperable-virtual-worlds/" target="_blank">OpenGrid</a> (for more see <a href="http://www.ugotrade.com/2008/07/31/the-open-grid-beta-the-first-step-to-interoperable-virtual-worlds/" target="_blank">here</a>), and interoperability is another key step towards the creation of open standards for virtual worlds. And Linden Lab are again leading the way in creating an environment that fosters innovation.</p>
<p>OpenGrid creates a testing ground where protocols can be worked out, and it enables the kind of heterogeneous ecosystem to develop that can nurture the creation of standards. I agree with Rosedale when he says content makers will have an important role in driving interoperability and standards. The creation of standards is certainly a social as well as technical process. And as Rosedale notes content creators will have compelling reasons to move their content around in an open metaverse.</p>
<p>David Levine (IBM) described in detail in LA (again see <a href="http://www.ugotrade.com/audio/OSInteroppanel.mp3" target="_blank">recording here</a>) the importance of interoperability and parallel innovation for the creation of standards. OpenSim has already produced an extraordinary amount of innovation: <a href="http://www.realxtend.org/" target="_blank">realXtend</a>, <a href="http://tribalnet.se/" target="_blank">Tribal Media</a>, and more. Also see my interview with <a href="http://www.ugotrade.com/2008/09/15/interview-with-mic-bowman-intel-the-future-of-virtual-worlds/" target="_blank">Mic Bowman, Intel</a>, for more on the role of open source/open standards in fostering innovation and in moving virtual worlds into &#8220;the fabric of everyday computing.&#8221;</p>
<p>While Linden Lab only have a small team working on OpenGrid, it is a vital one. And with Mark Lentczner (<a href="http://wiki.secondlife.com/wiki/User:Zero_Linden" target="_blank">Zero Linden</a> in Second Life) leading the <a href="http://wiki.secondlife.com/wiki/Architecture_Working_Group" target="_blank">Architecture Working Group</a> for Linden Lab, a collaboration with IBM led by David Levine (<a href="http://zhaewry.wordpress.com/" target="_blank">Zha Ewry</a> in Second Life) driving the interoperability effort, plus the OpenGrid project, Linden Lab has a high-powered, agile, lean machine working for an open future.</p>
<p>So with no more ado, here it is: Robert Bloomfield&#8217;s interview with Philip Rosedale!</p>
<h3>Rosedale on OpenSim: Pandora&#8217;s Box Was Already Open</h3>
<p><strong>Introduction from Robert Bloomfield</strong></p>
<p>Naturally, a major topic of my interview with Philip Rosedale was the implications of OpenSim and the Open Grid project, which both involve creating open source server-side implementations of virtual worlds that can replicate Second Life&#8217;s functionality. As a relative newcomer to this corner of the tech industry, I still find myself asking why a company would essentially create its own competitor. Here is what Philip had to say; I have asked Tish Shute of UgoTrade to comment, as one of the people who has covered the OpenSim/OpenGrid movement with more detail and passion than just about anyone.</p>
<p>PHILIP ROSEDALE: I just really hold true to the strategic belief that thereâ€™s going to be a tremendous amount of consolidation and interconnection between these worlds because the content development process is so challenging that the content developers are going to push us all together. Theyâ€™re going to say, â€œGive me a file format. Give me an interchange format. And let me move that chair from this grid to that grid. Iâ€™ve got to be able to do that because Iâ€™ve got a customer here who wants to buy it.â€ And so I think that that consolidation is going to happen, and itâ€™s going to happen earlier than people would have thought.</p>
<p>ROBERT BLOOMFIELD: And this is looking at the success, the energy around OpenSim, Open Grid.</p>
<p>PHILIP ROSEDALE: The energy, yeah. I think, at this point, we&#8217;ve got an appropriate level of energy &#8211; I think that&#8217;s exactly the right word &#8211; around exploring how quickly we can generalize all this stuff and open and interconnect everything together. I really think that&#8217;s going to continue.</p>
<p>ROBERT BLOOMFIELD: [D]o you feel like you might have opened Pandora&#8217;s box and that it&#8217;s not really under your control now?</p>
<p>PHILIP ROSEDALE: I think that Second Life has, in many ways, not been under our control from the beginning, and that it&#8217;s been a basic operating assumption that to create the kind of incredible place and business opportunity, and social opportunity more broadly, that is Second Life would require a certain lack of control. And that was true with the content from day one.</p>
<p>So for us, oh, we open-sourced the client a while ago, and now we&#8217;re trying to do the same thing with respect to operating standards to interconnect grids. This is a pretty logical progression, using worlds that we&#8217;re pretty familiar with. I mean we&#8217;ve always felt that, if you have a compelling use proposition, which certainly Second Life does, in other words, if there&#8217;s real utility, real fun or real business or real whatever in what people are doing, then there should be a way, as a company, to be open, global and still make money on an hour-to-hour or a user-to-user basis or whatever on what we&#8217;re doing. And the economic aspects of the business have been fantastic from the very early days, and we don&#8217;t really even worry about them.</p>
<p>Our ability as a company to find a way to make a reasonable amount of money per hour that people spend in Second Life, it&#8217;s really never been that much of a problem. It&#8217;s actually been fascinating as we&#8217;ve changed pricing and as we&#8217;ve changed the ways that we make money. Introducing new ways of making money &#8211; like selling currency on the LindeX &#8211; it&#8217;s been amazing how stable our revenues have been as a function of usage hours. It&#8217;s one of the things that we sometimes marvel at. It&#8217;s almost an emergent effect, if you will, that the company&#8217;s business, its operating revenues are really very stable.</p>
<p>ROBERT BLOOMFIELD: Even though they&#8217;re coming from different streams.</p>
<p>PHILIP ROSEDALE: Even though they&#8217;re coming from different streams. And sometimes the requirements of the platform and decisions that we make will really substantially change the nature of those streams, but when you put them all together and you divide them by the number of usage hours, it&#8217;s like a constant. It&#8217;s almost a magic number. And it&#8217;s a magic number that allows us to be profitable, and therefore, is certainly adequate to make a business in the future. I don&#8217;t think that continuing to open Second Life up as we have been is going to impact that. Again, I just think there are so many opportunities to make money that we shouldn&#8217;t have to worry about that too much in the company. And, again, I think that&#8217;s a lot like the early internet. I mean if you step back and look holistically at the internet &#8211; you look at PayPal, the payment systems, auction systems, transaction systems, posting, naming &#8211; you look at all the businesses that comprise the internet, well, those are all the kinds of businesses that we as a company can be in, in this emerging market. There&#8217;s no business that&#8217;s denied us. We are in the hosting business. We can continue to be in the hosting business long term, putting servers up and providing access to them.</p>
<p>We can certainly be in the naming business. We&#8217;re in the currency and transaction support business. It&#8217;s funny, it&#8217;s something that&#8217;s often discussed. We worry much more about improving the scalability, stability and the usability of the system: reducing that initial user experience, reducing the time associated with it, making it easier. That&#8217;s got to be the lever that drives more growth in the overall industry, more revenues for us. So it&#8217;s really all we worry about. But I don&#8217;t think that the open grid will impact our revenues any more than open sourcing the client did.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2008/09/26/philip-rosedale-open-source-virtual-worlds/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
<enclosure url="http://www.ugotrade.com/audio/OSInteroppanel.mp3" length="40308529" type="audio/x-mpeg" />
		</item>
		<item>
		<title>O&#8217;Reilly: &#8220;What Will You Do With Web 2.0?&#8221;</title>
		<link>https://www.ugotrade.com/2008/09/19/oreilly-what-will-you-do-with-web-20/</link>
		<comments>https://www.ugotrade.com/2008/09/19/oreilly-what-will-you-do-with-web-20/#comments</comments>
		<pubDate>Fri, 19 Sep 2008 22:55:52 +0000</pubDate>
		<dc:creator><![CDATA[Tish Shute]]></dc:creator>
				<category><![CDATA[digital public space]]></category>
		<category><![CDATA[free software]]></category>
		<category><![CDATA[Metaverse]]></category>
		<category><![CDATA[Mixed Reality]]></category>
		<category><![CDATA[open metaverse]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[OpenSim]]></category>
		<category><![CDATA[Second Life]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[virtual world standards]]></category>
		<category><![CDATA[Virtual Worlds]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<category><![CDATA[Web 3D]]></category>
		<category><![CDATA[Web3.D]]></category>
		<category><![CDATA[World 2.0]]></category>
		<category><![CDATA[conferences]]></category>
		<category><![CDATA[developers]]></category>
		<category><![CDATA[Election 2008]]></category>
		<category><![CDATA[O'Reilly Media]]></category>
		<category><![CDATA[politics]]></category>
		<category><![CDATA[social entrepreneurship]]></category>
		<category><![CDATA[Tim O'Reilly]]></category>
		<category><![CDATA[virtual worlds and carbon emission reduction]]></category>
		<category><![CDATA[virtual worlds and carbon footprint]]></category>
		<category><![CDATA[virtual worlds and sustainable devlopment]]></category>
		<category><![CDATA[Web 2.0 Expo]]></category>

		<guid isPermaLink="false">http://www.ugotrade.com/?p=1693</guid>
		<description><![CDATA[Tim O&#8217;Reilly, founder of O&#8217;Reilly Media addressed the audience of the Web 2.0 Expo with a series of challenging questions. I felt happy that so many people I know are already answering this call &#8220;to do something worthy&#8221; with Web 2.0 and &#8220;to make technology that matters.&#8221; However, many of these people were not at [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/oreillypost.jpg"><img class="alignnone size-full wp-image-1695" title="oreillypost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/oreillypost.jpg" alt="" width="450" height="317" /></a></p>
<p>Tim O&#8217;Reilly, founder of O&#8217;Reilly Media, addressed the audience of the Web 2.0 Expo with a series of challenging questions. I felt happy that so many people I know are already answering this call &#8220;to do something worthy&#8221; with Web 2.0 and &#8220;to make technology that matters.&#8221; However, many of these people were not at the Web 2.0 Expo. This is, in part, because as O&#8217;Reilly pointed out:</p>
<blockquote><p>if you look at the focus of a lot of what you call &#8216;Web 2.0,&#8217; the relentless focus on advertising-based consumer models, lightweight applications, we may be <a title="The Web 2.0 economy hangs in limbo -- Friday, Apr 25, 2008" href="http://news.cnet.com/8301-13577_3-9928453-36.html">living in somewhat of a bubble</a>, and I&#8217;m not talking about an investment bubble. (It&#8217;s) a reality bubble.</p></blockquote>
<p>But as I explored the conference and expo, I did find friends, old and new, dedicated to figuring out how to use Web 2.0 to make a better world.</p>
<p>Caroline McCarthy has an excellent post, on <a href="http://news.cnet.com/8301-13577_3-10045321-36.html" target="_blank">CNET news</a> on the message of Tim O&#8217;Reilly&#8217;s keynote address.</p>
<p>If you have read UgoTrade before, you already know the threads I have been following regarding the potential of virtual worlds for positive global development and for reducing the carbon footprint of business, so some of McCarthy&#8217;s comments caught my attention:</p>
<blockquote><p>There&#8217;s an inherent irony in what O&#8217;Reilly said, given the fact that massive conferences like the Web 2.0 Expo are packed with the trendspeak and hype that birthed SuperPoke-like entertainment, and certainly aren&#8217;t helping the environment by distributing tons of press kits and swag&#8211;not to mention flying in hundreds of attendees in a massive spurt of carbon emissions.</p>
<p>To be fair, O&#8217;Reilly Media has been printing fewer event programs and encouraging conference goers to recycle, and it has used carpeting made of post-consumer material.</p>
<p>There is clearly a lot that needs to change, and perhaps the tech industry trend of large-scale conferences is part of it. We&#8217;ll see whether Silicon Valley&#8217;s leaders and moguls are willing to do what they think is right, rather than what they think is profitable.</p></blockquote>
<p>But, as Tim O&#8217;Reilly pointed out, the huge problems we face today create an enormous amount of opportunity for us to find creative solutions.</p>
<h3>&#8220;We are going to figure out how to make a better world using the power of the web.&#8221; O&#8217;Reilly</h3>
<p>Virtual world technology will soon play a major role in re-imagining these large-scale tech industry conferences. There is a talented and dedicated community of open source developers working hard to take this nascent area of Web 2.0 technology mainstream through open source, open standards, and open APIs. I am so proud to be part of this global community!</p>
<p>Virtual Worlds were only a very small part of the Web 2.0 Expo. <a href="http://nwn.blogs.com/nwn/2006/02/nwn_tips.html" target="_blank">W. James Au&#8217;s</a> <a href="http://webexny2008.crowdvine.com/talks/show/1051">&#8220;The Post-Hype State of Virtual World Marketing: What Works, What Doesn&#8217;t and Why&#8221;</a> was the only panel I noticed focusing on Virtual Worlds in any of the main tracks. This reflects the relative lack of integration of virtual worlds into Web 2.0.</p>
<p>One of my passions is to make this isolation of virtual world technology from Web 2.0 a thing of the past (see my upcoming post on Rob Smart&#8217;s (IBM) integration of JSON support into <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a> &#8211; a vital step towards making Web 2.0 easy for <a href="http://opensimulator.org/wiki/Main_Page" target="_blank">OpenSim</a>).</p>
<p>But, as W. James Au pointed out very eloquently, this notion of isolation is really not accurate, even now. For example, Second Life communities interact in myriad and powerful ways with other social media communities on the web, despite the current lack of common protocols, which has kept immersive virtual worlds architecturally cut off from some of the networking effects of Web 2.0.</p>
<p>But, for all of us living here in the US, O&#8217;Reilly&#8217;s most important message was simple and fundamental. So let&#8217;s reblog, retweet, plurk, friendfeed, facebook, send it out on notecards in SL, make machinima, and spread the word in every way available to us.</p>
<h3>&#8220;Bad politicians are elected by good people who don&#8217;t vote.&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/registertovote.jpg"><img class="alignnone size-full wp-image-1697" title="registertovote" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/registertovote.jpg" alt="" width="450" height="346" /></a></p>
<h3>&#8220;There is no reason in 2008 to do shit you hate, &#8216;cos you can lose just as much money being happy as hell.&#8221;</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/garypost.jpg"><img class="alignnone size-full wp-image-1710" title="garypost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/garypost.jpg" alt="" width="332" height="529" /></a></p>
<p>Quote and pic above from <a href="http://garyvaynerchuk.com/">Gary Vaynerchuk&#8217;s</a> keynote, Web 2.0 Expo, 2008 (also see <a href="http://garyvaynerchuk.com/2008/09/11/execute-on-being-you/" target="_blank">Execute on Being you</a>).</p>
<h3>&#8220;Do something you love,&#8221; Vaynerchuk</h3>
<p>I personally can&#8217;t wait until the potent mix of real time interaction in immersive spaces is combined with the networking effects of Web 2.0. Not just because this will unleash an awesome new wave of innovation and creativity, but also because the early adopters I have met in immersive virtual worlds, and the phenom developers in the rapidly growing open source communities of this emerging technology, have passion, do stuff they love, and do stuff that is worthy, w00t!</p>
<h3>And further: &#8220;Do Something Worthy,&#8221; O&#8217;Reilly</h3>
<p>While speakers and exhibitors from virtual worlds were scant in the main hall and panel tracks, Second Life had a strong showing in the &#8220;Do Something You Love,&#8221; &#8220;Do something that needs to be done&#8221; zone of the Not For Profit strip. &#8220;Create more value than you capture&#8221; (from O&#8217;Reilly&#8217;s keynote) is the natural heart of their mission.</p>
<p>Below is the awesome Evonne Heyning (InKenzo in Second Life) of <a href="http://amoration.pbwiki.com/">Amoration.</a></p>
<blockquote><p>What is Amoration?<br />
AMO: The root of love<br />
~ation: The state of being, practice and study<br />
Amoration: living in love, a practice of open engagement<br />
Our Mission: To Create a Culture of Conscious Compassion</p></blockquote>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/inkenzo7post.jpg"><img class="alignnone size-full wp-image-1705" title="inkenzo7post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/inkenzo7post.jpg" alt="" width="450" height="299" /></a></p>
<p>Evonne addressed the problem of the Not For Profits being in an out-of-the-way strip in the Expo hall with some very tasty peanut brittle.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/notforprofitspost.jpg"><img class="alignnone size-full wp-image-1707" title="notforprofitspost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/notforprofitspost.jpg" alt="" width="450" height="299" /></a></p>
<p>In contrast, the MS Surface crew, in a prime location, playing with super shiny things, did not have to do much to keep a crowd at their table!</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/mssurface2post.jpg"><img class="alignnone size-full wp-image-1708" title="mssurface2post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/mssurface2post.jpg" alt="" width="450" height="299" /></a></p>
<p>I IMed my friend Kyle Gomboy (G2 Proto in SL) from the <a href="http://www.sldnug.net/" target="_blank">Microsoft Development Community in Second Life</a> and <a href="http://reactiongrid.com/projects.aspx" target="_blank">in OpenSim</a> to ask him if the Microsoft .NET, TechNet, and C# developers in OpenSim had any plans to integrate Surface with OpenSim. I saw the Surface/Virtual Earth integration and realized Surface with OpenSim would be hotness for a small company looking to develop a vertical for hospitality, medicine, or education. At between 12 and 15K with SDK and two days of training, Surface is priced in a range a small company can probably afford.</p>
<p>G2 also came up with a thought that would bring shiny together with worthy when he mentioned to me how great it would be to see Surface in every public school library in the country.</p>
<p>See <a href="http://www.rikomatic.com/blog/2008/09/sprinting-throu.html" target="_blank">Rik Riel&#8217;s blog</a> for a nice video of the MS Surface demo in the expo hall.</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/jamesbooksigningpost.jpg"><img class="alignnone size-full wp-image-1716" title="jamesbooksigningpost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/jamesbooksigningpost.jpg" alt="" width="450" height="299" /></a></p>
<p>&#8220;<a href="http://www.amazon.com/Making-Second-Life-Notes-World/dp/0061353205" target="_blank">The Making of Second Life,</a>&#8221; W. James Au&#8217;s book signing</p>
<p>From left to right: Rik Panganiban from <a href="http://www.globalkids.org/" target="_blank">Global Kids</a> (Rik Riel in SL), Joyce Bettencourt (<a href="http://whymysl.blogspot.com/">Rhiannon Chatnoir</a> in SL), W. James Au (Hamlet Au in SL), Evonne Heyning (InKenzo in Second Life), and Jennifer Schlegel (Schlink Lardner in SL).</p>
<h3>And further more: &#8220;Make technologies that matter,&#8221; O&#8217;Reilly</h3>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/interopnocteampost.jpg"><img class="alignnone size-full wp-image-1726" title="interopnocteampost" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/interopnocteampost.jpg" alt="" width="450" height="299" /></a></p>
<p>One of the highlights of w2e for me was getting an inside look at the Interop NOC and meeting Bill &#8220;WEJ&#8221; Jensen, the Troubleshooting Lead of the InteropNET team (WEJ center, sitting at the Mac).</p>
<p>The Interop NOC is a &#8220;real&#8221; world work of art &#8211; &#8220;the largest temporary network in the world&#8221; &#8211; where volunteers have come together with industry leaders to take on the ultimate network challenge: &#8220;creating a completely interoperable network using the industry&#8217;s most cutting edge technology.&#8221;</p>
<p><a href="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/noc1post.jpg"><img class="alignnone size-full wp-image-1728" title="noc1post" src="http://www.ugotrade.com/wordpress/wp-content/uploads/2008/09/noc1post.jpg" alt="" width="450" height="299" /></a></p>
<p>If you have been reading UgoTrade, you will know I have been following the work of Michael Osias of IBM (Illuminous Beltran in SL), who has been creating virtual network operation centers (VNOCs) in OpenSim (for more see <a href="http://www.ugotrade.com/2008/02/21/the-wizard-of-ibms-3d-data-centers/" target="_blank">here</a>). I am looking forward to introducing &#8220;WEJ&#8221; to Michael&#8217;s work, which I believe foreshadows a new era for software &#8211; along the lines Gelernter first envisioned in 1992. Michael follows the Gelernter vision pretty closely.</p>
<p>Gelernter talks about software as an embodied information machine. And, as virtual worlds come of age, so will this notion of software as 3D info machines that we can walk around, tinker with, and hang out in with other avatars and agents in real time.</p>
<blockquote><p>Mirror Worlds will transform the meaning of &#8220;computer.&#8221; Our dominant metaphor since 1950 or thereabouts, &#8220;the electronic brain,&#8221; will go by the boards. Instead people will talk about crystal balls, telescopes, stained glass windows, wine, poetry, or whatever &#8211; things that make you see <em>vividly</em>.</p>
<p>Software today offers assistance to the specialist (in everybody), not to the citizen. The mere citizen deals with the increasingly perilous complexity of his government, business, transportation, health, school, university and legal systems unaided. Mirror Worlds represent one attempt to change this state of affairs (Mirror Worlds, David Gelernter, 1992).</p></blockquote>
<p>More on VNOCs in an upcoming post.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.ugotrade.com/2008/09/19/oreilly-what-will-you-do-with-web-20/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
	</channel>
</rss>
