
Interview with Mic Bowman, Intel: The Future of Virtual Worlds

Mon, Sep 15, 2008

Intel obviously benefits from broad adoption of applications that drive significant compute, so it is hardly surprising that they have been paying attention to the early adopters in the Gaming & Visual Computing market. But a recent post on the Intel blog states, "going forward the bigger growth will be coming from the other two segments Metaverse and Paraverse" (for more on the future of the paraverse, see the recording of the augmented reality panel in LA in my previous post).

(Thanks to Joshua Meadows (Joshua Nightshade in SL) of Abstract Avatars for the picture of the Linden Lab booth at the Virtual Worlds Conference and Expo, LA 2008. Those giant avatars from Second Life™ are very cool. That is John Lester (Pathfinder Linden) in the striped shirt, helping give us an idea of their scale.)

Intel is also in a powerful position to facilitate mass adoption of rich, immersive virtual worlds, where there is a direct connection between more compute and a better user experience. As Christian Renaud pointed out in The Technology Intelligence Group's Virtual Worlds Industry Outlook, 2008-2009 (written with Sean F. Kane, Esq.), the "ability for the computer's graphics subsystems to render the data as quickly as required" has been an obstacle to mainstream adoption of virtual worlds. But, Renaud goes on to note, Intel's new Larrabee architecture may be a game changer for virtual worlds.

Recent announcements may change the landscape. At the SIGGRAPH trade show in August 2008, Intel announced its Larrabee architecture, slated for product release in the late 2009-2010 timeframe. This would take what has typically been a separate Graphics Processing Unit (GPU) function and relocate it into the processor architecture on the motherboard of a computer. Although the early stages of this technology will undoubtedly be prone to compatibility issues with legacy graphics drivers, moving this function onto the main motherboard should ease the graphics performance and compatibility problems to which virtual worlds have been susceptible.

Jobi George, on the Intel blog, explains how Intel sees three segments (gaming, metaverse, and paraverse) as driving "the next logical evolution of web, where 'connectedness' and 'immersion' (not just richness) come together to bring us to an era of 'Connected Visual Computing'" (see the press coverage of CVC here, here, and here).

Getting from here (gaming, metaverse, paraverse) to there (connected visual computing)

Mic Bowman of Intel was on two panels at the Virtual Worlds Conference and Expo in LA last week. I wrote up and posted the recording of the panel I facilitated, "Open Source, Interoperable Virtual Worlds," in my previous post. On our panel, Mic explained in detail some of the work Intel is doing to help us get from here (gaming, metaverse, paraverse) to there (connected visual computing). Mic also spoke on the Virtual World Road Map session with keynote speaker Sibley Verbeck of Electric Sheep Company (see Sibley's blog). That panel focused more on cross-industry cooperation.

Mic's message for our panel on "Open Source, Interoperable Virtual Worlds" was, in a nutshell:

To achieve a thriving, growing, broadly adopted CVC ecosystem, we believe the industry must come to some agreement on common building block technologies. Open source technologies represent a critical element in the discovery and development of these technologies, and foster innovative usages that drive adoption.

To give you a taste of how deeply we discussed the work being done to research and create these common building blocks (yes, to some we were a panel of unbridled geekiness), here is a short transcription of a portion of Mic's contribution to our panel, lightly edited.

The creation of common building blocks for virtual worlds, similar to what HTML and HTTP did for the internet, is in Mic's view a vital step in the transition to connected visual computing, and in making the experience of virtual worlds ubiquitous and transparent in the way the web is: when we say "browse the web," we take the "web" for granted; it is the applications, YouTube, Flickr, etc., that get our attention.

The Evolution of the Web Into Connected Visual Computing

In 1995 we talked about surfing the web; nobody uses that phrase any more. Today we talk about updating our blogs, or adding something to Twitter, or wanting to go off and buy something from eBay or Amazon. The web has become essentially a fundamental part of the fabric. It's the applications it enables that are important. Right now we think about virtual world technologies generally as an application. Ultimately we would like to figure out how to get that kind of technology into the basic fabric, so that we think about collaboration as an application; we think about a conference, and attending the conference, as the thing we do, not as a platform on which we do that. And to accomplish that, what we envision at Intel is a set of building blocks that are created, or emerge out of the various platforms, as consistent technologies.

And so we looked at a variety of different approaches to understanding what those technologies could be, what those common technologies were, and how they are created and adopted. What we saw in OpenSim's modular architecture was an opportunity to start articulating boundaries between the various pieces of technology in a way that allowed us to disaggregate the architecture, so that we could start pulling the pieces apart and thinking about how the interfaces could be made consistent across them. For example, there's a set of types for the basic building blocks that exists across the Second Life and OpenSim protocols.

One of the people we just hired, John Hurliman, has been working on libopenmv for a while, and one of the things we have been discussing is how to capture that consistency of types. So John is going off and pulling the set of modules out of the openmv project, in order to give us a basic set of types that can be applied across multiple applications and re-used in many different ways. It's useful to the OpenSim community, and it's useful for building out some new test servers and clients that allow us to actually try out different types of load, and it potentially gives us a way of extracting the set of protocols that implement those types, so that we can start looking at new ways of building more efficient protocols.
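To make the idea of a shared type set concrete, here is a minimal sketch, in Python for readability (the actual modules are C#, and these particular names are my illustration, not the project's):

```python
from dataclasses import dataclass
from uuid import UUID


@dataclass(frozen=True)
class Vector3:
    """A position or direction in region coordinates."""
    x: float
    y: float
    z: float


@dataclass(frozen=True)
class Quaternion:
    """An object's rotation."""
    x: float
    y: float
    z: float
    w: float = 1.0


@dataclass
class SceneObject:
    """Minimal object state that client and server must agree on."""
    object_id: UUID       # stable identity across regions and services
    position: Vector3
    rotation: Quaternion
    scale: Vector3
```

Once types like these live in one reusable module, a test client, a test server, and the simulator itself can all share them, which is exactly the re-use Mic describes.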

Another example of that would be the meshing code: the code that takes the basic conceptual level of an object being represented in the world and turns it into something that can actually be sent to a GPU in order to be put on a screen. That basic meshing component seems to be a consistent piece of technology that occurs in several places; it's useful both in mapping the representation into the physics engine and, on the client, in mapping it into the graphics engine. So that's another example of a basic technology that seems to appear consistently in many locations.
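As a rough sketch of what a meshing component does (my illustration, not OpenSim's actual code), here is the shape of the transformation: a conceptual box primitive becomes the vertices and triangles that both a physics engine and a GPU can consume:

```python
def mesh_box(center, scale):
    """Turn a conceptual box primitive into vertices plus triangle indices.

    A toy stand-in for the meshing step described above: the same output
    feeds a physics engine (as a collision shape) and a graphics engine
    (as render geometry).
    """
    cx, cy, cz = center
    sx, sy, sz = (s / 2.0 for s in scale)
    # The eight corners of the box, indexed by the sign of each axis.
    vertices = [
        (cx + dx * sx, cy + dy * sy, cz + dz * sz)
        for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
    ]
    # Twelve triangles (two per face), as index triples into `vertices`.
    triangles = [
        (0, 1, 3), (0, 3, 2),  # -x face
        (4, 6, 7), (4, 7, 5),  # +x face
        (0, 4, 5), (0, 5, 1),  # -y face
        (2, 3, 7), (2, 7, 6),  # +y face
        (0, 2, 6), (0, 6, 4),  # -z face
        (1, 5, 7), (1, 7, 3),  # +z face
    ]
    return vertices, triangles
```

Real prims are much richer (cuts, twists, hollows), but every shape ultimately reduces to this vertices-plus-triangles form.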

And so, what we like about OpenSim in particular (and again, this is just a tool and framework for us for understanding what these basic building blocks are) is that we can experiment with these new boundaries in the framework of a complete and functioning system. It gives us a framework for testing out what these interfaces should be and what the basic building blocks are.

Mic highlighted some key points of the OpenSim architecture and ecosystem at the Intel Developer Forum. The slide below is from his presentation there.

(The Genkii team created the OpenSim N-Body demonstration with astrophysicists Piet Hut and Junichiro Makino; see here for more.)

Interview with Mic Bowman: "The Future of Connected Visual Computing"

1) First, could you define what you mean by "connected visual computing"?

Connected Visual Computing is the union of three application domains: MMOG, metaverse, and paraverse (or augmented reality). These application domains are united through common technologies, especially 3D content creation, and common properties such as persistence, social interaction, rich presentation, and user-generated content with potentially complex behaviors.

2) One of the key aspects of fostering innovation in a new technology is recognizing the important paradigm shifts it enables. New forms of collaboration are among the potentially most disruptive contributions of virtual worlds. However, I know you have gone a little further than most in thinking about how virtual worlds create new opportunities for non-linear, asynchronous collaboration. Could you explain some of your thinking on this? And why is developing thinking about the applications of virtual worlds something you, and thus Intel, have gotten involved with?

This slide is from Mic Bowman’s presentation “Non-Linear Presentation: or how to use virtual worlds for asynchronous collaboration.”

Although Intel's research agenda focuses on the hardware platform impact of CVC applications, it is necessary to understand the different usages that CVC enables. To that end, we built an experimental tool in OpenSim where we could explore new modes of collaboration designed exclusively for virtual worlds. That is, we didn't want to look for ways to just translate our real-world collaborative culture into the virtual world; we wanted to find out what unique forms of collaboration are enabled by virtual worlds. The first result is a tool we call non-linear presentations.
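To give a feel for the idea (my sketch of the concept, not Intel's actual tool): instead of an ordered deck, the slides live in a graph, and each participant walks their own path through it, at their own time:

```python
class NonLinearPresentation:
    """Slides as a graph rather than a fixed sequence.

    Participants follow links in whatever order (and at whatever time)
    suits them, which is the asynchronous, non-linear collaboration
    described above.
    """

    def __init__(self):
        self.slides = {}  # slide name -> content
        self.links = {}   # slide name -> names of related slides

    def add_slide(self, name, content, related_to=()):
        self.slides[name] = content
        self.links[name] = list(related_to)

    def choices_from(self, name):
        """What a viewer standing at `name` can jump to next."""
        return self.links.get(name, [])
```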

In addition, Intel actively collaborates with Qwaq/Croquet to integrate information space visualization into their enterprise collaboration tool “Qwaq Forums”.


3) Why did Intel choose to engage with OpenSim?

We like OpenSim because it has the best logo. Go Hippos!

Seriously… a year ago we started looking at open source platforms for virtual worlds. Open source platforms provide a completely functional framework that enables researchers to focus on specific innovations. My group wanted to look at scalability limitations in the distributed systems software architecture of CVC applications. We considered four candidate platforms (OpenSim, Croquet, Ogoglio, and Wonderland). We chose OpenSim because it was the most complete implementation of a persistent world. In addition, its development community was the most active, and its modular architecture makes it easier to experiment with new functionality.

4) I know you have contributed code to OpenSim. Will Intel be putting more developers into OpenSim in the future?

Our focus is on investigating general technologies to support broad adoption of scalable CVC applications. That is, we want to understand the general problems that limit scalability across multiple CVC applications. However, it is important to validate general principles through specific implementations (even better, implementations with real end users). As a result, we expect to continue our collaboration with the OpenSim development community and with the emerging end-user community.


5) You mentioned you were doing some testing on OpenSim. Have you found specific areas in Intel's domain that could significantly improve OpenSim performance?

Our research is still at a very early stage. In one area, however, we have some very promising early results. Script execution in CVC applications creates unique stress on the platform, with potentially thousands of concurrently executing scripts. One method we are investigating appears to improve performance and scale with the number of hardware threads on the CPU.
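Mic did not spell out the method, but a common pattern for scaling thousands of scripts with the number of hardware threads is cooperative scheduling over a fixed worker pool. A minimal sketch of that general pattern (my illustration, not Intel's implementation; CPython's GIL limits true parallelism, so read it for structure only):

```python
import os
from concurrent.futures import ThreadPoolExecutor
from queue import Empty, Queue

# One worker per hardware thread: the scaling property described above.
WORKERS = os.cpu_count() or 4


def run_scripts(scripts, run_slice):
    """Cooperatively schedule many scripts over a fixed worker pool.

    `scripts` is an iterable of script states; `run_slice(script)` runs
    one bounded time slice and returns True if the script still has work.
    """
    ready = Queue()
    for script in scripts:
        ready.put(script)

    def worker():
        while True:
            try:
                script = ready.get_nowait()
            except Empty:
                return  # no runnable scripts right now; retire (fine for a sketch)
            if run_slice(script):  # run one bounded slice, then yield
                ready.put(script)  # still busy: reschedule at the back

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        for _ in range(WORKERS):
            pool.submit(worker)
```

The key property is that throughput grows with `WORKERS`, i.e., with hardware threads, rather than needing one OS thread per script.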


6) Everyone, I think, agrees that OpenSim plus a next-generation browser/viewer would be killer. And when we talked last, you mentioned interest in the OpenViewer project. What do you see as the best way forward on this very big task?

Clearly, experimentation with new communication protocols requires that we modify both the client and server. Licensing issues with existing viewers certainly complicate any effort to modify the viewer.


7) And what about the user experience in virtual worlds? What might be the contribution of browser-based viewers? What are your thoughts on this?

Browser-based viewers are a reflection of deployment challenges. Broad adoption of CVC applications requires that the industry address the problem of simplified deployment, whether through stand-alone viewer (or viewer platform) consolidation or through browser-based viewers.

Software as a service is one approach that could address the deployment problem. Limitations in browser-based sandboxes must be addressed to deliver appropriate client performance and experience.


9) Intel has Havok and a software ray tracing engine that scales to cores. The latter could really make for a completely new generation of virtual world viewers. Can you explain some of the innovations you see coming from this ray tracing engine? And will there be a special license offered to bring Havok within reach of the open source community? What role/impact will Larrabee have?

Ray tracing is particularly helpful in making user-created content look good. Let me give you a concrete example… In a professionally authored 3D environment, objects can be placed with complete understanding of the lighting requirements. In any virtual world where users can create or customize content (including simple customizations like changing the placement of objects), lighting cannot be predicted, and as a result it is very difficult to create the appropriate shading for objects. Ray tracing (both as a runtime component and as an offline tool) can dynamically compute appropriate lighting, shadows, and reflections.
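The core of that dynamic computation is easy to sketch (this is the standard shadow-ray technique, my illustration rather than Intel's engine): for each visible point, cast a ray toward each light and let only the unoccluded lights contribute:

```python
def shade(point, normal, lights, occluded, base_color):
    """Direct lighting via shadow rays.

    `occluded(origin, direction, max_dist)` asks the ray tracer whether
    anything blocks the path from `point` to a light. Because the query
    runs against the live scene, an object a user just moved casts a
    correct shadow with no precomputation.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    intensity = 0.0
    for light in lights:
        to_light = tuple(l - p for l, p in zip(light["position"], point))
        distance = dot(to_light, to_light) ** 0.5
        direction = tuple(c / distance for c in to_light)
        if occluded(point, direction, distance):
            continue  # this light is blocked: the point is in its shadow
        # Lambertian term: surfaces facing the light are brighter.
        intensity += light["power"] * max(0.0, dot(normal, direction))
    return tuple(c * min(1.0, intensity) for c in base_color)
```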

Havok is a wholly owned subsidiary of Intel with an independent business model. Questions about Havok's licensing should be directed to Havok (see the link to the Havok evaluation and developer licenses).

As a compute engine, Larrabee is designed for compute loads that frequently occur in CVC applications, including physics (collision detection), spatialization of audio, and ray tracing. In usages where rich immersion, i.e., accurate physical simulation and photorealistic content, determines the quality of the user experience, Larrabee can certainly improve the user's experience.


10) How do you see the landscape for virtual worlds five years out?

Obviously, any predictions about the future of an industry as immature as virtual worlds must be considered highly speculative. That being said, Intel's vision is that the industry, as it matures, will form around a relatively small set of basic common building-block technologies that are sufficiently general to enable many different usages. Examples we see emerging include identity, presence, text and voice communication, and asset/object management and storage. These basic building blocks can be combined with physics, game engines, and other tools to address the needs of a particular usage.
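As a sketch of what "put the building blocks together" could look like at the interface level (these service names and signatures are my illustration, not a proposed standard):

```python
from typing import Protocol


class IdentityService(Protocol):
    def authenticate(self, credentials: dict) -> str: ...


class PresenceService(Protocol):
    def set_online(self, user_id: str, region: str) -> None: ...


class AssetService(Protocol):
    def fetch(self, asset_id: str) -> bytes: ...


class VirtualWorld:
    """One particular usage, assembled from common building blocks."""

    def __init__(self, identity: IdentityService,
                 presence: PresenceService, assets: AssetService):
        self.identity = identity
        self.presence = presence
        self.assets = assets

    def log_in(self, credentials: dict, region: str) -> str:
        user_id = self.identity.authenticate(credentials)
        self.presence.set_online(user_id, region)
        return user_id
```

Any world, from an MMOG to an enterprise collaboration space, could swap in its own implementations behind the same small interfaces.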





categories: 3D internet, Augmented Reality, avatar 2.0, free software, Intel in Virtual Worlds, interoperability of virtual worlds, Linden Lab, Metaverse, MMOGs, open metaverse, open source, OpenSim, Second Life

3 Comments For This Post

  1. epredator Says:

    A great post as per usual :-)
    The CVC attribute slide blends with what I documented with the Reverse ICE model of interaction earlier in the year (see Reverse ICE model).

  2. admin Says:

    Yes, I agree Ian. And your post is a perfect complement to this one. Thanks!

  3. Darkflame Says:

    Not quite sure why Augmented Reality has been renamed Paraverse here.
    It's just adding another name to the already large list of things.
    Let's all settle on "Augmented Reality" being "a virtual overlay on the real world," as that's what has been established.
    So that would be virtual surgery, seeing people across the world in your room, using 3D overlays to instruct you in fixing a car, etc.

    To me that's the real future, although certainly an open 3D platform for world building is needed.
    Personally I think we need an IRC-like system, whereby any user can make a world layer (channel), or view other people's (open) world layers, contribute to them, or edit them.
    So I could have my personal world layer locked for just me, a wider one for friends and family, and finally I might also be viewing a few public layers as well.

    Anyway, I went a bit off-topic there.
    This is a good article, and it looks like real progress is being made.
    I can't wait for a future where we are making virtual worlds around our real one.
