Highlights:

  • Nvidia’s Omniverse Cloud is a collaborative “metaverse” platform for developers, artists, and engineers, allowing them to visualize and create 3D models for diverse projects.
  • Thanks to these Omniverse Cloud APIs, Omniverse content can also be streamed via Nvidia’s Graphics Delivery Network (GDN), a global network of data centers built to stream 3D content, directly to Apple’s Vision Pro mixed reality headset.

Nvidia Corp. has announced plans to extend Omniverse Cloud with application programming interfaces (APIs), opening the platform to outside software developers and to industrial digital twin applications.

Nvidia’s Omniverse Cloud is a collaborative “metaverse” platform for developers, artists, and engineers, allowing them to visualize and create 3D models for diverse projects. Objects simulated in Omniverse, often called “digital twins,” interact with hyperrealistic physics simulations resembling real-world objects. The platform provides a collaborative solution for business organizations and individuals.

Nvidia’s Omniverse Cloud is a software-as-a-service platform for artists, developers, and teams, enabling them to create, publish, and operate metaverse applications from anywhere in the world.

“Everything manufactured will have digital twins,” Nvidia founder and Chief Executive Jensen Huang stated before the company’s annual GTC conference in San Jose, California. “Omniverse is the operating system for building and operating physically realistic digital twins. Omniverse and generative AI are the foundational technologies to digitalize the USD 50 trillion heavy industries market.”

Nvidia unveiled five new Omniverse Cloud APIs at the GTC 2024 developer conference. These can be used singly or in combination to link workflows or applications to the platform for interoperability, employing OpenUSD, a high-performance, extensible format for describing animated 3D scenes, originally developed for large-scale film and visual effects production.

The new APIs include Omniverse Channel, USD Render, USD Query, USD Write, and USD Notify. These APIs enable tasks such as generating ray-traced RTX renders from OpenUSD data, modifying and interacting with data, querying scene data, tracking USD changes, and facilitating collaboration by connecting users, tools, and worlds. Developers can use these capabilities to open “windows” from their applications into Omniverse Cloud, smoothly incorporating “metaverse” scenes and digital twins into their software.
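The division of labor among the five APIs can be illustrated with a small stand-in in Python. Everything below, the class, the method names, and the in-memory dictionary standing in for an OpenUSD stage, is purely hypothetical and does not reflect Nvidia’s actual API surface; it only sketches how query, write, notify, render, and channel roles might compose.

```python
# Illustrative sketch only: models the roles of the five Omniverse Cloud
# APIs (USD Query, USD Write, USD Notify, USD Render, Omniverse Channel)
# with an in-memory dict standing in for an OpenUSD stage. All names here
# are hypothetical, not Nvidia's real API surface.

class OmniverseCloudSketch:
    def __init__(self):
        self.stage = {}        # stand-in for an OpenUSD scene
        self.subscribers = []  # USD Notify: registered change listeners
        self.channel_log = []  # Omniverse Channel: collaboration events

    def usd_write(self, prim_path, attrs):
        """USD Write role: modify scene data, then fan out change notices."""
        self.stage[prim_path] = attrs
        for callback in self.subscribers:  # USD Notify role in action
            callback(prim_path, attrs)

    def usd_query(self, prim_path):
        """USD Query role: interrogate scene data (e.g., for inspection)."""
        return self.stage.get(prim_path)

    def usd_notify(self, callback):
        """USD Notify role: subscribe to updates when the stage changes."""
        self.subscribers.append(callback)

    def usd_render(self):
        """USD Render role: the real service returns RTX ray-traced frames;
        here we only report what would be drawn."""
        return f"rendered {len(self.stage)} prims"

    def channel_send(self, user, message):
        """Omniverse Channel role: connect users, tools, and worlds."""
        self.channel_log.append((user, message))
```

A caller might wire these together like so: register a listener with `usd_notify`, then every `usd_write` both updates the scene and pushes the change to collaborators, while `usd_query` and `usd_render` read the shared state.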

As stated by Roland Busch, the CEO and President of Siemens AG, “Through the Nvidia Omniverse API, Siemens empowers customers with generative AI to make their physics-based digital twins even more immersive. This will help everybody to design, build, and test next-generation products, manufacturing processes, and factories virtually before they are built in the physical world.”

Nvidia’s Omniverse Now Streams to Apple Vision Pro

Thanks to these Omniverse Cloud APIs, Omniverse content can also be streamed via Nvidia’s Graphics Delivery Network (GDN), a global network of data centers built to stream 3D content, directly to Apple’s Vision Pro mixed reality headset.

Nvidia demonstrated how Omniverse-based workflows can run on Vision Pro’s high-resolution displays using only the device and an internet connection, with the company’s RTX cloud rendering generating the visuals. The company says this cloud-based approach streams real-time, physically based renderings of virtual objects, or “digital twins,” directly to the headset with high fidelity and realistic imagery.

Using front-facing cameras and sensors, the Vision Pro headset creates “mixed reality” experiences by allowing users to see their surroundings while superimposing 3D renders of objects onto the actual world. Moreover, the headset can produce a fully computer-generated virtual reality that is highly immersive. When the device was first unveiled, Tim Cook, the CEO of Apple Inc., referred to it as a “spatial computer” to highlight its ability to connect the virtual and physical worlds.

Mike Rockwell, vice president of the Vision Products Group at Apple, stated, “The breakthrough ultra-high-resolution displays of Apple Vision Pro, combined with photorealistic rendering of OpenUSD content streamed from Nvidia accelerated computing, unlocks an incredible opportunity for the advancement of immersive experiences. Spatial computing will redefine how designers and developers build captivating digital content, driving a new era of creativity and engagement.”

With the release of Apple Vision Pro, developers can now combine local on-device rendering with remote cloud rendering. Using Apple’s native SwiftUI and RealityKit together with the Omniverse RTX Renderer streaming over GDN, they can build fully interactive digital twins and experiences.
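The hybrid split described above, lightweight content rendered on-device and heavy digital twins streamed from cloud RTX renderers, can be sketched as a simple routing decision. The threshold, field names, and labels below are all assumptions for illustration, not part of any Nvidia or Apple API.

```python
# Illustrative sketch only: routing assets between on-device rendering
# (the native SwiftUI / RealityKit path) and cloud streaming (Omniverse
# RTX Renderer over GDN) by scene complexity. The polygon threshold and
# all names are hypothetical.

CLOUD_POLY_THRESHOLD = 1_000_000  # assumed cutoff for on-device rendering

def choose_renderer(asset):
    """Route an asset to local or cloud rendering by its complexity."""
    if asset["polygons"] > CLOUD_POLY_THRESHOLD:
        return "cloud-stream"  # heavy digital twin: RTX over GDN
    return "on-device"         # light content: native headset rendering

scene = [
    {"name": "ui_panel", "polygons": 2_000},
    {"name": "factory_twin", "polygons": 48_000_000},
]
plan = {asset["name"]: choose_renderer(asset) for asset in scene}
```

Under these assumptions, the UI panel stays on-device while the factory-scale twin is streamed from the cloud, which mirrors the hybrid workflow the article describes.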