- Nvidia unveiled two new software development kits (SDKs), Kaolin WISP and NeuralVDB.
- The business also disclosed several technological initiatives to enhance its computer graphics creation capabilities for applications in the metaverse.
Nvidia is making several technological announcements at the SIGGRAPH 2022 conference, signaling that it is going all-in on the metaverse. The firm announced a slew of new metaverse initiatives, including the launch of the Nvidia Omniverse Avatar Cloud Engine (ACE), a set of tools and services for building virtual assistants driven by artificial intelligence (AI).
The business also disclosed several technological initiatives to enhance its computer graphics creation capabilities for applications in the metaverse. One such initiative is the new NeuralVDB library, an evolution of the open-source OpenVDB library for sparse volume data. Nvidia is also working on improving the open-source Universal Scene Description (USD) format to further enable metaverse applications.
“3D content is especially critical for the Metaverse as we need to put stuff in the virtual world,” Sanja Fidler, VP of AI research at Nvidia, said in a press briefing. “We believe that AI is existential for 3D content creation, especially for the metaverse.”
Neural graphics AI will shape the future of the metaverse
Computer graphics are no longer simply rendered images; with the advent of neural graphics, they can be much more.
According to Fidler, the goal of neural graphics is to insert AI capabilities into various stages of the graphics pipeline. AI can speed up graphics in a wide range of applications, including video games, digital twins, and the metaverse.
Nvidia also unveiled a pair of software development kits (SDKs), Kaolin WISP and NeuralVDB, at SIGGRAPH 2022 that use neural graphics to create and present animation and 3D objects. Kaolin WISP is an extension of Kaolin, Nvidia's PyTorch-based toolkit for 3D deep learning. According to Fidler, Kaolin WISP targets neural fields, a branch of neural graphics concerned with representing 3D content and producing it using neural approaches. While Kaolin WISP is focused on speed, NeuralVDB is a project created to aid with compressing 3D volume data.
“Using machine learning, NeuralVDB introduces compact neural representations that dramatically reduce the memory footprint, which means that we can now represent the much higher resolution of 3D data,” Fidler said.
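To illustrate the general idea behind such compact neural representations (this is a conceptual sketch only, not NeuralVDB's actual API), a small coordinate network can stand in for a dense voxel grid: the grid stores one value per voxel, while the network stores only its weights and reconstructs values at any continuous point on demand.

```python
import numpy as np

# Conceptual sketch: a small MLP maps (x, y, z) coordinates to a
# density value, replacing an explicit dense voxel grid. This is an
# illustration of the neural-representation idea, not NeuralVDB code.

rng = np.random.default_rng(0)

# Dense baseline: a 256^3 grid of float32 densities.
grid_res = 256
dense_bytes = grid_res ** 3 * 4         # one float32 per voxel

# Neural stand-in: a 3 -> 64 -> 64 -> 1 MLP with ReLU activations.
layer_sizes = [3, 64, 64, 1]
weights = [rng.standard_normal((m, n)).astype(np.float32) * np.sqrt(2 / m)
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n, dtype=np.float32) for n in layer_sizes[1:]]

def query(coords: np.ndarray) -> np.ndarray:
    """Evaluate the network at a batch of (x, y, z) points."""
    h = coords
    for w, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ w + b, 0.0)  # ReLU hidden layers
    return h @ weights[-1] + biases[-1]

# Query arbitrary continuous coordinates -- no voxel lookup needed.
points = rng.uniform(-1.0, 1.0, size=(1024, 3)).astype(np.float32)
densities = query(points)

mlp_params = sum(w.size for w in weights) + sum(b.size for b in biases)
mlp_bytes = mlp_params * 4
print(f"dense grid: {dense_bytes / 1e6:.1f} MB, MLP: {mlp_bytes / 1e3:.1f} KB")
```

In this toy setup the untrained network already shows the memory argument: roughly 4,500 parameters (about 18 KB) versus 67 MB for the dense grid. In practice the network would be fitted to the target volume, trading reconstruction accuracy for that footprint reduction.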
Defining the metaverse
According to Rev Lebaredian, VP of Omniverse and simulation technology at Nvidia, one crucial but least understood aspect of the metaverse is the core technology required to represent everything inside it.
For Nvidia, that technology is the open-source Universal Scene Description (USD) framework created by the animation studio Pixar. Nvidia's Omniverse platform is built on top of USD.
“We’ve been hard at work advancing Universal Scene Description, extending it and making it viable as the core pillar and foundation of the metaverse, so that it will be analogous to the metaverse just like HTML is to the web,” Lebaredian said.
At SIGGRAPH 2022, Nvidia is announcing its plans for extending Universal Scene Description (USD), including new compatibility suites with graphics tools and tools to help users learn how to use USD.
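For readers unfamiliar with the format, a USD scene in its human-readable `.usda` encoding is plain text describing a hierarchy of typed "prims." A minimal hand-written example (a sketch of the format, not output from any Nvidia tool) defining a sphere under a transform looks like this:

```
#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    def Sphere "Ball"
    {
        double radius = 2.0
        color3f[] primvars:displayColor = [(0.3, 0.6, 0.9)]
    }
}
```

Because the format is an open, layerable scene description rather than a single application's file type, different tools can each contribute layers to the same scene, which is what makes it a candidate for the HTML-like role Lebaredian describes.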
Nvidia Avatar Cloud Engine introduces lifelike virtual assistants
Although chatbots and virtual avatars are not new, they have not yet been very lifelike. This may soon change, however, thanks to the new Nvidia Omniverse Avatar Cloud Engine.
According to Lebaredian, the Avatar Cloud Engine is a framework that contains the essential technologies required to build avatars: AI-driven agents that can communicate, perceive, and act in virtual environments and the metaverse.
“The metaverse without human-like representations or artificial intelligence inside it will be a very dull and sad place,” Lebaredian said. “We are providing the toolkit of technologies necessary to construct avatars of different forms so that others can take these technologies and build their specific ideas around what avatars should look, feel and behave like in those worlds.”