NVIDIA launches into SaaS to support Omniverse on the cloud
At GTC 2022, Nvidia announced major updates and features for its Omniverse metaverse-building platform. For the first time, the company announced its intention to offer SaaS: Nvidia Omniverse Cloud, a comprehensive suite of cloud services for artists, developers, and enterprise teams to build, publish, operate, and experience metaverse applications anywhere.
“The Metaverse is the evolution of the Internet, connecting virtual 3D worlds using Universal Scene Description (USD) and visualized through a real-time virtual world simulation engine,” explained Richard Kerris, vice president of Omniverse, highlighting its applications.
Kerris said fashion designers, furniture and housewares makers, and retailers are offering virtual 3D products that can be viewed in augmented reality. Telecom operators, he added, are creating digital twins of radio networks to optimize and plan the deployment of radio towers.
He said many companies today are creating digital twins of warehouses and factories to optimize their layouts and logistics. “We’re building a digital twin of Earth to predict the climate in decades to come,” Kerris added.
Incidentally, most of the Nvidia GTC predictions made by Analytics India Magazine have largely come true. Check out Nvidia’s expectations story here.
Powering the Omniverse Ecosystem in India
Nvidia said its ecosystem of developers and customers is growing. The company currently has more than 150 software partners and has built hundreds of extensions, thanks to its internal team, its community of developers, and its partners and resellers around the world.
Nvidia said more than 2,00,000 individual users have downloaded Omniverse, spread across sectors such as telecommunications, transportation, retail, energy, automotive, manufacturing, and more.
Kerris told AIM that Nvidia Omniverse has hundreds of customers in India. “It could also be thousands,” he added, noting that Nvidia has a developer relations team in the country that is kept busy by customers already using Omniverse or wanting access to Omniverse training. “It’s a growing market for us,” he added.
Nvidia enters SaaS with Omniverse cloud services
At the GTC, Nvidia announced the launch of its first software- and infrastructure-as-a-service offering: Nvidia Omniverse Cloud.
With this, individuals and teams can design and collaborate on 3D workflows without the need for local computing power. For example, roboticists can train, simulate, test, and deploy AI-enabled intelligent machines with increased scalability and accessibility.
Some early Omniverse Cloud adopters include RIMAC Group, WPP, and Siemens.
Simply put, users can create and collaborate on any device with Omniverse App Streaming, access and modify shared virtual worlds with Omniverse Nucleus Cloud, and scale 3D workloads on the cloud with Omniverse Farm.
Powered by Nvidia OVX, Omniverse Cloud runs on the planet-scale Omniverse Cloud Computer, alongside Nvidia HGX for advanced AI and the Nvidia Graphics Delivery Network, which enables low-latency streaming of interactive 3D experiences to edge devices.
Other Omniverse Updates
Omniverse, Kerris noted, is never finished; it keeps evolving.
Omniverse is a real-time 3D database and a platform developed by Nvidia to create and operate applications for the metaverse. The platform enables designers and 3D teams to better connect, build on existing 3D pipelines, and leverage virtual-world simulations. Companies can now write applications and services on Omniverse, such as Omniverse Replicator for generating synthetic data and simulators for robotics (Isaac Sim) and autonomous vehicles (DRIVE Sim).
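The core idea behind synthetic data tools like Replicator is domain randomization: vary scene parameters and emit samples with free, perfect ground-truth labels. The sketch below is not the Replicator API; the object classes and scene fields are illustrative assumptions only.

```python
import random

# Conceptual sketch of domain randomization for synthetic training data.
# NOT the Omniverse Replicator API; object classes and fields are made up
# to illustrate the idea of randomized, automatically labeled samples.

def randomize_scene(rng: random.Random) -> dict:
    """Return one randomized scene description with its ground-truth label."""
    obj_class = rng.choice(["pallet", "forklift", "box"])
    return {
        "object": obj_class,                          # label comes for free
        "position": [rng.uniform(-5, 5) for _ in range(3)],
        "lighting": rng.uniform(0.2, 1.0),            # randomized illumination
    }

rng = random.Random(42)          # fixed seed for reproducibility
dataset = [randomize_scene(rng) for _ in range(100)]
```

A real pipeline would render each randomized scene and export images plus annotations; the randomization loop itself is the part this sketch captures.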
To ensure Omniverse runs smoothly, Nvidia announced at the GTC the second generation of Nvidia OVX, powered by next-generation GPUs and enhanced networking technology to deliver breakthrough graphics, AI, and digital-twin simulation capabilities.
(Source: Nvidia GTC)
Citing BMW Group and Jaguar Land Rover, Kerris said they were among the first customers to receive second-generation Nvidia OVX systems. Nvidia has partnered with Inspur, Lenovo, and Supermicro on system configurations, which are expected to launch in early 2023. “We will expand the partner ecosystem, including Gigabyte, H3C and QCT, in the future,” he added.
Over at GTC, Nvidia also released several major Omniverse updates around Universal Scene Description (USD), adding new collections of free online USD schema samples and tutorials. In addition, the company released USD extension samples with the latest Omniverse Kit, as well as web-based USD sample experiences.
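For readers unfamiliar with USD, its human-readable .usda form is plain text describing a hierarchy of "prims" and their attributes. The sketch below generates a minimal, illustrative layer; the prim names and values are assumptions, not drawn from Nvidia's sample collections.

```python
# Minimal sketch of a USD (Universal Scene Description) layer in its
# human-readable .usda form. Prim names ("World", the cube) and the size
# value are illustrative only.

def make_usda_layer(prim_name: str, size: float) -> str:
    """Return the text of a tiny .usda layer containing one cube prim."""
    return (
        "#usda 1.0\n"
        '(\n    defaultPrim = "World"\n)\n\n'
        'def Xform "World"\n{\n'
        f'    def Cube "{prim_name}"\n'
        "    {\n"
        f"        double size = {size}\n"
        "    }\n"
        "}\n"
    )

layer = make_usda_layer("Cube", 2.0)
print(layer)
```

In practice, USD layers like this are composed and referenced across tools, which is what lets Omniverse act as a shared 3D interchange layer.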
Within Omniverse Kit, Nvidia has released reference applications such as Create and View. Additionally, the company announced major improvements to real-time ray tracing, path tracing, performance, large-scene animation and behavior, and neural graphics, including new experimental AI tools based on GANs and diffusion models, a new AI car explorer, and a new animal explorer.
For XR (extended reality), Nvidia released major rendering and performance improvements, with its new GPUs driving fully ray-traced VR (virtual reality) in real time. The team said this delivers twice the performance of the previous generation, meaning large scenes that were previously impossible to use in fully ray-traced virtual reality now run smoothly enough for wide, expansive viewing. “You will be able to have fully ray-traced VR experiences with Omniverse,” Kerris said.
Omniverse Replicator: Nvidia has made available five containers for AWS deployment.
Nvidia also launched new ‘SimReady’ assets: thousands of free assets built for AI workflows such as digital twins, synthetic data generation, and AI training workloads.
In addition, Nvidia released new developer tools, including a new CMS (content management system) for Omniverse developers.
More importantly, Nvidia announced support for Siemens JT. JT is a widely used 3D data format, used throughout the product-development cycle and across all major industries to communicate critical design information typically locked away in CAD files, Kerris explained.
Last month at SIGGRAPH 2022, Nvidia announced the launch of its Avatar Cloud Engine (ACE), a collection of cloud-based AI models and services for developers to create, customize, and deploy engaging, interactive avatars.
Continuing that momentum, at this year’s GTC, Nvidia announced updates to its cloud-native avatar technology, Omniverse ACE, accompanied by the unveiling of Violet, a cloud-based avatar that represents the latest evolution in avatar development via ACE.
Nvidia said that to animate interactive avatars like Violet, developers must ensure the 3D character can see, hear, understand, and communicate with people. But, of course, that’s easier said than done.
To fuel this, Nvidia launched a new limited early-access program: Nvidia Tokkio, a domain-specific AI framework used to create and deploy fully autonomous, interactive customer-service avatars in the cloud. “Avatars are the inhabitants of virtual worlds, and creating and deploying interactive avatars can be incredibly difficult. ACE essentially brings any avatar to life using AI at scale anywhere with a suite of cloud-native AI microservices,” Kerris explained.
Nvidia Maxine cloud-native microservices
At GTC 2022, Nvidia also announced that Maxine, its real-time communication AI application framework, has been redesigned as cloud-native microservices. With the latest announcement, customers can apply for early access to audio-effects microservices for premium sound in multi-cloud deployments, including Noise Suppression, Room Echo Suppression, Acoustic Echo Cancellation, and Audio Super-Resolution.
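To give a sense of what a noise-suppression effect does, the toy sketch below attenuates samples below an amplitude threshold. Maxine's actual microservices use deep-learning models served over the cloud; this simple gate, with its made-up threshold, is only a conceptual stand-in.

```python
# Toy illustration of the idea behind noise suppression: pass loud samples
# through and silence quiet ones. NOT how Maxine works internally; its
# microservices use AI models, and this threshold value is an assumption.

def noise_gate(samples, threshold=0.05):
    """Zero out low-amplitude samples, passing louder ones through."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

signal = [0.01, -0.3, 0.02, 0.5, -0.04, 0.8]   # quiet noise mixed with speech peaks
clean = noise_gate(signal)
# → [0.0, -0.3, 0.0, 0.5, 0.0, 0.8]
```

A learned model improves on this by distinguishing speech from noise even when both are loud, which a fixed threshold cannot do.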
Nvidia said all new SDK features will deliver innovative AI effects while improving the SDK’s AI models for better audio and video quality, including features such as face expression estimation, eye contact, and more.