Nvidia is plumbing the 3D universe for our avatars

Nvidia sees itself as the hardware overlord of the "metaverse" and has dropped some hints about how it plans to run a parallel 3D universe in which our cartoon selves can work, play, and interact.
The chipmaker has expanded Omniverse – its underlying hardware and software engine, a sort of planetary core binding together virtual communities in an alternate 3D universe – with new features. Omniverse is also being used to create avatars that enhance real-world experiences in cars, hospitals, and robots.
“We’re not telling people to replace what they do, we’re improving what they do,” said Richard Kerris, vice president of the Omniverse platform, during a press conference.
The Omniverse announcements came this week during the company's GPU Technology Conference. Nvidia CEO Jensen Huang is expected to talk through many of them in his keynote on Tuesday.
Jensen like you've never seen him before. Source: Nvidia
One such announcement is Omniverse Avatar, which can generate interactive, intelligent AI avatars – to help diners order meals, for example, or to help drivers park and navigate the streets.
Nvidia gave the example of a conversational avatar standing in for servers in restaurants. When a customer orders food, an AI system – represented by an avatar on a screen – could converse in real time using speech recognition and natural language processing, gauge the person's mood via computer vision, and recommend dishes from a knowledge base.
To pull this off, the avatar has to run several AI models – speech, vision, and context, for example – at the same time, which is a challenge. Nvidia's answer is its Unified Compute Framework, which treats each AI model as a microservice so that applications can run on a single system or across hybrid deployments.
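To illustrate the general pattern – this is not Nvidia's actual UCF API, which wasn't detailed, and the service URLs and response fields below are hypothetical – here's a minimal Python sketch of composing independent model microservices into one avatar pipeline:

```python
import requests

# Hypothetical microservice endpoints -- stand-ins for the kind of
# speech, vision, and recommendation models such a framework would compose.
ASR_URL = "http://asr-service:8000/transcribe"
VISION_URL = "http://vision-service:8001/mood"
RECS_URL = "http://recs-service:8002/suggest"

def handle_order(audio_bytes: bytes, frame_bytes: bytes) -> list[str]:
    """Run one turn of the avatar pipeline: speech -> mood -> dishes."""
    # 1. Speech recognition: audio in, transcript out.
    text = requests.post(ASR_URL, data=audio_bytes).json()["text"]

    # 2. Computer vision: a camera frame in, an estimated mood out.
    mood = requests.post(VISION_URL, data=frame_bytes).json()["mood"]

    # 3. Recommendation: combine both signals to pick dishes.
    resp = requests.post(RECS_URL, json={"utterance": text, "mood": mood})
    return resp.json()["dishes"]
```

Because each model sits behind its own service boundary, the same pipeline can run on one box or be spread across a hybrid deployment – which is the point of the microservice framing.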
Nvidia already has underlying AI systems such as the Megatron-Turing Natural Language Generation model – a monolithic transformer-based language model developed jointly with Microsoft. The model is now offered on its DGX AI hardware.
Omniverse Avatar is also the technology underlying Drive Concierge – an "in-car personal concierge, ready for you," said Deepu Talla, vice president and general manager of Embedded and Edge Computing.
The in-car AI, represented by an interactive character, gets to know the driver and passengers through their habits, voice, and interactions, and can then place phone calls or recommend nearby restaurants.
With the help of cameras and other sensors, the system can also detect whether a driver is falling asleep, or warn them if they have left something behind in the car. The system's messages are shown on in-car screens via interactive characters or interfaces.
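Nvidia didn't detail how the drowsiness detection works, but a common computer-vision approach – not necessarily the one used here – is the eye aspect ratio (Soukupová & Čech, 2016), computed from facial landmarks around each eye:

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye aspect ratio from six eye landmarks.

    eye: array of shape (6, 2), points p1..p6 ordered around the eye contour.
    """
    vert1 = np.linalg.norm(eye[1] - eye[5])   # p2 - p6, vertical distance
    vert2 = np.linalg.norm(eye[2] - eye[4])   # p3 - p5, vertical distance
    horiz = np.linalg.norm(eye[0] - eye[3])   # p1 - p4, horizontal distance
    return (vert1 + vert2) / (2.0 * horiz)

# The ratio drops toward zero as the eye closes; a value staying below
# a threshold across many consecutive frames suggests the driver is dozing.
EAR_THRESHOLD = 0.2  # illustrative value, tuned per camera setup
```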
Old dog, new tricks
The metaverse concept isn't new – it's been around since Linden Lab's Second Life and games like The Sims. Nvidia hopes to break down proprietary walls and create a unified metaverse, so that users could in theory hop between universes created by different companies.
During the briefing, Nvidia didn't say whether it would help Facebook realize its vision of a metaverse-centric future – the idea at the heart of the social network's rebranding to Meta.
But Nvidia is courting other companies to bring their 3D work into Omniverse through its software connectors. The list includes Esri's ArcGIS CityEngine, which can create urban environments in 3D, and Replica Studios' AI voice engine, which can synthesize realistic voices for animated characters.
"What makes all of this possible is the foundation of USD, or Universal Scene Description. USD is the HTML of 3D – an important element, as it enables all of these software products to take advantage of the virtual worlds we are talking about," said Kerris. USD was developed by Pixar as a way to share 3D assets collaboratively.
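For a taste of what that looks like in practice, here's a minimal sketch using Pixar's open-source usd-core Python bindings (pip install usd-core) to build and save a tiny scene; the file and prim names are arbitrary:

```python
from pxr import Usd, UsdGeom

# Create a new USD stage backed by a human-readable .usda file.
stage = Usd.Stage.CreateNew("hello_world.usda")

# Define a transform prim and a sphere underneath it.
UsdGeom.Xform.Define(stage, "/World")
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(2.0)

# Save the scene; any USD-aware tool can now open it and layer edits on top.
stage.GetRootLayer().Save()
```

The layering is what makes the HTML comparison apt: multiple applications can each contribute non-destructive edits to the same scene, much as browsers compose documents from separate sources.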
Nvidia also announced Omniverse Enterprise – a subscription offering a software stack that helps companies create 3D workflows that plug into the Omniverse platform. Priced at $9,000 per year, it is aimed at industries such as tech and entertainment, and will be available through resellers including Dell, Lenovo, PNY, and Supermicro.
The company also uses the Omniverse platform to generate synthetic data for training "digital twins" – virtual simulations of real-world objects. Isaac Sim can train robots on synthetic data built from real and virtual information, and lets developers introduce new objects, camera views, and lighting to create custom datasets for robots to train on.
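The underlying idea is domain randomization: vary scene parameters at random so a model trained in simulation generalizes to the real world. Here's a minimal, hypothetical sketch of that idea in plain Python – not Isaac Sim's actual API – where render_frame() stands in for the simulator's renderer:

```python
import random

def random_scene_config() -> dict:
    """Randomize the knobs Nvidia mentions: objects, camera, lighting."""
    return {
        "object": random.choice(["box", "pallet", "tote", "wrench"]),
        "object_pos": [random.uniform(-1, 1), random.uniform(-1, 1), 0.0],
        "camera_angle_deg": random.uniform(-30, 30),
        "light_intensity": random.uniform(200, 1200),
    }

def generate_dataset(render_frame, n_frames: int) -> list[tuple]:
    """Render n_frames randomized scenes; labels come free from the sim."""
    dataset = []
    for _ in range(n_frames):
        cfg = random_scene_config()
        image = render_frame(cfg)   # hypothetical renderer hook
        label = cfg["object"]       # ground truth is known exactly in sim
        dataset.append((image, label))
    return dataset
```

The payoff is that every synthetic frame arrives perfectly labeled, since the simulator knows exactly what it rendered – no human annotation needed.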
An automotive equivalent is Drive Sim, which generates realistic scenes through simulated cameras for autonomous driving, and folds real-world data into the training of self-driving AI models. Its simulated camera lenses reproduce real phenomena such as motion blur, rolling shutter, and LED flicker.
Nvidia works closely with sensor manufacturers so that Drive Sim replicates their hardware accurately. The camera, radar, lidar, and ultrasonic sensor models are all ray-traced using RTX graphics technology, according to Danny Shapiro, Nvidia's vice president of automotive.
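As a toy illustration of one of those lens artifacts – not how Drive Sim implements it – horizontal motion blur can be approximated by convolving a frame with a line kernel, here using OpenCV and NumPy:

```python
import cv2
import numpy as np

def motion_blur(frame: np.ndarray, kernel_size: int = 15) -> np.ndarray:
    """Approximate horizontal motion blur with a normalized line kernel."""
    kernel = np.zeros((kernel_size, kernel_size), dtype=np.float32)
    kernel[kernel_size // 2, :] = 1.0 / kernel_size  # one horizontal line
    return cv2.filter2D(frame, -1, kernel)

# Usage: degrade a rendered frame before feeding it to a perception model,
# so the model learns to cope with the artifact it will see on real roads.
frame = cv2.imread("frame.png")
blurred = motion_blur(frame, kernel_size=25)
cv2.imwrite("frame_blurred.png", blurred)
```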
The company has incorporated some hardware announcements into the overall Omniverse narrative.
Become part of the new generation
The next-generation Jetson AGX Orin developer board will be available to manufacturers in the first quarter of next year. It has 12 CPU cores based on Arm's Cortex-A78 design, 32GB of LPDDR5 RAM, and delivers 200 TOPS (tera operations per second).
Drive Hyperion 8 is a computing platform for cars built around two Drive Orin SoCs, delivering up to 500 TOPS. The platform supports 12 cameras, nine radars, one lidar, and 12 ultrasonic sensors. It will appear in vehicles produced in 2024, and its modular design lets carmakers adopt only the features they need. Cars with older Nvidia computers can be upgraded to Drive Hyperion 8.
Nvidia also announced the Quantum-2 InfiniBand switch, which packs 57 billion transistors and is fabbed by Taiwan Semiconductor Manufacturing Co on its 7nm process. It can process 66.5 billion packets per second, and offers 64 ports at 400Gbps or 128 ports at 200Gbps, we're told.
The company also announced Morpheus – an AI framework that cybersecurity providers can use to spot irregular behavior in a network or data center and alert companies to it. The framework identifies subtle changes in applications, users, or network traffic to pick out anomalies and suspicious behavior.
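As a rough illustration of that kind of anomaly detection – not Morpheus's actual pipeline, and with made-up flow features – here's a sketch using scikit-learn's IsolationForest on network-flow records:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-flow features: bytes sent, packets, duration (s), port.
flows = np.array([
    [1_200,  10, 0.5,   443],
    [  900,   8, 0.4,   443],
    [1_100,   9, 0.6,   443],
    [98_000, 400, 0.2, 31337],   # an outlier worth flagging
])

# Fit an unsupervised detector; contamination is the expected outlier rate.
detector = IsolationForest(contamination=0.25, random_state=0).fit(flows)

# predict() returns -1 for anomalies, 1 for normal traffic.
for flow, verdict in zip(flows, detector.predict(flows)):
    if verdict == -1:
        print("suspicious flow:", flow)
```

The unsupervised approach matters here: you don't need labeled attacks, just a baseline of normal traffic against which subtle deviations stand out.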
Morpheus pulls the data it needs from Nvidia's BlueField SmartNICs, aka data processing units (DPUs), which have gained new capabilities through an upgrade to the DOCA SDK – DOCA being to BlueField DPUs what CUDA is to Nvidia GPUs.
The DOCA upgrade – version 1.2 – can also be used to "create metered cloud services that control resource access, validate every application and user, [and] isolate potentially compromised machines," with the tools running in distributed fashion on the SmartNICs themselves.
Speaking of AI, the company also announced its Launchpad program, which provides short-term access to AI hardware and software through Equinix data centers in the US, Europe, Japan, and Singapore. The latter three are Launchpad's first locations outside the US, and Nvidia hopes the expansion will see its AI on-ramp more widely adopted.
Another new offering is a twist on the Riva conversational AI tool, which is said to create a custom human-like voice in a day from just 30 minutes of sample speech. Nvidia reckons this is just the thing for organizations looking to offer custom voice interfaces. ®