Nvidia aims to further develop its tools for building digital twins and machine learning models for enterprise use. The company also wants to accelerate the adoption of quantum computing with a number of new hardware and software offerings.
Digital twins, numerical models that mirror changes in real-world objects and are useful for design, manufacturing, and service delivery, vary in their level of detail. For some applications a simple database is enough to record a product's service history: when it was manufactured, who it was shipped to, and what changes were made. Other scenarios call for a complete 3D model fed with real-time sensor data, which can be used, for example, to warn of an impending component failure or to factor in a rain forecast.
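As a rough illustration of the simpler end of that spectrum, a minimal service-history record might look like the sketch below. The ServiceRecord and ServiceEvent classes and their fields are hypothetical, not part of any Nvidia product; the point is only that the most basic "digital twin" can be a plain data record kept in sync with the physical product.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ServiceEvent:
    when: date
    description: str              # e.g. "replaced gearbox bearing"

@dataclass
class ServiceRecord:
    serial_number: str
    manufactured: date
    shipped_to: str
    history: list[ServiceEvent] = field(default_factory=list)

    def log(self, when: date, description: str) -> None:
        """Append a change or repair to the product's history."""
        self.history.append(ServiceEvent(when, description))

# The simplest possible "digital twin": a record updated whenever the real object changes
twin = ServiceRecord("TURBINE-0042", date(2021, 6, 1), "Acme Wind GmbH")
twin.log(date(2022, 3, 1), "firmware updated to v2.4")
```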
At GTC 2022, the company announced new tools for creating digital twins for scientific and engineering applications. Two research groups are already using Nvidia's Modulus AI framework for physics-based machine learning, together with the Omniverse 3D virtual simulation platform, to produce faster and more reliable weather forecasts and to optimize wind farm designs.
Engineers at Siemens Gamesa Renewable Energy use the combination of Modulus and Omniverse, Nvidia's digital twin platform for scientific computing, to model how wind turbines should be placed relative to one another. The aim is to maximize the amount of electricity generated while minimizing the impact of the turbulence shed by each rotor on neighboring turbines.
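The underlying physics can be sketched with a classic analytic approximation. The snippet below uses the simple Jensen wake model, not anything from Modulus or from Siemens Gamesa's actual setup, purely to illustrate why downstream spacing matters; the parameter values are illustrative assumptions.

```python
import math

def jensen_wake_deficit(ct: float, rotor_radius: float,
                        downstream_dist: float, decay: float = 0.05) -> float:
    """Fractional wind-speed loss behind a turbine (Jensen/Park wake model).

    ct: thrust coefficient of the upstream turbine
    rotor_radius: rotor radius in metres
    downstream_dist: distance to the downstream turbine in metres
    decay: wake expansion constant (terrain dependent, ~0.05 offshore)
    """
    return (1 - math.sqrt(1 - ct)) / (1 + decay * downstream_dist / rotor_radius) ** 2

# A turbine 500 m behind another sees noticeably slower wind; since power scales
# with the cube of wind speed, even modest deficits cost real energy.
deficit = jensen_wake_deficit(ct=0.8, rotor_radius=60.0, downstream_dist=500.0)
print(f"wind speed reduced by {deficit:.1%}")  # roughly 28% in this toy case
```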
While the Siemens Gamesa model studies wind effects over a zone a few kilometers across, the ambitions of the researchers behind FourCastNet are much larger. FourCastNet, named for the Fourier neural operators it uses, is a weather forecasting tool trained on 10 terabytes of data. It emulates and forecasts extreme weather events such as hurricanes and the atmospheric rivers that caused flooding in the Pacific Northwest and in Sydney, Australia, in early March. Nvidia claims it can do this up to 45,000 times faster than traditional numerical prediction models.
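The idea behind a Fourier neural operator can be sketched in a few lines: transform a field to frequency space, apply a learned weight to a truncated set of low-frequency modes, and transform back. The layer below is a generic 1D spectral convolution for illustration only and is far simpler than FourCastNet's actual adaptive operators; the tensor shapes and mode counts are assumptions.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Toy Fourier-neural-operator layer: mix channels in frequency space."""
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes  # number of low-frequency modes to keep
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, grid points), e.g. a field sampled along a line
        x_ft = torch.fft.rfft(x)                      # to frequency space
        out_ft = torch.zeros_like(x_ft)
        # learned linear mix of only the lowest `modes` frequencies
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to grid space

# One forward pass on a random "weather field": 64 channels on 256 grid points
layer = SpectralConv1d(channels=64, modes=16)
y = layer(torch.randn(8, 64, 256))
print(y.shape)  # torch.Size([8, 64, 256])
```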
The system is a first step toward an even more ambitious project that Nvidia calls Earth-2. In November 2021 the company announced that it would build a supercomputer from its own chips and use it to create a one-meter-resolution digital twin of the Earth in its Omniverse software to model the effects of climate change.
To help other companies create and maintain their own digital twins, Nvidia will offer OVX computing systems later this year: racks running the Omniverse software and fitted with its own GPUs, storage, and high-speed switch fabrics. Nvidia is also introducing Omniverse Cloud, which lets creatives, designers, and developers collaborate on 3D designs without needing access to their own high-performance computing hardware.
The company is also working with robotics manufacturers and data providers to increase the number of Omniverse connectors, which developers can use to keep their digital twins aligned with, and interacting with, the real world.
Companies already using Omniverse include the retailers Kroger and Lowe's, alongside Siemens Energy, BMW, and Ericsson, according to Nvidia. The retailers use Omniverse to simulate their stores and the supply chains that stock them.
Running machine learning models can be computationally intensive, but training them is even more demanding, since the process requires a system that can perform complex calculations on large amounts of data. At GTC 2022, Nvidia introduced a new GPU architecture, Hopper, to succeed its Ampere design and speed up exactly these tasks. The first HPC chip based on it is the H100. The architecture is named after computer science pioneer Grace Hopper, who developed one of the first compilers.
According to Nvidia, the chip enables large language models and recommendation systems, which are increasingly used in enterprise applications, to run in real time, and it adds new instructions that accelerate route optimization and genomics applications. Because the GPU can be split into multiple instances, much like virtual machines on a CPU, it will also be useful for running several smaller applications on premises or in the cloud.
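The route-optimization and genomics workloads Nvidia mentions are classic dynamic-programming problems, which is the pattern the H100's new instructions target. The snippet below shows that pattern in plain Python using Floyd-Warshall all-pairs shortest paths; it only illustrates the class of algorithm, not how the GPU instructions are invoked.

```python
import math

def floyd_warshall(dist):
    """All-pairs shortest paths; dist[i][j] is the direct edge cost (math.inf if none)."""
    n = len(dist)
    d = [row[:] for row in dist]          # don't mutate the caller's matrix
    for k in range(n):                    # allow paths through intermediate node k
        for i in range(n):
            for j in range(n):
                # this add/compare inner step is the dynamic-programming kernel
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

inf = math.inf
roads = [
    [0,   5,   inf, 10],
    [inf, 0,   3,   inf],
    [inf, inf, 0,   1],
    [inf, inf, inf, 0],
]
print(floyd_warshall(roads))  # shortest routes between all pairs of depots
```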
Compared with scientific modelling, training AI models requires less mathematical precision but higher data throughput, and the H100's design allows applications to trade one off against the other. The result, according to Nvidia, is that systems built with the H100 can train models nine times faster than those using the previous-generation A100.
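That precision-for-throughput trade-off is already familiar from mixed-precision training on current GPUs. The sketch below shows the standard PyTorch autocast pattern as a stand-in for the kind of trade-off the H100 makes; the model, data, and dtype choice are illustrative assumptions, not H100-specific code.

```python
import torch

# assumes a CUDA-capable GPU; reduced precision is where the throughput gain comes from
device = "cuda"
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 10)
).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(256, 1024, device=device)
target = torch.randint(0, 10, (256,), device=device)

optimizer.zero_grad()
# forward pass runs in reduced precision for higher throughput...
with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = torch.nn.functional.cross_entropy(model(x), target)
# ...while gradient scaling keeps the full-precision weight update numerically stable
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```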
Nvidia said its new H100 chips will also extend confidential computing to the GPU, a capability previously available only on CPUs. Confidential computing lets organizations process sensitive healthcare or financial data inside the secure enclave of a purpose-built processor, with the data remaining encrypted whenever it is outside that enclave.
The ability to securely process such data on a GPU, even in a public cloud or colocation facility, could enable organizations to accelerate the development and deployment of machine learning models without increasing capital expenditures.
Quantum computing promises, or perhaps threatens, to displace large parts of today's market for high-performance computers, replacing them with quantum processors that exploit subatomic phenomena to solve previously intractable optimization problems. When that happens, Nvidia's sales in the supercomputing market could plummet. In the meantime, however, the manufacturer's chips and software play an important role in simulating quantum computing systems.
Researchers at the interface between quantum and classical computing have developed a low-level machine language called Quantum Intermediate Representation. Nvidia has developed a compiler for this language, nvq++, which will initially be used by researchers at Oak Ridge National Laboratory. There is also an SDK for accelerating quantum workflows, cuQuantum, available as a container and optimized to run on Nvidia's A100 GPU. These tools could help companies build quantum capabilities at a time when true quantum computing is not yet widely available.
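Simulating quantum circuits on classical hardware boils down to repeated matrix-vector products on an exponentially large state vector, exactly the kind of arithmetic GPUs handle well and that cuQuantum accelerates. The sketch below builds a two-qubit Bell state with plain NumPy to show what such a simulation involves; it does not use cuQuantum's actual API.

```python
import numpy as np

# single- and two-qubit gates
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# two-qubit register starts in |00>
state = np.zeros(4, dtype=complex)
state[0] = 1.0

# apply H to qubit 0, then CNOT(control=0, target=1): a Bell state
state = np.kron(H, I2) @ state
state = CNOT @ state

probabilities = np.abs(state) ** 2
print(probabilities)  # ~[0.5, 0, 0, 0.5]: measurements give 00 or 11 with equal odds
```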