Amidst the festivities at the fall 2022 GTC conference, Nvidia took the wraps off new robotics-related hardware and services, aimed at companies that develop and test machines in industries like manufacturing. Nvidia’s robotics simulation platform Isaac Sim will soon be available in the cloud, the company said. And Nvidia’s Jetson lineup of system-on-modules is expanding with the Jetson Orin Nano, designed for low-power robots.
Isaac Sim, which launched in open beta last June, allows designers to simulate robots interacting with real-world mockups (think digital re-creations of warehouses and factory floors). Users can generate data sets from simulated sensors to train models on real-world robots, using synthetic data from batches of parallel, unique simulations to improve model performance.
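The "batches of parallel, unique simulations" approach is often called domain randomization: each simulated environment draws different scene parameters so the resulting dataset covers more variation than any single fixed scene. A minimal sketch of the idea in plain NumPy (the function names and parameters here are illustrative stand-ins, not Isaac Sim's actual API):

```python
import numpy as np

# Sketch of domain randomization for synthetic training data: each parallel
# simulation draws different scene parameters (lighting, pose, texture noise)
# so the dataset spans more variation than a single fixed scene would.
rng = np.random.default_rng(0)

def randomized_scene_params():
    """One draw of scene parameters for a single simulated environment."""
    return {
        "light_intensity": rng.uniform(0.2, 1.0),   # dim to bright
        "object_yaw_deg": rng.uniform(0.0, 360.0),  # random orientation
        "texture_noise": rng.normal(0.0, 0.05),     # material variation
    }

def render_synthetic_sample(params):
    """Stand-in for a renderer: returns a fake 'image' plus its label.
    A real pipeline would rasterize the scene and emit sensor data."""
    image = rng.random((64, 64)) * params["light_intensity"]
    label = {"yaw": params["object_yaw_deg"]}
    return image, label

# A batch of parallel, unique simulations -> one labeled synthetic dataset.
dataset = [render_synthetic_sample(randomized_scene_params()) for _ in range(8)]
print(len(dataset))  # 8 labeled samples
```

A real pipeline would replace the stand-in renderer with simulated sensor output, but the structure (randomize, render, label, repeat in parallel) is the same.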
It’s not just marketing bluster, necessarily. Some research suggests that synthetic data has the potential to solve many of the development challenges plaguing companies trying to implement AI. MIT researchers recently discovered a way to classify images using synthetic data, and nearly every major autonomous vehicle company uses simulated data to supplement the real-world data they collect from cars on the road.
The upcoming release of Isaac Sim — available on AWS RoboMaker and Nvidia NGC, from which it can be deployed to any public cloud, and soon on Nvidia’s Omniverse cloud platform — will use the company’s real-time fleet task assignment and route-planning engine, Nvidia cuOpt, to optimize robot path planning.
“With Isaac Sim in the cloud … teams around the world can share a virtual world in which to simulate and train robots,” Nvidia senior product marketing manager Gerard Andrews wrote in a blog post. “Running Isaac Sim in the cloud means developers are no longer tied to a powerful workstation to run simulations. Any device can set up, manage and review the results of simulations.”
Jetson Orin Nano
Back in March, Nvidia introduced the Jetson Orin, the company’s next generation of ARM-based, single-board PCs for edge computing use cases. First in line was the Jetson AGX Orin, and the Orin Nano now expands the portfolio with more affordable configurations.
The aforementioned Orin Nano delivers 40 trillion operations per second (TOPS), the number of computing operations a chip can perform at 100% utilization, in the smallest Jetson form factor to date. It sits on the entry-level side of the Jetson family, which now includes six Orin-based production modules aimed at robotics and a range of local, offline computing applications.
Coming in modules compatible with Nvidia’s previously announced Orin NX, the Orin Nano supports AI application pipelines with a GPU based on Nvidia’s Ampere architecture, which launched in 2020. Two versions will be available in January starting at $199: the Orin Nano 8GB, which offers up to 40 TOPS with configurable power from 7W to 15W, and the Orin Nano 4GB, which reaches up to 20 TOPS with power options from 5W to 10W.
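A TOPS rating is a theoretical ceiling, not a throughput promise, but it supports quick back-of-the-envelope estimates. As a sketch, take the Orin Nano 8GB's 40 TOPS figure and a hypothetical vision model costing about 8 billion operations per frame (the model cost here is an assumed example, and real throughput falls well below this bound due to memory bandwidth, precision, and utilization limits):

```python
# Back-of-the-envelope ceiling on inference throughput from a TOPS rating.
# 40 TOPS is the Orin Nano 8GB spec; the 8-GOP-per-frame model cost is a
# hypothetical workload chosen for illustration.

def max_inferences_per_second(tops: float, ops_per_inference: float) -> float:
    """Theoretical ceiling: total ops/sec divided by ops needed per inference."""
    return (tops * 1e12) / ops_per_inference

ceiling = max_inferences_per_second(40, 8e9)
print(ceiling)  # 5000.0 frames/second, an upper bound only
```

The same arithmetic puts the 20 TOPS Orin Nano 4GB at half that ceiling for the same model, which is the kind of trade-off the two power-configurable variants are meant to expose.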
“Over 1,000 customers and 150 partners have adopted Jetson AGX Orin since Nvidia introduced it just six months ago, and Orin Nano significantly expands this adoption,” said Deepu Talla, Nvidia’s VP of Embedded and Edge Computing, in a statement. (For comparison, the Jetson AGX Orin costs more than a thousand dollars, a significant delta.) “For millions of edge AI and [robotics] developers, Jetson Orin Nano sets a new standard for entry-level edge AI and robotics.”