Nvidia last week became the third company ever to exceed a $3 trillion market capitalization. The company’s valuation increased by a trillion dollars in just the last three months.
As the top chipmaker fueling the generative AI (genAI) boom, Nvidia has had over-the-top success that feels like a fluke, a blip, a surge based on a bubble.
But I think that’s wrong. In fact, I think the company might be wildly undervalued.
Let’s compare. At the time I wrote this, Apple’s market capitalization was around $3.003 trillion and Nvidia’s was even higher: $3.012 trillion.
The valuation of a company is based on its share price, which itself is based in large measure on the perception of its earning potential in the future.
Apple’s revenue in 2023 was $383 billion (a 3% decline from the previous year). Here’s a question: How does Apple double that revenue in the coming years? Sell more iPhones? Add AI to iPhones? Sell more expensive iPhones? Push Apple Vision Pro sales? Deliver more iPads? Add new financial services?
I just don’t see a path for Apple to continue its last decade of growth into the future.
Nvidia, on the other hand, has massive future growth potential. The future of AI processing, the future of self-driving cars, the future of robotics, the future of industrial automation: if you think these realms will expand in the years ahead (and they almost certainly will), then Nvidia’s sales will expand accordingly.
One Nvidia business initiative alone, industrial robots and robotic systems, represents the transformation of a $50 trillion industry, according to Jensen Huang, Nvidia’s cofounder and CEO.
Huang, a tech rock star/Steve Jobs-like figure in Taiwan, the country of his birth (here he is signing a groupie’s chest last week), gave a surprisingly Jobsian keynote at Computex 2024.
He even echoed Jobs’ “iPod, phone, internet communicator. Are you getting it?” (Huang’s version was: “computer, acceleration libraries, pre-trained models.”)
During the keynote, he laid out a breathtaking vision.
The Physical AI concept
Huang unveiled a groundbreaking concept called “Physical AI,” which he described as “AI that understands the laws of physics.”
His vision for “Physical AI” involves a complex virtual environment that simulates real-world physics (gravity, inertia, friction, temperature and other factors) and is populated with virtual people, objects and settings, such as a factory floor. Inside that environment, exact digital replicas of physical robots and robotic systems “learn” by testing thousands of options and scenarios, then retaining the solutions in the software that will control the actual robots and robotic systems.
One output of a “Physical AI” system is generalist, embodied AI agents that can operate in both virtual and physical environments.
This is all based on the “digital twin” idea I told you about more than a year ago. In a nutshell, digital twin environments enhance factories in nine major ways:
1. Real-time monitoring and analysis
2. Predictive maintenance
3. Production optimization
4. Quality control and defect detection
5. Enhanced decision-making
6. Training and safety
7. Space and workflow planning
8. Lifecycle management
9. Simulation and testing
The idea could revolutionize the field of robotics and autonomous systems. At the heart of this innovation lies Nvidia’s Omniverse, a powerful platform that combines real-time physically based rendering, physics simulation, and genAI technologies. Nvidia calls Omniverse “the operating system for Physical AI.”
The “Physical AI” concept represents a big change in the way robots and autonomous machines learn. Traditionally, humanoid, factory and other kinds of robots have been trained and tested in physical labs. Balancing robots start out tethered to protect them when they fall as they learn to navigate. This painstaking process involves countless hours of trial and error. (Here’s what that looks like at Boston Dynamics.)
In a “Physical AI” scenario, a similar process takes place with a robot’s digital clone, or twin, in virtual space. The trial and error is radically accelerated without risk to people or equipment, and a vastly larger number of attempts can be made during training. Once the robot learns or is programmed to navigate the virtual world flawlessly, that software is applied to a real robot, which can be fully updated with all that experience and “knowledge.”
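To make that loop concrete, here is a minimal, hypothetical sketch in Python. It is not Nvidia’s Omniverse or Isaac software; it simply illustrates the general sim-to-real idea the company describes: a controller is tuned against a toy “digital twin” through thousands of cheap virtual trials, and the learned parameters are then carried over to the physical machine.

```python
# Hypothetical toy example -- not Nvidia's Omniverse/Isaac workflow.
# A cart on a track must stop at position 1.0. We "train" a simple PD
# controller in a simulated digital twin, then reuse the learned gains
# as a stand-in for deploying to the physical robot.

import random

def run_episode(kp: float, kd: float, friction: float = 0.1) -> float:
    """Simulate one trial; return final distance from the target (lower is better)."""
    pos, vel, target, dt = 0.0, 0.0, 1.0, 0.02
    for _ in range(500):                           # 10 simulated seconds
        force = kp * (target - pos) - kd * vel     # simple PD control law
        vel += (force - friction * vel) * dt       # toy physics, unit mass
        pos += vel * dt
    return abs(target - pos)

# "Learning" in the virtual world: thousands of cheap trials, nothing breaks.
best_gains, best_error = (0.0, 0.0), float("inf")
for _ in range(5000):
    gains = (random.uniform(0.0, 20.0), random.uniform(0.0, 10.0))
    error = run_episode(*gains)
    if error < best_error:
        best_gains, best_error = gains, error

print(f"gains learned in simulation: kp={best_gains[0]:.2f}, kd={best_gains[1]:.2f}")

# "Deployment": the same controller driving the real machine, where the
# physics never match the twin exactly (here, slightly higher friction).
print(f"error on 'hardware': {run_episode(*best_gains, friction=0.15):.4f}")
```

Real deployments also have to contend with the gap between simulation and hardware, which is exactly why physically accurate modeling of gravity, friction and inertia matters so much in this approach.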
In the “Physical AI” version of Omniverse, digital twin factories can train the robots themselves, and model the development of robots that work together with human workers for greater efficiency and safety, according to Nvidia.
The Omniverse platform integrates several Nvidia technologies for robot development, simulation, and testing, including Metropolis vision AI, Isaac AI, Isaac Manipulator, and Project GR00T.
Nvidia says more than a dozen robotics manufacturers are already using Omniverse to create virtual replicas of physical automated factories.
Why Nvidia will only grow
Look at the stars aligning for Nvidia. The company leads the industry in AI hardware and software, as well as in solutions for data centers, cloud computing, and edge computing. It sells a lot of this stuff, and it wants to sell a lot more.
For the foreseeable future, Nvidia will continue to dominate the market for the AI chips used to train and run AI chatbots and a thousand other AI applications.
In the realm of automation and robotics systems, Huang claims that just about everything, from cars to restaurants to tractors, will become an AI-controlled robot in the future, and that those robots will be built by robots in robotic factories. That claim is likely to be realized.
Huang’s further claim that AI-automated robotic systems will be most cost-effectively trained and optimized in digital twin “Physical AI” environments also checks out.
But here’s the mind-blowing part. Nvidia doesn’t have any competition in the “Physical AI” space. And the barriers to entry are gigantic.
The only conclusion: all roads lead to massive future upside growth for Nvidia.
Nvidia will make the chips that power the robots, the chips that power the “Physical AI” environments, and the chips that power self-driving cars, along with the software, platforms, and AI that will enable companies to buy and use all those processors.
It’s Nvidia’s world now. Figuratively, and also digitally. (Huang also unveiled the concept of “Earth 2” — a “digital twin of the Earth” that would enable humanity to “predict the future of our planet.”)
Nvidia’s “Physical AI” idea is the killer concept of our generation, the foundational model upon which our robotic, automated future will be built.