Beyond Words: Why Physical AI Is the Next Frontier
The same way LLMs can predict the next word, robotic models are learning to predict the next physical action.
Kahini Shah

For the past few years, the major breakthroughs in technology have reshaped the digital world. ChatGPT now reaches hundreds of millions of weekly active users, and 95 percent of U.S. companies report using generative AI to boost business functions.
As transformative as those digital breakthroughs have been, something even bigger is on the horizon: we’re beginning the transition of innovation from the digital world to the physical one.
Around us are signs that we’re entering the age of physical agents. These are not the simple robotic systems of the past, like automated vacuums and manufacturing robots, but general-purpose systems that can think, adapt, and make decisions in real time. We’re talking about robots in every form you can imagine, from cars and delivery drones to humanoids, all acting with judgment and learning from experience.
Today’s breakthroughs in robotics are driven by the same core technologies behind language models. But while LLMs are trained on vast text datasets to predict the next word, robotic foundation models learn to predict the next physical action using video, simulation, and human demonstrations.
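To make the analogy concrete, here’s a minimal sketch in PyTorch. Everything in it is hypothetical: the `NextActionPolicy` class, the dimensions, and the random tensors standing in for demonstrations are illustrative, not any particular lab’s model. It shows the core recipe of learning from demonstrations: given a history of observations, predict the action a human demonstrator took next, much as a language model predicts the next token.

```python
import torch
import torch.nn as nn

class NextActionPolicy(nn.Module):
    """Toy decoder: maps a history of observations to the next motor command."""
    def __init__(self, obs_dim=32, act_dim=7, d_model=64, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)   # observation features -> "tokens"
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, act_dim)    # hidden state -> action (e.g. joint targets)

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim), e.g. encoded camera and proprioception frames
        h = self.backbone(self.embed(obs_seq))
        return self.head(h[:, -1])                 # predict the action for the next timestep

# One gradient step of behavioral cloning on (hypothetical) human demonstrations.
policy = NextActionPolicy()
obs = torch.randn(8, 16, 32)        # 8 demo clips, 16 timesteps of observation features
expert_action = torch.randn(8, 7)   # the action each demonstrator took next
loss = nn.functional.mse_loss(policy(obs), expert_action)
loss.backward()                     # the same "predict the next step" recipe, applied to motion
```

Real robotic foundation models are far larger and fold in video and simulation data, but the training objective is recognizably this one.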
The promise is similar. Just as an LLM can predict the next word or sentence zero-shot or few-shot, that is, with few or no examples, robotic foundation models are beginning to show early signs of the same adaptability in predicting the next physical action. They’re letting robots learn general-purpose skills and apply them to new tasks with limited additional training, and in some cases none at all. We’re already seeing incredible progress, from four-legged robots trained to complete parkour courses to humanoid systems that can load dishwashers and prepare food.
Despite this advancing technology, robotics presents challenges that make it far more complex than language modeling. First, while LLMs can be trained on internet-scale text, there’s no equivalent corpus of ready-made data for physical work. Each robot must learn from simulations, demonstrations, and real-world trial and error, which makes progress slower and costlier. Second, physical AI is inseparable from hardware. In language, text serves as a standard input and output, which keeps models hardware agnostic. In robotics, inputs and outputs are hardware dependent: a robot may perceive the world through a variety of sensors and output control instructions to a diverse set of motors and actuators. This diversity makes it harder to define a standard input–output representation, as the sketch below illustrates. Third, robots operate in an endlessly uncertain world. The physical environment is unpredictable and unforgiving, whether navigating a cluttered construction site or carefully handling fragile goods in a warehouse.
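Here is a small Python sketch, with entirely hypothetical schemas, of how observation spaces diverge across hardware. A language model’s interface is always text in, text out; each robot defines its own.

```python
# Hypothetical observation schemas for two very different robots.
from dataclasses import dataclass
import numpy as np

@dataclass
class ArmObservation:                  # a warehouse manipulator
    wrist_camera: np.ndarray           # (H, W, 3) RGB image
    joint_angles: np.ndarray           # (7,) radians
    gripper_force: float               # newtons at the fingertips

@dataclass
class DroneObservation:                # an aerial crop sprayer
    gps: tuple                         # (lat, lon, altitude)
    imu: np.ndarray                    # (6,) accelerometer + gyroscope readings
    downward_lidar: np.ndarray         # (N,) range returns toward the ground

# The action spaces diverge just as much: joint torques for the arm,
# rotor thrusts and spray valves for the drone. A robotic foundation
# model has to learn a shared representation across schemas like these,
# whereas a language model sees the same token stream everywhere.
```

Bridging incompatibilities like these, not just scaling any single model, is a large part of what makes physical AI hard.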
At Obvious, we see massive opportunity in addressing these challenges, because the companies that do will lead transformative breakthroughs that touch every industry. Imagine solar construction crews made entirely of robots building clean energy infrastructure on a scale we’ve never seen before. In hospitals, bots could help restock supplies and even assist with surgeries. The jobs that are most ripe for automation are those that are dangerous, hard to fill, or that people just don’t want to do. Obvious portfolio companies Dexterity and Pyka are already using robotic systems to take on high-risk tasks, such as loading trucks in warehouses and spraying crops from the air, both jobs historically prone to high injury and fatality rates.
We’re optimistic about the future we can build with these systems, not just to create new companies and products, but to build a new kind of infrastructure that can transform every industry and unlock entirely new ones. We’re focused on the “nervous system” for tomorrow’s change agents, working with founders and innovators to build the systems that will work alongside us, help us grow sustainably, and take on tasks that were once off-limits to automation.
This week at TEDAI in San Francisco, in an “AI’s Next Frontier” session hosted by Obvious, I’ll be talking more about this next wave of robotic AI. I think this change will be bigger and arrive faster than most people expect. We’ll also discuss how we can manage this transition with foresight and moral responsibility.
For a deeper look at our outlook on robotic AI and other world positive technologies, look for my essay in Obvious’ 2026 World Positive Report, coming this fall.
In the meantime, follow me and Obvious on LinkedIn to stay up to date on this and other exciting innovation frontiers.