The promise of physical AI is that engineers could program physical agents as easily as digital ones. However, we’re not there yet. Robotics progress is limited by a lack of data from physical environments. To train machines, companies must create mock-up warehouses for testing, and a new industry is emerging focused on monitoring factory lines and gig workers to develop deep learning models for robots.
Simulation is another approach: accurate virtual replicas of real-world settings can give roboticists the data and environments they need, at a scale physical testing can't match.
Antioch, a startup creating simulation tools for robotics developers, aims to bridge the sim-to-real gap—the problem of ensuring virtual environments are realistic enough for robots trained within them to function reliably in the real world.
Antioch CEO and cofounder Harry Mellsop said, “How can we do the best possible job reducing that gap, to make simulation feel just like the real world from the perspective of your autonomous system?” The company told TechCrunch it has raised an $8.5 million seed round at a $60 million valuation, led by venture firms A* and Category Ventures, with participation from MaC Venture Capital, Abstract, Box Group, and Icehouse Ventures.
Mellsop founded the New York-based company with four others in May last year. Among them, Alex Langshur and Michael Calvey were cofounders of Transpose, a security and intelligence firm sold to Chainalysis, while Collin Schlager and Colton Swingle formerly worked at Google DeepMind and Meta Reality Labs, respectively.
The need for improved simulation is central to what many big autonomy companies are tackling. In self-driving cars, Waymo uses Google DeepMind’s world model to assess its driving model, theoretically reducing the data collection needed for deploying Waymo vehicles in new locations.
Building and using such models to test robots demands different skills than building a self-driving car, and Antioch wants to be the platform that solves this problem for newer companies that lack those resources. Smaller companies also can't afford to construct physical testing facilities or run sensor-equipped cars over long distances.
“The vast majority of the industry doesn’t use simulation whatsoever, and I think we’re now just really understanding clearly that we need to move faster,” Mellsop said.
Antioch likens its product to Cursor, an AI-powered software development tool. Antioch enables robot builders to create numerous digital versions of their hardware and connect them to simulated sensors that mirror real-world data. These settings allow developers to test edge cases, conduct reinforcement learning, or generate new training data.
The challenge lies in making sure the simulation’s physics match reality closely enough that a model trained in it can control a real machine without failing. The company starts with models from Nvidia, World Labs, and others, then builds domain-specific libraries on top to make them easier to use. And because it works with many customers, Antioch accumulates context for refining its simulations that no single physical AI company could match.
“What happened with software engineering and LLMs is just starting to happen with physical AI,” Çağla Kaymaz, a partner at Category Ventures, said. “With software, malfunctioning coding tools typically impact only the digital world. However, in the physical world, the stakes are significantly higher.”
Antioch currently focuses mainly on sensor and perception systems, key to automated vehicles, agricultural machinery, or drones. Ambitions for physical AI to power universal human-task-replicating robots remain further off. Although Antioch targets startups, its earliest engagements have been with major corporations already investing in robotics.
Adrian Macneil, an angel investor backing Antioch, knows this field well. At the self-driving startup Cruise, he built the company’s data infrastructure, and in 2021 he founded Foxglove, which provides similar data tooling to physical AI startups.
“Simulation proves vital for establishing a safety case or handling high-accuracy tasks,” he stated at the Ride.AI conference in San Francisco. “Driving enough real-world miles isn’t feasible.”
Macneil wants physical AI to get the kind of off-the-shelf tooling that kicked off the SaaS revolution, platforms such as GitHub, Stripe, and Twilio. “We need a lot more of the entire toolchain to be available off the shelf,” he told TechCrunch.
Mellsop believes, “Anyone building an autonomous system for the real world will do so in software primarily in two to three years. It’s the first time you can have autonomous agents iterate on a physical autonomy system and actually close the feedback loop.”
Experiments in this vein are underway. David Mayo, a researcher at MIT’s Computer Science and Artificial Intelligence Lab, utilizes Antioch’s platform for evaluating LLMs. In one trial, Mayo’s AI models design robots and test them via Antioch’s simulator, even pitting models against each other in simulated competitions, like pushing a rival bot off a platform. A realistic sandbox for LLMs could provide a fresh benchmarking approach.
Before AI engineers take over, though, the gap between digital models and reality still has to be bridged.
