Physical Intelligence, a San Francisco-based robotics startup, has published new research showcasing its latest model, π0.7, which can direct robots to perform tasks they were never explicitly trained on, a result that surprised the company's own researchers. The startup describes π0.7 as a step toward a general-purpose robot brain, potentially marking an inflection point comparable to the one seen with large language models.

The key claim is compositional generalization: combining learned skills to solve unfamiliar problems. Where earlier approaches relied on rote memorization, π0.7 can synthesize fragmented training data into functional understanding. In one example, the model operated an air fryer it had never practiced with, completing the task from step-by-step instructions. The model also appears to improve in real time in new environments, without retraining.

Challenges remain, including the need for more sophisticated command-execution capability and the lack of standardized benchmarks. Even so, π0.7 matched the performance of specialist models on a range of complex tasks. Physical Intelligence, currently valued at $5.6 billion, is discussing a funding round that could raise that valuation to $11 billion; the company has given no timeline for product deployment.
