
Unveiled at GTC 2026, the leader-follower imitation learning platform collects force, motion, and visual data directly on production hardware, bridging the divide between AI research labs and factory settings.
Universal Robots has introduced the UR AI Trainer, a hardware-software system developed with Scale AI. It enables operators to create high-fidelity robot training data directly on the cobots they use in production.
Revealed at NVIDIA’s GTC 2026 conference in San Jose on 16 March, the system aims to address the lab-to-factory gap: the challenge of transferring AI models trained in controlled research environments to real-world manufacturing settings.
The platform uses a leader-follower mechanism where a human guides a leader robot through a task, such as smartphone packaging, and a follower robot replicates the motion in real time. During each demonstration, the system records motion trajectories, force feedback, and visual data, generating multimodal datasets needed for training Vision-Language-Action models.
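In code, that capture loop might look like the following minimal sketch. All names here (`record_demonstration`, `Sample`, the sensor callbacks) are hypothetical illustrations, not UR or Scale AI APIs; the point is simply that each timestep logs motion, force, and vision together as one synchronized record.

```python
from dataclasses import dataclass, field

@dataclass
class Sample:
    t: float                  # timestamp within the demonstration
    joint_positions: list     # leader arm pose, mirrored by the follower
    wrench: list              # 6-axis force/torque feedback at the follower wrist
    image_id: int             # reference to a synchronized camera frame

@dataclass
class Demonstration:
    task: str
    samples: list = field(default_factory=list)

def record_demonstration(task, read_leader, read_force, grab_frame,
                         steps, dt=0.02):
    """Mirror the human-guided leader motion onto the follower while
    logging synchronized multimodal data (50 Hz when dt=0.02)."""
    demo = Demonstration(task=task)
    for i in range(steps):
        q = read_leader()       # joint positions as the human guides the leader
        # follower.move_to(q)   # placeholder: command the follower arm
        demo.samples.append(Sample(
            t=i * dt,
            joint_positions=q,
            wrench=read_force(),
            image_id=grab_frame(),
        ))
    return demo
```

A dataset of such `Demonstration` objects is the kind of multimodal, time-aligned input Vision-Language-Action training pipelines consume.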
A crucial advantage is that training happens on the same industrial cobots UR sells for production use: data collected on a UR3e or UR7e in a controlled setting can train models that will run on identical hardware on the factory floor.
“Our clients, from large enterprises to AI research labs, no longer just want AI features. They require a way to gather high-fidelity, synchronized robot and vision data to train AI models on the robots they plan to deploy. Our AI Trainer offers the industry’s first direct lab-to-factory solution for AI model training.” – Anders Beck, VP of AI Robotics Products, Universal Robots
Why force feedback matters for robot training
Most robot training data today is collected on research platforms using vision alone. That works for tasks where positional accuracy is sufficient, but it fails for tasks involving delicate contact, such as screwing, pressing, inserting, or any operation where the robot must respond to resistance.
Universal Robots asserts that its Direct Torque Control and force feedback capabilities provide the AI Trainer with a physical fidelity edge: robots can learn not just what to do visually but also how it should feel to do it correctly.
This is particularly important for tasks the robotics research community describes as contact-rich manipulation, especially assembly operations where components need precise alignment and the robot must adjust its grip based on feedback. These tasks have historically been among the most challenging to automate reliably and constitute a significant portion of manufacturing tasks that depend on human intervention.
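The "how it should feel" idea can be illustrated with a force-guarded motion, the basic primitive behind many contact-rich operations. This is a generic sketch, not UR's controller: the function names, threshold, and step size are all hypothetical.

```python
def force_guarded_insert(step_down, read_z_force,
                         contact_threshold=5.0, max_steps=200):
    """Descend in small increments until the measured vertical force
    exceeds the contact threshold -- the resistance signal that a
    vision-only policy never observes.
    Illustrative sketch; callbacks and values are hypothetical."""
    for step in range(max_steps):
        if read_z_force() >= contact_threshold:
            return step          # contact made: stop pressing
        step_down()              # move a small increment toward the part
    raise RuntimeError("no contact detected within max_steps")
```

A policy trained on demonstrations that include the force channel can learn when such a threshold is reached and how to react, rather than inferring contact indirectly from images.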
Creating a data flywheel with Scale AI
The UR AI Trainer operates on UR’s AI Accelerator platform and incorporates Scale AI’s software stack to gather, organize, and manage the training data from demonstrations. This collaboration is framed as a flywheel: operators collect demonstration data, models are trained on this data, deployed robots enhance performance, and improved performance feeds into further training.
“Universal Robots is a leader in industrial robotics, and its extensive global presence provides the perfect foundation for data capture and AI deployment. Together, we’ve developed an integrated