There’s a world where robots integrate seamlessly into our daily work, and eventually our cities and home lives.

When the RAI Institute was founded in 2022, its goal was to solve the toughest research challenges in robotics and AI. Three years in, the Institute focuses on cutting-edge research designed to help usher in a future where robots can realize their full potential.

These types of challenges are complex: How do we best design robot arms for control of both dynamic (fast and strong) tasks and delicate tasks? How do we eliminate the sim-to-real gap while achieving high performance across a variety of robot designs and morphologies? Solving these fundamental problems and others will determine whether robots can move beyond controlled factory floors and research labs and into the real world, performing unstructured tasks like troubleshooting, repair, and construction. Additionally, these solutions can be applied to broader settings than just factories — we could see this work applied in hospitals, warehouses, and especially homes.

In 2025, the RAI Institute did work that advances the state-of-the-art to help realize a future where robots achieve their full potential. The Institute’s progress includes pushing the limits of robot dexterity, expanding the foundations of robot control, building new robot structures and morphologies, and advancing their ability to navigate complex environments.

Expanding Physical Robotic Capabilities with Reinforcement Learning

The Capability Gap: Traditional robot control uses algorithms designed for a specific task. Reinforcement learning promises robots that learn new tasks through millions of computer simulations, letting the robot discover how to perform tasks under a variety of conditions. Bridging the gap between simulated training and real-world deployment remains a fundamental challenge and requires repeatable performance from the robot and accurate simulation models of the robot and the physics of the environment. Policies that perform perfectly in simulation often fail on hardware, creating what has come to be called the sim-to-real gap. Solving this problem will make it easier to ensure reliable, consistent performance for robots doing new things — whether that’s a new task or moving in a brand new way.

RAI Institute’s Progress in 2025: In 2025, the Institute’s reinforcement learning pipeline was deployed on both quadruped and humanoid robots to unlock highly dynamic levels of performance. This work plays a critical role in advancing robot capabilities by expanding their skillsets and dramatically streamlining the process of learning new skills and behaviors. In March 2025, trained policies allowed Spot to achieve record running speeds.

Using RL policies developed at the RAI Institute, Spot ran at speeds that significantly surpassed its previous performance, increasing running speed from 1.7 m/s (3.8 mph) to 5.2 m/s (11.5 mph), more than three times Spot’s default top speed.

RAI Institute RL policies were also deployed on the Atlas robot, a humanoid developed by Boston Dynamics. The work helped showcase the robot’s physical capabilities. At the heart of the learning process was a physics-based simulator that generated training data for a variety of maneuvers. The training pipeline retargeted human motion capture data to the robot, then learned a control policy using RL from roughly 150 million runs of the simulator. That control policy was then transferred zero-shot to the hardware, meaning the robot executed the behavior successfully without any physical-world training.
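When a policy is learned from retargeted motion capture, a common ingredient is a tracking reward that scores how closely the simulated robot follows the reference pose. The sketch below is a minimal, illustrative version of such a reward with an exponential kernel; the function name, error metric, and `sigma` scale are assumptions for illustration, not the Institute’s published reward.

```python
import numpy as np

def tracking_reward(q_robot, q_ref, sigma=0.5):
    """Exponential pose-tracking reward: equals 1.0 when the robot's
    joint angles exactly match the retargeted reference pose, and
    decays smoothly with squared tracking error.
    Illustrative sketch only; terms and scales are assumptions."""
    err = np.sum((np.asarray(q_robot) - np.asarray(q_ref)) ** 2)
    return float(np.exp(-err / sigma**2))
```

An exponential kernel like this keeps the reward bounded in (0, 1], which tends to make RL training more stable than an unbounded negative squared-error penalty.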

To push the boundaries of robotic performance, teams at the RAI Institute built custom robotic platforms that demonstrated a novel level of athletic intelligence. One such platform was the Ultra Mobility Vehicle (UMV), a wheeled robot designed to combine the efficiency of bicycles with the dynamic capabilities of legged robots. UMV can drive, roll, hop, jump, flip and land autonomously.

The robot features a unique Z-shaped design with a lightweight carbon fiber bike frame in its lower half and a jumping mechanism in its upper half. The robot was equipped with various sensors including joint encoders, IMUs, time-of-flight sensors, LiDAR, and cameras to navigate its environment and perform athletic maneuvers. The most recent version, the UMV mini, features over 90 custom mechanical parts built in-house by the RAI Institute hardware team.

The UMV team used RL to train the robot through millions of physics-based simulations, where policies learned to perform complex behaviors such as jumping up onto a one-meter table, executing a front flip, holding a sustained wheelie, and hopping on its rear wheel. The training process rewarded desired tasks such as driving, turning, jumping, and maintaining balance, allowing the simulated robot to learn optimal behavior sequences. Unlike the Atlas and Unitree work, the development process for UMV did not rely on a source of target motion data. Instead, behaviors are specified with high-level descriptions such as ‘jump onto table’, ‘pop a wheelie’, and ‘hop in place’. The deployment process involved an iterative cycle of training policies in simulation, testing on hardware, and refining rewards based on observed performance.
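Specifying a behavior through rewards rather than target motions typically means composing a few weighted terms: progress toward the goal, staying upright, and an effort penalty. The sketch below illustrates what such a reward for a ‘jump onto table’ behavior might look like; all term names, weights, and the state dictionary layout are illustrative assumptions, not the UMV team’s actual reward.

```python
import numpy as np

def jump_onto_table_reward(state, table_height=1.0):
    """Hedged sketch of a reward-specified 'jump onto table' behavior.
    All weights and terms are illustrative assumptions."""
    # Reward progress of the robot's base toward the table-top height.
    height_term = -abs(state["base_height"] - table_height)
    # Reward staying upright (z-component of the base's up vector,
    # 1.0 when perfectly upright, negative when inverted).
    upright_term = state["up_vector_z"]
    # Penalize large actuator effort to encourage efficient motions.
    effort_term = -0.01 * np.sum(np.square(state["joint_torques"]))
    return 2.0 * height_term + 1.0 * upright_term + effort_term
```

Iterating on weights like these against observed hardware behavior matches the train-test-refine cycle described above.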

As with reinforcement learning on humanoid robots, addressing the sim-to-real gap was critical to successful policy deployment. The team employed two main strategies: randomizing simulation parameters such as robot masses, motor torque constants, and ground friction to account for real-world uncertainties, and improving the accuracy of the robot model through focused empirical testing. These tests included drop-tests to measure wheel bouncing properties and detailed power system modeling from motor torque production to battery voltage behavior. These refinements significantly reduced the sim-to-real gap and enabled more successful policy deployment on the physical robot.
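The first strategy above, randomizing simulation parameters each training episode, can be sketched in a few lines. The parameter names and ranges below are illustrative assumptions chosen to match the kinds of quantities the text mentions (masses, torque constants, ground friction), not the team’s actual values.

```python
import random

def randomize_sim_params(nominal, rng=random):
    """Domain-randomization sketch: each training episode samples
    physical parameters around their nominal values so the policy
    cannot overfit to one exact simulation model.
    Ranges are illustrative assumptions."""
    return {
        # Perturb mass and motor torque constant multiplicatively.
        "body_mass": nominal["body_mass"] * rng.uniform(0.9, 1.1),
        "torque_constant": nominal["torque_constant"] * rng.uniform(0.95, 1.05),
        # Sample ground friction over a wide absolute range.
        "ground_friction": rng.uniform(0.4, 1.0),
    }
```

A policy trained across many such samples must succeed for every plausible robot, which is what makes it more likely to succeed on the one real robot.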

Boston Dynamics Partnership: Technology Transfer in Action

Early in 2025, the RAI Institute announced its partnership with Boston Dynamics, collaborating on RL pipelines deployed on both Spot and Atlas robots. The partnership validates the technology transfer model: foundational capabilities developed at the RAI Institute flowed into public demonstrations, including quadruped flips on stage at “America’s Got Talent” and Atlas walking on stage at CES.

Moving forward, Boston Dynamics and the RAI Institute will work together on whole-body loco-manipulation and full-body contact strategies, pushing toward robots that can handle the complex, dynamic work that will enable them to be deployed in automotive manufacturing and beyond.

Pushing the Boundaries of Robotic Dexterous Manipulation

The Capability Gap: Robots can use simple grasps to grip static objects precisely and reliably, but they can’t manipulate them dynamically the way humans do. Reorienting objects in hand, adjusting grip in response to slippage, catching, and throwing all require hardware that senses and responds rapidly based on the trajectory of the object. Such skills will make robots better able to handle a wider variety of objects with dexterity at human-like speeds.

RAI Institute’s Progress in 2025: Towards the end of 2025, the RAI Institute debuted a robot with mass and torque properties similar to those of a human being. We call it AthenaZero. It is a custom-built, low-impedance platform designed to study dynamic robot manipulation.

This robot is capable of throwing a baseball at 31.3 m/s (70 mph) and can catch and bat at short distances of 7 meters (23 ft). These skills require perception and action that operate at high response rates.

The AthenaZero platform uses custom actuator linkages and motors that were designed and built at RAI specifically for this project. For the throwing, catching, and batting tasks, perception was achieved by leveraging IR-based motion capture that runs at 240Hz, allowing the robot to fit accurate ballistic trajectories to the ball’s path after release.
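Fitting a ballistic trajectory to 240 Hz motion-capture samples reduces, in the drag-free case, to a least-squares fit: since z(t) = z0 + v0·t − ½gt², adding ½gt² back to each observed height leaves a straight line in t. The sketch below shows this simplification; real pipelines may also model drag and spin, and the function names are illustrative.

```python
import numpy as np

def fit_ballistic(times, heights, g=9.81):
    """Fit initial height z0 and vertical velocity v0 of a ball in
    free flight from motion-capture samples, assuming drag-free
    ballistics (a simplification).
    z(t) = z0 + v0*t - 0.5*g*t^2  =>  z + 0.5*g*t^2 is linear in t."""
    t = np.asarray(times, dtype=float)
    z = np.asarray(heights, dtype=float) + 0.5 * g * t**2
    v0, z0 = np.polyfit(t, z, 1)  # slope = v0, intercept = z0
    return z0, v0

def predict_height(z0, v0, t, g=9.81):
    """Extrapolate the fitted trajectory to a future time t."""
    return z0 + v0 * t - 0.5 * g * t**2
```

With samples arriving every ~4.2 ms, even a few observations after release pin down the trajectory well enough to plan a catch or a swing.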

Control for each primitive (throwing, catching, and batting) relied on dynamic trajectory optimization, local online adaptation, and temporal trajectory selection.

Whole Body Control for Manipulation

The ReLIC (Reinforcement Learning for Interlimb Coordination) framework was introduced to advance loco-manipulation by enabling robots to dynamically reassign limb roles during task execution. Unlike traditional approaches that fix limbs into predefined roles, ReLIC allows the robot to fluidly switch which limbs are used for support or manipulation, without prior specification.

The team implemented contact-time-based gait regularization and addressed the sim-to-real gap through domain randomization and motor calibration procedures. ReLIC was evaluated on 12 real-world loco-manipulation tasks across three coordination types: mobile interlimb coordination tasks like putting a yoga ball into a basket, stationary interlimb coordination tasks such as opening and placing an item in a drawer, and foot-assisted manipulation tasks such as moving objects out of the way while rolling a chair.

Across all tasks, ReLIC achieved a 78.9% average success rate, consistently outperforming baseline methods.

Building on the initial promise of ReLIC results, the team further explored dynamic whole-body manipulation with a Spot quadruped and a stack of tires. It used a hierarchical control architecture that combined sampling-based control for tasks like tire uprighting, dragging, and stacking, and RL for the tire rolling.
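The hierarchical split described above, sampling-based control for the quasi-static tire skills and RL for the dynamic rolling, can be pictured as a simple top-level dispatch. The task names and controller labels below are illustrative assumptions, not the system's actual interface.

```python
def select_controller(task):
    """Sketch of a hierarchical dispatch: quasi-static tire skills go
    to a sampling-based controller, dynamic rolling to an RL policy.
    Task and controller names are illustrative assumptions."""
    sampling_tasks = {"upright", "drag", "stack"}
    if task in sampling_tasks:
        return "sampling_based_controller"
    if task == "roll":
        return "rl_rolling_policy"
    raise ValueError(f"unknown task: {task}")
```

The design choice follows the text: sampling-based methods handle contact-rich, goal-directed motions that can be re-planned online, while the learned policy handles the fast, continuous dynamics of rolling.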

The system demonstrated impressive real-world performance, completing tire uprighting sequences in an average of 5.9 seconds and efficiently manipulating a 15 kg (33 lb) tire, well above the robot’s peak lift capacity of 11 kg (24 lb). The behavior demonstrated in the video is fully autonomous, including the dynamic selection of contacts on the arm, legs, and body, and coordination between the manipulation and locomotion processes. While the system showed significant progress in dynamic and forceful manipulation, it relied on motion capture for state estimation and offboard computation for control. Future work will aim to achieve higher levels of dexterity on tasks with shorter timelines and tighter tolerances.

Growing the Institute

Progress isn’t just defined by technical accomplishments, but by the community we’ve built. The RAI Institute has grown to more than 250 researchers, engineers, and staff united by a shared culture of technical fearlessness, perseverance, and steady progress.

In 2025, our researchers published over 30 papers at leading conferences and in top-tier journals, covering topics from reinforcement learning to dynamic manipulation. Team members attended and presented at several key industry conferences including ICRA 2025, SIGGRAPH 2025, CoRL 2025, and the Grace Hopper Celebration. We developed innovations that will help bring our research from the lab into the world.

Bringing the Lab to the Public

Speaking of taking the research into the world, over the summer we created an outreach center at a local shopping mall: the Robot Lab at the Cambridgeside Galleria. This free, hands-on experience allowed the general public to see a variety of historical robots going back over 40 years as well as modern working robots, including an opportunity for visitors to drive one of the robots themselves. More than 10,000 visitors came to the Robot Lab over the summer, ranging in age from about three years old to 80. We surveyed visitors and learned that those who drove Spot felt more confident about the usefulness of robots in a variety of different scenarios (home, medical, and office). Insights like this can provide clues on how to successfully integrate robots into useful roles in society.

Guests take turns driving a Spot robot
A custom controller with large push buttons and arrows was installed on a table so it could be raised or lowered as necessary, making it accessible to all ages

Looking at the Year Ahead

The capabilities demonstrated in 2025 are steps toward a larger goal: robots that work alongside humans in unstructured environments. However, significant capability gaps remain. Looking at the year ahead, we’re seeing the convergence of improved tactile sensing, more sophisticated learning algorithms, larger and more diverse datasets, and better sim-to-real transfer. Closing the remaining gaps requires integrated hardware-software development, technology-agnostic exploration, and a commitment to solving the difficult and important problems wherever they take us.