Abstract

Classical manipulator motion planners work across different robot embodiments; however, they plan over a pre-specified static environment representation and do not scale to unseen dynamic environments. Neural Motion Planners (NMPs) are an appealing alternative to conventional planners because they incorporate environmental constraints to learn motion policies directly from raw sensor observations. Contemporary state-of-the-art NMPs can successfully plan across different environments; however, none of the existing NMPs generalize across robot embodiments. In this paper, we propose Cross-Embodiment Motion Policy (XMoP), a neural policy for learning to plan over a distribution of manipulators. XMoP implicitly learns to satisfy kinematic constraints for a distribution of robots and zero-shot transfers its planning behavior to unseen robotic manipulators within this distribution. We achieve this generalization by formulating a whole-body control policy trained on planning demonstrations from over three million procedurally sampled robotic manipulators in different simulated environments. Despite being trained entirely on synthetic embodiments and environments, our policy exhibits strong sim-to-real generalization across manipulators with different kinematic variations and degrees of freedom, all with a single set of frozen policy parameters. We evaluate XMoP on seven commercial manipulators and show successful cross-embodiment motion planning, achieving an average 70% success rate on baseline benchmarks. Furthermore, we demonstrate the policy sim-to-real on two unseen manipulators, solving novel planning problems across three real-world domains, even with dynamic obstacles.



Cross-Embodiment Configuration-Space Control Method


XMoP is a novel configuration-space neural policy that solves motion planning problems zero-shot for unseen robotic manipulators, which no prior robot learning algorithm has achieved. Our work demonstrates for the first time that configuration-space behavior cloning policies can be learned without embodiment bias and that these learned behaviors transfer to novel, unseen embodiments in a zero-shot manner. Follow this tutorial to add your own robot: it's zero-shot!
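To make the closed-loop, configuration-space idea concrete, here is a minimal sketch of a receding-horizon rollout loop of the kind such a policy could be driven with: at each step the policy is queried from the current joint configuration, only a fraction of its proposed joint-space motion is executed, and planning then restarts from the new state. This is not the released XMoP code; `rollout_closed_loop`, `toy_policy`, and the policy's call signature are illustrative assumptions.

```python
import numpy as np

def rollout_closed_loop(policy, q_start, q_goal, step_fraction=0.2,
                        tol=1e-2, max_steps=200):
    """Receding-horizon rollout (hypothetical interface): repeatedly
    query the policy for a joint-space motion toward the goal, execute
    only a fraction of it, then re-plan from the resulting state."""
    q = np.asarray(q_start, dtype=float)
    q_goal = np.asarray(q_goal, dtype=float)
    path = [q.copy()]
    for _ in range(max_steps):
        if np.linalg.norm(q - q_goal) < tol:
            return path, True               # goal configuration reached
        dq = policy(q, q_goal)              # policy proposes a joint-space delta
        q = q + step_fraction * dq          # execute a fraction, then re-plan
        path.append(q.copy())
    return path, False                      # step budget exhausted

# Stand-in "policy" for illustration only: move straight toward the goal.
def toy_policy(q, q_goal):
    return q_goal - q

path, reached = rollout_closed_loop(toy_policy, np.zeros(7), np.ones(7))
```

Because the loop re-plans from the latest observation at every step rather than committing to one open-loop trajectory, the same structure naturally accommodates dynamic obstacles, as in the closed-loop rollouts shown below.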


All rollouts shown in the videos (both simulated and real) use XMoP with a single fixed set of frozen policy parameters.

Zero-shot Sim-to-Real Rollout on Franka FR3 and Sawyer Robots

The planning goal and policy observation are shown in the top-right corner.

Closed-loop Rollout in Dynamic Environments

XMoP-S, an ablated policy that uses a vanilla Transformer model for zero-shot cross-embodiment 6-DoF reaching

Fully Synthetic Planning Demonstration Data

Failure Modes

Hits the walls of the bin while approaching

Collision when the goal is too close to an obstacle

Partially observable obstacle leads to collision

BibTeX

@article{rath2024xmop,
      title={XMoP: Whole-Body Control Policy for Zero-shot Cross-Embodiment Neural Motion Planning}, 
      author={Prabin Kumar Rath and Nakul Gopalan},
      year={2024},
      eprint={2409.15585},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2409.15585}, 
}