
# Conditional Neural Expert Processes for Learning Movement Primitives from Demonstration

This repository contains the source code for the paper "Conditional Neural Expert Processes for Learning Movement Primitives from Demonstration" by Yigit Yildirim and Emre Ugur. We are both members of the CoLoRs Lab at Bogazici University.

CNEP is a novel deep learning architecture for Learning from Demonstration (LfD) in robotics. It encodes diverse sensorimotor trajectories from demonstrations with varying movements, leveraging a novel gating mechanism, multiple decoders, and an entropy-based loss calculation that promotes decoder specialization. This work was submitted to IEEE RA-L for possible publication on July 5, 2024. The preprint is available here: https://arxiv.org/abs/2402.08424
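In code, the gist is a shared encoder, a gate that distributes probability mass over several expert decoders, and entropy terms that push the gate to commit. The PyTorch sketch below is illustrative only: all module names, layer sizes, and the exact form of the entropy terms are our assumptions, not the implementation in this repository; see the paper and the source files for the real model.

```python
import torch
import torch.nn as nn

class CNEPSketch(nn.Module):
    """Illustrative CNEP-style model: one encoder, a gate, multiple expert decoders.
    Names and sizes are hypothetical; see the repository code for the real model."""

    def __init__(self, d_x=1, d_y=1, d_latent=128, num_experts=2):
        super().__init__()
        # The encoder maps an (input, output) observation pair to a latent vector.
        self.encoder = nn.Sequential(
            nn.Linear(d_x + d_y, d_latent), nn.ReLU(),
            nn.Linear(d_latent, d_latent),
        )
        # The gate assigns a probability to each expert decoder.
        self.gate = nn.Linear(d_latent, num_experts)
        # Each expert decodes (latent, query input) into a predictive mean and log-std.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_latent + d_x, d_latent), nn.ReLU(),
                nn.Linear(d_latent, 2 * d_y),
            )
            for _ in range(num_experts)
        ])

    def forward(self, obs_x, obs_y, query_x):
        # obs_x: (B, N_obs, d_x), obs_y: (B, N_obs, d_y), query_x: (B, N_q, d_x)
        latent = self.encoder(torch.cat([obs_x, obs_y], dim=-1)).mean(dim=1)
        gate_probs = torch.softmax(self.gate(latent), dim=-1)  # (B, num_experts)
        lat = latent.unsqueeze(1).expand(-1, query_x.shape[1], -1)
        outs = [e(torch.cat([lat, query_x], dim=-1)) for e in self.experts]
        means, log_stds = zip(*(o.chunk(2, dim=-1) for o in outs))
        # Each stacked tensor has shape (B, num_experts, N_q, d_y).
        return gate_probs, torch.stack(means, dim=1), torch.stack(log_stds, dim=1)

def entropy_terms(gate_probs, eps=1e-8):
    """One plausible reading of the entropy-based loss terms (the paper gives the
    actual formulation): per-sample gate entropy is driven down so each sample
    commits to one expert, while the entropy of the batch-averaged gate is driven
    up so all experts stay in use."""
    per_sample = -(gate_probs * (gate_probs + eps).log()).sum(-1).mean()
    avg = gate_probs.mean(0)
    batch_level = -(avg * (avg + eps).log()).sum()
    return per_sample, batch_level
```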

[Figure: CNEP architecture overview]

## Some results

Here are some videos from this work: https://youtube.com/playlist?list=PLXWw0F-8m_ZZD7fpGOKclzVJONXUifDiY


We assessed the performance of CNEP against Probabilistic Movement Primitives (ProMP) and Gaussian Mixture Models with Gaussian Mixture Regression (GMM-GMR) on a complex robotic task: grasping wine glasses and placing them onto a dish rack. The task involves high-dimensional sensorimotor trajectories of 1288 dimensions. Each model was trained on 40 expert demonstrations and expected to generate the control commands needed to complete the task. CNEP completed the task successfully, while ProMP and GMM-GMR failed to achieve a successful grasp of the glass. A video explaining the experiment is available here: https://youtu.be/ffnIhrmjwgo


With continuous conditioning on the current configuration of the tabletop, CNEP adapts to changes in the environment on the fly and selects among multiple experts to properly control the robot. The video of the online control experiment is here: https://youtu.be/ffnIhrmjwgo



When there are multiple ways to complete a real-life task, multiple sensorimotor trajectories serve the same goal. As the number of modes in the training trajectories increases, modeling them separately, as CNEP does, becomes advantageous. In this comparison, we evaluated CNEP against ProMP, GMM-GMR, CNMP, and Stable MP (https://github.com/rperezdattari/Stable-Motion-Primitives-via-Imitation-and-Contrastive-Learning), and we included several CNEP variants in a quantitative comparison. For explanations, please refer to the paper.



We trained a CNMP and a CNEP with two experts on two demonstration trajectories. When queried from novel start and end points, CNEP produces trajectories similar to the demonstrations (shown in red and purple). In contrast, CNMP produces a mean response (shown in blue), which may lead to suboptimal behavior, as highlighted in the obstacle-avoidance tests with a real robot.



If only a single demonstration trajectory passes through an observation (conditioning) point, both models synthesize expert-like trajectories; this is the case for the plots in the right column. On the other hand, if multiple candidate trajectories pass through the observation point on which the models are conditioned, it is reasonable to expect a generated trajectory close to one of those candidates. The plots on the left show that CNEP picks one of the modes and generates a similar trajectory, whereas CNMP produces an average trajectory.
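To make the mode-picking concrete, here is a hypothetical inference snippet that reuses the `CNEPSketch` class from the sketch above: conditioned on a single observation point, the gate's most probable expert is selected and only that decoder generates the trajectory, which is why the output follows one demonstration mode rather than the mean of the modes.

```python
import torch  # CNEPSketch comes from the illustrative sketch above; all names are hypothetical

model = CNEPSketch(d_x=1, d_y=1, num_experts=2)  # assume weights were trained beforehand

obs_x = torch.tensor([[[0.5]]])   # one conditioning time point, shape (1, 1, 1)
obs_y = torch.tensor([[[0.3]]])   # observed value at that point
query_x = torch.linspace(0, 1, 200).view(1, -1, 1)  # time steps to generate

with torch.no_grad():
    gate_probs, means, log_stds = model(obs_x, obs_y, query_x)
    best = gate_probs.argmax(dim=-1)  # index of the most probable expert
    trajectory = means[0, best[0]]    # (200, 1): one mode, not an average of modes
```

Softer alternatives, such as a gate-weighted mixture of the experts, are also conceivable at inference time; the hard argmax selection here is what yields a single-mode trajectory in this sketch.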


To demonstrate that CNEP can model MP trajectories of higher dimensions just as easily, we trained a CNEP on a 56-dimensional trajectory of a stuntman performing a cartwheel ([CMU Mocap dataset](http://mocap.cs.cmu.edu/)). We then reproduced the same motion and ran the trajectory on a simulated humanoid.

## Getting Started

### Requirements

The entire project was developed with:

  1. Python 3.8
  2. PyTorch 2.0.1+cu117

However, most of the code should run cleanly with Python 3.8+ and PyTorch 2+; we also tested with Python 3.10 and several PyTorch 2+ versions.

### Running

  1. Clone the repo.
  2. Run a training script, for example: `python -u compare_cnp_wta_sine.py`
  3. After training, run the corresponding test script: `comparison_sine.ipynb`
  4. Naming convention: files starting with `compare` are training scripts, whereas files starting with `comparison` are test scripts.
  5. Files with the same name but different extensions:
    1. Files with the `.ipynb` extension are good for inspection and visualization.
    2. Files with the `.py` extension are used to run the code on a remote server (an HPC, for example).

If you use this code in your work, please consider citing https://arxiv.org/abs/2402.08424
