
    HumEnv: Humanoid Environment for Reinforcement Learning

Overview

HumEnv is an environment based on the SMPL humanoid, aimed at reproducible studies of humanoid control. It is designed to facilitate algorithmic research on reinforcement learning (RL), goal-based RL, unsupervised RL, and imitation learning. It consists of a basic environment interface, as well as an optional benchmark for evaluating agents on different tasks.

Features

  • An environment that enables simulation of a realistic humanoid on a range of proprioceptive tasks
  • A MuJoCo-based humanoid robot definition tuned for more realistic behaviors (friction, joint actuation, and movement range)
  • 9 configurable reward classes to enable learning basic humanoid skills, including locomotion, spinning, jumping, crawling, and more (see the configuration sketch after this list)
  • Benchmarking code to evaluate RL agents on three classes of tasks: reward-based, goal-reaching, and motion tracking
  • Various initialisation options: a static "T-pose", a random fall, a frame from MoCap data, and their combinations
  • Full compatibility with Gymnasium
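
For example, the reward-based skills listed above are selected when the environment is constructed. The following is a minimal sketch only: the task keyword and the "move-ego-0-2.0" reward name are assumptions about the interface, not confirmed by this README, so check the tutorial for the exact parameters.

from humenv import make_humenv

# Hypothetical configuration: the "task" keyword and the reward name below are
# assumptions; consult the tutorial for the parameters make_humenv actually accepts.
env, _ = make_humenv(task="move-ego-0-2.0")
observation, info = env.reset()
print(env.observation_space, env.action_space)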

Installation

Basic installation, with full support for the environment's functionality (requires Python 3.9+):

pip install "git+https://github.com/facebookresearch/HumEnv.git"

To use the MoCap and MoCapAndFall initialisation schemes, you must prepare the licensed datasets according to these instructions.
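
Once the motion data is prepared, the initialisation scheme is chosen at construction time. The sketch below is hypothetical: the state_init and motion_base_path keywords are assumptions about the interface, not taken from this README, so follow the linked instructions for the exact setup.

from humenv import make_humenv

# Hypothetical call: "MoCapAndFall" matches the scheme name above, but the keyword
# arguments and the example path are assumptions; see the linked instructions.
env, _ = make_humenv(
    state_init="MoCapAndFall",        # assumed keyword selecting the initialisation scheme
    motion_base_path="data/motions",  # assumed path to the prepared licensed datasets
)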

Full installation that includes all the benchmarking features:

pip install "humenv[bench] @ git+https://github.com/facebookresearch/HumEnv.git"

Quickstart

Once installed, you can create an environment using humenv.make_humenv, which has an interface similar to gymnasium.make_vec. Here is a simple example:

from humenv import make_humenv
env, _ = make_humenv()
observation, info = env.reset()
frames = [env.render()]
for i in range(60):
    observation, reward, terminated, truncated, info = env.step(env.action_space.sample())
    frames.append(env.render())
# render frames at 30fps
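
To act on the final comment and view the collected frames, you can use an external video library; a minimal sketch, assuming mediapy is installed (it is not a HumEnv dependency):

import mediapy as media

# Write the collected RGB frames to an example output file at 30 frames per second.
media.write_video("rollout.mp4", frames, fps=30)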

More examples are available in the tutorial.

Citation

@article{tirinzoni2024metamotivo,
  title={Zero-shot Whole-Body Humanoid Control via Behavioral Foundation Models},
  author={Tirinzoni, Andrea and Touati, Ahmed and Farebrother, Jesse and Guzek, Mateusz and Kanervisto, Anssi and Xu, Yingchen and Lazaric, Alessandro and Pirotta, Matteo},
}

Acknowledgments

  • SMPL and AMASS for the humanoid skeleton and motions used to initialise realistic positions for the tracking benchmark
  • PHC for data processing and the calculation of some goal-reaching metrics
  • SMPLSim for scripts used to process the SMPL and AMASS datasets, and for humanoid processing utilities
  • smplx for removing the chumpy dependency
  • MuJoCo for the backend simulation engine
  • Gymnasium for the API

License

HumEnv is licensed under the CC BY-NC 4.0 license. See LICENSE for details.
