Soft-Robot Control Environment (gym-softrobot)

The environment is designed to apply reinforcement learning methods to soft-robotics control, inspired by slender-bodied living creatures.
The code is built on PyElastica, an open-source physics simulation for slender structures.
We intend this package to be easy to install and fully compatible with OpenAI Gym.


Requirements

  • Python 3.8+
  • OpenAI Gym
  • PyElastica 0.2+
  • Matplotlib (optional for display rendering and plotting)

Citation

Please use the following BibTeX entry to cite this package in your publications:

@misc{gym-softrobot,
  author = {Shih, Chia-Hsien and Kim, Seung Hyun and Gazzola, Mattia},
  title = {Soft Robotics Environment for OpenAI Gym},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{}},
}


Installation

pip install gym-softrobot

To test the installation, you can run a couple of steps of the environment as follows.

import gym 
import gym_softrobot
env = gym.make('OctoFlat-v0', policy_mode='centralized')

# env is created, now we can use it: 
for episode in range(2): 
    observation = env.reset()
    for step in range(50):
        action = env.action_space.sample() 
        observation, reward, done, info = env.step(action[None])
        print(f"{episode=:2} |{step=:2}, {reward=}, {done=}")
        if done:
            break

Reinforcement Learning Example

We tested the environment using Stable Baselines3 for centralized control.
More advanced algorithms are still under development.

Environment Design

Included Environments

Octopus [Multi-arm control]

  • octo-flat [2D]
  • octo-reach
  • octo-swim


We are currently developing the package internally.

