Soft-Robot Control Environment (gym-softrobot)

The environment is designed to bring reinforcement learning methods to soft-robotics control, and is inspired by slender-body living creatures.
The code is built on PyElastica, an open-source physics simulation library for slender structures.
We intend this package to be easy to install and fully compatible with OpenAI Gym.

Requirements:

  • Python 3.8+
  • OpenAI Gym
  • PyElastica 0.2+
  • Matplotlib (optional; for display rendering and plotting)

Please use the following BibTeX entry to cite this work in your publications:

@misc{gym_softrobot,
  author = {Chia-Hsien Shih and Seung Hyun Kim and Mattia Gazzola},
  title = {Soft Robotics Environment for OpenAI Gym},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/skim0119/gym-softrobot}},
}

Installation

pip install gym-softrobot

To test the installation, you can run a couple of steps of the environment as follows.

import gym
import gym_softrobot

# Create the environment with a single centralized controller
env = gym.make('OctoFlat-v0', policy_mode='centralized')

# env is created, now we can use it:
for episode in range(2):
    observation = env.reset()
    for step in range(50):
        action = env.action_space.sample()  # random action from the action space
        observation, reward, done, info = env.step(action)
        print(f"{episode=:2} |{step=:2}, {reward=}, {done=}")
        if done:
            break
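
The observation and action layouts depend on the environment and the chosen policy mode. Before writing a controller, they can be inspected through the standard Gym space attributes, as in this minimal sketch:

import gym
import gym_softrobot

env = gym.make('OctoFlat-v0', policy_mode='centralized')
print(env.observation_space)  # layout and bounds of observations
print(env.action_space)       # layout and bounds of actions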

Reinforcement Learning Example

We tested the environment using Stable Baselines3 for centralized control.
More advanced algorithms are still under development.
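
As a starting point, a centralized training run with Stable Baselines3 might look like the sketch below. PPO, the MLP policy, the timestep budget, and the save path are illustrative assumptions, not necessarily the exact configuration used in our tests:

import gym
import gym_softrobot
from stable_baselines3 import PPO

# Illustrative centralized training run; hyperparameters are placeholders.
env = gym.make('OctoFlat-v0', policy_mode='centralized')
model = PPO('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=100_000)
model.save('ppo_octoflat')  # hypothetical save path

A trained model can then be evaluated with the rollout loop shown above, replacing the random action with action, _ = model.predict(observation).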

Environment Design

Included Environments

Octopus [Multi-arm control]

  • octo-flat [2D]
  • octo-reach
  • octo-swim

Contribution

We are currently developing the package internally.
