
A fork of OpenAI Baselines implementations of reinforcement learning algorithms

Stable Baselines

Stable Baselines is a set of improved implementations of reinforcement learning algorithms based on OpenAI Baselines.

You can read a detailed presentation of Stable Baselines in the Medium article.

These algorithms will make it easier for the research community and industry to replicate, refine, and identify new ideas, and will create good baselines to build projects on top of. We expect these tools will be used as a base around which new ideas can be added, and as a tool for comparing a new approach against existing ones. We also hope that the simplicity of these tools will allow beginners to experiment with a more advanced toolset, without being buried in implementation details.

Main differences with OpenAI Baselines

This toolset is a fork of OpenAI Baselines, with a major structural refactoring, and code cleanups:

  • Unified structure for all algorithms
  • PEP8 compliant (unified code style)
  • Documented functions and classes
  • More tests & more code coverage

Features                      Stable-Baselines         OpenAI Baselines
State of the art RL methods   :heavy_check_mark: (1)   :heavy_check_mark:
Documentation                 :heavy_check_mark:       :x:
Custom environments           :heavy_check_mark:       :heavy_check_mark:
Custom policies               :heavy_check_mark:       :heavy_minus_sign: (2)
Common interface              :heavy_check_mark:       :heavy_minus_sign: (3)
Tensorboard support           :heavy_check_mark:       :heavy_minus_sign: (4)
Ipython / Notebook friendly   :heavy_check_mark:       :x:
PEP8 code style               :heavy_check_mark:       :heavy_minus_sign: (5)
Custom callback               :heavy_check_mark:       :heavy_minus_sign: (6)

(1): Forked from a previous version of OpenAI Baselines; the refactoring for HER is still missing.

(2): Currently not available for DDPG, and only from the run script.

(3): Only via the run script.

(4): Rudimentary logging of training information (no loss nor graph).

(5): WIP on OpenAI's side (you can do it OpenAI! :cat:)

(6): Passing a callback function is only available for DQN.
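
On the Stable Baselines side, TensorBoard logging and a training callback are configured directly on the model. Below is a minimal sketch, assuming PPO2, a hypothetical log directory ./ppo2_tensorboard/, and the function-style callback interface (a function taking the local and global variable dictionaries and returning a bool):

import gym

from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines import PPO2

env = DummyVecEnv([lambda: gym.make('CartPole-v1')])

n_calls = [0]  # simple counter shared with the callback, for illustration only

def callback(locals_, globals_):
    # Called periodically during training with the algorithm's local/global variables
    n_calls[0] += 1
    return True  # returning False would request an early stop

# tensorboard_log writes TensorBoard event files to the given directory (hypothetical path)
model = PPO2(MlpPolicy, env, verbose=1, tensorboard_log="./ppo2_tensorboard/")
model.learn(total_timesteps=10000, callback=callback)

The logged values can then be inspected with tensorboard --logdir ./ppo2_tensorboard/.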

Installation

Prerequisites

Stable Baselines requires Python 3 (>= 3.5) with the development headers. You'll also need the system packages CMake, OpenMPI and zlib. They can be installed as follows:

Ubuntu

sudo apt-get update && sudo apt-get install cmake libopenmpi-dev python3-dev zlib1g-dev

Mac OS X

Installation of system packages on Mac requires Homebrew. With Homebrew installed, run the following:

brew install cmake openmpi

Install using pip

Install the Stable Baselines package

Using pip from PyPI:

pip install stable-baselines
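
To verify the installation, you can import the package from a Python shell and print its version (a quick sanity check; assumes the installed package exposes __version__):

import stable_baselines
print(stable_baselines.__version__)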

Example

Most of the library tries to follow a sklearn-like syntax for the Reinforcement Learning algorithms.

Here is a quick example of how to train and run PPO2 on a cartpole environment:

import gym

from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines import PPO2

env = gym.make('CartPole-v1')
env = DummyVecEnv([lambda: env])  # The algorithms require a vectorized environment to run

# Train a PPO2 agent on CartPole
model = PPO2(MlpPolicy, env, verbose=1)
model.learn(total_timesteps=10000)

# Run the trained agent
obs = env.reset()
for i in range(1000):
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env.render()

Or just train a model with a one-liner if the environment is registered in Gym and the policy is registered:

from stable_baselines import PPO2

model = PPO2('MlpPolicy', 'CartPole-v1').learn(10000)
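
Trained models can also be saved to disk and reloaded later; here is a minimal sketch, assuming a hypothetical file name ppo2_cartpole:

from stable_baselines import PPO2

model = PPO2('MlpPolicy', 'CartPole-v1').learn(10000)
model.save("ppo2_cartpole")         # serialize the trained agent to disk
del model                           # remove the in-memory model to demonstrate loading

model = PPO2.load("ppo2_cartpole")  # reload the trained agent from the saved file

The reloaded model can then be used for prediction exactly like the original one.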
