Training ML-Agents with Google Colab

Dhyey Thumar
Oct 19, 2020 · 2 min read

Is Google Colab compatible with ML-Agents?

The answer to the above question is yes: Google Colab can be used to train our ML-Agents, and given below are the four simple steps to train any Unity environment using ML-Agents on Colab.

Introduction

After struggling with a particular question (how to run ML-Agents in Google Colab?) for days, I thought it would be great to write an article sharing my findings, as there is little to no information about this on the internet. This article answers that question by testing an example environment (taken from the ML-Agents repo) on Colab.

When I started working on reinforcement learning with ML-Agents (an interface provided by Unity to communicate with learning environments made in the Unity engine), I felt the need for one more computing device, because training takes hours to reach the required agent behavior 🙁. So I decided to shift the training process to Google Colab, and here I'm going to tell you how I did it.

Prerequisite Tools

  • Unity Game Engine: used to create, modify, and build the training environment in server/headless mode for Linux. Get the latest version of this software here.

Important details:

  • Unity engine version used to build the environment = 2019.4.17f1 (different versions will work, but make sure you don’t get any errors while generating the binaries; certain versions of the Burst package in Unity do not work with ML-Agents [for me, Burst 1.3.0 works])
  • ML-Agents branch = release_1 (later releases of ML-Agents also work)
  • Environment name = 3dball (a sample environment provided in the ML-Agents repo), built for Linux with server/headless mode enabled using the Unity engine

Now let’s get started with the steps required to train ML-Agents on Google Colab.

Step 1: Set up the required GitHub repos and environment details

Cloning and Installation of ML-Agents repo:

Fetch the environment binaries stored in my repo and set the required execution permissions. Also, set the environment name and train/run ID:
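A sketch of this step. The repo URL is a placeholder (the actual binaries live in my repo), and `env/3dball.x86_64` plus the run ID `run01` are hypothetical stand-ins for illustration:

```shell
# The binaries live in a separate repo (placeholder URL -- substitute the
# repo that hosts your own Linux server/headless build):
#   git clone https://github.com/<username>/<env-binaries-repo>.git env
mkdir -p env && touch env/3dball.x86_64   # stand-in for the real binary here
chmod -R 755 env                          # the binary needs execute permission
env_name="3dball"                         # name of the built environment
run_id="run01"                            # hypothetical train/run id
echo "env=$env_name run=$run_id"
```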

Step 2: Add the training configuration details
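ML-Agents release_1 reads hyperparameters from a YAML file keyed by behavior name. A minimal sketch, written from the notebook shell via a heredoc; the values mirror the sample config shipped in the ML-Agents repo and should be treated as starting points, not tuned numbers:

```shell
# Write a minimal trainer configuration for the 3DBall behavior.
cat > trainer_config.yaml << 'EOF'
default:
  trainer: ppo
  batch_size: 1024
  buffer_size: 10240
  learning_rate: 3.0e-4
  max_steps: 5.0e5
  num_layers: 2
  hidden_units: 128
  time_horizon: 64
  summary_freq: 10000
  reward_signals:
    extrinsic:
      strength: 1.0
      gamma: 0.99

3DBall:                 # behavior name inside the 3dball environment
  normalize: true
  batch_size: 64
  buffer_size: 12000
  summary_freq: 12000
  time_horizon: 1000
EOF
```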

Step 3: Enable TensorBoard and start the training process
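A sketch of this step. The TensorBoard part is a pair of Colab notebook magics (shown as comments, since they aren't plain shell), and the config file, environment path, and run ID are hypothetical placeholders. The training call is guarded so the snippet degrades gracefully where ML-Agents isn't installed:

```shell
# In a Colab cell, TensorBoard is enabled with notebook magics:
#   %load_ext tensorboard
#   %tensorboard --logdir summaries    # release_1 writes logs to ./summaries
# Start training; --no-graphics because the Linux build is headless.
if command -v mlagents-learn >/dev/null 2>&1; then
    mlagents-learn trainer_config.yaml --env=env/3dball --run-id=run01 --no-graphics
    status="training started"
else
    status="mlagents-learn not found (install ML-Agents first)"
fi
echo "$status"
```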

Step 4: Download the training data
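Once training finishes, the artifacts can be archived and pulled down to your machine. A sketch assuming release_1's default output folders (`summaries/` for TensorBoard logs, `models/` for the trained policy):

```shell
# release_1 writes TensorBoard logs to ./summaries and trained models to
# ./models; archive both so they survive the end of the Colab session.
mkdir -p summaries models            # no-ops once training has actually run
tar -czf training_data.tar.gz summaries models
# In Colab, download the archive to your machine:
#   from google.colab import files
#   files.download("training_data.tar.gz")
```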

You can directly open this Python notebook in your Colab; it contains the complete implementation.

Here we come to the end of this article. If you have any doubts, suggestions, or improvements, please do let me know.

Check out this GitHub repo for reference (the readme file has more or less the same explanation as this article).
