Merge pull request #908 from Unity-Technologies/hotfix-0

Release v0.4.0a

awjuliani authored Jun 29, 2018
2 parents 20569f9 + 168b8e2 commit d37bfb6
Showing 56 changed files with 455 additions and 401 deletions.
6 changes: 3 additions & 3 deletions CONTRIBUTING.md
@@ -1,7 +1,7 @@
# Contribution Guidelines

Thank you for your interest in contributing to ML-Agents! We are incredibly
excited to see how members of our community will use and extend ML-Agents.
Thank you for your interest in contributing to the ML-Agents toolkit! We are incredibly
excited to see how members of our community will use and extend the ML-Agents toolkit.
To facilitate your contributions, we've outlined a brief set of guidelines
to ensure that your extensions can be easily integrated.

@@ -11,7 +11,7 @@ First, please read through our [code of conduct](CODE_OF_CONDUCT.md),
as we expect all our contributors to follow it.

Second, before starting on a project that you intend to contribute
to ML-Agents (whether environments or modifications to the codebase),
to the ML-Agents toolkit (whether environments or modifications to the codebase),
we **strongly** recommend posting on our
[Issues page](https://github.com/Unity-Technologies/ml-agents/issues) and
briefly outlining the changes you plan to make. This will enable us to provide
18 changes: 9 additions & 9 deletions README.md
@@ -1,8 +1,8 @@
<img src="docs/images/unity-wide.png" align="middle" width="3000"/>

# Unity ML-Agents (Beta)
# Unity ML-Agents Toolkit (Beta)

**Unity Machine Learning Agents** (ML-Agents) is an open-source Unity plugin
**The Unity Machine Learning Agents Toolkit** (ML-Agents) is an open-source Unity plugin
that enables games and simulations to serve as environments for training
intelligent agents. Agents can be trained using reinforcement learning,
imitation learning, neuroevolution, or other machine learning methods through
@@ -12,7 +12,7 @@ and hobbyists to easily train intelligent agents for 2D, 3D and VR/AR games.
These trained agents can be used for multiple purposes, including
controlling NPC behavior (in a variety of settings such as multi-agent and
adversarial), automated testing of game builds and evaluating different game
design decisions pre-release. ML-Agents is mutually beneficial for both game
design decisions pre-release. The ML-Agents toolkit is mutually beneficial for both game
developers and AI researchers as it provides a central platform where advances
in AI can be evaluated on Unity’s rich environments and then made accessible
to the wider research and game developer communities.
@@ -34,7 +34,7 @@ to the wider research and game developer communities.
* For more information, in addition to installation and usage
instructions, see our [documentation home](docs/Readme.md).
* If you have
used a version of ML-Agents prior to v0.4, we strongly recommend
used a version of the ML-Agents toolkit prior to v0.4, we strongly recommend
our [guide on migrating from earlier versions](docs/Migrating.md).

## References
@@ -56,7 +56,7 @@ In addition to our own documentation, here are some additional, relevant article

## Community and Feedback

ML-Agents is an open-source project and we encourage and welcome contributions.
The ML-Agents toolkit is an open-source project and we encourage and welcome contributions.
If you wish to contribute, be sure to review our
[contribution guidelines](CONTRIBUTING.md) and
[code of conduct](CODE_OF_CONDUCT.md).
@@ -65,10 +65,10 @@ You can connect with us and the broader community
through Unity Connect and GitHub:
* Join our
[Unity Machine Learning Channel](https://connect.unity.com/messages/c/035fba4f88400000)
to connect with others using ML-Agents and Unity developers enthusiastic
to connect with others using the ML-Agents toolkit and Unity developers enthusiastic
about machine learning. We use that channel to surface updates
regarding ML-Agents (and, more broadly, machine learning in games).
* If you run into any problems using ML-Agents,
regarding the ML-Agents toolkit (and, more broadly, machine learning in games).
* If you run into any problems using the ML-Agents toolkit,
[submit an issue](https://github.com/Unity-Technologies/ml-agents/issues) and
make sure to include as much detail as possible.

@@ -77,7 +77,7 @@ team at [email protected].

## Translations

To make Unity ML-Agents accessible to the global research and
To make the Unity ML-Agents toolkit accessible to the global research and
Unity developer communities, we're attempting to create and maintain
translations of our documentation. We've started with translating a subset
of the documentation to one language (Chinese), but we hope to continue
2 changes: 1 addition & 1 deletion docs/API-Reference.md
@@ -11,7 +11,7 @@ the following command within the `docs/` directory:

doxygen dox-ml-agents.conf

`dox-ml-agents.conf` is a Doxygen configuration file for ML-Agents
`dox-ml-agents.conf` is a Doxygen configuration file for the ML-Agents toolkit
that includes the classes that have been properly formatted.
The generated HTML files will be placed
in the `html/` subdirectory. Open `index.html` within that subdirectory to
10 changes: 5 additions & 5 deletions docs/Background-Machine-Learning.md
@@ -1,8 +1,8 @@
# Background: Machine Learning

Given that a number of users of ML-Agents might not have a formal machine
Given that a number of users of the ML-Agents toolkit might not have a formal machine
learning background, this page provides an overview to facilitate the
understanding of ML-Agents. However, We will not attempt to provide a thorough
understanding of the ML-Agents toolkit. However, we will not attempt to provide a thorough
treatment of machine learning as there are fantastic resources online.

Machine learning, a branch of artificial intelligence, focuses on learning
@@ -77,7 +77,7 @@ tasks are active areas of machine learning research and, in practice, require
several iterations to achieve good performance.

We now switch to reinforcement learning, the third class of
machine learning algorithms, and arguably the one most relevant for ML-Agents.
machine learning algorithms, and arguably the one most relevant for the ML-Agents toolkit.

## Reinforcement Learning

@@ -132,8 +132,8 @@ in many ways, one can view a non-playable character (NPC) as a virtual
robot, with its own observations about the environment, its own set of actions
and a specific objective. Thus it is natural to explore how we can
train behaviors within Unity using reinforcement learning. This is precisely
what ML-Agents offers. The video linked below includes a reinforcement
learning demo showcasing training character behaviors using ML-Agents.
what the ML-Agents toolkit offers. The video linked below includes a reinforcement
learning demo showcasing training character behaviors using the ML-Agents toolkit.

<p align="center">
<a href="http://www.youtube.com/watch?feature=player_embedded&v=fiQsmdwEGT8" target="_blank">
10 changes: 5 additions & 5 deletions docs/Background-TensorFlow.md
@@ -2,19 +2,19 @@

As discussed in our
[machine learning background page](Background-Machine-Learning.md), many of the
algorithms we provide in ML-Agents leverage some form of deep learning.
algorithms we provide in the ML-Agents toolkit leverage some form of deep learning.
More specifically, our implementations are built on top of the open-source
library [TensorFlow](https://www.tensorflow.org/). This means that the models
produced by ML-Agents are (currently) in a format only understood by
produced by the ML-Agents toolkit are (currently) in a format only understood by
TensorFlow. In this page we provide a brief overview of TensorFlow, in addition
to TensorFlow-related tools that we leverage within ML-Agents.
to TensorFlow-related tools that we leverage within the ML-Agents toolkit.

## TensorFlow

[TensorFlow](https://www.tensorflow.org/) is an open source library for
performing computations using data flow graphs, the underlying representation
of deep learning models. It facilitates training and inference on CPUs and
GPUs in a desktop, server, or mobile device. Within ML-Agents, when you
GPUs in a desktop, server, or mobile device. Within the ML-Agents toolkit, when you
train the behavior of an Agent, the output is a TensorFlow model (.bytes)
file that you can then embed within an Internal Brain. Unless you implement
a new algorithm, the use of TensorFlow is mostly abstracted away and behind
@@ -47,5 +47,5 @@ that contains an Internal Brain is built, inference is performed via
TensorFlowSharp. We provide an additional in-depth overview of how to
leverage [TensorFlowSharp within Unity](Using-TensorFlow-Sharp-in-Unity.md)
which will become more relevant once you install and start training
behaviors within ML-Agents. Given the reliance on TensorFlowSharp, the
behaviors within the ML-Agents toolkit. Given the reliance on TensorFlowSharp, the
Internal Brain is currently marked as experimental.
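
As a quick sanity check outside Unity, a trained `.bytes` model can typically be opened with TensorFlow itself, since it is a serialized, frozen graph. The snippet below is a minimal sketch, not part of the toolkit docs; it assumes TensorFlow 1.x and a hypothetical model path `models/ppo/3DBall.bytes`.

```python
# Minimal sketch: inspect a trained ML-Agents .bytes model with TensorFlow 1.x.
# The path "models/ppo/3DBall.bytes" is a hypothetical example.
import tensorflow as tf

graph_def = tf.GraphDef()
with open("models/ppo/3DBall.bytes", "rb") as f:
    graph_def.ParseFromString(f.read())  # the .bytes file is a serialized GraphDef

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

# List the operation names that the Internal Brain looks up at inference time.
for op in graph.get_operations():
    print(op.name)
```

If the graph loads and the expected input and output nodes are present, the export step worked; everything beyond that is handled by TensorFlowSharp inside Unity.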
2 changes: 1 addition & 1 deletion docs/Background-Unity.md
@@ -6,7 +6,7 @@ we highly recommend the
[Tutorials page](https://unity3d.com/learn/tutorials). The
[Roll-a-ball tutorial](https://unity3d.com/learn/tutorials/s/roll-ball-tutorial)
is a fantastic resource to learn all the basic concepts of Unity to get started
with ML-Agents:
with the ML-Agents toolkit:
* [Editor](https://docs.unity3d.com/Manual/UsingTheEditor.html)
* [Interface](https://docs.unity3d.com/Manual/LearningtheInterface.html)
* [Scene](https://docs.unity3d.com/Manual/CreatingScenes.html)
12 changes: 6 additions & 6 deletions docs/Basic-Guide.md
@@ -5,19 +5,19 @@ This guide will show you how to use a pretrained model in an example Unity envir
If you are not familiar with the [Unity Engine](https://unity3d.com/unity),
we highly recommend the [Roll-a-ball tutorial](https://unity3d.com/learn/tutorials/s/roll-ball-tutorial) to learn all the basic concepts of Unity.

## Setting up ML-Agents within Unity
## Setting up the ML-Agents Toolkit within Unity

In order to use ML-Agents within Unity, you need to change some Unity settings first. Also [TensorFlowSharp plugin](https://s3.amazonaws.com/unity-ml-agents/0.4/TFSharpPlugin.unitypackage) is needed for you to use pretrained model within Unity, which is based on the [TensorFlowSharp repo](https://github.com/migueldeicaza/TensorFlowSharp).
In order to use the ML-Agents toolkit within Unity, you need to change some Unity settings first. You also need the [TensorFlowSharp plugin](https://s3.amazonaws.com/unity-ml-agents/0.4/TFSharpPlugin.unitypackage), which is based on the [TensorFlowSharp repo](https://github.com/migueldeicaza/TensorFlowSharp), in order to use a pretrained model within Unity.

1. Launch Unity
2. On the Projects dialog, choose the **Open** option at the top of the window.
3. Using the file dialog that opens, locate the `unity-environment` folder within the ML-Agents project and click **Open**.
3. Using the file dialog that opens, locate the `unity-environment` folder within the ML-Agents toolkit project and click **Open**.
4. Go to **Edit** > **Project Settings** > **Player**
5. For **each** of the platforms you target
(**PC, Mac and Linux Standalone**, **iOS** or **Android**):
1. Expand the **Other Settings** section.
2. Set **Scripting Runtime Version** to
**Experimental (.NET 4.6 Equivalent)**
**Experimental (.NET 4.6 Equivalent or .NET 4.x Equivalent)**
3. In **Scripting Define Symbols**, add the flag `ENABLE_TENSORFLOW`.
After typing in the flag name, press Enter.
6. Go to **File** > **Save Project**
@@ -67,7 +67,7 @@ object.

### Training the environment
1. Open a command or terminal window.
2. Nagivate to the folder where you installed ML-Agents.
2. Navigate to the folder where you installed the ML-Agents toolkit.
3. Change to the `python` directory.
4. Run `python3 learn.py --run-id=<run-identifier> --train`
Where:
@@ -99,7 +99,7 @@ to the **Graph Model** placeholder in the **Ball3DBrain** inspector window.

## Next Steps

* For more information on ML-Agents, in addition to helpful background, check out the [ML-Agents Overview](ML-Agents-Overview.md) page.
* For more information on the ML-Agents toolkit, in addition to helpful background, check out the [ML-Agents Toolkit Overview](ML-Agents-Overview.md) page.
* For a more detailed walk-through of our 3D Balance Ball environment, check out the [Getting Started](Getting-Started-with-Balance-Ball.md) page.
* For a "Hello World" introduction to creating your own learning environment, check out the [Making a New Learning Environment](Learning-Environment-Create-New.md) page.
* For a series of YouTube video tutorials, check out the [Machine Learning Agents PlayList](https://www.youtube.com/playlist?list=PLX2vGYjWbI0R08eWQkO7nQkGiicHAX7IX) page.
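
The `learn.py` workflow described under "Training the environment" above can also be driven directly from Python. Below is a minimal sketch, assuming the v0.4-era `unityagents` package and an executable built as `3DBall`; both names are illustrative, so adapt them to your build.

```python
# Minimal sketch: connect to a Unity environment with the (v0.4-era) Python API
# and step it with random actions. The executable name "3DBall" is an assumption.
import numpy as np
from unityagents import UnityEnvironment

env = UnityEnvironment(file_name="3DBall")      # omit file_name to connect to the Editor
brain_name = env.brain_names[0]
brain = env.brains[brain_name]

info = env.reset(train_mode=True)[brain_name]   # train_mode mirrors the --train flag
for _ in range(100):
    action = np.random.randn(len(info.agents), brain.vector_action_space_size)
    info = env.step(action)[brain_name]
env.close()
```

For actual training you would still run `learn.py` as shown above; this kind of direct connection is mainly useful for debugging observations and actions.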
6 changes: 3 additions & 3 deletions docs/FAQ.md
@@ -3,13 +3,13 @@

### Scripting Runtime Environment not set up correctly

If you haven't switched your scripting runtime version from .NET 3.5 to .NET 4.6, you will see such error message:
If you haven't switched your scripting runtime version from .NET 3.5 to .NET 4.6 or .NET 4.x, you will see an error message like the following:

```
error CS1061: Type `System.Text.StringBuilder' does not contain a definition for `Clear' and no extension method `Clear' of type `System.Text.StringBuilder' could be found. Are you missing an assembly reference?
```

This is because .NET 3.5 doesn't support method Clear() for StringBuilder, refer to [Setting Up ML-Agents Within Unity](Installation.md#setting-up-ml-agent-within-unity) for solution.
This is because .NET 3.5 doesn't support the `Clear()` method of `StringBuilder`; refer to [Setting Up The ML-Agents Toolkit Within Unity](Installation.md#setting-up-ml-agent-within-unity) for the solution.

### TensorFlowSharp flag not turned on.

@@ -19,7 +19,7 @@ If you have already imported the TensorFlowSharp plugin, but haven't set ENABLE_T
You need to install and enable the TensorFlowSharp plugin in order to use the internal brain.
```

This error message occurs because the TensorFlowSharp plugin won't be usage without the ENABLE_TENSORFLOW flag, refer to [Setting Up ML-Agents Within Unity](Installation.md#setting-up-ml-agent-within-unity) for solution.
This error message occurs because the TensorFlowSharp plugin won't be usable without the ENABLE_TENSORFLOW flag; refer to [Setting Up The ML-Agents Toolkit Within Unity](Installation.md#setting-up-ml-agent-within-unity) for the solution.

### Tensorflow epsilon placeholder error

18 changes: 9 additions & 9 deletions docs/Getting-Started-with-Balance-Ball.md
@@ -1,11 +1,11 @@
# Getting Started with the 3D Balance Ball Environment

This tutorial walks through the end-to-end process of opening an ML-Agents
This tutorial walks through the end-to-end process of opening an ML-Agents toolkit
example environment in Unity, building the Unity executable, training an agent
in it, and finally embedding the trained model into the Unity environment.

ML-Agents includes a number of [example environments](Learning-Environment-Examples.md)
which you can examine to help understand the different ways in which ML-Agents
The ML-Agents toolkit includes a number of [example environments](Learning-Environment-Examples.md)
which you can examine to help understand the different ways in which the ML-Agents toolkit
can be used. These environments can also serve as templates for new
environments or as ways to test new ML algorithms. After reading this tutorial,
you should be able to explore and build the example environments.
@@ -24,7 +24,7 @@ Let's get started!

## Installation

In order to install and set up ML-Agents, the Python dependencies and Unity,
In order to install and set up the ML-Agents toolkit, the Python dependencies and Unity,
see the [installation instructions](Installation.md).

## Understanding a Unity Environment (3D Balance Ball)
@@ -108,7 +108,7 @@ when you embed the trained model in the Unity application, you will change the
**Vector Observation Space**

Before making a decision, an agent collects its observation about its state
in the world. ML-Agents classifies vector observations into two types:
in the world. The ML-Agents toolkit classifies vector observations into two types:
**Continuous** and **Discrete**. The **Continuous** vector observation space
collects observations in a vector of floating point numbers. The **Discrete**
vector observation space is an index into a table of states. Most of the example
@@ -124,7 +124,7 @@ values are defined in the agent's `CollectObservations()` function.)
**Vector Action Space**

An agent is given instructions from the brain in the form of *actions*. Like
states, ML-Agents classifies actions into two types: the **Continuous**
states, the ML-Agents toolkit classifies actions into two types: the **Continuous**
vector action space is a vector of numbers that can vary continuously. What
each element of the vector means is defined by the agent logic (the PPO
training process just learns what values are better given particular state
@@ -193,7 +193,7 @@ In order to train an agent to correctly balance the ball, we will use a
Reinforcement Learning algorithm called Proximal Policy Optimization (PPO).
This is a method that has been shown to be safe, efficient, and more general
purpose than many other RL algorithms; as such, we have chosen it as the
example algorithm for use with ML-Agents. For more information on PPO,
example algorithm for use with the ML-Agents toolkit. For more information on PPO,
OpenAI has a recent [blog post](https://blog.openai.com/openai-baselines-ppo/)
explaining it.
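
For reference, the core of PPO is the clipped surrogate objective from Schulman et al. (2017); the actual trainer loss also includes value-function and entropy terms, so treat this as a summary rather than the exact implementation:

```latex
% PPO clipped surrogate objective; r_t is the new/old policy probability ratio.
L^{CLIP}(\theta) =
  \mathbb{E}_t\!\left[\min\!\big(r_t(\theta)\,\hat{A}_t,\;
  \operatorname{clip}\!\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t\big)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
```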

@@ -217,7 +217,7 @@ When the message _"Start training by pressing the Play button in the Unity Edito

**Note**: If you're using Anaconda, don't forget to activate the ml-agents environment first.

The `--train` flag tells ML-Agents to run in training mode.
The `--train` flag tells the ML-Agents toolkit to run in training mode.

**Note**: You can train using an executable rather than the Editor. To do so, follow the instructions in
[Using an Executable](Learning-Environment-Executable.md).
@@ -271,7 +271,7 @@ Because TensorFlowSharp support is still experimental, it is disabled by
default. In order to enable it, you must follow these steps. Please note that
the `Internal` Brain mode will only be available once completing these steps.

To set up the TensorFlowSharp Support, follow [Setting up ML-Agents within Unity](Basic-Guide.md#setting-up-ml-agents-within-unity) section.
To set up TensorFlowSharp support, follow the [Setting up the ML-Agents Toolkit within Unity](Basic-Guide.md#setting-up-ml-agents-within-unity) section
of the Basic Guide page.

### Embedding the trained model into Unity
2 changes: 1 addition & 1 deletion docs/Glossary.md
@@ -1,4 +1,4 @@
# ML-Agents Glossary
# ML-Agents Toolkit Glossary

* **Academy** - Unity Component which controls timing, reset, and
training/inference settings of the environment.