# Maze Runner Game

## Table of Contents

- [Project Overview](#project-overview)
- [Features](#features)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Usage](#usage)
- [Implementation Details](#implementation-details)
- [Next Steps](#next-steps)
- [Contributing](#contributing)
- [License](#license)
- [Contact](#contact)
- [Acknowledgements](#acknowledgements)
## Project Overview

This project implements a basic maze runner game designed to be trained with the Q-learning algorithm. The game is built with OpenAI Gym, which provides a customized environment for reinforcement learning. The main components are a custom environment and a usage example.
## Features

- Custom maze environment with a configurable grid size
- Walls and a goal position
- Four possible actions: up, right, down, left
- Reward system: a positive reward for reaching the goal, a small negative reward for each step
- Visualization of the maze and the agent's position
## Prerequisites

Before you begin, ensure you have met the following requirements:

- You have installed Python 3.7 or later
- You have a Windows, Linux, or macOS machine
- You have installed the required dependencies (see the Installation section)
## Installation

To install the Maze Runner Game, follow these steps:

- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/maze-runner-qlearning.git
  ```

- Navigate to the project directory:

  ```bash
  cd maze-runner-qlearning
  ```

- Create a virtual environment (optional but recommended):

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```
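The contents of `requirements.txt` are not reproduced here; based on the libraries named in the Acknowledgements section, it likely lists at least:

```
gym
numpy
matplotlib
```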
For more details on the project structure and implementation, see the Implementation Details section.
## Usage

To use the Maze Runner Game:

- Run the main script:

  ```bash
  python main.py
  ```

- To use the environment in your own script:

  ```python
  import gym

  # NOTE: gym.make below assumes 'CustomMaze-v0' has already been
  # registered, which typically happens when the project's environment
  # module is imported (see Implementation Details)
  env = gym.make('CustomMaze-v0')
  state = env.reset()
  done = False
  while not done:
      action = env.action_space.sample()  # Take a random action
      state, reward, done, _ = env.step(action)
      env.render()
      print(f"State: {state}, Reward: {reward}, Done: {done}")
  ```

This example demonstrates how to interact with the `CustomMazeEnv` using the Gym interface.
## Implementation Details

The `CustomMazeEnv` class extends `gym.Env` and implements the following methods:

- `__init__()`: Initializes the environment with a 5x5 grid and sets the goal and wall positions.
- `reset()`: Resets the environment to the initial state.
- `step(action)`: Executes the given action and returns the new state, reward, and done flag.
- `render()`: Visualizes the current state of the maze.
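As a rough illustration of this interface, here is a minimal skeleton consistent with the description above. The specific wall positions, reward values, state encoding, and rendering style are assumptions made for the sketch, not the project's actual code:

```python
import gym
import numpy as np
from gym import spaces


class CustomMazeEnv(gym.Env):
    """Minimal sketch of the environment interface described above."""

    def __init__(self):
        super().__init__()
        self.grid_size = 5
        self.goal = (4, 4)                      # assumed goal position
        self.walls = {(1, 1), (2, 3)}           # assumed wall positions
        self.action_space = spaces.Discrete(4)  # up, right, down, left
        self.observation_space = spaces.Discrete(self.grid_size ** 2)
        self.agent = (0, 0)

    def reset(self):
        self.agent = (0, 0)
        return self._state()

    def step(self, action):
        moves = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left
        dr, dc = moves[action]
        r, c = self.agent[0] + dr, self.agent[1] + dc
        # Ignore moves that hit a wall or leave the grid
        if 0 <= r < self.grid_size and 0 <= c < self.grid_size \
                and (r, c) not in self.walls:
            self.agent = (r, c)
        done = self.agent == self.goal
        reward = 1.0 if done else -0.01  # assumed reward values
        return self._state(), reward, done, {}

    def render(self, mode='human'):
        grid = np.full((self.grid_size, self.grid_size), '.')
        for wall in self.walls:
            grid[wall] = '#'
        grid[self.goal] = 'G'
        grid[self.agent] = 'A'
        print('\n'.join(' '.join(row) for row in grid))

    def _state(self):
        # Flatten (row, col) into a single integer index, a convenient
        # state representation for tabular Q-learning
        return self.agent[0] * self.grid_size + self.agent[1]
```

Encoding the state as a single integer keeps the observation space `Discrete`, which pairs naturally with a tabular Q-table (see Next Steps).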
For usage examples, see the Usage section.
The custom environment is registered with Gym under the ID `'CustomMaze-v0'`, which allows you to create it with `gym.make('CustomMaze-v0')` as shown in the Usage section.
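For reference, classic Gym environments are registered through `gym.envs.registration.register`; a sketch, with a hypothetical module path standing in for the project's actual layout:

```python
from gym.envs.registration import register

register(
    id='CustomMaze-v0',
    entry_point='maze_env:CustomMazeEnv',  # hypothetical module:class path
)
```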
## Next Steps

- Implement the Q-learning algorithm (a minimal sketch of the tabular update appears after this list)
- Train the agent to navigate the maze efficiently
- Experiment with different maze configurations and learning parameters
- Analyze and visualize the learning progress
- Implement a more advanced reinforcement learning algorithm (e.g., a Deep Q-Network)
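As a starting point for the first item above, here is a minimal tabular Q-learning sketch. It assumes the classic Gym `reset`/`step` API used elsewhere in this README and integer state indices for the 5x5 grid; the hyperparameters are illustrative, and none of this is the project's actual training code:

```python
import gym
import numpy as np

# Assumes 'CustomMaze-v0' has already been registered with Gym
env = gym.make('CustomMaze-v0')

n_states = 5 * 5                        # one entry per cell of the 5x5 grid
n_actions = env.action_space.n          # up, right, down, left
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # illustrative hyperparameters

Q = np.zeros((n_states, n_actions))

for episode in range(1000):
    state = env.reset()
    done = False
    while not done:
        # Epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done, _ = env.step(action)
        # Q-learning update: nudge Q[s, a] toward r + gamma * max_a' Q[s', a']
        Q[state, action] += alpha * (
            reward + gamma * np.max(Q[next_state]) - Q[state, action]
        )
        state = next_state
```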
For more information on how to contribute to these next steps, see the Contributing section.
## Contributing

Contributions to this project are welcome. To contribute:

- Fork the repository.
- Create a new branch:

  ```bash
  git checkout -b <branch_name>
  ```

- Make your changes and commit them:

  ```bash
  git commit -m '<commit_message>'
  ```

- Push to your branch:

  ```bash
  git push origin <branch_name>
  ```

- Create the pull request.

Alternatively, see the GitHub documentation on creating a pull request.
Before contributing, please review the project features and next steps to align your contributions with the project goals.
## License

This project is licensed under the MIT License - see the `LICENSE.md` file for details.
## Contact

Author: Chirag Sindhwani

If you want to contact me, you can reach me at [email protected]. For bug reports or feature requests, please use the GitHub issue tracker.
## Acknowledgements

- OpenAI Gym for the reinforcement learning framework
- NumPy for numerical computing
- Matplotlib for visualization
These libraries are essential for running the project. Make sure to install them as described in the Installation section.