Fixes torch index url in setup.py and links in the documentation (isaac-sim#459)

Fixes installation and documentation:
- Now installs torch with CUDA support on Windows (see the sketch below)
- Fixes some links in the documentation

Fixes isaac-sim#404 
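
For context, the change boils down to the following pattern in each extension's `setup.py` (a minimal sketch only; the full metadata is shown in the diff below):

```python
# Sketch of the setup.py change: pin torch and point setuptools at the
# CUDA 11.8 wheel index so Windows installs pick up a CUDA-enabled build.
from setuptools import setup

INSTALL_REQUIRES = [
    "torch==2.2.2",  # pinned so a matching cu118 wheel exists
    # ... other dependencies unchanged ...
]
PYTORCH_INDEX_URL = ["https://download.pytorch.org/whl/cu118"]

setup(
    name="omni-isaac-lab",  # the same pattern is applied in omni.isaac.lab_tasks/setup.py
    install_requires=INSTALL_REQUIRES,
    dependency_links=PYTORCH_INDEX_URL,
)
```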

## Type of change

- Bug fix

## Checklist

- [x] I have run the [`pre-commit` checks](https://pre-commit.com/) with
`./isaaclab.sh --format`
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my
feature works
- [ ] I have run all the tests with `./isaaclab.sh --test` and they pass
- [ ] I have updated the changelog and the corresponding version in the
extension's `config/extension.toml` file
- [x] I have added my name to the `CONTRIBUTORS.md` or my name already
exists there
Dhoeller19 authored Jun 4, 2024
1 parent ab3a126 commit 8816fb7
Showing 8 changed files with 52 additions and 121 deletions.
32 changes: 19 additions & 13 deletions README.md
@@ -7,6 +7,7 @@
[![IsaacSim](https://img.shields.io/badge/IsaacSim-4.0-silver.svg)](https://docs.omniverse.nvidia.com/isaacsim/latest/overview.html)
[![Python](https://img.shields.io/badge/python-3.10-blue.svg)](https://docs.python.org/3/whatsnew/3.10.html)
[![Linux platform](https://img.shields.io/badge/platform-linux--64-orange.svg)](https://releases.ubuntu.com/20.04/)
[![Windows platform](https://img.shields.io/badge/platform-windows--64-orange.svg)](https://www.microsoft.com/en-us/)
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://pre-commit.com/)
[![Docs status](https://img.shields.io/badge/docs-passing-brightgreen.svg)](https://isaac-sim.github.io/IsaacLab)
[![License](https://img.shields.io/badge/license-BSD--3-yellow.svg)](https://opensource.org/licenses/BSD-3-Clause)
@@ -20,16 +21,6 @@ simulation capabilities for photo-realistic scenes and fast and accurate simulat
Please refer to our [documentation page](https://isaac-sim.github.io/IsaacLab) to learn more about the
installation steps, features, tutorials, and how to set up your project with Isaac Lab.

## Announcements

* [17.04.2024] [**v0.3.0**](https://github.com/isaac-sim/IsaacLab/releases/tag/v0.3.0):
Several improvements and bug fixes to the framework. Includes cabinet opening and dexterous manipulation environments,
terrain-aware patch sampling, and animation recording.

* [22.12.2023] [**v0.2.0**](https://github.com/isaac-sim/IsaacLab/releases/tag/v0.2.0):
Significant breaking updates to enhance the modularity and user-friendliness of the framework. Also includes
procedural terrain generation, warp-based custom ray-casters, and legged-locomotion environments.

## Contributing to Isaac Lab

We wholeheartedly welcome contributions from the community to make this framework mature and useful for everyone.
@@ -49,8 +40,23 @@ or opening a question on its [forums](https://forums.developer.nvidia.com/c/agx-
* Please use GitHub [Discussions](https://github.com/isaac-sim/IsaacLab/discussions) for discussing ideas, asking questions, and requesting new features.
* GitHub [Issues](https://github.com/isaac-sim/IsaacLab/issues) should only be used to track executable pieces of work with a definite scope and a clear deliverable. These can be fixing bugs, documentation issues, new features, or general updates.

## Acknowledgement

NVIDIA Isaac Sim is available freely under [individual license](https://www.nvidia.com/en-us/omniverse/download/). For more information about its license terms, please check [here](https://docs.omniverse.nvidia.com/app_isaacsim/common/NVIDIA_Omniverse_License_Agreement.html#software-support-supplement).
## License

The Isaac Lab framework is released under [BSD-3 License](LICENSE). The license files of its dependencies and assets are present in the [`docs/licenses`](docs/licenses) directory.

## Acknowledgement

Isaac Lab development was initiated from the [Orbit](https://isaac-orbit.github.io/) framework. We would appreciate it if you would cite it in academic publications as well:

```
@article{mittal2023orbit,
author={Mittal, Mayank and Yu, Calvin and Yu, Qinxi and Liu, Jingzhou and Rudin, Nikita and Hoeller, David and Yuan, Jia Lin and Singh, Ritvik and Guo, Yunrong and Mazhar, Hammad and Mandlekar, Ajay and Babich, Buck and State, Gavriel and Hutter, Marco and Garg, Animesh},
journal={IEEE Robotics and Automation Letters},
title={Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments},
year={2023},
volume={8},
number={6},
pages={3740-3747},
doi={10.1109/LRA.2023.3270034}
}
```
20 changes: 18 additions & 2 deletions docs/index.rst
@@ -31,10 +31,26 @@ For more information about the framework, please refer to the `paper <https://ar
License
=======

NVIDIA Isaac Sim is provided under the NVIDIA End User License Agreement. However, the
Isaac Lab framework is open-sourced under the BSD-3-Clause license.
The Isaac Lab framework is open-sourced under the BSD-3-Clause license.
Please refer to :ref:`license` for more details.

Acknowledgement
===============
Isaac Lab development was initiated from the `Orbit <https://isaac-orbit.github.io/>`_ framework. We would appreciate it if you would cite it in academic publications as well:

.. code:: bibtex

   @article{mittal2023orbit,
      author={Mittal, Mayank and Yu, Calvin and Yu, Qinxi and Liu, Jingzhou and Rudin, Nikita and Hoeller, David and Yuan, Jia Lin and Singh, Ritvik and Guo, Yunrong and Mazhar, Hammad and Mandlekar, Ajay and Babich, Buck and State, Gavriel and Hutter, Marco and Garg, Animesh},
      journal={IEEE Robotics and Automation Letters},
      title={Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments},
      year={2023},
      volume={8},
      number={6},
      pages={3740-3747},
      doi={10.1109/LRA.2023.3270034}
   }

Table of Contents
=================
97 changes: 0 additions & 97 deletions docs/source/features/workflows.rst
@@ -139,100 +139,3 @@ An example of implementing the reward function for the Cartpole task using the D

We provide a more detailed tutorial for setting up a RL environment using the direct workflow at
`Creating a Direct Workflow RL Environment <../tutorials/03_envs/create_direct_rl_env.html>`_.


Multi-GPU Training
------------------

For complex reinforcement learning environments, it may be desirable to scale up training across multiple GPUs.
This is possible in Isaac Lab with the ``rl_games`` RL library through the use of the
`PyTorch distributed <https://pytorch.org/docs/stable/distributed.html>`_ framework.
In this workflow, ``torch.distributed`` is used to launch multiple processes of training, where the number of
processes must be equal to or less than the number of GPUs available. Each process runs on
a dedicated GPU and launches its own instance of Isaac Sim and the Isaac Lab environment.
Each process collects its own rollouts during the training process and has its own copy of the policy
network. During training, gradients are aggregated across the processes and broadcast back to each process
at the end of the epoch.
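
The per-process setup and the gradient aggregation described above can be sketched with plain
``torch.distributed`` calls (illustrative only; this is not the ``rl_games`` implementation, and the
helper names are invented for this example):

.. code-block:: python

    # Illustrative sketch -- not the rl_games implementation.
    # One process per GPU; gradients are summed across processes and averaged
    # before the optimizer step, matching the paradigm described above.
    import os

    import torch
    import torch.distributed as dist


    def setup_distributed() -> int:
        """Bind this process to its GPU and join the process group (hypothetical helper)."""
        local_rank = int(os.environ["LOCAL_RANK"])  # set by torch.distributed.run
        torch.cuda.set_device(local_rank)
        dist.init_process_group(backend="nccl")
        return local_rank


    def average_gradients(model: torch.nn.Module) -> None:
        """Aggregate gradients from all processes and average them (hypothetical helper)."""
        world_size = dist.get_world_size()
        for param in model.parameters():
            if param.grad is not None:
                dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
                param.grad /= world_size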

.. image:: ../_static/multigpu.png
:align: center
:alt: Multi-GPU training paradigm


To train with multiple GPUs, use the following command, where ``--nproc_per_node`` represents the number of available GPUs:

.. code-block:: shell

    python -m torch.distributed.run --nnodes=1 --nproc_per_node=2 source/standalone/workflows/rl_games/train.py --task=Isaac-Cartpole-v0 --headless --distributed

Multi-Node Training
-------------------

To scale up training beyond multiple GPUs on a single machine, it is also possible to train across multiple nodes.
To train across multiple nodes/machines, an individual process must be launched on each node.
For the master node, use the following command, where ``--nproc_per_node`` represents the number of available GPUs, and ``--nnodes`` represents the number of nodes:

.. code-block:: shell

    python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=0 --rdzv_id=123 --rdzv_backend=c10d --rdzv_endpoint=localhost:5555 source/standalone/workflows/rl_games/train.py --task=Isaac-Cartpole-v0 --headless --distributed

Note that the port (``5555``) can be replaced with any other available port.

For non-master nodes, use the following command, replacing ``--node_rank`` with the index of each machine:

.. code-block:: shell

    python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=1 --rdzv_id=123 --rdzv_backend=c10d --rdzv_endpoint=ip_of_master_machine:5555 source/standalone/workflows/rl_games/train.py --task=Isaac-Cartpole-v0 --headless --distributed

For more details on multi-node training with PyTorch, please visit the `PyTorch documentation <https://pytorch.org/tutorials/intermediate/ddp_series_multinode.html>`_. As mentioned in the PyTorch documentation, "multinode training is bottlenecked by inter-node communication latencies". When this latency is high, it is possible that multi-node training will perform worse than running on a single-node instance.


Tiled Rendering
---------------

Tiled rendering APIs provide a vectorized interface for collecting data from camera sensors.
This is useful for reinforcement learning environments requiring vision in the loop.
Tiled rendering works by concatenating camera outputs from multiple cameras and rendering
a single large image instead of multiple smaller images that would have been produced
by each individual camera. This reduces the amount of time required for rendering and
provides a more efficient API for working with vision data.

Isaac Lab provides tiled rendering APIs for RGB and depth data through the :class:`~sensors.TiledCamera`
class. Configurations for the tiled rendering APIs can be defined through the :class:`~sensors.TiledCameraCfg`
class, specifying parameters such as the regular expression for all camera paths, the transform
for the cameras, the desired data type, the type of cameras to add to the scene, and the camera
resolution.

.. code-block:: python

    tiled_camera: TiledCameraCfg = TiledCameraCfg(
        prim_path="/World/envs/env_.*/Camera",
        offset=TiledCameraCfg.OffsetCfg(pos=(-7.0, 0.0, 3.0), rot=(0.9945, 0.0, 0.1045, 0.0), convention="world"),
        data_types=["rgb"],
        spawn=sim_utils.PinholeCameraCfg(
            focal_length=24.0, focus_distance=400.0, horizontal_aperture=20.955, clipping_range=(0.1, 20.0)
        ),
        width=80,
        height=80,
    )

To access the tiled rendering interface, a :class:`~sensors.TiledCamera` object can be created and used
to retrieve data from the cameras.

.. code-block:: python

    tiled_camera = TiledCamera(cfg.tiled_camera)
    data_type = "rgb"
    data = tiled_camera.data.output[data_type]

The returned data will be transformed into the shape (num_cameras, height, width, num_channels), which
can be used directly as observations for reinforcement learning.
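
As an illustration (assuming the ``tiled_camera`` object created in the snippet above), the buffer can
be normalized and flattened per environment before being passed to the policy:

.. code-block:: python

    # Illustrative sketch: turn the tiled RGB buffer into normalized per-environment
    # observations. Assumes the `tiled_camera` object from the snippet above.
    rgb = tiled_camera.data.output["rgb"]  # shape: (num_cameras, height, width, num_channels)
    obs = rgb.float() / 255.0              # scale pixel values to [0, 1]
    obs = obs.reshape(rgb.shape[0], -1)    # flatten to (num_cameras, height * width * num_channels)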

When working with rendering, make sure to add the ``--enable_cameras`` argument when launching the
environment. For example:

.. code-block:: shell

    python source/standalone/workflows/rl_games/train.py --task=Isaac-Cartpole-RGB-Camera-Direct-v0 --headless --enable_cameras

10 changes: 5 additions & 5 deletions docs/source/how-to/master_omniverse.rst
@@ -99,11 +99,11 @@ USD basics <https://www.sidefx.com/docs/houdini/solaris/usd.html>`__ by
Houdini, which is a 3D animation software.
Make sure to go through the following sections:

- `Quick example <https://www.sidefx.com/docs/houdini/solaris/usd.html%23quick-example>`__
- `Attributes and primvars <https://www.sidefx.com/docs/houdini/solaris/usd.html%23attrs>`__
- `Composition <https://www.sidefx.com/docs/houdini/solaris/usd.html%23compose>`__
- `Schemas <https://www.sidefx.com/docs/houdini/solaris/usd.html%23schemas>`__
- `Instances <https://www.sidefx.com/docs/houdini/solaris/usd.html%23instancing>`__
- `Quick example <https://www.sidefx.com/docs/houdini/solaris/usd.html#quick-example>`__
- `Attributes and primvars <https://www.sidefx.com/docs/houdini/solaris/usd.html#attrs>`__
- `Composition <https://www.sidefx.com/docs/houdini/solaris/usd.html#compose>`__
- `Schemas <https://www.sidefx.com/docs/houdini/solaris/usd.html#schemas>`__
- `Instances <https://www.sidefx.com/docs/houdini/solaris/usd.html#instancing>`__
and `Scene-graph Instancing <https://openusd.org/dev/api/_usd__page__scenegraph_instancing.html>`__

As a test of understanding, make sure you can answer the following:
2 changes: 1 addition & 1 deletion isaaclab.bat
@@ -110,7 +110,7 @@ if %errorlevel% equ 0 (
echo [INFO] Conda environment named '%env_name%' already exists.
) else (
echo [INFO] Creating conda environment named '%env_name%'...
call conda env create --name %env_name% -f %build_path%\environment.yml
call conda create -y --name %env_name% python=3.10
)
rem cache current paths for later
set "cache_pythonpath=%PYTHONPATH%"
2 changes: 1 addition & 1 deletion isaaclab.sh
@@ -115,7 +115,7 @@ setup_conda_env() {
echo -e "[INFO] Conda environment named '${env_name}' already exists."
else
echo -e "[INFO] Creating conda environment named '${env_name}'..."
conda env create --name ${env_name} -f ${build_path}/environment.yml
conda create -y --name ${env_name} python=3.10
fi
# cache current paths for later
cache_pythonpath=$PYTHONPATH
5 changes: 4 additions & 1 deletion source/extensions/omni.isaac.lab/setup.py
@@ -19,7 +19,7 @@
INSTALL_REQUIRES = [
# generic
"numpy",
"torch>=2.2.2",
"torch==2.2.2",
"prettytable==3.3.0",
"tensordict",
"toml",
@@ -32,6 +32,8 @@
"pyglet<2",
]

PYTORCH_INDEX_URL = ["https://download.pytorch.org/whl/cu118"]

# Installation operation
setup(
name="omni-isaac-lab",
@@ -45,6 +47,7 @@
include_package_data=True,
python_requires=">=3.10",
install_requires=INSTALL_REQUIRES,
dependency_links=PYTORCH_INDEX_URL,
packages=["omni.isaac.lab"],
classifiers=[
"Natural Language :: English",
5 changes: 4 additions & 1 deletion source/extensions/omni.isaac.lab_tasks/setup.py
@@ -21,7 +21,7 @@
INSTALL_REQUIRES = [
# generic
"numpy",
"torch>=2.2.2",
"torch==2.2.2",
"torchvision>=0.14.1", # ensure compatibility with torch 1.13.1
# 5.26.0 introduced a breaking change, so we restricted it for now.
# See issue https://github.com/tensorflow/tensorboard/issues/6808 for details.
@@ -34,6 +34,8 @@
"moviepy",
]

PYTORCH_INDEX_URL = ["https://download.pytorch.org/whl/cu118"]

# Extra dependencies for RL agents
EXTRAS_REQUIRE = {
"sb3": ["stable-baselines3>=2.1"],
@@ -63,6 +65,7 @@
include_package_data=True,
python_requires=">=3.10",
install_requires=INSTALL_REQUIRES,
dependency_links=PYTORCH_INDEX_URL,
extras_require=EXTRAS_REQUIRE,
packages=["omni.isaac.lab_tasks"],
classifiers=[
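
A quick post-install sanity check (illustrative, not part of this diff) for what the index URL above is meant to guarantee, namely a CUDA-enabled torch build, including on Windows:

```python
# Illustrative post-install check: verify that the installed torch wheel was
# built with CUDA support, which the cu118 index URL above is meant to ensure.
import torch

print(torch.__version__)          # a CUDA build typically reports e.g. "2.2.2+cu118"
print(torch.cuda.is_available())  # True when a CUDA-capable GPU and driver are present
```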

