diff --git a/README.md b/README.md
index ee91564483..4e0d0d9470 100644
--- a/README.md
+++ b/README.md
@@ -7,6 +7,7 @@
 [![IsaacSim](https://img.shields.io/badge/IsaacSim-4.0-silver.svg)](https://docs.omniverse.nvidia.com/isaacsim/latest/overview.html)
 [![Python](https://img.shields.io/badge/python-3.10-blue.svg)](https://docs.python.org/3/whatsnew/3.10.html)
 [![Linux platform](https://img.shields.io/badge/platform-linux--64-orange.svg)](https://releases.ubuntu.com/20.04/)
+[![Windows platform](https://img.shields.io/badge/platform-windows--64-orange.svg)](https://www.microsoft.com/en-us/)
 [![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://pre-commit.com/)
 [![Docs status](https://img.shields.io/badge/docs-passing-brightgreen.svg)](https://isaac-sim.github.io/IsaacLab)
 [![License](https://img.shields.io/badge/license-BSD--3-yellow.svg)](https://opensource.org/licenses/BSD-3-Clause)
@@ -20,16 +21,6 @@ simulation capabilities for photo-realistic scenes and fast and accurate simulat
 Please refer to our [documentation page](https://isaac-sim.github.io/IsaacLab) to learn more about the
 installation steps, features, tutorials, and how to set up your project with Isaac Lab.
 
-## Announcements
-
-* [17.04.2024] [**v0.3.0**](https://github.com/isaac-sim/IsaacLab/releases/tag/v0.3.0):
-  Several improvements and bug fixes to the framework. Includes cabinet opening and dexterous manipulation environments,
-  terrain-aware patch sampling, and animation recording.
-
-* [22.12.2023] [**v0.2.0**](https://github.com/isaac-sim/IsaacLab/releases/tag/v0.2.0):
-  Significant breaking updates to enhance the modularity and user-friendliness of the framework. Also includes
-  procedural terrain generation, warp-based custom ray-casters, and legged-locomotion environments.
-
 ## Contributing to Isaac Lab
 
 We wholeheartedly welcome contributions from the community to make this framework mature and useful for everyone.
@@ -49,8 +40,23 @@ or opening a question on its [forums](https://forums.developer.nvidia.com/c/agx-
 * Please use GitHub [Discussions](https://github.com/isaac-sim/IsaacLab/discussions) for discussing ideas, asking questions, and requests for new features.
 * Github [Issues](https://github.com/isaac-sim/IsaacLab/issues) should only be used to track executable pieces of work with a definite scope and a clear deliverable. These can be fixing bugs, documentation issues, new features, or general updates.
 
-## Acknowledgement
-
-NVIDIA Isaac Sim is available freely under [individual license](https://www.nvidia.com/en-us/omniverse/download/). For more information about its license terms, please check [here](https://docs.omniverse.nvidia.com/app_isaacsim/common/NVIDIA_Omniverse_License_Agreement.html#software-support-supplement).
+## License
 
 The Isaac Lab framework is released under [BSD-3 License](LICENSE). The license files of its dependencies and assets are present in the [`docs/licenses`](docs/licenses) directory.
+
+## Acknowledgement
+
+Isaac Lab development initiated from the [Orbit](https://isaac-orbit.github.io/) framework. We would appreciate if you would cite it in academic publications as well:
+
+```
+@article{mittal2023orbit,
+   author={Mittal, Mayank and Yu, Calvin and Yu, Qinxi and Liu, Jingzhou and Rudin, Nikita and Hoeller, David and Yuan, Jia Lin and Singh, Ritvik and Guo, Yunrong and Mazhar, Hammad and Mandlekar, Ajay and Babich, Buck and State, Gavriel and Hutter, Marco and Garg, Animesh},
+   journal={IEEE Robotics and Automation Letters},
+   title={Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments},
+   year={2023},
+   volume={8},
+   number={6},
+   pages={3740-3747},
+   doi={10.1109/LRA.2023.3270034}
+}
+```
diff --git a/docs/index.rst b/docs/index.rst
index 8a9e0d99ae..8435e01947 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -31,10 +31,26 @@ For more information about the framework, please refer to the `paper `_
+Isaac Lab development initiated from the `Orbit <https://isaac-orbit.github.io/>`_ framework.
+We would appreciate if you would cite it in academic publications as well:
+
+.. code:: bibtex
+
+   @article{mittal2023orbit,
+      author={Mittal, Mayank and Yu, Calvin and Yu, Qinxi and Liu, Jingzhou and Rudin, Nikita and Hoeller, David and Yuan, Jia Lin and Singh, Ritvik and Guo, Yunrong and Mazhar, Hammad and Mandlekar, Ajay and Babich, Buck and State, Gavriel and Hutter, Marco and Garg, Animesh},
+      journal={IEEE Robotics and Automation Letters},
+      title={Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments},
+      year={2023},
+      volume={8},
+      number={6},
+      pages={3740-3747},
+      doi={10.1109/LRA.2023.3270034}
+   }
+
 Table of Contents
 =================
diff --git a/docs/source/features/workflows.rst b/docs/source/features/workflows.rst
index 60d608a787..61f57b9485 100644
--- a/docs/source/features/workflows.rst
+++ b/docs/source/features/workflows.rst
@@ -139,100 +139,3 @@ An example of implementing the reward function for the Cartpole task using the D
 We provide a more detailed tutorial for setting up a RL environment using the direct workflow at
 `Creating a Direct Workflow RL Environment <../tutorials/03_envs/create_direct_rl_env.html>`_.
-
-
-Multi-GPU Training
-------------------
-
-For complex reinforcement learning environments, it may be desirable to scale up training across multiple GPUs.
-This is possible in Isaac Lab with the ``rl_games`` RL library through the use of the
-`PyTorch distributed `_ framework.
-In this workflow, ``torch.distributed`` is used to launch multiple processes of training, where the number of
-processes must be equal to or less than the number of GPUs available. Each process runs on
-a dedicated GPU and launches its own instance of Isaac Sim and the Isaac Lab environment.
-Each process collects its own rollouts during the training process and has its own copy of the policy
-network. During training, gradients are aggregated across the processes and broadcasted back to the process
-at the end of the epoch.
-
-.. image:: ../_static/multigpu.png
-    :align: center
-    :alt: Multi-GPU training paradigm
-
-
-To train with multiple GPUs, use the following command, where ``--proc_per_node`` represents the number of available GPUs:
-
-.. code-block:: shell
-
-    python -m torch.distributed.run --nnodes=1 --nproc_per_node=2 source/standalone/workflows/rl_games/train.py --task=Isaac-Cartpole-v0 --headless --distributed
-
-
-Multi-Node Training
--------------------
-
-To scale up training beyond multiple GPUs on a single machine, it is also possible to train across multiple nodes.
-To train across multiple nodes/machines, it is required to launch an individual process on each node.
-For the master node, use the following command, where ``--proc_per_node`` represents the number of available GPUs, and ``--nnodes`` represents the number of nodes:
-
-.. code-block:: shell
-
-    python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=0 --rdzv_id=123 --rdzv_backend=c10d --rdzv_endpoint=localhost:5555 source/standalone/workflows/rl_games/train.py --task=Isaac-Cartpole-v0 --headless --distributed
-
-Note that the port (``5555``) can be replaced with any other available port.
-
-For non-master nodes, use the following command, replacing ``--node_rank`` with the index of each machine:
-
-.. code-block:: shell
-
-    python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=1 --rdzv_id=123 --rdzv_backend=c10d --rdzv_endpoint=ip_of_master_machine:5555 source/standalone/workflows/rl_games/train.py --task=Isaac-Cartpole-v0 --headless --distributed
-
-For more details on multi-node training with PyTorch, please visit the `PyTorch documentation `_. As mentioned in the PyTorch documentation, "multinode training is bottlenecked by inter-node communication latencies". When this latency is high, it is possible multi-node training will perform worse than running on a single node instance.
-
-
-Tiled Rendering
----------------
-
-Tiled rendering APIs provide a vectorized interface for collecting data from camera sensors.
-This is useful for reinforcement learning environments requiring vision in the loop.
-Tiled rendering works by concatenating camera outputs from multiple cameras and rending
-one single large image instead of multiple smaller images that would have been produced
-by each individual camera. This reduces the amount of time required for rendering and
-provides a more efficient API for working with vision data.
-
-Isaac Lab provides tiled rendering APIs for RGB and depth data through the :class:`~sensors.TiledCamera`
-class. Configurations for the tiled rendering APIs can be defined through the :class:`~sensors.TiledCameraCfg`
-class, specifying parameters such as the regex expression for all camera paths, the transform
-for the cameras, the desired data type, the type of cameras to add to the scene, and the camera
-resolution.
-
-.. code-block:: python
-
-    tiled_camera: TiledCameraCfg = TiledCameraCfg(
-        prim_path="/World/envs/env_.*/Camera",
-        offset=TiledCameraCfg.OffsetCfg(pos=(-7.0, 0.0, 3.0), rot=(0.9945, 0.0, 0.1045, 0.0), convention="world"),
-        data_types=["rgb"],
-        spawn=sim_utils.PinholeCameraCfg(
-            focal_length=24.0, focus_distance=400.0, horizontal_aperture=20.955, clipping_range=(0.1, 20.0)
-        ),
-        width=80,
-        height=80,
-    )
-
-To access the tiled rendering interface, a :class:`~sensors.TiledCamera` object can be created and used
-to retrieve data from the cameras.
-
-.. code-block:: python
-
-    tiled_camera = TiledCamera(cfg.tiled_camera)
-    data_type = "rgb"
-    data = tiled_camera.data.output[data_type]
-
-The returned data will be transformed into the shape (num_cameras, height, width, num_channels), which
-can be used directly as observation for reinforcement learning.
-
-When working with rendering, make sure to add the ``--enable_cameras`` argument when launching the
-environment. For example:
-
-.. code-block:: shell
-
-    python source/standalone/workflows/rl_games/train.py --task=Isaac-Cartpole-RGB-Camera-Direct-v0 --headless --enable_cameras
diff --git a/docs/source/how-to/master_omniverse.rst b/docs/source/how-to/master_omniverse.rst
index cdeb78e817..113b123248 100644
--- a/docs/source/how-to/master_omniverse.rst
+++ b/docs/source/how-to/master_omniverse.rst
@@ -99,11 +99,11 @@ USD basics
 `__ by Houdini, which is a 3D animation software.
 Make sure to go through the following sections:
 
-- `Quick example `__
-- `Attributes and primvars `__
-- `Composition `__
-- `Schemas `__
-- `Instances `__
+- `Quick example `__
+- `Attributes and primvars `__
+- `Composition `__
+- `Schemas `__
+- `Instances `__ and `Scene-graph Instancing `__
 
 As a test of understanding, make sure you can answer the following:
diff --git a/isaaclab.bat b/isaaclab.bat
index 129fb04f1f..f48c331b0f 100644
--- a/isaaclab.bat
+++ b/isaaclab.bat
@@ -110,7 +110,7 @@ if %errorlevel% equ 0 (
     echo [INFO] Conda environment named '%env_name%' already exists.
 ) else (
     echo [INFO] Creating conda environment named '%env_name%'...
-    call conda env create --name %env_name% -f %build_path%\environment.yml
+    call conda create -y --name %env_name% python=3.10
 )
 rem cache current paths for later
 set "cache_pythonpath=%PYTHONPATH%"
diff --git a/isaaclab.sh b/isaaclab.sh
index 8ebcbb22bb..1ef84f013b 100755
--- a/isaaclab.sh
+++ b/isaaclab.sh
@@ -115,7 +115,7 @@ setup_conda_env() {
         echo -e "[INFO] Conda environment named '${env_name}' already exists."
     else
         echo -e "[INFO] Creating conda environment named '${env_name}'..."
-        conda env create --name ${env_name} -f ${build_path}/environment.yml
+        conda create -y --name ${env_name} python=3.10
     fi
     # cache current paths for later
     cache_pythonpath=$PYTHONPATH
diff --git a/source/extensions/omni.isaac.lab/setup.py b/source/extensions/omni.isaac.lab/setup.py
index 567acd74ed..c63d1fb586 100644
--- a/source/extensions/omni.isaac.lab/setup.py
+++ b/source/extensions/omni.isaac.lab/setup.py
@@ -19,7 +19,7 @@ INSTALL_REQUIRES = [
     # generic
     "numpy",
-    "torch>=2.2.2",
+    "torch==2.2.2",
     "prettytable==3.3.0",
     "tensordict",
     "toml",
@@ -32,6 +32,8 @@
     "pyglet<2",
 ]
 
+PYTORCH_INDEX_URL = ["https://download.pytorch.org/whl/cu118"]
+
 # Installation operation
 setup(
     name="omni-isaac-lab",
@@ -45,6 +47,7 @@
     include_package_data=True,
     python_requires=">=3.10",
     install_requires=INSTALL_REQUIRES,
+    dependency_links=PYTORCH_INDEX_URL,
     packages=["omni.isaac.lab"],
     classifiers=[
         "Natural Language :: English",
diff --git a/source/extensions/omni.isaac.lab_tasks/setup.py b/source/extensions/omni.isaac.lab_tasks/setup.py
index 3686c63b7a..fa654fa321 100644
--- a/source/extensions/omni.isaac.lab_tasks/setup.py
+++ b/source/extensions/omni.isaac.lab_tasks/setup.py
@@ -21,7 +21,7 @@ INSTALL_REQUIRES = [
     # generic
     "numpy",
-    "torch>=2.2.2",
+    "torch==2.2.2",
     "torchvision>=0.14.1",  # ensure compatibility with torch 1.13.1
     # 5.26.0 introduced a breaking change, so we restricted it for now.
     # See issue https://github.com/tensorflow/tensorboard/issues/6808 for details.
@@ -34,6 +34,8 @@
     "moviepy",
 ]
 
+PYTORCH_INDEX_URL = ["https://download.pytorch.org/whl/cu118"]
+
 # Extra dependencies for RL agents
 EXTRAS_REQUIRE = {
     "sb3": ["stable-baselines3>=2.1"],
@@ -63,6 +65,7 @@
     include_package_data=True,
     python_requires=">=3.10",
     install_requires=INSTALL_REQUIRES,
+    dependency_links=PYTORCH_INDEX_URL,
     extras_require=EXTRAS_REQUIRE,
     packages=["omni.isaac.lab_tasks"],
     classifiers=[
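A note on the two `setup.py` hunks above: tightening `torch>=2.2.2` to an exact `torch==2.2.2` pin prevents pip from resolving a newer torch release that may not match the CUDA 11.8 wheels pointed to by `PYTORCH_INDEX_URL`. As an illustrative aside (not part of the patch: the `satisfies` helper below is hypothetical and ignores pre-releases, epochs, and other subtleties that real resolvers handle via the `packaging` library), the difference between the two specifiers can be sketched as:

```python
# Hypothetical helper, for illustration only. Real tools use
# packaging.specifiers.SpecifierSet rather than hand-rolled comparison.

def parse(version: str) -> tuple:
    """Split a dotted version string into a tuple of ints for ordered comparison."""
    return tuple(int(part) for part in version.split("."))

def satisfies(version: str, spec: str) -> bool:
    """Check a version against a single '==X.Y.Z' or '>=X.Y.Z' specifier."""
    op, pin = spec[:2], spec[2:]
    if op == "==":
        return parse(version) == parse(pin)
    if op == ">=":
        return parse(version) >= parse(pin)
    raise ValueError(f"unsupported operator: {op}")

available = ["2.1.0", "2.2.2", "2.3.0"]
# The old requirement admits any newer release; the new pin admits exactly one.
print([v for v in available if satisfies(v, ">=2.2.2")])  # ['2.2.2', '2.3.0']
print([v for v in available if satisfies(v, "==2.2.2")])  # ['2.2.2']
```

Note also that recent pip versions ignore setuptools' `dependency_links`, so the CUDA-specific index may need to be supplied explicitly at install time (e.g. via `--extra-index-url https://download.pytorch.org/whl/cu118`).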