Fixes camera sensor for Isaac Sim 2023.1 update (isaac-sim#333)
# Description

The camera sensor no longer worked for semantic data types after the Isaac Sim
2023.1 update. This was caused by various Replicator pipeline changes that
directly affected the camera.

This MR fixes the Camera sensor for the new Replicator APIs. It includes a few
breaking changes, which are listed in the changelog.

Additionally, the sensor tutorial `run_usd_camera.py` had a couple of issues:
it still created RigidPrims directly, used NVIDIA's debug API for drawing
markers, and contained some bugs. This MR also updates it to follow Orbit's
APIs more closely and adds an option to specify which camera to use for
`--save` and `--draw`.

Fixes isaac-sim#225

## Type of change

- Bug fix (non-breaking change which fixes an issue)
- New feature (non-breaking change which adds functionality)

## Checklist

- [x] I have run the [`pre-commit` checks](https://pre-commit.com/) with
`./orbit.sh --format`
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my
feature works
- [x] I have updated the changelog and the corresponding version in the
extension's `config/extension.toml` file
- [x] I have added my name to the `CONTRIBUTORS.md` or my name already
exists there

---------

Co-authored-by: AutonomousHansen <[email protected]>
Co-authored-by: Mayank Mittal <[email protected]>
3 people authored Mar 12, 2024
1 parent e444df7 commit 8dea21a
Showing 13 changed files with 432 additions and 162 deletions.
3 changes: 2 additions & 1 deletion .flake8
@@ -11,7 +11,8 @@ per-file-ignores=*/__init__.py:F401
# R505: Unnecessary elif after return statement
# SIM102: Use a single if-statement instead of nested if-statements
# SIM117: Merge with statements for context managers that have same scope.
ignore=E402,E501,W503,E203,D401,R504,R505,SIM102,SIM117
# SIM118: Checks for key-existence checks against dict.keys() calls.
ignore=E402,E501,W503,E203,D401,R504,R505,SIM102,SIM117,SIM118
max-line-length = 120
max-complexity = 30
exclude=_*,.vscode,.git,docs/**
1 change: 1 addition & 0 deletions docs/conf.py
@@ -137,6 +137,7 @@
"omni.isaac.version",
"omni.isaac.motion_generation",
"omni.isaac.ui",
"omni.syntheticdata",
"omni.timeline",
"omni.ui",
"gym",
42 changes: 30 additions & 12 deletions docs/source/how-to/save_camera_output.rst
@@ -14,7 +14,7 @@ directory.

.. literalinclude:: ../../../source/standalone/tutorials/04_sensors/run_usd_camera.py
:language: python
:emphasize-lines: 137-139, 172-196, 200-204, 214-232
:emphasize-lines: 171-179, 229-247, 251-264
:linenos:


@@ -27,24 +27,24 @@ images in a numpy format. For more information on the basic writer, please check

.. literalinclude:: ../../../source/standalone/tutorials/04_sensors/run_usd_camera.py
:language: python
:lines: 137-139
:dedent:
:start-at: rep_writer = rep.BasicWriter(
:end-before: # Camera positions, targets, orientations
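For reference, a minimal sketch of what this writer setup can look like. The ``output_dir`` value and
``frame_padding`` argument here are illustrative assumptions, not values taken from the tutorial:

.. code-block:: python

    import os

    import omni.replicator.core as rep

    # Create a Replicator writer that dumps annotator data to disk.
    output_dir = os.path.join(os.getcwd(), "output", "camera")
    rep_writer = rep.BasicWriter(output_dir=output_dir, frame_padding=3)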

While stepping the simulator, the images can be saved to the defined folder. Since the BasicWriter only supports
saving data in NumPy format, we first need to convert the PyTorch tensors to NumPy arrays before packing
them into a dictionary.

.. literalinclude:: ../../../source/standalone/tutorials/04_sensors/run_usd_camera.py
:language: python
:lines: 172-192
:dedent:
:start-at: # Save images from camera at camera_index
:end-at: single_cam_info = camera.data.info[camera_index]

After this step, we can save the images using the BasicWriter.

.. literalinclude:: ../../../source/standalone/tutorials/04_sensors/run_usd_camera.py
:language: python
:lines: 193-196
:dedent:
:start-at: # Pack data back into replicator format to save them using its writer
:end-at: rep_writer.write(rep_output)
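Putting these pieces together, the save step inside the simulation loop might look roughly as follows. This is
a sketch that assumes ``camera`` is an initialized ``Camera`` sensor and ``camera_index`` selects a single
environment; the packing convention mirrors the tutorial's dictionary format:

.. code-block:: python

    # Convert the torch tensors for one camera to numpy, as BasicWriter expects numpy data.
    single_cam_data = {
        key: tensor[camera_index].cpu().numpy()
        for key, tensor in camera.data.output.items()
    }
    single_cam_info = camera.data.info[camera_index]

    # Pack the data back into the Replicator annotator format expected by the writer.
    rep_output = {}
    for key, data, info in zip(single_cam_data.keys(), single_cam_data.values(), single_cam_info.values()):
        if info is not None:
            rep_output[key] = {"data": data, "info": info}
        else:
            rep_output[key] = data
    # Attach the frame count so the writer can name the saved files by frame.
    rep_output["trigger_outputs"] = {"on_time": camera.frame[camera_index]}
    rep_writer.write(rep_output)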


Projection into 3D Space
@@ -53,18 +53,32 @@
We include utilities to project the depth image into 3D space. The re-projection is done with PyTorch
operations, which allows for faster computation.

.. code-block:: python
from omni.isaac.orbit.utils.math import transform_points, unproject_depth
# Pointcloud in world frame
points_3d_cam = unproject_depth(
camera.data.output["distance_to_image_plane"], camera.data.intrinsic_matrices
)
points_3d_world = transform_points(points_3d_cam, camera.data.pos_w, camera.data.quat_w_ros)
Alternatively, we can use the :meth:`omni.isaac.orbit.sensors.camera.utils.create_pointcloud_from_depth` function
to create a point cloud from the depth image and transform it to the world frame.

.. literalinclude:: ../../../source/standalone/tutorials/04_sensors/run_usd_camera.py
:language: python
:lines: 200-204
:dedent:
:start-at: # Derive pointcloud from camera at camera_index
:end-before: # In the first few steps, things are still being instanced and Camera.data
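As a rough sketch, the utility can be called as below; the argument names are assumptions inferred from the
camera's data fields rather than a verbatim excerpt:

.. code-block:: python

    from omni.isaac.orbit.sensors.camera.utils import create_pointcloud_from_depth

    # Unproject the depth image of one camera and express the points in the world frame.
    pointcloud = create_pointcloud_from_depth(
        intrinsic_matrix=camera.data.intrinsic_matrices[camera_index],
        depth=camera.data.output[camera_index]["distance_to_image_plane"],
        position=camera.data.pos_w[camera_index],
        orientation=camera.data.quat_w_ros[camera_index],
    )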

The resulting point cloud can be visualized using the :mod:`omni.isaac.debug_draw` extension from Isaac Sim,
which makes it easy to inspect the points directly in the 3D viewport.

.. literalinclude:: ../../../source/standalone/tutorials/04_sensors/run_usd_camera.py
:language: python
:lines: 214-232
:dedent:
:start-at: # In the first few steps, things are still being instanced and Camera.data
:end-at: pc_markers.visualize(translations=pointcloud)
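A hedged sketch of how such markers might be set up before the simulation loop; the ``RAY_CASTER_MARKER_CFG``
config, prim path, and radius here are assumptions for illustration:

.. code-block:: python

    from omni.isaac.orbit.markers import VisualizationMarkers
    from omni.isaac.orbit.markers.config import RAY_CASTER_MARKER_CFG

    # Spawn small sphere markers that will be placed at every point of the cloud.
    cfg = RAY_CASTER_MARKER_CFG.replace(prim_path="/Visuals/CameraPointCloud")
    cfg.markers["hit"].radius = 0.002
    pc_markers = VisualizationMarkers(cfg)

    # Inside the loop, after computing the point cloud:
    # pc_markers.visualize(translations=pointcloud)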


Executing the script
@@ -74,7 +88,11 @@ To run the accompanying script, execute the following command:

.. code-block:: bash
./orbit.sh -p source/standalone/tutorials/04_sensors/run_usd_camera.py --save --draw
# Usage with saving and drawing
./orbit.sh -p source/standalone/tutorials/04_sensors/run_usd_camera.py --save --draw
# Usage with saving only in headless mode
./orbit.sh -p source/standalone/tutorials/04_sensors/run_usd_camera.py --save --headless --offscreen_render
The simulation should start, and you can observe different objects falling down. An output folder will be created
6 changes: 3 additions & 3 deletions docs/source/refs/issues.rst
@@ -58,16 +58,16 @@ For more information, please refer to the `PhysX Determinism documentation`_.
Blank initial frames from the camera
------------------------------------

When using the :class:`Camera` sensor in standalone scripts, the first few frames may be blank.
This is a known issue with the simulator where it needs a few steps to load the material
When using the :class:`omni.isaac.orbit.sensors.Camera` sensor in standalone scripts, the first few frames
may be blank. This is a known issue with the simulator where it needs a few steps to load the material
textures properly and fill up the render targets.

A hack to work around this is to add the following after initializing the camera sensor and setting
its pose:

.. code-block:: python
from omni.isaac.core.simulation_context import SimulationContext
from omni.isaac.orbit.sim import SimulationContext
sim = SimulationContext.instance()
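# Hypothetical completion (the original snippet is truncated at this point):
# render a few frames so that material textures load and the render targets
# fill up. The number of steps needed can vary with scene complexity.
for _ in range(12):
    sim.render()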
2 changes: 1 addition & 1 deletion docs/source/tutorials/01_assets/run_rigid_object.rst
@@ -125,7 +125,7 @@ inside the :class:`assets.RigidObject.data` attribute. This is done using the :m
.. literalinclude:: ../../../../source/standalone/tutorials/01_assets/run_rigid_object.py
:language: python
:start-at: # update buffers
:end-at: cone_object.update(sim-dt)
:end-at: cone_object.update(sim_dt)


The Code Execution
2 changes: 1 addition & 1 deletion source/extensions/omni.isaac.orbit/config/extension.toml
@@ -1,7 +1,7 @@
[package]

# Note: Semantic Versioning is used: https://semver.org/
version = "0.12.4"
version = "0.13.0"

# Description
title = "ORBIT framework for Robot Learning"
34 changes: 34 additions & 0 deletions source/extensions/omni.isaac.orbit/docs/CHANGELOG.rst
@@ -1,6 +1,40 @@
Changelog
---------

0.13.0 (2024-03-12)
~~~~~~~~~~~~~~~~~~~

Added
^^^^^

* Added support for the following data types inside the :class:`omni.isaac.orbit.sensors.Camera` class:
``instance_segmentation_fast`` and ``instance_id_segmentation_fast``. These are GPU-supported annotations
and are faster than the regular annotations.

Fixed
^^^^^

* Fixed handling of semantic filtering inside the :class:`omni.isaac.orbit.sensors.Camera` class. Earlier,
the annotator was given ``semanticTypes`` as an argument. However, with Isaac Sim 2023.1, the annotator
does not accept this argument. Instead, the mapping needs to be set on the synthetic data interface directly.
* Fixed the return shape of colored images for segmentation data types inside the
:class:`omni.isaac.orbit.sensors.Camera` class. Earlier, the images were always returned as ``int32``. Now,
they are cast to a ``uint8`` 4-channel array before being returned if colorization is enabled for the annotation type.

Removed
^^^^^^^

* Dropped support for ``instance_segmentation`` and ``instance_id_segmentation`` annotations in the
:class:`omni.isaac.orbit.sensors.Camera` class. Their "fast" counterparts should be used instead.
* Renamed the argument :attr:`omni.isaac.orbit.sensors.CameraCfg.semantic_types` to
:attr:`omni.isaac.orbit.sensors.CameraCfg.semantic_filter`. This is more aligned with Replicator's terminology
for semantic filter predicates.
* Replaced the argument :attr:`omni.isaac.orbit.sensors.CameraCfg.colorize` with separate colorized
arguments for each annotation type (:attr:`~omni.isaac.orbit.sensors.CameraCfg.colorize_instance_segmentation`,
:attr:`~omni.isaac.orbit.sensors.CameraCfg.colorize_instance_id_segmentation`, and
:attr:`~omni.isaac.orbit.sensors.CameraCfg.colorize_semantic_segmentation`).
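As an illustrative sketch only, the renamed and new attributes might be combined as follows; the remaining
``CameraCfg`` fields are elided and the values shown are assumptions, not documented defaults:

.. code-block:: python

    from omni.isaac.orbit.sensors import CameraCfg

    camera_cfg = CameraCfg(
        prim_path="/World/Camera",
        data_types=["rgb", "semantic_segmentation", "instance_segmentation_fast"],
        # Renamed from `semantic_types`: a Replicator-style semantic filter predicate.
        semantic_filter="*:*",
        # Replaces the single `colorize` flag with one toggle per annotation type.
        colorize_semantic_segmentation=True,
        colorize_instance_segmentation=False,
        colorize_instance_id_segmentation=False,
        # ... other fields such as the spawn configuration are elided ...
    )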


0.12.4 (2024-03-11)
~~~~~~~~~~~~~~~~~~~
