Isaac Sensor Extension [omni.isaac.sensor]

The Isaac Sensor Extension provides a set of simulated physics-based sensors, such as the contact sensor, inertial measurement unit (IMU) sensor, effort sensor, and RTX lidar, along with interfaces to access them in the simulator.

Contact Sensor

IMU sensor

Effort sensor

Lidar RTX sensor

Rotating Lidar PhysX sensor

Camera sensor

class Camera(prim_path: str, name: str = 'camera', frequency: Optional[int] = None, dt: Optional[str] = None, resolution: Optional[Tuple[int, int]] = None, position: Optional[numpy.ndarray] = None, orientation: Optional[numpy.ndarray] = None, translation: Optional[numpy.ndarray] = None, render_product_path: Optional[str] = None)

Provides high-level functions to deal with a camera prim and its attributes/properties. If there is a camera prim present at the path, it will be used. Otherwise, a new camera prim will be created at the specified prim path.

Parameters
  • prim_path (str) – prim path of the Camera Prim to encapsulate or create.

  • name (str, optional) – shortname to be used as a key by Scene class. Note: needs to be unique if the object is added to the Scene. Defaults to “camera”.

  • frequency (Optional[int], optional) – Frequency of the sensor (i.e., how often the data frame is updated). Defaults to None.

  • dt (Optional[str], optional) – dt of the sensor (i.e., the period at which the data frame is updated). Defaults to None.

  • resolution (Optional[Tuple[int, int]], optional) – resolution of the camera (width, height). Defaults to None.

  • position (Optional[Sequence[float]], optional) – position in the world frame of the prim. shape is (3, ). Defaults to None, which means left unchanged.

  • translation (Optional[Sequence[float]], optional) – translation in the local frame of the prim (with respect to its parent prim). shape is (3, ). Defaults to None, which means left unchanged.

  • orientation (Optional[Sequence[float]], optional) – quaternion orientation in the world/ local frame of the prim (depends if translation or position is specified). quaternion is scalar-first (w, x, y, z). shape is (4, ). Defaults to None, which means left unchanged.

  • render_product_path (str) – path to an existing render product to use instead of creating a new one; the resolution and the camera attached to this render product will be set based on the input arguments. Note: using the same render product path on two Camera objects with different camera prims or resolutions is not supported. Defaults to None.
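
For reference, a minimal construction sketch (the prim path, pose, and resolution below are illustrative, and assume a stage is already open):

import numpy as np
from omni.isaac.sensor import Camera

camera = Camera(
    prim_path="/World/camera",                   # illustrative prim path
    resolution=(1280, 720),                      # (width, height)
    position=np.array([0.0, 0.0, 2.0]),          # world-frame position, shape (3,)
    orientation=np.array([1.0, 0.0, 0.0, 0.0]),  # scalar-first quaternion (w, x, y, z)
)
camera.initialize()  # see initialize(): to be called before use, after a world reset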

add_bounding_box_2d_loose_to_frame() None

Attach the bounding_box_2d_loose annotator to this camera. The bounding_box_2d_loose annotator returns:

np.array
shape: (num_objects, 1)
dtype: np.dtype([("semanticId", "<u4"), ("x_min", "<i4"), ("y_min", "<i4"), ("x_max", "<i4"), ("y_max", "<i4"), ("occlusionRatio", "<f4")])

See more details: https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/annotators_details.html#bounding-box-2d-loose

add_bounding_box_2d_tight_to_frame() None

Attach the bounding_box_2d_tight annotator to this camera. The bounding_box_2d_tight annotator returns:

np.array
shape: (num_objects, 1)
dtype: np.dtype([("semanticId", "<u4"), ("x_min", "<i4"), ("y_min", "<i4"), ("x_max", "<i4"), ("y_max", "<i4"), ("occlusionRatio", "<f4")])

See more details: https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/annotators_details.html#bounding-box-2d-tight

add_bounding_box_3d_to_frame() None
add_distance_to_camera_to_frame() None

Attach the distance_to_camera annotator to this camera. The distance_to_camera annotator returns:

np.array
shape: (width, height, 1)
dtype: np.float32

See more details: https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/annotators_details.html#distance-to-camera

add_distance_to_image_plane_to_frame() None

Attach the distance_to_image_plane annotator to this camera. The distance_to_image_plane annotator returns:

np.array
shape: (width, height, 1)
dtype: np.float32

See more details: https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/annotators_details.html#distance-to-image-plane

add_instance_id_segmentation_to_frame() None

Attach the instance_id_segmentation annotator to this camera. The instance_id_segmentation annotator returns:

np.array
shape: (width, height, 1), or (width, height, 4) if colorize is set to true
dtype: np.uint32, or np.uint8 if colorize is set to true

See more details: https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/annotators_details.html#instance-id-segmentation

add_instance_segmentation_to_frame() None

Attach the instance_segmentation annotator to this camera. The main difference between instance id segmentation and instance segmentation is that the instance segmentation annotator goes down the hierarchy to the lowest-level prim that has semantic labels, whereas instance id segmentation always goes down to the leaf prim. The instance_segmentation annotator returns:

np.array
shape: (width, height, 1), or (width, height, 4) if colorize is set to true
dtype: np.uint32, or np.uint8 if colorize is set to true

See more details: https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/annotators_details.html#instance-segmentation

add_motion_vectors_to_frame() None

Attach the motion vectors annotator to this camera. The motion vectors annotator returns:

np.array
shape: (width, height, 4)
dtype: np.float32

See more details: https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/annotators_details.html#motion-vectors

add_normals_to_frame() None

Attach the normals annotator to this camera. The normals annotator returns:

np.array
shape: (width, height, 4)
dtype: np.float32

See more details: https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/annotators_details.html#normals

add_occlusion_to_frame() None

Attach the occlusion annotator to this camera. The occlusion annotator returns:

np.array
shape: (num_objects, 1)
dtype: np.dtype([("instanceId", "<u4"), ("semanticId", "<u4"), ("occlusionRatio", "<f4")])

add_pointcloud_to_frame(include_unlabelled: bool = False)

Attach the pointcloud annotator to this camera. The pointcloud annotator returns:

np.array
shape: (num_points, 3)
dtype: np.float32

See more details: https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/annotators_details.html#point-cloud

add_semantic_segmentation_to_frame() None

Attach the semantic_segmentation annotator to this camera. The semantic_segmentation annotator returns:

np.array
shape: (width, height, 1), or (width, height, 4) if colorize is set to true
dtype: np.uint32, or np.uint8 if colorize is set to true

See more details: https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/annotators_details.html#semantic-segmentation

apply_visual_material(visual_material: omni.isaac.core.materials.visual_material.VisualMaterial, weaker_than_descendants: bool = False) None

Apply visual material to the held prim and optionally its descendants.

Parameters
  • visual_material (VisualMaterial) – visual material to be applied to the held prim. Currently supports PreviewSurface, OmniPBR and OmniGlass.

  • weaker_than_descendants (bool, optional) – True if the material shouldn’t override the descendants materials, otherwise False. Defaults to False.

Example:

>>> import numpy as np
>>> from omni.isaac.core.materials import OmniGlass
>>>
>>> # create a dark-red glass visual material
>>> material = OmniGlass(
...     prim_path="/World/material/glass",  # path to the material prim to create
...     ior=1.25,
...     depth=0.001,
...     thin_walled=False,
...     color=np.array([0.5, 0.0, 0.0])
... )
>>> prim.apply_visual_material(material)
get_applied_visual_material() omni.isaac.core.materials.visual_material.VisualMaterial

Return the currently applied visual material, if it was applied using apply_visual_material or if it is one of the following materials that was already applied: PreviewSurface, OmniPBR or OmniGlass.

Returns

the current applied visual material if its type is currently supported.

Return type

VisualMaterial

Example:

>>> # given a visual material applied
>>> prim.get_applied_visual_material()
<omni.isaac.core.materials.omni_glass.OmniGlass object at 0x7f36263106a0>
get_aspect_ratio() float
Returns

ratio between width and height

Return type

float

get_clipping_range() Tuple[float, float]
Returns

near_distance and far_distance respectively.

Return type

Tuple[float, float]

get_current_frame(clone=False) dict
Parameters

clone (bool, optional) – if True, returns a deepcopy of the current frame. Defaults to False.

Returns

returns the current frame of data

Return type

dict
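
A minimal sketch of fetching annotator data (assumes an initialized camera that is rendering, and that the frame dict is keyed by annotator name):

camera.add_motion_vectors_to_frame()
# ... step the simulation so the annotator produces data ...
frame = camera.get_current_frame(clone=True)  # deepcopy, so later updates won't mutate it
motion_vectors = frame["motion_vectors"]      # np.float32, shape (width, height, 4)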

get_default_state() omni.isaac.core.utils.types.XFormPrimState

Get the default prim states (spatial position and orientation).

Returns

an object that contains the default state of the prim (position and orientation)

Return type

XFormPrimState

Example:

>>> state = prim.get_default_state()
>>> state
<omni.isaac.core.utils.types.XFormPrimState object at 0x7f33addda650>
>>>
>>> state.position
[-4.5299529e-08 -1.8347054e-09 -2.8610229e-08]
>>> state.orientation
[1. 0. 0. 0.]
get_depth() numpy.ndarray
Returns

(n x m x 1) depth data for each point.

Return type

depth (np.ndarray)

get_dt() float
Returns

gets the dt to acquire new data frames

Return type

float

get_fisheye_polynomial_properties() Tuple[float, float, float, float, float, List]
Returns

nominal_width, nominal_height, optical_centre_x,

optical_centre_y, max_fov and polynomial respectively.

Return type

Tuple[float, float, float, float, float, List]

get_focal_length() float
Returns

the focal length (longer lens lengths give a narrower FOV; shorter lens lengths give a wider FOV)

Return type

float

get_focus_distance() float
Returns

Distance from the camera to the focus plane (in stage units).

Return type

float

get_frequency() float
Returns

gets the frequency to acquire new data frames

Return type

float

get_horizontal_aperture() float

Returns

Emulates sensor/film width on a camera.

Return type

float

get_horizontal_fov() float
Returns

horizontal field of view in pixels

Return type

float

get_image_coords_from_world_points(points_3d: numpy.ndarray) numpy.ndarray
Using pinhole perspective projection, this method projects 3d points in the world frame to the image plane, giving the pixel coordinates [[0, width], [0, height]].

Parameters

points_3d (np.ndarray) – 3d points (X, Y, Z) in world frame. shape is (n, 3) where n is the number of points.

Returns

2d points (u, v) corresponds to the pixel coordinates. shape is (n, 2) where n is the number of points.

Return type

np.ndarray
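
A minimal sketch (assumes an initialized camera; the points are illustrative):

import numpy as np

points_3d = np.array([[1.0, 0.5, 0.2],
                      [2.0, -0.3, 0.7]])  # world-frame points, shape (2, 3)
points_2d = camera.get_image_coords_from_world_points(points_3d)  # pixel (u, v), shape (2, 2)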

get_intrinsics_matrix() numpy.ndarray
Returns

the intrinsics of the camera (used for calibration)

Return type

np.ndarray

get_lens_aperture() float
Returns

controls lens aperture (i.e., focusing). 0 turns off focusing.

Return type

float

get_local_pose(camera_axes: str = 'world') Tuple[numpy.ndarray, numpy.ndarray]

Gets prim’s pose with respect to the local frame (the prim’s parent frame in the world axes).

Parameters

camera_axes (str, optional) – camera axes, world is (+Z up, +X forward), ros is (+Y up, +Z forward) and usd is (+Y up and -Z forward). Defaults to “world”.

Returns

first index is position in the local frame of the prim. shape is (3, ).

second index is quaternion orientation in the local frame of the prim. quaternion is scalar-first (w, x, y, z). shape is (4, ).

Return type

Tuple[np.ndarray, np.ndarray]

get_local_scale() numpy.ndarray

Get prim’s scale with respect to the local frame (the parent’s frame)

Returns

scale applied to the prim’s dimensions in the local frame. shape is (3, ).

Return type

np.ndarray

Example:

>>> prim.get_local_scale()
[1. 1. 1.]
get_pointcloud() numpy.ndarray
Returns

(N x 3) 3d points (X, Y, Z) in camera frame. Shape is (N x 3) where N is the number of points.

Return type

pointcloud (np.ndarray)

Note

This currently uses the depth annotator to generate the pointcloud. In the future, this will be switched to use the pointcloud annotator.
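
A minimal sketch (per the note above, the pointcloud is derived from depth, assumed here to come from the distance_to_image_plane annotator):

camera.add_distance_to_image_plane_to_frame()  # depth source for the pointcloud
# ... step the simulation so depth data is available ...
points = camera.get_pointcloud()  # shape (N, 3), camera frame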

get_projection_mode() str
Returns

perspective or orthographic.

Return type

str

get_projection_type() str
Returns

pinhole, fisheyeOrthographic, fisheyeEquidistant, fisheyeEquisolid, fisheyePolynomial or fisheyeSpherical

Return type

str

get_render_product_path() str
Returns

gets the path to the render product attached to this camera

Return type

string

get_resolution() Tuple[int, int]
Returns

width and height respectively.

Return type

Tuple[int, int]

get_rgb() numpy.ndarray
Returns

(N x 3) RGB color data for each point.

Return type

rgb (np.ndarray)

get_rgba() numpy.ndarray
Returns

(N x 4) RGBA color data for each point.

Return type

rgba (np.ndarray)

get_shutter_properties() Tuple[float, float]
Returns

delay_open and delay_close respectively.

Return type

Tuple[float, float]

get_stereo_role() str
Returns

mono, left or right.

Return type

str

get_vertical_aperture() float
Returns

Emulates sensor/film height on a camera.

Return type

float

get_vertical_fov() float
Returns

vertical field of view in pixels

Return type

float

get_view_matrix_ros()

3D points in World Frame -> 3D points in Camera Ros Frame

Returns

the view matrix that transforms 3d points in the world frame to 3d points in the camera axes with the ROS camera convention.

Return type

np.ndarray

get_visibility() bool
Returns

True if the prim is visible in the stage, False otherwise.

Return type

bool

Example:

>>> # get the visible state of a visible prim on the stage
>>> prim.get_visibility()
True
get_world_points_from_image_coords(points_2d: numpy.ndarray, depth: numpy.ndarray)
Using pinhole perspective projection, this method does the inverse projection given the depth of the pixels.

Parameters
  • points_2d (np.ndarray) – 2d points (u, v) corresponds to the pixel coordinates. shape is (n, 2) where n is the number of points.

  • depth (np.ndarray) – depth corresponds to each of the pixel coords. shape is (n,)

Returns

(n, 3) 3d points (X, Y, Z) in world frame. shape is (n, 3) where n is the number of points.

Return type

np.ndarray
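
A minimal sketch pairing this with per-pixel depth (illustrative values):

import numpy as np

points_2d = np.array([[320.0, 240.0],
                      [100.0, 200.0]])  # pixel (u, v) coordinates, shape (2, 2)
depths = np.array([1.5, 2.0])  # one depth per pixel, in stage units, shape (2,)
points_3d = camera.get_world_points_from_image_coords(points_2d, depths)  # shape (2, 3)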

get_world_pose(camera_axes: str = 'world') Tuple[numpy.ndarray, numpy.ndarray]

Gets prim’s pose with respect to the world’s frame (always at [0, 0, 0] with unit quaternion; not to be confused with the /World prim).

Parameters

camera_axes (str, optional) – camera axes, world is (+Z up, +X forward), ros is (+Y up, +Z forward) and usd is (+Y up and -Z forward). Defaults to “world”.

Returns

first index is position in the world frame of the prim. shape is (3, ).

second index is quaternion orientation in the world frame of the prim. quaternion is scalar-first (w, x, y, z). shape is (4, ).

Return type

Tuple[np.ndarray, np.ndarray]

get_world_scale() numpy.ndarray

Get prim’s scale with respect to the world’s frame

Returns

scale applied to the prim’s dimensions in the world frame. shape is (3, ).

Return type

np.ndarray

Example:

>>> prim.get_world_scale()
[1. 1. 1.]
initialize(physics_sim_view=None) None

To be called before using this class, after a reset of the world.

Parameters

physics_sim_view (optional) – physics simulation view. Defaults to None.

is_paused() bool
Returns

True if data collection is paused, False otherwise.

Return type

bool

is_valid() bool

Check if the prim path has a valid USD Prim at it

Returns

True if the current prim path corresponds to a valid prim in the stage. False otherwise.

Return type

bool

Example:

>>> # given an existing and valid prim
>>> prim.is_valid()
True
is_visual_material_applied() bool

Check if there is a visual material applied

Returns

True if there is a visual material applied. False otherwise.

Return type

bool

Example:

>>> # given a visual material applied
>>> prim.is_visual_material_applied()
True
property name: Optional[str]

Returns: str: name given to the prim when instantiating it. Otherwise None.

property non_root_articulation_link: bool

Used to query if the prim is a non root articulation link

Returns

True if the prim itself is a non root link

Return type

bool

Example:

>>> # for a wrapped articulation (where the root prim has the Physics Articulation Root property applied)
>>> prim.non_root_articulation_link
False
pause() None

Pauses data collection and updating of the data frame.

post_reset() None

Reset the prim to its default state (position and orientation).

Note

For an articulation, in addition to configuring the root prim’s default position and spatial orientation (defined via the set_default_state method), the joint’s positions, velocities, and efforts (defined via the set_joints_default_state method) are imposed

Example:

>>> prim.post_reset()
property prim: pxr.Usd.Prim

Returns: Usd.Prim: USD Prim object that this object holds.

property prim_path: str

Returns: str: prim path in the stage

remove_bounding_box_2d_loose_from_frame() None
remove_bounding_box_2d_tight_from_frame() None
remove_bounding_box_3d_from_frame() None
remove_distance_to_camera_from_frame() None
remove_distance_to_image_plane_from_frame() None
remove_instance_id_segmentation_from_frame() None
remove_instance_segmentation_from_frame() None
remove_motion_vectors_from_frame() None
remove_normals_from_frame() None
remove_occlusion_from_frame() None
remove_pointcloud_from_frame() None
remove_semantic_segmentation_from_frame() None
resume() None

Resumes data collection and updating of the data frame.

set_clipping_range(near_distance: Optional[float] = None, far_distance: Optional[float] = None) None

Clips the view outside of both near and far range values.

Parameters
  • near_distance (Optional[float], optional) – value to be used for near clipping. Defaults to None.

  • far_distance (Optional[float], optional) – value to be used for far clipping. Defaults to None.

set_default_state(position: Optional[Sequence[float]] = None, orientation: Optional[Sequence[float]] = None) None

Set the default state of the prim (position and orientation), that will be used after each reset.

Parameters
  • position (Optional[Sequence[float]], optional) – position in the world frame of the prim. shape is (3, ). Defaults to None, which means left unchanged.

  • orientation (Optional[Sequence[float]], optional) – quaternion orientation in the world frame of the prim. quaternion is scalar-first (w, x, y, z). shape is (4, ). Defaults to None, which means left unchanged.

Example:

>>> # configure default state
>>> prim.set_default_state(position=np.array([1.0, 0.5, 0.0]), orientation=np.array([1, 0, 0, 0]))
>>>
>>> # set default states during post-reset
>>> prim.post_reset()
set_dt(value: float) None
Parameters

value (float) – sets the dt to acquire new data frames

set_fisheye_polynomial_properties(nominal_width: Optional[float], nominal_height: Optional[float], optical_centre_x: Optional[float], optical_centre_y: Optional[float], max_fov: Optional[float], polynomial: Optional[Sequence[float]]) None
Parameters
  • nominal_width (Optional[float]) – Rendered Width (pixels)

  • nominal_height (Optional[float]) – Rendered Height (pixels)

  • optical_centre_x (Optional[float]) – Horizontal Render Position (pixels)

  • optical_centre_y (Optional[float]) – Vertical Render Position (pixels)

  • max_fov (Optional[float]) – maximum field of view (pixels)

  • polynomial (Optional[Sequence[float]]) – polynomial equation coefficients (sequence of 5 numbers) starting from A0, A1, A2, A3, A4
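
An illustrative call (all values below are placeholders, not a real calibration):

camera.set_projection_type("fisheyePolynomial")  # polynomial properties apply to the fisheye projection
camera.set_fisheye_polynomial_properties(
    nominal_width=1280,
    nominal_height=720,
    optical_centre_x=640.0,
    optical_centre_y=360.0,
    max_fov=200.0,
    polynomial=[0.0, 1.0, 0.0, 0.0, 0.0],  # A0..A4 coefficients
)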

set_focal_length(value: float)
Parameters

value (float) – focal length (longer lens lengths give a narrower FOV; shorter lens lengths give a wider FOV)

set_focus_distance(value: float)

The distance at which perfect sharpness is achieved.

Parameters

value (float) – Distance from the camera to the focus plane (in stage units).

set_frequency(value: int) None
Parameters

value (int) – sets the frequency to acquire new data frames

set_horizontal_aperture(value: float) None
Parameters

value (float) – Emulates sensor/film width on a camera.

set_kannala_brandt_properties(nominal_width: float, nominal_height: float, optical_centre_x: float, optical_centre_y: float, max_fov: Optional[float], distortion_model: Sequence[float]) None

Approximates Kannala-Brandt distortion with ftheta fisheye polynomial coefficients.

Parameters
  • nominal_width (float) – Rendered Width (pixels)

  • nominal_height (float) – Rendered Height (pixels)

  • optical_centre_x (float) – Horizontal Render Position (pixels)

  • optical_centre_y (float) – Vertical Render Position (pixels)

  • max_fov (Optional[float]) – maximum field of view (pixels)

  • distortion_model (Sequence[float]) – Kannala-Brandt generic distortion model coefficients (k1, k2, k3, k4)

set_lens_aperture(value: float)
Controls distance blurring. Lower numbers decrease the focus range; larger numbers increase it.

Parameters

value (float) – controls lens aperture (i.e., focusing). 0 turns off focusing.

set_local_pose(translation: Optional[Sequence[float]] = None, orientation: Optional[Sequence[float]] = None, camera_axes: str = 'world') None

Sets prim’s pose with respect to the local frame (the prim’s parent frame in the world axes).

Parameters
  • translation (Optional[Sequence[float]], optional) – translation in the local frame of the prim (with respect to its parent prim). shape is (3, ). Defaults to None, which means left unchanged.

  • orientation (Optional[Sequence[float]], optional) – quaternion orientation in the local frame of the prim. quaternion is scalar-first (w, x, y, z). shape is (4, ). Defaults to None, which means left unchanged.

  • camera_axes (str, optional) – camera axes, world is (+Z up, +X forward), ros is (+Y up, +Z forward) and usd is (+Y up and -Z forward). Defaults to “world”.

set_local_scale(scale: Optional[Sequence[float]]) None

Set prim’s scale with respect to the local frame (the prim’s parent frame).

Parameters

scale (Optional[Sequence[float]]) – scale to be applied to the prim’s dimensions. shape is (3, ). Defaults to None, which means left unchanged.

Example:

>>> # scale prim 10 times smaller
>>> prim.set_local_scale(np.array([0.1, 0.1, 0.1]))
set_matching_fisheye_polynomial_properties(nominal_width: float, nominal_height: float, optical_centre_x: float, optical_centre_y: float, max_fov: Optional[float], distortion_model: Sequence[float], distortion_fn: Callable) None

Approximates the given distortion with ftheta fisheye polynomial coefficients.

Parameters
  • nominal_width (float) – Rendered Width (pixels)

  • nominal_height (float) – Rendered Height (pixels)

  • optical_centre_x (float) – Horizontal Render Position (pixels)

  • optical_centre_y (float) – Vertical Render Position (pixels)

  • max_fov (Optional[float]) – maximum field of view (pixels)

  • distortion_model (Sequence[float]) – distortion model coefficients

  • distortion_fn (Callable) – distortion function that takes points and returns distorted points

set_projection_mode(value: str) None

Sets camera to perspective or orthographic mode.

Parameters

value (str) – perspective or orthographic.

set_projection_type(value: str) None
Parameters

value (str) – pinhole (Standard Camera Projection, disables fisheye), fisheyeOrthographic (Full Frame using Orthographic Correction), fisheyeEquidistant (Full Frame using Equidistant Correction), fisheyeEquisolid (Full Frame using Equisolid Correction), fisheyePolynomial (360 Degree Spherical Projection), or fisheyeSpherical (360 Degree Full Frame Projection).

set_rational_polynomial_properties(nominal_width: float, nominal_height: float, optical_centre_x: float, optical_centre_y: float, max_fov: Optional[float], distortion_model: Sequence[float]) None

Approximates rational polynomial distortion with ftheta fisheye polynomial coefficients.

Parameters
  • nominal_width (float) – Rendered Width (pixels)

  • nominal_height (float) – Rendered Height (pixels)

  • optical_centre_x (float) – Horizontal Render Position (pixels)

  • optical_centre_y (float) – Vertical Render Position (pixels)

  • max_fov (Optional[float]) – maximum field of view (pixels)

  • distortion_model (Sequence[float]) – rational polynomial distortion model coefficients (k1, k2, p1, p2, k3, k4, k5, k6)

set_resolution(value: Tuple[int, int]) None
Parameters

value (Tuple[int, int]) – width and height respectively.

set_shutter_properties(delay_open: Optional[float] = None, delay_close: Optional[float] = None) None
Parameters
  • delay_open (Optional[float], optional) – Used with Motion Blur to control blur amount; increased values delay the shutter opening. Defaults to None.

  • delay_close (Optional[float], optional) – Used with Motion Blur to control blur amount; increased values bring the shutter close forward. Defaults to None.

set_stereo_role(value: str) None
Parameters

value (str) – mono, left or right.

set_vertical_aperture(value: float) None
Parameters

value (float) – Emulates sensor/film height on a camera.

set_visibility(visible: bool) None

Set the visibility of the prim in stage

Parameters

visible (bool) – flag to set the visibility of the usd prim in stage.

Example:

>>> # make prim not visible in the stage
>>> prim.set_visibility(visible=False)
set_world_pose(position: Optional[Sequence[float]] = None, orientation: Optional[Sequence[float]] = None, camera_axes: str = 'world') None

Sets prim’s pose with respect to the world’s frame (always at [0, 0, 0] and unity quaternion not to be confused with /World Prim).

Parameters
  • position (Optional[Sequence[float]], optional) – position in the world frame of the prim. shape is (3, ). Defaults to None, which means left unchanged.

  • orientation (Optional[Sequence[float]], optional) – quaternion orientation in the world frame of the prim. quaternion is scalar-first (w, x, y, z). shape is (4, ). Defaults to None, which means left unchanged.

  • camera_axes (str, optional) – camera axes, world is (+Z up, +X forward), ros is (+Y up, +Z forward) and usd is (+Y up and -Z forward). Defaults to “world”.

property supported_annotators: List[str]

Returns: List[str]: annotators supported by the camera

distort_point_kannala_brandt(camera_matrix, distortion_model, x, y)

This helper function distorts point(s) using the Kannala-Brandt fisheye model. It should be equivalent to the following reference implementation that uses OpenCV:

def distort_point_kannala_brandt2(camera_matrix, distortion_model, x, y):
    import cv2
    # unpack intrinsics and normalize pixel coordinates to the z=1 plane
    ((fx, _, cx), (_, fy, cy), (_, _, _)) = camera_matrix
    pt_x, pt_y, pt_z = (x - cx) / fx, (y - cy) / fy, np.full(x.shape, 1.0)
    points3d = np.stack((pt_x, pt_y, pt_z), axis=-1)
    rvecs, tvecs = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0])
    cameraMatrix, distCoeffs = np.array(camera_matrix), np.array(distortion_model)
    # project through the OpenCV fisheye (Kannala-Brandt) model
    points, jac = cv2.fisheye.projectPoints(np.expand_dims(points3d, 1), rvecs, tvecs, cameraMatrix, distCoeffs)
    return np.array([points[:, 0, 0], points[:, 0, 1]])

distort_point_rational_polynomial(camera_matrix, distortion_model, x, y)

This helper function distorts point(s) using the rational polynomial model. It should be equivalent to the following reference implementation that uses OpenCV:

def distort_point_rational_polynomial(camera_matrix, distortion_model, x, y):
    import cv2
    # unpack intrinsics and normalize pixel coordinates to the z=1 plane
    ((fx, _, cx), (_, fy, cy), (_, _, _)) = camera_matrix
    pt_x, pt_y, pt_z = (x - cx) / fx, (y - cy) / fy, np.full(x.shape, 1.0)
    points3d = np.stack((pt_x, pt_y, pt_z), axis=-1)
    rvecs, tvecs = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0])
    cameraMatrix, distCoeffs = np.array(camera_matrix), np.array(distortion_model)
    # project through the standard OpenCV (rational polynomial) model
    points, jac = cv2.projectPoints(points3d, rvecs, tvecs, cameraMatrix, distCoeffs)
    return np.array([points[:, 0, 0], points[:, 0, 1]])
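
An illustrative call (the camera matrix and coefficients below are placeholders, and the import path is an assumption about where the helper lives):

import numpy as np
from omni.isaac.sensor.scripts.camera import distort_point_rational_polynomial  # assumed import path

camera_matrix = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
distortion_model = [0.05, -0.01, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # (k1, k2, p1, p2, k3, k4, k5, k6)
u, v = distort_point_rational_polynomial(camera_matrix, distortion_model,
                                         np.array([300.0]), np.array([200.0]))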

get_all_camera_objects(root_prim: str = '/')

Retrieve omni.isaac.sensor Camera objects for each camera in the scene.

Parameters

root_prim (str) – Root prim where the world exists.

Returns

A list of omni.isaac.sensor Camera objects

Return type

Camera[]

point_to_theta(camera_matrix, x, y)

This helper function returns the theta angle of the point.
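
A sketch of the presumed computation (an assumption for illustration, not the extension's verbatim code): the pixel is normalized by the intrinsics, and theta is the angle of the resulting ray from the optical axis.

import numpy as np

def point_to_theta_sketch(camera_matrix, x, y):
    ((fx, _, cx), (_, fy, cy), (_, _, _)) = camera_matrix
    # radial distance of the normalized image point from the optical axis
    r = np.sqrt(((x - cx) / fx) ** 2 + ((y - cy) / fy) ** 2)
    return np.arctan(r)  # theta, in radians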

Contact Sensor Interface

This submodule provides an interface to a simulated contact sensor. A simplified command is provided to create a contact sensor in the stage:

Once the contact sensor is created, you must first acquire this interface, and then you can use it to access the contact sensor.

Also, the offset of the contact sensor is affected by the parent's transformations.

from omni.isaac.sensor import _sensor
_cs = _sensor.acquire_contact_sensor_interface()

Note

if the contact sensor is not initially created under a valid rigid body parent, it will not output any valid data, even if it is later attached to a valid rigid body parent.

Acquiring Extension Interface

To collect the most recent reading, call the interface get_sensor_reading(/path/to/sensor, use_latest_data=True). The result will be the most recent sensor reading.

reading = _cs.get_sensor_reading("/World/Cube/Contact_Sensor", use_latest_data=True)

To collect the reading at the last sensor measurement time based on the sensor period, call the interface get_sensor_reading(/path/to/sensor). This will give you the physics step data closest to the sensor measurement time.

reading = _cs.get_sensor_reading("/World/Cube/Contact_Sensor")

To collect the raw reading, call the interface get_contact_sensor_raw_data(/path/to/sensor). The result will be a list of raw contact data for that body.

raw_contact = _cs.get_contact_sensor_raw_data("/World/Cube/Contact_Sensor")
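
A sketch of consuming a reading (the field names assume a contact sensor reading struct with time, value, in_contact, and is_valid members):

if reading.is_valid:
    # value is the measured contact force at the reading's timestamp
    print(f"t={reading.time:.3f}s force={reading.value:.3f} in_contact={reading.in_contact}")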

Output Types

Interface Methods

IMU Sensor Interface

This submodule provides an interface to a simulated IMU sensor, which provides ground truth linear acceleration, angular velocity, and orientation data.

A simplified command is provided to create an IMU sensor:

Similarly, once an IMU sensor is created, you can use this interface to interact with the simulated IMU sensor. You must first call acquire_imu_sensor_interface.

from omni.isaac.sensor import _sensor
_is = _sensor.acquire_imu_sensor_interface()

Note

if the IMU sensor is not initially created under a valid rigid body parent, it will not output any valid data, even if it is later attached to a valid rigid body parent. Also, the offset and orientation of the IMU sensor are affected by the parent's transformations.

Acquiring Extension Interface

To collect the most recent reading, call the interface get_sensor_reading(/path/to/sensor, use_latest_data=True). The result will be the most recent sensor reading.

reading = _is.get_sensor_reading("/World/Cube/Imu_Sensor", use_latest_data=True)

To collect the reading at the last sensor measurement time based on the sensor period, call the interface get_sensor_reading(/path/to/sensor).

reading = _is.get_sensor_reading("/World/Cube/Imu_Sensor")

Since the sensor reading time usually falls between two physics steps, linear interpolation is used by default to get the reading at the sensor time between the physics steps. However, get_sensor_reading can also accept a custom function in the event that a different interpolation strategy is preferred.

from typing import List

# Input: list of past IsSensorReading structs and the time of the expected sensor reading
def interpolation_function(data: List[_sensor.IsSensorReading], time: float) -> _sensor.IsSensorReading:
    interpolated_reading = _sensor.IsSensorReading()
    # do interpolation
    return interpolated_reading

reading = _is.get_sensor_reading("/World/Cube/Imu_Sensor", interpolation_function=interpolation_function)

Note

The interpolation function will only be used if the sensor frequency is lower than the physics frequency and the use_latest_data flag is not enabled.
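
A sketch of consuming an IMU reading (the field names assume the IsSensorReading struct carries per-axis linear acceleration, per-axis angular velocity, an orientation quaternion, and an is_valid flag):

if reading.is_valid:
    lin_acc = (reading.lin_acc_x, reading.lin_acc_y, reading.lin_acc_z)
    ang_vel = (reading.ang_vel_x, reading.ang_vel_y, reading.ang_vel_z)
    print(reading.time, lin_acc, ang_vel, reading.orientation)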

Output Types

Interface Methods

Effort Sensor

The effort sensor is a Python class for reading ground truth joint effort measurements. The effort sensor can be created directly in Python using the path of the joint of interest.

from omni.isaac.sensor.scripts.effort_sensor import EffortSensor

sensor = EffortSensor(prim_path="/World/Robot/revolute_joint")

Note

If the sensor was created with an incorrect prim path, simply delete the sensor and recreate it. If the measured joint needs to be changed and the new joint has the same parent, the update_dof_name(dof_name: str) function may be used.

Acquiring Sensor data

To collect the most recent reading, call the interface get_sensor_reading(use_latest_data=True). The result will be the most recent sensor reading.

reading = sensor.get_sensor_reading(use_latest_data=True)

To collect the reading at the last sensor measurement time based on the sensor period, call the interface get_sensor_reading().

reading = sensor.get_sensor_reading()

Since the sensor reading time usually falls between two physics steps, linear interpolation is used by default to get the reading at the sensor time between the physics steps. However, get_sensor_reading can also accept a custom function in the event that a different interpolation strategy is preferred.

from omni.isaac.sensor.scripts.effort_sensor import EsSensorReading

# Input: list of past EsSensorReading structs and the time of the expected sensor reading
def interpolation_function(data, time):
    interpolated_reading = EsSensorReading()
    # do interpolation
    return interpolated_reading

reading = sensor.get_sensor_reading(interpolation_function=interpolation_function)

Note

The interpolation function will only be used if the sensor frequency is lower than the physics frequency and use_latest_data flag is not enabled.

Output Types

EsSensorReading

  • time (float): The time of the sensor reading.

  • value (float): The measured effort on the joint.

  • is_valid (boolean): The validity of the sensor measurement.
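
Given these fields, a minimal sketch of consuming a reading:

if reading.is_valid:
    print(f"t={reading.time:.3f}s joint effort={reading.value:.4f}")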

LightBeam Sensor Interface

This submodule provides an interface to a simulated LightBeam sensor, which provides the linear depth and hit position of each raycast.

A simplified command is provided to create a LightBeam sensor:

Similarly, once a LightBeam sensor is created, you can use this interface to interact with the simulated LightBeam sensor. You must first call acquire_lightbeam_sensor_interface.

from omni.isaac.sensor import _sensor
_ls = _sensor.acquire_lightbeam_sensor_interface()

Acquiring Extension Interface

To collect the most recent data, call the following interface methods:

# This will return a vector of uint8_t (0 false, 1 true)
beam_hit = _ls.get_beam_hit_data("/World/LightBeam_Sensor")
# This will return the number of rays in the light curtain
num_rays = _ls.get_num_rays("/World/LightBeam_Sensor")
# This will return a vector of floats of the linear depth of each raycast
linear_depth_data = _ls.get_linear_depth_data("/World/LightBeam_Sensor")
# This will return a vector of xyz points of hit positions of each raycast
hit_pos_data = _ls.get_hit_pos_data("/World/LightBeam_Sensor")
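
A minimal sketch combining the calls above:

# report each interrupted beam with its measured depth and hit position
for i in range(num_rays):
    if beam_hit[i]:
        print(f"ray {i}: depth={linear_depth_data[i]:.3f}, hit position={hit_pos_data[i]}")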

Interface Methods

Omnigraph Nodes