pydrake.systems.sensors

Bindings for the sensors portion of the Systems framework.

pydrake.systems.sensors.ApplyCameraConfig(config: pydrake.systems.sensors.CameraConfig, builder: pydrake.systems.framework.DiagramBuilder, lcm_buses: pydrake.systems.lcm.LcmBuses = None, plant: drake::multibody::MultibodyPlant<double> = None, scene_graph: pydrake.geometry.SceneGraph = None, lcm: pydrake.lcm.DrakeLcmInterface = None) None

Constructs a simulated camera sensor (RGBD sensor and publishing systems) within builder. As specified, the RGB, depth, and/or label images from the camera are published via LCM on the channel DRAKE_RGBD_CAMERA_IMAGES_{camera_config.name}.

Parameter config:

The camera configuration.

Parameter builder:

The diagram to add sensor and publishing systems into.

Parameter lcm_buses:

(Optional) The available LCM buses to use for camera message publication. When not provided, uses the lcm interface if provided; otherwise, config.lcm_bus must be set to “default”, in which case an appropriate drake::lcm::DrakeLcm object is constructed and used internally.

Parameter plant:

(Optional) The MultibodyPlant to use for kinematics. In the common case where a MultibodyPlant has already been added to builder using either AddMultibodyPlant() or AddMultibodyPlantSceneGraph(), the default value (nullptr) here is suitable and generally should be preferred. When provided, it must be a System that’s been added to the given builder. When not provided, uses the system named “plant” in the given builder.

Parameter scene_graph:

(Optional) The SceneGraph to use for rendering. In the common case where a SceneGraph has already been added to builder using either AddMultibodyPlant() or AddMultibodyPlantSceneGraph(), the default value (nullptr) here is suitable and generally should be preferred. When provided, it must be a System that’s been added to the given builder. When not provided, uses the system named “scene_graph” in the given builder.

Parameter lcm:

(Optional) The LCM interface used for visualization message publication. When not provided, uses the config.lcm_bus value to look up the appropriate interface from lcm_buses.

Raises

RuntimeError if camera_config contains invalid values.

Precondition:

The builder is non-null.

Precondition:

Either the config.lcm_bus is set to “default”, or else lcm_buses is non-null and contains a bus named config.lcm_bus, or else lcm is non-null.

Precondition:

Either the given builder contains a MultibodyPlant system named “plant” or else the provided plant is non-null.

Precondition:

Either the given builder contains a SceneGraph system named “scene_graph” or else the provided scene_graph is non-null.

See also

drake::multibody::AddMultibodyPlant()

See also

drake::systems::lcm::ApplyLcmBusConfig()

class pydrake.systems.sensors.CameraConfig

Configuration of a camera. This covers all of the parameters for both color (see geometry::render::ColorRenderCamera) and depth (see geometry::render::DepthRenderCamera) cameras.

The various properties have restrictions on what they can be.

  • Values must be finite.

  • Some values must be positive (see notes on individual properties).

  • Ranges must be specified such that the “minimum” value is less than or equal to the “maximum” value. This includes the clipping range [clipping_near, clipping_far] and depth range [z_near, z_far].

  • The depth range must lie entirely within the clipping range.

The values are only checked when the configuration is operated on: during serialization, after deserialization, and when applying the configuration (see ApplyCameraConfig()).
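For reference, these fields can be written down in YAML; a hypothetical fragment using a few of the fields documented below (all values, including the renderer_name, are illustrative):

```yaml
name: camera0
width: 640
height: 480
fps: 10.0
rgb: true
depth: false
label: false
renderer_name: my_renderer
```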

__init__(self: pydrake.systems.sensors.CameraConfig, **kwargs) None
property background

The “background” color. This is the color drawn where there are no objects visible. Its default value matches the default value for render::RenderEngineVtkParams::default_clear_color. See the documentation for geometry::Rgba::Serialize for how to define this value in YAML.

This value is used only if the renderer_class specifies either "RenderEngineVtk" or "RenderEngineGl" by name (RenderEngineGltfClient doesn’t have a configurable background color).

property capture_offset

Phase offset (in seconds) for image capture, relative to the simulator’s time zero. This can be useful to stagger multiple cameras. Refer to the RgbdSensorAsync class for a comprehensive description.

Precondition:

capture_offset is non-negative and finite.

property center_x

The x-position of the principal point (in pixels). To query what the current value is, use principal_point().

Precondition:

0 < center_x < width or is std::nullopt.

property center_y

The y-position of the principal point (in pixels). To query what the current value is, use principal_point().

Precondition:

0 < center_y < height or is std::nullopt.

property clipping_far

The distance (in meters) from sensor origin to far clipping plane.

Precondition:

clipping_far is a positive, finite number.

property clipping_near

The distance (in meters) from sensor origin to near clipping plane.

Precondition:

clipping_near is a positive, finite number.

property depth

If true, depth images will be produced and published via LCM.

property do_compress

Controls whether the images are broadcast in a compressed format.

property focal

The focal properties of the camera. It can be specified in one of two ways:

  • A FocalLength which specifies the focal lengths (in pixels) in the x- and y-directions.

  • A FovDegrees which defines focal length implicitly by deriving it from field of view measures (in degrees).

For both specifications, if only one value is given (in either direction), the focal length (as reported by focal_x() and focal_y()) is determined by that single value and is assumed to be symmetric for both directions.

Precondition:

Each specified focal length is a positive, finite number.

focal_x(self: pydrake.systems.sensors.CameraConfig) float

Returns the focal length (in pixels) in the x-direction.

focal_y(self: pydrake.systems.sensors.CameraConfig) float

Returns the focal length (in pixels) in the y-direction.

class FocalLength

Specification of a camera’s intrinsic focal properties as focal length (in pixels). One or both values can be given. When only one value is given, the other is assumed to match. At least one value must be provided.

__init__(self: pydrake.systems.sensors.CameraConfig.FocalLength, **kwargs) None
property x

If specified, the focal length along this axis; otherwise, use focal length in the y-direction.

property y

If specified, the focal length along this axis; otherwise, use focal length in the x-direction.

class FovDegrees

Specification of focal length via fields of view (in degrees).

__init__(self: pydrake.systems.sensors.CameraConfig.FovDegrees, **kwargs) None
property x

If specified, the focal length along this axis is computed from this field of view; otherwise, the focal length computed for y is used.

property y

If specified, the focal length along this axis is computed from this field of view; otherwise, the focal length computed for x is used.

property fps

Publishing rate (in Hz) for both RGB and depth cameras (as requested).

Precondition:

fps is a positive, finite number.

property height

Image height (in pixels).

Precondition:

height > 0.

property label

If true, label images will be produced and published via LCM.

property lcm_bus

Which LCM URL to use.

See also

drake::systems::lcm::LcmBuses

MakeCameras(self: pydrake.systems.sensors.CameraConfig) Tuple[pydrake.geometry.ColorRenderCamera, pydrake.geometry.DepthRenderCamera]

Creates color and depth camera data from this configuration.

Raises
  • RuntimeError if configuration values do not satisfy the documented prerequisites.

property name

The camera name. This name is used as part of the corresponding RgbdSensor system’s name and serves as a suffix of the LCM channel name.

Precondition:

name is not empty.

property output_delay

Delay (in seconds) between when the scene graph geometry is “captured” and when the output image is published. Refer to the RgbdSensorAsync class for a comprehensive description.

Precondition:

output_delay is non-negative and strictly less than 1/fps.
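The timing fields can be related with simple arithmetic. The sketch below assumes the k-th capture happens at capture_offset + k / fps, with publication output_delay seconds later; this is an assumed reading, and RgbdSensorAsync remains the authoritative description of the scheduling semantics:

```python
# Assumed semantics: the k-th image is captured at capture_offset + k / fps
# and published output_delay seconds after capture.
fps = 10.0
capture_offset = 0.005
output_delay = 0.02  # must be non-negative and strictly less than 1/fps

capture_times = [capture_offset + k / fps for k in range(3)]
publish_times = [t + output_delay for t in capture_times]
```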

principal_point(self: pydrake.systems.sensors.CameraConfig) numpy.ndarray[numpy.float64[2, 1]]

Returns the position of the principal point. This respects the semantics that undefined center_x and center_y place the principal point in the center of the image.

property renderer_class

The choice of render engine implementation to use. It can be specified simply by providing a string containing the class name of the RenderEngine to use (i.e., "RenderEngineVtk", "RenderEngineGl", or "RenderEngineGltfClient"). Or, it can be specified by providing parameters for one of those engines: RenderEngineVtkParams, RenderEngineGlParams, or RenderEngineGltfClientParams.

If a string containing the render engine class name is provided, the engine instantiated will use the default parameters – equivalent to passing the set of default parameters.

It is possible for multiple cameras to reference the same renderer_name but configure the renderer differently. This would be a configuration error. The following rules will help prevent problematic configurations:

  • Multiple cameras can reference the same value for renderer_name.

  • If multiple cameras reference the same value for renderer_name, only the first can use the engine parameters to specify it. Attempting to do so with a later camera will produce an error.

  • The later cameras can use the engine class name.

  • If a later camera names a different engine class, that will result in an error.

In YAML, it can be a bit trickier. Depending on how a collection of cameras is articulated, the concept of “first” may be unclear. In examples/hardware_sim/scenario.h the collection is a map. So, the camera configurations are not necessarily processed in the order they appear in the YAML file. Instead, the processing order depends on the mnemonic camera key. So, be aware, if you use a similar mapping, you may have to massage the key names to achieve the requisite processing order. Alternatively, a vector of CameraConfig in your own scenario file would guarantee that processing order is the same as file order.

We intend to relax these restrictions in time, allowing equivalent, redundant specifications of a render engine (and throwing only on inconsistent specifications).

Passing the empty string is equivalent to saying, “I don’t care”. If a render engine with that name has already been configured, the camera will use it (regardless of type). If the name doesn’t already exist, the default, slower, more portable, and more robust RenderEngineVtk will be instantiated. RenderEngineGl can be selected if you are on Ubuntu and need the improved performance (at the possible cost of lesser image fidelity). In YAML, omitting renderer_class from the camera specification is equivalent to “passing the empty string”.

Configuring in YAML

It isn’t always obvious what the proper spelling in YAML is. The following examples illustrate how the renderer_class field can be defined in YAML files.

1. Use the RenderEngineVtk with all default parameters - providing the class name as a string.

    renderer_class: RenderEngineVtk

2. Use the RenderEngineVtk with all default parameters - providing the engine parameters with default values.

    renderer_class: !RenderEngineVtkParams {}

3. Use the RenderEngineVtk with a customized clear color (black).

    renderer_class: !RenderEngineVtkParams
      default_clear_color: [0, 0, 0]

4. Use the RenderEngineGl with a customized clear color (black).

    renderer_class: !RenderEngineGlParams
      default_clear_color:
        rgba: [0, 0, 0, 1]

5. Use the RenderEngineGltfClient with fully specified properties.

    renderer_class: !RenderEngineGltfClientParams
      base_url: http://10.10.10.1
      render_endpoint: server
      verbose: true
      cleanup: false

6. Explicitly request Drake’s default render engine.

    renderer_class: ""

Things to note:

  • When providing the parameters for the engine, the declaration must begin with ! to announce it as a type (examples 2, 3, and 4).

  • A defaulted set of parameters must have a trailing {} (example 2).

  • Two engine parameter sets may have the same semantic parameter but spell it differently (default_clear_color in examples 3 and 4).

Precondition:

renderer_class is a string and either empty or one of "RenderEngineVtk", "RenderEngineGl", or "RenderEngineGltfClient", or, it is a parameter type of one of Drake’s RenderEngine implementations.

See also

drake::geometry::SceneGraph::GetRendererTypeName().

property renderer_name

The name of the geometry::render::RenderEngine that this camera uses. Generally, it is more efficient for multiple cameras to share a single render engine instance; they should, therefore, share a common renderer_name value.

Precondition:

renderer_name is not empty.

property rgb

If true, RGB images will be produced and published via LCM.

property show_rgb

Controls whether the rendered RGB and/or label images are displayed (in separate windows controlled by the thread in which the camera images are rendered). As both image types are rendered from ColorRenderCamera, it applies to both of them and depends on whether the RenderEngine instance supports it.

Note: This flag is intended for quick debug use during development instead of serving as an image viewer. Currently, there are known issues, e.g., flickering windows when multiple cameras share the same renderer or upside-down images if RenderEngineGl is set. See issue #18862 for the proposal to visualize images via Meldis.

property width

Image width (in pixels).

Precondition:

width > 0.

property X_BC

The pose of the color sensor relative to the camera body frame.

Precondition:

X_BC.base_frame is empty.

property X_BD

The pose of the depth sensor relative to the camera body frame.

Precondition:

X_BD.base_frame is empty.

property X_PB

The pose of the camera body relative to a parent frame P. If X_PB.base_frame is unspecified, then the world frame is assumed to be the parent frame.

Precondition:

X_PB.base_frame is empty or refers to a valid, unique frame.

property z_far

The distance (in meters) from sensor origin to farthest measurable plane.

Precondition:

z_far is a positive, finite number.

property z_near

The distance (in meters) from sensor origin to closest measurable plane.

Precondition:

z_near is a positive, finite number.

class pydrake.systems.sensors.CameraInfo

Simple class for characterizing the Drake camera model. The camera model is based on the pinhole *model*, which is related to but distinct from an actual pinhole *camera*. The former is a mathematical model for producing images; the latter is a physical object.

The camera info members are directly tied to the underlying model’s mathematical parameters. In order to understand the model parameters, we will provide a discussion of how cameras are treated in Drake with some common terminology (liberally borrowed from computer vision).

Pinhole camera model

(To get an exact definition of the terms used here, please refer to the glossary below.)

Intuitively, a camera produces images of an observed environment. The pinhole model serves as a camera for a virtually modeled environment. Its parameters are those required to determine where in the resultant image a virtual object appears. In essence, the parameters of the camera model define a mapping from points in 3D space to a point on the image (2D space).

The full discussion of this mapping can be found in the OpenCV documentation. Here, we’ll highlight one or two points as it relates to this struct.

The mapping from 3D to 2D is decomposed into two sets of properties: extrinsic and intrinsic. The extrinsic properties define the pose of the camera in the environment – specifically, given the camera frame C, it defines X_WC. Once a point Q is measured and expressed in the camera’s frame (i.e., p_CQ_C), the intrinsic matrix projects it into the 2D image. CameraInfo does *not* concern itself with the extrinsic properties (other classes are responsible for that – see RgbdSensor).

CameraInfo defines the parameters of the intrinsic projection. The projection can be captured by the camera or intrinsic matrix which essentially maps points in the camera frame C to the image plane. The camera looks along the positive Cz axis, and the Cy axis points down. The projection matrix is called A in the OpenCV documentation, but typically called K in computer vision literature:

        │ f_x   0   c_x │
    K = │  0   f_y  c_y │
        │  0    0    1  │

  • This matrix maps a point in the camera frame C to the projective space of the image (via homogeneous coordinates). The resulting image coordinate (u, v) is extracted by dividing out the homogeneous scale factor.

  • (c_x, c_y) defines the principal point.

  • (f_x, f_y) defines the model focal length.

In other words, for point Q in the world frame, its projected position in the 2D image (u_Q, v_Q) is calculated as:

      │ u_Q │
    s │ v_Q │ = K ⋅ X_CW ⋅ p_WQ
      │  1  │

Note: The expression on the right will generally produce a homogeneous coordinate vector of the form (s * u_Q, s * v_Q, s). The texture coordinate is defined as the first two measures when the third measure is 1. The magic of homogeneous coordinates allows us to simply factor out s.
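The projection above can be exercised numerically. In the sketch below, the intrinsic values are illustrative, and the point is given directly in the camera frame C (so the X_CW factor is the identity):

```python
import numpy as np

# Illustrative intrinsics for a 640x480 image.
f_x, f_y = 500.0, 500.0
c_x, c_y = 319.5, 239.5
K = np.array([[f_x, 0.0, c_x],
              [0.0, f_y, c_y],
              [0.0, 0.0, 1.0]])

# A point expressed in the camera frame C (Cz is the viewing direction).
p_CQ = np.array([0.2, -0.1, 2.0])

# Homogeneous projection; dividing out the scale factor s yields (u, v).
s_uvw = K @ p_CQ
u, v = s_uvw[0] / s_uvw[2], s_uvw[1] / s_uvw[2]
```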

Alignment of the camera frame with the image

When looking at the resulting image and reasoning about the camera that produced it, one can say that Cz points into the image, Cx is parallel with the image rows, pointing to the right, and Cy is parallel with the image columns, pointing down, leading to language such as: “X-right”, “Y-down”, and “Z-forward”.

Glossary

These terms are important to the discussion. Some refer to real world concepts and some to the model. The application domain of the term will be indicated and, where necessary, ambiguities will be resolved. The terms are ordered alphabetically for ease of reference.

  • aperture: the opening in a camera through which light passes. The origin of the camera frame Co is located at the aperture’s center.
    - In a physical camera it may contain a lens, and the size of the aperture affects optical artifacts such as depth of field.
    - In the pinhole model, there is no lens and the aperture is a single point.

  • focal length: a camera property that determines the field of view angle and the scale of objects in the image (large focal length -> small field of view angle).
    - In a physical camera, it is the distance (in meters) between the center of the lens and the plane at which all incoming, parallel light rays converge (assuming a radially symmetric lens).
    - In the pinhole model, (f_x, f_y) is described as “focal length”, but the units are in pixels and the interpretation is different. The relationship between (f_x, f_y) and the physical focal length F is f_i = F * s_i, where (s_x, s_y) are the number of pixels per meter in the x- and y-directions (of the image). Both values described as “focal length” have analogous effects on field of view and scale of objects in the image.

  • frame: (also “pose”) an origin point and a space-spanning basis in 2D/3D, as in multibody_frames_and_bodies.
    - Incidentally, the word “frame” is also used in conjunction with a single image in a video (such as a “video frame”). Drake doesn’t use this sense in discussing cameras (unless explicitly noted).

  • image: an array of measurements such that the measured values have spatial relationships based on where they are in the array.

  • image frame: the 2D frame embedded in 3D space spanning the image plane. The image frame’s x- and y-axes are parallel with Cx and Cy, respectively. Coordinates are expressed as the pair (u, v), and the camera’s image lies on the plane spanned by the frame’s basis. The center of the pixel in the first row and first column is at (u=0, v=0).

  • image plane: a plane in 3D which is perpendicular to the camera’s viewing direction. Conceptually, the image lies on this plane.
    - In a physical pinhole camera, the aperture is between the image plane and the environment being imaged.
    - In the pinhole model, the image plane lies between the environment and the aperture (i.e., in the positive Cz direction from Co).

  • imager: a sensor whose measurements are reported in images.

  • principal point: the projection of the camera origin, Co, on the image plane. Its value is measured from the image’s origin in pixels.

  • sensor: a measurement device.

  • viewing direction: the direction the camera is facing. Defined as being parallel with Cz.

__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: pydrake.systems.sensors.CameraInfo, width: int, height: int, fov_y: float) -> None

Constructs this instance from image size and vertical field of view. We assume the principal point is in the center of the image; (center_x, center_y) is equal to (width / 2.0 - 0.5, height / 2.0 - 0.5). We also assume the focal lengths focal_x and focal_y are identical (modeling a radially symmetric lens). The value is derived from field of view and image size as:

focal_x = focal_y = height * 0.5 / tan(0.5 * fov_y)

Parameter width:

The image width in pixels, must be positive.

Parameter height:

The image height in pixels, must be positive.

Parameter fov_y:

The vertical field of view in radians, must be positive and finite.

Raises
  • RuntimeError if the provided values don't satisfy the listed requirements.
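The defaults stated above can be checked with a little arithmetic (pure Python; the 640x480 image size and 45-degree field of view are illustrative):

```python
import math

width, height = 640, 480
fov_y = math.radians(45.0)  # vertical field of view, in radians

# Focal length derived as documented: height * 0.5 / tan(0.5 * fov_y).
focal = height * 0.5 / math.tan(0.5 * fov_y)

# The principal point defaults to the center of the image.
center_x = width / 2.0 - 0.5
center_y = height / 2.0 - 0.5
```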

  1. __init__(self: pydrake.systems.sensors.CameraInfo, width: int, height: int, intrinsic_matrix: numpy.ndarray[numpy.float64[3, 3]]) -> None

Constructs this instance by extracting focal_x, focal_y, center_x, and center_y from the provided intrinsic_matrix.

Parameter width:

The image width in pixels, must be positive.

Parameter height:

The image height in pixels, must be positive.

Parameter intrinsic_matrix:

The intrinsic matrix (K) as documented above. Where K is defined to be non-zero, the values must be finite and the focal length values must be positive.

Raises
  • RuntimeError if intrinsic_matrix is not of the form indicated above for the pinhole camera model (representing an affine / homogeneous transform) or the non-zero values don't meet the documented requirements.

  1. __init__(self: pydrake.systems.sensors.CameraInfo, width: int, height: int, focal_x: float, focal_y: float, center_x: float, center_y: float) -> None

Constructor that directly sets the image size, principal point, and focal lengths.

Parameter width:

The image width in pixels, must be positive.

Parameter height:

The image height in pixels, must be positive.

Parameter focal_x:

The model “focal length” x in pixels (see above), must be positive and finite.

Parameter focal_y:

The model “focal length” y in pixels (see above), must be positive and finite.

Parameter center_x:

The x coordinate of the principal point in pixels (see above), must lie in the range 0 < center_x < width.

Parameter center_y:

The y coordinate of the principal point in pixels (see above), must lie in the range 0 < center_y < height.

Raises
  • RuntimeError if the provided values don't satisfy the listed requirements.

center_x(self: pydrake.systems.sensors.CameraInfo) float

Returns the principal point’s x coordinate in pixels.

center_y(self: pydrake.systems.sensors.CameraInfo) float

Returns the principal point’s y coordinate in pixels.

focal_x(self: pydrake.systems.sensors.CameraInfo) float

Returns the focal length x in pixels.

focal_y(self: pydrake.systems.sensors.CameraInfo) float

Returns the focal length y in pixels.

fov_x(self: pydrake.systems.sensors.CameraInfo) float

Returns the field of view in the x-direction (in radians).

fov_y(self: pydrake.systems.sensors.CameraInfo) float

Returns the field of view in the y-direction (in radians).

height(self: pydrake.systems.sensors.CameraInfo) int

Returns the height of the image in pixels.

intrinsic_matrix(self: pydrake.systems.sensors.CameraInfo) numpy.ndarray[numpy.float64[3, 3]]

Returns the camera intrinsic matrix, K.

width(self: pydrake.systems.sensors.CameraInfo) int

Returns the width of the image in pixels.

pydrake.systems.sensors.ConvertDepth16UTo32F(input: pydrake.systems.sensors.Image[PixelType.kDepth16U], output: pydrake.systems.sensors.Image[PixelType.kDepth32F]) None
pydrake.systems.sensors.ConvertDepth32FTo16U(input: pydrake.systems.sensors.Image[PixelType.kDepth32F], output: pydrake.systems.sensors.Image[PixelType.kDepth16U]) None
template pydrake.systems.sensors.Image

Instantiations: Image[PixelType.kRgba8U], Image[PixelType.kRgb8U], Image[PixelType.kBgra8U], Image[PixelType.kBgr8U], Image[PixelType.kDepth16U], Image[PixelType.kDepth32F], Image[PixelType.kLabel16I], Image[PixelType.kGrey8U]

class pydrake.systems.sensors.Image[PixelType.kBgr8U]
__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: pydrake.systems.sensors.Image[PixelType.kBgr8U]) -> None

Constructs a zero-sized image.

  1. __init__(self: pydrake.systems.sensors.Image[PixelType.kBgr8U], width: int, height: int) -> None

Image size only constructor. Specifies a width and height for the image. The width and height can be either both zero or both strictly positive. All the channel values in all the pixels are initialized with zero.

  1. __init__(self: pydrake.systems.sensors.Image[PixelType.kBgr8U], width: int, height: int, initial_value: int) -> None

Image size and initial value constructor. Specifies a width, height and an initial value for all the channels in all the pixels. The width and height can be either both zero or both strictly positive.

at(self: pydrake.systems.sensors.Image[PixelType.kBgr8U], x: int, y: int) numpy.ndarray[numpy.uint8[m, 1], flags.writeable]

Access the pixel located at (x, y) in the image coordinate system, where x indexes the horizontal direction and y the vertical direction. To access each channel value of the pixel (x, y), you can do:

    ImageRgbaU8 image(640, 480, 255);
    uint8_t red = image.at(x, y)[0];
    uint8_t green = image.at(x, y)[1];
    uint8_t blue = image.at(x, y)[2];
    uint8_t alpha = image.at(x, y)[3];

property data
height(self: pydrake.systems.sensors.Image[PixelType.kBgr8U]) int

Returns the height of the image in pixels.

property mutable_data
resize(self: pydrake.systems.sensors.Image[PixelType.kBgr8U], arg0: int, arg1: int) None

Changes the width and height of the image. The new width and height can be either both zero or both strictly positive. All the values in the pixels become zero after resize.

property shape
size(self: pydrake.systems.sensors.Image[PixelType.kBgr8U]) int

Returns the number of pixels in the image multiplied by the number of channels per pixel.

Traits

alias of pydrake.systems.sensors.ImageTraits[PixelType.kBgr8U]

width(self: pydrake.systems.sensors.Image[PixelType.kBgr8U]) int

Returns the width of the image in pixels.

class pydrake.systems.sensors.Image[PixelType.kBgra8U]
__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: pydrake.systems.sensors.Image[PixelType.kBgra8U]) -> None

Constructs a zero-sized image.

  1. __init__(self: pydrake.systems.sensors.Image[PixelType.kBgra8U], width: int, height: int) -> None

Image size only constructor. Specifies a width and height for the image. The width and height can be either both zero or both strictly positive. All the channel values in all the pixels are initialized with zero.

  1. __init__(self: pydrake.systems.sensors.Image[PixelType.kBgra8U], width: int, height: int, initial_value: int) -> None

Image size and initial value constructor. Specifies a width, height and an initial value for all the channels in all the pixels. The width and height can be either both zero or both strictly positive.

at(self: pydrake.systems.sensors.Image[PixelType.kBgra8U], x: int, y: int) numpy.ndarray[numpy.uint8[m, 1], flags.writeable]

Access the pixel located at (x, y) in the image coordinate system, where x indexes the horizontal direction and y the vertical direction. To access each channel value of the pixel (x, y), you can do:

    ImageRgbaU8 image(640, 480, 255);
    uint8_t red = image.at(x, y)[0];
    uint8_t green = image.at(x, y)[1];
    uint8_t blue = image.at(x, y)[2];
    uint8_t alpha = image.at(x, y)[3];

property data
height(self: pydrake.systems.sensors.Image[PixelType.kBgra8U]) int

Returns the height of the image in pixels.

property mutable_data
resize(self: pydrake.systems.sensors.Image[PixelType.kBgra8U], arg0: int, arg1: int) None

Changes the width and height of the image. The new width and height can be either both zero or both strictly positive. All the values in the pixels become zero after resize.

property shape
size(self: pydrake.systems.sensors.Image[PixelType.kBgra8U]) int

Returns the number of pixels in the image multiplied by the number of channels per pixel.

Traits

alias of pydrake.systems.sensors.ImageTraits[PixelType.kBgra8U]

width(self: pydrake.systems.sensors.Image[PixelType.kBgra8U]) int

Returns the width of the image in pixels.

class pydrake.systems.sensors.Image[PixelType.kDepth16U]
__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: pydrake.systems.sensors.Image[PixelType.kDepth16U]) -> None

Constructs a zero-sized image.

  1. __init__(self: pydrake.systems.sensors.Image[PixelType.kDepth16U], width: int, height: int) -> None

Image size only constructor. Specifies a width and height for the image. The width and height can be either both zero or both strictly positive. All the channel values in all the pixels are initialized with zero.

  1. __init__(self: pydrake.systems.sensors.Image[PixelType.kDepth16U], width: int, height: int, initial_value: int) -> None

Image size and initial value constructor. Specifies a width, height and an initial value for all the channels in all the pixels. The width and height can be either both zero or both strictly positive.

at(self: pydrake.systems.sensors.Image[PixelType.kDepth16U], x: int, y: int) numpy.ndarray[numpy.uint16[m, 1], flags.writeable]

Access the pixel located at (x, y) in the image coordinate system, where x indexes the horizontal direction and y the vertical direction. To access each channel value of the pixel (x, y), you can do:

    ImageRgbaU8 image(640, 480, 255);
    uint8_t red = image.at(x, y)[0];
    uint8_t green = image.at(x, y)[1];
    uint8_t blue = image.at(x, y)[2];
    uint8_t alpha = image.at(x, y)[3];

property data
height(self: pydrake.systems.sensors.Image[PixelType.kDepth16U]) int

Returns the height of the image in pixels.

property mutable_data
resize(self: pydrake.systems.sensors.Image[PixelType.kDepth16U], arg0: int, arg1: int) None

Changes the width and height of the image. The new width and height can be either both zero or both strictly positive. All the values in the pixels become zero after resize.

property shape
size(self: pydrake.systems.sensors.Image[PixelType.kDepth16U]) int

Returns the number of pixels in the image multiplied by the number of channels per pixel.

Traits

alias of pydrake.systems.sensors.ImageTraits[PixelType.kDepth16U]

width(self: pydrake.systems.sensors.Image[PixelType.kDepth16U]) int

Returns the width of the image in pixels.

class pydrake.systems.sensors.Image[PixelType.kDepth32F]
__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: pydrake.systems.sensors.Image[PixelType.kDepth32F]) -> None

Constructs a zero-sized image.

  1. __init__(self: pydrake.systems.sensors.Image[PixelType.kDepth32F], width: int, height: int) -> None

Image size only constructor. Specifies a width and height for the image. The width and height can be either both zero or both strictly positive. All the channel values in all the pixels are initialized with zero.

  3. __init__(self: pydrake.systems.sensors.Image[PixelType.kDepth32F], width: int, height: int, initial_value: float) -> None

Image size and initial value constructor. Specifies a width, height and an initial value for all the channels in all the pixels. The width and height can be either both zero or both strictly positive.

at(self: pydrake.systems.sensors.Image[PixelType.kDepth32F], x: int, y: int) numpy.ndarray[numpy.float32[m, 1], flags.writeable]

Returns access to the pixel located at (x, y) in the image coordinate system, where x indexes the horizontal direction and y the vertical direction. To access each channel value of the pixel at (x, y), you can do:

ImageRgbaU8 image(640, 480, 255);
uint8_t red = image.at(x, y)[0];
uint8_t green = image.at(x, y)[1];
uint8_t blue = image.at(x, y)[2];
uint8_t alpha = image.at(x, y)[3];

property data
height(self: pydrake.systems.sensors.Image[PixelType.kDepth32F]) int

Returns the image height.

property mutable_data
resize(self: pydrake.systems.sensors.Image[PixelType.kDepth32F], arg0: int, arg1: int) None

Changes the width and height of the image. The new width and height can be either both zero or both strictly positive. All pixel values become zero after the resize.

property shape
size(self: pydrake.systems.sensors.Image[PixelType.kDepth32F]) int

Returns the number of pixels in the image multiplied by the number of channels per pixel.

Traits

alias of pydrake.systems.sensors.ImageTraits[PixelType.kDepth32F]

width(self: pydrake.systems.sensors.Image[PixelType.kDepth32F]) int

Returns the image width.

class pydrake.systems.sensors.Image[PixelType.kGrey8U]
__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: pydrake.systems.sensors.Image[PixelType.kGrey8U]) -> None

Constructs a zero-sized image.

  2. __init__(self: pydrake.systems.sensors.Image[PixelType.kGrey8U], width: int, height: int) -> None

Image size only constructor. Specifies a width and height for the image. The width and height can be either both zero or both strictly positive. All the channel values in all the pixels are initialized with zero.

  3. __init__(self: pydrake.systems.sensors.Image[PixelType.kGrey8U], width: int, height: int, initial_value: int) -> None

Image size and initial value constructor. Specifies a width, height and an initial value for all the channels in all the pixels. The width and height can be either both zero or both strictly positive.

at(self: pydrake.systems.sensors.Image[PixelType.kGrey8U], x: int, y: int) numpy.ndarray[numpy.uint8[m, 1], flags.writeable]

Returns access to the pixel located at (x, y) in the image coordinate system, where x indexes the horizontal direction and y the vertical direction. To access each channel value of the pixel at (x, y), you can do:

ImageRgbaU8 image(640, 480, 255);
uint8_t red = image.at(x, y)[0];
uint8_t green = image.at(x, y)[1];
uint8_t blue = image.at(x, y)[2];
uint8_t alpha = image.at(x, y)[3];

property data
height(self: pydrake.systems.sensors.Image[PixelType.kGrey8U]) int

Returns the image height.

property mutable_data
resize(self: pydrake.systems.sensors.Image[PixelType.kGrey8U], arg0: int, arg1: int) None

Changes the width and height of the image. The new width and height can be either both zero or both strictly positive. All pixel values become zero after the resize.

property shape
size(self: pydrake.systems.sensors.Image[PixelType.kGrey8U]) int

Returns the number of pixels in the image multiplied by the number of channels per pixel.

Traits

alias of pydrake.systems.sensors.ImageTraits[PixelType.kGrey8U]

width(self: pydrake.systems.sensors.Image[PixelType.kGrey8U]) int

Returns the image width.

class pydrake.systems.sensors.Image[PixelType.kLabel16I]
__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: pydrake.systems.sensors.Image[PixelType.kLabel16I]) -> None

Constructs a zero-sized image.

  2. __init__(self: pydrake.systems.sensors.Image[PixelType.kLabel16I], width: int, height: int) -> None

Image size only constructor. Specifies a width and height for the image. The width and height can be either both zero or both strictly positive. All the channel values in all the pixels are initialized with zero.

  3. __init__(self: pydrake.systems.sensors.Image[PixelType.kLabel16I], width: int, height: int, initial_value: int) -> None

Image size and initial value constructor. Specifies a width, height and an initial value for all the channels in all the pixels. The width and height can be either both zero or both strictly positive.

at(self: pydrake.systems.sensors.Image[PixelType.kLabel16I], x: int, y: int) numpy.ndarray[numpy.int16[m, 1], flags.writeable]

Returns access to the pixel located at (x, y) in the image coordinate system, where x indexes the horizontal direction and y the vertical direction. To access each channel value of the pixel at (x, y), you can do:

ImageRgbaU8 image(640, 480, 255);
uint8_t red = image.at(x, y)[0];
uint8_t green = image.at(x, y)[1];
uint8_t blue = image.at(x, y)[2];
uint8_t alpha = image.at(x, y)[3];

property data
height(self: pydrake.systems.sensors.Image[PixelType.kLabel16I]) int

Returns the image height.

property mutable_data
resize(self: pydrake.systems.sensors.Image[PixelType.kLabel16I], arg0: int, arg1: int) None

Changes the width and height of the image. The new width and height can be either both zero or both strictly positive. All pixel values become zero after the resize.

property shape
size(self: pydrake.systems.sensors.Image[PixelType.kLabel16I]) int

Returns the number of pixels in the image multiplied by the number of channels per pixel.

Traits

alias of pydrake.systems.sensors.ImageTraits[PixelType.kLabel16I]

width(self: pydrake.systems.sensors.Image[PixelType.kLabel16I]) int

Returns the image width.

class pydrake.systems.sensors.Image[PixelType.kRgb8U]
__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: pydrake.systems.sensors.Image[PixelType.kRgb8U]) -> None

Constructs a zero-sized image.

  2. __init__(self: pydrake.systems.sensors.Image[PixelType.kRgb8U], width: int, height: int) -> None

Image size only constructor. Specifies a width and height for the image. The width and height can be either both zero or both strictly positive. All the channel values in all the pixels are initialized with zero.

  3. __init__(self: pydrake.systems.sensors.Image[PixelType.kRgb8U], width: int, height: int, initial_value: int) -> None

Image size and initial value constructor. Specifies a width, height and an initial value for all the channels in all the pixels. The width and height can be either both zero or both strictly positive.

at(self: pydrake.systems.sensors.Image[PixelType.kRgb8U], x: int, y: int) numpy.ndarray[numpy.uint8[m, 1], flags.writeable]

Returns access to the pixel located at (x, y) in the image coordinate system, where x indexes the horizontal direction and y the vertical direction. To access each channel value of the pixel at (x, y), you can do:

ImageRgbaU8 image(640, 480, 255);
uint8_t red = image.at(x, y)[0];
uint8_t green = image.at(x, y)[1];
uint8_t blue = image.at(x, y)[2];
uint8_t alpha = image.at(x, y)[3];

property data
height(self: pydrake.systems.sensors.Image[PixelType.kRgb8U]) int

Returns the image height.

property mutable_data
resize(self: pydrake.systems.sensors.Image[PixelType.kRgb8U], arg0: int, arg1: int) None

Changes the width and height of the image. The new width and height can be either both zero or both strictly positive. All pixel values become zero after the resize.

property shape
size(self: pydrake.systems.sensors.Image[PixelType.kRgb8U]) int

Returns the number of pixels in the image multiplied by the number of channels per pixel.

Traits

alias of pydrake.systems.sensors.ImageTraits[PixelType.kRgb8U]

width(self: pydrake.systems.sensors.Image[PixelType.kRgb8U]) int

Returns the image width.

class pydrake.systems.sensors.Image[PixelType.kRgba8U]
__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: pydrake.systems.sensors.Image[PixelType.kRgba8U]) -> None

Constructs a zero-sized image.

  2. __init__(self: pydrake.systems.sensors.Image[PixelType.kRgba8U], width: int, height: int) -> None

Image size only constructor. Specifies a width and height for the image. The width and height can be either both zero or both strictly positive. All the channel values in all the pixels are initialized with zero.

  3. __init__(self: pydrake.systems.sensors.Image[PixelType.kRgba8U], width: int, height: int, initial_value: int) -> None

Image size and initial value constructor. Specifies a width, height and an initial value for all the channels in all the pixels. The width and height can be either both zero or both strictly positive.

at(self: pydrake.systems.sensors.Image[PixelType.kRgba8U], x: int, y: int) numpy.ndarray[numpy.uint8[m, 1], flags.writeable]

Returns access to the pixel located at (x, y) in the image coordinate system, where x indexes the horizontal direction and y the vertical direction. To access each channel value of the pixel at (x, y), you can do:

ImageRgbaU8 image(640, 480, 255);
uint8_t red = image.at(x, y)[0];
uint8_t green = image.at(x, y)[1];
uint8_t blue = image.at(x, y)[2];
uint8_t alpha = image.at(x, y)[3];

property data
height(self: pydrake.systems.sensors.Image[PixelType.kRgba8U]) int

Returns the image height.

property mutable_data
resize(self: pydrake.systems.sensors.Image[PixelType.kRgba8U], arg0: int, arg1: int) None

Changes the width and height of the image. The new width and height can be either both zero or both strictly positive. All pixel values become zero after the resize.

property shape
size(self: pydrake.systems.sensors.Image[PixelType.kRgba8U]) int

Returns the number of pixels in the image multiplied by the number of channels per pixel.

Traits

alias of pydrake.systems.sensors.ImageTraits[PixelType.kRgba8U]

width(self: pydrake.systems.sensors.Image[PixelType.kRgba8U]) int

Returns the image width.

pydrake.systems.sensors.ImageBgr8U

alias of pydrake.systems.sensors.Image[PixelType.kBgr8U]

pydrake.systems.sensors.ImageBgra8U

alias of pydrake.systems.sensors.Image[PixelType.kBgra8U]

pydrake.systems.sensors.ImageDepth16U

alias of pydrake.systems.sensors.Image[PixelType.kDepth16U]

pydrake.systems.sensors.ImageDepth32F

alias of pydrake.systems.sensors.Image[PixelType.kDepth32F]

class pydrake.systems.sensors.ImageFileFormat

Members:

kJpeg

kPng

kTiff

__init__(self: pydrake.systems.sensors.ImageFileFormat, value: int) None
kJpeg = <ImageFileFormat.kJpeg: 0>
kPng = <ImageFileFormat.kPng: 1>
kTiff = <ImageFileFormat.kTiff: 2>
property name
property value
pydrake.systems.sensors.ImageGrey8U

alias of pydrake.systems.sensors.Image[PixelType.kGrey8U]

class pydrake.systems.sensors.ImageIo

Utility functions for reading and writing images, from/to either files or memory buffers.

The only file formats supported are JPEG, PNG, and TIFF.

The only format that supports floating-point scalars (e.g., ImageDepth32F) is TIFF. Trying to load or save a floating-point image from/to a PNG or JPEG file will throw an exception.

__init__(self: pydrake.systems.sensors.ImageIo) None

Default constructor.

Load(*args, **kwargs)

Overloaded function.

  1. Load(self: pydrake.systems.sensors.ImageIo, path: os.PathLike, format: Optional[pydrake.systems.sensors.ImageFileFormat] = None) -> Union[pydrake.systems.sensors.Image[PixelType.kBgr8U], pydrake.systems.sensors.Image[PixelType.kBgra8U], pydrake.systems.sensors.Image[PixelType.kDepth16U], pydrake.systems.sensors.Image[PixelType.kDepth32F], pydrake.systems.sensors.Image[PixelType.kGrey8U], pydrake.systems.sensors.Image[PixelType.kRgb8U], pydrake.systems.sensors.Image[PixelType.kRgba8U], pydrake.systems.sensors.Image[PixelType.kLabel16I]]

Loads and returns an image from disk.

Parameter format:

(Optionally) establishes the required image file format. When set, images that are a different file format will throw an exception. When not set, the filename extension has no bearing on the result; only the actual file contents determine the file format.

Raises

RuntimeError for any kind of error loading the image file.

  2. Load(self: pydrake.systems.sensors.ImageIo, buffer: bytes, format: Optional[pydrake.systems.sensors.ImageFileFormat] = None) -> Union[pydrake.systems.sensors.Image[PixelType.kBgr8U], pydrake.systems.sensors.Image[PixelType.kBgra8U], pydrake.systems.sensors.Image[PixelType.kDepth16U], pydrake.systems.sensors.Image[PixelType.kDepth32F], pydrake.systems.sensors.Image[PixelType.kGrey8U], pydrake.systems.sensors.Image[PixelType.kRgb8U], pydrake.systems.sensors.Image[PixelType.kRgba8U], pydrake.systems.sensors.Image[PixelType.kLabel16I]]

Loads and returns an image from a memory buffer.

Parameter format:

(Optionally) establishes the required image file format. When set, images that are a different file format will throw an exception.

Raises

RuntimeError for any kind of error loading the image data.

LoadMetadata(*args, **kwargs)

Overloaded function.

  1. LoadMetadata(self: pydrake.systems.sensors.ImageIo, path: os.PathLike) -> Optional[pydrake.systems.sensors.ImageIo.Metadata]

Returns the metadata of the given image file, or nullopt if the metadata cannot be determined or is unsupported. The filename extension has no bearing on the result; only the actual file contents determine the file format.

  2. LoadMetadata(self: pydrake.systems.sensors.ImageIo, buffer: bytes) -> Optional[pydrake.systems.sensors.ImageIo.Metadata]

Returns the metadata of the given image buffer, or nullopt if the metadata cannot be determined or is unsupported.

class Metadata

Some characteristics of an image file. Note that Drake’s Image<> class can only express depth == 1.

__init__(self: pydrake.systems.sensors.ImageIo.Metadata, **kwargs) None
property channels
property depth
property format
property height
property scalar
property width
Save(*args, **kwargs)

Overloaded function.

  1. Save(self: pydrake.systems.sensors.ImageIo, image: Union[pydrake.systems.sensors.Image[PixelType.kBgr8U], pydrake.systems.sensors.Image[PixelType.kBgra8U], pydrake.systems.sensors.Image[PixelType.kDepth16U], pydrake.systems.sensors.Image[PixelType.kDepth32F], pydrake.systems.sensors.Image[PixelType.kGrey8U], pydrake.systems.sensors.Image[PixelType.kRgb8U], pydrake.systems.sensors.Image[PixelType.kRgba8U], pydrake.systems.sensors.Image[PixelType.kLabel16I]], path: os.PathLike, format: Optional[pydrake.systems.sensors.ImageFileFormat] = None) -> None

Saves an image to disk.

Parameter format:

(Optionally) chooses the image file format. When not set, the filename extension will determine the format and the extension must be a supported choice (i.e., .jpg, .jpeg, .png, .tif, or .tiff).

Raises

RuntimeError for any kind of error saving the image file.

  2. Save(self: pydrake.systems.sensors.ImageIo, image: Union[pydrake.systems.sensors.Image[PixelType.kBgr8U], pydrake.systems.sensors.Image[PixelType.kBgra8U], pydrake.systems.sensors.Image[PixelType.kDepth16U], pydrake.systems.sensors.Image[PixelType.kDepth32F], pydrake.systems.sensors.Image[PixelType.kGrey8U], pydrake.systems.sensors.Image[PixelType.kRgb8U], pydrake.systems.sensors.Image[PixelType.kRgba8U], pydrake.systems.sensors.Image[PixelType.kLabel16I]], format: pydrake.systems.sensors.ImageFileFormat) -> bytes

Saves an image to a new memory buffer, returning the buffer.

Raises

RuntimeError for any kind of error saving the image data.

pydrake.systems.sensors.ImageLabel16I

alias of pydrake.systems.sensors.Image[PixelType.kLabel16I]

pydrake.systems.sensors.ImageRgb8U

alias of pydrake.systems.sensors.Image[PixelType.kRgb8U]

pydrake.systems.sensors.ImageRgba8U

alias of pydrake.systems.sensors.Image[PixelType.kRgba8U]

class pydrake.systems.sensors.ImageToLcmImageArrayT

Bases: pydrake.systems.framework.LeafSystem

An ImageToLcmImageArrayT takes as input an ImageRgba8U, ImageDepth32F, and ImageLabel16I. This system outputs an AbstractValue containing a Value<lcmt_image_array> LCM message that defines an array of images (lcmt_image). This message can then be sent to other processes that subscribe to it using LcmPublisherSystem. Note that you should NOT assume any particular order of the images stored in lcmt_image_array; instead, check the semantics of each image with lcmt_image::pixel_format before using it.

(user assigned port name)→
...→
(user assigned port name)→
ImageToLcmImageArrayT
→ y0

Note

The output message’s header field seq is always zero.

__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: pydrake.systems.sensors.ImageToLcmImageArrayT, color_frame_name: str, depth_frame_name: str, label_frame_name: str, do_compress: bool = False) -> None

An ImageToLcmImageArrayT constructor. Declares three input ports – one color image, one depth image, and one label image.

Parameter color_frame_name:

The frame name used for color image.

Parameter depth_frame_name:

The frame name used for depth image.

Parameter label_frame_name:

The frame name used for label image.

Parameter do_compress:

When true, zlib compression will be performed. The default is false.

  2. __init__(self: pydrake.systems.sensors.ImageToLcmImageArrayT, do_compress: bool = False) -> None

Constructs an empty system with no input ports. After construction, use DeclareImageInputPort() to add inputs.

color_image_input_port(self: pydrake.systems.sensors.ImageToLcmImageArrayT) pydrake.systems.framework.InputPort

Returns the input port containing a color image. Note: Only valid if the color/depth/label constructor is used.

template DeclareImageInputPort

Instantiations: DeclareImageInputPort[PixelType.kRgba8U], DeclareImageInputPort[PixelType.kRgb8U], DeclareImageInputPort[PixelType.kBgra8U], DeclareImageInputPort[PixelType.kBgr8U], DeclareImageInputPort[PixelType.kDepth16U], DeclareImageInputPort[PixelType.kDepth32F], DeclareImageInputPort[PixelType.kLabel16I], DeclareImageInputPort[PixelType.kGrey8U]

DeclareImageInputPort[PixelType.kBgr8U](self: pydrake.systems.sensors.ImageToLcmImageArrayT, name: str) pydrake.systems.framework.InputPort
DeclareImageInputPort[PixelType.kBgra8U](self: pydrake.systems.sensors.ImageToLcmImageArrayT, name: str) pydrake.systems.framework.InputPort
DeclareImageInputPort[PixelType.kDepth16U](self: pydrake.systems.sensors.ImageToLcmImageArrayT, name: str) pydrake.systems.framework.InputPort
DeclareImageInputPort[PixelType.kDepth32F](self: pydrake.systems.sensors.ImageToLcmImageArrayT, name: str) pydrake.systems.framework.InputPort
DeclareImageInputPort[PixelType.kGrey8U](self: pydrake.systems.sensors.ImageToLcmImageArrayT, name: str) pydrake.systems.framework.InputPort
DeclareImageInputPort[PixelType.kLabel16I](self: pydrake.systems.sensors.ImageToLcmImageArrayT, name: str) pydrake.systems.framework.InputPort
DeclareImageInputPort[PixelType.kRgb8U](self: pydrake.systems.sensors.ImageToLcmImageArrayT, name: str) pydrake.systems.framework.InputPort
DeclareImageInputPort[PixelType.kRgba8U](self: pydrake.systems.sensors.ImageToLcmImageArrayT, name: str) pydrake.systems.framework.InputPort
depth_image_input_port(self: pydrake.systems.sensors.ImageToLcmImageArrayT) pydrake.systems.framework.InputPort

Returns the input port containing a depth image. Note: Only valid if the color/depth/label constructor is used.

image_array_t_msg_output_port(self: pydrake.systems.sensors.ImageToLcmImageArrayT) pydrake.systems.framework.OutputPort

Returns the abstract valued output port that contains a Value<lcmt_image_array>.

label_image_input_port(self: pydrake.systems.sensors.ImageToLcmImageArrayT) pydrake.systems.framework.InputPort

Returns the input port containing a label image. Note: Only valid if the color/depth/label constructor is used.

template pydrake.systems.sensors.ImageTraits

Instantiations: ImageTraits[PixelType.kRgba8U], ImageTraits[PixelType.kRgb8U], ImageTraits[PixelType.kBgra8U], ImageTraits[PixelType.kBgr8U], ImageTraits[PixelType.kDepth16U], ImageTraits[PixelType.kDepth32F], ImageTraits[PixelType.kLabel16I], ImageTraits[PixelType.kGrey8U]

class pydrake.systems.sensors.ImageTraits[PixelType.kBgr8U]
__init__(*args, **kwargs)
ChannelType

alias of numpy.uint8

kNumChannels = 3
kPixelFormat = <PixelFormat.kBgr: 1>
class pydrake.systems.sensors.ImageTraits[PixelType.kBgra8U]
__init__(*args, **kwargs)
ChannelType

alias of numpy.uint8

kNumChannels = 4
kPixelFormat = <PixelFormat.kBgra: 3>
class pydrake.systems.sensors.ImageTraits[PixelType.kDepth16U]
__init__(*args, **kwargs)
ChannelType

alias of numpy.uint16

kNumChannels = 1
kPixelFormat = <PixelFormat.kDepth: 5>
class pydrake.systems.sensors.ImageTraits[PixelType.kDepth32F]
__init__(*args, **kwargs)
ChannelType

alias of numpy.float32

kNumChannels = 1
kPixelFormat = <PixelFormat.kDepth: 5>
class pydrake.systems.sensors.ImageTraits[PixelType.kGrey8U]
__init__(*args, **kwargs)
ChannelType

alias of numpy.uint8

kNumChannels = 1
kPixelFormat = <PixelFormat.kGrey: 4>
class pydrake.systems.sensors.ImageTraits[PixelType.kLabel16I]
__init__(*args, **kwargs)
ChannelType

alias of numpy.int16

kNumChannels = 1
kPixelFormat = <PixelFormat.kLabel: 6>
class pydrake.systems.sensors.ImageTraits[PixelType.kRgb8U]
__init__(*args, **kwargs)
ChannelType

alias of numpy.uint8

kNumChannels = 3
kPixelFormat = <PixelFormat.kRgb: 0>
class pydrake.systems.sensors.ImageTraits[PixelType.kRgba8U]
__init__(*args, **kwargs)
ChannelType

alias of numpy.uint8

kNumChannels = 4
kPixelFormat = <PixelFormat.kRgba: 2>
class pydrake.systems.sensors.ImageWriter

Bases: pydrake.systems.framework.LeafSystem

A system for periodically writing images to the file system. The system also provides direct image writing via a forced publish event. The system does not have a fixed set of input ports; the system can have an arbitrary number of image input ports. Each input port is independently configured with respect to:

  • publish frequency,

  • write location (directory) and image name,

  • input image format (which, in turn, implies a file-system format),

  • port name (which needs to be unique across all input ports in this system), and

  • context time at which output starts.

By design, this system is intended to work with RgbdCamera, but can connect to any output port that provides images.

(user assigned port name)→
...→
(user assigned port name)→
ImageWriter

ImageWriter supports five specific types of images:

  • ImageRgba8U - typically a color image written to disk as .png images.

  • ImageDepth32F - typically a depth image, written to disk as .tiff images.

  • ImageLabel16I - typically a label image, written to disk as .png images.

  • ImageDepth16U - typically a depth image, written to disk as .png images.

  • ImageGrey8U - typically a grey scale image, written to disk as .png images.

Input ports are added to an ImageWriter via DeclareImageInputPort(). See that function’s documentation for elaboration on how to configure image output. It is important to note that every declared image input port must be connected; otherwise, attempting to write an image from that port will cause an error in the system.

If the user intends to write images directly instead of periodically, e.g., when running this system outside of Simulator::AdvanceTo, a call to ForcedPublish(system_context) will write all images from each input port simultaneously to disk. Note that one can invoke a forced publish on this system using the same context multiple times, resulting in multiple write operations, with each operation overwriting the same file(s).

__init__(self: pydrake.systems.sensors.ImageWriter) None

Constructs default instance with no image ports.

DeclareImageInputPort(self: pydrake.systems.sensors.ImageWriter, pixel_type: pydrake.systems.sensors.PixelType, port_name: str, file_name_format: str, publish_period: float, start_time: float) pydrake.systems.framework.InputPort

Declares and configures a new image input port. A port is configured by providing:

  • a unique port name,

  • an output file format string,

  • a publish period,

  • a start time, and

  • an image type.

Each port is evaluated independently, so that two ports on the same ImageWriter can write images to different locations at different frequencies, etc. If images are to be kept in sync (e.g., registered color and depth images), they should be given the same period and start time.

Specifying the times at which images are written

Given a positive publish period p, images will be written at times contained in the list of times: t = [0, 1⋅p, 2⋅p, ...]. The start time parameter determines what the first output time will be. Given a “start time” value tₛ, the frames will be written at: t = tₛ + [0, 1⋅p, 2⋅p, ...].

Specifying write location and output file names

When writing image data to disk, the location and name of the output files are controlled by a user-defined format string. The format string should be compatible with fmt::format(). ImageWriter provides several named format arguments that can be referenced in the format string:

  • port_name - The name of the port (see below).

  • image_type - One of color, depth, or label, depending on the image type requested.

  • time_double - The time (in seconds) stored in the context at the invocation of Publish(), represented as a double.

  • time_usec - The time (in microseconds) stored in the context at the invocation of Publish(), represented as a 64-bit integer.

  • time_msec - The time (in milliseconds) stored in the context at the invocation of Publish(), represented as an integer.

  • count - The number of write operations that have occurred from this port since system creation or the last invocation of ResetAllImageCounts() (the first write operation would get zero, the Nᵗʰ would get N - 1). This value increments every time a write operation occurs.

File names can then be specified as shown in the following examples (assuming the port was declared as a color image port, with a name of “my_port”, a period of 0.02 s (50 Hz), and a start time of 5 s).

  • /home/user/images/{port_name}/{time_usec} creates a sequence like:

  • /home/user/images/my_port/5000000.png

  • /home/user/images/my_port/5020000.png

  • /home/user/images/my_port/5040000.png

  • /home/user/images/{image_type}/{time_msec:05} creates a sequence like:

  • /home/user/images/color/05000.png

  • /home/user/images/color/05020.png

  • /home/user/images/color/05040.png

  • /home/user/{port_name}/my_image_{count:03}.txt creates a sequence like:

  • /home/user/my_port/my_image_000.txt.png

  • /home/user/my_port/my_image_001.txt.png

  • /home/user/my_port/my_image_002.txt.png

We call attention particularly to the following:

  • Note the zero-padding arguments in the second and third examples. Making use of zero-padding typically facilitates other processes.

  • If the file name format does not end with an appropriate extension (e.g., .png or .tiff), the extension will be added.

  • The directory specified in the format will be tested for validity (does it exist, is it a directory, can the program write to it). The full file name will not be validated. If it is invalid (e.g., too long, invalid characters, bad format substitution), images will silently not be created.

  • The third example uses the count flag: regardless of start time, the first file written is always numbered 0, the second 1, etc.

  • The directory can only depend on port_name and image_type. It cannot depend on values that change over time (e.g., time_double, count, etc.).
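The substitutions above can be previewed with Python's str.format, whose {name:05}-style zero padding matches the fmt::format syntax used for these simple fields (the values below are invented to mirror the examples):

```python
# Hypothetical substitution values mirroring the documentation examples.
names = dict(port_name="my_port", image_type="color",
             time_double=5.02, time_usec=5020000, time_msec=5020, count=1)

# {time_msec:05} zero-pads the millisecond timestamp to five digits.
template = "/home/user/images/{image_type}/{time_msec:05}"
assert template.format(**names) == "/home/user/images/color/05020"

# {count:03} zero-pads the per-port write counter to three digits.
counted = "/home/user/{port_name}/my_image_{count:03}"
assert counted.format(**names) == "/home/user/my_port/my_image_001"
```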

Parameter port_name:

The name of the port (must be unique among all image ports). This string is available in the format string as port_name.

Parameter file_name_format:

The fmt::format()-compatible string which defines the context-dependent file name to write the image to.

Parameter publish_period:

The period at which images read from this input port are written in calls to Publish().

Parameter start_time:

The minimum value for the context’s time at which images will be written in calls to Publish().

Parameter pixel_type:

The representation of the per-pixel data (see PixelType). Must be one of {PixelType::kRgba8U, PixelType::kDepth32F, PixelType::kLabel16I, PixelType::kDepth16U, or PixelType::kGrey8U}.

Raises

RuntimeError if (1) the directory encoded in file_name_format is not “valid” (see documentation above for the definition), (2) publish_period is not positive, (3) port_name is used by a previous input port, or (4) pixel_type is not supported.

ResetAllImageCounts(self: pydrake.systems.sensors.ImageWriter) None
class pydrake.systems.sensors.LcmImageArrayToImages

Bases: pydrake.systems.framework.LeafSystem

An LcmImageArrayToImages takes as input an AbstractValue containing a Value<lcmt_image_array> LCM message that defines an array of images (lcmt_image). The system has output ports for one color image as an ImageRgba8U, one depth image as an ImageDepth32F, and one label image as an ImageLabel16I (intended to be similar to the API of RgbdCamera).

image_array_t→
LcmImageArrayToImages
→ color_image
→ depth_image
→ label_image
__init__(self: pydrake.systems.sensors.LcmImageArrayToImages) None
color_image_output_port(self: pydrake.systems.sensors.LcmImageArrayToImages) pydrake.systems.framework.OutputPort

Returns the abstract valued output port that contains an RGBA image of the type ImageRgba8U. The image will be empty if no color image was received in the most recent message (so, for example, sending color and depth in different messages will not produce useful results).

depth_image_output_port(self: pydrake.systems.sensors.LcmImageArrayToImages) pydrake.systems.framework.OutputPort

Returns the abstract valued output port that contains an ImageDepth32F. The image will be empty if no depth image was received in the most recent message (so, for example, sending color and depth in different messages will not produce useful results).

image_array_t_input_port(self: pydrake.systems.sensors.LcmImageArrayToImages) pydrake.systems.framework.InputPort

Returns the abstract valued input port that expects a Value<lcmt_image_array>.

label_image_output_port(self: pydrake.systems.sensors.LcmImageArrayToImages) pydrake.systems.framework.OutputPort

Returns the abstract valued output port that contains an ImageLabel16I. The image will be empty if no label image was received in the most recent message (so, for example, sending color and label in different messages will not produce useful results).

class pydrake.systems.sensors.PixelFormat

Members:

kRgb

kBgr

kRgba

kBgra

kGrey

kDepth

kLabel

__init__(self: pydrake.systems.sensors.PixelFormat, value: int) None
kBgr = <PixelFormat.kBgr: 1>
kBgra = <PixelFormat.kBgra: 3>
kDepth = <PixelFormat.kDepth: 5>
kGrey = <PixelFormat.kGrey: 4>
kLabel = <PixelFormat.kLabel: 6>
kRgb = <PixelFormat.kRgb: 0>
kRgba = <PixelFormat.kRgba: 2>
property name
property value
class pydrake.systems.sensors.PixelScalar

Members:

k8U

k16I

k16U

k32F

__init__(self: pydrake.systems.sensors.PixelScalar, value: int) None
k16I = <PixelScalar.k16I: 1>
k16U = <PixelScalar.k16U: 2>
k32F = <PixelScalar.k32F: 3>
k8U = <PixelScalar.k8U: 0>
property name
property value
class pydrake.systems.sensors.PixelType

Members:

kRgba8U

kRgb8U

kBgra8U

kBgr8U

kDepth16U

kDepth32F

kLabel16I

kGrey8U

__init__(self: pydrake.systems.sensors.PixelType, value: int) None
kBgr8U = <PixelType.kBgr8U: 1>
kBgra8U = <PixelType.kBgra8U: 3>
kDepth16U = <PixelType.kDepth16U: 5>
kDepth32F = <PixelType.kDepth32F: 6>
kGrey8U = <PixelType.kGrey8U: 4>
kLabel16I = <PixelType.kLabel16I: 7>
kRgb8U = <PixelType.kRgb8U: 0>
kRgba8U = <PixelType.kRgba8U: 2>
property name
property value
class pydrake.systems.sensors.RgbdSensor

Bases: pydrake.systems.framework.LeafSystem

A meta-sensor that houses RGB, depth, and label cameras, producing their corresponding images based on the contents of the geometry::SceneGraph.

geometry_query→
RgbdSensor
→ color_image
→ depth_image_32f
→ depth_image_16u
→ label_image
→ body_pose_in_world
→ image_time

This system models a continuous sensor, where the output ports reflect the instantaneous images observed by the sensor. In contrast, a discrete (sample and hold) sensor model might be a more suitable match for a real-world camera; for that case, see RgbdSensorDiscrete or RgbdSensorAsync.

The following text uses terminology and conventions from CameraInfo. Please review its documentation.

This class uses the following frames:

  • W - world frame

  • C - color camera frame, used for both color and label cameras to guarantee perfect registration between color and label images.

  • D - depth camera frame

  • B - sensor body frame. Approximately, the frame of the “physical” sensor that contains the color, depth, and label cameras. The contained cameras are rigidly fixed to B and X_WB is what is used to pose the sensor in the world (or, alternatively, X_PB where P is some parent frame for which X_WP is known).

By default, frames B, C, and D are coincident and aligned. These can be changed using the camera_poses constructor parameter. Frames C and D are always rigidly affixed to the sensor body frame B. As documented in the camera_axes_in_image “CameraInfo documentation”, the color and depth cameras “look” in the positive Cz and Dz directions, respectively with the positive Cy and Dy directions pointing to the bottom of the image. If R_BC and R_BD are the identity rotation, we can apply the same reasoning to the body frame: look in the +Bz direction with the +By direction pointing down in the image. Only if the depth or color frames are re-oriented relative to the body does further reasoning need to be applied.

Output port image formats:

  • color_image: Four channels, each channel uint8_t, in the following order: red, green, blue, and alpha.

  • depth_image_32f: One channel, float, representing the Z value in D in meters. The values 0 and infinity are reserved for out-of-range depth returns (too close or too far, respectively, as defined by geometry::render::DepthRenderCamera “DepthRenderCamera”).

  • depth_image_16u: One channel, uint16_t, representing the Z value in D in millimeters. The values 0 and 65535 are reserved for out-of-range depth returns (too close or too far, respectively, as defined by geometry::render::DepthRenderCamera “DepthRenderCamera”). Additionally, 65535 will also be returned if the depth measurement exceeds the representation range of uint16_t. Thus, the maximum valid depth return is 65534 mm.

  • label_image: One channel, int16_t, whose value is a unique geometry::render::RenderLabel “RenderLabel” value aligned with the color camera frame. See geometry::render::RenderLabel “RenderLabel” for discussion of interpreting rendered labels.
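The two depth encodings above are related by a simple conversion. The following sketch (the helper name `depth_16u_to_32f` is illustrative, not part of pydrake) decodes a depth_image_16u data array into the meters-and-sentinels convention of depth_image_32f:

```python
import numpy as np

def depth_16u_to_32f(depth_mm: np.ndarray) -> np.ndarray:
    """Convert uint16 millimeter depths to float32 meters, mapping the
    documented sentinels: 0 (too close) stays 0.0 and 65535 (too far, or
    beyond the uint16 range) becomes infinity."""
    depth_m = depth_mm.astype(np.float32) / 1000.0
    depth_m[depth_mm == 0] = 0.0
    depth_m[depth_mm == 65535] = np.inf
    return depth_m

# 0 -> 0.0 (too close), 500 -> 0.5 m, 65534 -> 65.534 m (max valid),
# 65535 -> inf (too far).
image = np.array([[0, 500, 65534, 65535]], dtype=np.uint16)
print(depth_16u_to_32f(image))
```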

Note

These depth sensor measurements differ from those of range data used by laser range finders (like DepthSensor), where the depth value represents the distance from the sensor origin to the object’s surface.

__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: pydrake.systems.sensors.RgbdSensor, parent_id: pydrake.geometry.FrameId, X_PB: pydrake.math.RigidTransform, color_camera: pydrake.geometry.ColorRenderCamera, depth_camera: pydrake.geometry.DepthRenderCamera) -> None

Constructs an RgbdSensor with fully specified render camera models for both color/label and depth cameras.

  2. __init__(self: pydrake.systems.sensors.RgbdSensor, parent_id: pydrake.geometry.FrameId, X_PB: pydrake.math.RigidTransform, depth_camera: pydrake.geometry.DepthRenderCamera, show_window: bool = False) -> None

Constructs an RgbdSensor with a fully specified render camera model for the depth camera only. The color camera is inferred from the depth_camera; it shares the same geometry::render::RenderCameraCore and is configured to show the window based on the value of show_window.

body_pose_in_world_output_port(self: pydrake.systems.sensors.RgbdSensor) pydrake.systems.framework.OutputPort

Returns the abstract-valued output port (containing a RigidTransform) which reports the pose of the body in the world frame (X_WB).

color_camera_info(self: pydrake.systems.sensors.RgbdSensor) pydrake.systems.sensors.CameraInfo

Returns the intrinsics properties of the color camera model.

color_image_output_port(self: pydrake.systems.sensors.RgbdSensor) pydrake.systems.framework.OutputPort

Returns the abstract-valued output port that contains an ImageRgba8U.

depth_camera_info(self: pydrake.systems.sensors.RgbdSensor) pydrake.systems.sensors.CameraInfo

Returns the intrinsics properties of the depth camera model.

depth_image_16U_output_port(self: pydrake.systems.sensors.RgbdSensor) pydrake.systems.framework.OutputPort

Returns the abstract-valued output port that contains an ImageDepth16U.

depth_image_32F_output_port(self: pydrake.systems.sensors.RgbdSensor) pydrake.systems.framework.OutputPort

Returns the abstract-valued output port that contains an ImageDepth32F.

image_time_output_port(self: pydrake.systems.sensors.RgbdSensor) pydrake.systems.framework.OutputPort

Returns the vector-valued output port (with size == 1) that reports the current simulation time, in seconds. This is provided for consistency with RgbdSensorDiscrete and RgbdSensorAsync (where the image time is not always the current time).

label_image_output_port(self: pydrake.systems.sensors.RgbdSensor) pydrake.systems.framework.OutputPort

Returns the abstract-valued output port that contains an ImageLabel16I.

parent_frame_id(self: pydrake.systems.sensors.RgbdSensor) pydrake.geometry.FrameId

Returns the id of the frame to which the body is affixed.

query_object_input_port(self: pydrake.systems.sensors.RgbdSensor) pydrake.systems.framework.InputPort

Returns the geometry::QueryObject<double>-valued input port.

X_BC(self: pydrake.systems.sensors.RgbdSensor) pydrake.math.RigidTransform

Returns X_BC.

X_BD(self: pydrake.systems.sensors.RgbdSensor) pydrake.math.RigidTransform

Returns X_BD.

class pydrake.systems.sensors.RgbdSensorAsync

Bases: pydrake.systems.framework.LeafSystem

A sensor similar to RgbdSensorDiscrete but the rendering occurs on a background thread to offer improved performance.

Warning

This feature is considered to be experimental and may change or be removed at any time, without any deprecation notice ahead of time.

Warning

This system is intended for use only with the out-of-process glTF rendering engine (MakeRenderEngineGltfClient()). Other render engines (e.g., MakeRenderEngineVtk() or MakeRenderEngineGl()) are either not thread-safe or perform poorly when used here. We hope to add async support for those engines down the road (see #19437 for details).

geometry_query→
RgbdSensorAsync
→ color_image (optional)
→ depth_image_32f (optional)
→ depth_image_16u (optional)
→ label_image (optional)
→ body_pose_in_world
→ image_time

This sensor works by capturing the i’th state of the world at time t_capture = capture_offset + i / fps and outputting the corresponding rendered image at time t_output = t_capture + output_delay, i.e., t_output = capture_offset + i / fps + output_delay.

For example, with the following constructor settings:

  • capture_offset = 0 ms

  • output_delay = 50 ms

  • fps = 5 Hz (i.e., a 200 ms period)

We have the following timing schedule:

| Time   | Event   | Current output images    |
| :----: | :-----: | :----------------------- |
| 0 ms   | capture | empty (i.e., zero-sized) |
| 50 ms  | output  | the scene as of 0 ms     |
| 200 ms | capture | ^                        |
| 250 ms | output  | the scene as of 200 ms   |
| 400 ms | capture | ^                        |
| …      | …       | …                        |
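The timing rule reduces to simple arithmetic. This sketch (the `schedule` helper is illustrative, not a pydrake API) enumerates the capture and output events for the settings above:

```python
def schedule(capture_offset, output_delay, fps, num_frames):
    """The i'th capture samples the world at
    t_capture = capture_offset + i / fps, and the rendered images appear
    on the outputs at t_output = t_capture + output_delay."""
    events = []
    for i in range(num_frames):
        t_capture = capture_offset + i / fps
        events.append((t_capture, "capture"))
        events.append((t_capture + output_delay, "output"))
    return sorted(events)

# capture_offset = 0, output_delay = 50 ms, fps = 5 Hz, as in the table.
for t, event in schedule(0.0, 0.05, 5.0, 3):
    print(f"{t * 1000:6.0f} ms  {event}")
```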

The capture_offset is typically zero, but can be useful to stagger multiple cameras in the same scene to trigger at different times if desired (e.g., to space out rendering events for better performance).

The output_delay can be used to model delays of a physical camera (e.g., for firmware processing on the device or transmitting the image over a USB or network connection, etc.).

The image rendering between the capture event and the output event happens on a background thread, allowing simulation time to continue to advance during the rendering’s output_delay. This helps smooth over the runtime latency associated with rendering.

See also RgbdSensorDiscrete for a simpler (unthreaded) discrete sensor model, or RgbdSensor for a continuous model.

Warning

At the moment, this implementation cannot respond to changes to geometry shapes, textures, etc. after a simulation has started. The only thing it responds to is changes to geometry poses. If you change anything beyond poses, you must manually call Simulator::Initialize() to reset things before resuming the simulation, or else you’ll get an exception. This holds true for changes to both SceneGraph’s model geometry and the copy of the geometry in a scene graph Context.

__init__(self: pydrake.systems.sensors.RgbdSensorAsync, scene_graph: pydrake.geometry.SceneGraph, parent_id: pydrake.geometry.FrameId, X_PB: pydrake.math.RigidTransform, fps: float, capture_offset: float, output_delay: float, color_camera: Optional[pydrake.geometry.ColorRenderCamera], depth_camera: Optional[pydrake.geometry.DepthRenderCamera] = None, render_label_image: bool = False) None

Constructs a sensor with the given parameters. Refer to the class overview documentation for additional exposition and examples of the parameters.

Parameter scene_graph:

The SceneGraph to use for image rendering. This pointer is retained by the constructor so must remain valid for the lifetime of this object. After construction, this system’s geometry_query input port must be connected to the relevant output port of the given scene_graph.

Parameter parent_id:

The frame “P” to which the sensor body is affixed. See RgbdSensor for details.

Parameter X_PB:

The transform relating parent and sensor body. See RgbdSensor for details.

Parameter fps:

How often (frames per second) to sample the geometry_query input and render the associated images. Must be strictly positive and finite.

Parameter capture_offset:

The time of the first sample of geometry_query input. Typically zero. Must be non-negative and finite.

Parameter output_delay:

How long after the geometry_query input sample the output ports should change to reflect the new rendered image(s). Must be strictly positive and strictly less than 1/fps.

Parameter color_camera:

The properties for the color_image output port. When nullopt, there will be no color_image output port. At least one of color_camera or depth_camera must be provided.

Parameter depth_camera:

The properties for the depth_image_32f and depth_image_16u output ports. When nullopt, there will be no such output ports. At least one of color_camera or depth_camera must be provided.

Parameter render_label_image:

Whether to provide the label_image output port. May be set to true only when a color_camera has also been provided.

body_pose_in_world_output_port(self: pydrake.systems.sensors.RgbdSensorAsync) pydrake.systems.framework.OutputPort

Returns the abstract-valued output port (containing a RigidTransform) which reports the pose of the body in the world frame (X_WB).

capture_offset(self: pydrake.systems.sensors.RgbdSensorAsync) float

Returns the capture_offset passed to the constructor.

color_camera(self: pydrake.systems.sensors.RgbdSensorAsync) Optional[pydrake.geometry.ColorRenderCamera]

Returns the color_camera passed to the constructor.

color_image_output_port(self: pydrake.systems.sensors.RgbdSensorAsync) pydrake.systems.framework.OutputPort

Returns the abstract-valued output port that contains an ImageRgba8U.

Raises

RuntimeError when color output is not enabled.

depth_camera(self: pydrake.systems.sensors.RgbdSensorAsync) Optional[pydrake.geometry.DepthRenderCamera]

Returns the depth_camera passed to the constructor.

depth_image_16U_output_port(self: pydrake.systems.sensors.RgbdSensorAsync) pydrake.systems.framework.OutputPort

Returns the abstract-valued output port that contains an ImageDepth16U.

Raises

RuntimeError when depth output is not enabled.

depth_image_32F_output_port(self: pydrake.systems.sensors.RgbdSensorAsync) pydrake.systems.framework.OutputPort

Returns the abstract-valued output port that contains an ImageDepth32F.

Raises

RuntimeError when depth output is not enabled.

fps(self: pydrake.systems.sensors.RgbdSensorAsync) float

Returns the fps passed to the constructor.

image_time_output_port(self: pydrake.systems.sensors.RgbdSensorAsync) pydrake.systems.framework.OutputPort

Returns the vector-valued output port (with size == 1) which reports the simulation time when the image outputs were captured, in seconds. When there are no images available on the output ports (e.g., at the beginning of a simulation), the value will be NaN.

label_image_output_port(self: pydrake.systems.sensors.RgbdSensorAsync) pydrake.systems.framework.OutputPort

Returns the abstract-valued output port that contains an ImageLabel16I.

Raises

RuntimeError when label output is not enabled.

output_delay(self: pydrake.systems.sensors.RgbdSensorAsync) float

Returns the output_delay passed to the constructor.

parent_id(self: pydrake.systems.sensors.RgbdSensorAsync) pydrake.geometry.FrameId

Returns the parent_id passed to the constructor.

X_PB(self: pydrake.systems.sensors.RgbdSensorAsync) pydrake.math.RigidTransform

Returns the X_PB passed to the constructor.

class pydrake.systems.sensors.RgbdSensorDiscrete

Bases: pydrake.systems.framework.Diagram

Wraps a continuous RgbdSensor with a zero-order hold to create a discrete sensor.

geometry_query→
RgbdSensorDiscrete
→ color_image
→ depth_image_32f
→ depth_image_16u
→ label_image
→ body_pose_in_world
→ image_time

See also RgbdSensorAsync for a slightly different discrete sensor model.

Note

Be mindful that the discrete dynamics of a zero-order hold mean that all three image types (color, depth, label) are rendered at the given period, even if nothing ends up using the images on the output ports; this might be computationally wasteful. If you only need color and depth, be sure to pass render_label_image = false to the constructor. If you only need one image type, eschew this system in favor of adding your own zero-order hold on just the one RgbdSensor output port that you need.

__init__(self: pydrake.systems.sensors.RgbdSensorDiscrete, sensor: pydrake.systems.sensors.RgbdSensor, period: float = 0.03333333333333333, render_label_image: bool = True) None

Constructs a diagram containing a (non-registered) RgbdSensor that will update at a given rate.

Parameter sensor:

The continuous sensor used to generate periodic images.

Parameter period:

Update period (sec).

Parameter render_label_image:

If true, renders label image (which requires additional overhead). If false, label_image_output_port will raise an error if called.

body_pose_in_world_output_port(self: pydrake.systems.sensors.RgbdSensorDiscrete) pydrake.systems.framework.OutputPort

See also

RgbdSensor::body_pose_in_world_output_port().

color_image_output_port(self: pydrake.systems.sensors.RgbdSensorDiscrete) pydrake.systems.framework.OutputPort

See also

RgbdSensor::color_image_output_port().

depth_image_16U_output_port(self: pydrake.systems.sensors.RgbdSensorDiscrete) pydrake.systems.framework.OutputPort

See also

RgbdSensor::depth_image_16U_output_port().

depth_image_32F_output_port(self: pydrake.systems.sensors.RgbdSensorDiscrete) pydrake.systems.framework.OutputPort

See also

RgbdSensor::depth_image_32F_output_port().

image_time_output_port(self: pydrake.systems.sensors.RgbdSensorDiscrete) pydrake.systems.framework.OutputPort

Returns the vector-valued output port (with size == 1) which reports the simulation time when the image outputs were captured, in seconds (i.e., the time of the most recent zero-order hold discrete update).

kDefaultPeriod = 0.03333333333333333
label_image_output_port(self: pydrake.systems.sensors.RgbdSensorDiscrete) pydrake.systems.framework.OutputPort

See also

RgbdSensor::label_image_output_port().

period(self: pydrake.systems.sensors.RgbdSensorDiscrete) float

Returns the update period for the discrete camera.

query_object_input_port(self: pydrake.systems.sensors.RgbdSensorDiscrete) pydrake.systems.framework.InputPort

See also

RgbdSensor::query_object_input_port().

sensor(self: pydrake.systems.sensors.RgbdSensorDiscrete) pydrake.systems.sensors.RgbdSensor

Returns a reference to the underlying RgbdSensor object.