3D Sensors

The environment representation that MoveIt! uses includes an octree representation of the robot's surroundings. This octree is represented using the octomap library and it is updated using so-called "octomap updater plugins". The intention is that if a robot has some sort of sensor that can produce 3D data, and that data can be incorporated in an octree representation, there will be a plugin for that type of sensor. Multiple plugins can be used simultaneously to update the maintained representation. A key operation that the octomap updaters must do is to filter the 3D data and exclude points that correspond to the robot itself from the octomap (robot "self filtering"). Furthermore, great care must be taken to use correct frames of reference for the data being incorporated.

Typical sensors produce point clouds or depth images, and we provide a plugin for each of these two cases by default. The point cloud updater is generic and can be used in most cases where 3D sensing is available. The depth image updater is more efficient for depth images in particular and performs much of its computation on a GPU.

Adding an RGB-D Sensor on the PR2
We will illustrate the steps for configuring a 3D sensor with MoveIt! using, as an example, the RGB-D sensor mounted on several PR2 robots.

Adding a sensor configuration file
To integrate a sensor, you will need to create a new configuration file that describes some properties of the sensor and how you would like to handle the incoming point clouds from the sensor.

Here is a complete configuration file for the RGB-D sensor on the PR2 using the PointCloud updater (save the complete text in a file named sensors_rgbd.yaml in your MoveIt! configuration directory):
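
A sketch of such a file, assuming the PointCloudOctomapUpdater plugin and a Kinect-style topic name; the exact topic depends on your sensor driver, and the numeric values are illustrative defaults to tune for your robot:

```yaml
sensors:
  - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
    # Example topic name; substitute the topic your sensor driver publishes
    point_cloud_topic: /head_mount_kinect/depth_registered/points
    max_range: 5.0
    point_subsample: 1
    padding_offset: 0.1
    padding_scale: 1.0
    filtered_cloud_topic: filtered_cloud
```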

Note the properties you will need to fill out for this sensor plugin:
 * sensor_plugin: The name of the plugin that should be loaded to perform the octomap updating
 * point_cloud_topic: The name of the topic on which point cloud data is being broadcast
 * max_range: The max range (in m) of the sensor
 * padding_offset: The padding offset to be considered for robot links and attached objects when self-filtering
 * padding_scale: The padding scale to be considered for robot links and attached objects when self-filtering
 * point_subsample: If the update process is too slow, points can be subsampled. A value above 1 makes the updater skip points instead of processing them.
 * filtered_cloud_topic: If this parameter is specified, the filtered cloud (without robot parts) is also republished. This makes things a little less efficient but can be useful for debugging.

Alternatively, we can use the DepthImageUpdater plugin:
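
A sketch of the corresponding file, assuming the DepthImageOctomapUpdater plugin; as above, the topic name is an example and the numeric values are illustrative starting points:

```yaml
sensors:
  - sensor_plugin: occupancy_map_monitor/DepthImageOctomapUpdater
    # Example topic name; substitute the depth image topic your driver publishes
    image_topic: /head_mount_kinect/depth_registered/image_raw
    queue_size: 5
    near_clipping_plane_distance: 0.3
    far_clipping_plane_distance: 5.0
    shadow_threshold: 0.2
    padding_scale: 4.0
    padding_offset: 0.03
    filtered_cloud_topic: filtered_cloud
```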

Note the properties you will need to fill out for this sensor plugin:
 * sensor_plugin: The name of the plugin that should be loaded to perform the octomap updating
 * image_topic: The name of the topic on which depth image data is being broadcast
 * queue_size: The queue size for the image transport subscriber
 * near_clipping_plane_distance: Minimum distance from the sensor for data to be considered valid
 * far_clipping_plane_distance: Maximum distance from the sensor for data to be considered valid
 * skip_vertical_pixels: The number of pixels to skip from the top and the bottom of the depth image. A value of 2 will skip the first 2 rows and last 2 rows.
 * skip_horizontal_pixels: The number of pixels to skip from the left and the right sides of the depth image. A value of 2 will skip the first 2 columns and last 2 columns.
 * shadow_threshold: When filtering robot parts out of the depth image, a point that appears to lie just behind a robot link is removed unless it is further away than the shadow threshold. This situation arises when padding and scaling are used: padding a robot link can make points that are in fact visible appear to lie behind the robot. Whether to remove or keep such points must then be decided, and this is currently done by looking at the distance from the sensor. Note that this is a limitation of how the data is processed; it is sometimes possible that points which are in fact on the robot are kept.
 * padding_scale: The scale to use when padding the robot links and attached bodies. This is in pixels and the actual values used will vary with respect to the distance from the sensor. Empirical tests are encouraged for setting this value.
 * padding_offset: The offset to use when padding the robot links and attached bodies. This is in pixels and the actual values used will vary with respect to the distance from the sensor. Empirical tests are encouraged for setting this value.
 * filtered_cloud_topic: If this parameter is specified, the filtered depth image (without robot parts) is also republished. This makes things a little less efficient but can be useful for debugging.

Update the pr2_moveit_sensor_manager.launch file
You will now need to update the pr2_moveit_sensor_manager.launch file in the "launch" directory of your MoveIt! configuration directory with this sensor information (this file was auto-generated by the Setup Assistant but was empty). You will need to add the following lines to that file to configure (a) the octomap representation and (b) the set of sensor sources for MoveIt! to use:
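
A sketch of those lines, assuming your configuration package is named pr2_moveit_config and the sensor configuration file from above is named sensors_rgbd.yaml; odom_combined is the PR2's fixed odometry frame, and the resolution and range values are illustrative:

```xml
<launch>
  <!-- (b) Load the sensor source definitions (package and file names are assumptions) -->
  <rosparam command="load" file="$(find pr2_moveit_config)/config/sensors_rgbd.yaml" />
  <!-- (a) Octomap representation parameters -->
  <param name="octomap_frame" type="string" value="odom_combined" />
  <param name="octomap_resolution" type="double" value="0.025" />
  <!-- Maximum range (in m) at which sensor data is incorporated -->
  <param name="max_range" type="double" value="5.0" />
</launch>
```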

The "Octomap" parameters above are configuration parameters for this representation:
 * octomap_frame specifies the coordinate frame in which this representation will be stored. If you are working with a mobile robot, this frame should be a fixed frame in the world.
 * octomap_resolution specifies the resolution at which this representation is maintained (in meters).