Environment Representation/Overview

The Planning Scene
The [/classplanning__scene_1_1PlanningScene.html PlanningScene] class is used in MoveIt! to represent the complete context necessary for motion planning. Among other things, this includes a world representation used for collision checking and a robot state. In this section, we provide an overview of this representation, exploring the different ways in which the world can be represented and the use of the PlanningScene class for collision checking and constraint evaluation.

The PlanningScene class ([/classplanning__scene_1_1PlanningScene.html Code API]) is the most useful entry point for accessing the following:
 * robot representation
 * environment representation
 * collision checking
 * constraint evaluation

Robot Representation
The PlanningScene maintains a full representation of the robot in two parts:
 * An instance of a [/classmoveit_1_1core_1_1RobotModel.html RobotModel], which includes the geometry of the robot and various static properties of the robot's components
 * An instance of a [/classmoveit_1_1core_1_1RobotState.html RobotState], which specifies the current state of the robot (joint values) for that planning scene. This does not need to be the real current state of the robot.

For a more detailed description of the RobotModel and RobotState classes, please see the Kinematics section.
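As a minimal sketch of how these two parts are accessed, assuming a configured MoveIt! setup with a robot_description parameter on the parameter server (node and variable names here are illustrative):

```cpp
#include <ros/ros.h>
#include <moveit/planning_scene/planning_scene.h>
#include <moveit/robot_model_loader/robot_model_loader.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "planning_scene_demo");

  // Load the RobotModel from the robot_description parameter
  robot_model_loader::RobotModelLoader robot_model_loader("robot_description");
  robot_model::RobotModelPtr kinematic_model = robot_model_loader.getModel();

  // Construct a PlanningScene around that model
  planning_scene::PlanningScene planning_scene(kinematic_model);

  // The RobotState held by the scene can be read and modified;
  // it does not need to match the state of the real robot
  robot_state::RobotState& current_state = planning_scene.getCurrentStateNonConst();
  current_state.setToDefaultValues();
  return 0;
}
```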

Environment Representation
The environment in the PlanningScene is represented by the [/classcollision__detection_1_1CollisionWorld.html CollisionWorld] class from the collision_detection library. The CollisionWorld class encapsulates the world as a set of objects, as defined in the geometric_shapes library. These include primitive shapes such as spheres, boxes, cylinders and cones, as well as more complex objects such as meshes and an octomap, represented using the Octomap library.

The PlanningScene provides an easy-to-use API to these representations that you will learn more about in the corresponding C++ tutorial. The API allows you to easily set or get primitive objects and Octomap representations using ROS message types, as well as geometric_shapes types.

The complete set of objects in a planning scene, as well as the robot, are maintained in one common reference frame known as the planning frame (this can be retrieved using the [/classplanning__scene_1_1PlanningScene.html planning_scene::getPlanningFrame] function). To allow easy use of other coordinate frames, including ones that do not correspond to robot links, a representation of additional transforms is included in the planning scene using the [/classmoveit_1_1core_1_1Transforms.html Transforms] class.
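For instance, the planning frame and the additional transforms can be queried as in the following sketch (assuming a robot model loaded from robot_description):

```cpp
#include <ros/ros.h>
#include <moveit/planning_scene/planning_scene.h>
#include <moveit/robot_model_loader/robot_model_loader.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "planning_frame_demo");
  robot_model_loader::RobotModelLoader loader("robot_description");
  planning_scene::PlanningScene planning_scene(loader.getModel());

  // The common frame in which all scene objects and the robot are maintained
  ROS_INFO_STREAM("Planning frame: " << planning_scene.getPlanningFrame());

  // Additional transforms, e.g. to frames that are not robot links
  const moveit::core::Transforms& transforms = planning_scene.getTransforms();
  ROS_INFO_STREAM("Transforms target frame: " << transforms.getTargetFrame());
  return 0;
}
```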

Collision Checking
Since the PlanningScene maintains both a representation of the robot and the environment, it offers the most convenient interface for collision checking. You can use this interface for checking self-collisions as well as collisions between the robot and the world. We will explore this interface in more detail in the C++ API tutorial. You can also get direct access to the CollisionWorld maintained by the PlanningScene if you need to.

 * A representation of collisions that are to be ignored during planning, using the [/classcollision__detection_1_1AllowedCollisionMatrix.html AllowedCollisionMatrix] class, is also included in the PlanningScene. This is often useful in manipulation tasks where the robot has to move into contact with the world.
 * The collision checking API allows for checking collisions against either a padded or an unpadded robot. Padding increases the size of individual links of the robot by a small amount; it is often used to keep the robot further away from obstacles while motion planning. Typically, the unpadded robot is used only for self-collision checks.
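The core of this interface can be sketched as follows, assuming a PlanningScene constructed from the robot_description parameter (this mirrors the MoveIt! C++ API; a ROS environment is required to run it):

```cpp
#include <ros/ros.h>
#include <moveit/planning_scene/planning_scene.h>
#include <moveit/robot_model_loader/robot_model_loader.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "collision_check_demo");
  robot_model_loader::RobotModelLoader loader("robot_description");
  planning_scene::PlanningScene planning_scene(loader.getModel());

  collision_detection::CollisionRequest request;
  collision_detection::CollisionResult result;

  // Self-collision check on the scene's current robot state
  planning_scene.checkSelfCollision(request, result);
  ROS_INFO_STREAM("Self collision: " << (result.collision ? "yes" : "no"));

  // Full check: self-collisions plus collisions with the world,
  // honoring the AllowedCollisionMatrix
  result.clear();
  planning_scene.checkCollision(request, result);
  ROS_INFO_STREAM("Collision: " << (result.collision ? "yes" : "no"));
  return 0;
}
```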

Constraints
The PlanningScene also allows for checking configurations of the robot against constraints. Two types of constraints are supported:
 * A set of kinematic constraints from the "kinematic_constraints" package. This includes joint constraints, position constraints, orientation constraints and visibility constraints.
 * A user-specified set of constraints, e.g. checking the stability of a humanoid robot. This is implemented using a callback function through which the user can perform any type of check. Checks can be performed on individual configurations of the robot as well as on motion segments.
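Both kinds of checks can be sketched as below; the link name "some_link", the identity pose and the always-true feasibility callback are placeholders for illustration only:

```cpp
#include <ros/ros.h>
#include <moveit/planning_scene/planning_scene.h>
#include <moveit/robot_model_loader/robot_model_loader.h>
#include <moveit/kinematic_constraints/utils.h>
#include <geometry_msgs/PoseStamped.h>

// User-specified feasibility check; could, e.g., test the balance of a
// humanoid robot. Here it simply accepts every state.
bool stateFeasibilityFn(const robot_state::RobotState& /*state*/, bool /*verbose*/)
{
  return true;
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "constraint_check_demo");
  robot_model_loader::RobotModelLoader loader("robot_description");
  planning_scene::PlanningScene planning_scene(loader.getModel());

  // Kinematic constraints: e.g. a pose constraint on a link
  geometry_msgs::PoseStamped pose;
  pose.header.frame_id = planning_scene.getPlanningFrame();
  pose.pose.orientation.w = 1.0;  // identity orientation (placeholder pose)
  moveit_msgs::Constraints constraints =
      kinematic_constraints::constructGoalConstraints("some_link", pose);

  bool constrained =
      planning_scene.isStateConstrained(planning_scene.getCurrentState(), constraints);

  // Register the user-specified callback and evaluate it
  planning_scene.setStateFeasibilityPredicate(stateFeasibilityFn);
  bool feasible = planning_scene.isStateFeasible(planning_scene.getCurrentState());

  ROS_INFO_STREAM("Constrained: " << constrained << ", feasible: " << feasible);
  return 0;
}
```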

C++ API Tutorial
The C++ API to the planning scene is explored in this tutorial.

The Planning Scene Monitor
The [/classplanning__scene_1_1PlanningScene.html PlanningScene] class is the main representation of the environment for collision checking, constraint checking and motion planning. The recommended method for building a PlanningScene is through the use of the [/classplanning__scene__monitor_1_1PlanningSceneMonitor.html PlanningSceneMonitor]. The PlanningSceneMonitor helps in building a representation of the world in multiple ways:
 * By listening to the current state of the robot and storing it in the planning scene
 * Using sensor information to build a geometric model of the world for the motion planners and other components to operate with
 * Listening on ROS topics for additional geometric information that may be specified by users: e.g. users can directly add or remove mesh models from the environment

The PlanningSceneMonitor thus serves as a bridge between a simulated or real robot and the motion planners, filling in all the information that the motion planners will need.

High-level Diagram

This high-level diagram provides an overview of the planning scene architecture.

Current State Information
The planning scene monitor subscribes to multiple ROS topics to create a representation of the robot's configuration in the world. If you have a properly configured robot running ROS, most of this information is already generated for you, e.g. the joint_states and tf topics are necessary for almost any robot operations in ROS. The one additional topic that the planning scene monitor listens to is the attached collision object topic. Here's a list of all the topics that specify information corresponding to the current state of the robot:
 * joint_states - topic for getting joint values for the robot
 * tf - topic for getting additional transform information about the robot's configuration in the world
 * attached_collision_object - topic for information about objects that have been picked up by the robot and are attached to the robot model

Joint States Topic

 * topic name : joint_states
 * message type: sensor_msgs/JointState

joint_states is the topic on which the planning scene monitor expects to hear information about the joints of the robot. In general, you should have a joint_states publisher on your robot (real or simulated) publishing this information.
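For reference, a minimal publisher of this message might look like the following sketch (the joint names, values and publish rate are placeholders; on a real robot the driver or simulator publishes this topic for you):

```cpp
#include <ros/ros.h>
#include <sensor_msgs/JointState.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "fake_joint_state_publisher");
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<sensor_msgs::JointState>("joint_states", 10);

  ros::Rate rate(50);  // joint states are typically published at tens of Hz
  while (ros::ok())
  {
    sensor_msgs::JointState msg;
    msg.header.stamp = ros::Time::now();
    msg.name.push_back("joint1");      // placeholder joint name
    msg.name.push_back("joint2");      // placeholder joint name
    msg.position.push_back(0.0);       // one value per named joint
    msg.position.push_back(1.57);
    pub.publish(msg);
    rate.sleep();
  }
  return 0;
}
```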

Transform Information

 * topic name: tf
 * message type: tf/tfMessage

Your simulated or real robot should also have a source of transform information being published. A typical approach to generating such information for a robot itself is to use the robot_state_publisher to convert joint values into transform information.

Attached Object Information

 * topic name: attached_collision_object
 * message type: moveit_msgs/AttachedCollisionObject

The attached_collision_object message is used to specify that an object has been attached to the model of the robot, e.g. if the robot picks up an object. In such cases, it is important for the motion planners to know that the robot is carrying an object.
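The following sketch publishes such a message to attach a small box to the robot; the link names, object id and dimensions are placeholders for illustration:

```cpp
#include <ros/ros.h>
#include <moveit_msgs/AttachedCollisionObject.h>
#include <shape_msgs/SolidPrimitive.h>
#include <geometry_msgs/Pose.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "attach_object_demo");
  ros::NodeHandle nh;
  ros::Publisher pub =
      nh.advertise<moveit_msgs::AttachedCollisionObject>("attached_collision_object", 1);

  moveit_msgs::AttachedCollisionObject attached;
  attached.link_name = "gripper_link";  // placeholder: link the object attaches to
  attached.object.header.frame_id = "gripper_link";
  attached.object.id = "box";

  // A 5 cm cube at the link origin
  shape_msgs::SolidPrimitive primitive;
  primitive.type = shape_msgs::SolidPrimitive::BOX;
  primitive.dimensions.resize(3);
  primitive.dimensions[0] = 0.05;
  primitive.dimensions[1] = 0.05;
  primitive.dimensions[2] = 0.05;

  geometry_msgs::Pose pose;
  pose.orientation.w = 1.0;

  attached.object.primitives.push_back(primitive);
  attached.object.primitive_poses.push_back(pose);
  attached.object.operation = attached.object.ADD;

  // Links allowed to touch the object without reporting a collision
  attached.touch_links.push_back("gripper_link");

  ros::WallDuration(1.0).sleep();  // give subscribers time to connect
  pub.publish(attached);
  return 0;
}
```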

Environment Representation
The planning scene monitor can take input on the environment in two ways: (1) you can manually add and remove objects (including an occupancy map) in the environment, or (2) information about the environment can come in directly from other modules, e.g. an object detection routine. To take such input, the planning scene monitor listens on a set of topics, each with its own message type. Note that the information carried on these topics overlaps to some degree; the multiple input paths are provided as a convenience to make the monitor easier to use.

Input Type 1: PlanningScene Message on the planning_scene topic
The planning scene message incorporates all the information in a planning scene, including the robot state, attached object information, environment representation and the specification of allowed collision operations. It is the highest level message that the planning scene monitor can listen to.

Input Type 2: PlanningScene Message as a diff
A flag in the PlanningScene message allows for the specification of the information in this message as a diff corresponding to the current representation that the planning scene monitor is maintaining. This could be used, e.g., to add or remove objects from a planning scene.
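For example, a diff that adds a single collision object to the world can be published as in this sketch (the frame and object id are placeholders, and the object's geometry is elided for brevity):

```cpp
#include <ros/ros.h>
#include <moveit_msgs/PlanningScene.h>
#include <moveit_msgs/CollisionObject.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "planning_scene_diff_demo");
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<moveit_msgs::PlanningScene>("planning_scene", 1);

  // A collision object to add to the world (geometry omitted here)
  moveit_msgs::CollisionObject object;
  object.header.frame_id = "base_link";  // placeholder frame
  object.id = "box";
  object.operation = moveit_msgs::CollisionObject::ADD;

  moveit_msgs::PlanningScene scene;
  scene.is_diff = true;  // interpret this message as a diff, not a full scene
  scene.world.collision_objects.push_back(object);

  ros::WallDuration(1.0).sleep();  // give subscribers time to connect
  pub.publish(scene);
  return 0;
}
```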

Objects in the world

 * topic name : collision_object
 * message type: moveit_msgs/CollisionObject

The collision_object message is used to add or remove objects in the planning scene. A typical use of this message would be when an object detection algorithm detects an object in the world. The object can be added to the planning scene by broadcasting its information on this topic.
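A sketch of broadcasting such a message, e.g. from a hypothetical object detection node (the frame, object id and box dimensions are placeholders):

```cpp
#include <ros/ros.h>
#include <moveit_msgs/CollisionObject.h>
#include <shape_msgs/SolidPrimitive.h>
#include <geometry_msgs/Pose.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "collision_object_demo");
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<moveit_msgs::CollisionObject>("collision_object", 1);

  moveit_msgs::CollisionObject object;
  object.header.frame_id = "base_link";  // placeholder frame
  object.id = "detected_box";

  // A 10 cm cube, 1 m in front of the frame origin
  shape_msgs::SolidPrimitive primitive;
  primitive.type = shape_msgs::SolidPrimitive::BOX;
  primitive.dimensions.resize(3);
  primitive.dimensions[0] = 0.1;
  primitive.dimensions[1] = 0.1;
  primitive.dimensions[2] = 0.1;

  geometry_msgs::Pose pose;
  pose.orientation.w = 1.0;
  pose.position.x = 1.0;

  object.primitives.push_back(primitive);
  object.primitive_poses.push_back(pose);
  object.operation = moveit_msgs::CollisionObject::ADD;  // REMOVE deletes by id

  ros::WallDuration(1.0).sleep();  // give subscribers time to connect
  pub.publish(object);
  return 0;
}
```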

Maintaining an Occupancy Representation using sensor data
The planning scene monitor maintains (optionally) an occupancy map representation of the world using 3D point cloud information from the sensors on the robot. This quick start tutorial walks you through the steps for configuring the RGB-D sensor on the PR2 robot with MoveIt!, allowing the planning scene monitor to maintain its own occupancy map information.

Debugging
Use this command to run a useful tool for debugging the state of the planning_scene_monitor:

rosrun moveit_ros_planning moveit_print_planning_model_info

ROS API
The ROS API to the planning scene monitor allows a user to perform multiple operations:
 * Specifying a whole planning scene using the ROS PlanningScene message.
 * Specifying diffs to the current planning scene.
 * Adding or removing objects from the environment.
 * Attaching or detaching objects from the robot.

The ROS API is explored in more detail in the ROS API tutorial.

Links

 * Back to Planning Scene