PickAndPlace

Input

 * Joint state information (e.g., sensor_msgs::JointState messages for the robot's current configuration)
 * 3D sensor information (e.g., Kinect, laser, stereo)
 * Set of known objects (sent to the move_group node) as moveit_msgs::CollisionObject
 * User-specified object to pick (by name)
 * Optionally, a set of grasping poses can be supplied
 * The configurations of the end-effector before and after grasping

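The known-object input can be illustrated with a schematic sketch. The dataclasses below mirror a subset of the fields of moveit_msgs::CollisionObject and shape_msgs::SolidPrimitive (the names `id`, `primitives`, `primitive_poses`, `operation`, `type`, and `dimensions` follow the real messages); the flattened `frame_id` field and the plain-tuple poses are simplifications for illustration, not the actual ROS types.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

BOX = 1  # shape_msgs::SolidPrimitive.BOX
ADD = 0  # moveit_msgs::CollisionObject.ADD

@dataclass
class SolidPrimitive:
    """Stand-in for shape_msgs::SolidPrimitive."""
    type: int
    dimensions: List[float]  # for a box: x, y, z extents in meters

@dataclass
class CollisionObject:
    """Stand-in for a subset of moveit_msgs::CollisionObject."""
    id: str
    frame_id: str  # header.frame_id in the real message
    primitives: List[SolidPrimitive]
    primitive_poses: List[Tuple[float, float, float]]  # positions only, for brevity
    operation: int = ADD

# A 5 cm cube named "part", sitting 0.6 m in front of the robot base.
part = CollisionObject(
    id="part",
    frame_id="base_link",
    primitives=[SolidPrimitive(type=BOX, dimensions=[0.05, 0.05, 0.05])],
    primitive_poses=[(0.6, 0.0, 0.025)],
    operation=ADD,
)
```

Sending such an object to the move_group node (e.g., via the planning scene topic) makes it known for collision checking and available as a pick target by its `id`.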
Output

 * The object specified by the user is picked by the robot
 * The environment of the robot is updated accordingly (the robot will be aware the object is now attached to the end-effector)

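The environment update can be sketched as moving the object from the set of world objects to the set of objects attached to a robot link. The `link_name` and `touch_links` names follow moveit_msgs::AttachedCollisionObject (touch links are the links allowed to remain in contact with the attached object, typically the gripper fingers); the dictionaries themselves are an illustrative stand-in for the planning scene, not the real API.

```python
# Schematic planning scene: objects in the world vs. objects attached to a link.
world = {"part": {"frame": "base_link"}}
attached = {}

def attach(name, link_name, touch_links):
    """Move an object from the world to the robot (simplified).

    After a successful pick the object is no longer a world obstacle;
    it moves with the given link, and contacts with the touch links
    (e.g., the gripper fingers) are no longer treated as collisions.
    """
    attached[name] = {
        "object": world.pop(name),
        "link_name": link_name,
        "touch_links": touch_links,
    }

attach("part", "gripper_link", touch_links=["left_finger", "right_finger"])
```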
API

This functionality is available programmatically:
 * [[MoveGroup_Node|ROS Interface]]
 * MoveGroup Interface
 * Python Interface

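As a rough sketch of what a pick request does internally (illustrative only; the actual implementation lives in the move_group node, and `plan` and `execute` below are hypothetical stand-ins for the motion planner and trajectory controller):

```python
def pick(object_name, grasps, plan, execute):
    """Illustrative pick pipeline: try each candidate grasp until one
    yields plans for all stages, then execute those plans in order."""
    for grasp in grasps:
        stages = [
            ("pre_grasp", grasp["pre_grasp_pose"]),  # open gripper, approach
            ("grasp", grasp["grasp_pose"]),          # close gripper on object
            ("retreat", grasp["retreat_pose"]),      # lift the object away
        ]
        trajectories = []
        for name, pose in stages:
            traj = plan(pose)
            if traj is None:  # this grasp is unreachable; try the next one
                break
            trajectories.append((name, traj))
        else:
            for name, traj in trajectories:
                execute(traj)
            return True  # success: the object would now be attached
    return False  # no feasible grasp found
```

This is why supplying a set of grasping poses is optional: if none are given, a grasp generator can propose candidates, and the pipeline simply iterates until one is feasible.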
Required Capabilities

 * Maintaining a world representation
 * Planning trajectories
 * Executing and monitoring trajectories