Benchmarking

Benchmarking Motion Planners
MoveIt! includes a number of different planners that all satisfy a common interface. Because of this, we can run different planners on the same problem scenario to compare performance. If you have a motion planner that satisfies this interface, or that planner is part of a library that satisfies this interface, you can benchmark that planner as well.

There are four components to benchmarking:


 * The database of problems to run the benchmark on
 * The set of algorithms to benchmark
 * The tools that execute the different motion planning algorithms on the different problems and record results
 * The compilation of results into visual graphs and statistics

MoveIt! includes all four of these components. Please see the page on the MoveIt! Warehouse for information on databases in MoveIt! and the available motion planners for a list of algorithms that can be benchmarked.



Installation and Prerequisites
The ros-hydro-moveit-ros-benchmarks package is installed by default with the ros-hydro-moveit-full Debian package. For more information and other distributions, please see the installation instructions.

We assume you have a package named MYROBOT_moveit_config that is generated by the MoveIt Setup Assistant.

Starting the MoveIt Warehouse
If you do not already have a database of planning scenes running, launch the MoveIt! Warehouse using this command:

roslaunch MYROBOT_moveit_config default_warehouse_db.launch

This launch file sets the warehouse database to be saved inside your MYROBOT_moveit_config package. You might want to set up a .gitignore or similar file to keep this database out of version control. If you would instead like a custom database location, you can run a command similar to this:

roslaunch MYROBOT_moveit_config warehouse.launch moveit_warehouse_database_path:=~/moveit_db

By default the host will be at address 127.0.0.1 with port 33829. You can change the moveit_warehouse_host and moveit_warehouse_port ROS arguments by editing the file:

MYROBOT_moveit_config/launch/warehouse_settings.launch
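For illustration, the relevant part of such a settings file might look like the sketch below (argument names are taken from this tutorial; the file the Setup Assistant generates for your robot may differ):

```xml
<launch>
  <!-- Sketch only: exposes the warehouse host and port as overridable args -->
  <arg name="moveit_warehouse_host" default="127.0.0.1" />
  <arg name="moveit_warehouse_port" default="33829" />
  <param name="warehouse_host" value="$(arg moveit_warehouse_host)" />
  <param name="warehouse_port" value="$(arg moveit_warehouse_port)" />
</launch>
```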

Creating Planning Scenes, Start States, Queries and Constraints
To run benchmarking, you can either use a pre-existing planning scene saved to file, or create your own custom scene.

Load a Pre-Built Planning Scene
A source of already created planning scenes and queries is located at the Planning Arena website.

Example of Loading Planning Scene
Download industrial.scene. If you have not already loaded your robot's URDF and SRDF, run this:

roslaunch MYROBOT_moveit_config planning_context.launch load_robot_description:=true

Also make sure you have started the MoveIt Warehouse, as described above. Next, load the scene into the Warehouse by running:

rosrun moveit_ros_warehouse moveit_warehouse_import_from_text --host 127.0.0.1 --port 33829 --scene industrial.scene

To test if this was successful, launch Rviz with the Motion Planning plugin:

roslaunch MYROBOT_moveit_config demo.launch

In the Motion Planning window at the bottom left of Rviz, click Connect to the warehouse. Then click on the Stored Scenes tab, select industrial from the list, and click the Load Scene button; you should see the shelves appear as in the image below:



Creating a Custom Planning Scene
If you need to construct a custom planning scene, you can use the MoveIt Rviz Motion Planning Plugin, or write one by hand using a text editor and a lot of patience.

Create Start States and Goal Queries
Additionally, you will need to create your robot's start states and a set of goal queries for the benchmarking to test against. This can be accomplished with either the MoveIt Rviz Motion Planning Plugin or the MoveIt Benchmarking Tool.

Example
Download industrial.queries from the location mentioned above. Load your robot's URDF and SRDF and start the MoveIt Warehouse as described above. Then load the queries by running:

rosrun moveit_ros_warehouse moveit_warehouse_import_from_text --host 127.0.0.1 --port 33829 --queries industrial.queries

Follow a similar procedure as described for testing an imported scene above, except this time go to the Stored States tab, click Load States, enter ".*" in the popup box, and click OK; you should see the initial start state appear in the list box. Select it and click Set as Start to see the robot take the start position.

Creating Benchmark Configuration Files
A configuration file specifies how the benchmark should be performed: which algorithms to run, how many runs, and so on. Each configuration file specifies a benchmark request with respect to the MoveIt Warehouse database that moveit_call_benchmark connects to. The configuration file can be created either by hand or by using the MoveIt Benchmarking Tool. The following is an example configuration file:

[scene]
name=pole_blocking_right_arm_pan
output=mylocation.log
runs=2

[plugin]
name=ompl_interface/OMPLPlanner
planners=KPIECEkConfigDefault RRTConnectkConfigDefault
runs=10

[plugin]
name=my_lib/myPlanner
planners=planner_name

This file has two types of section: "scene" and "plugin".

"scene" section
There should only be one scene section. It can include the following parameters:
 * name Name of the planning scene to load from the database
 * runs Number of times to execute each algorithm for the problem. Multiple executions are needed when the planner is not deterministic and averaging of results is desired.
 * timeout Time limit for planning in seconds
 * start (optional) Regex for the start states to use
 * query (optional) Regex for the queries to execute (default: .+)
 * goal (optional) Regex for the names of constraints to use as goals
 * trajectory (optional) Regex for the names of constraints to use as trajectories
 * group (optional) Override the group to plan for
 * planning_frame (optional) Override the planning frame to use
 * default_constrained_link (optional) Specify the default link to consider as constrained when one is not specified in a moveit_msgs::Constraints message
 * goal_offset_x (optional) Goal offset in x. These offsets are useful for example when testing the same benchmark on multiple robots with different end effector positions
 * goal_offset_y (optional) Goal offset in y
 * goal_offset_z (optional) Goal offset in z
 * goal_offset_roll (optional) Goal offset in roll
 * goal_offset_pitch (optional) Goal offset in pitch
 * goal_offset_yaw (optional) Goal offset in yaw
 * output (optional) Location for saving computed data in *.log format. "1.log" will automatically be appended to the file name. The default output location is in your ~/.ros folder.
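To make the goal_offset_* parameters concrete, here is a small illustrative sketch in plain Python (not MoveIt code; the function names are invented for this example) showing how a goal position can be shifted by (x, y, z) offsets and how roll/pitch/yaw offsets map to a quaternion:

```python
import math

def rpy_to_quaternion(roll, pitch, yaw):
    """Convert roll/pitch/yaw angles (radians) to a quaternion (x, y, z, w)."""
    cr, sr = math.cos(roll / 2.0), math.sin(roll / 2.0)
    cp, sp = math.cos(pitch / 2.0), math.sin(pitch / 2.0)
    cy, sy = math.cos(yaw / 2.0), math.sin(yaw / 2.0)
    return (sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy,
            cr * cp * cy + sr * sp * sy)

def apply_goal_offset(position, offsets):
    """Shift a goal position by (goal_offset_x, goal_offset_y, goal_offset_z)."""
    return tuple(p + o for p, o in zip(position, offsets))

# Shift a goal 5 cm along x, and express a 90-degree yaw offset as a quaternion.
new_position = apply_goal_offset((0.4, 0.0, 1.0), (0.05, 0.0, 0.0))
yaw_offset_q = rpy_to_quaternion(0.0, 0.0, math.pi / 2)
print(new_position)
print(yaw_offset_q)
```

This is the kind of transformation that lets the same benchmark be reused on robots whose end effectors sit at slightly different poses.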

"plugin" section
You can have one or more plugin sections. Each can include the following parameters:
 * name specifies the name of the plugin that contains the implementation of planning_interface::Planner
 * planners the names of the planners to execute
 * runs (optional) can override the number of runs specified in the "scene" section.
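Since the format uses one [scene] section plus repeated [plugin] sections (which off-the-shelf INI parsers typically reject as duplicate sections), here is a minimal, illustrative Python sketch of how such a file can be read. The function is invented for this example and is not part of MoveIt:

```python
def parse_benchmark_cfg(text):
    """Parse a benchmark .cfg: one [scene] dict plus a list of [plugin] dicts."""
    scene = {}
    plugins = []
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith('#'):
            continue  # skip blanks and comments
        if line == '[scene]':
            current = scene
        elif line == '[plugin]':
            plugins.append({})          # each [plugin] opens a new dict
            current = plugins[-1]
        elif '=' in line and current is not None:
            key, _, value = line.partition('=')
            current[key.strip()] = value.strip()
    return scene, plugins

example = """\
[scene]
name=pole_blocking_right_arm_pan
runs=2

[plugin]
name=ompl_interface/OMPLPlanner
planners=KPIECEkConfigDefault RRTConnectkConfigDefault
runs=10
"""

scene, plugins = parse_benchmark_cfg(example)
print(scene['name'])
print(plugins[0]['planners'].split())
```

Note how the per-plugin runs value can override the scene-level one, matching the semantics described above.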

Setting Up Start and Goal Positions
There are two ways to specify a planning problem (start and goal positions) in a planning request for benchmarking: by using a combined query, or by specifying start and goal constraints for the desired planning group.

Query Method
A query includes the specification of both a start state and a goal representation. Queries are loaded from the warehouse planning scene database (which can associate MotionPlanRequest messages with PlanningScene messages). So when performing benchmarks for a particular scene, the MotionPlanRequest messages associated with that scene can be loaded and sent to the planner.

Manual Method
Sometimes it may be more convenient to construct queries by separately specifying start states and goal constraints.

Start
All start positions in the benchmarking pipeline consist simply of the initial joint states of the robot. You can easily create these using the Rviz Motion Planning Plugin or, optionally, the Benchmarking GUI. If you don't specify a start state, the default/initial position of the robot will be used, which is usually all 0 values for the joint positions.

Goal
Unlike the start state, you cannot specify a goal position as a set of joint positions. Instead, you must specify constraints for the goal.


 * Goal Constraints - Using the goal and goal_offset_* parameters above, position and orientation constraints can specify a desired pose for a robot link.
 * Trajectory Constraints - Using the trajectory parameter, a trajectory constraint can be set for the goal configuration.
 * Path Constraints - not implemented

Running the Benchmarks
You will need a launch file (e.g. run_benchmark_PLANNER.launch) that includes the settings for the planner you wish to benchmark. This file is specific to the planner plugin you wish to test, but should be similar to OMPL's launch file:

OMPL Example
An OMPL benchmark launch file is created automatically by the Setup Assistant and is located here:

YOURROBOT_moveit_config/run_benchmark_ompl.launch

This launch file loads the URDF, the SRDF, the MoveIt Warehouse, and an executable from the moveit_ros_benchmarks package. You will need to fill in the parameters that the planner plugins being benchmarked expect.

Note: To benchmark OMPL, you may need to tweak the projection evaluator set for each planning group in ompl_planning.yaml. This can be changed by editing

MYROBOT_moveit_config/config/ompl_planning.yaml

to have the right joints for each planning group at this line:

projection_evaluator: joints(name_of_first_joint_in_group,name_of_second_joint_in_group)

By default, projection_evaluator is set to the first two joints in your planning group's chain.
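For illustration, such an entry might look like the fragment below; the group name and joint names here are invented, so substitute the ones from your own SRDF:

```yaml
# Hypothetical group and joint names, for illustration only
manipulator:
  projection_evaluator: joints(shoulder_pan_joint,shoulder_lift_joint)
```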

Run
To run an actual benchmark, pass one or more configuration files as arguments:

roslaunch MYROBOT_moveit_config run_benchmark_ompl.launch cfg:=config1.cfg

The output of this run is a .log file whose location is specified by the "output" parameter in your .cfg file. This log file can be post-processed into a PDF, as described below.

Visualizing Benchmark Results
The output of the benchmark server can be post-processed to produce results in human-readable formats (e.g., plots). See the scripts/ folder in the moveit_ros_benchmarks package and run the scripts with the --help option to see the possible options.

Creating a PDF of the results
Find your generated .log output file and run the following command, replacing RESULTS with your file name:

rosrun moveit_ros_benchmarks moveit_benchmark_statistics.py RESULTS.log -p benchmark_results.pdf

This command parses all .log files given as arguments and stores their data in an SQLite database. The default file for that database is ~/.ros/benchmarks.db, but you can change that using the -d command line option.

The PDF plot can be generated later on using the -p argument only, because generating plots only uses the database.

Sometimes you may not want to merge your results into the same database. In that case, either use -d to specify a different database or remove the one that was created (e.g., rm ~/.ros/benchmarks.db).
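Because the results are stored in an ordinary SQLite file, you can also inspect the database directly with Python's standard sqlite3 module. This sketch merely lists whatever tables the statistics script created, without assuming any particular schema:

```python
import os
import sqlite3

# Default database location used by moveit_benchmark_statistics.py;
# fall back to an empty in-memory database if the file does not exist yet.
db_path = os.path.expanduser('~/.ros/benchmarks.db')
conn = sqlite3.connect(db_path if os.path.exists(db_path) else ':memory:')

# List the tables the statistics script created (empty if no benchmarks yet).
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)
conn.close()
```

From here you can run ad hoc SQL queries against your benchmark data instead of, or in addition to, generating the PDF plots.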

Experimental R Analysis
The following is a method to import the benchmarking results into R for further analysis. Still under development!

First, convert the generated *.log file to a SQLite database:

rosrun moveit_ros_benchmarks moveit_benchmark_statistics.py RightArmAll.1.log -m benchmark.sql

Then run the R script...