To keep track of the run-time performance of the most important tools in the toolset, an automated benchmarking process has been set up. These benchmarks are also performed periodically on our build servers to measure how performance changes between builds. If you have an account for our TeamCity servers, you can view the results of these measurements there.
The details of the benchmarks are described below. To perform these benchmarks yourself, see the Setup section.
The run times of the following tools are currently being measured:
Each tool is run on a number of mCRL2 examples. The process performed depends on the tool, i.e., linearisation, state space exploration, or model checking of the nodeadlock property. The benchmarks are performed on the following examples from our repository:
Besides tool benchmarks there are also microbenchmarks that measure specific aspects of individual libraries.
For compiling the toolset itself and the build configuration, see Build instructions. Once you are able to compile the toolset, enable the `MCRL2_ENABLE_BENCHMARKS` CMake option to generate benchmark targets. Then compile the `benchmarks` target, which generates the files needed to perform the benchmarks and also compiles the tools and microbenchmarks if necessary.
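The steps above can be sketched as a configure-and-build sequence; this assumes a source checkout in `mcrl2/` and an out-of-source build directory `mcrl2-build/` (both directory names are illustrative, not prescribed by the toolset):

```shell
# Configure an out-of-source build with benchmark targets enabled.
cmake -S mcrl2 -B mcrl2-build \
      -DCMAKE_BUILD_TYPE=Release \
      -DMCRL2_ENABLE_BENCHMARKS=ON

# Build the benchmarks target; this also builds the tools and
# microbenchmarks it depends on.
cmake --build mcrl2-build --target benchmarks
```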
After this step, the benchmarks can be run using ctest. For each combination of tool, specification, and options a target named `benchmark_<tool>_<specification>_<options>` is generated, and every benchmark target carries the `benchmark` label. For example, to run all benchmarks pertaining to the 1394-fin specification use `ctest -L benchmark -R 1394-fin`, or to benchmark a specific tool use `ctest -L benchmark -R lps2lts`.
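Since the label and regex filters compose, it can be useful to preview which benchmarks a selection matches before spending time running them; ctest's standard `-N` flag lists matching tests without executing them. A sketch, run from the build directory:

```shell
# List all benchmark targets matching lps2lts without running them.
ctest -L benchmark -R lps2lts -N

# Run the selected benchmarks with verbose output.
ctest -L benchmark -R lps2lts -V
```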