APTE

Benchmarking POR

This page explains how to build APTE and gives instructions for reproducing the benchmarks of our POR (partial-order reduction) contributions.

APTE (Algorithm for Proving Trace Equivalence) is a decision procedure for proving trace equivalence between two processes, for a bounded number of sessions.

Some links: the official website, the official distribution on GitHub, the diff containing only the POR contributions, and this page on GitHub.

How to build APTE and launch benchmarks

Build APTE

Download APTE either by cloning the official repository with git:

$ git clone https://github.com/APTE/APTE.git
$ cd APTE

or by downloading a tarball:

$ wget https://github.com/APTE/APTE/archive/master.zip
$ unzip master.zip
$ cd APTE-master
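
Before compiling, you can check which OCaml compiler is installed (the exact version string will of course vary from machine to machine):

$ ocaml -version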

In order to compile APTE, you need an OCaml compiler (version > 3.0; see OCaml Download) and then type:

$ make

This produces the executable 'apte'. A help message is displayed when typing:

$ ./apte
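
A typical run takes a protocol file as argument. For instance, assuming one of the files shipped with the benchmarks (the path and file name below are only illustrative; see the folder ./bench/protocols/ for the actual files):

$ ./apte bench/protocols/WMF.txt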

Benchmarks

Some Python scripts are available to run the benchmarks and extract the results. You need Python (version > 2.0).
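
You can check your installed Python version with:

$ python --version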

Launch benchmarks

To launch benchmarks, go to the 'bench' folder and use the script 'launch_benchmark.py'. By default, the script launches all versions of APTE on all protocols. You can also pass arguments as follows (as the synopsis and the examples below suggest, '-v' selects the APTE versions to run, '-ft' filters the protocols by name, and '-f' sets the name of the log file):

$ cd bench
$ ./launch_benchmark.py -v [versions] -ft [filter] -f [filename]

Since each instance of this script uses only one core, it is advised to launch one instance per core of your processor, in parallel. For instance:

$ ./launch_benchmark.py -v ref -ft WMF -f WMF_ref
$ ./launch_benchmark.py -v red -ft WMF -f WMF_red
$ ./launch_benchmark.py -v red -ft Simple -f Simple_toy_example_red
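
To run several instances concurrently from a single shell, you can background them with '&' (a standard shell feature, not something provided by the scripts):

$ ./launch_benchmark.py -v ref -ft WMF -f WMF_ref &
$ ./launch_benchmark.py -v red -ft WMF -f WMF_red &
$ wait    # block until all background instances have finished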

If one test takes too much time (i.e., exceeds your timeout), you can kill the corresponding APTE process (e.g., in 'htop', press 'k'). You can then extract results from the log files manually, or read the following section.
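
If you prefer the command line to 'htop', a generic way to kill a runaway APTE process is 'pkill' (a standard Unix tool, unrelated to the benchmark scripts):

$ pkill -f apte    # kills every process whose command line matches 'apte'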

Extract results

Once 'launch_benchmark.py' has performed enough benchmarks, you can extract the results and display them in a matrix using the script 'extract_res.py'. This script looks for log files in the folder 'log', extracts the results, and produces a matrix. Note that if you killed a process launched by 'launch_benchmark.py' because it took too much time, 'extract_res.py' is not able to extract this information. To use this script, you need the following libraries ('easy_install [lib]' or 'pip install [lib]' to install):

To display results (in seconds), type:

$ ./extract_res.py

Tested protocols and results

Protocols

We performed benchmarks on real-world protocols. They are all defined in the folder ./bench/protocols/ (via GitHub: Protocols Files). We checked various properties. We list below all tested protocols and the property we check for each using APTE:

Results

We ran the three versions of APTE (reference, compression, and reduction) on these protocols with various sizes (number of agents in parallel). We performed the benchmarks on the following machine:

Note that APTE only uses one core of the processor.

The results can be found in the journal paper "A Reduced Semantics for Deciding Trace Equivalence", published in Logical Methods in Computer Science.
