SLAMDUNK - Visual SLAM Evaluation Framework

News

Introduction

SLAMDUNK is a framework for evaluating visual SLAM systems on rendered image sequences. The framework is a collection of XML format definitions, Makefiles, Python scripts, and a C++ API.

For system integration and easy extensibility, all relevant data throughout the framework is stored in an XML format. This includes camera and trajectory definitions, log data, ground truth, and intermediate evaluation results.
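As an illustration, a trajectory definition might look like the following sketch. The element and attribute names here are hypothetical; the actual schema is given by the format definitions in the repository:

```xml
<!-- Hypothetical sketch only; consult the XML format definitions
     shipped with the framework for the real schema. -->
<trajectory>
  <keyframe time="0.0">
    <position x="0.0" y="1.5" z="-2.0"/>
    <lookat   x="0.0" y="1.0" z="0.0"/>
  </keyframe>
</trajectory>
```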

A large part of the framework deals with automating the creation of image sequences and the evaluation of experiment results. This is built on Makefiles and Python scripts.
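A rendering rule of this kind might look like the following sketch. The file names and directory layout are illustrative, not the framework's actual rules; the POV-Ray options +I, +O, +W, and +H select the input file, output file, and image size, and -D suppresses the preview display:

```make
# Illustrative only -- the actual rules live in the framework's Makefiles.
frames/%.png: scenes/%.pov
	povray +I$< +O$@ +W640 +H480 -D
```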

For image generation, the POV-Ray command line ray tracer is used. POV-Ray can explicitly compute the intersection point of a ray with the whole scene. We use this feature to generate ground truth sparse feature maps.
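To illustrate the underlying idea (not the framework's or POV-Ray's actual code), here is a minimal sketch of computing the explicit intersection of a ray with one scene primitive, a sphere. Ground truth for a feature is the nearest such intersection along the ray through its image location:

```cpp
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 add(const Vec3& a, const Vec3& b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
static Vec3 scale(const Vec3& a, double s)    { return {a.x*s, a.y*s, a.z*s}; }

// Nearest intersection of a ray (origin o, unit direction d) with a sphere.
// Returns the 3D hit point, or nothing if the ray misses.
std::optional<Vec3> intersect_sphere(Vec3 o, Vec3 d, Vec3 center, double radius) {
    Vec3 oc = sub(o, center);
    double b = dot(oc, d);                 // half the linear coefficient
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - c;               // quadratic discriminant
    if (disc < 0.0) return std::nullopt;   // ray misses the sphere
    double t = -b - std::sqrt(disc);       // nearer of the two roots
    if (t < 0.0) return std::nullopt;      // intersection behind the origin
    return add(o, scale(d, t));
}
```

A ray tracer like POV-Ray evaluates such intersections against every object in the scene and keeps the closest hit, which is what makes exact ground-truth feature positions available for free during rendering.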

For the interface between the evaluation framework and the VSLAM system, we provide a lightweight API in C++ that allows access to the created sequence images and straightforward data logging of the VSLAM process.
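The kind of logging the API supports can be sketched as follows. The class and method names here are invented for illustration (the real API is documented in the repository); the point is that each frame's estimate is written as an XML element that the evaluation scripts can later parse:

```cpp
#include <sstream>
#include <string>

// Hypothetical sketch -- the actual slamdunk API differs.
// Collects one pose estimate per frame as an XML element, the kind of
// log data the evaluation side of the framework consumes.
class PoseLogger {
public:
    // Record the estimated camera position for a given frame.
    void log(int frame, double x, double y, double z) {
        out_ << "<pose frame=\"" << frame << "\" x=\"" << x
             << "\" y=\"" << y << "\" z=\"" << z << "\"/>\n";
    }
    // Return the complete log as a well-formed XML document.
    std::string str() const { return "<log>\n" + out_.str() + "</log>\n"; }
private:
    std::ostringstream out_;
};
```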

Download

The most recent version is available from the Git repository. You can use the following command:

git clone http://vslam.inf.tu-dresden.de/git/slamdunk

You can also download a recent snapshot:
snapshot100111.tar.gz
snapshot090922.tar.gz
snapshot090905.tar.gz

Pre-rendered sequences

BMVC09 scene

sequence files (scene, camera definition, trajectory, rendering options)


BMVC09 scene, monocular
download pre-rendered sequence


BMVC09 scene, monocular, motion blur
download pre-rendered sequences


BMVC09 scene, stereo
download pre-rendered sequences


BMVC09 scene, stereo, motion blur
download pre-rendered sequences

Note: To evaluate your VSLAM system on these sequences, you also have to download the sequence files. Unpack the tarball in the root directory of your installation. This adds the scene description, the monocular and stereo camera definitions, a handmade trajectory, rendering options both with and without motion blur, and the setup files that were used to create the sequences.

Installation

The framework is developed for the GNU/Linux operating system. Your system needs to satisfy the following dependencies:
(The version numbers indicate the lowest versions with which we have tried the framework. Lower versions might work; higher versions should work.)

You might also be able to run the framework on Windows, but this has not been tested at all. You will most likely have to use Cygwin, because the framework relies heavily on GNU make. If you use the framework on Windows and can help with installation instructions, please let us know.

Documentation

A high-level overview and description of framework concepts can be found in the following paper:
A Framework for Evaluating Visual SLAM (Funke and Pietzsch, BMVC 09)

To get you started, there are tutorials covering the different framework components:

There is also a user's manual.

All documentation is also contained in the git repository and the snapshots.

Contact

Jan.Funke (at) inf.tu-dresden.de
Tobias.Pietzsch (at) inf.tu-dresden.de