Ground truth data is an important prerequisite for evaluating vision algorithms that are used in traffic environments, where safety and reliability are of high concern. Obtaining ground truth from real traffic requires expensive hardware setups, and even with those the ground truth is often not completely accurate.
In a synthetic 3D environment, however, all parameters are known and controllable, and the ground truth data is exact by construction. We therefore present our method to create synthetic images of traffic scenarios together with several types of ground truth data.
In a flexible and highly configurable system, traffic scenarios can be created with realistic vehicle movement, variable lighting and free camera placement. These scenes are rendered using path tracing, which produces images with physically accurate illumination and thus a high degree of realism. Ground truth data is provided for depth, optical flow, surface normals and semantic scene labeling.
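Exact per-pixel ground truth makes quantitative evaluation straightforward. As a minimal sketch (the function name, the toy values and the choice of mean absolute error are illustrative, not part of COnGRATS), an estimated depth map could be scored against the rendered ground truth like this:

```python
def mean_abs_depth_error(estimated, ground_truth):
    """Mean absolute error over all pixels (depth values in metres)."""
    assert len(estimated) == len(ground_truth)
    return sum(abs(e - g) for e, g in zip(estimated, ground_truth)) / len(estimated)

# Toy stand-ins for per-pixel depth maps; COnGRATS itself provides
# full-resolution data per frame.
estimated    = [10.2, 25.1, 7.9, 40.5]   # depths from some stereo algorithm
ground_truth = [10.0, 25.0, 8.0, 41.0]   # exact depths from the renderer

print(mean_abs_depth_error(estimated, ground_truth))
```

The same pattern applies to the other modalities, e.g. endpoint error for optical flow or per-pixel accuracy for semantic labels.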
Images created with this system are highly realistic and close to natural images, both in terms of subjective visual impression and in terms of image statistics.
The COnGRATS framework consists of two parts, a configuration tool and a rendering engine.
In the configuration tool, a vehicle dynamics model is used to create user-defined traffic scenarios. A user can either drive through a scene manually or let vehicles drive semi-automatically.
Once a scene is fully configured, the pose data from the configuration tool is used to render the traffic scenario with the path-tracing rendering engine of Blender. In this step, the ground truth data is also acquired and stored.
Ground truth data
Camera positioning is flexible: both vehicle-mounted camera setups and static cameras looking onto the road are possible.
Similarly, the number of cameras can be configured freely. The camera model incorporates focal length and sensor size and can therefore approximate real cameras accurately.
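A camera model parameterized by focal length and sensor size maps directly onto the standard pinhole model. The sketch below (example values are illustrative, not the COnGRATS defaults) shows the usual conversions: focal length in pixel units and the horizontal field of view:

```python
import math

def focal_length_pixels(focal_mm, sensor_width_mm, image_width_px):
    """Convert a physical focal length to pixel units for a pinhole camera."""
    return focal_mm / sensor_width_mm * image_width_px

def horizontal_fov_deg(focal_mm, sensor_width_mm):
    """Horizontal field of view of the pinhole camera, in degrees."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

# Illustrative parameters: 35 mm lens on a 36 mm-wide sensor, 1280 px image.
f_px = focal_length_pixels(focal_mm=35.0, sensor_width_mm=36.0, image_width_px=1280)
fov  = horizontal_fov_deg(focal_mm=35.0, sensor_width_mm=36.0)
print(round(f_px, 1), round(fov, 1))
```

Matching these parameters to a real camera's data sheet is what allows the synthetic images to approximate that camera's geometry.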
Publications
- D. Biedermann, M. Ochs and R. Mester: Evaluating visual ADAS components on the COnGRATS dataset. Intelligent Vehicles (IV) Symposium, Gothenburg, Sweden, June 2016
- D. Biedermann, M. Ochs and R. Mester: COnGRATS: Realistic Simulation of Traffic Sequences for Autonomous Driving. Image and Vision Computing New Zealand (IVCNZ), Auckland, New Zealand, November 2015 (Best Student Paper Award)
Currently available datasets to download: