COnGRATS: COmputer GRAphic generated synthetic Traffic Scenes

Ground truth data is an important prerequisite for evaluating vision algorithms that are used in traffic environments, where safety and reliability are of high concern. Obtaining ground truth from real traffic requires expensive hardware setups, and even with those the ground truth is often not completely accurate.

In a synthetic 3D environment, however, all parameters are known and controllable, and ground truth data can be absolutely accurate. We therefore present our method to create synthetic images of traffic scenarios with several types of ground truth data.

In a flexible and highly configurable system, traffic scenarios can be created with realistic vehicle movement, variable lighting and free camera placement. The scenes are rendered using path tracing, a method capable of producing images with physically accurate illumination and thus a high degree of realism. Ground truth data is provided for depth, optical flow, surface normals and semantic scene labeling.

Images created with this system are highly realistic and close to natural images, both in subjective visual impression and in terms of image statistics.

Overview

Framework

The COnGRATS framework consists of two parts, a configuration tool and a rendering engine.

In the configuration tool, a vehicle dynamics model is used to create user-defined traffic scenarios. A user can choose to drive around manually in a scene or make vehicles drive semi-automatically.

Once a scene is fully configured, the pose data from the configuration tool is used to render the traffic scenario with the path-tracing rendering engine of Blender. At this step, the ground truth data is also acquired and stored.
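The hand-off between the two stages can be pictured as a list of per-frame vehicle and camera poses that the renderer consumes. As a minimal sketch (the actual pose file format is not specified here; the position-plus-yaw representation and the function name are assumptions), one such pose can be turned into a 4x4 camera-to-world transform like this:

```python
import numpy as np

def pose_to_matrix(x, y, z, yaw):
    """Build a 4x4 camera-to-world transform from a position and a
    yaw angle (radians) about the vertical axis. Roll and pitch are
    omitted for brevity; a full pipeline would use a complete rotation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])
    T[:3, 3] = [x, y, z]
    return T

# Toy example: a vehicle 5 m along x, turned 90 degrees to the left.
M = pose_to_matrix(5.0, 0.0, 0.0, np.pi / 2)
```

A renderer would then place the camera object at this transform for each frame before invoking the path tracer.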

Ground truth data

Currently, sequences created with COnGRATS can have ground truth for depth, scene labeling, optical flow, surface normals and pose data.

Depth maps
Motion labels
Semantic scene labeling
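Dense ground truth of this kind is typically used for quantitative evaluation of vision algorithms. As an illustrative sketch (the array names and the HxWx2 flow layout are assumptions, not the dataset's actual file format), the average endpoint error of an estimated optical flow field against ground-truth flow can be computed as:

```python
import numpy as np

def average_endpoint_error(flow_est, flow_gt):
    """Mean Euclidean distance between estimated and ground-truth
    flow vectors; both arrays have shape (H, W, 2) holding (u, v)
    displacements in pixels."""
    diff = flow_est - flow_gt
    epe = np.sqrt((diff ** 2).sum(axis=-1))
    return epe.mean()

# Toy example: constant rightward flow, estimated with a 0.5 px overshoot.
gt = np.zeros((4, 4, 2))
gt[..., 0] = 2.0            # 2 px to the right everywhere
est = gt.copy()
est[..., 0] += 0.5          # estimator overshoots by 0.5 px
aepe = average_endpoint_error(est, gt)
```

The same pattern applies to depth (per-pixel absolute or relative error) and to semantic labels (per-pixel accuracy or intersection-over-union).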
Camera positioning

Camera positioning is flexible: both vehicle-mounted camera set-ups and static cameras looking onto the road are possible.
Similarly, the number of cameras can be configured freely. The camera model incorporates focal length and sensor size and can approximate real cameras accurately.
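Because the camera model is parameterised by focal length and sensor size, the depth ground truth can be lifted into metric 3D camera coordinates with a standard pinhole model. A minimal sketch (function name and the centred principal point are assumptions; the framework's exact intrinsics handling may differ):

```python
import numpy as np

def backproject_depth(depth, focal_mm, sensor_mm):
    """Back-project a depth map (H, W), in metres, into camera-space
    3D points using a pinhole model. Focal length and sensor width
    are in millimetres; the principal point is assumed at the centre."""
    h, w = depth.shape
    f_px = focal_mm / sensor_mm * w          # focal length in pixels
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - w / 2.0) * depth / f_px
    y = (v - h / 2.0) * depth / f_px
    return np.stack([x, y, depth], axis=-1)  # (H, W, 3) points

# Toy example: a flat wall 10 m away, seen by a 35 mm lens on a 32 mm sensor.
pts = backproject_depth(np.full((4, 6), 10.0), focal_mm=35.0, sensor_mm=32.0)
```

With multiple configured cameras, the same intrinsics also yield stereo disparity from the depth maps.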

Environmental conditions

Sequences in the COnGRATS framework can be rendered under various environmental conditions, including wet roads, fog and nighttime scenarios.

Foggy weather conditions
Sunny weather conditions
Rainy weather conditions
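Fog in particular interacts directly with the depth ground truth: under the standard atmospheric scattering model, scene radiance is attenuated exponentially with distance and blended with a constant airlight. A minimal sketch of this relationship (the extinction coefficient and airlight value are illustrative, not parameters of the framework):

```python
import numpy as np

def add_fog(image, depth, beta=0.05, airlight=0.8):
    """Apply the standard atmospheric scattering model
    I = J * t + A * (1 - t), with transmission t = exp(-beta * d).
    image: (H, W, 3) radiance in [0, 1]; depth: (H, W) in metres."""
    t = np.exp(-beta * depth)[..., None]     # per-pixel transmission
    return image * t + airlight * (1.0 - t)

# Toy example: a dark surface 50 m away fades towards the airlight colour.
foggy = add_fog(np.full((2, 2, 3), 0.2), np.full((2, 2), 50.0))
```

The same transmission map can conversely serve as ground truth when evaluating defogging or visibility-estimation algorithms.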
Papers
  • D. Biedermann, M. Ochs and R. Mester: Evaluating visual ADAS components on the COnGRATS dataset. Intelligent Vehicles (IV) Symposium, Gothenburg, Sweden, June 2016
  • D. Biedermann, M. Ochs and R. Mester: COnGRATS: Realistic Simulation of Traffic Sequences for Autonomous Driving. Image and Vision Computing New Zealand (IVCNZ), Auckland, New Zealand, November 2015 (Best Student Paper Award)

Videos

Download

Currently available datasets to download:

  • ConstructionScene00KittiPoses
  • ConstructionScene01