Multi-modal perception for small obstacles

LiDAR Point Clouds


Dense Annotations

Real and Synthetic Datapoints

Real Data

Small obstacles on road with challenging lighting and occlusion.

Synthetic Data

Synthetic data with small obstacles on road, collected using Microsoft AirSim in Unreal Engine.

Small Obstacle Dataset

This dataset addresses the problem of detecting unexpected small obstacles on the road caused by construction activities, lost cargo, and other stochastic scenarios. Instances of such obstacles are rare in popular autonomous driving datasets (KITTI, Waymo, Cityscapes), so methods trained on those datasets may fail to address this problem adequately. We therefore introduce this dataset to the research community, comprising over 3000 annotated frames captured in challenging scenarios with a varied set of small obstacles.

Stereo depth has previously been employed (Lost and Found dataset) as a complementary input to images for this problem; however, the depth obtained was limited to 20 metres and could prove erroneous due to the lack of discernible features on small objects. In this dataset, we use a highly accurate but sparse LiDAR sensor to obtain depth up to a range of 75 metres. Pixel-wise image annotations for 3 semantic classes (small obstacle, road, and off-road) are available for about 3000 frames. We provide a precise extrinsic calibration matrix between the camera and the LiDAR (refer to the Publication section for our methodology), which can be used to obtain point-cloud labels from the given image annotations. The dataset can thus be used to evaluate individual LiDAR/image methods or a multi-modal approach for this task. We further invite contributions on the task of domain adaptation and release a synthetic version of the small obstacle dataset collected on custom city maps in Unreal Engine. Links to the dataset and code for the publication can be accessed from the Download section.
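The label transfer described above can be sketched as follows: project each LiDAR point into the image using the extrinsic transform and camera intrinsics, then read the semantic label at the resulting pixel. This is a minimal illustration, not the dataset's released code; the matrix names (`K`, `T`), class ids, and function signature are assumptions for the sketch.

```python
import numpy as np

def label_point_cloud(points, labels_img, K, T):
    """Assign each LiDAR point the semantic label of the pixel it projects to.

    points     : (N, 3) LiDAR points in the LiDAR frame
    labels_img : (H, W) per-pixel class ids (e.g. 0=off-road, 1=road, 2=obstacle)
    K          : (3, 3) camera intrinsic matrix
    T          : (4, 4) LiDAR-to-camera extrinsic transform
    Returns an (N,) label array; points that fall outside the image or
    behind the camera are assigned -1.
    """
    n = points.shape[0]
    H, W = labels_img.shape

    # Homogeneous coordinates, then move points into the camera frame.
    pts_h = np.hstack([points, np.ones((n, 1))])
    cam = (T @ pts_h.T).T[:, :3]

    # Pinhole projection: divide by depth to get pixel coordinates.
    uvw = (K @ cam.T).T
    z = uvw[:, 2]
    valid = z > 0  # keep only points in front of the camera

    u = np.full(n, -1, dtype=int)
    v = np.full(n, -1, dtype=int)
    u[valid] = np.round(uvw[valid, 0] / z[valid]).astype(int)
    v[valid] = np.round(uvw[valid, 1] / z[valid]).astype(int)

    inside = valid & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    out = np.full(n, -1, dtype=np.int64)
    out[inside] = labels_img[v[inside], u[inside]]
    return out
```

Because the LiDAR is sparse, each frame yields labels for only a subset of points; occlusion handling (a point hidden behind another surface still projecting into the image) is omitted here for brevity.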