Robot Unstructured Ground Driving

A Video Dataset for Visual Perception and
Autonomous Navigation in Unstructured Environments

RUGD Dataset Overview

The RUGD dataset focuses on semantic understanding of unstructured outdoor environments for applications in off-road autonomous navigation. The dataset comprises video sequences captured from the camera onboard a mobile robot platform. The overall goal of the data collection is to provide a dataset that is more representative of environments lacking the structural cues commonly found in urban autonomous navigation datasets. The platform used for data collection is small enough to maneuver in cluttered environments, and rugged enough to traverse challenging terrain and explore the more unstructured areas of an environment.

Dense pixel-wise annotations are provided for every fifth frame of each video sequence. The ontology is defined to support fine-grained terrain identification for path planning tasks, as well as object identification for avoiding obstacles and localizing landmarks. In total, the video annotations contain 24 semantic categories, including eight unique terrain types.
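Dense pixel-wise labels of this kind are commonly distributed as color-coded images, where each RGB color corresponds to one semantic category. The sketch below shows one way to convert such an annotation image into a class-id map; the palette entries and class names here are purely illustrative, not the actual RUGD ontology, and the storage format is an assumption.

```python
import numpy as np

# Hypothetical palette mapping RGB color -> class index.
# RUGD defines 24 classes; only three placeholder entries are shown here.
PALETTE = {
    (0, 0, 0): 0,      # e.g. void
    (0, 102, 0): 1,    # e.g. a terrain class such as grass
    (64, 64, 64): 2,   # e.g. a terrain class such as gravel
}

def rgb_to_class_ids(annotation: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 color-coded annotation image to an HxW class-id map.

    Colors not present in the palette map to -1 so they can be
    flagged downstream instead of being silently mislabeled.
    """
    ids = np.full(annotation.shape[:2], -1, dtype=np.int32)
    for color, class_id in PALETTE.items():
        mask = np.all(annotation == np.array(color, dtype=annotation.dtype), axis=-1)
        ids[mask] = class_id
    return ids

# Tiny synthetic 1x3 "annotation" exercising all three palette colors.
demo = np.array([[[0, 0, 0], [0, 102, 0], [64, 64, 64]]], dtype=np.uint8)
print(rgb_to_class_ids(demo))  # [[0 1 2]]
```

An integer class-id map in this form is the usual input for training semantic segmentation models or for building terrain-cost maps for a path planner.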

RUGD Example Annotations


Annotation files and associated raw video frames for the 18 video sequences can be downloaded below. Full video sequences are available upon request.


When using this dataset in your research, we would appreciate it if you cited us! [paper] [slides]

@inproceedings{wigness2019rugd,
  author = {Wigness, Maggie and Eum, Sungmin and Rogers, John G. and Han, David and Kwon, Heesung},
  title = {A RUGD Dataset for Autonomous Navigation and Visual Perception in Unstructured Outdoor Environments},
  booktitle = {International Conference on Intelligent Robots and Systems (IROS)},
  year = {2019}
}