Cluttered Tabletop Dataset (CTD)

This dataset contains 89 scenes of various objects placed on a table. The set of objects includes simple shapes such as boxes as well as nonconvex objects such as a teddy bear. Scene complexity varies from a single object to multiple objects placed side by side and stacked on top of each other. Scenes were captured by moving a Kinect sensor in an approximately 120-degree horizontal arc around the center of the scene and stitching the individual RGB-D frames together with the Kinect Fusion algorithm. The pointclouds were preprocessed by removing the points belonging to the environment, leaving only the points belonging to the objects on the table. In addition to the pointcloud data, the following information is available for each scene:

The dataset is organized as follows:

Dataset

Each scene is stored in a folder called scene_xxx. Scenes are numbered in increasing order of complexity:

Scene indices    Number of objects
000 - 024        1
025 - 038        2
039 - 056        3-4
057 - 088        5+

For each scene the following information is provided:

Sample Code

A C++ program is provided that shows how to load a scene and display its pointcloud, table plane, and ground truth segmentation. It requires PCL.

To compile:

cd sample_code
mkdir build
cd build
cmake ..
make

To run:

./sample_code <scene directory>

Associated Publications

Please cite the following reference if you use this dataset:

@inproceedings{Ecins_IROS2018,
 author = "Aleksandrs Ecins and Cornelia Fermuller and Yiannis Aloimonos",
 title = "Seeing Behind The Scene: Using Symmetry To Reason About Objects in Cluttered Environments",
 booktitle = "International Conference on Intelligent Robots and Systems (IROS)",
 year = "2018",
}

Author

Aleksandrs Ecins
Computer Vision Laboratory
University of Maryland, College Park
aecins(at)cs.umd.edu

PS: Don't hesitate to contact me for further information.