Overview
Robots that succeed in factories stumble over the simplest daily tasks that humans take for granted, because a change of environment makes the task exceedingly difficult. Aiming to teach robots to perform daily interactive manipulation in changing environments using human demonstrations, we collected our own data of interactive manipulation. The dataset focuses on the position, orientation, force, and torque of objects manipulated in daily tasks. It includes 1,603 trials of 32 types of daily motions and 1,751 trials of pouring alone, as well as helper code. We present our dataset to facilitate research on task-oriented interactive manipulation.
We present a dataset of daily interactive manipulation.
Specifically, we record daily performed fine motion in
which an object is manipulated to interact with another
object. We refer to the person who executes the motion as
the subject, the manipulated object as the tool, and the
interactive object as the object. We focus on recording the
motion of the tool; in some cases, we also record the motion
of the object.
The dataset consists of two parts. The first part contains
1,603 trials that cover 32 types of motions. We choose fine
motions that people commonly perform in daily life which
involve interaction with a variety of objects. Different
subsets of these motions appear in several existing
motion-related datasets. The motions we collected include
those most frequently executed in cooking scenarios, except
pick-and-place, which we exclude because it barely involves
any change of orientation.
The second part contains the pouring motion alone. We
collect it to help with motion generalization to different
environments. We chose pouring because 1) it is the second
most frequently executed motion in cooking, right after
pick-and-place, and 2) we can easily vary the environment
setup of the pouring motion by switching among different
materials, cups, and containers. The pouring data contains
1,751 trials of pouring 3 materials from 6 cups into 10
containers.
We collect the two parts of the data using the same
system.
The dataset provides position and orientation (PO) and
force and torque (FT) with 100% coverage, and vision with
50% coverage. The incomplete coverage of vision results
from filming restrictions.
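As an illustration of how one might work with the PO and FT channels of a trial, below is a minimal Python sketch. The column names and CSV layout here are hypothetical (the dataset's actual file format and helper code may organize the channels differently); the sketch only shows the kind of per-sample record the recorded quantities suggest.

```python
import csv
import io

# Hypothetical column layout for one trial. These names are assumptions
# for illustration only; consult the dataset's helper code for the real
# format. The channels mirror what the dataset records: tool position,
# tool orientation, force, and torque.
COLUMNS = ["t",                      # time (s)
           "px", "py", "pz",         # tool position (m)
           "qx", "qy", "qz", "qw",   # tool orientation (unit quaternion)
           "fx", "fy", "fz",         # force (N)
           "tx", "ty", "tz"]         # torque (N*m)

def load_trial(fileobj):
    """Parse one trial into a list of per-sample dicts keyed by channel."""
    reader = csv.DictReader(fileobj, fieldnames=COLUMNS)
    return [{k: float(v) for k, v in row.items()} for row in reader]

# Two synthetic samples standing in for a real recording.
sample = io.StringIO(
    "0.00,0.1,0.2,0.30,0,0,0,1,0.5,0.0,9.8,0.01,0.02,0.03\n"
    "0.01,0.1,0.2,0.31,0,0,0,1,0.5,0.0,9.7,0.01,0.02,0.03\n")
trial = load_trial(sample)
print(len(trial), trial[1]["pz"])
```

A loader of this shape makes each sample self-describing, so downstream code can select the PO or FT channels by name rather than by column index.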
To cite this dataset:
Huang, Y. and Sun, Y. (2019), A Dataset of Daily Interactive Manipulation, International Journal of Robotics Research (IJRR), 38(8): 879-886.
Bibtex:
@article{huang2018dataset,
  title={A dataset of daily interactive manipulation},
  author={Huang, Yongqiang and Sun, Yu},
  journal={The International Journal of Robotics Research},
  pages={879--886},
  volume={38},
  number={8},
  year={2019},
  publisher={SAGE Publications Sage UK: London, England}
}