Vision-based manipulation of deformable and rigid objects using subspace projections of 2D contours

Jihong Zhu*, David Navarro-Alarcon, Robin Passama, Andrea Cherubini

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper proposes a unified vision-based manipulation framework using image contours of deformable/rigid objects. Instead of explicitly defining the features by geometries or functions, the robot automatically learns the visual features from processed vision data. Our method simultaneously generates – from the same data – both visual features and the interaction matrix that relates them to the robot control inputs. Extraction of the feature vector and control commands is done online and adaptively, and requires little data for initialization. Our method allows the robot to manipulate an object without knowing whether it is rigid or deformable. To validate our approach, we conduct numerical simulations and experiments with both deformable and rigid objects.
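The abstract describes the approach only at a high level. As an illustration, the Python sketch below shows one plausible way to obtain a low-dimensional feature vector by projecting sampled 2D contour points onto a data-driven subspace (here via PCA/SVD), and to estimate an interaction matrix relating feature changes to robot control inputs online with recursive least squares. All function names, dimensions, and update rules are assumptions for illustration and are not taken from the paper, whose actual formulation may differ.

```python
import numpy as np

def contour_to_vector(contour_pts):
    """Flatten N sampled 2D contour points of shape (N, 2) into a 2N vector."""
    return contour_pts.reshape(-1)

def learn_projection(contour_history, k=6):
    """Learn a k-dimensional subspace from stacked contour vectors via PCA.

    contour_history: (M, 2N) array of M flattened contours collected during a
    short initialization phase (an assumption, not the paper's exact procedure).
    Returns the mean contour and a (k, 2N) projection basis.
    """
    mean = contour_history.mean(axis=0)
    _, _, Vt = np.linalg.svd(contour_history - mean, full_matrices=False)
    return mean, Vt[:k]

def extract_features(contour_pts, mean, basis):
    """Project a flattened contour onto the learned subspace -> feature vector s."""
    return basis @ (contour_to_vector(contour_pts) - mean)

class AdaptiveInteractionMatrix:
    """Recursive least-squares estimate of J in  ds ~ J du  (illustrative only)."""

    def __init__(self, k, m, forgetting=0.98):
        self.J = np.zeros((k, m))      # feature dimension x control dimension
        self.P = np.eye(m) * 1e3       # RLS covariance over control inputs
        self.lam = forgetting

    def update(self, ds, du):
        """Update J from one pair: feature change ds (k,) and control input du (m,)."""
        Pu = self.P @ du
        gain = Pu / (self.lam + du @ Pu)
        self.J += np.outer(ds - self.J @ du, gain)
        self.P = (self.P - np.outer(gain, Pu)) / self.lam

    def control(self, s, s_star, gain=0.5):
        """Resolved-rate style servoing law  du = -gain * pinv(J) (s - s*)."""
        return -gain * np.linalg.pinv(self.J) @ (s - s_star)
```

In this sketch the subspace is learned once from an initialization batch and the interaction matrix is then adapted online; the paper states that features and interaction matrix are generated simultaneously from the same data, so this separation is purely for readability.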

Original language: English
Article number: 103798
Journal: Robotics and Autonomous Systems
Volume: 142
Early online date: 27 May 2021
DOIs
Publication status: Published - Aug 2021

Bibliographical note

Funding Information:
This work is supported in part by the EU H2020 research and innovation programme as part of the project VERSATILE under grant agreement No. 731330, by the Research Grants Council (RGC) of Hong Kong under grant number 14203917, and by the PROCORE-France/Hong Kong RGC Joint Research Scheme under grant F-PolyU503/18.

Publisher Copyright:
© 2021

Keywords

  • Deformable object manipulation
  • Sensor-based control
  • Visual servoing
