Abstract
This paper proposes a unified vision-based manipulation framework using image contours of deformable and rigid objects. Instead of explicitly defining visual features through geometric primitives or hand-crafted functions, the robot automatically learns the features from processed vision data. Our method simultaneously generates – from the same data – both the visual features and the interaction matrix that relates them to the robot control inputs. Extraction of the feature vector and control commands is performed online and adaptively, and requires little data for initialization. Our method allows the robot to manipulate an object without knowing whether it is rigid or deformable. To validate our approach, we conduct numerical simulations and experiments with both deformable and rigid objects.
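For background, the interaction matrix mentioned in the abstract is the central object of classic image-based visual servoing (IBVS), where it is derived analytically for known feature types rather than learned from data as in this paper. The sketch below is not the paper's method; it is a minimal NumPy illustration of the standard IBVS control law v = −λ L⁺ (s − s*) for point features, assuming known feature depths (the very assumption a learned interaction matrix avoids). All function names are illustrative.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Analytic interaction matrix for one normalized image point (x, y)
    at depth Z; maps the 6-DOF camera twist to the feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_control(s, s_star, depths, gain=0.5):
    """Classic IBVS law: v = -gain * pinv(L) @ (s - s*).

    s, s_star : (N, 2) arrays of current / desired normalized image points.
    depths    : length-N iterable of (assumed known) point depths.
    Returns a 6-vector camera twist (vx, vy, vz, wx, wy, wz).
    """
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(s, depths)])
    error = (s - s_star).reshape(-1)
    return -gain * np.linalg.pinv(L) @ error

# Example: three tracked points with a small feature error.
s      = np.array([[0.10, 0.05], [-0.12, 0.08], [0.02, -0.15]])
s_star = np.array([[0.08, 0.05], [-0.10, 0.06], [0.02, -0.12]])
v = ibvs_control(s, s_star, depths=[1.0, 1.0, 1.2])
```

The proposed framework replaces both the hand-picked point features and this analytic matrix with quantities estimated online from contour data, which is what lets the same controller handle rigid and deformable objects.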
| Original language | English |
|---|---|
| Article number | 103798 |
| Journal | Robotics and Autonomous Systems |
| Volume | 142 |
| Early online date | 27 May 2021 |
| DOIs | |
| Publication status | Published - Aug 2021 |
Bibliographical note
Funding Information: This work is supported in part by the EU H2020 research and innovation programme as part of the project VERSATILE under grant agreement No 731330, by the Research Grants Council (RGC) of Hong Kong under grant number 14203917, and by the PROCORE-France/Hong Kong RGC Joint Research Scheme under grant F-PolyU503/18.
Publisher Copyright:
© 2021
Keywords
- Deformable object manipulation
- Sensor-based control
- Visual servoing