Commit c4a194de authored by Guilhem Saurel

cosyslam v2

parent d1b6a475

---
title: "CosySLAM: tracking contact features using visual-inertial object-level SLAM for locomotion" title: "CosySlam: investigating object-level SLAM for detecting locomotion surfaces"
subtitle: Submitted to 2022 IEEE ICRA - International Conference on Robotics and Automation subtitle: Submitted to 2022 IEEE/RSJ IROS - International Conference on Intelligent Robots and Systems
author: author:
- César Debeunne ^1^ - César Debeunne ^1,2^
- Médéric Fourmy ^2,3^ - Médéric Fourmy ^2^
- Yann Labbé ^5^ - Yann Labbé ^3^
- Pierre-Alexandre Léziart ^2^ - Pierre-Alexandre Léziart ^2^
- Guilhem Saurel ^2^ - Guilhem Saurel ^2^
- Joan Solà ^4^ - Joan Solà ^2,4^
- <a href="https://gepettoweb.laas.fr/index.php/Members/NicolasMansard">Nicolas Mansard</a> ^1,2^ - <a href="https://gepettoweb.laas.fr/index.php/Members/NicolasMansard">Nicolas Mansard</a> ^2,5^
org: org:
- ^1^ ISAE-Supaero, Toulouse - ^1^ ISAE-Supaero, Toulouse
- ^2^ LAAS-CNRS, Université de Toulouse - ^2^ LAAS-CNRS, Université de Toulouse
- ^3^ Artificial and Natural Intelligence Toulouse Institute, Toulouse - ^3^ Inria, École normale supérieure, CNRS, PSL Research University, Paris
- ^4^ Institut de Robòtica i Informàtica Industrial, Barcelona
- ^5^ Artificial and Natural Intelligence Toulouse Institute, Toulouse
hal: https://hal.archives-ouvertes.fr/hal-03351438v2
peertube: https://peertube.laas.fr/videos/embed/56edb26d-e2c3-46ac-909b-61f55d10c569
data: https://gepettoweb.laas.fr/data/cosyslam/
...
## Abstract
While blindfolded legged locomotion has demonstrated impressive capabilities in the last few years, further progress is expected from using exteroceptive perception to better adapt the robot behavior to the available contact surfaces. In this paper, we investigate whether monocular cameras are suitable sensors for that aim. We propose to rely on object-level SLAM, fusing RGB images and inertial measurements, to simultaneously estimate the robot balance state (orientation in the gravity field and velocity), the robot position, and the location of candidate contact surfaces. We use CosyPose, a learning-based object pose estimator for which we propose an empirical uncertainty model, as the sole front-end of our visual-inertial SLAM. We then combine it with inertial measurements, which ideally complete the system observability, although extending the proposed approach would be straightforward (e.g. with kinematic information about the contact, or a feature-based visual front-end). We demonstrate the interest of object-level SLAM on several locomotion sequences, using absolute metrics and comparisons with other monocular SLAM methods.
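The abstract mentions an empirical uncertainty model for CosyPose. As a rough illustration only (this is not the paper's code; the function names and the 6D error parameterization are assumptions), the sketch below compares CosyPose estimates against ground-truth object poses (e.g. from motion capture), both given as 4x4 homogeneous matrices, and derives an empirical bias and covariance that could weight object-pose measurements in a visual-inertial back-end:

```python
import numpy as np
from scipy.spatial.transform import Rotation


def pose_error_6d(T_gt, T_est):
    """Local 6D error between two 4x4 homogeneous poses: translation
    difference and rotation log-map (axis-angle), both expressed in
    the ground-truth frame."""
    R_gt, t_gt = T_gt[:3, :3], T_gt[:3, 3]
    dt = R_gt.T @ (T_est[:3, 3] - t_gt)
    dw = Rotation.from_matrix(R_gt.T @ T_est[:3, :3]).as_rotvec()
    return np.concatenate([dt, dw])


def empirical_uncertainty(poses_gt, poses_est):
    """Stack the 6D errors over a calibration sequence and return
    their empirical mean (bias) and 6x6 covariance."""
    errs = np.stack([pose_error_6d(g, e)
                     for g, e in zip(poses_gt, poses_est)])
    return errs.mean(axis=0), np.cov(errs, rowvar=False)
```

In a factor-graph estimator, the inverse of such a covariance would act as the information matrix of each object-pose measurement, alongside the IMU preintegration factors that complete the observability of orientation and velocity.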