Commit abff3dcd authored by Guilhem Saurel

public/

parent 187d51bd
Pipeline #16128 passed in 9 seconds
.gitlab-ci.yml

image: memmos.laas.fr:5000/gepetto/articles

pages:
  script:
    - make
  artifacts:
    paths:
      - public
Dockerfile

FROM ubuntu:20.04
RUN --mount=type=cache,sharing=locked,target=/var/cache/apt --mount=type=cache,sharing=locked,target=/var/lib/apt \
    apt-get update -qqy && DEBIAN_FRONTEND=noninteractive apt-get install -qqy \
    pandoc \
    pandoc-citeproc \
    make
Makefile

-%.html: %.md template.html bibliography.bib
+public/%.html: %.md template.html bibliography.bib
 	pandoc -s -F pandoc-citeproc --template template.html -o $@ $<
-all: cosyslam.html
+all: public/cosyslam.html
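The updated pattern rule writes into `public/`, but the lines shown here do not create that directory. A minimal sketch of the same rule with an order-only prerequisite that creates it first (the `| public` prerequisite and its recipe are an assumption for illustration, not part of this commit):

```make
# Sketch: the rule from this commit, plus an order-only prerequisite
# (assumed, not shown above) so public/ exists before pandoc writes into it.
public/%.html: %.md template.html bibliography.bib | public
	pandoc -s -F pandoc-citeproc --template template.html -o $@ $<

public:
	mkdir -p $@
```

An order-only prerequisite (after `|`) makes `public/` a requirement without forcing a rebuild of the HTML whenever the directory's timestamp changes.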
public/index.html

<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="generator" content="pandoc" />
<meta name="author" content="César Debeunne 1" />
<meta name="author" content="Médéric Fourmy 1,2" />
<meta name="author" content="Yann Labbé 3" />
<meta name="author" content="Pierre-Alexandre Léziart 1" />
<meta name="author" content="Guilhem Saurel 1" />
<meta name="author" content="Joan Solà 1,3" />
<meta name="author" content="Nicolas Mansard 1,2" />
<title>CosySLAM: tracking contact features using visual-inertial object-level SLAM for locomotion</title>
<style>
code{white-space: pre-wrap;}
span.smallcaps{font-variant: small-caps;}
span.underline{text-decoration: underline;}
div.column{display: inline-block; vertical-align: top; width: 50%;}
div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;}
</style>
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.1.1/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-F3w7mX95PdgyTmZZMECAngseQB83DfGTowi0iMjiWaeVhAn4FJkqJByhZMI3AhiU" crossorigin="anonymous">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap-icons@1.5.0/font/bootstrap-icons.css">
</head>
<body>
<div class="container">
<header id="title-block-header">
<h1 class="title text-center">CosySLAM: tracking contact features using visual-inertial object-level SLAM for locomotion</h1>
<h4 class="subtitle text-center text-muted">Submitted to 2022 IEEE ICRA - International Conference on Robotics and Automation</h4>
<ul class="list-inline text-center">
<li class="author list-inline-item">César Debeunne <sup>1</sup></li>
<li class="author list-inline-item">Médéric Fourmy <sup>1,2</sup></li>
<li class="author list-inline-item">Yann Labbé <sup>3</sup></li>
<li class="author list-inline-item">Pierre-Alexandre Léziart <sup>1</sup></li>
<li class="author list-inline-item">Guilhem Saurel <sup>1</sup></li>
<li class="author list-inline-item">Joan Solà <sup>1,3</sup></li>
<li class="author list-inline-item"><a href="https://gepettoweb.laas.fr/index.php/Members/NicolasMansard">Nicolas Mansard</a> <sup>1,2</sup></li>
</ul>
<div class="row align-items-center">
<div class="col">
<a role="button" class="btn btn-outline-primary" href="https://hal.laas.fr/hal-TODO/document">Paper <i class="bi-file-pdf"></i></a>
<a role="button" class="btn btn-outline-primary" href="https://github.com/gepetto/TODO">Code <i class="bi-github"></i></a>
<a role="button" class="btn btn-outline-primary" href="CosySLAM">Cite <i class="bi-book"></i></a>
</div>
<ul class="col text-end list-unstyled">
<li class="org"><sup>1</sup> LAAS-CNRS, Université de Toulouse</li>
<li class="org"><sup>2</sup> Artificial and Natural Intelligence Toulouse Institute, Toulouse</li>
<li class="org"><sup>3</sup> Institut de Robòtica i Informàtica Industrial, Barcelona</li>
<li class="org"><sup>4</sup> Inria, École normale supérieure, CNRS, PSL Research University, Paris</li>
</ul>
</div>
</header>
<div class="text-center">
<iframe width="560" height="315" sandbox="allow-same-origin allow-scripts allow-popups" src="https://peertube.laas.fr/videos/embed/a78430ea-09b0-4e31-8ca6-7436c7af9165"
frameborder="0" allowfullscreen ></iframe>
</div>
<h2 id="abstract">Abstract</h2>
<p>A legged robot is equipped with several sensors that observe different classes of information, providing various estimates of its state and its environment. While state estimation and mapping in this domain have traditionally been handled by multiple local filters, recent progress has been made toward tightly-coupled estimation, in which multiple observations are merged into a single maximum a-posteriori estimate of several quantities that were previously estimated separately. With this paper, our goal is to move one step further by leveraging object-based simultaneous localization and mapping. We use an object pose estimator to localize the robot relative to large elements of the environment, e.g. stair steps. These measurements are merged with the other typical observations of legged robots, e.g. inertial measurements, to provide an estimate of the robot state (position, orientation and velocity of the base) along with an accurate estimate of the elements of the environment. The method thus yields a consistent estimation of these two quantities, an important property since both are needed to control the robot's locomotion. We provide a complete implementation of this idea with the object tracker CosyPose, which we trained on our environment and for which we provide a covariance model, and with the SLAM engine Wolf, used as a visual-inertial estimator on the quadruped robot Solo.</p>
</div>
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.1.1/dist/js/bootstrap.bundle.min.js" integrity="sha384-/bQdsTh/da6pkI1MST/rWKFNjaCP5gBSY4sEBT38Q/9RBh9AH40zEOg7Hlq2THRZ" crossorigin="anonymous"></script>
</body>
</html>
template.html

@@ -17,9 +17,6 @@ $if(description-meta)$
 <meta name="description" content="$description-meta$" />
 $endif$
 <title>$if(title-prefix)$$title-prefix$ – $endif$$pagetitle$</title>
-<style>
-$styles.html()$
-</style>
 $for(css)$
 <link rel="stylesheet" href="$css$" />
 $endfor$