Commit c56d8616 authored by Guilhem Saurel

Dantec RAL 2022: embed peertube

parent 322c1818
Pipeline #17436 passed with stage in 7 seconds
@@ -9,7 +9,7 @@ org:
- ^1^ LAAS-CNRS, Université de Toulouse
- ^2^ Artificial and Natural Intelligence Toulouse Institute, Toulouse
hal: https://hal.archives-ouvertes.fr/hal-03419712
- peertube: https://peertube.laas.fr/videos/watch/b71febbc-d5f4-4e35-a339-b8a3cbd6fed1
+ peertube: https://peertube.laas.fr/videos/embed/b71febbc-d5f4-4e35-a339-b8a3cbd6fed1
...
## Abstract
@@ -19,5 +19,5 @@ Currently the best solvers are barely able to reach 100Hz for computing the cont
This problem is usually tackled by using a handcrafted low-level tracking control whose inputs are the low-frequency trajectory computed by the MPC.
We show that a linear state feedback controller naturally arises from the optimal control formulation and can be used directly in the low-level control loop along with other sensitivities of relevant time-varying parameters of the problem.
When the optimal control problem is solved by DDP, this linear controller can be computed cheaply as a by-product of the backward pass, and corresponds in part to the classical Riccati gains.
A side effect of our proposition is to show that Riccati gains are valuable assets that should be used to achieve efficient control, and that they are not stiffer than the optimal control scheme itself.
We propose a complete implementation of this idea on a full-scale humanoid robot and demonstrate its importance with real experiments on the robot Talos.
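The linear state-feedback controller described in the abstract is the standard by-product of the DDP backward pass. As a hedged sketch in the usual DDP notation (the paper's own derivation and symbols may differ), the feed-forward and feedback terms at each node come from the local quadratic model of the Q-function:

```latex
% Backward pass of DDP at node t: minimize the local quadratic model of Q.
% Q_u, Q_{uu}, Q_{ux} are derivatives of the action-value model at (x_t^*, u_t^*).
k_t = -Q_{uu}^{-1} Q_u      \quad\text{(feed-forward step)}
K_t = -Q_{uu}^{-1} Q_{ux}   \quad\text{(Riccati feedback gain)}
% Low-level linear controller around the MPC reference trajectory:
u(x) = u_t^* + K_t\,(x - x_t^*)
```

Since $K_t$ is already computed during the backward pass, running this controller at high frequency between MPC updates adds essentially no cost, which is the "for free" aspect the abstract refers to.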