Towards Safe & Spare Robot Navigation

Jun 30, 2025 · 4 min read
Abstract
Mobile robots may embed high-consumption sensors for localization, such as 3D LiDAR or flash-based sensors. A sparing usage of these sensors can reduce the on-board power consumption, increasing the robot's autonomy in navigation. Such a strategy relies on a self-triggered controller, which triggers new state measurements only when needed (for safety or stability reasons).
Location

IRISA - Université de Rennes - Campus Beaulieu, Rennes, Bretagne 35000, France

Introduction

Mobile robots may embed high-consumption sensors for localization, such as 3D LiDAR or flash-based sensors. A sparing usage of these sensors can reduce the on-board power consumption, increasing the robot's autonomy in navigation. Such a strategy relies on a self-triggered controller, which triggers new state measurements only when needed (for safety or stability reasons). This post extends the work in Pouthier et al., 2025 by applying the proposed strategy to safe robot navigation amidst obstacles.

Reachability and Invariance for DLTI Systems

Let’s consider that all or part of the robot’s dynamics can be modeled by a discrete-time, linear, time-invariant (DLTI) system of the form $$\bm{\xi}_{k+1} = \bm{A}\bm{\xi}_{k} + \bm{B}\bm{\nu}_{k} + \bm{E}\bm{w}_{k},$$ where $\bm{\xi}_{k}\in\mathbb{R}^n$ is the system state, $\bm{\nu}_{k}\in\mathbb{R}^m$ is the control input, and $\bm{w}_{k}\in\mathbb{R}^d$ is an additive disturbance input. This disturbance is unknown but bounded by a convex polytope $\mathcal{W}\subset\mathbb{R}^d$. The state $\bm{\xi}_{k}$ is subject to state constraints, such as position limits (presence of obstacles) or velocity limits (saturation of the physical system). These constraints are defined by another convex polytope $\mathcal{X}\subseteq\mathbb{R}^n$. The same applies to the control input, which is bounded by a polytope $\mathcal{U}\subseteq\mathbb{R}^m$. The reachable sets of the open-loop system can be computed recursively as

$$\mathcal{S}_{k+1}=\bm{A}\mathcal{S}_k\oplus\{\bm{B}\bm{\nu}_k\}\oplus\bm{E}\mathcal{W},$$

where $\mathcal{S}_k\subset\mathbb{R}^n$ is a polytope encompassing the current state of the open-loop system, and $\oplus$ denotes the Minkowski sum operator.
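To make the recursion concrete, here is a minimal Python sketch (not from the original work) that propagates the open-loop reachable sets with polytopes stored by their vertices; the Minkowski sum is taken as the convex hull of all pairwise vertex sums, and the matrices and set sizes are purely illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum(V1, V2):
    """Minkowski sum of two polytopes given by their vertices (one vertex per row)."""
    sums = (V1[:, None, :] + V2[None, :, :]).reshape(-1, V1.shape[1])
    return sums[ConvexHull(sums).vertices]

def propagate(S_k, A, B, nu_k, E, W):
    """One step of S_{k+1} = A S_k (+) {B nu_k} (+) E W, in vertex representation."""
    AS = S_k @ A.T                  # image of the current set under A
    shifted = AS + B @ nu_k         # translation by the known input term B nu_k
    return minkowski_sum(shifted, W @ E.T)

# Illustrative 2D example (positions only), sampling period 0.1 s
A = np.eye(2)
B = 0.1 * np.eye(2)
E = np.eye(2)
W = 0.02 * np.array([[1, 1], [1, -1], [-1, -1], [-1, 1]])  # disturbance box
S = 0.05 * np.array([[1, 1], [1, -1], [-1, -1], [-1, 1]])  # initial state set

for k in range(5):
    S = propagate(S, A, B, np.array([0.5, 0.0]), E, W)
print(S)  # vertices of the reachable set after 5 steps
```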

The open-loop system can be stabilized by a linear state feedback law of the form $\bm\nu_k = -\bm{K}\bm\xi_k$, leading to the closed-loop system

$$\bm{\xi}_{k+1} = \bm{A}_{cl}\bm{\xi}_{k} + \bm{E}\bm{w}_{k}, \quad \bm{A}_{cl}\triangleq\bm{A}-\bm{B}\bm{K},$$

where the gain matrix $\bm{K}$ is selected such that $\bm{A}_{cl}$ is asymptotically stable. Then, an invariant set exists for this closed-loop system [1].

Definition 1. The set $\mathcal{Z}\subset\mathbb{R}^n$ is a robust positively invariant set for the closed-loop system if $\bm{A}_{cl}\mathcal{Z}\oplus\bm{E}\mathcal{W}\subseteq\mathcal{Z}$.
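Definition 1 can be checked numerically for a candidate set. The sketch below (an illustration, not taken from the original work) assumes $\mathcal{Z}$ is given in halfspace representation and $\mathcal{W}$ by its vertices, and verifies, facet by facet, that the support of $\bm{A}_{cl}\mathcal{Z}\oplus\bm{E}\mathcal{W}$ does not exceed the facet offset; all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def is_rpi(H_z, h_z, A_cl, E, W_vertices, tol=1e-7):
    """Check Definition 1: A_cl Z (+) E W is contained in Z = {x : H_z x <= h_z}."""
    for c, b in zip(H_z, h_z):
        # support of A_cl Z in direction c, computed with a small LP over Z
        res = linprog(-(A_cl.T @ c), A_ub=H_z, b_ub=h_z, bounds=(None, None))
        if not res.success:
            return False
        # support of E W in direction c, computed over the vertices of W
        support_w = np.max(W_vertices @ (E.T @ c))
        if -res.fun + support_w > b + tol:
            return False
    return True

# Toy check: a unit box Z, a stable A_cl, and a small disturbance box (illustrative)
A_cl = np.array([[0.8, 0.1], [0.0, 0.7]])
E = np.eye(2)
W_vertices = 0.05 * np.array([[1, 1], [1, -1], [-1, -1], [-1, 1]])
H_z = np.vstack([np.eye(2), -np.eye(2)])
h_z = np.ones(4)
print(is_rpi(H_z, h_z, A_cl, E, W_vertices))  # True for these numbers
```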

The maximal invariant set for the closed-loop system can be finitely determined [2] by computing the sequence $\{\mathcal{O}_k\}_{k=0}^{\infty}$, recursively defined as

$$\forall k\in\mathbb{N},\ \mathcal{O}_{k+1}\triangleq\left(\bm A_{cl}^{-1}(\mathcal{O}_{k}\ominus\bm{E}\mathcal{W})\right)\cap\mathcal{X}, \quad \mathcal{O}_0\triangleq\mathcal{X},$$

where $\ominus$ denotes the Pontryagin difference operator. The set sequence $\{\mathcal{O}_k\}_{k=0}^{\infty}$ converges towards the maximal invariant set $\mathcal{O}_{\infty}$.
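A possible numerical implementation of this iteration is sketched below (again illustrative, not the authors' code): all sets are kept in halfspace representation $\{\bm x:\bm H\bm x\le\bm h\}$, the Pontryagin difference tightens each bound by the support function of $\bm E\mathcal{W}$ (given by its vertices), the pre-set under $\bm A_{cl}$ simply right-multiplies $\bm H$ by $\bm A_{cl}$, and convergence is detected with linear programs; no redundant constraints are removed.

```python
import numpy as np
from scipy.optimize import linprog

def pontryagin_diff(H, h, EW_vertices):
    """(H, h) minus E*W: tighten each bound by the support of E*W in that facet direction."""
    return H, h - np.max(H @ EW_vertices.T, axis=1)

def preimage(H, h, A_cl):
    """Pre-set {x : A_cl x in {y : H y <= h}} = {x : (H A_cl) x <= h}."""
    return H @ A_cl, h

def is_subset(H_in, h_in, H_out, h_out, tol=1e-7):
    """Check {x : H_in x <= h_in} is contained in {x : H_out x <= h_out} via LPs."""
    for c, b in zip(H_out, h_out):
        res = linprog(-c, A_ub=H_in, b_ub=h_in, bounds=(None, None))
        if not res.success or -res.fun > b + tol:
            return False
    return True

def maximal_invariant_set(A_cl, E, W_vertices, H_x, h_x, max_iter=100):
    """Iterate O_{k+1} = preimage(O_k minus E*W) intersected with X until it stops shrinking."""
    EW_vertices = W_vertices @ E.T
    H, h = H_x.copy(), h_x.copy()
    for _ in range(max_iter):
        Hd, hd = pontryagin_diff(H, h, EW_vertices)
        Hp, hp = preimage(Hd, hd, A_cl)
        H_next, h_next = np.vstack([Hp, H_x]), np.concatenate([hp, h_x])
        if is_subset(H, h, H_next, h_next):  # O_k already inside O_{k+1}: converged
            return H_next, h_next
        H, h = H_next, h_next
    raise RuntimeError("maximal invariant set not finitely determined within max_iter steps")

# Illustrative example: stable A_cl, small box disturbance, unit box state constraints X
A_cl = np.array([[0.9, 0.2], [0.0, 0.8]])
E = np.eye(2)
W_vertices = 0.02 * np.array([[1, 1], [1, -1], [-1, -1], [-1, 1]])
H_x = np.vstack([np.eye(2), -np.eye(2)])
h_x = np.ones(4)
H_inf, h_inf = maximal_invariant_set(A_cl, E, W_vertices, H_x, h_x)
print(H_inf.shape[0], "halfspaces describe the computed invariant set")
```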

Self-Triggered Control on Invariant Sets

Sparing the costly measurements of the system state can be achieved by applying a self-triggered controller (STC) [3].

Definition 2. A self-triggered (state feedback) controller is defined by:

  • an event function $\sigma:\mathcal{X}\times\mathbb{N}\to\{0,1\}$ that indicates whether a control update is needed ($\sigma_k=1$) or not ($\sigma_k=0$). The event function takes as input the last triggered state measurement $\bm\xi^\star\in\mathcal{X}$ and the number of instants $j\in\mathbb{N}$ elapsed since this measurement;
  • a feedback function $\bm{\nu}:\mathcal{X}\times\mathbb{N}\to\mathcal{U}$ which also takes $\bm\xi^\star$ and $j$ as input and defines the feedback applied at event instants ($\sigma_k=1$) and between two events ($\sigma_k=0$).

At an event instant, the feedback function takes the value of the state feedback evaluated at the newly measured state, i.e. $\bm\nu_k = -\bm{K}\bm\xi_k$ when $\sigma_k=1$. Between two events, however, the feedback function can define various control profiles. Since the closed-loop model of the system dynamics is known, a model-based controller can be deployed, i.e. the feedback function of the STC is defined by

$$ \bm\nu_k\left(\bm{\xi}^\star,j\right)= \left\{ \begin{array}{lll} -\bm{K}\bm{A}_{cl}^j\bm\xi^\star & & \quad \text{if }\sigma_k=0\\ -\bm{K}\bm\xi_k, & \quad \bm\xi^\star\leftarrow\bm\xi_k & \quad \text{if }\sigma_k=1 \end{array} \right.. $$
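As a small illustration, the feedback function above can be written as the following helper (an illustrative NumPy sketch; the matrices in the example call are made up):

```python
import numpy as np

def stc_feedback(K, A_cl, xi_star, j, sigma_k, xi_k=None):
    """Feedback function nu(xi_star, j) of the model-based self-triggered controller."""
    if sigma_k == 1:
        # event instant: a fresh measurement xi_k is triggered and replaces xi_star
        return -K @ xi_k
    # between two events: replay the closed-loop prediction from the last measurement
    return -K @ np.linalg.matrix_power(A_cl, j) @ xi_star

# Illustrative call, j = 3 steps after the last triggered measurement xi_star
K = np.array([[2.0, 1.5]])
A_cl = np.array([[0.99, 0.0925], [-0.2, 0.85]])
print(stc_feedback(K, A_cl, xi_star=np.array([0.2, 0.0]), j=3, sigma_k=0))
```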

Choosing this controller leads to a new sequence of reachable sets $\{\mathcal{S}_k\}_{k=0}^{\infty}$ for the STC-controlled system. These reachable sets can be computed using the explicit form

$$\mathcal{S}_{k^\star+j}=\bigoplus_{i=0}^{j-1}\bm{A}^i\bm{E}\mathcal{W}\oplus\{\bm{A}_{cl}^j\bm\xi^\star\}$$

with initial condition $\mathcal{S}_{k^\star}\triangleq\{\bm\xi^\star\}$, where $k^\star$ denotes the time instant at which the last measurement $\bm\xi^\star$ was triggered.

The event function of the STC is designed to ensure system safety: it must detect when a reachable set enters an unsafe zone. The proposed strategy is therefore to always maintain the sequence of reachable sets $\{\mathcal{S}_k\}_{k=0}^{\infty}$ inside the maximal invariant set $\mathcal{O}_\infty$. Defining

$$ \sigma_k = \left\{ \begin{array}{ll} 0 & \text{if}\ \mathcal{S}_{k^\star+j+1}\subseteq\mathcal{O}_\infty\\ 1 & \text{otherwise} \end{array} \right. $$

means that a new measurement is triggered only when the next reachable set is no longer contained in $\mathcal{O}_\infty$. Safety and stability of the STC are proved in Pouthier et al., 2025.
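Putting the explicit reachable sets and this event rule together gives the sketch below (an illustration under the same hedged assumptions as the previous snippets): the sets $\mathcal{S}_{k^\star+j}$ are propagated in vertex representation, $\mathcal{O}_\infty$ is assumed to be available in halfspace representation $\bm H_\infty\bm x\le\bm h_\infty$, and containment reduces to checking every vertex against those inequalities. The box used as a stand-in for $\mathcal{O}_\infty$ and all matrices are illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum(V1, V2):
    """Minkowski sum of two polytopes given by their vertices (one vertex per row)."""
    sums = (V1[:, None, :] + V2[None, :, :]).reshape(-1, V1.shape[1])
    return sums[ConvexHull(sums).vertices]

def reachable_set(A, A_cl, E, W_vertices, xi_star, j):
    """Explicit STC reachable set S_{k*+j}, in vertex representation."""
    V = np.zeros((1, xi_star.size))  # accumulated disturbance set, starts at {0}
    for i in range(j):
        V = minkowski_sum(V, (W_vertices @ E.T) @ np.linalg.matrix_power(A, i).T)
    return V + np.linalg.matrix_power(A_cl, j) @ xi_star

def event_function(A, A_cl, E, W_vertices, xi_star, j, H_inf, h_inf, tol=1e-9):
    """sigma_k = 0 as long as the next reachable set stays inside O_infinity."""
    V_next = reachable_set(A, A_cl, E, W_vertices, xi_star, j + 1)
    inside = np.all(H_inf @ V_next.T <= h_inf[:, None] + tol)
    return 0 if inside else 1

# Toy run with a box standing in for O_infinity (illustrative numbers only)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[2.0, 1.5]])
A_cl = A - B @ K
E = np.eye(2)
W_vertices = 0.01 * np.array([[1, 1], [1, -1], [-1, -1], [-1, 1]])
H_inf = np.vstack([np.eye(2), -np.eye(2)])
h_inf = 0.5 * np.ones(4)
xi_star = np.array([0.2, 0.0])

j = 0
while event_function(A, A_cl, E, W_vertices, xi_star, j, H_inf, h_inf) == 0:
    j += 1
print("a new measurement must be triggered at step", j + 1, "after the last one")
```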

Invariance based on Distance to Obstacles

The maximal invariant set is computed so as to encircle (from the inside) the square $\mathcal{X}$ of state constraints [4]. This square adapts to the distance $d$ between the robot (of radius $r$) and the nearest obstacle: the square must remain inscribed in the obstacle-free disk of radius $d-r$. The position limits of the state constraints can then be computed as

$$|p_{lim}|\triangleq(d-r)/\sqrt{2}.$$
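As a quick numerical check (hypothetical numbers): a robot of radius $r=0.3\,$m measuring a distance $d=2\,$m to the nearest obstacle gets $|p_{lim}|=(2-0.3)/\sqrt{2}\approx 1.2\,$m, which the snippet below reproduces.

```python
import numpy as np

def position_limit(d, r):
    """Half-side of the constraint square inscribed in the obstacle-free disk of radius d - r."""
    return (d - r) / np.sqrt(2)

print(position_limit(d=2.0, r=0.3))  # ~1.202 m
```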

Conclusion

This work lays the foundations for a safe navigation strategy while limiting the use of energy-consuming sensors to increase robot autonomy. Invariant sets are used in the STC to provide a safe triggering set given the distance from the robot to obstacles.

Acknowledgement

This research has been supported by the French National Research Agency through the Dark-NAV project (ANR-20-CE33-0009) and the CominLabs LEASARD project (ANR-10-LABX-07-01).


  1. F. Blanchini. Set invariance in control. Automatica, 1999.

  2. I. Kolmanovsky and E.G. Gilbert. Theory and computation of disturbance invariant sets for discrete-time linear systems. Mathematical Problems in Engineering, 1998.

  3. W.P.M.H. Heemels, M.C.F. Donkers, and A.R. Teel. Periodic event-triggered control based on state feedback. 50th IEEE Conference on Decision and Control and European Control Conference, 2011.

  4. S.V. Raković and M. Fiacchini. Invariant Approximations of the Maximal Invariant Set or “Encircling the Square”. 17th IFAC World Congress, 2008.