Towards Low-Latency Object Detection on Board Reactive Search-and-Rescue Drones

Sep 9, 2025
Ismail Amessegher, Arthur Gaudard, Kojo Nyamekye Anyinam-Boateng, Hugo Le Blévec, Lionel Génevé, Florian Pouthier, Mathieu Leonardon, Hajer Fradi, Lucia Bergantin, Panagiotis Papadakis, Isabelle Fantoni, Jean-Philippe Diguet, Matthieu Arzel
Image credit: Matthieu Arzel
Abstract
Drones play a crucial role in search and rescue (SAR) missions by providing real-time information on areas of interest that are difficult to access or dangerous for human rescuers. However, having human operators analyze raw video feeds to detect objects of interest, such as vehicles or victims, becomes increasingly demanding as mission duration grows. This underscores the need for embedded computer vision to reduce the operator’s cognitive load and enhance mission responsiveness. Towards this goal, we propose a low-latency object detection model based on YOLO, tailored to SAR missions and able to process data from both RGB and event cameras. We also propose a low-latency FPGA implementation on board drones, achieving accurate detection in less than 10 ms. Through a series of indoor tests with a prototype drone, we highlight the features of the model and processing core that favor drone reactivity and operational autonomy.
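To give a sense of what the reported sub-10 ms detection latency buys in practice, here is a minimal back-of-the-envelope sketch (illustrative only, not the paper's code): with a serial per-frame pipeline, the latency bound determines the highest camera frame rate the on-board detector can keep up with without dropping frames. The function names below are hypothetical.

```python
# Illustrative sketch: relate per-frame detection latency to the maximum
# camera frame rate a serial on-board pipeline can sustain.
# Assumption (labeled): one frame is processed at a time, no pipelining.

def max_sustainable_fps(latency_s: float) -> float:
    """Upper bound on the frame rate a serial detector can keep up with."""
    return 1.0 / latency_s

def detector_keeps_up(camera_fps: float, latency_s: float) -> bool:
    """True if every frame can be processed before the next one arrives."""
    return camera_fps <= max_sustainable_fps(latency_s)

if __name__ == "__main__":
    latency = 0.010  # 10 ms, the paper's reported upper bound
    print(max_sustainable_fps(latency))      # 100.0 frames per second
    print(detector_keeps_up(60.0, latency))  # True: a 60 FPS RGB feed fits
```

Under this (simplified) serial assumption, a sub-10 ms detector comfortably covers common RGB camera rates such as 30 or 60 FPS, which is what makes the latency figure relevant for drone reactivity.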
Type: Publication
Accepted in the 2025 IEEE International Conference on Safety, Security, and Rescue Robotics, October 29-31, 2025, Galway, Ireland