Authors
Lorenzo Lamberti, Georg Rutishauser, Francesco Conti, Luca Benini
Publication date
2024/3/18
Journal
arXiv preprint arXiv:2403.11661
Description
A critical challenge in deploying unmanned aerial vehicles (UAVs) for autonomous tasks is their ability to navigate in an unknown environment. This paper introduces a novel vision-depth fusion approach for autonomous navigation on nano-UAVs. We combine the visual-based PULP-Dronet convolutional neural network for semantic information extraction, i.e., serving as the global perception, with 8x8px depth maps for close-proximity maneuvers, i.e., the local perception. When tested in-field, our integration strategy highlights the complementary strengths of both visual and depth sensory information. We achieve a 100% success rate over 15 flights in a complex navigation scenario, encompassing straight pathways, static obstacle avoidance, and 90° turns.
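As an illustration of the vision-depth fusion described above, the following Python sketch shows one plausible way to combine PULP-Dronet's outputs (a steering angle and a collision probability, as in the original Dronet formulation) with an 8x8 depth map used as the local perception. The function name, thresholds, and override policy are hypothetical assumptions for illustration, not the paper's actual integration strategy.

    import numpy as np

    # Hypothetical constants -- not taken from the paper.
    CLOSE_RANGE_M = 0.5   # depth below this triggers local avoidance
    MAX_FWD_SPEED = 1.0   # m/s

    def fuse(steering_cnn, collision_prob, depth_8x8):
        """Blend global CNN guidance with local 8x8 depth perception.

        steering_cnn:   CNN steering output in [-1, 1]
        collision_prob: CNN collision probability in [0, 1]
        depth_8x8:      8x8 depth map in meters (e.g., from a ToF sensor)
        """
        # Global perception: slow down as collision probability rises.
        fwd_speed = MAX_FWD_SPEED * (1.0 - collision_prob)
        yaw = steering_cnn

        # Local perception: if anything is inside close range,
        # steer toward the most open column of the depth map.
        col_min = depth_8x8.min(axis=0)          # closest hit per column
        if col_min.min() < CLOSE_RANGE_M:
            best_col = int(np.argmax(col_min))   # column with most free space
            yaw = (best_col - 3.5) / 3.5         # map column 0..7 to [-1, 1]
            fwd_speed = min(fwd_speed, 0.2)      # creep during the maneuver
        return fwd_speed, yaw

    # Example: open scene (all cells 2 m away) -> CNN guidance passes through.
    speed, yaw = fuse(0.1, 0.2, np.full((8, 8), 2.0))

The key design idea this sketch captures is the division of labor described in the abstract: the CNN sets the coarse heading and cruise speed (global perception), while the depth map only intervenes for close-proximity maneuvers (local perception).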