Neuromorphic control for optic-flow-based landings of MAVs using the Loihi processor
Neuromorphic processors like Loihi offer a promising alternative to conventional computing modules for endowing constrained systems like micro air vehicles (MAVs) with robust, efficient, and autonomous skills such as take-off and landing, obstacle avoidance, and pursuit. However, a major challenge for using such processors on robotic platforms is the reality gap between simulation and the real world. In this study, we present for the very first time a fully embedded application of the Loihi neuromorphic chip prototype in a flying robot. A spiking neural network (SNN) was evolved to compute the thrust command based on the divergence of the ventral optic flow field to perform autonomous landing. Evolution was performed in a Python-based simulator using the PySNN library. The resulting network architecture consists of only 35 neurons distributed among 3 layers. Quantitative analysis between simulation and Loihi reveals a root-mean-square error of the thrust setpoint as low as 0.005 g, along with a 99.8% match of the spike sequences in the hidden layer and 99.7% in the output layer. The proposed approach successfully bridges the reality gap, offering important insights for future neuromorphic applications in robotics. Supplementary material is available at https://mavlab.tudelft.nl/loihi/.
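To illustrate the kind of computation such a network performs, the following is a minimal sketch of a small 3-layer network of leaky integrate-and-fire (LIF) neurons that maps a rate-encoded divergence estimate to a scalar thrust command. The layer sizes, weights, encoding, and decoding here are illustrative assumptions, not the evolved parameters of the paper's SNN; only the 35-neuron total and 3-layer structure follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

class LIFLayer:
    """A layer of leaky integrate-and-fire neurons (illustrative parameters)."""
    def __init__(self, n_in, n_out, decay=0.9, threshold=1.0):
        self.w = rng.normal(0.0, 0.5, size=(n_out, n_in))  # random weights (assumption)
        self.decay, self.threshold = decay, threshold
        self.v = np.zeros(n_out)  # membrane potentials

    def step(self, spikes_in):
        # Leak, integrate weighted input spikes, fire on threshold, reset.
        self.v = self.decay * self.v + self.w @ spikes_in
        spikes_out = (self.v >= self.threshold).astype(float)
        self.v[spikes_out > 0] = 0.0
        return spikes_out

# 10 input, 20 hidden, 5 output neurons: 35 in total (split is an assumption)
hidden = LIFLayer(10, 20)
output = LIFLayer(20, 5)

def thrust_from_divergence(div, n_steps=50):
    # Population-rate encoding of the divergence estimate (assumption)
    rates = np.clip(0.5 + div * np.linspace(-1, 1, 10), 0.0, 1.0)
    out_counts = np.zeros(5)
    for _ in range(n_steps):
        s_in = (rng.random(10) < rates).astype(float)  # Bernoulli input spikes
        out_counts += output.step(hidden.step(s_in))
    # Decode the mean output spike rate into a normalized command (assumption)
    return out_counts.mean() / n_steps

cmd = thrust_from_divergence(-0.3)
```

The decoded value lies in [0, 1] by construction and would still need to be scaled to a physical thrust setpoint; in the paper this mapping was found by evolution rather than chosen by hand.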
Providing micro air vehicles (MAVs) with complete autonomy is a complex challenge that generally requires multiple sensors, sensory redundancy, and significant computational resources, constraints that MAVs cannot always accommodate. Yet, flying insects like fruit flies, with their ∼100,000 neurons, are known to excel at flying in unknown and complex environments, performing fast maneuvers, avoiding obstacles, chasing mates, and navigating [1]. To achieve such performance, these insects rely on visual cues like optic flow, i.e., the brightness change on the retina caused by the relative motion of the observer [2], [3]. It was recently demonstrated that hoverflies use visual cues to stabilize their flight during free fall and avoid a crash [4]. Moreover, honeybees maintain a constant divergence (rate of expansion) of the optic flow to achieve smooth landings [5]. Given the very low number of neurons in the flying insect brain, these animals represent a gold standard in autonomous flight, one we should take inspiration from when designing robust and efficient autonomous flight controllers.
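The constant-divergence landing strategy observed in honeybees can be sketched with a simple simulation. For a downward-looking camera over flat ground, the ventral flow divergence is D = 2·v/h (with v < 0 while descending), so holding D at a constant negative setpoint makes the height decay exponentially and the vehicle touch down smoothly. The proportional gain, setpoint, and point-mass dynamics below are illustrative assumptions, not the controller from the paper.

```python
def simulate_landing(h0=10.0, d_sp=-0.3, k_p=5.0, dt=0.01, t_end=20.0):
    """Constant-divergence landing with a P controller (illustrative gains)."""
    g = 9.81
    h, v = h0, 0.0          # height [m] and vertical velocity [m/s], up positive
    heights = []
    for _ in range(int(t_end / dt)):
        d = 2.0 * v / max(h, 1e-6)       # observed ventral flow divergence
        thrust = g + k_p * (d_sp - d)    # regulate divergence around hover thrust
        v += (thrust - g) * dt           # point-mass vertical dynamics
        h = max(h + v * dt, 0.0)
        heights.append(h)
    return heights

hs = simulate_landing()
```

Once the divergence has converged to the setpoint, v ≈ d_sp·h/2, so the height shrinks exponentially with rate |d_sp|/2 and the descent speed decreases in proportion to the remaining height, which is exactly what makes the final touchdown gentle.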