TY - GEN
T1 - Bio-inspired inverted landing strategy in a small aerial robot using policy gradient
AU - Liu, Pan
AU - Geng, Junyi
AU - Li, Yixian
AU - Cao, Yanran
AU - Bayiz, Yagiz E.
AU - Langelaan, Jack W.
AU - Cheng, Bo
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/10/24
Y1 - 2020/10/24
N2 - Landing upside down on a ceiling is challenging, as it requires a flier to invert its body and land against gravity, a process that demands stringent spatiotemporal coordination of body translational and rotational motion. Although such an aerobatic feat is routinely performed by biological fliers such as flies, it has not yet been achieved by aerial robots using onboard sensors. This work describes the development of a bio-inspired inverted landing strategy that uses the computationally efficient Relative Retinal Expansion Velocity (RREV) as a visual cue. The landing strategy consists of a sequence of two motions: an upward acceleration followed by a rapid angular maneuver. A policy search algorithm is applied to optimize the landing strategy and improve its robustness by learning the transition timing between the two motions and the magnitude of the target body angular velocity. Simulation results show that the aerial robot achieves robust inverted landing and tends to exploit its maximal maneuverability. Beyond the computational aspects of the landing strategy, the robustness of landing also depends significantly on the mechanical design of the landing gear, the upward velocity at the start of body rotation, and the timing of rotor shutdown.
AB - Landing upside down on a ceiling is challenging, as it requires a flier to invert its body and land against gravity, a process that demands stringent spatiotemporal coordination of body translational and rotational motion. Although such an aerobatic feat is routinely performed by biological fliers such as flies, it has not yet been achieved by aerial robots using onboard sensors. This work describes the development of a bio-inspired inverted landing strategy that uses the computationally efficient Relative Retinal Expansion Velocity (RREV) as a visual cue. The landing strategy consists of a sequence of two motions: an upward acceleration followed by a rapid angular maneuver. A policy search algorithm is applied to optimize the landing strategy and improve its robustness by learning the transition timing between the two motions and the magnitude of the target body angular velocity. Simulation results show that the aerial robot achieves robust inverted landing and tends to exploit its maximal maneuverability. Beyond the computational aspects of the landing strategy, the robustness of landing also depends significantly on the mechanical design of the landing gear, the upward velocity at the start of body rotation, and the timing of rotor shutdown.
UR - http://www.scopus.com/inward/record.url?scp=85102396118&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85102396118&partnerID=8YFLogxK
U2 - 10.1109/IROS45743.2020.9341732
DO - 10.1109/IROS45743.2020.9341732
M3 - Conference contribution
AN - SCOPUS:85102396118
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 7772
EP - 7777
BT - 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2020
Y2 - 24 October 2020 through 24 January 2021
ER -