TY - GEN
T1 - Vision sensor fusion for autonomous landing
AU - Nakamura, Takuma
AU - Haviland, Stephen
AU - Bershadsky, Dmitry
AU - Johnson, Eric N.
N1 - Publisher Copyright:
© 2017, American Institute of Aeronautics and Astronautics Inc, AIAA. All rights reserved.
PY - 2017
AB - This paper describes a vision-based algorithm for autonomous landing on a moving target. The algorithm fuses the outputs of two computer vision techniques: Viola-Jones object detection using Haar-like features, and AprilTag detection, which segments an image based on local gradients. The Haar-like feature detector can detect arbitrary known features, and we use this method while the aircraft is at altitude and approaching the landing spot. The AprilTag, which allows precise determination of the target's position and attitude, is placed at the expected landing location and used for the final approach. Combining these techniques allows us to track the target through all landing phases, from altitude to touchdown. We fuse the outputs using the statistics of the measurements and multiple extended Kalman filters. In this way, we can not only probabilistically select the correct target from multiple candidates but also estimate the target's velocity for formation flight and landing. The algorithm is demonstrated in an image-in-the-loop simulation and in flight tests with a Yamaha RMAX helicopter and a WAM-V boat.
UR - http://www.scopus.com/inward/record.url?scp=85085406414&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85085406414&partnerID=8YFLogxK
DO - 10.2514/6.2017-0674
M3 - Conference contribution
AN - SCOPUS:85085406414
SN - 9781624104497
T3 - AIAA Information Systems-AIAA Infotech at Aerospace, 2017
BT - AIAA Information Systems-AIAA Infotech at Aerospace, 2017
PB - American Institute of Aeronautics and Astronautics Inc, AIAA
T2 - AIAA Information Systems-Infotech at Aerospace Conference, 2017
Y2 - 9 January 2017 through 13 January 2017
ER -