TY - JOUR
T1 - Single sensor-based 3D feature point location for a small flying robot application using one camera
AU - Shah, Syed Irtiza Ali
AU - Johnson, Eric N.
AU - Wu, Allen
AU - Watanabe, Yoko
PY - 2014/7
Y1 - 2014/7
AB - Locating feature points in 3D space from 2D vision data in structured environments has been done successfully for years and has been applied effectively to industrial robots. Miniature flying robots operating in unknown environments have stringent weight, space, and security constraints. For such vehicles, this work attempts to reduce the number of vision sensors to a single camera. First, feature points are detected in the image using the Harris corner detector; their measurements are then statistically corresponded across successive images using knowledge of the vehicle's pose from an onboard inertial measurement unit. The first approach attempted is ego-motion perpendicular to the camera axis, for which acceptable 3D feature point locations have been achieved. Next, forward translation along the camera axis has also been attempted with acceptable results, except for a small region around the focus of expansion, which is an improvement over previous relevant work. The resulting 3D map of feature point locations can be used for trajectory planning while ensuring collision avoidance through 3D space. Reducing the vision sensors to a single camera while requiring minimal ego-motion for 3D feature point location is a significant contribution of this work.
UR - http://www.scopus.com/inward/record.url?scp=84902108574&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84902108574&partnerID=8YFLogxK
U2 - 10.1177/0954410013500614
DO - 10.1177/0954410013500614
M3 - Article
AN - SCOPUS:84902108574
SN - 0954-4100
VL - 228
SP - 1668
EP - 1689
JO - Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering
JF - Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering
IS - 9
ER -