Abstract
The problems of vision-based localization and mapping are currently highly active areas of research for aerial systems. With a wealth of information available in each image, vision sensors allow vehicles to gather data about their surrounding environment in addition to inferring own-ship information. However, algorithms for processing camera images are often too computationally demanding for the limited processing power available onboard many unmanned aerial systems. This paper therefore investigates a method for incorporating an inertial measurement unit together with a monocular vision sensor to aid in the extraction of information from camera images and hence reduce the computational burden for this class of platforms. Feature points are detected in each image using a Harris corner detector, and these feature measurements are statistically corresponded from image to image using knowledge of the vehicle's pose. The investigated methods employ an Extended Kalman Filter framework for estimation. Real-time hardware results are presented for a baseline configuration in which a manufactured target provides salient feature points and a high-precision motion capture system supplies vehicle pose information for comparison purposes.
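The feature-detection step summarized in the abstract can be illustrated with a minimal sketch using OpenCV's Harris corner detector. This is not the authors' implementation; the filename, window sizes, and threshold below are illustrative assumptions only.

```python
import cv2
import numpy as np

def detect_harris_corners(gray, block_size=2, ksize=3, k=0.04, thresh_ratio=0.01):
    """Return (row, col) pixel coordinates whose Harris response exceeds a threshold."""
    # cornerHarris expects a single-channel float32 image.
    response = cv2.cornerHarris(np.float32(gray), block_size, ksize, k)
    # Keep responses above a fraction of the strongest corner response.
    return np.argwhere(response > thresh_ratio * response.max())

# Hypothetical usage on a single grayscale camera frame.
frame = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
if frame is not None:
    corners = detect_harris_corners(frame)
    print(f"Detected {len(corners)} candidate feature points")
```

In the paper's pipeline these detected feature points would then be statistically associated across frames using the pose estimate maintained by the Extended Kalman Filter; that data-association and filtering stage is not sketched here.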
| Original language | English (US) |
| --- | --- |
| Title of host publication | AIAA Guidance, Navigation and Control Conference and Exhibit |
| State | Published - Dec 1 2008 |
| Event | AIAA Guidance, Navigation and Control Conference and Exhibit - Honolulu, HI, United States. Duration: Aug 18 2008 → Aug 21 2008 |
Other
| Other | AIAA Guidance, Navigation and Control Conference and Exhibit |
| --- | --- |
| Country/Territory | United States |
| City | Honolulu, HI |
| Period | 8/18/08 → 8/21/08 |
All Science Journal Classification (ASJC) codes
- Aerospace Engineering
- Control and Systems Engineering