Terrain-based vehicle orientation estimation combining vision and inertial measurements

Vishisht Gupta, Sean Brennan

Research output: Contribution to journal › Article › peer-review

30 Scopus citations

Abstract

A novel method for estimating vehicle roll, pitch, and yaw using machine vision and inertial sensors is presented that is based on matching images captured from an on-vehicle camera to a rendered representation of the surrounding terrain obtained from a three-dimensional (3D) terrain map. U.S. Geological Survey Digital Elevation Maps were used to create a 3D topology map of the geography surrounding the vehicle, and it is assumed in this work that large segments of the surrounding terrain are visible, particularly the horizon lines. The horizon lines seen in the captured video from the vehicle are compared to the horizon lines obtained from the rendered geography, allowing absolute comparisons between the rendered and actual scenes in roll, pitch, and yaw. A kinematic Kalman filter modeling an inertial navigation system then uses the scene matching to generate filtered estimates of orientation. Numerical simulations verify the performance of the Kalman filter. Experiments using an instrumented vehicle operating at the test track of the Pennsylvania Transportation Institute were performed to check the validity of the method. When compared to estimates from a global positioning system/inertial measurement unit (IMU) system, the roll, pitch, and yaw estimates from the vision/IMU Kalman filter show agreement with a (2σ) bound of 0.5, 0.26, and 0.8 deg, respectively.
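
The abstract describes a kinematic Kalman filter that integrates inertial rate measurements and corrects the resulting drift with absolute orientation fixes from horizon matching. The sketch below is a minimal illustration of that fusion structure under simplifying assumptions, not the authors' implementation: the function name, noise parameters, and the assumption that gyro rates are already expressed as Euler-angle rates are hypothetical.

```python
import numpy as np

def kinematic_orientation_kf(gyro_rates, vision_meas, dt,
                             q_var=1e-4, r_var=1e-2):
    """Minimal kinematic Kalman filter over [roll, pitch, yaw] (rad).

    gyro_rates  : (N, 3) body rates from the IMU, assumed here (for
                  brevity) to already be resolved into Euler-angle rates
    vision_meas : (N, 3) absolute orientation from horizon matching,
                  with np.nan rows where no vision fix is available
    dt          : sample period in seconds
    """
    x = np.zeros(3)             # state: roll, pitch, yaw
    P = np.eye(3)               # state covariance
    Q = q_var * np.eye(3)       # process noise (gyro integration drift)
    R = r_var * np.eye(3)       # measurement noise (horizon match)
    H = np.eye(3)               # vision observes orientation directly
    out = np.empty((len(gyro_rates), 3))

    for k, (omega, z) in enumerate(zip(gyro_rates, vision_meas)):
        # Predict: integrate angular rates over one time step.
        x = x + omega * dt
        P = P + Q

        # Update: fuse the absolute vision-based orientation when present.
        if not np.any(np.isnan(z)):
            S = P + R
            K = P @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(3) - K @ H) @ P

        out[k] = x
    return out
```

The key design point reflected here is that the prediction step alone drifts with integrated gyro error, while the occasional absolute fix from the rendered-versus-captured horizon comparison bounds that drift in roll, pitch, and yaw.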

Original language: English (US)
Pages (from-to): 181-202
Number of pages: 22
Journal: Journal of Field Robotics
Volume: 25
Issue number: 3
DOIs
State: Published - Mar 2008

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Computer Science Applications

