Visual-inertial fusion based positioning systems

Jianan Zhang, Tim Kane

Research output: Contribution to journal › Article › peer-review

2 Scopus citations


In this paper, we developed a visible light positioning (VLP) system using a camera and low-cost inertial measurement units (IMUs). Applying computer vision and sensor fusion techniques, our VLP system is able to estimate the angle of arrival (AoA) and the distance from a landmark to a mobile device. Because IMUs and cameras are complementary, we are able to improve the performance of VLP systems by applying sensor fusion. Currently, most optical positioning systems require at least two line-of-sight (LOS) links, so their coverage is not always satisfactory. Using a single round light-emitting diode (LED) panel or two coplanar black thick rings as the landmark, our VLP system needs only one LOS link to estimate the orientation and position of the mobile device. By activating inertial navigation, our VLP system is able to perform localization even if the landmark is temporarily blocked by obstacles. We derived approximate upper bounds on the angular errors and applied visual-inertial sensor fusion to estimate the Euler angles of the mobile device. Since the sensor fusion weights are determined by these upper bounds, the expected maximum errors are minimized in our positioning system. In our field experiments, the positioning system has an average positioning error of 0.18 m with an effective positioning range of 7 m. Compared to similar types of positioning systems, our system significantly improves positioning range without sacrificing positioning accuracy.
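The bound-weighted fusion of the camera and IMU angle estimates described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the inverse-bound weighting rule, the function name, and the example numbers are all assumptions introduced here, chosen so that the sensor with the tighter error bound receives the larger weight.

```python
def fuse_angles(theta_cam, bound_cam, theta_imu, bound_imu):
    """Fuse two Euler-angle estimates (radians) using weights derived
    from each sensor's angular-error upper bound.

    Hypothetical sketch: weights are proportional to the inverse of
    each bound, so the tighter-bounded estimate dominates.
    """
    w_cam = (1.0 / bound_cam) / (1.0 / bound_cam + 1.0 / bound_imu)
    w_imu = 1.0 - w_cam
    # Convex combination of the two estimates.
    return w_cam * theta_cam + w_imu * theta_imu

# Illustrative values: camera estimate 0.50 rad with a 0.02 rad bound,
# IMU estimate 0.56 rad with a looser 0.06 rad bound.
fused = fuse_angles(0.50, 0.02, 0.56, 0.06)
```

With these illustrative bounds, the camera receives weight 0.75 and the IMU 0.25, so the fused angle stays close to the camera estimate, consistent with the idea that the weighting limits the worst-case fused error.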

Original language: English (US)
Pages (from-to): 189761-189774
Number of pages: 14
Journal: IEEE Access
State: Published - 2020

All Science Journal Classification (ASJC) codes

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)


