Algorithms for Context Learning and Information Representation for Multi-Sensor Teams

Nurali Virani, Soumalya Sarkar, Ji Woong Lee, Shashi Phoha, Asok Ray

Research output: Chapter in Book/Report/Conference proceeding › Chapter

2 Scopus citations


Sensor measurements of the state of a system are affected by natural and man-made operating conditions that are not accounted for in the definition of system states. It is postulated that these conditions, called contexts, are such that the measurements from individual sensors are conditionally independent given each pair of system state and context. This postulation leads to kernel-based unsupervised learning of a measurement model that defines a common context set for all sensor modalities and automatically takes into account known and unknown contextual effects. The resulting measurement model is used to develop a context-aware sensor fusion technique for multi-modal sensor teams performing state estimation. Moreover, a symbolic compression technique, which replaces raw measurement data with their low-dimensional features in real time, makes the proposed context learning approach scalable to large amounts of data from heterogeneous sensors. The developed approach is tested with field experiments for multi-modal unattended ground sensors performing human walking style classification.
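The fusion rule implied by the abstract's postulate can be sketched as follows: if measurements from individual sensors are conditionally independent given each (state, context) pair, the posterior over states is obtained by multiplying per-sensor likelihoods and marginalizing out the context set. This is a minimal illustrative sketch, not the chapter's implementation; the function name, array shapes, and toy numbers are all assumptions.

```python
import numpy as np

def fuse(prior, likelihoods):
    """Posterior over states from one measurement per sensor.

    prior:        array (S, C) -- joint weight over (state, context)
    likelihoods:  list of (S, C) arrays, one per sensor, holding
                  p(z_i | state, context) for the observed z_i
    (Illustrative sketch of context-aware fusion, not the authors' code.)
    """
    joint = prior.copy()
    for lik in likelihoods:
        joint = joint * lik               # conditional independence given (state, context)
    posterior = joint.sum(axis=1)         # marginalize over the context set
    return posterior / posterior.sum()    # normalize over states

# Toy example: 2 states, 2 contexts, 2 sensors (all numbers hypothetical)
prior = np.full((2, 2), 0.25)
lik_a = np.array([[0.9, 0.4], [0.1, 0.6]])   # sensor A: p(z_a | x, c)
lik_b = np.array([[0.8, 0.5], [0.2, 0.5]])   # sensor B: p(z_b | x, c)
print(fuse(prior, [lik_a, lik_b]))
```

Marginalizing the context, rather than conditioning on a single estimated context, is what lets unknown contextual effects be absorbed into the common context set described above.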

Original language: English (US)
Title of host publication: Advances in Computer Vision and Pattern Recognition
Publisher: Springer Science and Business Media Deutschland GmbH
Number of pages: 25
State: Published - 2016

Publication series

Name: Advances in Computer Vision and Pattern Recognition
ISSN (Print): 2191-6586
ISSN (Electronic): 2191-6594

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence


