On compression of machine-derived context sets for fusion of multi-modal sensor data

Nurali Virani, Shashi Phoha, Asok Ray

Research output: Chapter in Book/Report/Conference proceeding › Chapter


Dynamic data-driven applications systems (DDDAS) operate on a sensing infrastructure for multi-modal measurement, communication, and computation, through which they perceive and control the evolution of physical dynamic processes. Sensors of different modalities exhibit contextually variable performance under varying operational conditions. Unsupervised learning algorithms have recently been developed to extract the operational context set from multi-modal sensor data. A context set represents the set of all natural or man-made factors which, along with the state of the system, completely condition the measurements from sensors observing the system. The desirable property of conditional independence of observations given the state-context pair enables tractable fusion of disparate information sources. In this chapter, we address a crucial problem in unsupervised context learning: reducing the cardinality of the context set. Since the machine-derived context set can have a large number of elements, we propose a graph-theoretic approach and a subset selection approach for the controlled reduction of contexts to obtain a context set of lower cardinality. We also derive an upper bound on the error introduced by the compression. The proposed approaches are validated with data collected in field experiments with unattended ground sensors for border-crossing target classification.
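The abstract does not spell out the compression algorithms themselves. As an illustrative sketch only (not the authors' method), the graph-theoretic idea can be imagined as follows: treat each context as a node, connect two contexts whose observation likelihoods are nearly indistinguishable, and merge each connected component into a single context. The sketch below assumes 1-D Gaussian observation models per context, a symmetric KL divergence as the similarity measure, and moment-matching for the merge; the function names `sym_kl_gauss` and `compress_contexts` are hypothetical.

```python
import numpy as np


def sym_kl_gauss(m1, v1, m2, v2):
    """Symmetric KL divergence between two 1-D Gaussians N(m1, v1), N(m2, v2)."""
    kl12 = 0.5 * np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / (2.0 * v2) - 0.5
    kl21 = 0.5 * np.log(v1 / v2) + (v2 + (m2 - m1) ** 2) / (2.0 * v1) - 0.5
    return kl12 + kl21


def compress_contexts(means, variances, weights, threshold):
    """Merge contexts whose observation models are within `threshold` of each other.

    Builds a similarity graph (edge when symmetric KL < threshold), finds its
    connected components, and moment-matches each component into one context.
    Returns (components, merged) where merged is a list of (weight, mean, var).
    """
    n = len(means)
    # Adjacency list: contexts with near-identical observation likelihoods.
    adj = [[j for j in range(n) if j != i and
            sym_kl_gauss(means[i], variances[i],
                         means[j], variances[j]) < threshold]
           for i in range(n)]
    # Connected components via depth-first search.
    seen, components = set(), []
    for i in range(n):
        if i in seen:
            continue
        stack, comp = [i], []
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.append(u)
            stack.extend(adj[u])
        components.append(sorted(comp))
    # Moment-match each component into a single context (mixture collapse).
    merged = []
    for comp in components:
        w = sum(weights[k] for k in comp)
        m = sum(weights[k] * means[k] for k in comp) / w
        v = sum(weights[k] * (variances[k] + (means[k] - m) ** 2)
                for k in comp) / w
        merged.append((w, m, v))
    return components, merged
```

For instance, with context means `[0.0, 0.1, 5.0]`, unit variances, and weights `[0.4, 0.4, 0.2]`, a threshold of 0.5 merges the first two contexts (divergence about 0.01) and leaves the third alone, reducing the context set from three elements to two.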

Original language: English (US)
Title of host publication: Handbook of Dynamic Data Driven Applications Systems
Publisher: Springer International Publishing
Number of pages: 16
ISBN (Electronic): 9783319955049
ISBN (Print): 9783319955032
State: Published - Nov 13 2018

All Science Journal Classification (ASJC) codes

  • General Computer Science
