TY - GEN
T1 - STAC
T2 - Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2015
AU - Kira, Zsolt
AU - Wagner, Alan R.
AU - Kennedy, Chris
AU - Zutty, Jason
AU - Tuell, Grady
N1 - Publisher Copyright:
© 2015 SPIE.
PY - 2015
Y1 - 2015
N2 - We are interested in data fusion strategies for Intelligence, Surveillance, and Reconnaissance (ISR) missions. Advances in theory, algorithms, and computational power have made it possible to extract rich semantic information from a wide variety of sensors, but these advances have raised new challenges in fusing the data. For example, in developing fusion algorithms for moving target identification (MTI) applications, what is the best way to combine image data having different temporal frequencies, and how should we introduce contextual information acquired from monitoring cell phones or from human intelligence? In addressing these questions, we have found that existing data fusion models do not readily facilitate comparison of fusion algorithms performing such complex information extraction, so we developed a new model that does. Here, we present the Spatial, Temporal, Algorithm, and Cognition (STAC) model. STAC describes the progression of multi-sensor raw data through increasing levels of abstraction and provides a way to easily compare fusion strategies. It gives an unambiguous description of how multi-sensor data are combined, the computational algorithms being used, and how scene understanding is ultimately achieved. In this paper, we describe and illustrate the STAC model and compare it to other existing models.
UR - http://www.scopus.com/inward/record.url?scp=84938885013&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84938885013&partnerID=8YFLogxK
U2 - 10.1117/12.2178494
DO - 10.1117/12.2178494
M3 - Conference contribution
AN - SCOPUS:84938885013
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Multisensor, Multisource Information Fusion
A2 - Braun, Jerome J.
PB - SPIE
Y2 - 21 April 2015
ER -