TY - JOUR
T1 - Data-level fusion of multilook inverse synthetic aperture radar images
AU - Li, Zhixi
AU - Papson, Scott
AU - Narayanan, Ram M.
N1 - Funding Information:
The database is called the ERADS Slicy database, which is a public database released on October 5, 2004 and provided by ERADS. The database covers a 1/16th-scale variant of the reference model Slicy. The base of the Slicy model is a rectangle with size of 171.9 × 152.8 mm (approximately 7 × 6 in). The data were collected at X-band and Ka-band frequencies at 5° and 40° depression angles. Their project was sponsored by the U.S. Army, Intelligence and Security Command and National Ground Intelligence Center. Fig. 3 shows the picture of the Slicy target at 45° rotation.
Funding Information:
Manuscript received May 28, 2007; revised September 11, 2007. This work was supported by the Office of Naval Research under Contract N00014-04-1-0307.
Copyright:
Copyright 2008 Elsevier B.V., All rights reserved.
PY - 2008/5
Y1 - 2008/5
N2 - Although techniques for resolution enhancement in single-aspect radar imaging have made rapid progress in recent years, it does not necessarily imply that such enhanced images will improve target identification or recognition. However, when multiple looks of the same target from different aspects are obtained, the available knowledge increases, allowing more useful target information to be extracted. Physics-based image fusion techniques can be developed by processing the raw data collected from multiple inverse synthetic aperture radar sensors, even if these individual images are at different resolutions. We derive an appropriate data fusion rule to generate a composite image containing enhanced target shape characteristics for improved target recognition. The rule maps multiple data sets collected by multiple radars with different system parameters onto the same spatial-frequency space. The composite image can be reconstructed using the inverse 2-D Fourier transform over the separated multiple integration areas. An algorithm called the Matrix Fourier Transform is proposed to realize such a complicated integral. This algorithm can be regarded as an exact interpolation such that there is no information loss caused by data fusion. The rotation centers need to be carefully selected to properly register the multiple images before performing the fusion. A comparison of the image attribute rating curve between the fused image and the spatially averaged images quantifies the improvement in the detected target features. The technique shows considerable improvement over a simple spatial averaging algorithm and thereby enhances target recognition.
AB - Although techniques for resolution enhancement in single-aspect radar imaging have made rapid progress in recent years, it does not necessarily imply that such enhanced images will improve target identification or recognition. However, when multiple looks of the same target from different aspects are obtained, the available knowledge increases, allowing more useful target information to be extracted. Physics-based image fusion techniques can be developed by processing the raw data collected from multiple inverse synthetic aperture radar sensors, even if these individual images are at different resolutions. We derive an appropriate data fusion rule to generate a composite image containing enhanced target shape characteristics for improved target recognition. The rule maps multiple data sets collected by multiple radars with different system parameters onto the same spatial-frequency space. The composite image can be reconstructed using the inverse 2-D Fourier transform over the separated multiple integration areas. An algorithm called the Matrix Fourier Transform is proposed to realize such a complicated integral. This algorithm can be regarded as an exact interpolation such that there is no information loss caused by data fusion. The rotation centers need to be carefully selected to properly register the multiple images before performing the fusion. A comparison of the image attribute rating curve between the fused image and the spatially averaged images quantifies the improvement in the detected target features. The technique shows considerable improvement over a simple spatial averaging algorithm and thereby enhances target recognition.
UR - http://www.scopus.com/inward/record.url?scp=52649127329&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=52649127329&partnerID=8YFLogxK
U2 - 10.1109/TGRS.2008.916088
DO - 10.1109/TGRS.2008.916088
M3 - Article
AN - SCOPUS:52649127329
SN - 0196-2892
VL - 46
SP - 1394
EP - 1406
JO - IEEE Transactions on Geoscience and Remote Sensing
JF - IEEE Transactions on Geoscience and Remote Sensing
IS - 5
ER -