Feature extraction from infrared (IR) images remains a challenging task. Learning-based methods that can operate on raw imagery or patches have therefore gained significance. We propose a novel multi-task extension of the widely used sparse representation classification (SRC) method for both single-view and multi-view setups; that is, the test sample may be a single IR image or a set of images from different views. When expanded in terms of a training dictionary, the coefficient matrix in the multi-view scenario exhibits a sparse structure that is not easily captured by traditional sparsity-inducing measures such as the l0 row pseudo-norm. To that end, we employ collaborative spike-and-slab priors on the coefficient matrix, which can capture fairly general sparse structures. Our method performs joint prior-parameter and sparse-coefficient estimation (JPCEM), which alleviates the need to hand-pick prior parameters before classification. The experimental merits of JPCEM are substantiated through comparisons with other state-of-the-art methods on a challenging mid-wave IR (MWIR) automatic target recognition (ATR) database made available by the US Army Night Vision and Electronic Sensors Directorate.
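To make the SRC baseline that the proposed method extends concrete, the following is a minimal single-view sketch in Python/NumPy. It is not the JPCEM algorithm: the sparse code is obtained with a simple greedy orthogonal matching pursuit (an assumed stand-in solver), and the dictionary, labels, and test sample are synthetic toy data rather than MWIR imagery. Classification follows the standard SRC rule: code the test sample over the full training dictionary, then assign it to the class whose atoms give the smallest reconstruction residual.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: k-sparse code of y over dictionary D."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit coefficients on the selected support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x

def src_classify(D, labels, y, k=5):
    """SRC rule: assign y to the class whose atoms best reconstruct it."""
    x = omp(D, y, k)
    errs = {}
    for c in np.unique(labels):
        mask = labels == c
        errs[c] = np.linalg.norm(y - D[:, mask] @ x[mask])
    return min(errs, key=errs.get)

rng = np.random.default_rng(0)
# two classes, 20 unit-norm training atoms each, in distinct random 5-D subspaces
basis0, basis1 = rng.standard_normal((50, 5)), rng.standard_normal((50, 5))
D = np.column_stack([basis0 @ rng.standard_normal((5, 20)),
                     basis1 @ rng.standard_normal((5, 20))])
D /= np.linalg.norm(D, axis=0)
labels = np.array([0] * 20 + [1] * 20)
y = basis1 @ rng.standard_normal(5)  # test sample drawn from class 1's subspace
print(src_classify(D, labels, y))
```

The multi-view extension described in the abstract would replace the single code vector with a coefficient matrix (one column per view) and the hand-set sparsity level k with collaborative spike-and-slab priors whose parameters are estimated jointly with the coefficients.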