Spiking Neural Network (SNN) based brain-inspired computing paradigms are becoming increasingly popular tools for various cognitive tasks. The sparse, event-driven processing enabled by such networks is potentially appealing for implementing low-power neural computing platforms. However, the parallel and memory-intensive computations involved in such algorithms are in complete contrast to the sequential fetch, decode, and execute cycles of conventional von Neumann processors. Recent proposals have investigated spintronic 'in-memory' crossbar-based computing architectures driving 'spin neurons', which can potentially alleviate the memory-access bottleneck of CMOS-based systems while simultaneously offering the prospect of low-power inner-product computations. In this article, we perform a rigorous system-level simulation study of such All-Spin Spiking Neural Networks on a benchmark suite of six recognition problems ranging in network complexity from 10k to 7.4M synapses and 195 to 9.2k neurons. System-level simulations indicate that the proposed spintronic architecture can potentially achieve ∼1292× energy efficiency and ∼235× speedup on average over the benchmark suite in comparison to an optimized CMOS implementation at the 45nm technology node.
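
As a conceptual illustration only (not the paper's architecture or simulator), the crossbar-style inner product and spiking-neuron thresholding mentioned above can be sketched in software: a matrix of synaptic conductances multiplies a binary input spike vector in one parallel step, and integrate-and-fire neurons threshold the resulting column currents. The function name, weight values, and threshold below are illustrative placeholders.

```python
def crossbar_step(weights, spikes_in, potentials, threshold):
    """One timestep: analog-style inner product per column, then thresholding.

    weights    -- 2D list, weights[i][j] models the conductance at row i, col j
    spikes_in  -- binary input spike vector (one entry per crossbar row)
    potentials -- per-neuron membrane potentials, mutated in place
    threshold  -- firing threshold of the integrate-and-fire neurons
    """
    n_out = len(weights[0])
    # Column current I_j = sum over active rows of the row's conductance,
    # mimicking the crossbar's parallel multiply-accumulate.
    currents = [sum(weights[i][j] for i in range(len(spikes_in))
                    if spikes_in[i]) for j in range(n_out)]
    spikes_out = []
    for j in range(n_out):
        potentials[j] += currents[j]      # integrate the column current
        if potentials[j] >= threshold:    # neuron fires
            spikes_out.append(1)
            potentials[j] = 0.0           # reset after the spike
        else:
            spikes_out.append(0)
    return spikes_out

# Tiny 3-input, 2-neuron example with placeholder conductances.
W = [[0.4, 0.1],
     [0.3, 0.2],
     [0.2, 0.5]]
v = [0.0, 0.0]
out = crossbar_step(W, [1, 0, 1], v, threshold=0.5)  # rows 0 and 2 are active
```

In the spintronic proposal the multiply-accumulate happens as analog current summation inside the memory array itself, which is what removes the von Neumann memory-access bottleneck that this sequential software loop still suffers from.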