TY - GEN
T1 - Quantifying and optimizing the impact of victim cache line selection in manycore systems
AU - Kandemir, Mahmut
AU - Ding, Wei
AU - Guttman, Diana
PY - 2015/2/5
Y1 - 2015/2/5
N2 - In both architecture and software, the main goal of data locality-oriented optimizations has always been 'minimizing the number of cache misses' (especially costly last-level cache misses). However, this paper shows that other metrics, such as the distance between the last-level cache and memory controller as well as the memory queuing latency, can play an equally important role as far as application performance is concerned. Focusing on a large set of multithreaded applications, we first show that last-level cache 'write backs' (memory writes due to displacement of a victim block from the last-level cache) can exhibit significant latencies as well as variances, and then make a case for 'relaxing' the strict LRU policy to save (write back) cycles in both the on-chip network and memory queues. Specifically, we explore novel architecture-level schemes that optimize the on-chip network latency, memory queuing latency, or both, of the write back messages, by carefully selecting the victim block to write back at the time of cache replacement. Our extensive experimental evaluations using 15 multithreaded applications and a cycle-accurate simulation infrastructure clearly demonstrate that this tradeoff (between cache hit rate and on-chip network/memory queuing latency) pays off in most cases, leading to about 12.2% execution time improvement and 14.9% energy savings in our default 64-core system with 6 memory controllers.
AB - In both architecture and software, the main goal of data locality-oriented optimizations has always been 'minimizing the number of cache misses' (especially costly last-level cache misses). However, this paper shows that other metrics, such as the distance between the last-level cache and memory controller as well as the memory queuing latency, can play an equally important role as far as application performance is concerned. Focusing on a large set of multithreaded applications, we first show that last-level cache 'write backs' (memory writes due to displacement of a victim block from the last-level cache) can exhibit significant latencies as well as variances, and then make a case for 'relaxing' the strict LRU policy to save (write back) cycles in both the on-chip network and memory queues. Specifically, we explore novel architecture-level schemes that optimize the on-chip network latency, memory queuing latency, or both, of the write back messages, by carefully selecting the victim block to write back at the time of cache replacement. Our extensive experimental evaluations using 15 multithreaded applications and a cycle-accurate simulation infrastructure clearly demonstrate that this tradeoff (between cache hit rate and on-chip network/memory queuing latency) pays off in most cases, leading to about 12.2% execution time improvement and 14.9% energy savings in our default 64-core system with 6 memory controllers.
UR - http://www.scopus.com/inward/record.url?scp=84937955703&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84937955703&partnerID=8YFLogxK
U2 - 10.1109/MASCOTS.2014.54
DO - 10.1109/MASCOTS.2014.54
M3 - Conference contribution
T3 - Proceedings - IEEE Computer Society's Annual International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunications Systems, MASCOTS
SP - 385
EP - 394
BT - Proceedings - 2014 22nd Annual IEEE International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, MASCOTS 2014
PB - IEEE Computer Society
T2 - 2014 22nd Annual IEEE International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, MASCOTS 2014
Y2 - 9 September 2014 through 11 September 2014
ER -