TY - JOUR
T1 - Process variation-aware adaptive cache architecture and management
AU - Mutyam, Madhu
AU - Wang, Feng
AU - Krishnan, Ramakrishnan
AU - Narayanan, Vijaykrishnan
AU - Kandemir, Mahmut
AU - Xie, Yuan
AU - Irwin, Mary Jane
N1 - Funding Information:
This work was supported in part by a grant from the Department of Science and Technology (DST), India, Project No. SR/S3/EECE/80/2006, and US National Science Foundation (NSF) Grant Nos. 0720659 and 0643902.
PY - 2009
Y1 - 2009
N2 - Fabricating circuits that employ ever-smaller transistors leads to dramatic variations in critical process parameters. This in turn results in large variations in the execution/access latencies of different hardware components. The situation is even more severe for memory components because of the minimum-sized transistors used in their design. Current design methodologies that are tuned for worst-case scenarios are becoming increasingly pessimistic from a performance perspective and thus may not be a viable option for future designs. This paper makes two contributions targeting on-chip data caches. First, it presents an adaptive cache management policy based on nonuniform cache access. Second, it proposes a latency compensation approach that employs several circuit-level techniques to change the access latency of select cache lines based on the criticalities of the load instructions that access them. Our experiments reveal that both techniques can recover a significant amount of the performance lost to worst-case designs.
AB - Fabricating circuits that employ ever-smaller transistors leads to dramatic variations in critical process parameters. This in turn results in large variations in the execution/access latencies of different hardware components. The situation is even more severe for memory components because of the minimum-sized transistors used in their design. Current design methodologies that are tuned for worst-case scenarios are becoming increasingly pessimistic from a performance perspective and thus may not be a viable option for future designs. This paper makes two contributions targeting on-chip data caches. First, it presents an adaptive cache management policy based on nonuniform cache access. Second, it proposes a latency compensation approach that employs several circuit-level techniques to change the access latency of select cache lines based on the criticalities of the load instructions that access them. Our experiments reveal that both techniques can recover a significant amount of the performance lost to worst-case designs.
UR - http://www.scopus.com/inward/record.url?scp=67649855147&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=67649855147&partnerID=8YFLogxK
U2 - 10.1109/TC.2009.30
DO - 10.1109/TC.2009.30
M3 - Article
AN - SCOPUS:67649855147
SN - 0018-9340
VL - 58
SP - 865
EP - 877
JO - IEEE Transactions on Computers
JF - IEEE Transactions on Computers
IS - 7
ER -