TY - JOUR
T1 - EECache
T2 - A comprehensive study on the architectural design for energy-efficient last-level caches in chip multiprocessors
AU - Cheng, Hsiang Yun
AU - Poremba, Matt
AU - Shahidi, Narges
AU - Stalev, Ivan
AU - Irwin, Mary Jane
AU - Kandemir, Mahmut
AU - Sampson, Jack
AU - Xie, Yuan
N1 - Publisher Copyright:
© 2015 ACM.
PY - 2015/7/1
Y1 - 2015/7/1
N2 - Power management for large last-level caches (LLCs) is important in chip multiprocessors (CMPs), as the leakage power of LLCs accounts for a significant fraction of the limited on-chip power budget. Since not all workloads running on CMPs need the entire cache, portions of a large, shared LLC can be disabled to save energy. In this article, we explore different design choices, from circuit-level cache organization to microarchitectural management policies, to propose a low-overhead runtime mechanism for energy reduction in the large, shared LLC. We first introduce a slice-based cache organization that can shut down parts of the shared LLC with minimal circuit overhead. Based on this slice-based organization, part of the shared LLC can be turned off according to the spatial and temporal cache access behavior captured by low-overhead sampling-based hardware. In order to eliminate the performance penalties caused by flushing data before powering off a cache slice, we propose data migration policies to prevent the loss of useful data in the LLC. Results show that our energy-efficient cache design (EECache) provides 14.1% energy savings at only 1.2% performance degradation and consumes negligible hardware overhead compared to prior work.
AB - Power management for large last-level caches (LLCs) is important in chip multiprocessors (CMPs), as the leakage power of LLCs accounts for a significant fraction of the limited on-chip power budget. Since not all workloads running on CMPs need the entire cache, portions of a large, shared LLC can be disabled to save energy. In this article, we explore different design choices, from circuit-level cache organization to microarchitectural management policies, to propose a low-overhead runtime mechanism for energy reduction in the large, shared LLC. We first introduce a slice-based cache organization that can shut down parts of the shared LLC with minimal circuit overhead. Based on this slice-based organization, part of the shared LLC can be turned off according to the spatial and temporal cache access behavior captured by low-overhead sampling-based hardware. In order to eliminate the performance penalties caused by flushing data before powering off a cache slice, we propose data migration policies to prevent the loss of useful data in the LLC. Results show that our energy-efficient cache design (EECache) provides 14.1% energy savings at only 1.2% performance degradation and consumes negligible hardware overhead compared to prior work.
UR - http://www.scopus.com/inward/record.url?scp=84937045161&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84937045161&partnerID=8YFLogxK
U2 - 10.1145/2756552
DO - 10.1145/2756552
M3 - Article
AN - SCOPUS:84937045161
SN - 1544-3566
VL - 12
JO - ACM Transactions on Architecture and Code Optimization
JF - ACM Transactions on Architecture and Code Optimization
IS - 2
M1 - 17
ER -