TY - GEN
T1 - Managing GPU Concurrency in Heterogeneous Architectures
AU - Kayiran, Onur
AU - Nachiappan, Nachiappan Chidambaram
AU - Jog, Adwait
AU - Ausavarungnirun, Rachata
AU - Kandemir, Mahmut T.
AU - Loh, Gabriel H.
AU - Mutlu, Onur
AU - Das, Chita R.
PY - 2015/1/15
Y1 - 2015/1/15
AB - Heterogeneous architectures consisting of general-purpose CPUs and throughput-optimized GPUs are projected to be the dominant computing platforms for many classes of applications. The design of such systems is more complex than that of homogeneous architectures because maximizing resource utilization while minimizing shared resource interference between CPU and GPU applications is difficult. We show that GPU applications tend to monopolize the shared hardware resources, such as memory and network, because of their high thread-level parallelism (TLP), and discuss the limitations of existing GPU-based concurrency management techniques when employed in heterogeneous systems. To solve this problem, we propose an integrated concurrency management strategy that modulates the TLP in GPUs to control the performance of both CPU and GPU applications. This mechanism considers both GPU core state and system-wide memory and network congestion information to dynamically decide on the level of GPU concurrency to maximize system performance. We propose and evaluate two schemes: one (CM-CPU) for boosting CPU performance in the presence of GPU interference, the other (CM-BAL) for improving both CPU and GPU performance in a balanced manner and thus overall system performance. Our evaluations show that the first scheme improves average CPU performance by 24%, while reducing average GPU performance by 11%. The second scheme provides 7% average performance improvement for both CPU and GPU applications. We also show that our solution allows the user to control performance trade-offs between CPUs and GPUs.
UR - http://www.scopus.com/inward/record.url?scp=84937711016&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84937711016&partnerID=8YFLogxK
DO - 10.1109/MICRO.2014.62
M3 - Conference contribution
T3 - Proceedings of the Annual International Symposium on Microarchitecture, MICRO
SP - 114
EP - 126
BT - Proceedings - 47th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2014
PB - IEEE Computer Society
T2 - 47th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2014
Y2 - 13 December 2014 through 17 December 2014
ER -