TY - GEN
T1 - Exploiting core criticality for enhanced GPU performance
AU - Jog, Adwait
AU - Kayiran, Onur
AU - Pattnaik, Ashutosh
AU - Kandemir, Mahmut T.
AU - Mutlu, Onur
AU - Iyer, Ravishankar
AU - Das, Chita R.
N1 - Publisher Copyright:
© 2016 ACM.
PY - 2016/6/14
Y1 - 2016/6/14
N2 - Modern memory access schedulers employed in GPUs typically optimize for memory throughput. They implicitly assume that all requests from different cores are equally important. However, we show that during the execution of a subset of CUDA applications, different cores can have different amounts of tolerance to latency. In particular, cores with a larger fraction of warps waiting for data to come back from DRAM are less likely to tolerate the latency of an outstanding memory request. Requests from such cores are more critical than requests from others. Based on this observation, this paper introduces a new memory scheduler, called (C)ritica(L)ity (A)ware (M)emory (S)cheduler (CLAMS), which takes into account the latency-tolerance of the cores that generate memory requests. The key idea is to use the fraction of critical requests in the memory request buffer to switch between scheduling policies optimized for criticality and locality. If this fraction is below a threshold, CLAMS prioritizes critical requests to ensure cores that cannot tolerate latency are serviced faster. Otherwise, CLAMS optimizes for locality, anticipating that there are too many critical requests and prioritizing one over another would not significantly benefit performance. We first present a core-criticality estimation mechanism for determining critical cores and requests, and then discuss issues related to finding a balance between criticality and locality in the memory scheduler. We progressively devise three variants of CLAMS, and show that the Dynamic CLAMS provides significantly higher performance, across a variety of workloads, than the commonly-employed GPU memory schedulers optimized solely for locality. The results indicate that a GPU memory system that considers both core criticality and DRAM access locality can provide significant improvement in performance.
AB - Modern memory access schedulers employed in GPUs typically optimize for memory throughput. They implicitly assume that all requests from different cores are equally important. However, we show that during the execution of a subset of CUDA applications, different cores can have different amounts of tolerance to latency. In particular, cores with a larger fraction of warps waiting for data to come back from DRAM are less likely to tolerate the latency of an outstanding memory request. Requests from such cores are more critical than requests from others. Based on this observation, this paper introduces a new memory scheduler, called (C)ritica(L)ity (A)ware (M)emory (S)cheduler (CLAMS), which takes into account the latency-tolerance of the cores that generate memory requests. The key idea is to use the fraction of critical requests in the memory request buffer to switch between scheduling policies optimized for criticality and locality. If this fraction is below a threshold, CLAMS prioritizes critical requests to ensure cores that cannot tolerate latency are serviced faster. Otherwise, CLAMS optimizes for locality, anticipating that there are too many critical requests and prioritizing one over another would not significantly benefit performance. We first present a core-criticality estimation mechanism for determining critical cores and requests, and then discuss issues related to finding a balance between criticality and locality in the memory scheduler. We progressively devise three variants of CLAMS, and show that the Dynamic CLAMS provides significantly higher performance, across a variety of workloads, than the commonly-employed GPU memory schedulers optimized solely for locality. The results indicate that a GPU memory system that considers both core criticality and DRAM access locality can provide significant improvement in performance.
UR - http://www.scopus.com/inward/record.url?scp=84978764321&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84978764321&partnerID=8YFLogxK
U2 - 10.1145/2896377.2901468
DO - 10.1145/2896377.2901468
M3 - Conference contribution
AN - SCOPUS:84978764321
T3 - SIGMETRICS/Performance 2016 - Proceedings of the SIGMETRICS/Performance Joint International Conference on Measurement and Modeling of Computer Systems
SP - 351
EP - 363
BT - SIGMETRICS/Performance 2016 - Proceedings of the SIGMETRICS/Performance Joint International Conference on Measurement and Modeling of Computer Systems
PB - Association for Computing Machinery, Inc
T2 - 13th Joint International Conference on Measurement and Modeling of Computer Systems, ACM SIGMETRICS / IFIP Performance 2016
Y2 - 14 June 2016 through 18 June 2016
ER -