TY - CONF
T1 - OWL: Cooperative thread array aware scheduling techniques for improving GPGPU performance
T2 - 18th International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2013
AU - Jog, Adwait
AU - Kayiran, Onur
AU - Nachiappan, Nachiappan Chidambaram
AU - Mishra, Asit K.
AU - Kandemir, Mahmut T.
AU - Mutlu, Onur
AU - Iyer, Ravishankar
AU - Das, Chita R.
PY - 2013
Y1 - 2013
AB - Emerging GPGPU architectures, along with programming models like CUDA and OpenCL, offer a cost-effective platform for many applications by providing high thread-level parallelism at lower energy budgets. Unfortunately, for many general-purpose applications, the available hardware resources of a GPGPU are not efficiently utilized, leading to lost opportunities for performance improvement. A major cause of this is the inefficiency of current warp scheduling policies in tolerating long memory latencies. In this paper, we identify that the scheduling decisions made by such policies are agnostic to thread-block, or cooperative thread array (CTA), behavior and are therefore inefficient. We present a coordinated CTA-aware scheduling policy that utilizes four schemes to minimize the impact of long memory latencies. The first two schemes, CTA-aware two-level warp scheduling and locality-aware warp scheduling, enhance per-core performance by effectively reducing cache contention and improving latency-hiding capability. The third scheme, bank-level-parallelism-aware warp scheduling, improves overall GPGPU performance by enhancing DRAM bank-level parallelism. The fourth scheme employs opportunistic memory-side prefetching to further enhance performance by taking advantage of open DRAM rows. Evaluations on a 28-core GPGPU platform with highly memory-intensive applications indicate that our proposed mechanism provides a 33% average performance improvement over the commonly employed round-robin warp scheduling policy.
UR - http://www.scopus.com/inward/record.url?scp=84875640178&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84875640178&partnerID=8YFLogxK
DO - 10.1145/2451116.2451158
M3 - Conference contribution
AN - SCOPUS:84875640178
SN - 9781450318709
T3 - International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS
SP - 395
EP - 406
BT - ASPLOS 2013 - 18th International Conference on Architectural Support for Programming Languages and Operating Systems
Y2 - 16 March 2013 through 20 March 2013
ER -