Exploiting core criticality for enhanced GPU performance

Adwait Jog, Onur Kayiran, Ashutosh Pattnaik, Mahmut T. Kandemir, Onur Mutlu, Ravishankar Iyer, Chita R. Das

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

29 Scopus citations

Abstract

Modern memory access schedulers employed in GPUs typically optimize for memory throughput. They implicitly assume that all requests from different cores are equally important. However, we show that during the execution of a subset of CUDA applications, different cores can have different amounts of tolerance to latency. In particular, cores with a larger fraction of warps waiting for data to come back from DRAM are less likely to tolerate the latency of an outstanding memory request. Requests from such cores are more critical than requests from others. Based on this observation, this paper introduces a new memory scheduler, called (C)ritica(L)ity (A)ware (M)emory (S)cheduler (CLAMS), which takes into account the latency tolerance of the cores that generate memory requests. The key idea is to use the fraction of critical requests in the memory request buffer to switch between scheduling policies optimized for criticality and locality. If this fraction is below a threshold, CLAMS prioritizes critical requests to ensure cores that cannot tolerate latency are serviced faster. Otherwise, CLAMS optimizes for locality, anticipating that there are too many critical requests and prioritizing one over another would not significantly benefit performance.

We first present a core-criticality estimation mechanism for determining critical cores and requests, and then discuss issues related to finding a balance between criticality and locality in the memory scheduler. We progressively devise three variants of CLAMS, and show that the Dynamic CLAMS provides significantly higher performance, across a variety of workloads, than the commonly employed GPU memory schedulers optimized solely for locality. The results indicate that a GPU memory system that considers both core criticality and DRAM access locality can provide significant improvement in performance.
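
The key mechanism described in the abstract, switching between criticality-first and locality-first scheduling based on the fraction of critical requests in the memory request buffer, can be sketched roughly as follows. This is a simplified Python model of the idea only; the data structures, function names, and thresholds (a 0.7 warp-stall fraction for marking a core critical, a 0.5 switching fraction) are illustrative assumptions, not the CLAMS implementation or its tuned parameters.

# Minimal sketch of the switching idea from the abstract (assumed names/values).
from dataclasses import dataclass

@dataclass
class MemRequest:
    core_id: int
    row: int           # DRAM row targeted by the request
    critical: bool     # True if the issuing core is latency-intolerant

def is_core_critical(stalled_warps: int, total_warps: int,
                     stall_threshold: float = 0.7) -> bool:
    # A core is treated as critical when a large fraction of its warps are
    # waiting for data to come back from DRAM (threshold is illustrative).
    return total_warps > 0 and stalled_warps / total_warps >= stall_threshold

def pick_next_request(buffer: list, open_row: int,
                      switch_threshold: float = 0.5) -> MemRequest:
    # The fraction of critical requests currently in the request buffer
    # decides which policy is in effect.
    critical_fraction = sum(r.critical for r in buffer) / len(buffer)
    if critical_fraction < switch_threshold:
        # Few critical requests: prioritize them so latency-intolerant
        # cores are serviced faster; break ties in favor of row-buffer hits.
        candidates = [r for r in buffer if r.critical] or buffer
    else:
        # Too many critical requests: prioritizing among them gains little,
        # so fall back to plain locality (row-buffer hit) optimization.
        candidates = buffer
    return max(candidates, key=lambda r: r.row == open_row)

# Example: one critical request among three is serviced first, even though
# another request in the buffer would be a row-buffer hit.
reqs = [MemRequest(0, row=5, critical=False),
        MemRequest(1, row=9, critical=True),
        MemRequest(2, row=5, critical=False)]
print(pick_next_request(reqs, open_row=5))   # -> request from core 1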

Original language: English (US)
Title of host publication: SIGMETRICS/Performance 2016 - Proceedings of the SIGMETRICS/Performance Joint International Conference on Measurement and Modeling of Computer Science
Publisher: Association for Computing Machinery, Inc
Pages: 351-363
Number of pages: 13
ISBN (Electronic): 9781450342667
DOIs
State: Published - Jun 14 2016
Event: 13th Joint International Conference on Measurement and Modeling of Computer Systems, ACM SIGMETRICS / IFIP Performance 2016 - Antibes Juan-les-Pins, France
Duration: Jun 14 2016 - Jun 18 2016

Publication series

Name: SIGMETRICS/Performance 2016 - Proceedings of the SIGMETRICS/Performance Joint International Conference on Measurement and Modeling of Computer Science

Other

Other: 13th Joint International Conference on Measurement and Modeling of Computer Systems, ACM SIGMETRICS / IFIP Performance 2016
Country/Territory: France
City: Antibes Juan-les-Pins
Period: 6/14/16 - 6/18/16

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Computational Theory and Mathematics
  • Hardware and Architecture
