Kube-Knots: Resource Harvesting through Dynamic Container Orchestration in GPU-based Datacenters

Prashanth Thinakaran, Jashwant Raj Gunasekaran, Bikash Sharma, Mahmut Taylan Kandemir, Chita R. Das

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

33 Scopus citations

Abstract

Compute heterogeneity is increasingly gaining prominence in modern datacenters due to the addition of accelerators like GPUs and FPGAs. We observe that datacenter schedulers are agnostic to these emerging accelerators, especially their resource utilization footprints, and thus not well equipped to dynamically provision them based on application needs. In particular, state-of-the-art datacenter schedulers fail to provide fine-grained resource guarantees for latency-sensitive tasks that are GPU-bound. For GPUs specifically, this results in resource fragmentation and interference, leading to poor utilization of allocated GPU resources. Furthermore, GPUs exhibit highly non-linear energy efficiency with respect to utilization, so proactive management of these resources is essential to keep operational costs low while ensuring end-to-end Quality of Service (QoS) for user-facing queries.

Towards addressing the GPU orchestration problem, we build Knots, a GPU-aware resource orchestration layer, and integrate it with the Kubernetes container orchestrator to build Kube-Knots. Kube-Knots can dynamically harvest spare compute cycles through dynamic container orchestration, enabling co-location of latency-critical and batch workloads while improving overall resource utilization. We design and evaluate two GPU-based scheduling techniques to schedule datacenter-scale workloads through Kube-Knots on a ten-node GPU cluster. Our proposed Correlation Based Prediction (CBP) and Peak Prediction (PP) schemes together improve both average and 99th-percentile cluster-wide GPU utilization by up to 80% in the case of HPC workloads. In addition, CBP+PP improves the average job completion time (JCT) of deep learning workloads by up to 36% when compared to state-of-the-art schedulers. This leads to 33% cluster-wide energy savings on average across three different workloads compared to state-of-the-art GPU-agnostic schedulers. Further, the proposed PP scheduler guarantees end-to-end QoS for latency-critical queries, reducing QoS violations by up to 53% when compared to state-of-the-art GPU schedulers.
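
The abstract names two scheduling schemes, Correlation Based Prediction (CBP) and Peak Prediction (PP), without detailing them on this page. The sketch below (Python with numpy) illustrates one plausible reading of the two ideas: estimate a pod's GPU demand from a correlated proxy metric, provision to the utilization peak rather than the mean, and admit batch pods only while the predicted peaks fit under GPU capacity. All names, thresholds, and the admission rule are hypothetical illustrations, not the paper's implementation.

# Illustrative sketch only: not the Kube-Knots implementation.
# Assumptions (hypothetical): CPU utilization is the proxy metric,
# 0.7 is the correlation cutoff, and p99 is the provisioning peak.
import numpy as np

def cbp_estimate(gpu_hist, proxy_hist, proxy_now):
    """Correlation Based Prediction: if a cheap proxy metric tracks
    GPU utilization closely, estimate current GPU demand from a
    linear fit on the proxy instead of profiling the GPU directly."""
    r = np.corrcoef(gpu_hist, proxy_hist)[0, 1]
    if abs(r) < 0.7:                      # weak correlation: fall back
        return float(np.mean(gpu_hist))   # to the historical mean
    slope, intercept = np.polyfit(proxy_hist, gpu_hist, 1)
    return float(np.clip(slope * proxy_now + intercept, 0.0, 100.0))

def pp_estimate(gpu_hist, percentile=99):
    """Peak Prediction: provision for the tail (p99) of observed
    utilization so latency-critical pods keep headroom when
    co-located with batch work."""
    return float(np.percentile(gpu_hist, percentile))

def can_colocate(gpu_capacity, resident_peaks, candidate_peak):
    """Harvest spare cycles safely: admit a batch pod onto a GPU only
    if the sum of predicted peaks still fits under its capacity."""
    return sum(resident_peaks) + candidate_peak <= gpu_capacity

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cpu = rng.uniform(10, 90, 200)              # proxy samples (%)
    gpu = 0.8 * cpu + rng.normal(0, 5, 200)     # correlated GPU util (%)
    est, peak = cbp_estimate(gpu, cpu, 50.0), pp_estimate(gpu)
    print(f"CBP estimate: {est:.1f}%, PP (p99): {peak:.1f}%")
    print("admit 20% batch pod:", can_colocate(100.0, [peak], 20.0))

Provisioning to the observed peak rather than the average is what allows batch pods to be packed onto partially used GPUs without starving latency-critical queries, consistent with the abstract's claims of higher utilization alongside fewer QoS violations.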

Original language: English (US)
Title of host publication: Proceedings - 2019 IEEE International Conference on Cluster Computing, CLUSTER 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728147345
DOIs
State: Published - Sep 2019
Event: 2019 IEEE International Conference on Cluster Computing, CLUSTER 2019 - Albuquerque, United States
Duration: Sep 23, 2019 - Sep 26, 2019

Publication series

Name: Proceedings - IEEE International Conference on Cluster Computing, ICCC
Volume: 2019-September
ISSN (Print): 1552-5244

Conference

Conference: 2019 IEEE International Conference on Cluster Computing, CLUSTER 2019
Country/Territory: United States
City: Albuquerque
Period: 9/23/19 - 9/26/19

All Science Journal Classification (ASJC) codes

  • Software
  • Hardware and Architecture
  • Signal Processing
