Profiling Hyperscale Big Data Processing

Abraham Gonzalez, Sihang Liu, Jichuan Chang, Aasheesh Kolli, Vidushi Dadu, Krste Asanović, Samira Khan, Sagar Karandikar, Parthasarathy Ranganathan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Computing demand continues to grow exponentially, largely driven by “big data” processing on hyperscale data stores. At the same time, the slowdown in Moore’s law is leading the industry to embrace custom computing in large-scale systems. Taken together, these trends motivate the need to characterize live production traffic on these large data processing platforms and understand the opportunity for acceleration at scale. This paper addresses this key need. We characterize three important production distributed database and data analytics platforms at Google to identify key hardware acceleration opportunities and perform a comprehensive limits study to understand the trade-offs among various hardware acceleration strategies. We observe that hyperscale data processing platforms spend significant time on distributed storage and other remote work across distributed workers. Therefore, optimizing storage and remote work, in addition to compute acceleration, is critical for these platforms. We present a detailed breakdown of the compute-intensive functions in these platforms and identify the dominant key data operations related to datacenter and systems taxes. We observe that no single accelerator can provide a significant benefit, but that collectively, a sea of accelerators can accelerate many of these smaller platform-specific functions. We demonstrate the potential gains of the sea-of-accelerators proposal in a limits study and analytical model. We perform a comprehensive study to understand the trade-offs between accelerator location (on-chip/off-chip) and invocation model (synchronous/asynchronous). We propose and evaluate a chained accelerator execution model in which the identified compute-intensive functions are accelerated and pipelined to avoid invocation from the core, achieving a 3x improvement over the baseline system while nearly matching the performance of an ideal fully asynchronous execution model.
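As a rough illustration of the kind of Amdahl's-law-style reasoning the abstract describes, the sketch below models a limits study in which many small platform-specific functions are each offloaded to their own accelerator, comparing synchronous per-call invocation from the core against a chained execution model where accelerators hand off work to each other. This is a minimal sketch under assumed parameters: the function names, time fractions, per-accelerator speedups, and invocation overheads are illustrative assumptions, not the paper's actual model or measurements.

```python
# Hedged sketch of a sea-of-accelerators analytical model (assumed parameters,
# not values from the paper).

def speedup(fractions, accel_speedups, invoke_overhead, chained=False):
    """Estimate end-to-end speedup when several small functions are offloaded.

    fractions       -- fraction of baseline time spent in each offloaded function
    accel_speedups  -- per-function speedup once running on its accelerator
    invoke_overhead -- synchronous invocation cost per call, as a fraction of
                       baseline time (assumed zero when accelerators are chained,
                       i.e. invoked from each other rather than from the core)
    """
    remaining = 1.0 - sum(fractions)                      # time left on the core
    accel_time = sum(f / s for f, s in zip(fractions, accel_speedups))
    overhead = 0.0 if chained else invoke_overhead * len(fractions)
    return 1.0 / (remaining + accel_time + overhead)


if __name__ == "__main__":
    # Many small functions, each only a modest slice of total time (assumed).
    fracs = [0.10, 0.08, 0.07, 0.05, 0.05]   # assumed per-function time shares
    gains = [10, 8, 12, 6, 10]               # assumed per-accelerator speedups
    print("synchronous:", round(speedup(fracs, gains, invoke_overhead=0.02), 2))
    print("chained    :", round(speedup(fracs, gains, invoke_overhead=0.02,
                                        chained=True), 2))
```

Under these assumed numbers, no single function is large enough to yield a big gain on its own, but offloading all of them together helps, and removing per-call invocation overhead via chaining recovers additional speedup — the qualitative trend the abstract argues for.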

Original language: English (US)
Title of host publication: ISCA 2023 - Proceedings of the 2023 50th Annual International Symposium on Computer Architecture
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 660-675
Number of pages: 16
ISBN (Electronic): 9798400700958
DOIs
State: Published - Jun 17 2023
Event: 50th Annual International Symposium on Computer Architecture, ISCA 2023 - Orlando, United States
Duration: Jun 17 2023 - Jun 21 2023

Publication series

Name: Proceedings - International Symposium on Computer Architecture
ISSN (Print): 1063-6897

Conference

Conference: 50th Annual International Symposium on Computer Architecture, ISCA 2023
Country/Territory: United States
City: Orlando
Period: 6/17/23 - 6/21/23

All Science Journal Classification (ASJC) codes

  • Hardware and Architecture
