Compilation for distributed memory architectures

Alok Choudhary, Mahmut Kandemir

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Distributed memory machines provide the computational power needed to solve large-scale, data-intensive applications. These machines achieve high performance and scalability; however, they are very difficult to program. This is because taking advantage of parallel processors and distributed memory (see Figure 11.1) requires that both data and computation be distributed among processors. In addition, because each processor can directly access only its local memory, nonlocal (remote) accesses demand coordination (in the form of explicit communication or synchronization) across processors. Because the cost of interprocessor synchronization and communication can be very high, a well-written parallel code for distributed memory machines should minimize the number of synchronization and communication operations. These issues make these architectures very difficult to program and necessitate help from an optimizing compiler to generate efficient parallel code. Nevertheless, most current compiler techniques for distributed memory architectures require some form of user help for successful compilation.
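To make the data-distribution problem mentioned above concrete, the following is a minimal sketch (not taken from the chapter; all names are illustrative) of a BLOCK distribution of the kind used by HPF-style compilers for distributed memory machines. It shows the two index mappings such a compiler must generate: which processor owns a given array element, and where that element lives in the owner's local memory. Under the owner-computes rule, an iteration that writes `a[i]` is assigned to the owner of `a[i]`; any reference it makes to an element owned by another processor requires communication.

```python
def block_owner(i, n, p):
    """Processor that owns global element i of an n-element array
    distributed blockwise over p processors (BLOCK distribution)."""
    block = -(-n // p)          # ceiling division: elements per processor
    return i // block

def local_index(i, n, p):
    """Local offset of global element i within its owner's block."""
    block = -(-n // p)
    return i % block

# Example: a 16-element array over 4 processors gives blocks of 4.
# Element 7 is owned by processor 1 and stored at local offset 3.
n, p = 16, 4
owner = block_owner(7, n, p)    # 1
offset = local_index(7, n, p)   # 3

# A compiler using the owner-computes rule assigns iteration i of a
# loop writing a[i] to block_owner(i, n, p); a read of a[i-1] crosses
# a block boundary whenever i is a multiple of the block size, and
# each such reference becomes an interprocessor communication.
boundary_refs = [i for i in range(1, n) if block_owner(i, n, p) != block_owner(i - 1, n, p)]
```
This is why minimizing communication matters: with a BLOCK distribution, a nearest-neighbor stencil incurs only one boundary exchange per block, whereas a poorly chosen distribution could turn nearly every reference into a remote access.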

Original language: English (US)
Title of host publication: The Compiler Design Handbook
Subtitle of host publication: Optimizations and Machine Code Generation
Publisher: CRC Press
Pages: 373-407
Number of pages: 35
ISBN (Electronic): 9781420040579
ISBN (Print): 084931240X, 9780849312403
State: Published - Jan 1 2002

All Science Journal Classification (ASJC) codes

  • General Computer Science
