Memory bank locality and its usage in reducing energy consumption

Research output: Chapter in Book/Report/Conference proceeding › Chapter


Bank locality, in the context of a multi-bank memory system, can be defined as restricting the load/store accesses issued at a given time to a small set of memory banks. An optimizing compiler can modify a given input code to improve its bank locality. Enhancing bank locality has several practical advantages, the most important of which is reduced memory energy consumption. Recent trends indicate that energy consumption is fast becoming a first-order design parameter as processor-based systems continue to become more complex and multi-functional. Off-chip memory energy consumption in particular can be a limiting factor in many embedded system designs. This paper presents a novel compiler-based strategy for maximizing the benefits of the low-power operating modes available in some recent DRAM-based multi-bank memory systems. In this strategy, the compiler uses linear algebra to represent and optimize bank locality in a mathematical framework. We show that exploiting bank locality can be cast as a combination of loop (iteration space) and array layout (data space) transformations. We also present experimental data demonstrating the effectiveness of our optimization strategy. Our results show that exploiting bank locality can yield large energy savings.

Original language: English (US)
Title of host publication: Designing Embedded Processors
Subtitle of host publication: A Low Power Perspective
Publisher: Springer Netherlands
Number of pages: 26
ISBN (Print): 9781402058684
State: Published - Dec 1 2007

All Science Journal Classification (ASJC) codes

  • General Engineering

