TY - GEN
T1 - Runtime system support for software-guided disk power management
AU - Son, Seung Woo
AU - Kandemir, Mahmut
PY - 2007
Y1 - 2007
AB - The disk subsystem is known to be a major contributor to the overall power budget of large-scale parallel systems. Most scientific applications today rely heavily on disk I/O for out-of-core computations, checkpointing, and visualization of data. To reduce excess energy consumption in the disk subsystem, prior studies proposed several hardware- or OS-based disk power management schemes. While such schemes are effective in certain cases, they may miss opportunities for better energy savings due to their reactive nature. While compiler-based schemes can make more accurate decisions for a given application by extracting disk access patterns statically, the lack of runtime information on the status of shared disks may lead to incorrect decisions when multiple applications exercise the same set of disks concurrently. In this paper, we propose a runtime-system-based approach that provides more effective disk power management. In our scheme, the compiler provides crucial information on future disk access patterns and preferred disk speeds from the perspective of individual applications, and a runtime system uses this information along with the current state of the shared disks to make decisions that are agreeable to all applications. We implemented our runtime system support within PVFS2, a parallel file system. Our experimental results with four I/O-intensive scientific applications indicate large energy savings: 19.4% and 39.9% over the previously proposed pure-software and pure-hardware schemes, respectively. We further show that our scheme achieves consistent energy savings across a varying number and mix of applications and different disk layouts of data.
UR - http://www.scopus.com/inward/record.url?scp=53349102384&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=53349102384&partnerID=8YFLogxK
U2 - 10.1109/CLUSTR.2007.4629226
DO - 10.1109/CLUSTR.2007.4629226
M3 - Conference contribution
AN - SCOPUS:53349102384
SN - 1424413885
SN - 9781424413881
T3 - Proceedings - IEEE International Conference on Cluster Computing, ICCC
SP - 139
EP - 148
BT - Proceedings - 2007 IEEE International Conference on Cluster Computing, CLUSTER 2007
T2 - 2007 IEEE International Conference on Cluster Computing, CLUSTER 2007
Y2 - 19 September 2007 through 20 September 2007
ER -