TY - GEN
T1 - Topology-Aware I/O Caching for Shared Storage Systems
AU - Son, Seung Woo
AU - Kandemir, Mahmut
AU - Zhang, Yuanrui
AU - Garg, Rajat
N1 - Publisher Copyright:
Copyright © (2009) by the International Society for Computers and Their Applications. All rights reserved.
PY - 2009
Y1 - 2009
N2 - The main contribution of this paper is a topology-aware storage caching scheme for parallel architectures. In a parallel system with multiple storage caches, these caches form a shared cache space, and effective management of this space is a critical issue. Of particular interest is data migration (i.e., moving data from one storage cache to another at runtime), which may help reduce the distance between a data block and its customers. As the data access and sharing patterns change during execution, we can migrate data in the shared cache space to reduce access latencies. The proposed storage caching approach, which is based on the two-dimensional post-office placement model, takes advantage of the variances across the access latencies of the different storage caches (from a given node’s perspective), by selecting the most appropriate location (cache) to place a data block shared by multiple nodes. This paper also presents experimental results from our implementation of this data migration-based scheme. The results reveal that the improvements brought by our proposed scheme in average hit latency, average miss rate, and average data access latency are 29.1%, 7.0% and 32.7%, respectively, over an alternative storage caching scheme.
UR - http://www.scopus.com/inward/record.url?scp=85020901843&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85020901843&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85020901843
T3 - 22nd ISCA International Conference on Parallel and Distributed Computing and Communication Systems 2009, PDCCS 2009
SP - 143
EP - 150
BT - 22nd ISCA International Conference on Parallel and Distributed Computing and Communication Systems 2009, PDCCS 2009
PB - International Society for Computers and Their Applications (ISCA)
T2 - 22nd International Conference on Parallel and Distributed Computing and Communication Systems, PDCCS 2009
Y2 - 24 September 2009 through 26 September 2009
ER -