TY - GEN
T1 - On the performance of the POSIX I/O interface to PVFS
AU - Vilayannur, Murali
AU - Ross, Robert B.
AU - Carns, Philip H.
AU - Thakur, Rajeev
AU - Sivasubramaniam, Anand
AU - Kandemir, Mahmut
N1 - Copyright:
Copyright 2012 Elsevier B.V., All rights reserved.
PY - 2004
Y1 - 2004
N2 - The ever-increasing gap in performance between CPU/memory technologies and the I/O subsystem (disks, I/O buses) in modern workstations has exacerbated the I/O bottlenecks inherent in applications that access large disk-resident data sets. A common technique to alleviate the I/O bottlenecks on clusters of workstations is the use of parallel file systems. One such parallel file system is the Parallel Virtual File System (PVFS), which is a freely available tool to achieve high-performance I/O on Linux-based clusters. In this paper, we describe the performance and scalability of the UNIX I/O interface to PVFS. To illustrate the performance, we present experimental results using Bonnie++, a commonly used file system benchmark to test file system throughput; a synthetic parallel I/O application for calculating aggregate read and write bandwidths; and a synthetic benchmark which calculates the time taken to untar the Linux kernel source tree to measure the performance of a large number of small file operations. We obtained aggregate read and write bandwidths as high as 550 MB/s with a Myrinet-based network and 160 MB/s with Fast Ethernet.
AB - The ever-increasing gap in performance between CPU/memory technologies and the I/O subsystem (disks, I/O buses) in modern workstations has exacerbated the I/O bottlenecks inherent in applications that access large disk-resident data sets. A common technique to alleviate the I/O bottlenecks on clusters of workstations is the use of parallel file systems. One such parallel file system is the Parallel Virtual File System (PVFS), which is a freely available tool to achieve high-performance I/O on Linux-based clusters. In this paper, we describe the performance and scalability of the UNIX I/O interface to PVFS. To illustrate the performance, we present experimental results using Bonnie++, a commonly used file system benchmark to test file system throughput; a synthetic parallel I/O application for calculating aggregate read and write bandwidths; and a synthetic benchmark which calculates the time taken to untar the Linux kernel source tree to measure the performance of a large number of small file operations. We obtained aggregate read and write bandwidths as high as 550 MB/s with a Myrinet-based network and 160 MB/s with Fast Ethernet.
UR - http://www.scopus.com/inward/record.url?scp=3042530916&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=3042530916&partnerID=8YFLogxK
U2 - 10.1109/EMPDP.2004.1271463
DO - 10.1109/EMPDP.2004.1271463
M3 - Conference contribution
AN - SCOPUS:3042530916
SN - 0769520839
SN - 9780769520834
T3 - Proceedings - Euromicro Conference on Parallel, Distributed and Network-based Processing
SP - 332
EP - 339
BT - Proceedings - 12th Euromicro Conference on Parallel, Distributed and Network-based Processing, PDP 2004
T2 - Proceedings - 12th Euromicro Conference on Parallel, Distributed and Network-based Processing, PDP 2004
Y2 - 11 February 2004 through 13 February 2004
ER -