TY - JOUR
T1 - The Case for Custom Storage Backends in Distributed Storage Systems
AU - Aghayev, Abutalib
AU - Weil, Sage
AU - Kuchnik, Michael
AU - Nelson, Mark
AU - Ganger, Gregory R.
AU - Amvrosiadis, George
N1 - Publisher Copyright:
© 2020 ACM.
PY - 2020/6
Y1 - 2020/6
N2 - For a decade, the Ceph distributed file system followed the conventional wisdom of building its storage backend on top of local file systems. This is a preferred choice for most distributed file systems today, because it allows them to benefit from the convenience and maturity of battle-tested code. Ceph's experience, however, shows that this comes at a high price. First, developing a zero-overhead transaction mechanism is challenging. Second, metadata performance at the local level can significantly affect performance at the distributed level. Third, supporting emerging storage hardware is painstakingly slow. Ceph addressed these issues with BlueStore, a new backend designed to run directly on raw storage devices. In only two years since its inception, BlueStore has outperformed the previously established backends and has been adopted by 70% of users in production. By running in user space and fully controlling the I/O stack, it has enabled space-efficient metadata and data checksums, fast overwrites of erasure-coded data, and inline compression; it has also decreased performance variability and avoided a series of performance pitfalls of local file systems. Finally, it makes the adoption of backward-incompatible storage hardware possible, an important trait in a changing storage landscape that is learning to embrace hardware diversity.
AB - For a decade, the Ceph distributed file system followed the conventional wisdom of building its storage backend on top of local file systems. This is a preferred choice for most distributed file systems today, because it allows them to benefit from the convenience and maturity of battle-tested code. Ceph's experience, however, shows that this comes at a high price. First, developing a zero-overhead transaction mechanism is challenging. Second, metadata performance at the local level can significantly affect performance at the distributed level. Third, supporting emerging storage hardware is painstakingly slow. Ceph addressed these issues with BlueStore, a new backend designed to run directly on raw storage devices. In only two years since its inception, BlueStore has outperformed the previously established backends and has been adopted by 70% of users in production. By running in user space and fully controlling the I/O stack, it has enabled space-efficient metadata and data checksums, fast overwrites of erasure-coded data, and inline compression; it has also decreased performance variability and avoided a series of performance pitfalls of local file systems. Finally, it makes the adoption of backward-incompatible storage hardware possible, an important trait in a changing storage landscape that is learning to embrace hardware diversity.
UR - http://www.scopus.com/inward/record.url?scp=85086803345&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85086803345&partnerID=8YFLogxK
U2 - 10.1145/3386362
DO - 10.1145/3386362
M3 - Article
AN - SCOPUS:85086803345
SN - 1553-3077
VL - 16
JO - ACM Transactions on Storage
JF - ACM Transactions on Storage
IS - 2
M1 - 9
ER -