TY - GEN
T1 - Minimizing interference through application mapping in multi-level buffer caches
AU - Patrick, Christina M.
AU - Voshell, Nicholas
AU - Kandemir, Mahmut
PY - 2011
Y1 - 2011
AB - In this paper, we study the impact of cache sharing on co-mapped applications in multi-level buffer cache hierarchies. When the number of applications exceeds the number of resources, resource sharing is inevitable. However, unless applications are co-mapped carefully, destructive interference may cause applications to thrash and spend most of their time paging data to and from disks. We propose two novel models that predict the performance of an application in the presence of other applications, and an algorithm that uses the output of these models to perform application-to-node mapping in a multi-level buffer cache hierarchy. Our models use the reuse distances of the application reference streams and their respective I/O rates; this information can be obtained either online or offline. Our main advantage is that we do not require profile information for all application pairs to predict their interference. The goal of this mapping is to minimize destructive interference during execution. We validate the effectiveness of our models and mapping scheme using several I/O-intensive applications and find that the prediction errors of our two models are only 3.9% and 2.7%, respectively, on average. Further, using our approach, we were able to co-map applications so as to improve the performance of the buffer cache hierarchy by 43.6% and 56.8% on average over the median and worst mappings, respectively, across the entire I/O stack.
UR - http://www.scopus.com/inward/record.url?scp=79957499667&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=79957499667&partnerID=8YFLogxK
U2 - 10.1109/ISPASS.2011.5762714
DO - 10.1109/ISPASS.2011.5762714
M3 - Conference contribution
AN - SCOPUS:79957499667
SN - 9781612843681
T3 - ISPASS 2011 - IEEE International Symposium on Performance Analysis of Systems and Software
SP - 44
EP - 55
BT - ISPASS 2011 - IEEE International Symposium on Performance Analysis of Systems and Software
T2 - IEEE International Symposium on Performance Analysis of Systems and Software, ISPASS 2011
Y2 - 10 April 2011 through 12 April 2011
ER -