TY - JOUR
T1 - Enhancing computation-to-core assignment with physical location information
AU - Kislal, Orhan
AU - Kotra, Jagadish
AU - Tang, Xulong
AU - Kandemir, Mahmut Taylan
AU - Jung, Myoungsoo
N1 - Publisher Copyright:
© 2018 ACM.
PY - 2018/6/11
Y1 - 2018/6/11
N2 - Going beyond a certain number of cores in modern architectures requires an on-chip network that is more scalable than conventional buses. However, employing an on-chip network in a manycore system (to improve scalability) makes the latencies of the data accesses issued by a core non-uniform. This non-uniformity can play a significant role in shaping overall application performance. This work presents a novel compiler strategy that exposes architecture information to the compiler to enable an optimized computation-to-core mapping. Specifically, we propose a compiler-guided scheme that takes into account the relative positions of (and distances between) cores, last-level caches (LLCs), and memory controllers (MCs) in a manycore system, and generates a mapping of computations to cores with the goal of minimizing on-chip network traffic. The experimental data collected using a set of 21 multi-threaded applications reveal that, on average, our approach reduces the on-chip network latency in a 6×6 manycore system by 38.4% in the case of private LLCs and 43.8% in the case of shared LLCs. These improvements translate into execution time improvements of 10.9% and 12.7% for the private-LLC and shared-LLC based systems, respectively.
AB - Going beyond a certain number of cores in modern architectures requires an on-chip network that is more scalable than conventional buses. However, employing an on-chip network in a manycore system (to improve scalability) makes the latencies of the data accesses issued by a core non-uniform. This non-uniformity can play a significant role in shaping overall application performance. This work presents a novel compiler strategy that exposes architecture information to the compiler to enable an optimized computation-to-core mapping. Specifically, we propose a compiler-guided scheme that takes into account the relative positions of (and distances between) cores, last-level caches (LLCs), and memory controllers (MCs) in a manycore system, and generates a mapping of computations to cores with the goal of minimizing on-chip network traffic. The experimental data collected using a set of 21 multi-threaded applications reveal that, on average, our approach reduces the on-chip network latency in a 6×6 manycore system by 38.4% in the case of private LLCs and 43.8% in the case of shared LLCs. These improvements translate into execution time improvements of 10.9% and 12.7% for the private-LLC and shared-LLC based systems, respectively.
UR - http://www.scopus.com/inward/record.url?scp=85084437596&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85084437596&partnerID=8YFLogxK
U2 - 10.1145/3192366.3192386
DO - 10.1145/3192366.3192386
M3 - Article
AN - SCOPUS:85084437596
SN - 1523-2867
VL - 53
SP - 312
EP - 327
JO - ACM SIGPLAN Notices
JF - ACM SIGPLAN Notices
IS - 4
ER -