TY - GEN
T1 - MASCOT: A Quantization Framework for Efficient Matrix Factorization in Recommender Systems
T2 - 21st IEEE International Conference on Data Mining, ICDM 2021
AU - Ko, Yunyong
AU - Yu, Jae Seo
AU - Bae, Hong Kyun
AU - Park, Yongjun
AU - Lee, Dongwon
AU - Kim, Sang Wook
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
AB - In recent years, quantization methods have successfully accelerated the training of large deep neural network (DNN) models by reducing the level of precision in computing operations (e.g., forward/backward passes) without sacrificing model accuracy. In this work, therefore, we attempt to apply such a quantization idea to popular matrix factorization (MF) methods to deal with the growing scale of models and datasets in recommender systems. However, to our dismay, we observe that state-of-the-art quantization methods are not effective in the training of MF models, unlike their successes in the training of DNN models. To explain this phenomenon, we posit that two distinctive features of MF model training account for the difference: (i) the training of MF models is much more memory-intensive than that of DNN models, and (ii) the quantization errors across users and items in recommendation are not uniform. Based on these observations, we develop a quantization framework for MF models, named MASCOT, which employs novel strategies (i.e., m-quantization and g-switching) to successfully address the aforementioned limitations of quantization in the training of MF models. A comprehensive evaluation on four real-world datasets demonstrates that MASCOT improves the training performance of MF models by about 45% compared to training without quantization, while maintaining low model errors, and that the strategies and implementation optimizations of MASCOT are quite effective in the training of MF models. For further details about MASCOT, we release the code and the datasets at: https://github.com/Yujaeseo/ICDM-2021_MASCOT.
UR - https://www.scopus.com/pages/publications/85125178359
UR - https://www.scopus.com/inward/citedby.url?scp=85125178359&partnerID=8YFLogxK
U2 - 10.1109/ICDM51629.2021.00039
DO - 10.1109/ICDM51629.2021.00039
M3 - Conference contribution
AN - SCOPUS:85125178359
T3 - Proceedings - IEEE International Conference on Data Mining, ICDM
SP - 290
EP - 299
BT - Proceedings - 21st IEEE International Conference on Data Mining, ICDM 2021
A2 - Bailey, James
A2 - Miettinen, Pauli
A2 - Koh, Yun Sing
A2 - Tao, Dacheng
A2 - Wu, Xindong
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 7 December 2021 through 10 December 2021
ER -