TY - GEN
T1 - CURRENT STATE AND BENCHMARKING OF GENERATIVE ARTIFICIAL INTELLIGENCE FOR ADDITIVE MANUFACTURING
AU - Surovi, Nowrin Akter
AU - Witherell, Paul
AU - Mathew, Vinay Saji
AU - Kumara, Soundar
N1 - Publisher Copyright:
© 2024 by National Institute of Standards and Technology (NIST).
PY - 2024
Y1 - 2024
N2 - Additive Manufacturing (AM) is gaining popularity in industry for its cost-effectiveness and time savings. However, AM faces challenges that must be addressed to improve its efficiency. While Machine Learning (ML) can tackle various AM challenges, each model is often limited to a specific issue, necessitating multiple models. In contrast, Generative Artificial Intelligence (GenAI) has the potential to mitigate instance-specific bias due to its broader training. This paper presents a comprehensive methodology for evaluating the capabilities of existing GenAI tools on diverse AM-related tasks. We propose 35 metrics in three categories: agnostic, domain-task, and problem-task metrics. Additionally, we introduce a scoring matrix, a practical tool for assessing the responses of different GenAI tools. The study collects data from diverse published papers, which are used to create inquiries for the GenAI tools. The results demonstrate that multi-modal transformer-based models, such as GPT-4 and Gemini (previously Bard), can handle both AM image and text data. In contrast, uni-modal models such as GPT-3 and Llama 2 are proficient at processing AM text data. Furthermore, image-generation models such as DALL·E 3 and Stable Diffusion can accept AM text data and generate images. The performance of these models varies across AM-related tasks; this variation may stem from their underlying architectures and training datasets.
AB - Additive Manufacturing (AM) is gaining popularity in industry for its cost-effectiveness and time savings. However, AM faces challenges that must be addressed to improve its efficiency. While Machine Learning (ML) can tackle various AM challenges, each model is often limited to a specific issue, necessitating multiple models. In contrast, Generative Artificial Intelligence (GenAI) has the potential to mitigate instance-specific bias due to its broader training. This paper presents a comprehensive methodology for evaluating the capabilities of existing GenAI tools on diverse AM-related tasks. We propose 35 metrics in three categories: agnostic, domain-task, and problem-task metrics. Additionally, we introduce a scoring matrix, a practical tool for assessing the responses of different GenAI tools. The study collects data from diverse published papers, which are used to create inquiries for the GenAI tools. The results demonstrate that multi-modal transformer-based models, such as GPT-4 and Gemini (previously Bard), can handle both AM image and text data. In contrast, uni-modal models such as GPT-3 and Llama 2 are proficient at processing AM text data. Furthermore, image-generation models such as DALL·E 3 and Stable Diffusion can accept AM text data and generate images. The performance of these models varies across AM-related tasks; this variation may stem from their underlying architectures and training datasets.
UR - http://www.scopus.com/inward/record.url?scp=85210486631&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85210486631&partnerID=8YFLogxK
U2 - 10.1115/DETC2024-144076
DO - 10.1115/DETC2024-144076
M3 - Conference contribution
AN - SCOPUS:85210486631
T3 - Proceedings of the ASME Design Engineering Technical Conference
BT - 44th Computers and Information in Engineering Conference (CIE)
PB - American Society of Mechanical Engineers (ASME)
T2 - ASME 2024 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, IDETC-CIE 2024
Y2 - 25 August 2024 through 28 August 2024
ER -