TY - GEN
T1 - On the Effectiveness of LLMs for Manual Test Verifications
AU - Peixoto, Myron
AU - Baía, Davy
AU - Nascimento, Nathalia
AU - Alencar, Paulo
AU - Fonseca, Baldoino
AU - Ribeiro, Márcio
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Background: Manual testing is vital for detecting issues missed by automated tests, but specifying accurate verifications is challenging. Aims: This study aims to explore the use of Large Language Models (LLMs) to produce verifications for manual tests. Method: We conducted two independent and complementary exploratory studies. The first study used 2 closed-source and 6 open-source LLMs to generate verifications for manual test steps and evaluated their similarity to the original verifications. The second study recruited software testing professionals to assess their perception of and agreement with the generated verifications compared to the original ones. Results: The open-source models Mistral-7B and Phi-3-mini-4k demonstrated effectiveness and consistency comparable to closed-source models like Gemini-1.5-flash and GPT-3.5-turbo in generating manual test verifications. However, the agreement level among professional testers was slightly above 40%, indicating both promise and room for improvement. While some LLM-generated verifications were considered better than the originals, there were also concerns about AI hallucinations, where verifications deviated significantly from expectations. Conclusion: We contributed by evaluating the effectiveness of 8 LLMs through similarity and human acceptance studies, identifying top-performing models like Mistral-7B and GPT-3.5-turbo. Although the models show potential, the relatively modest 40% agreement level highlights the need for further refinement. Enhancing the accuracy, relevance, and clarity of the generated verifications is crucial to ensure greater reliability in real-world testing scenarios.
AB - Background: Manual testing is vital for detecting issues missed by automated tests, but specifying accurate verifications is challenging. Aims: This study aims to explore the use of Large Language Models (LLMs) to produce verifications for manual tests. Method: We conducted two independent and complementary exploratory studies. The first study used 2 closed-source and 6 open-source LLMs to generate verifications for manual test steps and evaluated their similarity to the original verifications. The second study recruited software testing professionals to assess their perception of and agreement with the generated verifications compared to the original ones. Results: The open-source models Mistral-7B and Phi-3-mini-4k demonstrated effectiveness and consistency comparable to closed-source models like Gemini-1.5-flash and GPT-3.5-turbo in generating manual test verifications. However, the agreement level among professional testers was slightly above 40%, indicating both promise and room for improvement. While some LLM-generated verifications were considered better than the originals, there were also concerns about AI hallucinations, where verifications deviated significantly from expectations. Conclusion: We contributed by evaluating the effectiveness of 8 LLMs through similarity and human acceptance studies, identifying top-performing models like Mistral-7B and GPT-3.5-turbo. Although the models show potential, the relatively modest 40% agreement level highlights the need for further refinement. Enhancing the accuracy, relevance, and clarity of the generated verifications is crucial to ensure greater reliability in real-world testing scenarios.
UR - https://www.scopus.com/pages/publications/105009161789
UR - https://www.scopus.com/inward/citedby.url?scp=105009161789&partnerID=8YFLogxK
U2 - 10.1109/DeepTest66595.2025.00012
DO - 10.1109/DeepTest66595.2025.00012
M3 - Conference contribution
AN - SCOPUS:105009161789
T3 - Proceedings - 2025 IEEE/ACM International Workshop on Deep Learning for Testing and Testing for Deep Learning, DeepTest 2025
SP - 45
EP - 52
BT - Proceedings - 2025 IEEE/ACM International Workshop on Deep Learning for Testing and Testing for Deep Learning, DeepTest 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2025 IEEE/ACM International Workshop on Deep Learning for Testing and Testing for Deep Learning, DeepTest 2025
Y2 - 3 May 2025
ER -