TY - GEN
T1 - How Effective are LLMs for Data Science Coding? A Controlled Experiment
AU - Nascimento, Nathalia
AU - Guimaraes, Everton
AU - Chintakunta, Sai Sanjna
AU - Boominathan, Santhosh Anitha
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - The adoption of Large Language Models (LLMs) for code generation in data science offers substantial potential for enhancing tasks such as data manipulation, statistical analysis, and visualization. However, the effectiveness of these models in the data science domain remains underexplored. This paper presents a controlled experiment that empirically assesses the performance of four leading LLM-based AI assistants, namely Microsoft Copilot (GPT-4 Turbo), ChatGPT (o1-preview), Claude (3.5 Sonnet), and Perplexity Labs (Llama-3.1-70b-instruct), on a diverse set of data science coding challenges sourced from the StrataScratch platform. Using the Goal-Question-Metric (GQM) approach, we evaluated each model's effectiveness across task types (Analytical, Algorithm, Visualization) and varying difficulty levels. Our statistical testing confirms that all models achieved success rates significantly above 50%, demonstrating performance beyond chance. ChatGPT and Claude significantly exceeded the 60% threshold, but no model reached 70%, indicating limitations in achieving higher accuracy. ChatGPT maintained consistent performance across difficulty levels, whereas Claude's success varied with task complexity. Hypothesis testing indicates that task type does not significantly affect the success rate overall. For analytical tasks, efficiency analysis shows no significant differences in execution times, although ChatGPT tended to be slower and less predictable despite its high success rates. For visualization tasks, while the similarity quality of outputs across LLMs is comparable, ChatGPT consistently delivered the most accurate outputs. This study provides a structured, empirical evaluation of LLMs in data science, offering insights that support informed model selection tailored to specific task demands. Our findings establish a framework for future AI assessments, emphasizing the value of rigorous evaluation beyond basic accuracy measures.
AB - The adoption of Large Language Models (LLMs) for code generation in data science offers substantial potential for enhancing tasks such as data manipulation, statistical analysis, and visualization. However, the effectiveness of these models in the data science domain remains underexplored. This paper presents a controlled experiment that empirically assesses the performance of four leading LLM-based AI assistants, namely Microsoft Copilot (GPT-4 Turbo), ChatGPT (o1-preview), Claude (3.5 Sonnet), and Perplexity Labs (Llama-3.1-70b-instruct), on a diverse set of data science coding challenges sourced from the StrataScratch platform. Using the Goal-Question-Metric (GQM) approach, we evaluated each model's effectiveness across task types (Analytical, Algorithm, Visualization) and varying difficulty levels. Our statistical testing confirms that all models achieved success rates significantly above 50%, demonstrating performance beyond chance. ChatGPT and Claude significantly exceeded the 60% threshold, but no model reached 70%, indicating limitations in achieving higher accuracy. ChatGPT maintained consistent performance across difficulty levels, whereas Claude's success varied with task complexity. Hypothesis testing indicates that task type does not significantly affect the success rate overall. For analytical tasks, efficiency analysis shows no significant differences in execution times, although ChatGPT tended to be slower and less predictable despite its high success rates. For visualization tasks, while the similarity quality of outputs across LLMs is comparable, ChatGPT consistently delivered the most accurate outputs. This study provides a structured, empirical evaluation of LLMs in data science, offering insights that support informed model selection tailored to specific task demands. Our findings establish a framework for future AI assessments, emphasizing the value of rigorous evaluation beyond basic accuracy measures.
UR - https://www.scopus.com/pages/publications/105009113116
UR - https://www.scopus.com/pages/publications/105009113116#tab=citedBy
U2 - 10.1109/MSR66628.2025.00041
DO - 10.1109/MSR66628.2025.00041
M3 - Conference contribution
AN - SCOPUS:105009113116
T3 - Proceedings - 2025 IEEE/ACM 22nd International Conference on Mining Software Repositories, MSR 2025
SP - 211
EP - 222
BT - Proceedings - 2025 IEEE/ACM 22nd International Conference on Mining Software Repositories, MSR 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 22nd IEEE/ACM International Conference on Mining Software Repositories, MSR 2025
Y2 - 27 April 2025 through 29 April 2025
ER -