How Effective are LLMs for Data Science Coding? A Controlled Experiment

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The adoption of Large Language Models (LLMs) for code generation in data science offers substantial potential for enhancing tasks such as data manipulation, statistical analysis, and visualization. However, the effectiveness of these models in the data science domain remains underexplored. This paper presents a controlled experiment that empirically assesses the performance of four leading LLM-based AI assistants, Microsoft Copilot (GPT-4 Turbo), ChatGPT (o1-preview), Claude (3.5 Sonnet), and Perplexity Labs (Llama-3.1-70b-instruct), on a diverse set of data science coding challenges sourced from the StrataScratch platform. Using the Goal-Question-Metric (GQM) approach, we evaluated each model's effectiveness across task types (Analytical, Algorithm, Visualization) and varying difficulty levels. Our statistical testing confirms that all models achieved success rates significantly above 50%, demonstrating performance beyond chance. ChatGPT and Claude significantly exceeded the 60% threshold, but no model reached 70%, indicating limits on achievable accuracy. ChatGPT maintained consistent performance across difficulty levels, whereas Claude's success varied with task complexity. Hypothesis testing indicates that task type does not significantly affect success rates overall. For analytical tasks, efficiency analysis shows no significant differences in execution times, though ChatGPT tended to be slower and less predictable despite its high success rate. For visualization tasks, while output similarity across LLMs is comparable, ChatGPT consistently delivered the most accurate results. This study provides a structured, empirical evaluation of LLMs in data science, offering insights that support informed model selection tailored to specific task demands. Our findings establish a framework for future AI assessments and underscore the value of rigorous evaluation beyond basic accuracy measures.
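To make the threshold claims concrete: results such as "significantly above 50%" and "exceeded the 60% threshold" are the kind of finding a one-sided exact binomial test would support. The abstract does not name the specific test used, so the Python sketch below is only an illustration of that style of analysis, and the success counts in it are hypothetical placeholders, not results from the paper.

from scipy.stats import binomtest

# Minimal sketch, assuming a one-sided exact binomial test of a model's
# success count against a fixed baseline rate. The counts below are
# hypothetical placeholders, not the paper's reported results.
def exceeds_baseline(successes, trials, baseline, alpha=0.05):
    """True if the observed success rate is significantly above `baseline`."""
    result = binomtest(successes, trials, p=baseline, alternative="greater")
    return result.pvalue < alpha

# Hypothetical example: 65 successes on 100 tasks.
print(exceeds_baseline(65, 100, baseline=0.50))  # beyond-chance check
print(exceeds_baseline(65, 100, baseline=0.60))  # 60% threshold check

Run per model, such tests distinguish "better than chance" from "meets a practical accuracy bar", which is the distinction the abstract draws between the 50%, 60%, and 70% thresholds.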

Original language: English (US)
Title of host publication: Proceedings - 2025 IEEE/ACM 22nd International Conference on Mining Software Repositories, MSR 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 211-222
Number of pages: 12
ISBN (Electronic): 9798331501839
DOIs
State: Published - 2025
Event: 22nd IEEE/ACM International Conference on Mining Software Repositories, MSR 2025 - Ottawa, Canada
Duration: Apr 27 2025 - Apr 29 2025

Publication series

Name: Proceedings - 2025 IEEE/ACM 22nd International Conference on Mining Software Repositories, MSR 2025

Conference

Conference: 22nd IEEE/ACM International Conference on Mining Software Repositories, MSR 2025
Country/Territory: Canada
City: Ottawa
Period: 4/27/25 - 4/29/25

All Science Journal Classification (ASJC) codes

  • Safety, Risk, Reliability and Quality
  • Computer Science Applications
  • Software
