Robust Unlearning for Large Language Models

Kang Gu, Md Rafi Ur Rashid, Najrin Sultana, Shagufta Mehnaz

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

With the rapid development of LLMs, we have witnessed intense competition among major LLM products such as ChatGPT, LLaMA, and Gemini. However, various issues with the training corpus (e.g., privacy leakage and copyright violation) remain underexplored. For example, The New York Times sued OpenAI and Microsoft for infringing on its copyrights by using millions of its articles for training. From the perspective of LLM practitioners, handling such unintended privacy violations can be challenging. Prior work has mainly approached the “unlearning” problem of LLMs via first-order gradient information, but such methods mostly lack theoretical guarantees. In this paper, we revisit unlearning from the perspective of second-order information (the Hessian). Our unlearning algorithms, inspired by the classic Newton update, are not only data-agnostic and model-agnostic but also admit an upper bound on utility or privacy loss. Through a comprehensive evaluation on common NLP datasets and case studies on real-world datasets, our methods consistently outperform first-order methods.
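This record contains only the abstract, not the paper's algorithms. As a rough illustration of the Newton-style removal update the abstract alludes to, here is a minimal sketch in the spirit of classic second-order (influence-function) unlearning: move the parameters along the inverse Hessian of the retained-data loss applied to the forget-set gradient. Everything below (the logistic-regression setting, the name newton_unlearn, the damping term) is an illustrative assumption, not the authors' method.

import torch

def newton_unlearn(theta, X_retain, y_retain, X_forget, y_forget, damping=1e-3):
    """One Newton-style removal step: theta' = theta + H^{-1} g_forget,
    where H is the (damped) Hessian of the loss on the retained data and
    g_forget is the gradient of the loss on the forget set."""
    def loss(params, X, y):
        # Binary cross-entropy loss of a linear (logistic-regression) model.
        return torch.nn.functional.binary_cross_entropy_with_logits(X @ params, y)

    # Gradient of the loss on the data to be forgotten.
    g_forget = torch.autograd.functional.jacobian(
        lambda p: loss(p, X_forget, y_forget), theta)

    # Damped Hessian of the loss on the retained data; the damping term
    # keeps the matrix invertible.
    H = torch.autograd.functional.hessian(
        lambda p: loss(p, X_retain, y_retain), theta)
    H = H + damping * torch.eye(theta.numel())

    # Moving along H^{-1} g_forget approximately cancels the forget set's
    # contribution to the optimum, as in influence-function analysis.
    return theta + torch.linalg.solve(H, g_forget)

# Toy usage: "forget" the first 10 points of a random problem. In practice,
# theta would be the trained model's parameters; zeros keep the demo short.
torch.manual_seed(0)
X, y = torch.randn(100, 5), (torch.rand(100) > 0.5).float()
theta = torch.zeros(5)
theta_unlearned = newton_unlearn(theta, X[10:], y[10:], X[:10], y[:10])
print(theta_unlearned)

For LLM-scale models the full Hessian is infeasible; the exact solve above would be replaced with a Hessian-vector-product approximation, which is one reason the paper's treatment of second-order information is the interesting part.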

Original language: English (US)
Title of host publication: Advances in Knowledge Discovery and Data Mining - 29th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2025, Proceedings
Editors: Xintao Wu, Myra Spiliopoulou, Can Wang, Vipin Kumar, Longbing Cao, Yanqiu Wu, Zhangkai Wu, Yu Yao
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 143-155
Number of pages: 13
ISBN (Print): 9789819681853
DOIs
State: Published - 2025
Event: 29th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2025 - Sydney, Australia
Duration: Jun 10, 2025 – Jun 13, 2025

Publication series

Name: Lecture Notes in Computer Science
Volume: 15874 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 29th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2025
Country/Territory: Australia
City: Sydney
Period: 6/10/25 – 6/13/25

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • General Computer Science
