Workshop on Large Language Models' Interpretability and Trustworthiness (LLMIT)

Tulika Saha, Sriparna Saha, Debasis Ganguly, Prasenjit Mitra

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Scopus citations

Abstract

Large language models (LLMs), when scaled from millions to billions of parameters, have been shown to exhibit the so-called 'emergence' effect: they not only produce semantically correct and coherent text, but also adapt surprisingly well to small changes in the contexts supplied as inputs (commonly called prompts). Despite producing semantically coherent and potentially relevant text for a given context, LLMs are prone to yielding incorrect information. This misinformation generation, or the so-called hallucination problem of an LLM, worsens when an adversary manipulates prompts to their own advantage, e.g., generating false propaganda to disrupt communal harmony, or generating false information to lure consumers toward targeted products. Not only does human consumption of LLM-generated hallucinated content pose societal threats; such misinformation, when used in prompts, may also have detrimental effects on in-context learning (also known as few-shot prompt learning). In light of these problems of LLM usage, we argue that it is necessary to foster research not only on identifying misinformation in LLM-generated content, but also on mitigating the propagation effects of this generated misinformation on downstream predictive tasks, thus leading to more robust and effective leveraging of in-context learning.

Original language: English (US)
Title of host publication: CIKM 2023 - Proceedings of the 32nd ACM International Conference on Information and Knowledge Management
Publisher: Association for Computing Machinery
Pages: 5290-5293
Number of pages: 4
ISBN (Electronic): 9798400701245
DOIs
State: Published - Oct 21 2023
Event: 32nd ACM International Conference on Information and Knowledge Management, CIKM 2023 - Birmingham, United Kingdom
Duration: Oct 21 2023 – Oct 25 2023

Publication series

Name: International Conference on Information and Knowledge Management, Proceedings

Conference

Conference: 32nd ACM International Conference on Information and Knowledge Management, CIKM 2023
Country/Territory: United Kingdom
City: Birmingham
Period: 10/21/23 – 10/25/23

All Science Journal Classification (ASJC) codes

  • General Business, Management and Accounting
  • General Decision Sciences
