TY - GEN
T1 - LLM-ACTR
T2 - 2025 AAAI Spring Symposium Series, SSS 2025
AU - Wu, Siyu
AU - Oltramari, Alessandro
AU - Francis, Jonathan
AU - Giles, C. Lee
AU - Ritter, Frank E.
N1 - Publisher Copyright:
Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2025/5/28
Y1 - 2025/5/28
N2 - Using off-the-shelf large language models (LLMs) in manufacturing decision-making often results in broadly competent but noisy behavior. Previous approaches that employ LLMs for decision-making struggle with complex reasoning tasks that require deliberate cognition over fast and intuitive inference. These approaches often report issues related to insufficient grounding, such as human-level but unhuman-like behaviors. Here, we move toward addressing this gap and ask whether language models can learn from cognitive models for human-like decisions. We introduce VSM-ACTR 2.0, an ACT-R cognitive model for manufacturing solutions, and LLM-ACTR, a developing framework for knowledge transfer from cognitive models to language models. The ACT-R cognitive architecture is designed to computationally model the internal mechanisms of human cognitive decision-making. LLM-ACTR extracts knowledge from ACT-R's internal decision-making processes, represents it as latent neural representations, and injects this content vector into trainable LLM adapter layers. It then fine-tunes the LLMs for downstream decision-making predictions. We find that, after fine-tuning and adding the content vector to the activations during the LLM forward pass, the LLM offers better representations of human decision-making behaviors on a novel Design for Manufacturing problem, compared to an LLM-only model that employs chain-of-thought reasoning strategies. Taken together, the results open up new research directions for equipping LLMs with the necessary knowledge to computationally model and replicate the internal mechanisms of human cognitive decision-making.
UR - https://www.scopus.com/pages/publications/105016634419
U2 - 10.1609/aaaiss.v5i1.35610
DO - 10.1609/aaaiss.v5i1.35610
M3 - Conference contribution
AN - SCOPUS:105016634419
T3 - AAAI Spring Symposium - Technical Report
SP - 340
EP - 349
BT - AAAI Spring Symposium - Technical Report
A2 - Petrick, Ron
A2 - Geib, Christopher
PB - Association for the Advancement of Artificial Intelligence
Y2 - 31 March 2025 through 2 April 2025
ER -