Preemptive Detection and Correction of Misaligned Actions in LLM Agents

1 UKP Lab, TU Darmstadt   2 Department of Electrical and Computer Engineering, Queen's University  

Why do we need to preemptively detect misaligned actions?

LLM agents are transforming our daily lives — but what happens when they go off track? 🤖⚠️ Imagine your AI assistant accidentally buying an unwanted item, costing you money. 💸

Our paper introduces InferAct, a preemptive detection and correction framework that collaborates with humans to rectify misaligned actions before they are executed — keeping LLM agents reliable and safe. InferAct leverages the belief reasoning ability of LLMs, grounded in Theory-of-Mind, to detect misaligned actions before execution. Once a misalignment is detected, InferAct alerts users for timely correction, preventing adverse outcomes and enhancing the reliability of LLM agents' decision-making.

An example of InferAct running in a real-world task.

Abstract

Deploying LLM-based agents in real-life applications often faces a critical challenge: the misalignment between agents' behavior and user intent. Such misalignment may lead agents to unintentionally execute critical actions that carry negative outcomes (e.g., accidentally triggering a "buy-now" in web shopping), resulting in undesirable or even irreversible consequences. Although addressing these issues is crucial, the preemptive detection and correction of misaligned actions remains relatively underexplored. To fill this gap, we introduce InferAct, a novel approach that leverages the belief reasoning ability of LLMs, grounded in Theory-of-Mind, to detect misaligned actions before execution. Once a misalignment is detected, InferAct alerts users for timely correction, preventing adverse outcomes and enhancing the reliability of LLM agents' decision-making processes. Experiments on three widely used tasks demonstrate that InferAct achieves up to 20% improvement in Macro-F1 over baselines in misaligned action detection. An in-depth evaluation of misalignment correction further highlights InferAct's effectiveness in improving agent alignment.

InferAct achieves up to 20% improvement in Macro-F1 over baselines in misaligned action detection.

The synergy between InferAct, the Actor agent, and humans.

The Actor, guided by InferAct, consistently outperforms baselines over three iterations with both binary and natural language feedback.

BibTeX


@article{fang2024preemptive,
  title={Preemptive Detection and Correction of Misaligned Actions in {LLM} Agents},
  author={Fang, Haishuo and Zhu, Xiaodan and Gurevych, Iryna},
  journal={arXiv preprint arXiv:2407.11843},
  year={2024}
}