arxiv:2510.12264

Reducing Belief Deviation in Reinforcement Learning for Active Reasoning

Published in October 2025

AI-generated summary

Tracking belief deviation and truncating training trajectories improves training stability and performance in large language model agents performing active reasoning tasks.

Abstract

Active reasoning requires large language model (LLM) agents to interact with external sources and strategically gather information to solve problems in multiple turns. Central to this process is belief tracking: maintaining an accurate representation of the underlying state and uncertainty in understanding and solving the problem. However, due to limited reasoning capabilities, LLM-based agents often suffer from belief deviation: their internal beliefs drift from the true problem state, leading to loss of state awareness and uninformative or repetitive actions. Once this happens, errors compound in the trajectories used for reinforcement learning (RL), leading to misattributed credits and limited exploration. To address this issue, we propose to track belief deviation and develop T^3, a simple yet principled method that detects excessive deviation and truncates training trajectories to suppress uninformative tail effects. In doing so, T^3 preserves credits for informative prefixes and systematically improves policy optimization. Across 5 challenging tasks, T^3 consistently enhances training stability and yields performance gains of up to 30 points while cutting token cost by up to 34%. These results highlight belief control as a key principle for building robust LLM agents capable of active reasoning.
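
To make the truncation idea concrete, below is a minimal Python sketch of what T^3-style trajectory truncation could look like. It is not the authors' implementation: the per-turn deviation score, the threshold, and the patience window are hypothetical stand-ins for whatever deviation measure the paper defines.

from dataclasses import dataclass

@dataclass
class Turn:
    action: str
    observation: str
    deviation: float  # hypothetical per-turn belief-deviation score

def truncate_trajectory(turns: list[Turn], threshold: float = 0.7,
                        patience: int = 2) -> list[Turn]:
    """Cut the trajectory once deviation stays above `threshold` for
    `patience` consecutive turns, keeping only the informative prefix."""
    run = 0
    for i, turn in enumerate(turns):
        run = run + 1 if turn.deviation > threshold else 0
        if run >= patience:
            # Drop the deviated run and everything after it, so RL credit
            # assignment only sees turns recorded before beliefs drifted.
            return turns[: i - patience + 1]
    return turns  # no excessive deviation; keep the full trajectory

Under this sketch, returns for the RL update are computed only over the retained prefix, which is one way to read the abstract's claim that T^3 "preserves credits for informative prefixes" while suppressing uninformative tail effects.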

Get this paper in your agent:

hf papers read 2510.12264
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
