ReAct

ReAct, short for Reasoning and Acting, is an agentic LLM framework introduced by Yao et al. in a 2022 paper from Princeton University and Google Research that interleaves reasoning traces with action calls, producing more reliable agent behavior than either pure chain-of-thought or pure tool use. A ReAct agent alternates between Thought steps (verbal reasoning about what to do next), Action steps (invoking tools or APIs), and Observation steps (incorporating tool results into the running context). This structured Thought-Action-Observation loop helps the agent stay on track and recover from errors more reliably than acting without verbalizing its plan. ReAct became the default agent pattern in LangChain's early agent implementations and remains widely used in 2024-2025. Variants include ReAct with self-reflection, ReAct with planning, and ReAct combined with verification steps. The technique works with any sufficiently capable LLM: GPT-4, Claude, Gemini, and Llama 3 and later all handle ReAct effectively. AI governance teams document the agent pattern in their AI compliance lineage because different patterns have different failure modes, audit characteristics, and behavioral predictability.
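The Thought-Action-Observation loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a production agent: the `scripted_llm` function, the `TOOLS` registry, and the `Action: tool[input]` syntax are all assumptions chosen for the example; a real implementation would replace `scripted_llm` with a call to an actual LLM API.

```python
import re

# Hypothetical tool registry; a real agent would call search engines,
# databases, or other APIs here.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def scripted_llm(transcript: str) -> str:
    """Stand-in for a real LLM call. Returns the next Thought plus
    either an Action or a Final Answer, given the transcript so far."""
    if "Observation:" not in transcript:
        return ("Thought: I need to compute 17 * 3.\n"
                "Action: calculator[17 * 3]")
    return "Thought: I have the result.\nFinal Answer: 51"

def react_loop(question: str, llm=scripted_llm, max_steps: int = 5) -> str:
    """Run the ReAct loop: ask the model for a Thought/Action,
    execute the Action, append the Observation, and repeat until
    the model emits a Final Answer or the step budget runs out."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)          # Thought (+ Action or Final Answer)
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action: (\w+)\[(.*)\]", step)
        if match:
            tool, arg = match.groups()
            observation = TOOLS[tool](arg)   # execute the tool
            transcript += f"Observation: {observation}\n"
    return "no answer within step budget"

print(react_loop("What is 17 * 3?"))
```

The loop structure is the essential part: each iteration appends the model's verbalized reasoning and the tool's observation to a single growing transcript, which is what lets the model condition its next Thought on everything that has happened so far.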

ReAct agents through Centralpoint: Centralpoint orchestrates ReAct-style agents using any LLM as the reasoning engine — Claude, GPT-4, Gemini, Llama — in a model-agnostic stack with full action logging. Tokens are metered per skill, prompts stay local, and agentic chatbots deploy through one line of JavaScript on any portal.


Related Keywords:
ReAct