
The Core Loop

Every Amodal agent runs the same fundamental cycle:

Explore → what's going on? query systems, load context, gather data
Plan    → what should happen? reason about findings, decide next steps
Execute → do it. call APIs, dispatch agents, present results, learn

This is the conceptual model. For the actual runtime implementation — the discriminated-union state machine that drives every agent turn — see State Machine.
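The three phases can be sketched as a single turn in TypeScript. All type and function names here are illustrative stubs, not the actual Amodal runtime API:

```typescript
// Hypothetical sketch of the Explore → Plan → Execute cycle;
// names and shapes are illustrative, not the Amodal runtime API.

type Findings = { facts: string[] };
type Plan = { steps: string[] };

function explore(question: string): Findings {
  // Explore: query systems, load context, gather data (stubbed here).
  return { facts: [`observed: ${question}`] };
}

function plan(findings: Findings): Plan {
  // Plan: reason about the findings, decide next steps.
  return { steps: findings.facts.map((f) => `act on ${f}`) };
}

function execute(p: Plan): string {
  // Execute: carry out the steps and report the result.
  return p.steps.join("; ");
}

function runTurn(question: string): string {
  return execute(plan(explore(question)));
}
```

In a real turn each phase would call out to tools and models; the point of the sketch is only the shape of the cycle, with each phase's output feeding the next.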

Adaptive Depth

Not every question needs the full loop. The runtime matches depth to the question automatically:

Question                             Loop behavior
"What's the current error rate?"     Explore only — query and answer
"Why did latency spike at 3 PM?"     Explore + Plan — gather data, correlate, explain
"Investigate the payment failures"   Full loop — sub-agent dispatch, iterative reasoning, skill activation
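As a toy illustration of depth selection, the three cases above could map onto a discriminated set of depths. The real runtime derives depth from the model's own reasoning, not a keyword heuristic like this one:

```typescript
// Toy depth selector; the actual runtime chooses depth from model
// reasoning, not keyword matching. Purely illustrative.

type Depth = "explore" | "explore+plan" | "full";

function chooseDepth(question: string): Depth {
  const q = question.toLowerCase();
  if (q.startsWith("investigate")) return "full"; // open-ended task
  if (q.startsWith("why")) return "explore+plan"; // needs correlation
  return "explore"; // direct lookup
}
```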

The Compounding Effect

The loop compounds through stores and knowledge. Every execution can write findings to a store — patterns identified, false positives flagged, baselines updated — so the next explore phase starts with that prior context already loaded.

Session 1: Explore → slow, everything is new
           Plan    → generic reasoning
           Execute → discover false positive, write to findings store
 
Session 50: Explore → fast, stores and KB have patterns and baselines
            Plan    → informed reasoning with historical context
            Execute → focused on novel signals, skip known patterns

This is the flywheel — the system learns from use. See Knowledge Base for details.

How the loop actually runs

Under the hood, the loop is implemented as an explicit state machine rather than a while loop with implicit states. Each agent turn transitions through thinking → streaming → executing (if tools were called) → back to thinking, until the model stops or a stopping condition fires.
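A discriminated union makes those transitions explicit. This sketch models only the three states named above; the actual runtime has six (see State Machine), and the state payloads here are invented for illustration:

```typescript
// Sketch of a discriminated-union turn state. Only three of the
// runtime's six states are modeled; payloads are illustrative.

type TurnState =
  | { kind: "thinking" }
  | { kind: "streaming"; tokens: string[] }
  | { kind: "executing"; tool: string };

function next(state: TurnState, calledTool?: string): TurnState {
  switch (state.kind) {
    case "thinking":
      // Model begins producing output.
      return { kind: "streaming", tokens: [] };
    case "streaming":
      // Tools were called → execute them; otherwise loop back
      // (a real runtime would also have a terminal "done" state).
      return calledTool
        ? { kind: "executing", tool: calledTool }
        : { kind: "thinking" };
    case "executing":
      // Tool results feed the next thinking phase.
      return { kind: "thinking" };
  }
}
```

The payoff of the union is exhaustiveness: the compiler forces every state to handle every transition, which a bare while loop with implicit states does not.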

Runtime guards:

  • Max turns — prevent infinite loops
  • Max tokens — hard budget ceiling
  • Loop detection — catches the agent repeatedly calling the same tool with similar arguments
  • Context compaction — when the conversation exceeds a token threshold, older turns are summarized into a structured snapshot so the agent can keep going
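The first three guards can be sketched as a single check run between turns. Thresholds and names here are made up for illustration; the real limits are runtime configuration:

```typescript
// Illustrative guard check; thresholds and field names are assumptions,
// not the actual Amodal configuration.

interface TurnStats {
  turns: number;
  tokens: number;
  lastCalls: string[]; // recent tool-call signatures, oldest first
}

function guardTripped(s: TurnStats): string | null {
  if (s.turns > 50) return "max-turns"; // prevent infinite loops
  if (s.tokens > 200_000) return "max-tokens"; // hard budget ceiling
  // Loop detection: the same tool signature three times in a row.
  const recent = s.lastCalls.slice(-3);
  if (recent.length === 3 && new Set(recent).size === 1) return "loop";
  return null;
}
```

Context compaction is the one guard that recovers rather than stops: instead of tripping, it summarizes older turns into a snapshot so the loop can continue under the token ceiling.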

For the full state machine — all six states, the transition rules, and how streaming/tool-calls/compaction interleave — see State Machine.