amodal.json
Every Amodal project starts with a single file: amodal.json at the root of your repo. This is the manifest. It tells the runtime who the agent is, which LLM to use, how to handle failover when a provider goes down, and how to connect to the outside world — MCP servers, data stores, and sandbox environments.
Think of it as package.json for an intelligent agent. The runtime reads it on startup, resolves any environment variable references, validates the schema, and uses it to bootstrap everything: the reasoning loop, provider connections, tool execution policies, and automation scheduling.
The config is deliberately flat. There are no nested pipelines or DAGs to configure. You pick your models, point at your connections, and the runtime handles the rest.
Minimal Config
The smallest thing that works. Three fields — name, version, and a main model:
```json
{
  "name": "my-agent",
  "version": "0.1.0",
  "models": {
    "main": {
      "provider": "anthropic",
      "model": "claude-sonnet-4-20250514"
    }
  }
}
```

This gives you a working agent with no connections, no knowledge base, and no automations. It can chat, reason, and use built-in tools. You would use this during early development or when prototyping a new agent before wiring up APIs.
The provider field tells the runtime which SDK adapter to load. The model field is passed directly to that provider's API. The runtime expects the provider's API key in the standard environment variable (ANTHROPIC_API_KEY, OPENAI_API_KEY, etc.) unless you specify explicit credentials.
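If you need the key in a non-standard variable (say, separate keys per agent), you can add the credentials block documented later on this page to the minimal config. A sketch; the variable name MY_AGENT_ANTHROPIC_KEY is a hypothetical example, not a convention the runtime knows about:

```json
{
  "name": "my-agent",
  "version": "0.1.0",
  "models": {
    "main": {
      "provider": "anthropic",
      "model": "claude-sonnet-4-20250514",
      "credentials": {
        "api_key": "env:MY_AGENT_ANTHROPIC_KEY"
      }
    }
  }
}
```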
Production Config with Failover
A realistic configuration for a deployed agent. This is what a production ops-agent might look like — primary model with a fallback, a cheaper model for data-gathering sub-agents, and custom system context:
```json
{
  "name": "ops-agent",
  "version": "2.4.1",
  "description": "Infrastructure monitoring and incident response for Acme Corp",
  "models": {
    "main": {
      "provider": "anthropic",
      "model": "claude-sonnet-4-20250514",
      "credentials": {
        "api_key": "env:ANTHROPIC_API_KEY"
      },
      "fallback": {
        "provider": "openai",
        "model": "gpt-4o",
        "credentials": {
          "api_key": "env:OPENAI_API_KEY"
        }
      }
    },
    "explore": {
      "provider": "anthropic",
      "model": "claude-haiku-4-5-20251001",
      "fallback": {
        "provider": "google",
        "model": "gemini-2.0-flash"
      }
    }
  },
  "userContext": "You are the operations agent for Acme Corp. Our infrastructure runs on AWS (us-east-1 and eu-west-1). We use Datadog for monitoring, PagerDuty for incident management, and Slack (#ops-alerts) for communication. Always check Datadog metrics before escalating. Never restart production services without explicit confirmation."
}
```

The userContext string is injected at the top of every session prompt. Use it for standing instructions that should always apply — your company's infrastructure layout, naming conventions, escalation policies, or behavioral constraints. This is not the place for methodology (that goes in skills) or reference data (that goes in the knowledge base). Think of it as the agent's permanent memory of "who am I and what are the ground rules."
Config with MCP, Stores, and Sandbox
When you need external tool servers, persistent storage, and sandboxed execution:
```json
{
  "name": "finops-agent",
  "version": "1.2.0",
  "description": "Financial operations analysis and reporting",
  "models": {
    "main": {
      "provider": "anthropic",
      "model": "claude-sonnet-4-20250514"
    },
    "explore": {
      "provider": "google",
      "model": "gemini-2.0-flash"
    }
  },
  "sandbox": {
    "shellExec": true,
    "template": "finops-sandbox-v2",
    "maxTimeout": 60000
  },
  "stores": {
    "dataDir": ".amodal/store-data",
    "backend": "pglite"
  },
  "proactive": {
    "webhook": "https://hooks.acme.com/amodal/finops"
  },
  "mcp": {
    "servers": {
      "github": {
        "transport": "stdio",
        "command": "uvx",
        "args": ["mcp-server-github"],
        "env": { "GITHUB_TOKEN": "env:GITHUB_TOKEN" }
      },
      "postgres": {
        "transport": "stdio",
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-postgres"],
        "env": { "DATABASE_URL": "env:ANALYTICS_DB_URL" }
      },
      "custom-tools": {
        "transport": "sse",
        "url": "https://tools.internal.acme.com/mcp",
        "headers": {
          "Authorization": "env:INTERNAL_TOOLS_TOKEN"
        }
      }
    }
  }
}
```

MCP (Model Context Protocol) servers extend your agent with additional tools beyond the built-in set. Each server entry declares how to connect — either stdio (the runtime spawns the process) or sse (the runtime connects to a running server over HTTP). The tools exposed by these servers appear alongside built-in tools in the agent's tool list.
The sandbox block controls shell execution. When shellExec is true, the agent can run arbitrary commands. The template field specifies which sandbox image to use (pre-configured with your dependencies), and maxTimeout caps how long any single command can run.
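As a sketch, a tightened version of the sandbox block from the example above might cap each command at 30 seconds (assuming, from the 60000 in the example, that maxTimeout is expressed in milliseconds):

```json
"sandbox": {
  "shellExec": true,
  "template": "finops-sandbox-v2",
  "maxTimeout": 30000
}
```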
The stores block configures the database backend used for both agent document stores (see Stores) and session persistence (conversation history across restarts). PGLite (the default) runs an in-process WASM build of Postgres — no external database needed for local dev. Agent data lives in .amodal/store-data/ and session history in .amodal/session-data/, both file-backed so they survive restarts. For hosted runtimes and ISV production deployments, switch to a real Postgres by setting backend: "postgres" and providing postgresUrl (typically env:DATABASE_URL). Both backends share the same Drizzle schema, so switching is a pure config change.
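Applying that switch to the example above, the hosted-deployment variant of the stores block would look like this:

```json
"stores": {
  "backend": "postgres",
  "postgresUrl": "env:DATABASE_URL"
}
```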
Fields
Required
| Field | Type | Description |
|---|---|---|
| name | string | Agent name (min 1 char). Used in logging and deployment IDs. |
| version | string | Semantic version. Used for snapshot tagging and versioning. |
| models.main | ModelConfig | Primary agent model. This is the reasoning brain — it handles the conversation, plans, and decides. |
Optional
| Field | Type | Description |
|---|---|---|
| description | string | Agent description. Shown in CLI output. |
| userContext | string | Injected at the top of every session prompt. Standing instructions, company context, behavioral constraints. |
| models.explore | ModelConfig | Model for explore/gather sub-agents. Should be faster and cheaper than main. |
| sandbox | object | Sandbox execution config. Controls whether custom tool handlers can call ctx.exec() for shell commands, and how sandboxing is enforced. |
| stores | object | Data store backend config. Defaults to PGLite in the local .amodal/store-data directory. |
| proactive | object | Webhook URL for external triggers (automation webhooks, third-party integrations). |
| mcp | object | MCP server connections. Each server exposes additional tools to the agent. |
| webTools | object | Enables web_search + fetch_url built-in tools via Gemini grounding. See Web Tools. |
ModelConfig
```json
{
  "provider": "anthropic" | "openai" | "google",
  "model": "claude-sonnet-4-20250514",
  "baseUrl": "https://...",   // optional, custom endpoint
  "credentials": {            // optional, explicit keys
    "api_key": "env:ANTHROPIC_API_KEY"
  },
  "fallback": { ... }         // optional, another ModelConfig
}
```

Model Tiers: Main vs. Explore
The main model is your reasoning engine. It handles the user conversation, decides when to dispatch sub-agents, interprets their findings, plans next steps, and composes the final response. This is where model quality matters most — you want the best model you can afford here, because this is where judgment happens.
The explore model is the workhorse. When the primary agent dispatches a task agent to gather data — "go query Datadog for the last hour of CPU metrics" or "pull the customer's recent Stripe invoices" — that task agent uses the explore model. These sub-agents do focused, bounded work: load some knowledge, make API calls, interpret the raw response, and return a clean summary. They do not need the full reasoning capability of the main model.
This matters for cost and latency. A complex investigation might dispatch 5-10 task agents, each making multiple tool calls. If every one of those runs on your most expensive model, costs add up fast and the user waits longer. By routing sub-agents to a faster, cheaper model, you keep the primary agent's context clean (it only sees the summaries) and your token bill reasonable.
Typical pairings:
| Use Case | Main | Explore |
|---|---|---|
| Cost-optimized | Claude Sonnet | Claude Haiku |
| Quality-first | Claude Opus | Claude Sonnet |
| Multi-provider | Claude Sonnet | Gemini Flash |
If you omit models.explore, the runtime falls back to models.main for everything. This works fine — it just costs more and runs slower on complex questions.
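For reference, the cost-optimized pairing from the table maps onto a models block like this (model IDs reused from the examples earlier on this page):

```json
"models": {
  "main": {
    "provider": "anthropic",
    "model": "claude-sonnet-4-20250514"
  },
  "explore": {
    "provider": "anthropic",
    "model": "claude-haiku-4-5-20251001"
  }
}
```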
Fallback Chains
Every ModelConfig can include a fallback — another ModelConfig that the runtime tries when the primary provider fails. Failures include HTTP 5xx errors, rate limits (429), timeouts, and authentication errors.
The fallback is itself a full ModelConfig, which means it can have its own fallback, forming a chain:
```json
{
  "models": {
    "main": {
      "provider": "anthropic",
      "model": "claude-sonnet-4-20250514",
      "fallback": {
        "provider": "openai",
        "model": "gpt-4o",
        "fallback": {
          "provider": "google",
          "model": "gemini-2.5-pro"
        }
      }
    }
  }
}
```

In this example, the runtime tries Anthropic first. If that is down (outage, rate limit), it falls back to OpenAI GPT-4o. If OpenAI also fails, it falls back to Gemini. The user never sees the failover — the runtime handles it transparently and logs the switch.
Fallback is per-request. If Anthropic recovers, the next request goes back to the primary. There is no sticky routing.
This is particularly useful for production deployments where uptime matters more than provider loyalty. A multi-provider fallback chain means your agent stays up even during provider outages, which happen more often than you would like.
Environment Variables
Any string value in amodal.json can reference an environment variable using the env: prefix:
```json
{
  "stores": {
    "postgresUrl": "env:DATABASE_URL"
  },
  "mcp": {
    "servers": {
      "github": {
        "env": { "GITHUB_TOKEN": "env:GITHUB_TOKEN" }
      }
    }
  }
}
```

The runtime resolves these at parse time — before any connections are established or tools are loaded. If a referenced variable is missing, the runtime throws an ENV_NOT_SET error and refuses to start. This is intentional. A misconfigured agent is worse than a stopped agent.
When to Use env: vs. Hardcoded Values
Always use env: for:
- API keys, tokens, and secrets of any kind
- Database connection strings (they contain passwords)
- Anything that changes between environments (dev/staging/prod)

Hardcoded values are fine when:
- The value is not sensitive: model names, project IDs, region strings, cron schedules
- The value is part of the agent's identity: name, version, description
A good rule of thumb: if you would not want the value visible in a public GitHub repo, use env:. The amodal.json file is checked into git. Secrets should never be in it directly.
For local development, put your environment variables in a .env file at the repo root (and add it to .gitignore). The runtime loads .env automatically in repo mode. For production deployments, set them in your hosting environment's secret management — Kubernetes secrets, AWS Parameter Store, Fly.io secrets, or whatever your infrastructure uses.
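A sketch of a .env file covering the variables referenced in the examples on this page; every value shown here is a placeholder:

```shell
# .env: loaded automatically by the runtime in repo mode
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
GITHUB_TOKEN=your-github-token
ANALYTICS_DB_URL=postgres://user:password@host:5432/analytics
DATABASE_URL=postgres://user:password@host:5432/agent
```

Remember to add .env to .gitignore so the file never lands in the repo.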