Automations

Automations let your agent work proactively — monitoring systems, digesting data, responding to incidents, detecting drift — without someone typing in a chat box. Instead of waiting for a user to ask "what happened overnight?", the agent runs on its own, queries the systems it needs, and routes the results to Slack, email, a webhook, or wherever your team actually looks.

Each automation is a file in the automations/ directory. It defines what the agent should do (the prompt), when it should do it (a cron schedule or webhook trigger), and where the results go. The agent gets a fresh session each time — no accumulated state, no drift, no runaway loops.

This is the difference between a chatbot and an operational agent. Chatbots wait. Agents act.

JSON Format (Recommended)

The simplest automation is a prompt and a schedule:

{
  "title": "Daily Revenue Digest",
  "schedule": "0 9 * * 1-5",
  "prompt": "Pull yesterday's revenue data and summarize by region. Highlight any anomalies compared to the weekly baseline."
}
| Field | Type | Description |
| --- | --- | --- |
| `title` | string | Display name, shown in the CLI and admin dashboard |
| `prompt` | string | The message sent to the agent when the automation runs |
| `schedule` | string | Cron expression (triggers cron mode) |
| `trigger` | `"cron" \| "webhook" \| "manual"` | Trigger type (auto-inferred from `schedule` if present) |
| `delivery` | object | Where results are routed on success — webhooks and/or ISV callbacks. See Delivery routing. |
| `failureAlert` | object | Where failure alerts go, with consecutive-failure threshold + cooldown. |
| `writeEnabled` | boolean | Explicitly allow write operations (default: `false`) |

Trigger Types

| Type | How It Runs |
| --- | --- |
| `cron` | On a schedule. Inferred automatically when `schedule` is present. Uses standard cron syntax. |
| `webhook` | In response to an HTTP POST to the automation's webhook URL. The payload is injected into the prompt. |
| `manual` | On-demand via CLI (`amodal ops automations trigger <name>`). Useful for testing. |

Complete Examples

Example 1: Daily Revenue Digest

A cron automation that runs every weekday morning, pulls revenue data from Stripe and QuickBooks, and posts a summary to Slack:

{
  "title": "Daily Revenue Digest",
  "schedule": "0 9 * * 1-5",
  "prompt": "Pull yesterday's revenue from Stripe (payments) and QuickBooks (invoices). Break down by region (NA, EMEA, APAC) using the customer metadata tags. Compare each region to its 7-day rolling average. Flag any region where revenue deviated more than 15% from the baseline. Include total MRR and net new revenue. Format as a concise Slack message.",
  "delivery": {
    "targets": [{ "type": "webhook", "url": "env:SLACK_WEBHOOK_REVENUE" }]
  }
}

When this runs, the agent gets a fresh session with the prompt as the initial message. It uses the request tool to query Stripe's payment intents and QuickBooks' invoice endpoints, dispatches sub-agents to process each data source in parallel, and compiles the summary. The runtime then POSTs the formatted result to the Slack incoming webhook URL configured via SLACK_WEBHOOK_REVENUE in .env.

Example 2: Webhook-Triggered Incident Responder

A webhook automation that fires when PagerDuty sends an incident notification. The agent investigates the incident, gathers context, and posts a summary to Slack:

{
  "title": "Incident Responder",
  "trigger": "webhook",
  "prompt": "A PagerDuty incident was triggered. The event data is in the <event_data> block of this message.\n\nInvestigate this incident:\n1. Check Datadog for related metrics and anomalies in the affected service\n2. Search recent deployments in GitHub for changes to the affected service\n3. Check the knowledge base for known issues matching this pattern\n4. Check Slack #incidents for any related discussion\n\nReturn your findings with a severity assessment and recommended next steps as a concise Slack message.",
  "delivery": {
    "targets": [{ "type": "webhook", "url": "env:SLACK_WEBHOOK_INCIDENTS" }]
  },
  "failureAlert": {
    "after": 2,
    "cooldownMinutes": 15,
    "targets": [{ "type": "webhook", "url": "env:SLACK_WEBHOOK_ONCALL" }]
  }
}

When PagerDuty sends a POST to the automation's webhook URL, the runtime parses the payload, appends it to the prompt inside an <event_data> block (marked as untrusted input — the agent is explicitly instructed not to follow instructions from inside it), and starts a fresh agent session.

The agent then runs the full loop: dispatches sub-agents to query Datadog, GitHub, and knowledge docs in parallel, synthesizes the findings, and returns a structured incident summary. The runtime POSTs that summary to the SLACK_WEBHOOK_INCIDENTS Slack webhook. If two consecutive runs fail, failureAlert POSTs to the oncall webhook — with a 15-minute cooldown so a sustained outage doesn't flood the channel.

Example 3: Weekly API Drift Detector

A cron automation that compares current API specifications against stored baselines to detect unexpected changes:

{
  "title": "API Drift Detector",
  "schedule": "0 6 * * 1",
  "prompt": "Run a drift detection pass across all connected APIs.\n\nFor each connection with a spec.json:\n1. Fetch the current API spec from the live endpoint (if discovery URL is configured)\n2. Compare against the stored spec.json in the connection directory\n3. Flag any new endpoints, removed endpoints, changed request/response schemas, or deprecated fields\n4. For each drift, assess the impact: breaking change, additive change, or cosmetic\n\nWrite each drift to the drift_history store (keyed by connection + timestamp). Return a Slack-formatted summary — 'All clean' if nothing changed, or a prioritized list of breaking changes if anything did.",
  "delivery": {
    "targets": [{ "type": "webhook", "url": "env:SLACK_WEBHOOK_API_CHANGES" }]
  }
}

This automation writes findings to a store. Over time, the agent builds up a queryable history of API changes, deprecation patterns, and known quirks — making each subsequent drift check smarter. Stores persist across sessions; the knowledge base gets loaded into context at session start.
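The comparison step at the heart of this automation can be sketched as a diff over endpoint sets. This is an illustrative simplification, not the agent's actual logic: the `Spec` shape assumes an OpenAPI-style `paths` object, and it only covers added/removed endpoints, whereas the real pass also compares schemas and deprecations.

```typescript
// Minimal sketch: diff the endpoint sets of a stored spec.json baseline
// against a freshly fetched spec. Names and shapes here are assumptions.
type Spec = { paths: Record<string, unknown> };

interface Drift {
  endpoint: string;
  kind: "added" | "removed";
  impact: "breaking" | "additive";
}

function diffSpecs(baseline: Spec, current: Spec): Drift[] {
  const before = new Set(Object.keys(baseline.paths));
  const after = new Set(Object.keys(current.paths));
  const drifts: Drift[] = [];
  for (const p of after) {
    // A new endpoint is additive: existing clients keep working.
    if (!before.has(p)) drifts.push({ endpoint: p, kind: "added", impact: "additive" });
  }
  for (const p of before) {
    // A removed endpoint is breaking: callers of it will now fail.
    if (!after.has(p)) drifts.push({ endpoint: p, kind: "removed", impact: "breaking" });
  }
  return drifts;
}
```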

Webhook Automations

Webhook automations respond to external events in real time. Each webhook automation gets a unique URL that you register with your external service (PagerDuty, GitHub, Stripe, or any system that sends HTTP webhooks).

Getting the Webhook URL

After deploying, the runtime generates a webhook URL for each webhook-triggered automation:

$ amodal ops automations list
 
  Name                 Trigger   Schedule       Status    Webhook URL
  ──────────────────── ───────── ────────────── ───────── ──────────────────────────────────────────
  revenue-digest       cron      0 9 * * 1-5    active
  incident-responder   webhook                  active    http://localhost:3847/webhooks/auto_7kx2m9
  drift-detector       cron      0 6 * * 1      active

Copy the webhook URL and configure it in your external service. For PagerDuty, add it as a webhook subscription. For GitHub, add it as a repository webhook. The URL is stable across deploys — it is tied to the automation name, not the deployment version.

Webhook Payload Handling

When the runtime receives a POST to a webhook URL, it:

  1. Validates the request (optional signature verification per-connection)
  2. Extracts the JSON payload
  3. Injects the stringified payload into the automation's prompt: replacing {{event}} if present, otherwise appending it inside an <event_data> block
  4. Creates a fresh agent session with the composed prompt
  5. Runs the explore-plan-execute loop
  6. Routes output to the configured channel

The raw payload is available to the agent as context. The agent can parse it, extract relevant fields, and use them to guide its investigation. For large payloads, the runtime truncates to 10KB and includes a note that the payload was trimmed.
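The truncation and injection steps can be sketched roughly as follows. This is not the runtime's actual code: the function name, the truncation note text, and the exact wrapper formatting are illustrative assumptions.

```typescript
// Sketch: stringify the webhook payload, cap it at 10KB, and inject it into
// the prompt ({{event}} substitution, or an appended <event_data> block).
const MAX_PAYLOAD_BYTES = 10 * 1024;

function composePrompt(promptTemplate: string, payload: unknown): string {
  let body = JSON.stringify(payload);
  let note = "";
  if (new TextEncoder().encode(body).length > MAX_PAYLOAD_BYTES) {
    body = body.slice(0, MAX_PAYLOAD_BYTES);
    note = "\n[payload truncated to 10KB]";
  }
  const block = `<event_data>\n${body}${note}\n</event_data>`;
  return promptTemplate.includes("{{event}}")
    ? promptTemplate.replace("{{event}}", block)
    : `${promptTemplate}\n\n${block}`;
}
```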

Webhook Security

For production deployments, configure signature verification in the automation:

{
  "title": "GitHub Push Handler",
  "trigger": "webhook",
  "prompt": "...",
  "webhookAuth": {
    "type": "hmac-sha256",
    "secret": "env:GITHUB_WEBHOOK_SECRET",
    "header": "X-Hub-Signature-256"
  }
}

The runtime validates the signature before processing the payload. Invalid signatures get a 401 response and the automation does not run.
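The hmac-sha256 check follows the same convention GitHub uses for X-Hub-Signature-256: recompute the digest over the raw request body and compare against the header value in constant time. An illustrative sketch (not the runtime's actual implementation):

```typescript
import { createHmac } from "node:crypto";

// Verify an HMAC-SHA256 webhook signature of the form "sha256=<hex digest>".
function verifySignature(secret: string, rawBody: string, header: string): boolean {
  const expected = "sha256=" + createHmac("sha256", secret).update(rawBody).digest("hex");
  // Constant-time comparison to avoid leaking the digest via timing.
  if (expected.length !== header.length) return false;
  let diff = 0;
  for (let i = 0; i < expected.length; i++) {
    diff |= expected.charCodeAt(i) ^ header.charCodeAt(i);
  }
  return diff === 0;
}
```

Note that verification must run over the raw body bytes, before any JSON parsing or re-serialization, or the digest will not match.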

Markdown Format (Legacy)

Automations can also be defined in Markdown. This format is supported but not recommended for new automations — JSON is more explicit and easier to validate:

# Automation: Morning Brief
 
Schedule: 0 7 * * *
 
## Check
 
Pull all active deals and recent activities from the CRM.
Summarize wins, losses, and pipeline changes.

The parser extracts the title from the heading, the schedule from the Schedule: line, and the prompt from the ## Check section. It works, but it is ambiguous in edge cases and does not support output routing.
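For reference, a parser for this legacy format can be sketched in a few lines. The regexes and function name are illustrative, not the actual parser:

```typescript
// Sketch: extract title, schedule, and prompt from a Markdown automation file.
function parseMarkdownAutomation(src: string): { title: string; schedule: string; prompt: string } {
  const title = /^# Automation:\s*(.+)$/m.exec(src)?.[1]?.trim() ?? "";
  const schedule = /^Schedule:\s*(.+)$/m.exec(src)?.[1]?.trim() ?? "";
  // Everything under "## Check" becomes the prompt.
  const prompt = /## Check\s*\n([\s\S]*)$/.exec(src)?.[1]?.trim() ?? "";
  return { title, schedule, prompt };
}
```

The edge-case ambiguity mentioned above is visible even here: nothing pins down what happens with multiple `Schedule:` lines or extra sections, which is part of why JSON is preferred.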

How Runs Work

Each run is stateless. The agent queries systems fresh, using since=lastRunTimestamp to scope its queries to new data since the last run. The runtime provides lastRunSummary — a compact summary of the previous run's output — so the agent has continuity without accumulating state.

  1. Trigger — The scheduler fires a cron job or the webhook listener receives a POST
  2. Session creation — A fresh agent session is created with the automation's prompt (plus any injected event data)
  3. Explore-plan-execute — The agent runs the full reasoning loop. It dispatches task agents, queries APIs, loads knowledge, and synthesizes findings
  4. Output routing — Results are formatted and sent to the configured output channel (Slack, email, webhook, or stored in the session log)
  5. Cleanup — The session is closed and context is discarded. The run summary is stored for the next run's continuity

The statelessness guarantee matters. Automations cannot accumulate side effects across runs. Each run starts clean, which means a bug in one run cannot corrupt the next. If something goes wrong, the worst case is one bad summary — not a cascade of compounding errors.

Guardrails in Practice

Why Automations Cannot Write by Default

Automations run without a human in the loop. There is no one to confirm a write operation, review a destructive action, or catch a hallucinated API call. This is fundamentally different from interactive chat, where the user sees every tool call and can approve or reject writes.

Because of this asymmetry, automations are read-only by default. The agent can query any connected system, load knowledge, dispatch task agents, and generate output — but it cannot create, update, or delete data in external systems. The request tool rejects any call with intent: 'write' during an automation run unless writes are explicitly enabled.

Enabling Writes for Specific Automations

If an automation genuinely needs to write — posting a summary to Slack, creating a Jira ticket, updating a status page — you enable it explicitly in the automation config:

{
  "title": "Incident Responder",
  "trigger": "webhook",
  "prompt": "...",
  "writeEnabled": true
}

When writeEnabled is true, the agent can make write calls — but the runtime still enforces rate limits, audit logging, and per-tool confirmation rules. The writes are logged in the automation run's audit trail.

Setting writeEnabled is a deliberate decision. It shows up in code review. It is audited. There is no way to accidentally grant write access.

Delivery Routing

When an automation completes successfully, the runtime POSTs its final response text to each configured delivery target. Delivery is out-of-band from the agent's tools — the agent doesn't call a "post to slack" tool, the runtime routes the result after the turn ends. This keeps automations read-only by default while still letting results reach Slack, webhooks, email (via webhook), or ISV systems.

Targets

Two target types:

| Type | Shape | What it does |
| --- | --- | --- |
| `webhook` | `{type: "webhook", url: "..."}` | POST the delivery payload to the URL. Supports `env:VAR_NAME` substitution resolved at bundle-load time. |
| `callback` | `{type: "callback", name?: "..."}` | Invoke the ISV-provided `onAutomationResult` handler registered with `createAgent({onAutomationResult})`. The optional `name` field lets ISVs route to specific handlers when they register several. |

Slack delivery is just webhook pointing at a Slack incoming webhook URL. Email delivery works the same via any transactional email provider's webhook (SendGrid, Mailgun, etc.).
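On the ISV side, routing named callback targets to specific handlers might look roughly like this. The `AutomationResult` interface is an assumption modeled on the delivery payload, not the SDK's actual types, and the `createAgent` wiring is shown only as a comment:

```typescript
// Hypothetical shape of a delivered automation result (assumed, not SDK types).
interface AutomationResult {
  automation: string;
  status: "success" | "failure";
  result: string;
  data?: Record<string, unknown>;
}

type Handler = (r: AutomationResult) => void;

// Build a dispatcher that routes {type: "callback", name: "..."} targets to
// named handlers, falling back to a default handler when no name matches.
function makeRouter(
  handlers: Record<string, Handler>,
  fallback: Handler
): (name: string | undefined, r: AutomationResult) => void {
  return (name, r) => (name && handlers[name] ? handlers[name] : fallback)(r);
}

// Registration would look roughly like:
//   createAgent({ onAutomationResult: (r) => route(r.targetName, r) });
```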

Payload shape

The runtime POSTs a JSON payload to webhook targets:

{
  "automation": "daily-revenue-digest",
  "status": "success",
  "timestamp": "2026-04-05T14:00:00Z",
  "result": "<the agent's final response text, max 16KB>",
  "truncated": false,
  "message": "<template-rendered string, if template was configured>",
  "data": { "mrr": 12.4, "growth": "+3.2%" }
}

data is populated automatically when the agent's last assistant message is valid JSON. That parsed object is available to templates.

Templates

If delivery.template is set, the runtime renders it with variables from data plus built-ins ({{automation}}, {{timestamp}}, {{result}}). Useful for shaping output per target:

{
  "delivery": {
    "targets": [{ "type": "webhook", "url": "env:SLACK_WEBHOOK" }],
    "template": "📊 *{{automation}}* — MRR ${{mrr}} ({{growth}})"
  }
}
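Rendering is plain `{{variable}}` substitution over `data` plus the built-ins. A sketch of the behavior (how unknown variables are handled is an assumption here, not documented behavior):

```typescript
// Sketch: substitute {{name}} placeholders with values from a variables map.
function renderTemplate(template: string, vars: Record<string, string | number>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name: string) =>
    name in vars ? String(vars[name]) : match // leave unknown placeholders as-is (assumption)
  );
}
```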

Failure alerts

failureAlert is separate from delivery — it fires on failures, with a consecutive-failure threshold and cooldown:

{
  "failureAlert": {
    "after": 3,
    "cooldownMinutes": 60,
    "targets": [{ "type": "webhook", "url": "env:PAGERDUTY_WEBHOOK" }]
  }
}

after: 3 means don't page until 3 runs have failed in a row. cooldownMinutes: 60 means that even during a sustained outage, alerts fire at most once per hour. The defaults (after: 1, cooldownMinutes: 60) err on the side of alerting on the first failure, but never more than hourly.

State is in-memory. The consecutive-failure counter and last-alert timestamp live on the runtime process. Restarting the runtime resets the counter — a flapping service mid-cooldown may get re-alerted on restart. For hosted runtimes that cycle around deploys, expect occasional alert repetition at deploy boundaries.
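The alerting behavior (consecutive-failure threshold plus cooldown, with state held in process memory) can be sketched as a small gate. Class and method names are illustrative:

```typescript
// Sketch: decide whether a failed run should fire a failure alert.
class FailureAlertGate {
  private consecutive = 0;
  private lastAlertMs = -Infinity;

  constructor(private after: number, private cooldownMinutes: number) {}

  // Record a run result at time nowMs; returns true when an alert should fire.
  record(failed: boolean, nowMs: number): boolean {
    if (!failed) {
      this.consecutive = 0; // a success resets the streak
      return false;
    }
    this.consecutive += 1;
    const cooledDown = nowMs - this.lastAlertMs >= this.cooldownMinutes * 60_000;
    if (this.consecutive >= this.after && cooledDown) {
      this.lastAlertMs = nowMs;
      return true;
    }
    return false;
  }
}
```

Because `consecutive` and `lastAlertMs` are plain fields on a runtime-lifetime object, a process restart discards them, which is exactly the re-alert-on-restart behavior described above.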

Why delivery lives outside the tool system

Delivery is always allowed, even for automations without writeEnabled: true. Posting a summary to Slack isn't a "write" in the amodal sense — it's the runtime reporting what the agent found to the people who configured it. Writing means the agent itself calls a tool that changes external state. Delivery is just how automation results get where they need to go.

Managing Automations

# List all automations with their status and next run time
$ amodal ops automations list
 
  Name                 Trigger   Schedule       Status    Next Run
  ──────────────────── ───────── ────────────── ───────── ──────────────────────
  revenue-digest       cron      0 9 * * 1-5    active    2026-03-27 09:00 UTC
  incident-responder   webhook                  active    (on event)
  drift-detector       cron      0 6 * * 1      paused
 
# Pause an automation (skips future runs until resumed)
$ amodal ops automations pause drift-detector
 Paused "API Drift Detector". It will not run until resumed.
 
# Resume a paused automation
$ amodal ops automations resume drift-detector
 Resumed "API Drift Detector". Next run: 2026-03-30 06:00 UTC
 
# Manually trigger an automation (useful for testing)
$ amodal ops automations trigger revenue-digest
 Triggered "Daily Revenue Digest"
 Run started: session auto_run_8k2mx4
 Streaming output to #finance-daily
 
# View the log of recent automation runs
$ amodal ops automations history revenue-digest
 
  Run ID           Started              Duration   Status     Output
  ──────────────── ──────────────────── ────────── ────────── ──────────
  run_9x2k4m       2026-03-26 09:00     42s        completed  #finance-daily
  run_8m3k1n       2026-03-25 09:00     38s        completed  #finance-daily
  run_7k4n2p       2026-03-24 09:00     51s        completed  #finance-daily