01

What Archon Does

Imagine you could record your best debugging process and replay it perfectly every time — that's Archon.

The problem every AI coder hits

You ask an AI coding agent to fix a bug. Sometimes it plans carefully, runs tests, and opens a polished pull request. Other times it skips right to editing files, breaks something else, and needs babysitting for an hour.

The same request, different outcomes. Why? Because AI agents improvise — they decide their own process each run. Archon's insight: what if the process was deterministic, even when the intelligence wasn't?

💡
The Key Insight

Archon separates two things that AI tools usually mix together: the structure of what to do (plan → implement → test → review → PR) and the intelligence applied at each step. Structure is yours. Intelligence is the AI's.

What happens when you use Archon

Imagine you type: "Use Archon to add dark mode to the settings page." Here's exactly what happens next:

1
An isolated workspace is created

Archon creates a fresh git worktree — your main code is untouched while the AI works.

2
The workflow YAML is loaded

Archon reads your .archon/workflows/build-feature.yaml file — a recipe that defines phases: plan, implement, test, review, approve, PR.

3
Each node runs in sequence

AI nodes send prompts to Claude. Bash nodes run deterministic commands. Human-approval nodes pause and wait for you.

4
A PR appears when done

You come back to a finished pull request with tests passing and review comments addressed — without babysitting a single step.

The recipe that makes it all work

Here's an actual Archon workflow from the codebase. Notice how it reads almost like a to-do list — that's intentional:

CODE

nodes:
  - id: plan
    prompt: "Explore the codebase and create an implementation plan"

  - id: implement
    depends_on: [plan]
    loop:
      prompt: "Read the plan. Implement the next task."
      until: ALL_TASKS_COMPLETE

  - id: run-tests
    depends_on: [implement]
    bash: "bun run validate"
            
PLAIN ENGLISH

This is a list of tasks (nodes) that Archon will run in order.

First task: have the AI look through the codebase and write a plan.

Second task: start implementing. "depends_on: plan" means this only starts after planning finishes.

A loop means the AI keeps going until ALL_TASKS_COMPLETE — it monitors its own progress.

Third task: run the actual test suite. "bash" means this is a regular command, no AI needed — always the same result.

🔑
Why this vocabulary matters

When you tell an AI coding agent "add a bash node after the tests step," it knows exactly what to do. You've acquired the precise language of Archon workflows — now you can steer AI changes to the workflow file itself.
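For instance, asking for a deterministic lint step after the tests might produce a change like this sketch (the lint node and its script name are illustrative, not from the Archon codebase):

```yaml
nodes:
  # ...existing nodes...
  - id: run-tests
    depends_on: [implement]
    bash: "bun run validate"

  # hypothetical new bash node: deterministic, no AI involved
  - id: lint
    depends_on: [run-tests]
    bash: "bun run lint"   # assumed script name, for illustration only
```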

Check your understanding

Your AI agent sometimes skips writing tests. How does Archon solve this?

You want to run 3 different bug fixes at the same time using Archon. What prevents them from interfering with each other?

02

The Cast of Characters

Every package in Archon has one job — know who does what so you can tell AI where to make changes.

Meet the players

Archon is organized as a monorepo — one repository, many packages. Think of it like a film production company: different departments (packages) each specialize in one thing, but they all work on the same film (your workflow).

⚙️
@archon/workflows — The Director

Loads YAML files, validates them, and runs each node in the right order. Knows about DAGs (directed acyclic graphs), loops, and conditions. Lives in packages/workflows/.

🧠
@archon/core — The Memory

Stores conversations, sessions, and workflow runs in a database. Other packages ask core for history. Lives in packages/core/.

🔌
@archon/providers — The Translator

Speaks to AI models (Claude, Codex) and normalizes their responses. If you swap AI models, only this package changes. Lives in packages/providers/.

🖥️
@archon/server + @archon/web — The Interface

The web dashboard and API. Server handles HTTP requests; web is the React frontend. Lives in packages/server/ and packages/web/.

🗂️
@archon/git — The Coordinator

Creates and manages git worktrees so parallel workflow runs never collide. Lives in packages/git/.

Watch the components talk

When you trigger a workflow from the CLI, here's the conversation that happens between packages:

1. You ask @archon/workflows to start a run from your workflow YAML.
2. @archon/workflows asks @archon/git for a fresh worktree so the run stays isolated.
3. For each AI node, @archon/workflows sends the prompt through @archon/providers, which talks to Claude.
4. @archon/core records every message, session, and run result along the way.
5. When the last node finishes, the run completes and the results are waiting for you.

The file tree, demystified

Now that you know the cast, here's where each character lives on disk:

packages/ All packages live here
workflows/ The Director — YAML loader and workflow executor
core/ The Memory — database and shared business logic
providers/ The Translator — Claude/Codex adapters
server/ HTTP API server
web/ React dashboard (the visual UI)
git/ The Coordinator — worktree management
.archon/workflows/ Your workflow YAML files — committed to your own repos
🎯
How to use this when steering AI

When you ask an AI to "add a new node type," you can now say: "Add it to @archon/workflows — the executor in packages/workflows/src/executor.ts." The AI goes straight to the right file instead of searching the whole codebase.

Match the package to its role

@archon/workflows
@archon/core
@archon/providers
@archon/git

Reads YAML files and decides which node to run next


Sends prompts to Claude and normalizes responses


Saves all conversation history and session state


Creates an isolated workspace so parallel runs don't collide

03

Workflows in Action

Every YAML node is either deterministic (bash) or intelligent (AI prompt) — know which is which and you'll never be surprised.

Two kinds of work

Think of a workflow like a factory assembly line. Some stations are robots — they do the same thing every time, perfectly, mechanically. Other stations are skilled craftspeople — they bring judgment and creativity, but their output varies. Archon lets you mix both.

🤖

Bash Node (Robot)

Runs a shell command. Same input, always same output. Use for: running tests, committing code, building the project. No AI involved.

🧠

Prompt Node (Craftsperson)

Sends a message to an AI model and uses its response. Output varies from run to run. Use for: planning, writing code, reviewing changes.

🔄

Loop Node

Repeats until a condition is true. Can be AI-driven (loop until ALL_TASKS_COMPLETE) or human-gated (loop until APPROVED). Enables iteration without manual restarts.
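The repeat-until pattern can be sketched in a few lines of TypeScript. This is a simplification, not Archon's actual implementation — the names and the default cap are illustrative:

```typescript
// Repeat a unit of work until its output contains a sentinel string,
// with an iteration cap so a stuck loop fails loudly instead of spinning.
type LoopResult = { output: string };

async function runLoop(
  body: () => Promise<LoopResult>,   // one iteration (e.g. one AI turn)
  until: string,                     // sentinel, e.g. "ALL_TASKS_COMPLETE"
  maxIterations = 25,                // illustrative safety cap
): Promise<LoopResult> {
  let last: LoopResult = { output: "" };
  for (let i = 0; i < maxIterations; i++) {
    last = await body();
    if (last.output.includes(until)) return last;  // completion signaled
  }
  throw new Error(`loop exceeded ${maxIterations} iterations without "${until}"`);
}
```

The key design choice: the loop body self-reports completion, so the caller only checks for the sentinel — and the cap guarantees termination either way.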

Inside the workflow executor

This is from packages/workflows/src/executor.ts — the heart of Archon. It reads the workflow definition and decides what to run:

CODE

async executeNode(node: WorkflowNode, context: RunContext) {
  if (node.bash) {
    return await executeBash(node.bash, context);
  }
  if (node.prompt) {
    return await executePrompt(node.prompt, context);
  }
  if (node.loop) {
    return await executeLoop(node.loop, context);
  }
}
            
PLAIN ENGLISH

This function handles a single workflow step (node).

If the node has a "bash" field, run a shell command. These are the robots.

Wait for the shell command to finish and return the result.

If the node has a "prompt" field, send a message to an AI. These are the craftspeople.

If the node has a "loop" field, handle the repeat-until-done pattern.

Watch a workflow execute step by step

[Interactive step-through — participants: 👤 You, ⚙️ Workflows, 🗂️ Git, 🧠 Claude, 🔧 Tests]
💡
The DAG pattern

Archon uses a DAG to figure out which nodes can run in parallel. If two nodes don't depend on each other, Archon runs them simultaneously. That's how it stays fast even in complex workflows.
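The scheduling rule behind this can be sketched in TypeScript (a simplification, not Archon's actual scheduler): a node is ready when it hasn't run yet and every entry in its depends_on list has completed.

```typescript
// Sketch: given each node's depends_on list, find the nodes that can run now.
type Node = { id: string; depends_on?: string[] };

function readyNodes(nodes: Node[], completed: Set<string>): string[] {
  return nodes
    .filter((n) => !completed.has(n.id))                          // not yet run
    .filter((n) => (n.depends_on ?? []).every((d) => completed.has(d))) // deps done
    .map((n) => n.id);
}
```

Every node returned in the same pass has no unmet dependencies, so in principle all of them could run simultaneously.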

Quiz: Workflows in practice

You want to add automated linting to your workflow. Should it be a bash node or a prompt node?

Your 'implement' node keeps starting before the 'plan' node finishes. What's missing?

04

How Data Flows

Follow a prompt from your keyboard to Claude and back — understanding this lets you debug any stuck workflow.

The message pipeline

When a prompt node runs, your workflow YAML isn't sent directly to Claude. It goes through several transformations — like a baton passed between runners in a relay, each leg adding context or translating the format.

1. YAML node prompt text
2. Preamble + conversation history injected
3. Provider adapter formats for Claude API
4. Claude streams response
5. Response saved to core DB & returned to executor

🔍
Why "preamble" matters

Archon injects a preamble — system-level instructions — before your prompt. This is where it tells Claude about the codebase, the current worktree, and what tools are available. If an AI node seems confused, the preamble is the first place to investigate.
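Steps 1–3 of the pipeline can be sketched as assembling a message array — preamble first, then prior history, then the node's prompt. Field names here are illustrative, not Archon's actual types:

```typescript
// Sketch of how a node's prompt is wrapped before reaching the provider.
type Message = { role: "system" | "user" | "assistant"; content: string };

function buildRequest(preamble: string, history: Message[], prompt: string): Message[] {
  return [
    { role: "system", content: preamble },  // codebase, worktree, available tools
    ...history,                             // earlier turns (e.g. prior loop iterations)
    { role: "user", content: prompt },      // the YAML node's prompt text
  ];
}
```

This also explains how a loop's later iterations "remember" earlier ones: the history array carries them along on every call.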

How the provider adapter works

The packages/providers/src/registry.ts file is where Archon figures out which AI service to call. It's like a phone switchboard — the same message can be routed to different destinations based on configuration.

CODE

export const PROVIDER_CATALOG: Record<string, ProviderDescriptor> = {
  'claude': { name: 'Claude', factory: createClaude },
  'codex':  { name: 'OpenAI Codex', factory: createCodex },
};

function getProvider(id: string): BaseProvider {
  const descriptor = PROVIDER_CATALOG[id];
  if (!descriptor) throw new Error(`Unknown provider: ${id}`);
  return descriptor.factory();
}
            
PLAIN ENGLISH

This is a lookup table: string name → how to create that AI provider.

We know about two providers: Claude and Codex. Adding a new AI would mean adding a line here.

When a workflow needs an AI, it calls getProvider with a string like "claude".

If the provider name isn't recognized, throw an error immediately — fail fast, clear message.

Otherwise, call the factory function to create and return a configured provider instance.
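Following that pattern, supporting a hypothetical Gemini provider would mostly mean one new catalog entry. This standalone sketch mimics the registry's shape with simplified types — createGemini and its stubbed response are invented for illustration:

```typescript
// Standalone mimic of the registry pattern; types are deliberately simplified.
type BaseProvider = { name: string; send: (prompt: string) => Promise<string> };
type ProviderDescriptor = { name: string; factory: () => BaseProvider };

const createGemini = (): BaseProvider => ({
  name: "Gemini",
  send: async (prompt) => `stub-response-to:${prompt}`, // stub, no real API call
});

const CATALOG: Record<string, ProviderDescriptor> = {
  gemini: { name: "Google Gemini", factory: createGemini }, // the one new line
};

function getProvider(id: string): BaseProvider {
  const descriptor = CATALOG[id];
  if (!descriptor) throw new Error(`Unknown provider: ${id}`);
  return descriptor.factory();
}
```

The real work would live in the adapter itself (authentication, request format, response normalization) — but the registry change stays one line.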

Where data lives

Archon stores everything in a database managed by @archon/core. Think of it as a filing cabinet with different drawers for different things:

Core Database (SQLite or PostgreSQL)

📋
Sessions
💬
Messages
📊
Workflow Runs
📁
Codebases

Quiz: Tracing data through Archon

Your company wants to add support for Gemini AI. Which package would you primarily change?

A loop node is on iteration 3. How does Claude know what happened in iterations 1 and 2?

05

When Things Break

Every Archon failure falls into one of four categories — know the categories and you'll never be lost for more than 5 minutes.

The four failure modes

When a workflow stops working, it's almost always one of these four things. Like a car that won't start — before calling a mechanic, you check fuel, battery, keys, and neutral. Archon has its own checklist.

📄

YAML Validation Error

The workflow file has a syntax error or uses an unknown field. Archon catches this before running anything — check the error message for the exact line.

🧠

AI Node Failure

Claude returned an error or timed out. Check API key, rate limits, or whether the prompt is too long. The error will be in the workflow run's message history in core.

🔧

Bash Node Failure

A shell command returned a non-zero exit code. This is the most diagnosable failure — run the bash command manually in the worktree to see the exact error.

🔄

Loop Never Terminates

The AI keeps looping but ALL_TASKS_COMPLETE is never triggered. Usually the AI is stuck — check the last few messages to see what it's stuck on. Add a max_iterations limit to your loop.

Spot the bug: broken workflow

This workflow YAML has a real bug that would cause a node to run before its dependency is ready. See if you can spot which line is wrong:


1  nodes:
2    - id: plan
3      prompt: "Create a plan"
4    - id: implement
5      depends_on: [planning]
6      prompt: "Read the plan. Implement."
7    - id: run-tests
8      depends_on: [implement]

Your debugging playbook

When a workflow fails, here's the exact sequence to follow:

1
Check which node failed

The workflow run shows a status for each node. Find the first FAILED node — that's your starting point.

2
Read the error message

Bash failures have exit codes and stderr output. AI failures have API error codes. YAML failures name the offending line. The message is almost always enough.

3
Inspect the worktree

Navigate to the isolated worktree directory. Run the failing command manually. See the exact state the AI left the files in.

4
Review the message history

If an AI node failed, read the conversation history in core. Often you can see exactly where the AI went wrong and add a clarifying instruction to the prompt.

⚠️
The infinite loop trap

If an AI loop is running forever, it's usually because the AI doesn't know how to signal completion. The fix: make your until condition explicit in the prompt. Instead of "keep going until done," say "when all tasks are complete, output the exact text ALL_TASKS_COMPLETE." Specificity beats hope.
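Putting both fixes together — an explicit completion instruction plus a max_iterations cap — a hardened loop might look like this sketch (the prompt wording and cap value are illustrative):

```yaml
- id: implement
  depends_on: [plan]
  loop:
    prompt: >
      Read the plan. Implement the next task.
      When all tasks are complete, output the exact text ALL_TASKS_COMPLETE.
    until: ALL_TASKS_COMPLETE
    max_iterations: 25   # safety cap so the loop can never run forever
```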

Final quiz: Debugging Archon

Your bash node running 'bun run validate' is failing. What's your fastest debugging move?

Your implement loop has been running for 20 iterations and showing no sign of stopping. What would prevent this in the future?