01

Meet OpenHuman

An AI assistant that actually knows you — because it remembers everything, connects everywhere, and runs on your machine.

What is OpenHuman?

Imagine having a super-smart assistant that lives on your computer, reads your emails, checks your calendar, browses your documents — and remembers all of it. That's OpenHuman.

It's a desktop application you install on your computer (Mac, Windows, or Linux). Unlike ChatGPT or Claude, which live in the cloud and forget you the moment you close the tab, OpenHuman lives on your machine and builds a personal knowledge base from your actual data.

🧠

Remembers Everything

Builds a “Memory Tree” — a knowledge graph of all your connected data, stored locally on your machine.

🔗

118+ Integrations

Connects to Gmail, Slack, Notion, GitHub, Calendar, and more — with one-click setup, no coding required.

🔒

Private by Default

Your data stays on your computer. Nothing gets sent to a server unless you explicitly ask the AI to do something.

What happens when you use it?

Let's trace what happens from the moment you open the app to when it responds to your question. Think of it like a postal system — you drop a letter (your question), it goes through a sorting facility (the Rust core), and a response comes back to your mailbox (the screen).

1
You type a question in the chat

The React frontend captures your message and sends it via HTTP

2
The Tauri shell relays it to the Rust core

A thin bridge passes your message from the JavaScript world into the Rust engine

3
The core consults your Memory Tree

It searches through everything it knows about you — your emails, docs, calendar — to find relevant context

4
An AI model generates a response

The core sends your question + relevant memories to an LLM and streams the answer back

5
You see the answer appear in real-time

The response streams back through the same path, appearing character by character on your screen

The starting line: where the app begins

Every application has a “front door” — the first line of code that runs when you launch it. In OpenHuman, that's a file called main.rs written in Rust. Let's see what it does:

CODE

fn main() {
    // Load settings from a .env file
    let _ = dotenvy::dotenv();

    // Start the error tracking system
    let _sentry_guard = sentry::init(...);

    // Grab the command-line arguments
    let args: Vec<String> =
        std::env::args().skip(1).collect();

    // Hand off to the core engine
    if let Err(err) =
        openhuman_core::run_core_from_args(&args)
    {
        eprintln!("{err}");
        std::process::exit(1);
    }
}
            
PLAIN ENGLISH

When the app starts, this function runs first...

Load configuration from a .env file (like a settings sheet)...

Turn on error tracking so the team can fix bugs they never see...

Collect whatever commands the user typed in the terminal...

...skipping the program name itself...

Pass those commands to the main engine that does all the real work...

If something goes wrong, show the error and exit.

💡
Key Insight: The Thin Entry Pattern

Notice how main.rs doesn't do anything “smart” itself? It just sets up the basics and hands off to openhuman_core. This is a pattern called separation of concerns — the front door is just a front door. The real house is inside.

Check your understanding

You want an AI assistant that has full context of your emails, calendar, and documents. Why does OpenHuman store your data locally on your machine instead of in the cloud?

A friend asks you: “Where does the main business logic of OpenHuman live?” Based on what you learned about main.rs, what's the best answer?

02

The Cast of Characters

OpenHuman is built from five main actors, each with a specific role. Think of them as a theater troupe — every actor has their part, and the show only works when they all play together.

Meet the troupe

Like a movie production where the director, actors, camera crew, script supervisor, and editor each have distinct jobs, OpenHuman splits its work across five components:

🖥️
React Frontend (app/src/)

The “actor on stage” — the screens, buttons, and chat interface you see and click. Built with React and runs inside a web view.

🏠
Tauri Shell (app/src-tauri/)

The “theater building” — manages the desktop window, starts/stops the core process, and bridges messages between the frontend and the Rust engine. Written in Rust + Tauri.

⚙️
Rust Core (src/)

The “director” — where all the real intelligence lives. Handles memory, integrations, agent logic, scheduling, and skills. Runs as a separate sidecar process.

🗄️
SQLite + Obsidian Vault (~/.openhuman/)

The “script supervisor's notebook” — SQLite holds structured data; Obsidian-compatible Markdown files let you browse your knowledge base.

โ˜๏ธ
External Services (Gmail, Slack, LLMs...)

The “guest stars” — third-party services the core talks to. 118+ integrations via OAuth, plus AI models for generating responses.

Where the code lives

The project is organized like a house with distinct rooms. Each folder has a clear purpose:

openhuman/                 The entire project (a monorepo)
├── app/                   Frontend: React UI + Tauri desktop shell
│   ├── src/               React components, screens, services
│   └── src-tauri/         Rust desktop host (window management, IPC bridge)
├── src/                   Rust core engine (the brain)
│   ├── openhuman/         Domain logic: memory, channels, skills, cron...
│   ├── core_server/       HTTP server, JSON-RPC handling, CLI dispatch
│   └── core/              Event bus, shared types, infrastructure
└── Cargo.toml             Rust package definition

How they're wired together

On your computer, the 🖥️ React Frontend talks to the 🏠 Tauri Shell, which supervises the ⚙️ Rust Core running as a sidecar; the core reaches out to External Services — the 🔗 Integrations and the 🤖 LLM Models.

The layering in action: how the app boots

When you open the app, React loads a chain of “providers” — each one sets up a different piece of the puzzle before the UI appears. It's like building a sandwich layer by layer:

CODE

function App() {
  return (
    <Sentry.ErrorBoundary>
      <Provider store={store}>
        <PersistGate>
          <BootCheckGate>
            <CoreStateProvider>
              <SocketProvider>
                <ChatRuntimeProvider>
                  <Router>
                    {/* Your screens appear here */}
                  </Router>
                </ChatRuntimeProvider>
              </SocketProvider>
            </CoreStateProvider>
          </BootCheckGate>
        </PersistGate>
      </Provider>
    </Sentry.ErrorBoundary>
  )
}
            
PLAIN ENGLISH

Wrap everything in an error catcher (if something crashes, show a nice error screen instead of going blank)...

Set up the global state manager (stores your user info, preferences, chat history)...

Wait for saved state to load from disk (so your settings persist between sessions)...

Check that the core engine is running and ready...

Connect to the core process for state updates...

Open a real-time socket for streaming AI responses...

Set up the chat system that talks to the AI...

Finally, show the screens and navigation!

๐Ÿ—๏ธ
Good to Know: Provider Chain Pattern

This nesting pattern is called the “Provider Chain” — each layer wraps the next, like Russian dolls. The outermost layer loads first, the innermost loads last. If you ever need to debug “why isn't my data loading?”, check whether the right provider is wrapping your component.

Check your understanding

You're building a new feature and need to decide where to put the logic. The feature involves analyzing email patterns to suggest calendar events. Where should this logic go?

If the PersistGate fails to load saved state from disk, what happens?

03

How the Pieces Talk

Components don't just sit there — they're constantly sending messages back and forth. Let's trace the communication paths that make OpenHuman work.

The postal system inside your app

Think of OpenHuman's internals like a city's postal system. The React frontend is a citizen dropping letters in a mailbox. The Tauri shell is the mail truck carrying letters between the citizen and the post office (the Rust core). Inside the post office, different departments (memory, agent, integrations) send internal memos to each other via pneumatic tubes (the event bus).

📬

JSON-RPC over HTTP

The “mail truck” — the frontend sends structured requests to the core, and the core sends structured responses back. Each request has a method name, parameters, and an ID.

๐Ÿ“ก

WebSocket (Socket.io)

The “phone line” — for real-time streaming. When the AI is typing a response, characters stream live to your screen through a persistent connection.

🔔

Event Bus (Internal)

The “pneumatic tubes” — modules inside the Rust core broadcast events. When a cron job finishes, it publishes an event, and the channel module picks it up.
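To make the "mail truck" concrete, here's a sketch of a JSON-RPC 2.0 request like the ones the frontend sends — the method name chat.send and its params are illustrative assumptions, not OpenHuman's actual schema:

```rust
// Illustrative JSON-RPC 2.0 request builder; the method "chat.send"
// and its params are assumptions, not OpenHuman's actual schema.
fn chat_request(id: u64, text: &str) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{id},"method":"chat.send","params":{{"text":"{text}"}}}}"#
    )
}
```

Because every request carries an ID, the core's response echoes that same ID, letting the frontend pair each answer with the question that prompted it.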

Trace a message from click to response

Watch what happens when you type “Summarize my latest emails” and press Enter:

The message travels 🖥️ React UI → 🏠 Tauri Shell → ⚙️ Rust Core; the core looks up context in 🗄️ SQLite, sends your question plus that context to the 🤖 LLM, and the answer streams back along the same path.

The internal memo system: Event Bus

Inside the Rust core, modules don't call each other directly. Instead, they broadcast events — like sending a memo to the entire office. Anyone who cares about that memo can act on it. For example, when a scheduled job finishes, the cron module publishes an event, and the channel module picks it up and sends a notification.

💡
Key Insight: Decoupling via Events

Notice how the Cron module doesn't need to know about Slack, and the Agent doesn't need to know about Cron. They just broadcast events and react to events. This is called decoupling, and it's what makes the codebase maintainable. When you want to add a new feature (like sending digests to Discord), you just add a new subscriber — no need to touch the existing code.
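Here's a minimal sketch of that idea — a toy event bus, not OpenHuman's actual implementation — showing how a publisher and its subscribers stay decoupled:

```rust
// Toy event bus (illustrative names, not OpenHuman's actual API).
// Publishers broadcast DomainEvents; subscribers react without the
// publisher knowing who is listening.
#[derive(Clone, Debug)]
pub enum DomainEvent {
    CronJobCompleted { job_id: String, success: bool },
}

pub struct EventBus {
    subscribers: Vec<Box<dyn Fn(&DomainEvent)>>,
}

impl EventBus {
    pub fn new() -> Self {
        Self { subscribers: Vec::new() }
    }

    // Register a callback that fires on every published event.
    pub fn subscribe(&mut self, f: impl Fn(&DomainEvent) + 'static) {
        self.subscribers.push(Box::new(f));
    }

    // Deliver an event to every subscriber, in registration order.
    pub fn publish(&self, event: &DomainEvent) {
        for subscriber in &self.subscribers {
            subscriber(event);
        }
    }
}
```

The cron module just publishes CronJobCompleted; adding a Discord digest later means one more subscribe call, with no changes to the cron code.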

What an event looks like in code

Events in OpenHuman are defined as a Rust enum called DomainEvent. Each variant is a different type of event that can happen:

CODE

pub enum DomainEvent {
    // An agent turn started
    AgentTurnStarted {
        session_id: String,
        channel: String,
    },
    // A memory entry was stored
    MemoryStored {
        key: String,
        category: String,
        namespace: String,
    },
    // A scheduled job completed
    CronJobCompleted {
        job_id: String,
        success: bool,
    },
}
            
PLAIN ENGLISH

Define all the types of events that can happen in the system...

“The AI started working on a request” — which session and which channel (Slack, chat, etc.)...

“Something was saved to memory” — what key, what category, which namespace...

“A scheduled job finished” — which job and whether it succeeded.
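A subscriber consumes these events with an ordinary match. The handler below is illustrative (the enum is restated so the sketch is self-contained):

```rust
// Re-stating the enum so this sketch compiles on its own.
pub enum DomainEvent {
    AgentTurnStarted { session_id: String, channel: String },
    MemoryStored { key: String, category: String, namespace: String },
    CronJobCompleted { job_id: String, success: bool },
}

// Illustrative handler: turn each event into a log line.
fn describe(event: &DomainEvent) -> String {
    match event {
        DomainEvent::AgentTurnStarted { session_id, channel } => {
            format!("agent turn started in {channel} (session {session_id})")
        }
        DomainEvent::MemoryStored { key, category, namespace } => {
            format!("stored {key} under {namespace}/{category}")
        }
        DomainEvent::CronJobCompleted { job_id, success } => {
            format!("job {job_id} finished (success: {success})")
        }
    }
}
```

Because the compiler checks that every variant is handled, adding a new event type forces every subscriber to decide what to do with it.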

Check your understanding

You notice that when you type a message, there's a brief delay before the AI starts responding. Which communication path is responsible for this round-trip?

You want to add a feature where cron job results are also sent to Discord. Based on how the event bus works, what's the best approach?

04

Memory & Integrations

The magic that makes OpenHuman “know” you — a personal knowledge base built from 118+ data sources, compressed and stored on your machine.

Your data, organized like a library

Imagine a library where every book is about you — your emails are in one section, your calendar events in another, your code commits in a third. A librarian (the Memory Tree system) reads every new book, writes a summary card, and files it in the right section. When you ask a question, the librarian knows exactly which books to pull.

That's essentially what OpenHuman does. Here's how the “librarian” works:

1
Connect your accounts

You click “Connect Gmail” and authorize with OAuth — one click, no passwords shared

2
Auto-fetch pulls data every 20 minutes

The core walks each active connection and pulls fresh data — new emails, calendar events, Slack messages, GitHub commits...

3
Data gets compressed into Markdown chunks

Each piece of data is converted to ≤3,000-token Markdown summaries — small enough to feed to the AI, rich enough to be useful

4
Chunks go into SQLite + Obsidian vault

Structured data in SQLite for fast search; Markdown files in an Obsidian-compatible vault you can browse
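Step 3 can be sketched as a simple chunker. This is a toy illustration, not OpenHuman's actual pipeline — it approximates one token as roughly four characters and breaks only on paragraph boundaries:

```rust
// Toy chunker (illustrative, not OpenHuman's actual pipeline): split
// text into chunks of at most `max_tokens`, approximating one token
// as ~4 characters and breaking only on paragraph boundaries.
fn chunk_markdown(text: &str, max_tokens: usize) -> Vec<String> {
    let max_chars = max_tokens * 4;
    let mut chunks = Vec::new();
    let mut current = String::new();
    for paragraph in text.split("\n\n") {
        // Start a new chunk when adding this paragraph would overflow.
        if !current.is_empty() && current.len() + paragraph.len() + 2 > max_chars {
            chunks.push(std::mem::take(&mut current));
        }
        if !current.is_empty() {
            current.push_str("\n\n");
        }
        current.push_str(paragraph);
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}
```

Note the simplification: a single paragraph longer than the budget still becomes one oversized chunk here; a real pipeline would split further or summarize.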

118+ connections, one-click setup

OpenHuman connects to the tools you already use. Each integration is exposed to the AI agent as a “tool” — a capability it can use. Think of each tool as a different hand the AI can reach out with:

📧

Communication

Gmail, Slack, Discord, Telegram. The agent can read and send messages across all your channels.

📋

Productivity

Notion, Google Calendar, Drive, Linear, Jira. Your tasks, docs, and schedules are always in context.

💻

Development

GitHub repos, commits, PRs, issues. The agent knows your codebase history and can act on it.

💰

Business

Stripe payments, analytics. Track revenue and customer data alongside your other tools.

🔗
Good to Know: Tool Surface

Each integration is exposed to the AI agent as a typed “tool” — meaning the agent can call specific actions like “send email”, “create calendar event”, or “search Notion”. The agent doesn't need to know how the tool works internally — it just calls the tool and gets a result. This is the same pattern used by MCP (the Model Context Protocol).
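In code, the tool surface boils down to a uniform interface. The trait and tool below are illustrative, not OpenHuman's actual API:

```rust
// Illustrative tool surface (trait and names are assumptions, not
// OpenHuman's actual API): the agent calls tools through one uniform
// interface and never sees their internals.
trait Tool {
    fn name(&self) -> &str;
    fn call(&self, input: &str) -> Result<String, String>;
}

struct SendEmail;

impl Tool for SendEmail {
    fn name(&self) -> &str {
        "send_email"
    }
    fn call(&self, input: &str) -> Result<String, String> {
        // A real implementation would go through the Gmail integration.
        Ok(format!("queued email: {input}"))
    }
}

// The agent picks a tool by name and invokes it.
fn dispatch(tools: &[Box<dyn Tool>], name: &str, input: &str) -> Result<String, String> {
    tools
        .iter()
        .find(|t| t.name() == name)
        .ok_or_else(|| format!("unknown tool: {name}"))?
        .call(input)
}
```

Adding a new integration means implementing the trait once; the agent's dispatch logic never changes.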

How memories get created

When new data arrives from Gmail, the components collaborate: auto-fetch pulls the messages, the compressor turns them into Markdown chunks, and the chunks are filed into SQLite and the Obsidian vault.

In code: the memory pipeline events

The Memory Tree system uses events to coordinate the pipeline. Here's what those events look like:

CODE

pub enum DomainEvent {
    // Memory events
    MemoryStored {
        key: String,
        category: String,
        namespace: String,
    },
    MemoryIngestionStarted {
        document_id: String,
        title: String,
    },
    MemoryIngestionCompleted {
        document_id: String,
        chunks_created: usize,
    },
}
            
PLAIN ENGLISH

The types of memory-related events...

“Something was saved” — what key, what category, which bucket...

“The AI started reading a document to extract knowledge” — which document, what's its title...

“The AI finished reading” — same document, and how many knowledge chunks were created.

💡
Key Insight: The Chunk Size Matters

Why ≤3,000 tokens per chunk? Because that's small enough for an LLM's context window to process efficiently, but large enough to carry meaningful information. It's like choosing the right paragraph length for a summary card in a library catalog.

Check your understanding

You connected Gmail at 9am. At 11am you ask “What did my boss email about today?” How does OpenHuman already know about the emails that arrived between 9am and 11am?

Why does OpenHuman compress data before storing it in the Memory Tree? What's the main benefit?

05

Clever Engineering

The tricks that make OpenHuman fast, cheap, secure, and reliable — patterns you can ask AI to implement in your own projects.

TokenJuice: making every word count

Think of tokens like postage stamps. Every word sent to the AI costs a stamp. Most raw data — HTML emails, API responses, web pages — is packed with formatting junk that adds stamps without adding meaning. TokenJuice strips the junk so you send fewer stamps. And before anything leaves your machine, secrets get scrubbed too — here are the redaction patterns from main.rs:

CODE

// Secret scrubbing patterns in main.rs
static SECRET_PATTERNS: Lazy<Vec<(Regex, &str)>> = Lazy::new(|| {
    vec![
        (Regex::new(r"(?i)(bearer\s+)\S+").unwrap(), "${1}[REDACTED]"),
        (Regex::new(r"(?i)(api[_-]?key[=:\s]+)\S+").unwrap(), "${1}[REDACTED]"),
        (Regex::new(r"(?i)(token[=:\s]+)\S+").unwrap(), "${1}[REDACTED]"),
        (Regex::new(r"sk-[a-zA-Z0-9]{20,}").unwrap(), "[REDACTED]"),
    ]
});
            
PLAIN ENGLISH

Define a list of secret patterns to scan for...

If you see “Bearer xyz123”, replace the token with [REDACTED]...

If you see “api_key=xyz123”, hide the key...

Same for any “token=” pattern...

And OpenAI-style “sk-” secret keys get fully redacted.

💡
Key Insight: Defense in Depth

OpenHuman doesn't just scrub secrets in one place — it scrubs them at multiple layers. The error reporting system (Sentry) scrubs secrets before sending crash reports. The TokenJuice compressor strips sensitive patterns before data touches the AI. This “defense in depth” approach means a bug in one layer won't leak your API keys.

Smart model routing: the right brain for the job

Not every question needs the smartest (and most expensive) AI model. “What time is it?” doesn't need the same brain as “Analyze my quarterly revenue trends.” OpenHuman routes each task to the right model — like a hospital routing patients to the right specialist:

🔬

Reasoning Model

For complex analysis, code generation, and multi-step tasks. Like a specialist doctor — expensive but thorough. Used when the agent needs to think deeply.

⚡

Fast Model

For quick questions, formatting, and simple tasks. Like a general practitioner — fast and efficient. Most daily interactions use this model.

๐Ÿ‘๏ธ

Vision Model

For analyzing images, screenshots, and visual content. Activated only when images are involved — no wasted capacity.

You can also run models locally via Ollama for complete privacy — no data leaves your machine at all.
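Routing itself can be as simple as a function from task features to a model tier. This sketch uses assumed tier names and criteria, not OpenHuman's actual routing logic:

```rust
// Illustrative routing sketch: tier names and criteria are
// assumptions, not OpenHuman's actual configuration.
#[derive(Debug, PartialEq)]
enum ModelTier {
    Reasoning, // complex analysis, code generation, multi-step tasks
    Fast,      // quick questions, formatting, simple tasks
    Vision,    // only when images are involved
}

fn route(has_images: bool, needs_deep_reasoning: bool) -> ModelTier {
    if has_images {
        ModelTier::Vision
    } else if needs_deep_reasoning {
        ModelTier::Reasoning
    } else {
        ModelTier::Fast
    }
}
```

Because most daily interactions fall through to the Fast tier, the expensive reasoning model is only paid for when it's actually needed.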

Security: multiple layers of protection

OpenHuman treats your data like a bank treats money — multiple vaults, multiple guards, multiple audit trails. Here are the key security layers:

Local-first storage — All data stays on your machine in SQLite + files. Nothing in the cloud unless you explicitly ask.
OAuth (no passwords) — Third-party connections use OAuth tokens, never your actual passwords. Revoke access anytime.
Secret scrubbing — API keys, bearer tokens, and secrets are automatically redacted before data reaches any AI model or error report.
send_default_pii: false — Error reporting (Sentry) is configured to never send personal information like your name, email, or IP address.
Encrypted locally — Sensitive workflow data is encrypted at rest on your device. Even if someone accesses your disk, they can't read it.

Skills: giving the agent new abilities

The agent isn't limited to built-in tools. OpenHuman has a skills system that lets it learn new tricks — like installing apps on your phone. Skills run in a sandbox (a safe, isolated space) powered by QuickJS.

1

Skill author writes a skill package (JavaScript)

2

Package is published to the skills registry on GitHub

3

Agent discovers and loads the skill at runtime

4

Skill runs in a QuickJS sandbox with limited permissions
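Step 4's "limited permissions" might be modeled like this — an illustrative capability check, not OpenHuman's actual sandbox API:

```rust
// Illustrative permission model (names assumed, not OpenHuman's actual
// sandbox API): a skill declares capabilities up front, and the host
// checks them before allowing a call out of the QuickJS sandbox.
#[derive(PartialEq)]
enum Capability {
    Network,
    Memory,
    Notifications,
}

#[allow(dead_code)]
struct SkillManifest {
    name: String,
    permissions: Vec<Capability>,
}

// The host consults the manifest before granting a capability.
fn allowed(manifest: &SkillManifest, cap: Capability) -> bool {
    manifest.permissions.contains(&cap)
}
```

A skill that never declared Network access simply can't reach the internet, no matter what its JavaScript tries to do.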

⚠️
Common Mistake: Confusing Skills with Plugins

Skills are NOT browser plugins. They don't inject JavaScript into third-party websites. They run inside the Rust core in a sandboxed QuickJS environment, with no access to external web pages. If you tell an AI to “add a skill that scrapes LinkedIn,” it would need to use the built-in scraper tool (which runs through proper channels), not inject code into LinkedIn's pages.

The big picture

You've now seen how OpenHuman works end-to-end. Here's the full architecture in one view:

🖥️

You interact with the React UI

🏠

Tauri relays messages to the core

⚙️

Rust Core orchestrates everything

🧠

Memory provides context to the agent

🤖

LLM generates intelligent responses

Final quiz

You're explaining OpenHuman's security approach to a friend who's worried about connecting their Gmail. Which argument best explains why their data is safe?

A startup wants to build an AI assistant similar to OpenHuman. They're worried about LLM costs. Which OpenHuman pattern should they adopt first?

You want to add a skill that automatically formats your Notion notes. Where does this skill's code actually run?