AI-powered business leaders

Build Your AI Workforce

Etherion is an open-source platform for deploying autonomous AI agent teams that work together to achieve complex business goals.

Platform Design Goals

Database-Enforced Isolation

Row-Level Security policies in PostgreSQL enforce tenant isolation. Every connection sets app.tenant_id. Application code cannot bypass tenant boundaries.

Checklist-Based Orchestration

Explicit task checklists replace rigid loops. Tool requests require justification (what/how/why). Sequential and parallel execution modes. Real-time progress via GraphQL subscriptions.
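The checklist pattern above can be sketched in a few lines of Python. This is a minimal illustration, not Etherion's actual API: the class and field names are assumptions.

```python
from dataclasses import dataclass, field

# Illustrative sketch of checklist-based orchestration with
# what/how/why tool-request justification. Names are assumptions.

@dataclass
class ToolRequest:
    what: str   # which tool the agent wants
    how: str    # how it intends to call it
    why: str    # why the task requires it

@dataclass
class ChecklistItem:
    description: str
    done: bool = False

@dataclass
class Checklist:
    items: list = field(default_factory=list)

    def complete(self, index: int) -> None:
        self.items[index].done = True

    @property
    def progress(self) -> float:
        if not self.items:
            return 0.0
        return sum(i.done for i in self.items) / len(self.items)

plan = Checklist([ChecklistItem("Identify competitors"),
                  ChecklistItem("Collect marketing data"),
                  ChecklistItem("Write summary report")])
plan.complete(0)
plan.complete(1)
print(f"{plan.progress:.0%}")  # → 67%
```

Explicit progress state like this is what the GraphQL subscriptions stream to the UI in real time.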

Knowledge Base

Vector search runs directly in PostgreSQL using pgvector with HNSW indexes. Configurable embedding models generate high-dimensional vectors for text and images. Per-tenant tables with Row-Level Security. MinIO object storage for multimodal retrieval.
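A pgvector HNSW index approximates exactly this ranking: nearest neighbors by cosine distance (the `<=>` operator in a query like `ORDER BY embedding <=> :query LIMIT :k`). The pure-Python reference below shows what such a top-k query returns; the document names and toy 3-dimensional vectors are illustrative.

```python
import math

# Pure-Python reference for cosine-distance top-k retrieval, the
# ranking a pgvector HNSW index approximates. Vectors are toy data.

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def top_k(query, docs, k=2):
    return sorted(docs, key=lambda d: cosine_distance(query, d["vec"]))[:k]

docs = [
    {"id": "pricing.md",  "vec": [0.9, 0.1, 0.0]},
    {"id": "roadmap.md",  "vec": [0.1, 0.9, 0.0]},
    {"id": "security.md", "vec": [0.8, 0.2, 0.1]},
]
hits = top_k([1.0, 0.0, 0.0], docs)
print([d["id"] for d in hits])  # → ['pricing.md', 'security.md']
```

Note that HNSW search is approximate: it trades exact recall for speed on large collections.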

Technical Architecture

Bare-metal architecture with full infrastructure ownership. FastAPI + GraphQL API, Celery workers for async execution, PostgreSQL with pgvector and Row-Level Security, Redis for pub/sub, MinIO for object storage, and HashiCorp Vault for secrets. Declarative infrastructure with NixOS + Ansible.

Deployment Options


Self-Hosted

Deploy on your own bare-metal servers or VMs. NixOS modules provision PostgreSQL+pgvector, Redis, MinIO, HashiCorp Vault, and HAProxy. Ansible playbooks for fleet management. Complete control over data residency and infrastructure with zero cloud vendor lock-in.

Local Development

Run locally with Docker Compose or `etherion up`. PostgreSQL, Redis, and MinIO containers with named volumes. Python 3.11+ virtual environment. One-command bootstrap with `etherion bootstrap`.

Production Ready

Systemd service management. Alembic database migrations. HashiCorp Vault for secrets injection. HAProxy + Nginx for load balancing and DDoS protection. Patroni for PostgreSQL HA. FRRouting for BGP failover.

How It Works

From goal to execution in three simple steps

1

Delegate Your Goal

Start by describing your business objective in plain English. Go beyond simple tasks—assign complex, multi-step goals and provide the Orchestrator with the strategic context it needs.

Command Bar
Analyze our top three competitors' marketing from this quarter and generate a summary report
2

Observe the Workflow in Real-Time

Your private Orchestrator analyzes the goal, creates a plan, and assembles a team of specialist agents. Watch the entire reasoning process unfold in our transparent, Grok-style execution trace.

Interaction View
Thought: I need to identify the top three competitors...
Action: unified_research_tool
Observation: Found 3 competitors: CompanyA, CompanyB, CompanyC
Cost: $0.015
3

Receive the Result

The Orchestrator synthesizes the work of its specialist team into a single, cohesive final output that directly achieves your goal. Provide feedback to make your workforce even smarter over time.

Assistant

Competitor Marketing Analysis - Q4 2024

Based on my analysis of your top three competitors...

Key Findings:
  • CompanyA increased social media spend by 40%
  • CompanyB launched new content strategy
  • CompanyC focused on influencer partnerships

Complete System Architecture

Bare-metal architecture with database-enforced multi-tenancy, asynchronous job execution, and real-time updates. Full infrastructure ownership with NixOS + Ansible.

Diagram: DNS → HAProxy + Nginx → Frontend (Next.js) and API Server (FastAPI + GraphQL); the API talks to the Redis cluster, PostgreSQL (pgvector + Patroni), MinIO (per-tenant buckets), and HashiCorp Vault; Celery Beat schedules systemd-managed Celery workers, which share Redis and PostgreSQL with the API.

Knowledge Base Architecture

OAuth-secured connectors ingest data from your tools into PostgreSQL with pgvector. Configurable embedding models enable semantic search. All data is tenant-isolated with Row-Level Security.

Diagram: OAuth consent stores encrypted tokens in HashiCorp Vault; Celery Beat schedules ingestion workers that pull from Google Drive, OneDrive, Airtable, Notion, HubSpot, Jira, and Slack into PostgreSQL + pgvector; the embedding service generates vectors for HNSW-indexed semantic search, and Row-Level Security scopes all rows to app.tenant_id.

Dual-Orchestrator: The 2N+1 Loop

IO performs dual search (KB + web), evaluates teams, and enforces fail-closed tool approval. TeamOrchestrator executes the 2N+1 loop: N specialist agents work in parallel, each validating tool requests with what/how/why justification against the ToolManager registry. A final synthesis step integrates all findings into a coherent response.
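The fan-out and synthesis shape of the loop can be sketched with asyncio. This is a minimal illustration under stated assumptions: the "2N+1" accounting shown here (N delegations plus N responses, integrated by 1 synthesis step) is an interpretation of the description above, and all names are invented.

```python
import asyncio

# Minimal sketch of the parallel fan-out / synthesis shape: N
# specialists run concurrently, then one synthesis step integrates
# their findings. Names and the 2N+1 accounting are assumptions.

async def specialist(name: str, goal: str) -> str:
    await asyncio.sleep(0)              # stand-in for real LLM/tool work
    return f"{name}: findings on {goal!r}"

async def run_team(goal: str, n: int = 3) -> str:
    findings = await asyncio.gather(
        *(specialist(f"specialist-{i}", goal) for i in range(1, n + 1))
    )
    # The +1: a single synthesis step integrating all findings.
    return "SYNTHESIS:\n" + "\n".join(findings)

result = asyncio.run(run_team("competitor marketing analysis"))
print(result.splitlines()[0])  # → SYNTHESIS:
```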

Diagram: a user goal enters IO, which searches the knowledge base and the web, then evaluates and selects a team; Specialists 1…N run in parallel, each issuing what/how/why tool requests checked against the ToolManager registry (pre-approved for the team? tenant credentials available? write op confirmed by the user?) before execution; a synthesis step integrates the 2N findings into the final response.

Asynchronous Job Execution

Jobs run in the background using Celery workers and Redis as the message broker. Two worker pools handle different workloads: worker-agents for orchestration, worker-artifacts for ingestion and heavy processing. Real-time status updates stream via GraphQL subscriptions.
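The fan-out of progress events can be modeled with a toy in-memory broker. This stands in for the Redis pub/sub layer described above; the `job_trace_{job_id}` channel naming follows the source, while the `Broker` class and event shapes are illustrative assumptions.

```python
from collections import defaultdict

# Toy in-memory stand-in for Redis pub/sub: workers publish progress
# events to a per-job channel, and the GraphQL subscription layer
# relays them to the client. Only the channel naming is from the text.

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, channel, callback):
        self.subscribers[channel].append(callback)

    def publish(self, channel, event):
        for cb in self.subscribers[channel]:
            cb(event)

broker = Broker()
received = []
job_id = "42"
broker.subscribe(f"job_trace_{job_id}", received.append)

broker.publish(f"job_trace_{job_id}", {"status": "RUNNING"})
broker.publish(f"job_trace_{job_id}", {"status": "COMPLETE"})
print([e["status"] for e in received])  # → ['RUNNING', 'COMPLETE']
```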

Diagram: an executeGoal API request creates a Job row (Postgres, RLS) and enqueues a Celery task on the Redis broker, which routes it to worker-agents (orchestration) or worker-artifacts (ingestion and heavy processing); workers persist results through the repository (MinIO + PostgreSQL) and, on completion, publish to the Redis pub/sub channel job_trace_{job_id}, which feeds the GraphQL subscription with real-time updates.

MCP Tools with OAuth Security

All tools use Model Context Protocol (MCP) and connect to third-party systems via OAuth. OAuth tokens are encrypted in HashiCorp Vault. Tool calls validate against the ToolManager registry, require pre-approval for the team, and for write operations, require explicit user confirmation. Rate limiting via token bucket + Redis prevents API abuse.
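The token-bucket limiter mentioned above works as follows: each call consumes one token, and tokens refill at a fixed rate up to a capacity. This in-memory sketch shows only the algorithm; the production version, per the text, keeps bucket state in Redis so all workers share one budget.

```python
# Illustrative in-memory token bucket. The real limiter stores this
# state in Redis; class and parameter names are assumptions.

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = 0.0                      # timestamp of last refill

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last) * self.refill_per_sec,
        )
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=2, refill_per_sec=1.0)
allowed = [bucket.allow(t) for t in (0.0, 0.0, 0.0, 1.5)]
print(allowed)  # → [True, True, False, True]
```

Bursts up to the capacity pass immediately; sustained traffic is held to the refill rate.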

Diagram: a specialist's tool request passes fail-closed approval, in order: (1) registered in ToolManager? (2) pre-approved for the team? (3) tenant credentials available? (4) write op confirmed by the user? It then retrieves the OAuth token from HashiCorp Vault, passes the token-bucket rate limiter, and invokes the MCP tool against the third-party API (Slack, Jira, Gmail, etc.), returning the result to the specialist.

AI Assets Repository

Every artifact agents create is stored in MinIO and indexed in PostgreSQL. Documents, datasets, code, and media are searchable and retrievable. Full execution traces are archived as JSONL for replay and audit.

Diagram: agents read and write through the Repository Service, which stores assets (docs, data, code, media) in MinIO buckets (tnt-{tenant}-assets) and indexes them in tenant-scoped PostgreSQL tables; vector search returns semantic matches to the agents, and execution traces (replays/{job_id}/trace.jsonl) feed full-fidelity replay of LangChain messages and tool IO.

Database-Enforced Multi-Tenancy

Row-Level Security policies in PostgreSQL enforce tenant isolation at the database layer. Every connection sets app.tenant_id. Application bugs cannot cause cross-tenant data leaks.
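The pattern can be made concrete with a policy sketch and a pure-Python model of its predicate. The DDL below follows the standard Postgres RLS idiom the text describes; the table and column names (`documents`, `tenant_id`) are illustrative assumptions, not Etherion's actual schema.

```python
# Hedged sketch: the Postgres RLS pattern described above. The DDL
# and table/column names are illustrative assumptions.

POLICY_DDL = """
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON documents
    USING (tenant_id = current_setting('app.tenant_id'));
"""

def rls_visible(rows, app_tenant_id):
    """Model of the USING predicate: a connection that ran
    SET LOCAL app.tenant_id = X sees only rows with tenant_id = X."""
    return [r for r in rows if r["tenant_id"] == app_tenant_id]

rows = [{"id": 1, "tenant_id": "t1"}, {"id": 2, "tenant_id": "t2"}]
print(rls_visible(rows, "t1"))  # → [{'id': 1, 'tenant_id': 't1'}]
```

Because the filter lives in the database, a forgotten `WHERE tenant_id = ...` clause in application code cannot leak another tenant's rows.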

Diagram: an API request with a Bearer JWT passes tenant middleware, which extracts tenant_id into a ContextVar; on connection checkout the DB engine runs SET LOCAL app.tenant_id = X, and Row-Level Security policies (USING tenant_id = current_setting) filter every query automatically, returning tenant-scoped results.

Multimodal Ingestion Pipeline

PyMuPDF extracts text and images from PDFs. Configurable embedding models generate high-dimensional vectors for both text and images. All embeddings are stored in PostgreSQL with pgvector HNSW indexes for fast cosine-distance search. Files stored in MinIO with per-tenant buckets.

Diagram: an uploaded file (PDF, image, or text) lands in the tenant's MinIO bucket (tnt-{tenant}-media); a worker-artifacts Celery task runs PyMuPDF to extract text and images, the embedding service generates vectors, and PostgreSQL + pgvector stores them under an HNSW index (cosine distance) for semantic retrieval.

Tool Request Queue and Validation

All tool requests require what/how/why justification. Requests are validated in 4 steps: (1) Is it registered in ToolManager? (2) Pre-approved for this team? (3) Are tenant credentials available? (4) For write operations, confirmed by user? Blueprint creation validates tools against the registry—no hallucinated tools can enter production. Fail-closed policy ensures every tool invocation is auditable and secure.
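The four checks above can be sketched as a single fail-closed validator. This is an illustration: the registry contents, set names, and return strings are assumptions, and the real checks run against database-backed allowlists rather than in-memory sets.

```python
# Hedged sketch of the four-step fail-closed validation described
# above. Registry contents and names are illustrative assumptions.

REGISTRY = {"unified_research_tool", "slack_post_message"}
TEAM_ALLOWLIST = {"unified_research_tool", "slack_post_message"}
TENANT_CREDS = {"unified_research_tool", "slack_post_message"}
WRITE_TOOLS = {"slack_post_message"}

def validate(tool: str, user_confirmed: bool = False) -> str:
    if tool not in REGISTRY:                          # 1. registered?
        return "reject: not in registry"
    if tool not in TEAM_ALLOWLIST:                    # 2. pre-approved?
        return "reject: not in team allowlist"
    if tool not in TENANT_CREDS:                      # 3. creds available?
        return "reject: missing credentials"
    if tool in WRITE_TOOLS and not user_confirmed:    # 4. write op?
        return "pending: user confirmation required"
    return "execute"

print(validate("unified_research_tool"))  # → execute
print(validate("made_up_tool"))           # → reject: not in registry
print(validate("slack_post_message"))     # → pending: user confirmation required
```

The default path is rejection: a request executes only when every check passes, which is what makes the policy fail-closed.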

Diagram: a specialist's tool request (what/how/why) flows through the four checks in order (registered? pre-approved? credentials? write op?); any failed check rejects the request and escalates to the user or IO; write operations wait for user confirmation before execution, and results return to the specialist.

Execution Modes

The Team Orchestrator selects execution mode based on task complexity. Sequential mode runs one specialist at a time. Parallel mode (future) will run all specialists concurrently. Mode selection is logged in execution trace events.

Sequential Mode

One specialist active at a time. Tool requests handled immediately. Checklists maintained throughout execution. Current default mode.

Predictable execution order
Lower resource usage
Easier debugging

Parallel Mode

All specialists run concurrently. Tool requests queued and processed in FIFO order. Deferred for future release.

Faster completion time
Higher throughput
Complex coordination

Full-Fidelity Replay System

Every job execution is recorded with complete LangChain message lists, tool IO, and specialist delegations. Traces are archived to MinIO as JSONL and indexed in PostgreSQL for semantic search. Replay artifacts enable 100% reconstruction of any past execution.
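The JSONL archive format, one JSON object per line, is what makes streaming replay simple. The sketch below writes and reads back a trace; the event shapes are illustrative assumptions, though the event type names match those in the trace stream described above.

```python
import io
import json

# Sketch of the JSONL trace archive: one JSON object per line, so any
# past run can be streamed back event by event. Event payloads are
# illustrative assumptions.

events = [
    {"type": "TOOL_START", "tool": "unified_research_tool"},
    {"type": "TOOL_END",   "tool": "unified_research_tool"},
    {"type": "SPECIALIST_RESPONSE", "agent": "specialist-1"},
]

buf = io.StringIO()                      # stands in for the MinIO object
for event in events:
    buf.write(json.dumps(event) + "\n")

# Replay: stream the archive back line by line.
buf.seek(0)
replayed = [json.loads(line) for line in buf]
print(replayed == events)  # → True
```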

Diagram: the orchestrator runtime writes an execution trace (PostgreSQL, RLS) whose events (TOOL_START/END, SPECIALIST_REQUEST/RESPONSE) stream through Redis pub/sub (job_trace_{job_id}) to GraphQL subscriptions in real time; on job completion an archive task writes replays/{job_id}/trace.jsonl and replays/{job_id}/transcript.md to MinIO, indexed in PostgreSQL for vector search over past replays.

Authentication and OAuth

JWT-based authentication with invite-only onboarding. OAuth tokens encrypted in HashiCorp Vault. Subdomain validation enforces 8 rules, reserves 90+ system subdomains, and blocks 1662 banned words. Users cannot switch tenants after signup.
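A subdomain validator of this shape might look as follows. Note the hedging: the real system enforces 8 rules plus the reserved and banned lists; the specific rules and list entries below are representative examples only, not the actual rule set.

```python
import re

# Hedged sketch of subdomain validation. The specific rules and list
# entries are illustrative stand-ins for the real 8-rule set, the 90+
# reserved names, and the banned-word list.

RESERVED = {"api", "www", "admin", "vault"}   # stand-in reserved names
BANNED = {"spam"}                             # stand-in banned words

def validate_subdomain(name: str) -> bool:
    return (
        3 <= len(name) <= 63                                # length bounds
        and re.fullmatch(r"[a-z0-9-]+", name) is not None   # charset
        and not name.startswith("-")
        and not name.endswith("-")
        and name not in RESERVED
        and not any(word in name for word in BANNED)
    )

print(validate_subdomain("acme-corp"))  # → True
print(validate_subdomain("admin"))      # → False
```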

Diagram: password signup (email + subdomain) runs subdomain validation, creates a new tenant and a user bound to it, and issues a JWT carrying tenant_id and user_id; OAuth flows (Google, GitHub, Microsoft) validate state (tenant_id + invite_token) and store tokens in HashiCorp Vault under {tenant}/{service}/oauth_tokens; tenant middleware extracts tenant_id from the JWT and runs SET LOCAL app.tenant_id.

Local Development Workflow

Run locally with `etherion up` or Docker Compose. PostgreSQL, Redis, and MinIO containers with named volumes. Python 3.11+ virtual environment. One-command bootstrap and teardown with Etherion CLI.

Database

PostgreSQL 16 + pgvector extension
Named volumes for data persistence
Alembic migrations for schema

Local Services

Redis container for pub/sub
FastAPI dev server with hot reload
Celery worker for async tasks

Infrastructure

MinIO for object storage
pgvector for knowledge base
HashiCorp Vault for secrets

Database-Driven Custom Agents

Agent definitions are stored in PostgreSQL with tenant isolation. Each agent has a system prompt, tool allowlist, model configuration, and execution limits. The Platform Orchestrator creates agent team blueprints through conversational interaction in the Agents Foundry UI, which is strictly separated from task execution in the Threads Dashboard.

Agent Configuration

System Prompt: Defines agent behavior and expertise
Tool Names: Explicit allowlist of approved tools
Model: LLM provider and model selection
Limits: Max iterations, timeout, temperature

Agent Teams

Composition: Multiple specialist agents per team
Pre-Approved Tools: Team-level tool allowlist
Execution Policy: Max concurrent executions and timeouts
Versioning: Track changes with parent version links
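Putting the two lists above together, a database-driven definition might serialize like this. The field names mirror the bullets; the schema, values, and model choice are illustrative assumptions, not Etherion's actual tables.

```python
# Hedged sketch of a database-driven agent and team definition.
# Field names mirror the bullets above; values are assumptions.

agent = {
    "name": "market-analyst",
    "system_prompt": "You analyze competitor marketing activity.",
    "tool_names": ["unified_research_tool"],       # explicit allowlist
    "model": {"provider": "openai", "name": "gpt-4o"},
    "limits": {"max_iterations": 10, "timeout_s": 300, "temperature": 0.2},
}

team = {
    "name": "competitive-intel",
    "agents": [agent["name"]],
    "pre_approved_tools": ["unified_research_tool"],
    "execution_policy": {"max_concurrent": 2, "timeout_s": 900},
    "version": 3,
    "parent_version": 2,            # versioning via parent links
}

# An agent's allowlist should be a subset of the team's pre-approved tools.
assert set(agent["tool_names"]) <= set(team["pre_approved_tools"])
```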

Model Context Protocol Integration

MCP tools connect agents to external systems through OAuth. All write operations require explicit confirmation. Rate limiting and circuit breakers prevent API abuse. Tool definitions are database-driven with tenant-scoped allowlists.

Open Source Platform

Etherion is open source under the MIT License. The complete codebase, infrastructure modules, and documentation are available on GitHub. Deploy on your own bare-metal servers with full control over your data and infrastructure.

MIT Licensed

Full source code access. Fork, modify, and deploy without restrictions.

Self-Hosted

Deploy on your own servers. NixOS modules + Ansible playbooks included for complete infrastructure as code.

Community Driven

Agent-first contribution model. Complete technical documentation and development logs included.

Roadmap: Shipping to Production.

Core platform shipped with fully local execution. Now moving toward general availability.

Core Architecture & MVP

Complete

The foundational infrastructure, the 2N+1 Orchestrator, and the core agentic framework are built and validated.

Production Testing & Service Wiring

Complete

All services wired end-to-end (orchestrator, async workers, MCP toolchain, knowledge base, repository). Production-like environments validated, security hardened, observability instrumented.

Local Execution & Public Packages

Released

Etherion ships as fully self-contained Python packages on PyPI (etherion and etherion-tui). Install and run the entire platform locally — no cloud account required. The terminal UI provides one-command setup, OAuth provider management, agent orchestration, and live process monitoring, all backed by your own PostgreSQL, MinIO, and Redis stack.

General Availability

In Progress

Full public launch with managed cloud hosting, team collaboration, and enterprise SLA options — alongside continued support for fully self-hosted deployments.

Jonathan Nde, Founder of Etherion

Built by an Architect.

Etherion was born from a single, powerful insight: the future of AI is not just about building better tools, but about fundamentally changing our relationship with work. As a self-taught, curious, and fast-thinking systems architect, I want to empower users to move beyond tedious implementation and focus on what humans do best: high-level strategy and creativity. Etherion is the culmination of that vision: a platform designed not just for coders, but for architects.