Open Source MIT License

Deploy your own AI agent fleet.

Spindrel is a self-hosted platform for deploying AI bots that manage your projects, review your code, monitor your systems, and automate your workflows.

Your entire RAG loop, silk-wrapped.

[Screenshot: Spindrel chat — orchestrator bot organizing channels, managing workspace files, and delegating tasks in a live conversation]
# Clone and run
git clone https://github.com/spindrel-dev/spindrel.git
cd spindrel
./setup.sh

# Your fleet is live at http://localhost:8000

An AI operations platform you actually own.

A code review bot that comments on every PR. A research assistant that builds knowledge over time. A project management hub with AI-managed task boards. A DevOps monitor that alerts you on Slack. Spindrel runs on your hardware with your choice of LLM — each bot gets its own personality, tools, and domain expertise, orchestrated through persistent channels with real memory.

Any LLM Provider

OpenAI, Anthropic, Gemini, Ollama, OpenRouter, LiteLLM, vLLM — mix and match providers per bot.

Self-Hosted

Your data stays on your hardware. No cloud dependencies, no data leaving your network.

Composable

Layer skills, carapaces, tools, and integrations onto any bot. Build once, reuse everywhere.

[Screenshot: Mission Control kanban board with task swimlanes and card editor]

Built for orchestration at every layer.

Clients
Web UI Slack Discord GitHub API
Spindrel Core
Context Assembly Agent Loop Tool Dispatch Task Worker Workflow Engine Heartbeat Worker
LLM Providers
OpenAI Anthropic Gemini Ollama Any OpenAI-compatible
Tools & MCP
Local Python MCP Servers Client Tools
Storage
PostgreSQL pgvector Workspaces

Everything you need for autonomous AI agents.

Multi-Agent Fleet

Deploy specialized bots — each with its own model, persona, tools, and expertise. Bots delegate tasks to each other automatically.

Perfect for engineering teams with reviewers, QA, and leads.

Carapaces

Composable expertise bundles — snap skills, tools, and behavioral instructions onto any bot at runtime.

Give any bot code-review, research, or PM expertise in one toggle. Explore Skills Hub →

Persistent Memory

File-based memory: MEMORY.md, daily logs, and reference docs. All on disk, all indexed for RAG, nothing opaque.

Research assistants that remember findings across sessions.
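Because memory lives in plain markdown files, indexing it for RAG can be as simple as chunking each file while keeping its path for provenance. A minimal sketch, assuming naive fixed-size chunking — the function name and chunk size are illustrative, not Spindrel's actual indexer:

```python
from pathlib import Path

def chunk_memory_files(root: str, chunk_chars: int = 500) -> list[dict]:
    """Split every markdown memory file under `root` into chunks
    ready for embedding. Each chunk keeps its source path so the
    bot can cite where a remembered fact came from."""
    chunks = []
    for path in sorted(Path(root).rglob("*.md")):
        text = path.read_text(encoding="utf-8")
        for start in range(0, len(text), chunk_chars):
            chunks.append({
                "source": str(path),                    # provenance for citations
                "text": text[start:start + chunk_chars],
            })
    return chunks
```

Nothing opaque here is the point: you can open `MEMORY.md` in any editor, and the index is just a derived view of files you control.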

Tasks & Scheduling

Heartbeats, recurring tasks, one-off deferred runs, cross-bot delegation — all managed by background workers.

DevOps monitors that check systems every hour automatically.

Tool System

Local Python tools, MCP servers, and client-side actions. Tool RAG selects only relevant tools per query.

Bots that create slides, manage tasks, or query databases.
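Tool RAG means the bot never sees its full tool catalog at once — only the few tools relevant to the current query. A toy stand-in using word overlap (the real system presumably ranks with embeddings; names here are hypothetical):

```python
def select_tools(query: str, tools: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank tools by word overlap between the query and each tool's
    description, returning the top_k tool names."""
    query_words = set(query.lower().split())
    ranked = sorted(
        tools,
        key=lambda name: -len(query_words & set(tools[name].lower().split())),
    )
    return ranked[:top_k]
```

Keeping the tool list short per query saves context tokens and reduces the chance of the model calling an irrelevant tool.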

Workflows

Multi-step automations with conditions, approval gates, and cross-bot coordination. Define in YAML, trigger from anywhere.

Content pipelines with draft, review, and approval steps.
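An approval gate simply pauses a run until a human signs off. A minimal engine sketch — Spindrel's real workflows are defined in YAML, so the step structure and keys here are assumptions for illustration:

```python
def run_workflow(steps: list[dict], context: dict) -> dict:
    """Execute steps in order; a step marked requires_approval pauses
    the run unless the context already carries an approval."""
    for i, step in enumerate(steps):
        if step.get("requires_approval") and not context.get("approved"):
            return {"status": "awaiting_approval", "paused_at": i}
        context = step["run"](context)          # each step transforms context
    return {"status": "done", "context": context}
```

Re-running with `approved` set resumes past the gate, which is all an approval step really is: a persisted pause plus a resume condition.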

Channel Workspaces

Per-channel file stores with 17 built-in templates, plus support for custom ones. Active files are auto-injected into context, archives stay searchable, and everything is indexed.

Each project gets its own organized file store and context.

Integrations

Slack, GitHub, Discord, Gmail, Frigate, ARR stack, and more. Build custom integrations with the plugin framework.

Connect to the services your team already uses. Browse Integrations →

Docker Sandboxes

Isolated code execution in long-lived containers. Per-bot profiles, admin locking, configurable resources.

Bots that safely run and test code in isolation.
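The isolation itself comes from standard `docker run` resource flags. A sketch of what a per-bot sandbox profile might translate into (the function, entrypoint, and defaults are illustrative, not Spindrel's configuration schema):

```python
def sandbox_command(image: str, code_path: str,
                    cpus: float = 1.0, memory: str = "512m") -> list[str]:
    """Build a `docker run` invocation for isolated code execution."""
    return [
        "docker", "run", "--rm",
        "--network", "none",            # no network access from the sandbox
        "--cpus", str(cpus),            # CPU limit
        "--memory", memory,             # RAM limit
        "-v", f"{code_path}:/work:ro",  # mount the bot's code read-only
        image, "python", "/work/main.py",
    ]
```

Long-lived containers amortize startup cost across runs; the flags above are what admin locking would pin down so a bot can't loosen its own limits.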

Smart Retrieval

Hybrid BM25 + vector search, contextual retrieval, cross-encoder reranking — all with local models, zero API cost.

Large knowledge bases where dumping everything doesn't scale.
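Hybrid search needs a way to merge the BM25 ranking with the vector ranking. Reciprocal rank fusion is a common choice for this; whether Spindrel uses RRF specifically is an assumption:

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked lists of document ids: each list contributes
    1 / (k + rank) per document, and documents are re-sorted by the sum."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

The fused list then goes to the cross-encoder reranker, which only has to score a few dozen candidates rather than the whole knowledge base.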

[Screenshot: Bot fleet listing with model, tools, and skills counts]

Works with any LLM provider. Mix and match per bot.

OpenAI
Anthropic
Google Gemini
Ollama
OpenRouter
LiteLLM
vLLM
Any /v1/chat
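Every provider above reduces to the same wire contract: an OpenAI-style `/v1/chat/completions` request. A minimal sketch of the request body (the helper name is ours, not Spindrel's):

```python
def chat_payload(model: str, system: str, user: str) -> dict:
    """Build a request body accepted by any OpenAI-compatible
    /v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }
```

POST this as JSON to `{base_url}/v1/chat/completions` with an `Authorization: Bearer …` header, and it works the same whether `base_url` points at OpenAI, a local Ollama, or a vLLM server — which is why swapping providers per bot is cheap.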

Ready to deploy your fleet?

Get started in minutes with Docker Compose.