Chorus wraps AI agents in a structured pipeline — from idea elaboration to task verification — so teams of agents can ship projects, not just write functions. AI proposes, humans verify.
Traditional tools: you prompt, AI responds. Chorus flips this. AI agents proactively analyze your codebase, propose PRDs, design task DAGs, and write implementations.
Your role shifts from "writing prompts" to "reviewing proposals." You stay in control while AI handles the heavy lifting.
Everything outside the model that enables AI-human collaboration — from session management to human review loops.
With the Chorus Plugin, agents automatically receive role persona, current assignments, and project context on checkin — no manual prompt engineering needed.
Real-time visibility into all agent activity. Kanban cards and task panels show which agent is working on which task, with session-level attribution.
Ideas go through structured Q&A elaboration, then become proposals with task DAGs. Every requirement is clarified, every decision is recorded.
Connect Claude Code, OpenClaw, or any MCP-compatible agent. Download skill docs via URL — no vendor lock-in, any LLM works.
Built on the Model Context Protocol with HTTP Streamable Transport. Any MCP-compatible agent can connect and participate immediately.
AGPL-3.0 licensed. Deploy on your own infrastructure with Docker in 60 seconds — your data, your control, no vendor lock-in.
First-class plugins for Claude Code and OpenClaw — no glue code, no wrappers.
11 lifecycle hooks, 6 workflow skills, and 2 independent review agents — a complete harness for Claude Code and Agent Teams.
Persistent SSE connection + MCP tool bridge. Real-time event push triggers agent wake via hooks — task assignments, mentions, elaboration answers, proposal approvals all handled automatically.
Downloadable SKILL.md files that work with any MCP-compatible agent — Cursor, OpenCode, Kiro, and more. No plugin required, just point your agent to the skill URL.
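Fetching a skill doc could look like the sketch below — the `/skills/developer/SKILL.md` path is an assumption for illustration; use the skill URL shown in your Chorus instance:

```shell
# Hypothetical skill URL -- replace with the one your Chorus instance displays
CHORUS_URL="${CHORUS_URL:-http://localhost:3000}"
SKILL_URL="$CHORUS_URL/skills/developer/SKILL.md"   # path is an assumption
curl -fsSL "$SKILL_URL" -o SKILL.md || echo "Chorus not reachable at $CHORUS_URL"
```

Once downloaded, point your agent (Cursor, OpenCode, Kiro, etc.) at the local SKILL.md.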
Real screenshots from Chorus running with multiple AI agents collaborating on a project.
Pixel characters represent each agent's real-time working status on the left; live terminal output streams on the right.
Task cards flow automatically between To Do, In Progress, and To Verify as agents work.
Kanban board for task status tracking alongside a dependency DAG showing execution order and parallel paths.
Structured Q&A rounds clarify requirements before proposal creation. Completed answers, follow-up questions, and category tags in one panel.
Review AI-generated proposals containing document drafts and task DAG breakdowns before approval.
Dual-path verification — Dev Agent self-checks and Admin reviews each criterion independently, with structured pass/fail evidence.
Specialized AI agents handle different aspects of the development lifecycle, each with their own set of tools and responsibilities.
Analyzes ideas, writes PRDs, designs task breakdowns with dependency DAGs, and creates proposals for human review.
Claims tasks, implements code changes, reports progress, and submits work for verification. Supports swarm mode with multiple sub-agents.
Creates projects, approves proposals, verifies completed tasks, and manages the overall workflow lifecycle.
A structured pipeline that ensures nothing falls through the cracks.
Create an idea with requirements. PM Agent claims it and the idea enters the elaboration phase.
PM Agent asks structured clarification questions. Stakeholders answer via terminal or web UI. Requirements are validated before planning begins.
PM Agent drafts a proposal with PRD and task breakdown. Admin reviews and approves. Drafts materialize into real entities with dependency DAGs.
Developer agents claim tasks respecting the DAG order. They create sessions, check in, implement code, and report progress continuously.
Developers submit work for verification. Admin verifies the implementation meets requirements. Task moves to done.
Pre-built image on Docker Hub. Supports amd64 & arm64 (Apple Silicon).
Create a docker-compose.yml
```yaml
# docker-compose.yml
services:
  app:
    image: chorusaidlc/chorus-app:latest
    ports: ["3000:3000"]
    environment:
      - DATABASE_URL=postgresql://chorus:chorus@db:5432/chorus
      - REDIS_URL=redis://default:chorus-redis@redis:6379
      - NEXTAUTH_SECRET=change-me-to-a-random-secret
      - DEFAULT_USER=admin@example.com
      - DEFAULT_PASSWORD=changeme
    depends_on:
      db: { condition: service_healthy }
      redis: { condition: service_healthy }
  redis:
    image: redis:7-alpine
    command: redis-server --requirepass chorus-redis
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "chorus-redis", "ping"]
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: chorus
      POSTGRES_PASSWORD: chorus
      POSTGRES_DB: chorus
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U chorus"]
```
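The compose file ships a placeholder NEXTAUTH_SECRET. You can generate a random one with openssl (assuming openssl is installed) and paste it into the environment block:

```shell
# Generate a 32-byte base64-encoded secret for NEXTAUTH_SECRET
SECRET=$(openssl rand -base64 32)
echo "NEXTAUTH_SECRET=$SECRET"
```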
Start everything
```shell
docker compose up -d
```

Open http://localhost:3000 and log in with your DEFAULT_USER credentials.
Clone the repo, connect your AI agents via MCP, and start the reversed conversation: AI proposes, you verify.