In development. The AI layer is on the roadmap — what you're reading is the vision, not what's live today. See what's live in Regen →
Built for agents and humans equally
Every competitor has bolted AI onto a human-centric architecture. Fluidify rebuilt incident management from the ground up for a world where AI agents are first-class operators.
"We are not adding AI to incident management. We are rebuilding incident management for the age of agents."
What agent-native actually means
Not a feature difference but an architectural difference that compounds over time.
Agents are first-class actors
Agents have their own identity, permission scope, and audit trail equal to humans. The incident timeline records every action by actor type, not by session type. You see exactly what each agent did, why, and what it found.
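As an illustration, a timeline entry keyed by actor type might be modeled like this. The field names are assumptions for the sketch, not Fluidify's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class TimelineEntry:
    """One immutable timeline record; actor_type distinguishes agents from humans."""
    incident_id: str
    actor_type: str   # "agent" or "human" -- recorded per actor, not per session
    actor_id: str     # agent identity or engineer id, each with its own permission scope
    action: str       # what the actor did
    rationale: str    # why it acted / what it found
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# An agent's action lands in the same timeline, with the same structure, as a human's:
entry = TimelineEntry("INC-42", "agent", "triage-agent-1",
                      "queried pod restarts", "alert matched OOMKill pattern")
```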
Every action has a confidence score and a gate
No agent fires blindly. Every action carries a confidence score (0–100), a risk classification (Read-only / Low-risk / Medium-risk / Destructive), and a gate type, configurable per team, per service, per environment.
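A minimal sketch of how such a gate could be evaluated. The tier names follow the classification above; the threshold and return values are illustrative assumptions, not Fluidify defaults:

```python
# Illustrative gating logic; tiers and thresholds would be configured
# per team, per service, and per environment.
RISK_TIERS = ("read-only", "low-risk", "medium-risk", "destructive")

def gate_action(confidence: int, risk: str, *, auto_threshold: int = 90) -> str:
    """Decide the gate for a proposed agent action.

    Destructive actions always require human approval; read-only actions
    always auto-execute; everything else depends on the confidence score.
    """
    if risk not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {risk}")
    if risk == "destructive":
        return "require-approval"
    if risk == "read-only":
        return "auto-execute"
    return "auto-execute" if confidence >= auto_threshold else "propose"
```

So a high-confidence low-risk action runs on its own, while a destructive one is always held for a human regardless of confidence.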
MCP is the integration protocol
Fluidify is both an MCP consumer and an MCP server. As a consumer, the triage agent calls Datadog, Kubernetes, Linear, GitHub, and more before your phone rings. As a server, any external AI agent (Claude, GPT, your internal bot) can call Fluidify natively.
The agent mode spectrum
No forced mode. Each team configures how much autonomy their agents have, per service, per severity, per environment, from fully autonomous to fully manual. Mode is a first-class concept in the timeline and post-mortem.
Configure autonomy per team, per service, per environment. No forced mode.
| Mode | Who acts |
|---|---|
| Fully Autonomous | Agent acts, no human needed |
| Co-Pilot | Agent proposes, human approves |
| Human-Led | Human acts, agent assists |
| Fully Manual | Human only |
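One way per-service, per-severity mode resolution could look. The config keys and precedence order here are assumptions for the sketch, not Fluidify's configuration format:

```python
from enum import Enum

class AgentMode(Enum):
    FULLY_AUTONOMOUS = "agent acts, no human needed"
    CO_PILOT = "agent proposes, human approves"
    HUMAN_LED = "human acts, agent assists"
    FULLY_MANUAL = "human only"

# Hypothetical team config: the most specific (service, severity) entry wins,
# then a service-wide entry, then the team default.
MODES = {
    ("checkout", "sev1"): AgentMode.CO_PILOT,       # humans gate sev1 on checkout
    ("checkout", None): AgentMode.FULLY_AUTONOMOUS,  # agent runs lower severities
    (None, None): AgentMode.HUMAN_LED,               # team default elsewhere
}

def resolve_mode(service: str, severity: str) -> AgentMode:
    for key in ((service, severity), (service, None), (None, None)):
        if key in MODES:
            return MODES[key]
    return AgentMode.FULLY_MANUAL
```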
Senior SRE at 3am
Before the on-call engineer unlocks their phone, the agent has already done the work.
The engineer's role was a single tap. Every other action was agent-executed, agent-logged, and agent-audited.
Fluidify as an MCP server
Claude, GPT, Cursor, or your internal ops bot can call Fluidify natively. Any AI agent in your stack can open incidents, add timeline entries, execute runbooks, and query history.
Incident Operations

`create_incident(title, severity, service, context)`
`acknowledge_incident(incident_id, agent_id, note)`
`resolve_incident(incident_id, resolution_summary)`
`add_timeline_entry(incident_id, content, actor_type)`
`escalate_incident(incident_id, tier, reason)`

Intelligence Queries

`search_incident_history(query, service, days)`
`get_triage_context(incident_id)`
`match_historical_pattern(alert_fingerprint)`
`get_service_health_profile(service)`

Runbook Execution

`list_runbooks(service, trigger_type)`
`execute_runbook(runbook_id, incident_id, params)`
`get_execution_status(execution_id)`
`approve_runbook_step(execution_id, step_id, approver)`

On-Call & Scheduling

`get_current_oncall(service)`
`page_engineer(engineer_id, incident_id, channel)`
`get_schedule(team, days)`
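On the wire, MCP tool invocations are JSON-RPC 2.0 `tools/call` requests. The sketch below only shows that message shape for one of the tools above; the transport, endpoint, and argument values are assumptions, and this is not a Fluidify client library:

```python
import json
from itertools import count

_ids = count(1)  # JSON-RPC request ids

def mcp_tool_call(name: str, arguments: dict) -> str:
    """Build an MCP `tools/call` request body (MCP uses JSON-RPC 2.0).

    Transport (stdio or HTTP) is omitted; this only illustrates what an
    external agent would send to an MCP server such as Fluidify.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

request = mcp_tool_call("create_incident", {
    "title": "Checkout 5xx spike",
    "severity": "sev2",
    "service": "checkout",
    "context": "error rate climbing after latest deploy",
})
```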
The intelligence layer
Every incident makes the next one faster to resolve. Your incident intelligence stays in your infrastructure and compounds over time.
Incident patterns
Root cause clusters identified across hundreds of incidents, surfaced as priority reliability findings.
Response playbooks
Auto-generated from recurring resolution patterns — the runbook library that writes itself from practice.
Service profiles
MTTD, MTTR, top failure modes, deploy-to-incident correlation — per service.
Agent learning
Each triage outcome refines the similarity model for the next incident. Correct call or not — it learns.
Tribal knowledge
Engineer timeline notes, Slack thread findings, post-mortem insights — extracted, indexed, never lost.
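A minimal sketch of the kind of historical pattern matching described above. Token-set Jaccard similarity stands in for the real similarity model; the fingerprinting, scoring, and data shapes are illustrative assumptions only:

```python
def fingerprint(alert_text: str) -> frozenset:
    """Normalise an alert into a token set (a stand-in for a real fingerprint)."""
    return frozenset(alert_text.lower().split())

def similarity(a: frozenset, b: frozenset) -> float:
    """Jaccard similarity between two fingerprints."""
    return len(a & b) / len(a | b) if a | b else 0.0

def match_historical(alert: str, history: dict) -> tuple:
    """Return (incident_id, score) of the closest past incident."""
    fp = fingerprint(alert)
    best_id = max(history, key=lambda k: similarity(fp, fingerprint(history[k])))
    return best_id, similarity(fp, fingerprint(history[best_id]))

history = {
    "INC-101": "checkout pod OOMKilled memory limit exceeded",
    "INC-205": "payments latency p99 timeout upstream",
}
incident_id, score = match_historical("checkout pod OOMKilled after deploy", history)
```

Each resolved incident enlarges `history`, which is one concrete sense in which the next incident gets faster to triage.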
Compounding advantage
After 12 months, your triage agent knows your stack better than most of your engineers. After 24 months, it is the most valuable reliability asset you have.
From here to the vision
| Horizon | Theme | Agent capability | MCP milestone |
|---|---|---|---|
| v0.10.0 — Now | Foundation | AI assist: summarise, draft, @mention bot | Fluidify MCP Server (beta) — read incidents |
| v0.x — Near | Agent Scaffolding | Co-pilot mode: agent proposes, human gates | MCP integrations: Datadog, Linear, K8s |
| v1.x — Mid | Autonomous Ops | Triage agent, runbook execution, confidence gates | Full MCP ecosystem: write, resolve, escalate |
| v2.x — Long | Multi-Agent | Triage + Comms + Runbook agents in parallel | Fluidify as ops MCP hub for external agents |
Principles that cannot change
Agents and humans share one operational layer — no separate agent console, same timeline, same permissions.
MCP-first, not MCP-compatible — every integration is built as an MCP tool. The webhook era is over.
Immutable audit trail — every action, human or agent, is timestamped server-side and append-only.
Self-hosted is a first-class citizen — every agent capability available in cloud runs self-hosted.
BYO AI key — your incident data goes to your API key, your infrastructure. Not Fluidify's AI.
Open core, not open washing — the community edition is complete and production-ready.
Ready to run the AI layer?
Regen v0.10.0 is live. Deploy in under 5 minutes and see what's available today.