# Argo Rollout MCP Server
Moving from standard Kubernetes Deployments to progressive delivery is painful. You have to write massive Rollout YAML files, duplicate your services, and set up complex Prometheus queries just to make sure a deployment doesn't crash your system. And once a canary is running, you're constantly refreshing dashboards and running kubectl to figure out when it's safe to promote or abort.
AI assistants should be able to orchestrate this safely. But they can't, not without deeply understanding Argo Rollout structures, zero-downtime overlaps, and your live cluster state.
This server fixes that. It exposes the full progressive delivery lifecycle (K8s deployment validation, YAML generation, canary stepping, blue-green cutovers, and health monitoring) as a set of MCP tools, resources, and prompts. Any MCP-compatible AI assistant (Claude, Cline, or your own agent) can use them to manage deployments the way a senior release engineer would.
Three things make this different:

- **Zero-YAML onboarding.** The server has built-in generators. Ask the assistant to migrate an app, and it fetches the live K8s Deployment, preserves all resource limits, probes, and env vars, and generates and applies the exact Rollout CRD and Services needed directly in the cluster. No copy-pasting required.
- **Intelligent lifecycle control.** This server doesn't just push YAML. It lets the AI manage the live rollout: pausing, resuming, promoting, aborting, and linking AnalysisTemplate CRDs to Prometheus for automated health checks.
- **Built-in playbooks.** The server ships with workflows for A/B testing, cost-aware deployments, and guided canary rollouts, exposed as MCP prompts. The assistant doesn't guess; it follows battle-tested deployment playbooks baked into the protocol.
## Key Features
### Deployment & Migration
- Evaluate K8s Deployments for Rollout readiness
- Automatically convert Deployments into Rollouts (direct or workloadRef for Argo CD-managed apps)
- Convert Rollouts back into standard Deployments
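To make the workloadRef conversion path concrete, here is a minimal sketch of turning an existing Deployment manifest into a Rollout that references it. This is an illustration, not the server's actual generator; the function name, defaults, and canary steps are our own assumptions.

```python
def deployment_to_workloadref_rollout(deployment: dict, steps: list[dict]) -> dict:
    """Build an Argo Rollout that points at an existing Deployment via
    spec.workloadRef, leaving the Deployment's pod template untouched."""
    meta = deployment["metadata"]
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Rollout",
        "metadata": {"name": meta["name"], "namespace": meta.get("namespace", "default")},
        "spec": {
            "replicas": deployment["spec"].get("replicas", 1),
            "workloadRef": {
                "apiVersion": "apps/v1",
                "kind": "Deployment",
                "name": meta["name"],
            },
            "strategy": {"canary": {"steps": steps}},
        },
    }

# Hypothetical input: a trimmed-down Deployment for a staging frontend
rollout = deployment_to_workloadref_rollout(
    {"metadata": {"name": "frontend", "namespace": "staging"},
     "spec": {"replicas": 3, "template": {}}},
    steps=[{"setWeight": 20}, {"pause": {"duration": "5m"}}],
)
```

Because the Rollout only references the Deployment, the original pod template stays the single source of truth, which is why this mode suits Argo CD-managed apps.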
### Lifecycle Orchestration
- Create complex Canary, Blue-Green, and A/B configurations
- Update container images in place to trigger new rollouts
- Promote canaries to the next step, or fully to 100%
- Pause and resume active progressive rollouts safely
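Pause and resume illustrate how these lifecycle actions stay declarative: in Argo Rollouts they amount to toggling `spec.paused` on the Rollout object. A minimal sketch (the helper name is ours, not the server's API):

```python
def pause_patch(paused: bool = True) -> dict:
    """JSON merge patch toggling spec.paused on a Rollout.
    paused=True pauses the rollout; paused=False resumes it."""
    return {"spec": {"paused": paused}}

# Applying it would go through the Kubernetes CustomObjectsApi, e.g.:
# api.patch_namespaced_custom_object(
#     group="argoproj.io", version="v1alpha1", namespace="staging",
#     plural="rollouts", name="frontend", body=pause_patch(True))
```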
### Observation & Context
- Read real-time Rollout statuses (phase, ready replicas)
- View cluster-wide active traffic strategies
- Audit historical ReplicaSet revision hashes
### Validation & Safety
- Instant emergency aborts that shift traffic back to the stable version
- Wrap Prometheus queries into AnalysisTemplates
- Deploy intelligent ML-based promotion flows
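Wrapping a Prometheus query into an AnalysisTemplate is essentially manifest construction. A hedged sketch follows; the function, its defaults, and the Prometheus address are illustrative placeholders, not the server's generator.

```python
def prometheus_analysis_template(name: str, query: str, success_condition: str,
                                 address: str = "http://prometheus.monitoring:9090") -> dict:
    """Build an AnalysisTemplate that runs a Prometheus query on an
    interval and aborts the rollout after repeated failures."""
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "AnalysisTemplate",
        "metadata": {"name": name},
        "spec": {
            "metrics": [{
                "name": name,
                "interval": "30s",
                "failureLimit": 3,
                "successCondition": success_condition,
                "provider": {"prometheus": {"address": address, "query": query}},
            }],
        },
    }

# Example: fail the canary if the 5xx ratio exceeds 5%
tmpl = prometheus_analysis_template(
    "error-rate",
    'sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m]))',
    "result[0] < 0.05",
)
```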
### Multi-Cluster Support
- Connect securely across clusters and namespaces using KUBECONFIG
### Built-in Guidance
- Pre-loaded AI prompts that guide conversational Canary, Blue-Green, and Cost-Aware progressive delivery
## Architecture
The server is organized into layered service modules: tools on top, business logic in the middle, and the Python Kubernetes client at the bottom.
How it works in practice:
- An AI assistant connects to the server over HTTP (or stdio)
- It discovers the extensive Argo-centric tools and resources automatically
- When a user asks something like "Convert my staging frontend to a Canary", the assistant calls the appropriate tools in sequence: validate, generate, apply
- Every mutating action translates into declarative K8s API patches
- Results flow back to the assistant, which continuously monitors progress via resources
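For example, an "update container image" action can be expressed as an RFC 6902 JSON Patch against the Rollout's pod template. This is a sketch under our own naming; the server's actual patch shape may differ.

```python
def image_update_patch(container_index: int, image: str) -> list[dict]:
    """RFC 6902 JSON Patch replacing one container image in a Rollout's
    pod template. Changing the template is what triggers a new rollout."""
    return [{
        "op": "replace",
        "path": f"/spec/template/spec/containers/{container_index}/image",
        "value": image,
    }]

patch = image_update_patch(0, "registry.example.com/frontend:v2.1.0")
# Applied via CustomObjectsApi.patch_namespaced_custom_object with
# content type "application/json-patch+json".
```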
## Tech Stack
| Category | Technologies |
|---|---|
| Language | Python 3.12+ |
| MCP Framework | FastMCP |
| Protocol | Model Context Protocol (MCP) |
| Kubernetes | Argo Rollouts CRDs · Kubernetes Python Client |
| Transport | HTTP · stdio |
| Infrastructure | Docker · uv |
## Quick Start
You'll need a running Kubernetes cluster with the Argo Rollouts controller installed, a valid kubeconfig, and Docker or Python 3.12+. If Argo Rollouts is not installed, you can install it via the Helm MCP Server.
Docker (recommended):
docker pull talkopsai/argo-rollout-mcp-server:latest
docker run --rm -it \
-p 8768:8768 \
-v ~/.kube:/app/.kube:ro \
-e K8S_KUBECONFIG=/app/.kube/config \
talkopsai/argo-rollout-mcp-server:latest
Mount the full `~/.kube` directory (not just `config`) so certificate paths referenced in your kubeconfig (e.g. minikube, kind) are available inside the container.
Point your MCP client at it (e.g. in `mcp.json` or `.cursor/mcp.json`):

```json
{
  "mcpServers": {
    "argo-rollout": {
      "url": "http://localhost:8768/mcp",
      "description": "MCP Server for managing Argo Rollouts and K8s Progressive Delivery"
    }
  }
}
```
## Security
Never hardcode secrets in deployment configurations. Use namespace isolation where appropriate. Ensure your KUBECONFIG identity has the RBAC permissions needed for CRD manipulation. Review generated strategies (`apply=False`) before applying a real migration to the cluster.
## Project Layout
```
argo-rollout-mcp-server/
├── argo_rollout_mcp_server/
│   ├── tools/       # MCP Tools (argo, generators, orchestration)
│   ├── resources/   # Rollout, health, metrics, history, cluster
│   ├── prompts/     # Onboarding, canary, bluegreen, rolling, cost, multicluster
│   ├── services/    # Argo Rollouts service, generator, orchestration
│   ├── server/      # FastMCP setup
│   └── utils/
├── Dockerfile
├── pyproject.toml
└── README.md
```
## Next Steps
- Configuration: environment variables, Docker, and access control
- Tools: full reference for all MCP tools
- Resources and Prompts: real-time streams and guided workflows
- Common Workflows: step-by-step workflow guides
- Examples: quick reference and prompts