# Configuration
CI-Copilot supports configuration via environment variables and Docker Compose. This page details how to configure the agent for both quick start and production usage.
## Prerequisites
| Requirement | Details |
|---|---|
| Python | 3.12+ |
| uv | Package manager for from-source installation |
| OpenAI API Key | Or key for another supported provider |
| GitHub PAT | Personal Access Token with repo scope |
| Docker | For Docker Compose quick start |
## Quick Start with Docker Compose (Recommended)
No cloning required. You just need two files: `docker-compose.yml` and `.env`.
### 1. Create `docker-compose.yml`
```yaml
services:
  ci-copilot:
    image: talkopsai/ci-copilot:latest
    container_name: ci-copilot
    ports:
      - "10102:10102"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GITHUB_PERSONAL_ACCESS_TOKEN=${GITHUB_PERSONAL_ACCESS_TOKEN}
      - LLM_PROVIDER=openai
      - LLM_MODEL=gpt-4o-mini
      - LLM_HIGHER_PROVIDER=openai
      - LLM_HIGHER_MODEL=gpt-5-mini
      - LLM_DEEPAGENT_PROVIDER=openai
      - LLM_DEEPAGENT_MODEL=o4-mini
      - LOG_LEVEL=INFO
    restart: unless-stopped
    networks:
      - ci-copilot-net

  talkops-ui:
    image: talkopsai/talkops:latest
    container_name: talkops-ui
    environment:
      - TALKOPS_CI_COPILOT_URL=http://localhost:10102
    ports:
      - "8080:80"
    depends_on:
      - ci-copilot
    restart: unless-stopped
    networks:
      - ci-copilot-net

networks:
  ci-copilot-net:
    driver: bridge
```
### 2. Create `.env`
```bash
OPENAI_API_KEY=your_openai_api_key_here
GITHUB_PERSONAL_ACCESS_TOKEN=your_github_pat_here
```
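Docker Compose reads this file automatically and substitutes the `${…}` references in `docker-compose.yml`. The format is simple: one `KEY=VALUE` per line, with `#` starting a comment. A minimal Python sketch of that parsing behavior, for illustration only (the real Compose parser also handles quoting and interpolation):

```python
def parse_dotenv(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments.

    Illustrative sketch of how a .env file is read -- not the actual
    Docker Compose implementation.
    """
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
# Secrets for CI-Copilot
OPENAI_API_KEY=your_openai_api_key_here
GITHUB_PERSONAL_ACCESS_TOKEN=your_github_pat_here
"""
print(parse_dotenv(sample)["OPENAI_API_KEY"])  # your_openai_api_key_here
```

Keep this file out of version control; it holds credentials.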
### 3. Start
```bash
docker compose up -d
# CI-Copilot running at http://localhost:10102
# TalkOps UI running at http://localhost:8080
```
## LLM Provider Configuration
CI-Copilot supports multiple LLM providers. Set the provider and model via environment variables.
| Variable | Purpose | Example |
|---|---|---|
| `LLM_PROVIDER` | Default provider | `openai`, `anthropic`, `google_genai`, `azure_openai` |
| `LLM_MODEL` | Default model | `gpt-4o-mini`, `gpt-4o` |
| `LLM_HIGHER_PROVIDER` | Provider for complex tasks | `openai` |
| `LLM_HIGHER_MODEL` | Model for complex tasks | `gpt-5-mini` |
| `LLM_DEEPAGENT_PROVIDER` | Provider for deep agents | `openai` |
| `LLM_DEEPAGENT_MODEL` | Model for deep agents | `o4-mini` |
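To make the three tiers concrete, here is a minimal sketch (not CI-Copilot's actual resolution code; the fallback behavior is an assumption for illustration) of reading the tiered variables, with unset tiers falling back to the default provider and model:

```python
import os

def resolve_llm(tier: str = "") -> tuple[str, str]:
    """Resolve (provider, model) for a tier: "", "HIGHER", or "DEEPAGENT".

    Falls back to LLM_PROVIDER / LLM_MODEL when the tier-specific
    variables are unset. Illustrative only -- the real resolution
    logic lives inside CI-Copilot.
    """
    prefix = f"LLM_{tier}_" if tier else "LLM_"
    provider = os.environ.get(f"{prefix}PROVIDER") or os.environ.get("LLM_PROVIDER", "openai")
    model = os.environ.get(f"{prefix}MODEL") or os.environ.get("LLM_MODEL", "gpt-4o-mini")
    return provider, model

# Example: defaults plus one tier-specific override.
os.environ["LLM_PROVIDER"] = "openai"
os.environ["LLM_MODEL"] = "gpt-4o-mini"
os.environ["LLM_HIGHER_MODEL"] = "gpt-5-mini"

print(resolve_llm())          # ('openai', 'gpt-4o-mini')
print(resolve_llm("HIGHER"))  # ('openai', 'gpt-5-mini')
```

Splitting tiers like this lets you route cheap, frequent calls to a small model while reserving a larger model for complex planning and deep-agent work.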
### Supported Providers
| Provider | Status |
|---|---|
| OpenAI | ✅ Supported |
| Anthropic | ✅ Supported |
| Google Gemini | ✅ Supported |
| AWS Bedrock | ✅ Supported |
| Azure OpenAI | ✅ Supported |
## From Source Installation
For development or customization:
### 1. Clone and Install
```bash
git clone https://github.com/talkops-ai/ci-copilot.git
cd ci-copilot
uv venv --python=3.12
source .venv/bin/activate    # On Unix/macOS
# or
.venv\Scripts\activate       # On Windows
uv pip install -e .
```
### 2. Environment Setup
```bash
cp .env.example .env
# Edit .env — at minimum set OPENAI_API_KEY and GITHUB_PERSONAL_ACCESS_TOKEN
```
All available configuration options can be found in `ci_copilot/config/default.py`. You can set any of these via your `.env` file to customize CI-Copilot's behavior.
### 3. Start the A2A Server
```bash
uv run --active ci-copilot \
  --host localhost \
  --port 10102 \
  --agent-card ci_copilot/card/ci_copilot.json
```
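The `--agent-card` flag points at the JSON card the A2A server publishes so clients can discover the agent. As a rough sketch of what loading such a card looks like (the field names below are assumptions; see the actual `ci_copilot/card/ci_copilot.json` in the repo for the real schema):

```python
import json

# Hypothetical minimal agent card -- the real ci_copilot.json will
# carry more fields (skills, capabilities, input/output modes, etc.).
card_json = """
{
  "name": "CI-Copilot",
  "url": "http://localhost:10102",
  "version": "1.0.0"
}
"""

card = json.loads(card_json)
print(f"{card['name']} listening at {card['url']}")
```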
### 4. Run TalkOps UI (Optional)
```bash
docker run -d \
  --name talkops-ui \
  -e TALKOPS_CI_COPILOT_URL=http://localhost:10102 \
  -p 8080:80 \
  talkopsai/talkops:latest
```
Set `TALKOPS_CI_COPILOT_URL` to wherever your CI-Copilot server is listening. Open http://localhost:8080 and start with something like: "Create a CI pipeline for `my-org/my-app`".