The AWS Orchestrator Agent

Writing high-quality Terraform for AWS is time-consuming. You have to juggle IAM roles, network boundaries, and state management while constantly referencing the AWS provider documentation to ensure you aren't missing a required variable.

The AWS Orchestrator Agent is built to do this for you. Instead of writing HCL by hand, you simply describe the infrastructure you want to build.


How it works: The Request Flow Pipeline

Unlike naive code-generation tools that output a wall of code and hope it compiles, the AWS Orchestrator uses a multi-stage agent architecture that plans, generates, and validates the code before returning it.

When you ask it to "Create an S3 bucket with versioning and customer-managed KMS encryption," here is what happens:

  1. Semantic Routing (SupervisorAgent):

    • The user query is captured by the Supervisor (the highest-level router). Bound by SUPERVISOR_PROMPT, its sole job is to distinguish between general chatter and infrastructure requests.
    • It intercepts the intent and calls the @tool transfer_to_terraform, passing a distilled, intent-based task_description.
  2. Context Bridging (input_transform):

    • The pipeline triggers the TFCoordinator's input_transform. This merges the user's intent with environmental configs (like TERRAFORM_WORKSPACE) to build the TFCoordinatorContext.
  3. The Planning Subgraph (tf-planner):

    • The Coordinator hands off to the PlannerSupervisorAgent, which runs three specialized sub-agents in sequence:
      • Requirements Analyser: Identifies the specific AWS services involved and maps their required attributes.
      • Security & Best Practices: Checks the request against SOC 2 and HIPAA requirements and enforces rules like "always use SSE with KMS."
      • Execution Planner: Generates the structured SKILL.md file that governs exactly how .tf files will be written.
  4. Code Generation (tf-generator):

    • Reading the SKILL.md from the virtual filesystem (/workspace/), the generator writes main.tf, variables.tf, etc.
  5. Sandbox Validation (tf-validator):

    • A utility tool (sync_workspace_to_disk) writes the virtual files to the host filesystem. The tf-validator then runs the terraform init/fmt/validate shell commands. Errors are fed back into the generator automatically.
  6. Delivery & Deployment (github-agent):

    • Validated code hits the Human-in-the-Loop (HITL) gate. Upon user approval, the GitHub MCP JIT agent pushes code using direct API endpoints, bypassing brittle shell git commands.

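The routing step above can be sketched in a few lines. Note that this is an illustrative stand-in: the real SupervisorAgent is LLM-driven and bound by SUPERVISOR_PROMPT, and the `INFRA_KEYWORDS` set, `route` function, and dictionary shapes below are assumptions made for the example, not the project's actual code.

```python
# Minimal sketch of step 1 (Semantic Routing). The real Supervisor uses an
# LLM to classify intent; a keyword check stands in for it here.
INFRA_KEYWORDS = {"terraform", "s3", "bucket", "vpc", "iam", "kms", "ec2"}

def transfer_to_terraform(task_description: str) -> dict:
    """Stand-in for the @tool that hands off to the TFCoordinator."""
    return {"agent": "tf-coordinator", "task": task_description}

def route(user_query: str) -> dict:
    """Distinguish general chatter from infrastructure requests."""
    words = set(user_query.lower().split())
    if words & INFRA_KEYWORDS:
        # Pass a distilled, intent-based task description downstream.
        return transfer_to_terraform(task_description=user_query.strip())
    return {"agent": "chat", "task": user_query}

print(route("Create an S3 bucket with versioning and KMS encryption"))
# → {'agent': 'tf-coordinator', 'task': 'Create an S3 bucket with versioning and KMS encryption'}
```

In the real pipeline, the hand-off payload then seeds the TFCoordinatorContext in step 2.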
Building a robust module relies on a continuous evaluation loop that ensures the output is syntactically valid HCL before it is handed back to you.


Built on A2A and A2UI Protocols

This agent was built from the ground up to use the A2A (Agent-to-Agent) Protocol.

Through the A2UI (Agent-to-User Interface) extension, the Orchestrator renders rich, interactive components directly in the TalkOps UI. The extract_progress utility within the Supervisor specifically intercepts __interrupt__ chunks and tool_result status codes, emitting AgentResponse streaming payloads. This allows you to see real-time tool logs directly in the TalkOps dashboard (e.g., > ⚙️ Tool Call (security_compliance_tool)) rather than waiting silently.


Quick Start

The recommended way to launch the Orchestrator locally is through Docker Compose. The stack uses a three-tier LLM architecture optimized for google_genai.

Ensure your .env contains the required variables (refer to the Configuration Guide):

# Provide mandatory LLM and GitHub tokens in .env
export GOOGLE_API_KEY="your_api_key"
export GITHUB_PERSONAL_ACCESS_TOKEN="your_github_pat"
export TERRAFORM_WORKSPACE="./workspace/terraform_modules"

Boot the API and TalkOps UI:

docker compose up -d

Open the interface at http://localhost:8080.

If you want to dive into the code or contribute to the agent architecture, check out our GitHub Repository.