Portable AI workflows in simple YAML
Orchestrate LLM agents with built-in observability, cost tracking, and security.
Review PRs for security issues. Run via CLI, GitHub webhook, or CI pipeline.
name: security-review
description: Review PRs for security issues
inputs:
  - name: owner
    type: string
  - name: repo
    type: string
  - name: pr_number
    type: number
steps:
  - id: get_diff
    github.get_pull_request_diff:
      owner: "{{.inputs.owner}}"
      repo: "{{.inputs.repo}}"
      number: "{{.inputs.pr_number}}"
  - id: review
    type: llm
    model: balanced
    prompt: |
      Review this diff for security vulnerabilities.
      Flag any issues found with severity and remediation.
      {{.steps.get_diff.content}}
  - id: comment
    github.create_comment:
      body: "{{.steps.review.response}}"

name: iac-review
description: Parallel analysis of infrastructure changes
steps:
  - id: plan
    shell.run: terraform plan -no-color
    workdir: ./infrastructure
  - id: analyze
    type: parallel
    steps:
      - id: risk
        type: llm
        model: balanced
        prompt: |
          Summarize the RISK of these infrastructure changes.
          Flag any security concerns, cost increases, or breaking changes.
          {{.steps.plan.stdout}}
      - id: network
        type: llm
        model: balanced
        prompt: |
          Summarize the NETWORK OPERATIONS impact of these changes.
          Note any IP changes, security group updates, or routing modifications.
          {{.steps.plan.stdout}}
  - id: report
    type: llm
    model: fast
    prompt: |
      Create a brief executive summary combining these analyses:
      ## Risk Assessment
      {{.steps.analyze.risk.response}}
      ## Network Impact
      {{.steps.analyze.network.response}}

name: incident-response
description: Triage alerts and suggest remediation
inputs:
  - name: service
    type: string
  - name: message
    type: string
steps:
  - id: triage
    type: parallel
    steps:
      - id: logs
        shell.run: kubectl logs -l app={{.inputs.service}} --tail=50
      - id: changes
        github.list_commits:
          repo: "{{.inputs.service}}"
          since: "24h"
  - id: analyze
    type: llm
    model: balanced
    prompt: |
      Alert for {{.inputs.service}}: {{.inputs.message}}
      Recent logs:
      {{.steps.triage.logs.stdout}}
      Recent commits:
      {{.steps.triage.changes.commits}}
      Provide: 1) Root cause hypothesis 2) Suggested fix
  - id: notify
    slack.post_message:
      channel: "#incidents"
      text: "🚨 *{{.inputs.service}}*\n{{.steps.analyze.response}}"

name: oncall-handoff
description: Generate shift handoff summary
steps:
  - id: gather
    type: parallel
    steps:
      - id: incidents
        pagerduty.list_incidents:
          status: open
      - id: deploys
        github.list_deployments:
          environment: production
          since: "12h"
      - id: issues
        jira.search:
          jql: "priority = Critical AND status != Done"
  - id: summarize
    type: llm
    model: balanced
    prompt: |
      Create a concise on-call handoff:
      Open incidents: {{.steps.gather.incidents}}
      Recent deploys: {{.steps.gather.deploys}}
      Critical issues: {{.steps.gather.issues}}
      Format: What's hot, what to watch, pending actions.
  - id: post
    slack.post_message:
      channel: "#oncall"
      text: "📋 *Shift Handoff*\n{{.steps.summarize.response}}"

Everything you need for production AI workflows
YAML-First
Define complex workflows in simple, readable YAML. No SDK required.
Any LLM
Switch between Anthropic, OpenAI, Ollama, and other providers with a single config change.
Production Ready
Built-in observability, cost tracking, and security controls.
Flexible Runtime
CLI, API server, webhooks, or scheduled execution.
Connectors
GitHub, Slack, Jira integrations with zero code.
Auditable
Security teams review declarative YAML, not agent code scattered across codebases.
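The "one config change" provider swap might look like the sketch below. The key names (`llm`, `provider`, `models`) and model identifiers are illustrative assumptions, not Conductor's documented schema; only the `balanced`/`fast` aliases come from the examples above.

```yaml
# Hypothetical provider config -- key names are assumptions, not
# Conductor's documented schema.
llm:
  provider: anthropic        # swap to openai, ollama, etc.
  models:
    balanced: claude-sonnet  # the alias steps reference via `model: balanced`
    fast: claude-haiku       # the alias steps reference via `model: fast`
```

Workflows keep referring to the `balanced` and `fast` aliases, so swapping the provider requires no change to the workflow files themselves.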
Real-world use cases
See how teams use Conductor to automate their workflows.
How it works
Write YAML
Define your workflow in a simple YAML file
Configure
Set your LLM provider credentials
Run
Execute from CLI, API, or on a schedule
Iterate
Refine prompts and add more steps
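The loop above can start from something much smaller than the examples on this page. The workflow below is a hypothetical minimal starting point, with its step schema assumed from those examples:

```yaml
# hello.yaml -- a hypothetical minimal workflow, modeled on the
# examples above (step fields assumed from those samples)
name: hello
description: Smallest useful workflow
steps:
  - id: greet
    type: llm
    model: fast
    prompt: |
      Say hello to the Conductor user in one sentence.
```

From there, iterating means refining the prompt and appending steps (a connector call, a parallel block) to the same file.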
Ready to get started?
Install Conductor and run your first workflow in minutes.
go install github.com/tombee/conductor@latest