# Workspace Configuration

Each workspace has its own `config.yaml` at `workspaces/<workspace>/config.yaml`. This file controls which pipeline stages run, their iteration budgets, LLM assignments, execution behavior, memory limits, and presets.

Workspace configuration is independent — changing one workspace never affects another.
## Full Example
```yaml
# workspaces/my-workspace/config.yaml

# =============================================================================
# Pipeline Configuration
# =============================================================================
pipeline:
  topic_refinement:
    provider: fast
    iterations: 2
    # Core stage — cannot be disabled
  research:
    provider: default
    iterations: 3
    enabled: true
  outline:
    provider: default
    iterations: 2
    enabled: true
  first_draft:
    provider: strong
    iterations: 2
    # Core stage — cannot be disabled
  content_review:
    provider: default
    iterations: 1
    enabled: true
  fact_check:
    provider: default
    iterations: 2
    enabled: true
  style_review:
    provider: default
    iterations: 1
    enabled: true
  revision:
    provider: strong
    iterations: 2
    enabled: true
  final_polish:
    provider: default
    iterations: 1
    enabled: true
  consistency_check:
    provider: fast
    iterations: 1
    enabled: true

# =============================================================================
# Execution Control
# =============================================================================
execution:
  pause_mode: "after_each_stage"

# =============================================================================
# Memory Settings
# =============================================================================
memory:
  max_identity_tokens: 2000
  max_topic_tokens: 3000
  max_stage_input_tokens: 6000
  topic_summarize_threshold: 5000

# =============================================================================
# Pipeline Presets
# =============================================================================
presets:
  quick_draft:
    description: "Fast draft with minimal review"
    overrides:
      topic_refinement: { iterations: 1 }
      research: { iterations: 1 }
      outline: { iterations: 1 }
      first_draft: { iterations: 1 }
      content_review: { iterations: 1 }
      fact_check: { enabled: false }
      style_review: { iterations: 1 }
      revision: { iterations: 1 }
      final_polish: { iterations: 1 }
      consistency_check: { enabled: false }
  thorough:
    description: "Maximum quality, higher cost and time"
    overrides:
      topic_refinement: { iterations: 3 }
      research: { iterations: 5 }
      outline: { iterations: 3 }
      first_draft: { iterations: 3 }
      content_review: { iterations: 2 }
      fact_check: { iterations: 3 }
      style_review: { iterations: 2 }
      revision: { iterations: 3 }
      final_polish: { iterations: 2 }
      consistency_check: { iterations: 2 }
  fiction:
    description: "Creative writing — no fact checking"
    overrides:
      fact_check: { enabled: false }
      research: { iterations: 1 }
  minimal:
    description: "Core agents only — topic brief + draft"
    overrides:
      research: { enabled: false }
      outline: { enabled: false }
      content_review: { enabled: false }
      fact_check: { enabled: false }
      style_review: { enabled: false }
      revision: { enabled: false }
      final_polish: { enabled: false }
      consistency_check: { enabled: false }
```
## Pipeline Configuration

The `pipeline` section defines settings for each of the 10 writing stages:

```yaml
pipeline:
  <stage_name>:
    provider: <provider_name>
    iterations: <number>
    enabled: true|false
```
### Pipeline Stages

| # | Stage name | Output artifact | Core? |
|---|---|---|---|
| 1 | `topic_refinement` | `topic_brief.md` | ✅ |
| 2 | `research` | `research_notes.md` | ❌ |
| 3 | `outline` | `outline.md` | ❌ |
| 4 | `first_draft` | `draft_v1.md` | ✅ |
| 5 | `content_review` | `content_review.md` | ❌ |
| 6 | `fact_check` | `fact_check.md` | ❌ |
| 7 | `style_review` | `style_review.md` | ❌ |
| 8 | `revision` | `draft_v2.md` | ❌ |
| 9 | `final_polish` | `final.md` | ❌ |
| 10 | `consistency_check` | `consistency_report.md` | ❌ |
Core stages (`topic_refinement` and `first_draft`) cannot be disabled. Setting `enabled: false` on a core stage is ignored.
### Stage Fields

| Field | Required | Description |
|---|---|---|
| `provider` | Yes | Name of an LLM provider defined in Global Configuration. |
| `iterations` | Yes | Maximum number of iteration passes the agent gets. Must be a positive integer. |
| `enabled` | No | Whether this stage runs. Default: `true`. Core stages are always enabled. |
### Choosing Providers for Stages

Match the provider to the task complexity:

| Provider | Best for |
|---|---|
| `fast` | Quick, low-stakes tasks — topic refinement, consistency check |
| `default` | General-purpose — research, outline, reviews |
| `strong` | High-quality output — first draft, revision |
See Global Configuration — LLM Providers for how to define providers.
## Execution Control

```yaml
execution:
  pause_mode: "after_each_stage"
```
Controls whether the pipeline pauses between stages or iterations.
| `pause_mode` value | Behavior |
|---|---|
| `after_each_stage` | Pause after each stage completes. The user clicks "Execute Next" to continue. |
| `after_each_iteration` | Pause after every iteration within a stage. Most granular control. |
| `none` | Run the entire pipeline to completion without pausing. |
The user can override the pause behavior at runtime using the web UI controls:
- Execute Next — run the next stage/iteration, then pause
- Execute All — run to completion, ignoring the configured pause mode
- Pause — pause after the current stage/iteration finishes
- Cancel — stop the pipeline
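The pause logic above can be sketched as a single decision function. This is a minimal illustration, not the actual implementation: the name `should_pause` and the `run_all_requested` flag (modeling the "Execute All" override) are assumptions for this example.

```python
# Illustrative sketch: decide whether to pause at a stage/iteration boundary,
# given the configured pause_mode. Names are assumptions, not the real API.
def should_pause(pause_mode: str, iteration_done: bool, stage_done: bool,
                 run_all_requested: bool = False) -> bool:
    """Return True if the pipeline should pause at this point."""
    if run_all_requested:              # "Execute All" ignores the configured mode
        return False
    if pause_mode == "after_each_iteration":
        return iteration_done          # pause at every iteration boundary
    if pause_mode == "after_each_stage":
        return stage_done              # pause only when a whole stage finishes
    return False                       # "none": run to completion
```

Note that `after_each_stage` does not pause at iteration boundaries inside a stage; only `after_each_iteration` does.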
## Memory Settings

```yaml
memory:
  max_identity_tokens: 2000
  max_topic_tokens: 3000
  max_stage_input_tokens: 6000
  topic_summarize_threshold: 5000
```
Controls how much context is injected into agent prompts.
| Field | Default | Description |
|---|---|---|
| `max_identity_tokens` | 2000 | Maximum tokens from identity memory (voice, style guide, author bio) injected into agent context. |
| `max_topic_tokens` | 3000 | Maximum tokens from topic memory injected into agent context. |
| `max_stage_input_tokens` | 6000 | Maximum tokens from stage input artifacts (previous stage outputs) injected into agent context. |
| `topic_summarize_threshold` | 5000 | When a topic file exceeds this token count, it is summarized before injection. |
All values must be positive integers. Higher values provide more context to agents but increase API costs and may hit model context limits.
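The interaction between `topic_summarize_threshold` and `max_topic_tokens` can be sketched as a summarize-then-clip policy. This is an illustrative assumption about how the two limits compose, not the system's actual code; `count_tokens` and `summarize` are stand-ins for whatever tokenizer and summarizer the system uses, and the word-level clipping is deliberately crude.

```python
# Illustrative sketch (assumed policy): summarize when over the threshold,
# then hard-clip to the max budget. count_tokens/summarize are stand-ins.
def prepare_topic_context(topic_text, cfg, count_tokens, summarize):
    """Apply the summarize-then-clip policy to topic memory before injection."""
    if count_tokens(topic_text) > cfg["topic_summarize_threshold"]:
        topic_text = summarize(topic_text)
    if count_tokens(topic_text) > cfg["max_topic_tokens"]:
        # Crude word-level truncation; real tokenization is model-specific.
        topic_text = " ".join(topic_text.split()[: cfg["max_topic_tokens"]])
    return topic_text
```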
## Pipeline Presets

Presets are named configurations that override pipeline settings for common use cases. Users can select a preset when creating a project.

```yaml
presets:
  <preset_name>:
    description: "Human-readable description"
    overrides:
      <stage_name>: { <field>: <value>, ... }
```
### Built-in Presets

| Preset | Description | Key changes |
|---|---|---|
| `quick_draft` | Fast draft with minimal review | All iterations set to 1, fact check and consistency check disabled |
| `thorough` | Maximum quality | Higher iteration counts across all stages |
| `fiction` | Creative writing | Fact check disabled, minimal research |
| `minimal` | Core agents only | Only topic refinement and first draft run |
### Override Precedence

When a project is created, settings are resolved in this order (highest to lowest priority):

1. **Per-project overrides** — specified by the user at project creation
2. **Preset overrides** — from the selected preset
3. **Workspace pipeline defaults** — from this config file
4. **Core stage protection** — `topic_refinement` and `first_draft` cannot be disabled regardless of overrides
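The precedence order amounts to a layered dictionary merge with a final core-stage guard. The sketch below is illustrative only; the function name `resolve_stage_config` and the dict shapes are assumptions, not the actual implementation.

```python
# Illustrative sketch of the precedence order: later updates win, then the
# core-stage guard is applied last. Names/shapes are assumptions.
CORE_STAGES = {"topic_refinement", "first_draft"}

def resolve_stage_config(stage, workspace_defaults,
                         preset_overrides=None, project_overrides=None):
    cfg = dict(workspace_defaults.get(stage, {}))        # 3. workspace defaults
    cfg.update((preset_overrides or {}).get(stage, {}))  # 2. preset overrides
    cfg.update((project_overrides or {}).get(stage, {})) # 1. project overrides
    if stage in CORE_STAGES:
        cfg["enabled"] = True                            # 4. core stage protection
    return cfg
```

For example, a project-level `provider` override beats a preset's, and `enabled: false` on `topic_refinement` is silently forced back to `true`.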
### Creating Custom Presets

Add a new entry under `presets`:

```yaml
presets:
  blog_post:
    description: "Optimized for blog posts — research + draft + light review"
    overrides:
      research: { iterations: 2 }
      outline: { iterations: 1 }
      content_review: { iterations: 1 }
      fact_check: { enabled: false }
      style_review: { iterations: 1 }
      revision: { iterations: 1 }
      final_polish: { iterations: 1 }
      consistency_check: { enabled: false }
```
## Validation

When a workspace is loaded, its configuration is validated:

- All referenced `provider` names must exist in the global `llm_providers`
- `iterations` must be a positive integer
- Core stages (`topic_refinement`, `first_draft`) cannot be disabled
- Memory token limits must be positive integers
- `pause_mode` must be one of: `after_each_stage`, `after_each_iteration`, `none`
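As a rough sketch, these rules could be checked as follows. This is not the actual validator: the function name, config dict shape, and error-message wording are assumptions for illustration.

```python
# Illustrative sketch of the validation rules above; names and messages
# are assumptions, not the real implementation.
VALID_PAUSE_MODES = {"after_each_stage", "after_each_iteration", "none"}
CORE_STAGES = {"topic_refinement", "first_draft"}

def validate_workspace_config(config, known_providers):
    """Return a list of error strings; empty list means the config is valid."""
    errors = []
    for stage, cfg in config.get("pipeline", {}).items():
        if cfg.get("provider") not in known_providers:
            errors.append(f"{stage}: unknown provider {cfg.get('provider')!r}")
        it = cfg.get("iterations")
        if not isinstance(it, int) or it < 1:
            errors.append(f"{stage}: iterations must be a positive integer")
        if stage in CORE_STAGES and cfg.get("enabled") is False:
            errors.append(f"{stage}: core stage cannot be disabled")
    for field, value in config.get("memory", {}).items():
        if not isinstance(value, int) or value < 1:
            errors.append(f"memory.{field} must be a positive integer")
    mode = config.get("execution", {}).get("pause_mode", "after_each_stage")
    if mode not in VALID_PAUSE_MODES:
        errors.append(f"invalid pause_mode {mode!r}")
    return errors
```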
Invalid configuration is reported as an error when the workspace is accessed. It does not prevent the server from starting — only the affected workspace is unavailable.
## Next Steps
- Global Configuration — LLM providers, search, and server settings
- Environment Variables — all environment variables
- Pipeline Architecture — detailed stage descriptions and execution model
- Agents Architecture — agent specifications, tools, and iteration behavior