Most people think building a multi-agent system requires a computer science degree, a DevOps background, and three weekends of debugging infrastructure.
It does not.
It requires understanding one principle clearly.
A team of specialists always outperforms a generalist working alone.
This is as true for AI agents as it is for human organizations.
When you ask one Claude instance to research, write, review, and distribute content in the same session, you get mediocre output in every category. The context constantly shifts. The quality standards constantly conflict. The model is optimizing for too many things at once.
When you build four specialized agents with distinct roles, clear handoffs, and a master orchestrator coordinating them, you get exceptional output in every category because each agent is doing exactly one thing well.
This guide takes you from zero to running a functional 4-agent team by the end of the weekend.
Why Four Agents and Not One
Before the architecture, the principle.
The number four is not arbitrary.
Four agents represent the minimum viable team structure that covers the full cycle of knowledge work: intake and research, production, quality control, and output and distribution.
Every complex knowledge work task passes through these four phases.
A single agent context-switching between all four phases produces output that is inconsistent in quality, slow in execution, and difficult to debug when something goes wrong.
Four specialized agents produce output that is consistent because each agent has one job, fast because agents work in parallel where the workflow allows, and easy to debug because failures are isolated to the agent where they occur.
The math matters too.
A single agent must finish all four phases of one piece before it can start the next. Four agents pipeline the work: while the Production Agent drafts piece one, the Research Agent is already briefing piece two.
For a content operation producing 20 pieces per week, that pipelining gain alone justifies the architecture.
The 4-Agent Architecture
Here is the complete team structure.
Agent 1: The Research Agent
Role: Information gathering and synthesis.
Input: A topic, a question, or a brief.
Output: A structured research brief.
Never does: Writing, editing, or publishing.

Agent 2: The Production Agent
Role: Turning research briefs into finished content.
Input: The Research Agent's structured brief.
Output: A complete first draft.
Never does: Research, editing, or publishing.

Agent 3: The Quality Agent
Role: Evaluating and improving production output.
Input: The Production Agent's first draft.
Output: Either an approved draft or a specific revision brief.
Never does: Research, writing from scratch, or publishing.

Agent 4: The Distribution Agent
Role: Formatting and deploying approved content.
Input: The Quality Agent's approved draft.
Output: Content deployed to the correct platform in the correct format.
Never does: Research, writing, or quality assessment.

The Orchestrator
Role: Routing tasks between agents, managing the workflow, and handling failures.
Input: The initial task.
Output: A completed deliverable.
Knows everything the other agents are doing. Each agent knows only its own task.
Setting Up Your Environment
Before you build any agent you need three things in place.
Claude Code installed and configured.
If you do not have Claude Code installed, run:
npm install -g @anthropic-ai/claude-code
claude
Follow the authentication flow. Verify the installation worked:
claude --version
A project directory with a master CLAUDE.md.
Create your project directory:
mkdir multi-agent-system
cd multi-agent-system
Create the folder structure your agents will use:
mkdir -p inbox research-briefs drafts approved-content distribution logs agents
The inbox folder is where tasks enter the system. The Research Agent deposits its briefs in research-briefs/, the Production Agent deposits drafts in drafts/, and the Quality Agent moves approved pieces to approved-content/. The distribution folder tracks what has been published, logs records every agent action for debugging, and agents holds the system prompt file for each of the four agents.
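If you have tree installed, you can confirm the layout (plain ls works too):

tree -d -L 1
.
├── agents
├── approved-content
├── distribution
├── drafts
├── inbox
├── logs
└── research-briefs

7 directories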
The master CLAUDE.md.
Create CLAUDE.md in your project root:
# Multi-Agent System — CLAUDE.md
System Overview
This is a 4-agent content production system. Each agent has one specific role and must not perform functions outside that role.
Agent Roster
- Research Agent: Produces structured research briefs from topics
- Production Agent: Produces first drafts from research briefs
- Quality Agent: Evaluates and approves or returns drafts
- Distribution Agent: Formats and deploys approved content
Folder Structure
inbox/ — incoming task files
research-briefs/ — research agent outputs
drafts/ — production agent outputs
approved-content/ — quality agent approvals
distribution/ — deployment records
logs/ — operation logs
agents/ — agent system prompt files
Shared Standards
- Every output file must be named: YYYY-MM-DD-[type]-[topic].md
- Every agent must log its action to logs/operations.md
- Every agent must read this CLAUDE.md before starting any task
- No agent takes action outside its defined role
Quality Bar
Research: Minimum 3 sources cross-referenced. No unsourced claims.
Production: Matches voice profile. Every sentence earns its place.
Quality: Scores 8/10 or above on all criteria before approval.
Distribution: Platform-specific formatting. No generic formatting.
Hard Rules
- Never delete files. Archive to a timestamped backup folder.
- Never publish without Quality Agent approval in the file header.
- Log every action before taking it, not after.
- When uncertain: stop and flag for human review.
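The naming and logging standards are simple to follow from the shell. A minimal sketch, using a hypothetical topic slug of claude-agents:

topic="claude-agents"
file="research-briefs/$(date +%F)-research-${topic}.md"

# Hard rule: log the action before taking it
echo "[$(date -u +%FT%TZ)] Research Agent: Creating ${file}" >> logs/operations.md
touch "$file"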
Building Agent 1: The Research Agent
The Research Agent is the most important agent in your system because the quality of everything downstream depends on the quality of what it produces.
A weak research brief produces weak drafts. A strong research brief produces strong drafts. The Production Agent cannot add insights the Research Agent did not find.
The Research Agent System Prompt
Save this as agents/research-agent.md:
# Research Agent
Identity
You are a specialist research agent. Your only job is to produce Research Briefs. You never write content. You never evaluate drafts. You research and synthesize.
Trigger
When called with a topic or brief from the inbox folder.
Pre-Task Checklist
1. Read CLAUDE.md for current system context
2. Check research-briefs/ for any existing research on this topic
3. Identify what is already known before searching for new information
Research Process
1. Identify the core question the content needs to answer
2. Find the most relevant information from multiple angles
3. Cross-reference at least 3 independent sources for factual claims
4. Identify the insight most people miss on this topic
5. Find the counterintuitive angle that creates genuine interest
6. Locate 3 specific examples, statistics, or stories
7. Identify 3 potential content angles, ranked from strongest to weakest
Output Format
Save to: research-briefs/YYYY-MM-DD-research-[topic].md
CORE INSIGHT: [one sentence — the non-obvious angle]
TARGET AUDIENCE: [specific description]
SUPPORTING EVIDENCE: [3 specific examples with sources]
COUNTERINTUITIVE ANGLE: [what most people get wrong]
KEY DATA: [2-3 specific numbers or quotes]
CONTENT ANGLES: [3 ranked angles with one-sentence descriptions]
GAPS: [what this research could not answer]
Quality Standard
If the core insight is something most people already know, it fails. The insight must be genuinely non-obvious. Never include a claim you cannot support with a specific source.
Logging
Append to logs/operations.md: [TIMESTAMP] Research Agent: Completed research on [TOPIC]. Brief saved to research-briefs/[FILENAME].
Running the Research Agent
To trigger the Research Agent manually:
claude "Read CLAUDE.md and the research-agent.md skill file. Then read the task file in inbox/[TASK-FILE]. Run the research process and produce the brief."
To run it as an automated workflow via N8N, the HTTP request body looks like this:
{ "model": "claude-opus-4-5", "max_tokens": 4096, "system": "[CONTENTS OF CLAUDE.md + research-agent.md]", "messages": [{ "role": "user", "content": "Run the research process for this task: [TASK CONTENT]" }] }
Building Agent 2: The Production Agent
The Production Agent transforms research briefs into finished content.
The most critical element of this agent is the voice profile. Generic AI content fails because it sounds generic. A precisely configured voice profile produces content that sounds like you wrote it at your best.
Before you write the Production Agent system prompt, collect your 10 best-performing pieces of content. Ask Claude to analyze them and extract your patterns:
Analyze these 10 pieces of content and extract the following:
1. Average sentence length
2. Capitalization patterns (what do you capitalize strategically?)
3. Structural patterns (how do you open, develop, close?)
4. Vocabulary level and specific word choices
5. What you never do (hedges, filler phrases, etc.)
6. How you handle transitions between ideas
7. Your CTA style
Content samples: [PASTE YOUR 10 BEST PIECES]
Save that analysis. It becomes the voice profile section of your Production Agent.
The Production Agent System Prompt
Save this as agents/production-agent.md:
# Production Agent
Identity
You are a specialist content production agent. Your only job is to produce first drafts from research briefs. You never research. You never evaluate. You produce.
Trigger
When a new file appears in research-briefs/ folder.
Pre-Task Checklist
1. Read CLAUDE.md for system context and quality standards
2. Read the research brief completely before writing anything
3. Identify the strongest angle from CONTENT ANGLES in the brief
Voice Profile
[INSERT YOUR EXTRACTED VOICE PROFILE HERE]
Production Process
1. Select the strongest content angle from the research brief
2. Write the opening hook using the voice profile patterns
3. Develop the body using SUPPORTING EVIDENCE from the brief
4. Weave in the COUNTERINTUITIVE ANGLE as the core tension
5. Use KEY DATA as proof points, not as the main argument
6. Close with a CTA that fits the content type
Output Format
Save to: drafts/YYYY-MM-DD-draft-[topic].md
Include at the top of every draft:
SOURCE BRIEF: [filename of research brief used]
CONTENT ANGLE: [which angle was selected and why]
WORD COUNT: [actual word count]
PRODUCTION DATE: [date]
Quality Self-Check Before Submitting
- Does every sentence match the voice profile?
- Is the hook strong enough to stop a scroll?
- Is there at least one specific number or example per major point?
- Does the CTA tell the reader exactly what to do?
If any answer is no, revise before submitting.
Logging
Append to logs/operations.md: [TIMESTAMP] Production Agent: Completed draft for [TOPIC]. Draft saved to drafts/[FILENAME].
Building Agent 3: The Quality Agent
The Quality Agent is the gate between production and publication.
Most multi-agent systems skip this agent and wonder why their outputs are inconsistent.
Without a Quality Agent, every piece of content that exits the Production Agent goes directly to distribution regardless of quality. Good days produce good content. Bad days produce bad content. There is no floor.
With a Quality Agent, nothing gets published below a defined quality threshold. The floor is consistent because the gate is consistent.
The Evaluation Rubric
The Quality Agent evaluates every draft on five criteria:
VOICE MATCH (1-10): Does this sound exactly like the configured voice?
HOOK STRENGTH (1-10): Does the first line stop the scroll?
INFORMATION DENSITY (1-10): Does every sentence earn its place?
CTA CLARITY (1-10): Is the call to action specific and compelling?
FORMAT COMPLIANCE (1-10): Does it follow all format requirements?
Passing threshold: 8 or above on ALL five criteria.
If any criterion scores below 8:
- State which criterion failed
- State exactly what needs to change
- Return to Production Agent with a specific revision brief
- Do not provide vague feedback
If all criteria score 8 or above:
- Add APPROVED header to the file
- Move to approved-content/ folder
- Log the approval
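The gate itself is mechanical enough to sanity-check outside the agent. A minimal shell sketch, assuming the five scores are passed as arguments:

#!/bin/sh
# Usage (scores illustrative): ./gate.sh 9 8 8 10 9
threshold=8
for score in "$@"; do
  if [ "$score" -lt "$threshold" ]; then
    echo "REVISION REQUIRED: a criterion scored $score, below $threshold"
    exit 1
  fi
done
echo "QUALITY APPROVED: all five criteria scored $threshold or above"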
The Quality Agent System Prompt
Save this as agents/quality-agent.md:
# Quality Agent
Identity
You are a specialist quality control agent. Your only job is to evaluate drafts and either approve them or return them with specific revision instructions. You never write from scratch. You never research. You evaluate and direct.
Trigger
When a new file appears in drafts/ folder.
Evaluation Process
1. Read CLAUDE.md for quality standards and voice profile
2. Read the draft completely without evaluating
3. Read it again with the evaluation rubric active
4. Score each criterion honestly — never round up
Scoring Rubric
[INSERT FIVE-CRITERION RUBRIC]
Approval Output
If all criteria score 8 or above: Add to top of file:
QUALITY APPROVED
Approval Date: [DATE]
Scores: Voice [X] | Hook [X] | Density [X] | CTA [X] | Format [X]
Move file to approved-content/
Revision Output
If any criterion scores below 8: Create a revision brief in drafts/REVISION-[ORIGINAL-FILENAME].md:
REVISION REQUIRED
Failed Criterion: [CRITERION NAME] - Score: [SCORE]
Specific Issue: [EXACT PROBLEM]
Required Change: [EXACT CHANGE NEEDED]
Example of Correct Approach: [SHOW DON'T TELL]
Hard Rules
Never approve content that fails any criterion. Never give vague feedback like "make it more engaging." Be specific or the Production Agent cannot fix it.
Logging
Append to logs/operations.md: [TIMESTAMP] Quality Agent: [APPROVED/RETURNED] [FILENAME]. [IF RETURNED: Failed criterion and reason]
Building Agent 4: The Distribution Agent
The Distribution Agent is the final agent in the chain.
Its job is simple but consequential. It takes approved content and formats it correctly for each target platform, then handles the deployment.
Platform-Specific Formatting
Different platforms require genuinely different content formats.
Twitter/X: Maximum 280 characters per tweet. Threads for longer content. Short sentences. Strategic line breaks. Every tweet must stand alone.
LinkedIn: Professional adaptation. Longer sentences acceptable. Narrative structure works. First line must work as a standalone hook.
Newsletter: Full formatting with headers. HTML-compatible. Consistent section structure. Clear subject line.
The Distribution Agent knows all of these formats and applies them automatically based on which platforms are specified in the approved content header.
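One formatting rule worth checking mechanically is the 280-character tweet limit. A quick awk sketch (the file path is illustrative):

awk 'length($0) > 280 { print "Line " NR ": " length($0) " characters"; bad=1 }
     END { exit bad }' distribution/2025-01-15-twitter-example.md

Note that raw character count is only an approximation of how X counts characters (URLs, for example, are weighted differently), but it catches the obvious overruns.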
The Distribution Agent System Prompt
Save this as agents/distribution-agent.md:
# Distribution Agent
Identity
You are a specialist distribution agent. Your only job is to take approved content and format and deploy it correctly for each specified platform. You never write from scratch. You never evaluate. You format and deploy.
Trigger
When a new file appears in approved-content/ folder.
Pre-Task Checklist
1. Verify the QUALITY APPROVED header is present
2. Identify the target platforms from the content header
3. Read the platform formatting guidelines for each target
Platform Formatting Guidelines
[DEFINE YOUR SPECIFIC FORMAT REQUIREMENTS FOR EACH PLATFORM]
Distribution Process
1. Verify quality approval
2. For each target platform:
   a. Reformat content to platform specifications
   b. Verify formatting meets platform requirements
   c. Deploy via configured integration (Typefully, Buffer, etc.)
   d. Record the deployment in distribution/[DATE]-log.md
3. Update the original file header with deployment confirmation
Output
For each platform:
Create: distribution/YYYY-MM-DD-[platform]-[topic].md
Include: formatted content + deployment confirmation + timestamp
Hard Rules
Never distribute content without QUALITY APPROVED header. Never distribute to a platform without platform-specific formatting. Always record every deployment in the distribution log.
Logging
Append to logs/operations.md: [TIMESTAMP] Distribution Agent: Deployed [TOPIC] to [PLATFORMS].
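The first hard rule in this prompt can also be enforced mechanically before any deploy step runs. A minimal pre-flight sketch (the file path is illustrative):

file="approved-content/2025-01-15-article-example.md"
if ! grep -q "QUALITY APPROVED" "$file"; then
  echo "Refusing to distribute: no QUALITY APPROVED header in $file"
  exit 1
fi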
Building the Orchestrator
The Orchestrator is not a fifth agent.
It is the routing logic that connects the four agents into a coherent workflow.
In its simplest form, the Orchestrator is a Claude session that knows the full system and routes tasks between agents.
The Orchestrator System Prompt
# Orchestrator
Role
You manage a 4-agent content production system. You receive tasks, route them to the correct agent, monitor for completion, handle failures, and ensure the workflow reaches its final output.
Workflow
Task received → Research Agent → Production Agent → Quality Agent → Distribution Agent → Workflow complete
Your Responsibilities
1. Break incoming tasks into the component brief for each agent
2. Monitor each agent's output folder for completion signals
3. Pass the correct output to the next agent in sequence
4. If an agent returns a revision: route back to the correct agent
5. If an agent fails: log the failure and flag for human review
6. Confirm workflow completion when content is distributed
Failure Handling
Quality rejection → Return to Production Agent with revision brief
Research gap → Request additional research before production
Distribution failure → Log failure, alert human, do not retry automatically
You Never
- Skip the Quality Agent under any circumstances.
- Approve your own outputs. Each agent is evaluated by the next.
- Make creative decisions. You route and manage only.
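In its simplest runnable form, the routing is a shell script that invokes Claude Code once per stage, in the same style as the manual commands in this guide. A simplified sketch that runs one task straight through, with no revision loop or failure handling:

#!/bin/sh
# Usage: ./run-task.sh 2025-01-15-task-example.md  (filename illustrative)
# Add -p to each claude call for non-interactive (print) mode.
task="inbox/$1"

claude "Read CLAUDE.md and agents/research-agent.md. Process the task in $task."
claude "Read CLAUDE.md and agents/production-agent.md. Draft from the newest brief in research-briefs/."
claude "Read CLAUDE.md and agents/quality-agent.md. Evaluate the newest draft in drafts/."
claude "Read CLAUDE.md and agents/distribution-agent.md. Deploy the newest file in approved-content/."

A production orchestrator would verify each folder received the expected output before moving on, and would route revision briefs back to the Production Agent instead of proceeding.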
Running Your First End-to-End Task
With all four agents configured, here is how to run your first complete task.
Create a task file in your inbox folder:
# Task: [YOUR FIRST TOPIC]
Content Type
[Tweet thread / Article / Newsletter section]
Target Platforms
[X / LinkedIn / Newsletter]
Specific Requirements
[Any specific requirements for this piece]
Deadline
[When this needs to be live]
Trigger the Orchestrator:
claude "Read CLAUDE.md. You are the Orchestrator. A new task has arrived in inbox/[TASK-FILENAME]. Begin the workflow. Route to Research Agent first."
Watch the output folders.
research-briefs/ gets a file when the Research Agent completes.
drafts/ gets a file when the Production Agent completes.
approved-content/ gets a file when the Quality Agent approves.
distribution/ gets a file when the Distribution Agent deploys.
logs/operations.md gets an entry at every step.
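Two commands are enough to watch a run live (watch ships with most Linux distributions; on macOS, install it via Homebrew or use a shell loop):

tail -f logs/operations.md    # in one terminal: stream every agent action
watch -n 10 ls research-briefs drafts approved-content distribution    # in another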
Your first end-to-end run will take 15 to 30 minutes depending on complexity.
After 10 runs the system will feel natural.
After 50 runs it will feel indispensable.
The Compounding Effect After 30 Days
The 4-agent system does not just produce better output than a single agent.
It produces output that gets better every month because each agent accumulates context about what works.
The Research Agent learns which sources your audience responds to.
The Production Agent learns which angles drive the most engagement.
The Quality Agent learns where the threshold between good and great actually is for your specific voice.
The Distribution Agent learns which platforms your content performs best on.
None of this learning requires you to do anything beyond running the system and updating the shared CLAUDE.md with performance observations once a week.
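A weekly update can be as small as a dated note appended to CLAUDE.md (the observation below is a hypothetical example of the format):

## Performance Observations
- [DATE]: Contrarian first lines outperformed question hooks this week. Weight CONTENT ANGLES accordingly.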
The system compounds.
One person running a 4-agent team produces the output of a team of four.
With more consistency.
More speed.
And a feedback loop that makes every piece better than the last.
Build the first agent this weekend.
Add one per week.
By week four you have the full team running.
Follow @cyrilXBT for the exact CLAUDE.md templates, agent skill files, and N8N workflows that power this entire system.


