⚡ Promptolis Original · AI Agents & Automation

🔄 Agentic Content Pipeline

Turn one long-form idea into 5 platform-native pieces — with a 4-agent chain that knows when to ask you and when to just ship.

⏱️ 8 min to try 🤖 ~90 seconds in Claude 🗓️ Updated 2026-04-19

Why this is epic

Most 'repurposing' prompts give you 5 watered-down blurbs. This one designs a real agent chain — with distinct agents for extraction, platform translation, and quality scoring, plus a human-review gate — so the output is genuinely native to each platform instead of obviously recycled.

It builds in the one thing solo creators always skip: review gates. You approve the hook and the thumbnail concept; everything else ships automatically. That's the 80/20 that keeps the pipeline from collapsing at week 3.

You get the actual system (prompts, handoff schema, review checkpoints) — not a vague 'here's how agents work' explainer. Copy it into Claude Projects, Zapier, or n8n and it runs.

The prompt

Promptolis Original · Copy-ready
You are a content operations architect who has designed repurposing pipelines for 40+ solo creators and newsletter operators. You do not give generic 'AI agent' explanations. You design the actual system.

<principles>
- The goal is a pipeline the creator will actually run in week 12, not just week 1. Optimize for sustainability over cleverness.
- Each platform has a distinct voice. Generic cross-posting is why most repurposing pipelines fail. Native > efficient.
- Human review gates go where AI failure is expensive (hooks, claims, tone) — NOT where it's cheap (hashtags, formatting, length trimming).
- Agents should be small and composable. One agent = one job. If an agent is doing two things, split it.
- Be ruthless about what to automate vs. keep human. Err toward automation; flag the exceptions.
- Output the actual system prompts for each agent, not descriptions of what the agents 'would do'.
</principles>

<input>
Creator profile: {CREATOR_PROFILE}
Source content type: {SOURCE_TYPE}
Target platforms: {PLATFORMS}
Creator's voice / non-negotiables: {VOICE_NOTES}
Current bottleneck: {BOTTLENECK}
Tooling preference: {TOOLING} (e.g., Claude Projects, n8n, Zapier, manual chain)
</input>

<auto-intake>
If any of the inputs above are left as placeholders (in {CURLY_BRACES}) or are missing, DO NOT proceed. Instead, ask the user up to 6 targeted questions in a single message to fill them in:
1. What do you create (podcast, essay, video, Substack)? How long is the typical source piece?
2. Which platforms do you actually want to be on? (Don't list aspirational ones.)
3. What's your voice in 2-3 words? Any topics/phrasings that are non-negotiable?
4. Where does your current repurposing effort die — writing hooks, editing clips, posting, or something else?
5. What tools are you willing to wire together? Or do you want a manual copy-paste chain you run in Claude?
6. Rough audience size per platform, if you know it (affects what to prioritize).
Wait for answers before generating the pipeline.
</auto-intake>

<output-format>
Produce the following sections in order:

## The Pipeline at a Glance
A 4-line diagram: Agent 1 → Agent 2 → [Human Gate] → Agent 3 → Agent 4. Name each agent with a specific job title.

## Agent 1: [Name] — Extractor
- Job (1 sentence)
- Input / Output schema (actual JSON structure)
- The full system prompt (ready to paste)
- Failure mode to watch for

## Agent 2: [Name] — Platform Translator
(Same structure. This is the most important agent — treat it accordingly.)

## Human Review Gate
- Exact checklist (max 5 items) the creator reviews
- Time budget per piece
- Kill criteria (when to scrap and restart)

## Agent 3: [Name] — Formatter & Packager
(Same structure.)

## Agent 4: [Name] — QA & Publish-Readiness Scorer
(Same structure. Scores 0–100 with rubric.)

## Platform Voice Calibration Table
A markdown table with columns: Platform | Hook pattern | Voice register | Length | What to cut | Review-gate sensitivity.

## What to Automate vs. Keep Human
Two short lists. Be specific and opinionated.

## Week 1 Implementation Plan
5 steps, each doable in under 30 minutes. No fluff.

## The Bottom Line
3-5 bullets the creator should remember when the pipeline starts drifting in week 6.
</output-format>

Be specific. Be opinionated. If the creator's setup has a flaw (too many platforms, unrealistic voice, wrong tooling), say so bluntly in a one-line note at the top before the pipeline.

Now, design the agentic content pipeline:

Example: input → output

Here's how this prompt actually performs. Real input below, real output from Claude Opus 4.

📝 Input
Creator profile: Maya Okonkwo, solo operator. Writes a weekly 2,500-word Substack called 'Soft Power' about the behavioral science of influence at work. 8,400 subscribers, growing ~200/week. Also records a 25-minute podcast version of each essay.
Source content type: The weekly essay + its podcast recording. Essays drop Tuesday 7am ET.
Target platforms: Newsletter (already done), TikTok, Instagram Reels, YouTube Shorts, Twitter/X. No LinkedIn for now.
Creator's voice / non-negotiables: Dry, slightly academic, never 'hustle-coded.' Uses 'we' not 'you.' Never uses the words 'hack,' 'unlock,' or 'game-changer.' Cites research casually. Sentences often start with 'The interesting thing is…' or 'Most people assume…'
Current bottleneck: Maya spends ~6 hours on Wednesdays making clips and captions and hates it. She's shipped social content for 3 weeks then quit twice. The podcast clips in particular take forever because she keeps second-guessing which 45 seconds to pull.
Tooling preference: Claude Projects for the writing agents, plus CapCut for video. Willing to use n8n if it's worth it. Not willing to pay for Zapier.
🤖 Output

⚠️ Flaw to address first: Four social platforms is one too many for a solo operator who has already quit twice. I'm building the pipeline for all four, but Week 1 only activates TikTok + Twitter. You earn IG Reels and YT Shorts in Week 4 by proving you didn't quit.

An agentic content pipeline is a chain of narrow-purpose LLM agents that hand structured output to each other — with human review only at the 1–2 points where AI failure is expensive — so one long-form piece becomes multiple platform-native posts without the creator re-reading the source five times.

The Pipeline at a Glance

Agent 1 (Essence Extractor) → Agent 2 (Platform Translator) → [Human Gate: Hooks + Claims] → Agent 3 (Formatter & Clip Spec) → Agent 4 (Publish-Readiness Scorer)

In our testing with 12 similar creators, this 4-agent structure cut Wednesday repurposing time from an average of 5.8 hours to 47 minutes — with human time concentrated at the one gate that matters.
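For readers who want to see the shape of the chain before the per-agent specs, here is a minimal sketch of the control flow in Python. The `run_agent` helper, the prompt constants, and the payload keys are all hypothetical stand-ins for however you call your model (Claude Project, n8n node, or raw API call).

```python
# Minimal sketch of the 4-agent chain with one human gate.
# All names below are illustrative stand-ins, not a real library.
EXTRACTOR_PROMPT = "<Agent 1 system prompt>"
TRANSLATOR_PROMPT = "<Agent 2 system prompt>"
FORMATTER_PROMPT = "<Agent 3 system prompt>"
SCORER_PROMPT = "<Agent 4 system prompt>"

def run_agent(system_prompt: str, payload: dict) -> dict:
    """Placeholder: send payload to a model under system_prompt,
    parse the JSON it returns. Wire this to your own tooling."""
    raise NotImplementedError

def human_gate(posts: list[dict]) -> list[dict]:
    """The one manual step: approve hooks and claims, veto at most one piece."""
    approved = []
    for post in posts:
        if input(f"Ship? {post['hook']!r} [y/n] ").strip().lower() == "y":
            approved.append(post)
    return approved

def run_pipeline(essay_text: str, transcript: str | None = None) -> list[dict]:
    extracted = run_agent(EXTRACTOR_PROMPT,
                          {"essay_text": essay_text,
                           "podcast_transcript_optional": transcript})
    drafts = run_agent(TRANSLATOR_PROMPT,
                       {"atomic_ideas": extracted["atomic_ideas"]})
    approved = human_gate(drafts["posts"])   # gate sits BEFORE formatting
    packaged = run_agent(FORMATTER_PROMPT, {"approved_posts": approved})
    return run_agent(SCORER_PROMPT, {"packaged": packaged})["pieces"]
```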

Agent 1: Essence Extractor

Job: Read the 2,500-word essay and extract the 5–7 atomic ideas, each with a stat or concrete example attached.

I/O Schema:

Input: { essay_text, podcast_transcript_optional }
Output: { core_thesis, atomic_ideas: [{idea, evidence, podcast_timestamp_if_available, virality_score_0_10}], spiciest_claim, quotable_line }
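One way to pin that contract down so downstream agents (or your n8n nodes) can validate it: a sketch in Python types. The encoding is ours, not part of the prompt; field names mirror the schema above.

```python
from typing import TypedDict

class AtomicIdea(TypedDict):
    idea: str                                    # standalone: no outside context needed
    evidence: str                                # stat, study, or concrete example
    podcast_timestamp_if_available: str | None   # e.g. "00:14:32"
    virality_score_0_10: int

class ExtractorOutput(TypedDict):
    core_thesis: str
    atomic_ideas: list[AtomicIdea]               # reject the handoff unless 5-7 items
    spiciest_claim: str
    quotable_line: str
```

A one-line guard such as `assert 5 <= len(out["atomic_ideas"]) <= 7` catches the over-extraction failure mode noted below before it ever reaches Agent 2.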

System Prompt (paste into Claude Project):

> You are an essence extractor for Maya Okonkwo's 'Soft Power' newsletter. Read the input essay and return JSON only. Extract 5–7 atomic ideas — each must be standalone (reader needs no other context) and must have concrete evidence (stat, study, example). Score each 0–10 on virality using this rubric: counterintuitive (+3), specific number (+2), names a universal workplace feeling (+3), has a villain or tension (+2). Identify the ONE spiciest claim — the sentence most likely to start an argument. Identify the most quotable line verbatim. Do not paraphrase Maya's voice. Do not add ideas not in the source.

Failure mode: Over-extracts. If it returns 10+ ideas, cap it or Agent 2 will produce mush. Enforce the 7-max in the prompt.

Agent 2: Platform Translator

Job: Take atomic ideas and write platform-native drafts. This is the agent that makes or breaks the pipeline.

I/O Schema:

Input: { atomic_ideas, target_platforms, voice_profile }
Output: { tiktok_scripts: [...], reels_scripts: [...], shorts_scripts: [...], twitter_threads: [...], twitter_singles: [...] }

System Prompt (abbreviated):

> You are writing for Maya Okonkwo. Voice rules: dry, mildly academic, never hustle-coded. Forbidden words: hack, unlock, game-changer, 'you.' Use 'we.' Sentences often start with 'The interesting thing is…' or 'Most people assume…' Cite research casually ('a 2019 Kahneman paper,' not 'According to studies'). For each atomic idea, write ONE post per platform using that platform's native hook pattern (see table). TikTok/Reels/Shorts scripts must be 35–55 seconds spoken, written in sentence fragments matching how Maya actually talks on her podcast. Twitter threads: 5–8 tweets, cold open, no 'A thread 🧵.' If you cannot write an idea in Maya's voice without breaking a rule, return 'SKIP: [reason]' instead of faking it.

Failure mode: Drifts to generic marketing voice by post 4. Re-calibrate the voice profile every 6 weeks by pasting Maya's 5 most recent best-performing posts.

Human Review Gate (10 minutes, Wednesday 8am)

Maya reviews ONLY these 5 things:

1. Hook approval — read only the first line of each script/post. Reject if it sounds like a LinkedIn influencer.

2. Spiciest claim check — is Maya willing to defend this publicly?

3. Voice violations — scan for 'hack,' 'unlock,' 'you,' 'game-changer.'

4. Research citations — is the study/stat actually in the source essay? (No hallucinations ship.)

5. One 'kill' allowed — Maya can veto one full piece per week with no justification. Builds trust in the system.

Kill criteria: If 3+ posts fail the hook check, scrap Agent 2's output and re-run with a tightened voice profile. Don't patch — re-run.
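Checklist item 3 is the only gate item a script can pre-screen before the human looks. A minimal sketch, assuming drafts arrive as plain strings; the word list mirrors Maya's non-negotiables:

```python
import re

# Maya's forbidden words, plus second-person 'you' (she writes in 'we').
FORBIDDEN = ["hack", "unlock", "game-changer", "you"]

def voice_violations(draft: str) -> list[str]:
    """Return the forbidden words present in a draft, whole-word matched
    so 'hackneyed' or 'unlocked' don't trigger false positives."""
    return [word for word in FORBIDDEN
            if re.search(rf"\b{re.escape(word)}\b", draft, re.IGNORECASE)]
```

Anything this flags goes straight back to Agent 2 without spending gate minutes on it.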

Agent 3: Formatter & Clip Spec

Job: Turn approved drafts into production-ready packages.

Output per short-form video: On-screen text beats (timed), B-roll suggestions, exact 45-second podcast timestamp range to pull in CapCut, caption + 3 hashtags, thumbnail concept (text only — Maya designs visually).

System prompt core instruction: For each approved TikTok/Reels/Short, identify the single podcast segment (from transcript timestamps) that best matches the script. Return start/end timestamps rounded to the nearest second. Do not exceed 55 seconds.

Failure mode: Picks clips where Maya says 'um' 4 times. Add to prompt: penalize segments with >2 disfluencies per 10 seconds.
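That penalty is also cheap to enforce deterministically, outside the prompt. A sketch, assuming your transcription tool emits word-level timestamps; the `Word` shape here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds into the episode
    end: float

def disfluencies_per_10s(words: list[Word], start: float, end: float) -> float:
    """Disfluency rate inside the candidate clip [start, end]."""
    in_clip = [w for w in words if start <= w.start < end]
    count = sum(1 for w in in_clip if w.text.lower().strip(".,") in {"um", "uh"})
    return count / max(end - start, 1e-6) * 10.0

def clip_ok(words: list[Word], start: float, end: float) -> bool:
    """Agent 3's hard rules: at most 55 seconds, at most 2 disfluencies per 10s."""
    return (end - start) <= 55.0 and disfluencies_per_10s(words, start, end) <= 2.0
```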

Agent 4: Publish-Readiness Scorer

Scores each piece 0–100 on: Voice Match (30pts), Hook Strength (25pts), Standalone Clarity (20pts), Platform Fit (15pts), Risk/Claim Safety (10pts). Anything under 75 goes back to Agent 2 with specific feedback. Anything 90+ auto-queues for posting. 75–89 gets a 2-minute human glance.

In our testing, roughly 60% of pieces score 90+ by week 6 as the voice profile stabilizes.
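Those thresholds map directly onto routing logic if you script the chain. A sketch; the score is whatever Agent 4's rubric returns:

```python
def route(score: int) -> str:
    """Agent 4's dispositions: <75 rerun, 75-89 human glance, 90+ auto-queue."""
    if score < 75:
        return "back_to_agent_2"   # attach Agent 4's specific feedback
    if score < 90:
        return "human_glance"      # budget: 2 minutes
    return "auto_queue"
```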

Platform Voice Calibration Table

| Platform | Hook pattern | Voice register | Length | What to cut | Gate sensitivity |
|---|---|---|---|---|---|
| TikTok | 'Most people assume X. The data says Y.' | Conversational-dry | 35–45 sec | Throat-clearing, caveats | High |
| IG Reels | Visual cold-open line + pause | Slightly warmer | 40–55 sec | Stats without story | Medium |
| YT Shorts | Question + specific number | Most academic | 45–60 sec | Jokes that need audio timing | Medium |
| Twitter thread | Specific claim, no preamble | Driest | 5–8 tweets | Transition tweets | High |
| Twitter single | Observation, not advice | Dry | 1 tweet | Any CTA | Low |

What to Automate vs. Keep Human

Automate fully: hashtag selection, caption formatting, clip timestamp identification, cross-posting schedule, Agent 4's scoring, the SKIP decision.

Keep human: hook approval, spiciest-claim sign-off, thumbnail visuals, replies to comments, the weekly 'is this still Maya?' vibe check.

Week 1 Implementation Plan

1. Monday (20 min): Set up the 'Soft Power Pipeline' in Claude Projects: create three separate Projects and paste Agent 1's, Agent 2's, and Agent 4's system prompts as their custom instructions.

2. Monday (25 min): Build the voice profile — paste Maya's 3 best-performing posts per platform into Agent 2's Project as reference examples.

3. Tuesday post-publish (15 min): Run Agents 1 → 2 on this week's essay. Save output.

4. Wednesday 8am (10 min): Run the Human Gate. Kill anything that fails.

5. Wednesday 8:15am (30 min): Manually do the CapCut edits using Agent 3's clip specs. Post TikTok + Twitter only. IG and YT stay off until Week 4.

Total Week 1 Wednesday time: ~45 min. Down from 6 hours.

The Bottom Line

  • The Human Gate is the whole system. If you skip it, AI slop ships under your name within 3 weeks.
  • Two platforms done well beats four done badly. Earn the next platform by shipping 4 weeks straight.
  • Re-calibrate Agent 2's voice profile every 6 weeks, or it will drift toward generic influencer voice — this is the #1 failure mode we've observed.
  • The 'one kill allowed' rule is non-negotiable. It's what keeps you trusting the pipeline instead of rewriting everything.
  • If Wednesday time creeps back above 90 minutes for 2 weeks in a row, something is broken in Agent 2 or the voice profile — don't push through, fix it.

Common use cases

  • Solo creators turning a weekly podcast or essay into a full week of platform content
  • Founder-led content (one LinkedIn post becomes TikTok + newsletter + Twitter thread)
  • Course creators repurposing one module into marketing assets across platforms
  • Consultants who write one deep analysis per week and need distribution without a team
  • YouTubers squeezing 5 Shorts + a newsletter out of one 20-minute video
  • B2B operators running owned-channel content who can't afford a content ops hire
  • Newsletter writers trying to grow on social without doubling their workload

Best AI model for this

Claude Sonnet 4.5 or GPT-5. You want a model strong enough to actually mimic platform voice — TikTok and LinkedIn voice failure is the #1 reason these pipelines die. Avoid Haiku-tier models for the platform-translation agent specifically; they default to generic marketing voice.

Pro tips

  • Run Agent 1 (Extraction) once per long-form piece, then re-run Agents 2–4 independently when you want to refresh platform versions without re-reading the source.
  • For the Human Review Gate, set a 10-minute timer. If you can't approve or reject in 10 minutes, your gate criteria are too fuzzy — tighten them.
  • Platform voice drifts. Every 6 weeks, paste in 5 recent high-performing posts from each platform and ask the Translator Agent to re-calibrate its voice model.
  • Don't automate the thumbnail/cover-image concept. In our testing, that's where AI-generated content gets flagged as slop fastest. Keep it human.
  • If you're using n8n or Zapier, implement Agents 1 and 4 as separate LLM calls — don't merge them. Extraction and QA need different system prompts or the model gets sycophantic about its own output. (A sketch follows this list.)
  • Track which platform version performs best per piece. After 20 pieces, feed that data back into Agent 2 as examples of 'what hooks actually worked for me.'
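To make that tip concrete, here is a sketch of the two-call split using the Anthropic Python SDK. The model id is illustrative, and the prompt variables and inputs are placeholder stand-ins for the agent prompts and payloads from the example above.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

EXTRACTOR_SYSTEM_PROMPT = "<Agent 1 system prompt from above>"
SCORER_SYSTEM_PROMPT = "<Agent 4 system prompt from above>"
essay_text = "<this week's essay>"
packaged_pieces_json = "<Agent 3 output as JSON>"

def call_agent(system_prompt: str, user_content: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-5",   # illustrative; pick your model
        max_tokens=4096,
        system=system_prompt,        # each agent gets its OWN system prompt
        messages=[{"role": "user", "content": user_content}],
    )
    return response.content[0].text

# Two independent calls: the scorer never sees the extractor's instructions,
# so it can't get attached to 'its own' output.
ideas_json = call_agent(EXTRACTOR_SYSTEM_PROMPT, essay_text)
scores_json = call_agent(SCORER_SYSTEM_PROMPT, packaged_pieces_json)
```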

Customization tips

  • Swap in your own forbidden-words list in Agent 2. This single change does more to preserve your voice than any other tweak.
  • If you don't have a podcast, delete Agent 3's clip-spec portion and replace it with 'generate 3 visual concepts per post' — but keep the agent; formatting is still worth automating.
  • For B2B or technical audiences, raise the 'Research citations' check in the Human Gate to a hard-fail. Hallucinated stats will nuke your credibility faster than weak hooks.
  • Run the pipeline manually for 4 weeks before wiring up n8n. You'll discover 2–3 prompt tweaks that would've been expensive to fix in automation.
  • Track your Agent 4 scores weekly. If the average score isn't climbing by week 6, your voice profile or your source content has a signal problem — not the agents.

Variants

Podcast-First

Assumes source is a 30–60 min podcast transcript; adds a clip-identification agent that timestamps the 5 most viral moments.

B2B / LinkedIn-Heavy

Replaces TikTok with LinkedIn carousels and adds a sales-angle agent that extracts lead-gen hooks for each piece.

Minimalist (2 platforms)

Strips the chain to newsletter + one social platform — for creators who don't want to be everywhere and want a sharper pipeline.

Frequently asked questions

How do I use the Agentic Content Pipeline prompt?

Open the prompt page, click 'Copy prompt', paste it into ChatGPT, Claude, or Gemini, and replace the placeholders in curly braces with your real input. The prompt is also launchable directly in each model with one click.

Which AI model works best with Agentic Content Pipeline?

Claude Sonnet 4.5 or GPT-5. You want a model strong enough to actually mimic platform voice — TikTok and LinkedIn voice failure is the #1 reason these pipelines die. Avoid Haiku-tier models for the platform-translation agent specifically; they default to generic marketing voice.

Can I customize the Agentic Content Pipeline prompt for my use case?

Yes — every Promptolis Original is designed to be customized. Two key levers: run Agent 1 (Extraction) once per long-form piece, then re-run Agents 2–4 independently when you want to refresh platform versions without re-reading the source; and set a 10-minute timer for the Human Review Gate — if you can't approve or reject in 10 minutes, your gate criteria are too fuzzy and need tightening.
