The Best Claude Prompts for 2026 — Calibrated for Opus 4 & Sonnet 4.5

Claude responds to prompts differently than ChatGPT. It honors XML structure more strictly, follows multi-step reasoning more reliably, and writes prose that's emotionally legible in ways GPT-5 sometimes flattens. The Originals below are hand-crafted with these differences in mind. Each one is tested on Opus 4 and Sonnet 4.5; each ships with a complete example output. Free, MIT-licensed.

Last reviewed: April 2026 · Tested on Claude Opus 4 + Sonnet 4.5.

Why Claude rewards different prompt design than ChatGPT

Five differences matter when you're crafting Claude prompts:

  1. XML structure is load-bearing, not decorative. Claude's post-training makes XML tags a strong delimiting signal; markdown headers in long prompts often produce noisier output.
  2. Long-context reasoning is unusually strong. Opus 4's 1M-token context with prompt caching means you can paste a 200-page document and ask structural questions. ChatGPT can do this but with more drift.
  3. Tone-following is more reliable. If you tell Claude "be direct, do not use platitudes," it will follow. ChatGPT often defaults back to soft hedging.
  4. Refusals are more thoughtful. Claude will tell you when a request is genuinely unsafe; it won't refuse for cosmetic reasons. This makes it ideal for grief, recovery, and difficult-life writing prompts.
  5. Code reasoning depth. For complex codebase-level tasks (refactoring, architecture review), Opus 4 outperforms GPT-5 in our internal tests.
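Point 1 in practice: a minimal sketch (Python; the tag names and sample text are illustrative, not a required schema) of wrapping each prompt section in an XML tag so Claude can tell instructions, source material, and the task apart:

```python
def build_prompt(instructions: str, document: str, question: str) -> str:
    """Wrap each prompt section in its own XML tag instead of
    using markdown headers to separate them."""
    return (
        f"<instructions>\n{instructions}\n</instructions>\n\n"
        f"<document>\n{document}\n</document>\n\n"
        f"<question>\n{question}\n</question>"
    )

prompt = build_prompt(
    "Answer using only the document. Be direct; no platitudes.",
    "Q3 revenue rose 12% year over year...",
    "What drove the revenue change?",
)
print(prompt)
```

The same structure scales to long-context work: the 200-page paste from point 2 simply goes inside the `<document>` tag.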

Top 12 Claude Originals (calibrated for Opus 4)

These are the Originals where Claude's particular strengths shine — long-context reasoning, prose precision, tonal consistency, structured analysis.

How to use these prompts in Claude

  1. Choose Opus 4 for complex tasks. Sonnet 4.5 for high-volume / fast-iteration tasks.
  2. Enable prompt caching if you'll re-run the same prompt structure with different inputs. Cached input tokens are re-read at roughly a 90% discount.
  3. Use the auto-intake feature. Each Original asks for missing fields rather than guessing.
  4. For the API: put the full XML prompt in the system message and your specific input in the user message.
  5. For Claude.ai: paste the full prompt into the message box. Long prompts work fine; Claude reads everything that fits in its context window.
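Steps 4 and 5 as a request sketch for the Anthropic Messages API (Python). The model name, prompt text, and user input are placeholders, and the actual SDK call is shown commented out so the shape of the request is the focus:

```python
# The reusable Original (full XML prompt) goes in `system`;
# the task-specific input goes in the user message.
ORIGINAL_PROMPT = "<role>You are a code review teacher.</role>"  # placeholder

request = {
    "model": "claude-opus-4",   # placeholder; use a Sonnet model for fast iteration
    "max_tokens": 2048,
    "system": ORIGINAL_PROMPT,
    "messages": [
        {"role": "user", "content": "Review this diff: ..."}  # your input
    ],
}

# With the official SDK this would be sent as:
#   import anthropic
#   client = anthropic.Anthropic()
#   reply = client.messages.create(**request)
```

Keeping the Original in `system` and only varying the user message is also what makes the prompt-caching setup from step 2 pay off.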

FAQ

Do I need Opus 4, or will Sonnet work?

Sonnet 4.5 handles 80-90% of the Originals at near-Opus quality while being much faster and cheaper. Save Opus for long-form fiction, codebase-level reasoning, and multi-step strategic decisions. Each Original lists its recommended model.

How do these compare to ChatGPT prompts?

Some Originals work better in Claude (anything tone-sensitive or long-form). Some work better in ChatGPT (anything requiring rapid iteration or web-search integration). Most are model-agnostic. See: Best ChatGPT Prompts for the parallel list.

Can I use these via Claude API?

Yes. The XML structure is API-friendly. For high-volume use, set up prompt caching: cached prompt reads are billed at roughly 10% of the base input rate. Each Original works as a system prompt, with your input as the user message.
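A caching sketch (Python): in the Messages API, `system` can be a list of content blocks, and marking the large, stable prompt block with `cache_control` lets repeat calls re-read it at the discounted cached rate. The prompt text and model name below are placeholders:

```python
# The large, stable part of the request (the Original) is marked
# cacheable; only the user message changes between calls.
STABLE_PROMPT = "<role>...</role><rules>...</rules>"  # placeholder Original

system_blocks = [
    {
        "type": "text",
        "text": STABLE_PROMPT,
        "cache_control": {"type": "ephemeral"},  # cache everything up to here
    }
]

request = {
    "model": "claude-opus-4",  # placeholder model name
    "max_tokens": 1024,
    "system": system_blocks,
    "messages": [{"role": "user", "content": "Today's input..."}],
}
```

The first call writes the cache; subsequent calls with an identical `system` block hit it, which is where the bulk of the savings comes from.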

What about Claude Code or Claude in IDEs?

The coding-related Originals (Codebase Archaeologist, Code Review Teacher) work in Claude Code, Cursor, Zed, and other Claude-powered IDEs. Paste them as system instructions or context.

More from Promptolis

Get new Originals every Friday

2-3 hand-crafted Originals per week. No spam, no upsells, unsubscribe in 1 click.

Or subscribe directly on Beehiiv →