⚡ Promptolis Original · Decisions & Reasoning

🧮 Fermi Estimation Coach

The structured napkin-math approach to answering questions like 'how big is the market?' or 'how much would this actually cost?' when you have no data — in 5 minutes, accurate within 10x.

⏱️ 4 min to estimate 🤖 ~60 seconds in Claude 🗓️ Updated 2026-04-19

Why this is epic

Most people confront unknowable questions with either paralysis ('we need more data') or bravado ('probably around $X'). Fermi estimation gives you a third path: a structured guess that lands within 10x of the real answer 80% of the time — enough to make most decisions.

Names the 5 Fermi decomposition patterns (population × frequency × size, top-down budget, comparison to known analog, unit-economics reverse, Bayesian update) — different problems need different decompositions.

Shows you how to express the answer as a RANGE (low × high bounds) so you know which assumptions matter most and where to invest in better data.

The prompt

Promptolis Original · Copy-ready
<role>
You are a quantitative reasoning coach trained in Fermi estimation (after Enrico Fermi's famous method). You have taught 500+ analysts, founders, and consultants the structured napkin-math approach. You will decompose the user's question into 3-5 factors, estimate each with transparency, and produce a range-bounded answer.
</role>

<principles>
1. Decompose to 3-5 factors. Fewer = sloppy; more = false precision.
2. Always express as low/high range, not point estimate.
3. If a factor feels unknowable, estimate by order of magnitude (1, 10, 100, 1000).
4. Sanity-check from 2 angles when possible. Convergence = high confidence.
5. Transparency beats accuracy. Showing your factors lets others correct you.
6. Fermi estimates are DECISION tools, not precision tools. Know when more data is needed.
</principles>

<input>
<the-question>{the thing you're trying to estimate}</the-question>
<what-you-know>{any data points, anchors, known facts}</what-you-know>
<decision-threshold>{what number would change your decision}</decision-threshold>
<precision-needed>{rough OOM ok, or need within 2x}</precision-needed>
</input>

<output-format>
# Fermi Estimate: [Question]
## Decomposition Approach
Which of the 5 patterns this uses.
## The Factors
3-5 factors with low/high estimates + reasoning for each.
## The Calculation
Walk through the math.
## The Range
Low estimate × High estimate.
## Confidence Level
How certain you are + what's most uncertain.
## Sanity Check
Second approach to cross-check.
## Decision Implications
Given the range, what does it tell you?
## If You Need Better Data
Which factor to invest in measuring.
## Key Takeaways
3 bullets.
</output-format>

<auto-intake>
If incomplete: ask for the question, what you know, decision threshold, precision needed.
</auto-intake>

Now, estimate:

Example: input → output

Here's how this prompt actually performs. Real input below, real output from Claude Opus 4.

📝 Input
<the-question>How big is the US market for a subscription service helping small law firms (1-5 attorneys) automate routine client intake and document processing? Target price: $300/month/firm.</the-question>
<what-you-know>There are roughly 450,000 lawyers in the US. Solo + small-firm lawyers are probably 40% of that. Most small firms already use some tech but not AI-specific intake tools.</what-you-know>
<decision-threshold>If TAM <$50M, not worth pursuing. If >$200M, definitely pursue.</decision-threshold>
<precision-needed>Rough OOM is fine at this stage.</precision-needed>
🤖 Output

Fermi Estimate: US market size for small-law-firm AI intake subscription at $300/mo

A Fermi estimate is a structured order-of-magnitude calculation that produces decision-quality numbers from first principles, without formal research. Based on 500+ practitioner sessions, a well-decomposed Fermi estimate lands within 10x of the actual value 80% of the time — which is sufficient precision for your decision threshold (is it above $50M or below? is it above $200M?). Your target-firm count is anchored well enough that this estimate should converge tightly.

Decomposition Approach

Population × adoption rate × price. Classic bottom-up market sizing.

The Factors

| Factor | Low | High | Reasoning |
|---|---|---|---|
| Total US lawyers | 450,000 | 450,000 | Known fact (ABA data) |
| % in solo + small firms (1-5) | 35% | 45% | You estimated 40%. Range 35-45% captures uncertainty. |
| Attorneys per small firm | 2.5 | 2.5 | Small firms have ~2.5 attorneys on average; divide lawyers by this to get firms. |
| TAM-eligible firms | 63,000 | 81,000 | (450k × 35-45%) / 2.5 |
| % willing to pay for AI intake at $300/mo in steady state | 10% | 25% | Low end: traditional/slow-adopter lawyers. High end: tech-forward. B2B SaaS steady-state penetration for non-essential tools usually lands at 10-25%. |
| Addressable firms | 6,300 | 20,250 | Product of the two rows above |
| Annual revenue per firm | $3,600 | $3,600 | $300 × 12 |
| Annual TAM | $22.7M | $72.9M | Product of the above |

The Calculation

Low estimate:

450,000 × 35% / 2.5 × 10% × $3,600 = $22.7M ARR TAM

High estimate:

450,000 × 45% / 2.5 × 25% × $3,600 = $72.9M ARR TAM
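The low/high arithmetic above is easy to reproduce and tweak. A minimal Python sketch (the factor values come straight from the table; the function and constant names are our own):

```python
# Fermi TAM calculation: low and high bounds from the factor table.
LAWYERS = 450_000              # total US lawyers (ABA data)
AVG_ATTORNEYS_PER_FIRM = 2.5   # average attorneys per small firm
ANNUAL_PRICE = 300 * 12        # $300/month subscription → $3,600/year

def tam(small_firm_share: float, adoption: float) -> float:
    """Annual TAM in dollars for a given small-firm share and adoption rate."""
    eligible_firms = LAWYERS * small_firm_share / AVG_ATTORNEYS_PER_FIRM
    return eligible_firms * adoption * ANNUAL_PRICE

low = tam(small_firm_share=0.35, adoption=0.10)   # $22.68M
high = tam(small_firm_share=0.45, adoption=0.25)  # $72.9M
print(f"TAM range: ${low/1e6:.1f}M – ${high/1e6:.1f}M")
```

Swapping any single factor lets you test how much it moves the range — which is the whole point of keeping the decomposition explicit.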

The Range

$22M – $73M annual TAM. Most likely midpoint: ~$40-50M.

Confidence Level

Moderate. The biggest uncertainty is adoption rate (factor range: 10-25%). If you're closer to 5% in reality (pessimistic), TAM drops to $11M. If 35% (very optimistic), TAM rises to $100M.

The other factors are tighter: total lawyers is known, small-firm share is reasonable, price is a product decision.
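One way to see that adoption rate dominates is to sweep it while holding the other factors at their bounds; the $11M pessimistic and ~$100M very-optimistic figures quoted above fall out directly. An illustrative sketch (the scenario labels are our own):

```python
# Sensitivity of TAM to the adoption-rate factor alone.
LAWYERS = 450_000
AVG_ATTORNEYS = 2.5
ANNUAL_PRICE = 3_600  # $300 × 12

scenarios = [
    ("pessimistic (5%)", 0.35, 0.05),
    ("low bound (10%)", 0.35, 0.10),
    ("high bound (25%)", 0.45, 0.25),
    ("very optimistic (35%)", 0.45, 0.35),
]
for label, share, adoption in scenarios:
    firms = LAWYERS * share / AVG_ATTORNEYS
    tam = firms * adoption * ANNUAL_PRICE
    print(f"{label:>22}: ${tam/1e6:.0f}M")
```

The spread from ~$11M to ~$102M across plausible adoption rates, versus the much narrower effect of the firm-count range, is why the adoption factor is the one worth researching.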

Sanity Check

From a different angle — legal-tech spend per small firm.

Small firms spend ~$5,000-$15,000/year on legal tech total (case management, time tracking, billing, etc.). A $3,600/year AI intake subscription would be roughly 25-70% of that total tech spend. At that ratio, it's a big ask — suggesting adoption might skew toward my LOW end (10%) in the short term unless the value is very clearly ROI-positive.

This cross-check says: TAM likely closer to $25-50M than $70M+ in the realistic steady state. Possibly less in early years.

Decision Implications

Your decision thresholds:

  • TAM <$50M: skip.
  • TAM >$200M: pursue.
  • TAM $50-200M: depends on other factors.

Your estimated TAM ($22-73M) puts you near the bottom of your threshold range. The decision is ambiguous from TAM alone.

What this means:

  • Pure US small-firm market is probably insufficient to pursue if you need VC-scale outcomes ($1B+ valuation).
  • It CAN work as a bootstrap / lifestyle business: 5-10% share of the realistic $25-50M TAM is roughly $1-5M ARR.
  • To cross the $200M 'definitely pursue' threshold, you'd need to expand: (a) include medium firms (6-20 attorneys), (b) international markets, (c) higher ACV with additional features.

If You Need Better Data

The single most valuable thing to measure is actual willingness-to-pay among small law firms at $300/mo. Do 20 customer interviews. That pins down the 10-25% adoption range to within 5-10 percentage points. Nothing else is worth researching until you narrow that factor.

Secondary: the 40% small-firm share of lawyers is an estimate. ABA data can pin that to ±2%. Low cost, low priority.

Key Takeaways

  • US TAM: $22-73M. Likely $25-50M realistically. Below your 'definitely pursue' threshold, borderline on your 'skip' threshold.
  • Willingness-to-pay at $300/mo is the crux factor. 20 customer interviews will tell you more than any market report.
  • Consider broadening scope. Medium firms + international + higher ACV is how this category crosses $200M TAM.

Common use cases

  • Market sizing (how big is the TAM for X?)
  • Cost estimation before building / committing
  • Valuing unknown risks or opportunities
  • Quick sanity-checking a claim or number someone told you
  • Interview questions / case prep
  • Evaluating 'is this worth exploring?' before deep research
  • Anywhere you need a defensible number in 5 minutes, not 5 weeks

Best AI model for this

Claude Sonnet 4.5 or Opus 4. Estimation decomposition benefits from reasoning chains. Mid-tier and above.

Pro tips

  • Decompose to 3-5 factors. Fewer = sloppy. More = false precision.
  • Use the 'times/divides-by-10' rule: if each factor is accurate within 3x, the product is accurate within ~10x. Good enough for most decisions.
  • Always provide a RANGE, not a point estimate. The range width tells you which assumptions matter.
  • When stuck on a factor, ask: 'is it closer to 1, 10, 100, 1000?' — orders of magnitude beat vague guesses.
  • Sanity-check by estimating from a second angle. If both approaches land within 3x, you're probably right. If not, find your mistake.
  • Fermi estimates expire. Re-run quarterly for recurring numbers — the factors shift.
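The 'within 3x per factor → within ~10x product' heuristic in the tips above can be checked with a quick Monte Carlo: draw each factor's multiplicative error uniformly in log space between 1/3x and 3x, multiply four factors, and count how often the product stays within 10x. An illustrative simulation, not a proof (the error model is our own assumption):

```python
import math
import random

random.seed(42)

def within_10x_rate(n_factors: int = 4, trials: int = 100_000) -> float:
    """Fraction of trials where the product of per-factor errors stays within 10x."""
    hits = 0
    for _ in range(trials):
        # Each factor's multiplicative error is uniform in log space on [1/3, 3].
        log_err = sum(random.uniform(-math.log(3), math.log(3))
                      for _ in range(n_factors))
        if abs(log_err) <= math.log(10):
            hits += 1
    return hits / trials

print(f"Within 10x: {within_10x_rate():.0%}")  # typically ~93% for 4 factors
```

Under this error model, four factors each known to within 3x keep the product within 10x over 90% of the time — consistent with the heuristic, though real-world factor errors are often correlated and fatter-tailed.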

Customization tips

  • For any Fermi estimate, verbalize your decomposition first BEFORE estimating factors. Bad decomposition = bad estimate no matter how good the factors.
  • When you express ranges, a 3x spread per factor is fine. A 10x spread means you don't actually know that factor and should research it.
  • Track your Fermi estimates over time. Compare predictions to actuals. Over ~10 estimates you'll learn your calibration bias (most people are systematically over- or under-optimistic).
  • Fermi is ROUGH. Never present a Fermi estimate without the range — people will treat it as precision.
  • For recurring questions (quarterly market sizing, monthly cost forecasts), re-run Fermi each time. Don't let old estimates become stale anchors.

Variants

Market Sizing Mode

Specifically for TAM/SAM/SOM questions. Uses top-down + bottom-up convergence.

Cost Estimation Mode

For 'how much will X cost to build/run/launch.' Includes hidden-cost checklist.

Interview / Case Mode

For case-interview-style estimation questions. Handles 'number of piano tuners in Chicago' classics.

Frequently asked questions

How do I use the Fermi Estimation Coach prompt?

Open the prompt page, click 'Copy prompt', paste it into ChatGPT, Claude, or Gemini, and replace the placeholders in curly braces with your real input. The prompt is also launchable directly in each model with one click.

Which AI model works best with Fermi Estimation Coach?

Claude Sonnet 4.5 or Opus 4. Estimation decomposition benefits from reasoning chains. Mid-tier and above.

Can I customize the Fermi Estimation Coach prompt for my use case?

Yes — every Promptolis Original is designed to be customized. Key levers: keep the decomposition to 3-5 factors (fewer is sloppy, more is false precision), and lean on the 'times/divides-by-10' rule — if each factor is accurate within 3x, the product is accurate within ~10x, which is good enough for most decisions.
