⚡ Promptolis Original · Business & Strategy
🎯 Go-to-Market Stress Test
Finds the single load-bearing assumption in your GTM plan most likely to be wrong — and the $500 experiment to test it before you burn $500K.
Why this is epic
Most GTM reviews give you 40 polite suggestions. This gives you ONE brutal diagnosis — the specific assumption that, if wrong, collapses the whole plan.
It translates every risk into a cheap, time-boxed experiment. Not 'do more research' — an actual test you could run next Tuesday for under $1,000.
It distinguishes between assumptions you *believe* you've validated and assumptions you've actually tested. Founders routinely confuse the two, and that's where money dies.
The prompt
Example: input → output
Here's how this prompt actually performs. Real input below, real output from Claude Opus 4.
Startup/product: Lattica — an AI copilot for commercial real estate brokers that auto-drafts offering memos (OMs) from raw property data. $400/month per broker seat.
Target customer: Mid-market CRE brokerages (20-200 brokers), specifically investment sales teams in multifamily and industrial.
GTM plan: Land via outbound to brokerage COOs/heads of operations (not individual brokers). 3-person SDR team, 80 calls/day each, targeting 8% meeting-book rate, 25% meeting-to-pilot rate, 60% pilot-to-paid rate. 6-month paid pilots at 10-seat minimum ($4k/month), expand to firm-wide in year 2. Goal: 40 firms signed in 12 months, $4M ARR. LinkedIn content + 2 industry conferences as top-of-funnel support.
Key numbers: Projected blended CAC $18k per firm. ACV $48k year 1, $140k year 2 at full expansion. 18-month payback. Gross margin 82%. $3.2M raised, 22-month runway, burning $145k/month.
What the founder is most worried about: Whether brokers will actually use it daily vs. treating it as shelf-ware after the pilot.
What the founder thinks is already validated: (1) Brokers hate writing OMs — confirmed in 60 discovery calls. (2) COOs are the right buyer — all 3 design partners bought this way. (3) $400/seat is the right price — design partners didn't push back. (4) Output quality is good enough — 4 of 5 design-partner brokers said OMs are 'better than what our analysts produce.'
GTM Stress Test: Lattica
The Diagnosis
The single assumption most likely to be wrong: that the COO is actually the economic buyer for a tool whose value accrues to individual brokers who are usually 1099 contractors with their own P&Ls.
You have three design partners who bought this way, and you're treating that as validation. It isn't — it's a sample biased toward the kind of COO who enjoys piloting tech, which is roughly the 5% of CRE brokerage COOs who will ever take your call. The structural problem: in most mid-market CRE firms, brokers split commissions 50/50 or 60/40 with the house and pay for their own tools out of the broker side. A COO pushing a $400/seat mandate onto 40 brokers is picking a fight with 40 revenue-generating contractors. That fight doesn't happen at scale. Your outbound motion assumes it will.
Why This Assumption Is Fragile
- Mechanism of failure: SDRs book meetings with curious COOs → COO loves the demo → COO says 'let me check with the senior brokers' → 2 of 10 senior brokers object ('I have an analyst for this,' 'I don't trust AI with my deals') → COO drops it. You never see the failure in your funnel because it looks like a stalled deal, not a structural no.
- Evidence you'd expect if true: Your design partners should look like the typical target — they don't. Check: were any of your three design-partner COOs former brokers themselves? Did they mandate adoption or make it opt-in? How many of their brokers actually logged in weekly after month 2?
- Math that breaks: Your 25% meeting-to-pilot rate assumes the COO can unilaterally buy. If the real rate is 8% (roughly what top-down software sold into fragmented-agent businesses has historically converted at; see early Compass, VTS, and Buildout data), your CAC balloons from $18k to ~$56k. At $48k Y1 ACV, you don't recover CAC inside year 1, and you don't have the runway to reach year 2 expansion.
- Confidence it IS wrong: 72%. The design-partner signal is real but small-sample and self-selected. The structural economics of broker-as-contractor are not.
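The CAC math above reduces to one line of arithmetic. A minimal sketch (the numbers come from the example plan; the assumption that CAC scales inversely with the single conversion rate that changes, holding spend per booked meeting constant, is mine):

```python
# Hedged sketch: CAC sensitivity to one funnel stage converting worse than planned.
planned_cac = 18_000   # projected blended CAC per firm (from the plan)
planned_rate = 0.25    # assumed meeting-to-pilot conversion
actual_rate = 0.08     # pessimistic historical rate for top-down motions

# Holding spend per booked meeting constant, CAC scales inversely
# with the conversion rate that changes.
actual_cac = planned_cac * (planned_rate / actual_rate)
print(f"CAC at {actual_rate:.0%} conversion: ${actual_cac:,.0f}")  # → CAC at 8% conversion: $56,250
```

Swapping in your own funnel numbers takes seconds, which is the point: the plan survives or dies on one ratio.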
The 'Believed vs. Validated' Audit
| Claim | Status | Why |
|---|---|---|
| Brokers hate writing OMs | Believed, not validated. | 60 discovery calls measure stated preference, not willingness to change behavior. Brokers also 'hate' CRM data entry and use it anyway when forced. |
| COOs are the right buyer | Weakly validated. | 3 design partners ≠ a repeatable motion. Need to see the 4th-10th close with the same buyer persona before believing this. |
| $400/seat is the right price | Not validated. | Design partners didn't push back because pilots were discounted/free-ish and paid by the house. You haven't tested price against a broker paying from their own split. |
| Output quality is good enough | Believed, not validated. | 'Better than our analysts' is a compliment in a demo. The validation signal is: does the broker send the AI-drafted OM to a client without rewriting it? Measure that in the design partners now. You probably haven't. |
The Cheapest Experiment
Run a 'broker-pays' pilot at one non-design-partner firm.
- What you run: Get a warm intro to one mid-market brokerage. Offer 10 brokers individual access at $300/month, billed to *them*, not the house. No COO mandate. 60-day commitment.
- Cost: ~$2,500 and 3 weeks to set up, plus up to ~$8k in discounted seats if you refund non-users.
- Falsifies the assumption if: Fewer than 4 of 10 brokers sign up voluntarily, OR more than half churn after 60 days. That tells you broker-level willingness-to-pay is the actual gate, and your top-down motion is solving for the wrong buyer.
- Strengthens it if: 7+ brokers sign up and 5+ renew. Then COO-led deals are a distribution accelerator, not a necessity, and your funnel math can survive.
- What you do with each outcome: If falsified, you have two choices — drop the price to something a broker will expense without thinking ($99/seat) and rebuild the model, or pivot to selling the OM output as a per-document service ($50/OM) to sidestep seat economics entirely. If strengthened, double down on outbound and raise the seat price — you have more pricing power than you think.
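The pass/fail thresholds in the bullets above are worth writing down before the pilot starts, so nobody relitigates them afterward. A hypothetical helper (the `pilot_verdict` name and the 'ambiguous' middle band between the two thresholds are my additions):

```python
def pilot_verdict(signups: int, renewals: int) -> str:
    """Classify a 10-broker, broker-pays pilot using the test-design thresholds."""
    # Falsified: fewer than 4 of 10 sign up, OR more than half churn by day 60.
    if signups < 4 or renewals < signups / 2:
        return "falsified"      # broker-level willingness-to-pay is the real gate
    # Strengthened: 7+ voluntary signups AND 5+ renewals.
    if signups >= 7 and renewals >= 5:
        return "strengthened"   # COO-led deals are an accelerator, not a necessity
    return "ambiguous"          # between thresholds: extend the pilot or rerun it

print(pilot_verdict(3, 2))   # falsified
print(pilot_verdict(8, 6))   # strengthened
```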
Two Runner-Up Assumptions
1. The 8% meeting-book rate on cold outbound to brokerage COOs. CRE COOs are notoriously hard to reach and famously allergic to SDR sequences. Test: run a 2-week, 800-call sprint against a clean list before you hire SDRs 2 and 3. If you're under 3%, your whole sales capacity model is off by more than 2x.
2. Year 2 expansion from 10-seat pilot to firm-wide. You're projecting nearly 3x ACV expansion ($48k to $140k). Every B2B SaaS company in history has over-projected this. Test: add one question to every pilot kickoff — 'what specifically would have to be true in month 6 for you to roll this to the whole firm?' If the answer is vague across 5 pilots, your Y2 revenue plan is fiction.
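For the call sprint in point 1, it's worth checking that 800 calls is actually enough data to tell the planned 8% apart from the 3% failure threshold. A rough sketch using a normal approximation to the binomial (the 2-standard-deviation separation rule and the `sprint_power` helper are my assumptions, not the source's):

```python
import math

def sprint_power(calls: int, planned: float, floor: float) -> bool:
    """Rough check: can this sample size separate the planned rate from the floor?"""
    mean_gap = calls * (planned - floor)                   # expected gap in meetings booked
    spread = (math.sqrt(calls * planned * (1 - planned))   # std dev at the planned rate
              + math.sqrt(calls * floor * (1 - floor)))    # std dev at the floor rate
    return mean_gap > 2 * spread                           # gap clears ~2 combined std devs

print(sprint_power(800, 0.08, 0.03))  # True: an 800-call sprint can separate the rates
print(sprint_power(100, 0.08, 0.03))  # False: a 100-call sample is too noisy to conclude much
```

The design choice here is deliberate crudeness: a founder deciding whether to hire SDRs 2 and 3 needs a yes/no on sample size, not a full power analysis.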
The Uncomfortable Question
If your three design partners canceled tomorrow, at how many of them would a senior broker fight to keep the product?
If the answer is 'I don't know' or 'probably the COO would make that call,' you don't have product-market fit with the user — you have pilot-market fit with a buyer. Those are different things, and only one of them compounds.
Common use cases
- Pre-launch GTM review before a Series A raise
- Stress-testing a new product line inside an established company
- Evaluating a pivot before committing engineering resources
- Sanity-checking a category-creation play
- Reviewing a co-founder's deck before you wire them money
- Killing your own darlings before a board meeting does it for you
- Comparing two GTM options by running each through the test
Best AI model for this
Claude Opus 4 or GPT-5 Thinking. This task rewards deep causal reasoning about business model dependencies — faster models tend to list generic risks instead of finding the one load-bearing beam.
Pro tips
- Paste your actual plan — slides, docs, internal memos. The more specific your inputs, the sharper the diagnosis. Vague in, vague out.
- Include the numbers you're NOT sure about. The model is better at finding fragile math when you flag your own uncertainty.
- Run it twice: once for the plan as written, once for a 'bear case' version. Compare which assumptions appear in both.
- After you get the output, ask a follow-up: 'What would have to be true for this assumption to be safe?' — that inverts the analysis usefully.
- Share the experiment designs with your team before debating them. The exercise of picking one to run is more clarifying than the plan itself.
- Don't argue with the diagnosis on the first read. Sit with it for 24 hours. The assumptions that sting are usually the correct ones.
Customization tips
- Paste your actual plan, not a sanitized version. The diagnosis gets sharper with messy, real numbers — rounded projections hide the fragility.
- In the 'what you think is validated' section, list EVERYTHING, including things that feel obvious. Half the value is watching the model reclassify your 'obvious' items as 'believed.'
- If you're pre-revenue, replace CAC/LTV with whatever proxy numbers you're using — the model will still find the load-bearing assumption, it just shifts from unit economics to demand assumptions.
- Run the output past your most skeptical advisor or co-founder before dismissing any of it. The assumptions that feel most unfair are often the ones the model got right.
- Save the output and re-run the same prompt 90 days later with updated numbers. Watching which assumptions moved from 'believed' to 'validated' (or didn't) is the real GTM progress bar.
Variants
Category Creation Mode
Reweights the analysis toward education/awareness assumptions, which dominate risk in new categories.
Enterprise Sales Mode
Focuses on buyer/user split, procurement friction, and champion risk instead of consumer funnel math.
Pivot Diagnostic
Compares the new GTM against the old one and identifies which failure modes are actually resolved vs. just renamed.