⚡ Promptolis Original · Business & Strategy
🎯 MVP Scope Definer
Ruthless MVP scope reduction — concierge MVP + fake-it strategies grounded in Lean Startup + Inspired + actual successful MVP patterns.
Why this is epic
Most 'MVP Scope Definer' prompts online produce generic, template-quality output. This one is structured like production-grade prompt engineering — role definition, principles, input schema, output format, auto-intake.
Research-backed: grounded in The Lean Startup, Inspired, and patterns drawn from real, successful MVPs.
Designed for practitioner-level depth, not generalist skim. Works across Claude, ChatGPT, and Gemini with consistent quality.
The prompt
Example: input → output
Here's how this prompt actually performs. Real input below, real output from Claude Opus 4.
Business idea: AI-powered fitness coach that designs weekly workout programs based on user goals + equipment + history
Target customer: Intermediate lifters (1-3 years experience) who are self-directed but want optimization
Core value prop: Replace the $100-200/month personal trainer with $15/month AI that designs equivalent-quality programs
Current feature list: (1) User onboarding + goal questionnaire, (2) Workout designer AI, (3) Exercise video library, (4) Progress tracking + charts, (5) Weight/rep logging, (6) Mobile app, (7) Social features (friends, competition), (8) Nutrition recommendations, (9) Integration with wearables (Apple Health, Garmin), (10) Community forum, (11) Payment/subscription
Timeline: 6 weeks to MVP
Technical capability: Me (full-stack dev) + my cofounder (product/design, no code)
Core hypothesis: Intermediate lifters will pay $15/month for AI-designed workout programs that produce similar strength/hypertrophy gains as paid coaching
Scope my MVP.
Your MVP Scope Definition — AI Fitness Coach
The Core Hypothesis
Your hypothesis as written: 'Intermediate lifters will pay $15/month for AI-designed workout programs that produce similar gains as paid coaching.'
This is actually THREE hypotheses stacked:
1. H1: People will engage with AI-designed programs (vs. copying programs from the internet or books)
2. H2: The programs produce results comparable to paid coaching
3. H3: People will pay $15/month for this value
The rest of this analysis assumes you focus on H1 first. H2 requires months of user training data; H3 requires H1 to be true. Don't try to test all three with one MVP.
Your actual MVP hypothesis: Intermediate lifters will USE AI-designed programs consistently for 4+ weeks when given to them (measuring: workout completion rate, weekly program updates, willingness to continue).
Payment test comes in V2.
What to Build (Minimum)
Core MVP — 2 features:
1. Program generation via chat interface (no account required initially)
- User inputs: current experience level, goal (strength / hypertrophy / fat loss), days per week available, equipment
- Output: complete 4-week program (exercises, sets, reps, weight progression logic)
- Backend: a GPT-4/Claude API call with a custom prompt (you already wrote one in a previous pack; use the Fitness Workout Programming Pack prompt as the engine)
- Frontend: simple chat interface OR form + text output
2. Week-over-week program updates
- After user reports how the week went (sets/reps completed at what weights), AI adjusts next week
- User reports via simple form: 'Monday — completed 4x8 @ 155 bench, felt RPE 8' etc.
That's it. Everything else is cut.
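Both features are thin glue code around an LLM call. Here is a minimal Python sketch of the two pieces; the prompt template, intake field names, and the check-in regex are illustrative assumptions, not part of the actual pack:

```python
import re

def build_program_prompt(intake: dict) -> str:
    """Assemble the program-generation prompt from intake-form fields.

    The template text here is a placeholder; in practice you would use
    the full Fitness Workout Programming Pack prompt as the base.
    """
    return (
        "Design a complete 4-week program (exercises, sets, reps, "
        "weight-progression logic) for this lifter:\n"
        f"- Experience: {intake['experience']}\n"
        f"- Goal: {intake['goal']}\n"
        f"- Days per week: {intake['days_per_week']}\n"
        f"- Equipment: {intake['equipment']}\n"
    )
    # Send the returned string as a user message to your LLM API of choice.

# Check-in lines look like: "Monday — completed 4x8 @ 155 bench, felt RPE 8"
CHECKIN = re.compile(
    r"(?P<day>\w+) — completed (?P<sets>\d+)x(?P<reps>\d+) @ "
    r"(?P<weight>\d+) (?P<lift>\w+), felt RPE (?P<rpe>\d+)"
)

def parse_checkin(line: str) -> dict:
    """Turn one free-text weekly report line into structured data
    that can be fed back to the AI for next week's adjustment."""
    m = CHECKIN.match(line.strip())
    if not m:
        raise ValueError(f"unrecognized check-in: {line!r}")
    d = m.groupdict()
    for key in ("sets", "reps", "weight", "rpe"):
        d[key] = int(d[key])
    return d
```

In a concierge MVP you run these by hand; automating them later is Option B's Week 2-3 work.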
What to Cut (Scope Creep)
From your 11-feature list:
- ❌ Exercise video library — YouTube exists; link to it
- ❌ Progress tracking + charts — spreadsheet for now; charts after V1
- ❌ Mobile app — web-responsive site works. Native app is V3+ territory.
- ❌ Social features — distraction from core validation
- ❌ Nutrition recommendations — separate problem, separate MVP
- ❌ Wearable integrations — delightful but not core. V2.
- ❌ Community forum — don't build; point to r/fitness
- ❌ Payment/subscription — charge nothing for first 30 users; validate usage first
- ❌ User accounts + onboarding — first 10 users get personalized response via email; MVP V1 doesn't even need accounts
- ❌ Weight/rep logging — a shared Google Sheet per user is fine; in-app logging comes later
Fake-It Strategies
Concierge MVP (HIGHLY RECOMMENDED — do this BEFORE writing code):
- Take your first 10 users. Do their program generation MANUALLY using the pack prompt + your own expertise
- Email each program as a PDF
- Ask for weekly check-ins via email
- Do this for 4 weeks
Why this is brilliant:
- You learn EXACTLY what questions users ask, what they're confused about, what's missing
- You test the core hypothesis (do they actually use programs?) WITHOUT building software
- You test pricing (ask them: 'If this were software, what would you pay?') without payment infrastructure
- You refine the prompt based on real user feedback
- 2 weeks of effort vs 6 weeks of coding
Wizard of Oz MVP (if you want to test with more users):
- Build a simple landing page + form where user enters their requirements
- On submission, YOU (manually) run the prompt and email the response
- User thinks it's automated; you spend 10 min per user
- Test with 30-50 users, validate the product concept before automating
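Before committing to the Wizard of Oz route, sanity-check the manual workload implied by "10 min per user." A back-of-envelope sketch using the numbers above:

```python
def weekly_manual_hours(users: int, minutes_per_user: int = 10,
                        checkins_per_week: int = 1) -> float:
    """Hours per week you personally spend faking the automation."""
    return users * checkins_per_week * minutes_per_user / 60

# 50 users with one weekly check-in each is about 8.3 hours of manual work per week
print(round(weekly_manual_hours(50), 1))  # → 8.3
```

That is sustainable for a validation sprint, which is exactly why the Wizard of Oz approach works at 30-50 users but not at 500.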
Success Metrics
For 10 concierge-MVP users over 4 weeks:
- Program completion rate: how many users completed 80%+ of the workouts? Target: 70%+.
- Week 4 retention: how many completed the full 4-week program? Target: 60%+.
- Willingness to continue: 'Would you want another 4-week program?' Target: 50%+ say yes.
- Quality feedback: unprompted suggestions on what would improve the program. Count them.
At 70%+ completion and 50%+ willingness-to-continue: hypothesis H1 is validated. Move to V2.
At 40-70%: partial validation. Iterate on the prompt; try another 10 users.
Below 40%: core hypothesis is weaker than expected. Interview those who dropped off.
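The three thresholds above are easy to compute from a simple per-user outcome log. A sketch, with made-up field names standing in for whatever your check-in sheet actually records:

```python
def evaluate_cohort(users: list[dict]) -> dict:
    """Score a concierge-MVP cohort against the targets above.

    Each user record is assumed to look like:
    {"workouts_done": 14, "workouts_assigned": 16,
     "finished_week4": True, "wants_another_block": True}
    """
    n = len(users)
    # A user "completed" the program if they did 80%+ of assigned workouts.
    completed = sum(
        u["workouts_done"] / u["workouts_assigned"] >= 0.8 for u in users
    )
    retained = sum(u["finished_week4"] for u in users)
    willing = sum(u["wants_another_block"] for u in users)
    return {
        "completion_rate": completed / n,   # target: 0.70+
        "week4_retention": retained / n,    # target: 0.60+
        "willingness": willing / n,         # target: 0.50+
    }
```

With only 10 users these are noisy point estimates, so treat them as go/iterate/kill signals, not precise measurements.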
Kill Criteria
Kill this MVP if after 4 weeks:
- Completion rates below 30% (people don't actually use what you give them)
- All users say programs are 'too generic' or 'not adapting enough' (prompt can't deliver value)
- Feedback says 'I'd rather just follow 5/3/1' (alternatives are good enough)
In kill scenario, you've spent 2-4 weeks learning rather than 6 months building. Cheap failure = feature, not bug.
4-8 Week Build Plan
Option A: Pure concierge MVP (2 weeks + 4 weeks of user workouts):
Week 1:
- Refine the fitness programming prompt (you have the Pack — use that as baseline)
- Create user intake form (Google Form or Typeform, 5 min to build)
- Recruit 10 users (LinkedIn post, r/fitness post, gym board)
Week 2:
- Run intake with 10 users
- Manually generate programs via prompt + your expertise
- Email PDF programs
Weeks 3-6 (user workout period):
- Weekly check-ins with each user
- Update programs based on feedback
Weeks 7-8 (analysis + decision):
- Measure completion rates + willingness to continue
- Decide: iterate, pivot, or kill
Option B: Simple chat interface (6 weeks build, less personalized):
Week 1: Design + prompt refinement
Week 2: Build chat interface (React + Next.js calling an LLM API)
Week 3: Build weekly-update form (user reports, then AI responds)
Week 4: Testing with yourself + 3 friends
Week 5: Launch to 30 users via recruiting channels
Week 6: Observe usage, measure completion rates
Recommendation: Do Option A first (concierge MVP).
Reasoning: you're two people, full-stack + product. You can code this, but have you done customer discovery? Concierge MVP IS customer discovery. Code after you know what to build. Option A teaches more; Option B just ships code.
Key Takeaways
- Your hypothesis is actually 3 hypotheses stacked. Test H1 (engagement) first, H3 (payment) later.
- MVP = 2 features: program generation + week-over-week updates. Cut everything else on the list.
- Do CONCIERGE MVP before writing code. Manually generate programs for 10 users via email. Learn more in 2 weeks than coding teaches in 6 weeks.
- Fake what you can fake. Video library = YouTube links. Progress tracking = spreadsheets. Payments = free first 30 users.
- Success metrics: 70%+ program completion, 50%+ willingness to continue after 4 weeks.
- Kill criteria: <30% completion or alternatives 'good enough'. Cheap failure beats expensive failure.
- Skip payment/subscription entirely in V1. Charge $0 for first 30 users. Validate usage first.
- User accounts + onboarding are often NOT MVP features. First 10 users get manual white-glove treatment via email.
Common use cases
- Professionals who need structured thinking on this topic, not vague advice
- Practitioners making specific decisions with real stakes
- Anyone tired of generic AI responses to domain-specific questions
- Users wanting depth over breadth — one thing done well, not 10 things done poorly
Best AI model for this
Any capable LLM handles the scoping exercise; Claude Opus 4 is strongest for nuanced hypothesis breakdown.
Pro tips
- Paste your real situation (with specific numbers and context), not generic 'help me with X' framing. The prompt rewards specificity.
- If the prompt asks auto-intake questions, answer them fully before expecting output — incomplete inputs produce incomplete outputs.
- For ambiguous situations, run the prompt twice with different framings and compare the outputs; the contrast often reveals the right path.
- Save the outputs you value. Iterate on them across sessions rather than re-running from scratch.
- Pair with a human expert for high-stakes decisions — the prompt is a first-draft tool, not a final authority.
- Share what worked back with us (promptolis.com/contact). Helps us refine future versions.
- The research citations inside the prompt are real — look them up if a specific claim matters for your decision.
Customization tips
- For marketplace MVPs (two-sided platforms), the chicken-and-egg problem is the core challenge. Concierge both sides first: manually match buyers and sellers via email/calls for first 10-20 transactions. Proves there's actual liquidity demand before building matching infrastructure.
- For hardware MVPs, the logic inverts: you usually can't fake it with software. You still can: 3D-print a prototype before investing in tooling, hand-assemble a kit version for your first 10 users, and validate demand with a crowdfunding campaign before a production run. Hardware MVPs often pair with pre-orders.
- For B2B SaaS MVPs, white-glove onboarding (you personally onboard first 5-10 customers, even with expensive enterprise pricing) beats self-service initially. Customers accept the manual process in exchange for high-touch support.
- For consumer apps with network effects, single-player value has to work first. Don't try to launch a social feature to 10 users. Launch the core utility (journal, tracker, tool) that works for 1 person. Network value layers in V2+.
- For developer tools, open-source the core first and charge for hosted/enterprise later. Stripe's seven-lines-of-code MVP grew into one of the biggest developer platforms by showing 'it just works' before building admin panels.
- For e-commerce product MVPs, test with a limited product run (50-100 units) + direct-to-consumer via Shopify + Meta ads. Test willingness to buy at your target price. Don't manufacture 5,000 units of an untested product.
- For education/course MVPs, deliver the first cohort entirely manually: live Zoom sessions, manually graded assignments, email communication. Record sessions for future async version. Tests whether content works before building platform.
- If you find yourself listing 10+ features as 'must have for MVP,' that's the signal that scope discipline has collapsed. Force-rank them 1-10, cut everything below #2, and add features back only after #1 and #2 are validated and producing learning.
Variants
Default
Standard flow for most users working on this topic
Beginner
Simplified output for users new to the domain — less jargon, more foundational explanation
Advanced
Denser output assuming practitioner-level baseline knowledge
Short-form
Compressed output for quick decisions, under 500 words
Frequently asked questions
How do I use the MVP Scope Definer prompt?
Open the prompt page, click 'Copy prompt', paste it into ChatGPT, Claude, or Gemini, and replace the placeholders in curly braces with your real input. The prompt is also launchable directly in each model with one click.
Which AI model works best with MVP Scope Definer?
Any capable LLM handles the scoping exercise; Claude Opus 4 is strongest for nuanced hypothesis breakdown.
Can I customize the MVP Scope Definer prompt for my use case?
Yes — every Promptolis Original is designed to be customized. Key levers: paste your real situation (with specific numbers and context) rather than generic 'help me with X' framing, and answer any auto-intake questions fully before expecting output; incomplete inputs produce incomplete outputs.
Explore more Originals
Hand-crafted 2026-grade prompts that actually change how you work.
← All Promptolis Originals