⚡ Promptolis Original · Decisions & Reasoning
⚖️ Steelman Devil's Advocate
The strongest possible case against your position. Honest, rigorous, no straw-men. Exposes your hidden assumptions and the one most likely to be wrong.
Why this is epic
Rejects both sycophancy and straw-manning. Most AI either agrees with you (sycophant) or argues a weak version of the counter-position (straw-man). This does neither.
Reveals hidden premises. Every belief rests on assumptions you don't realize you're making. This prompt identifies 3-5 per run — often at least one surprises you.
Gives you a cheap experiment. 'The question that would settle this' is frequently the most actionable single insight from the whole exercise.
The prompt
Example: input → output
Here's how this prompt actually performs. Real input below, real output from Claude Opus 4.
My position: Our engineering team should rewrite our monolithic Rails app into microservices. We've been maintaining this codebase for 6 years and deploys are slow, onboarding is hard, and testing is brittle. Why I hold it: Microservices give us independent deployment, clearer ownership boundaries, and let each team pick the best tech for their domain. Every mature tech company has done this transition — Netflix, Uber, Spotify all benefited. Our current deploy takes 45 minutes and touches every part of the system. What would change my mind: If microservices caused more operational complexity than they saved. But I've read extensively and I don't think that's a real risk for a team our size (60 engineers). Context: VP of Engineering at a Series C SaaS, 60 engineers, 4 product teams, 5M active users.
The Strongest Case Against Your Position
The Charitable Steelman of the Opposite View
The counter-position isn't "microservices are bad" — it's that your specific organization is statistically unlikely to execute this migration successfully in under 3-5 years, and that during that time you will ship dramatically less customer value than if you had invested the same engineering time in modularizing the monolith.
The steelman starts with scale. The companies you cite — Netflix, Uber, Spotify — migrated to microservices when they had 500+ engineers, not 60. At your scale, the monolith is not actually the bottleneck; coordination overhead among 4 product teams is. Microservices don't solve coordination overhead; they move it from the code level to the network level, where it's often harder to debug and easier to hide.
Second: the 45-minute deploy is a symptom, not a cause. It's likely driven by insufficient test parallelization, an unoptimized CI/CD pipeline, and a bundling process that hasn't been touched in 4 years. You could plausibly cut it to a few minutes with 2 weeks of focused effort. Microservices deploys will eventually be faster — in 18-24 months, after you've rebuilt your observability stack, service discovery, service mesh, distributed tracing, circuit breakers, and on-call rotations. During that 18-24-month window, your deploys will be SLOWER than today.
Third: every industry retrospective written between 2022 and 2025 from similarly-sized companies (see: Basecamp "back to the monolith", Uber's "monorepo strategy", Amazon Prime Video's cost-crisis rewrite back to monolith) tells the same story: the productivity gain of microservices shows up around 200-400 engineers, and is often outweighed by operational complexity below that threshold. You're 3-5x too small for the transition to be net-positive on its own merits. The fact that "every mature tech company has done this" is survivor bias — it ignores the companies that tried and failed, and the companies that stayed monolithic and succeeded (Shopify, Stack Overflow).
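The deploy-time claim in the second point lends itself to a back-of-the-envelope model. The 10/35-minute split below is hypothetical (a real pipeline would need to be measured first), but it shows why parallelizing the test stage, not rearchitecting the app, dominates the win:

```python
def deploy_time(serial_minutes: float, test_minutes: float, shards: int) -> float:
    """Wall-clock deploy time if the test stage is split across parallel shards.

    serial_minutes: stages that can't be parallelized (build, migrate, rollout)
    test_minutes:   total single-threaded test runtime
    shards:         number of parallel test shards (assumes an even split)
    """
    return serial_minutes + test_minutes / shards

# Hypothetical split of a 45-minute deploy: 10 min serial work, 35 min tests.
print(deploy_time(10, 35, 1))  # 45.0 minutes today
print(deploy_time(10, 35, 8))  # 14.375 minutes with 8 test shards
```

The serial portion caps the gain, which is exactly why measuring the real stage breakdown comes before any architectural decision.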
Your Hidden Assumptions
1. You assume the monolith is the bottleneck to team productivity. But the evidence you've given (slow deploys, hard onboarding, brittle tests) could equally be symptoms of insufficient investment in the monolith's tooling. You haven't controlled for the possibility that the problem is under-investment in the existing system, not the system itself.
2. You assume 60 engineers is a team where microservices' coordination benefit exceeds operational cost. Industry evidence suggests this threshold is closer to 200-400. You are assuming your team is an exception without stating why.
3. You assume your engineering team has the operational expertise to run a microservices architecture safely in production. Running distributed systems is a specialized skill set. Most teams in your size range do not have it — and hiring it is hard and expensive.
4. You assume customer-facing velocity will not decrease during the migration. No company that has attempted this transition has reported stable or increased feature velocity during the 18-24 month migration window. Most report a 30-50% drop.
5. You assume the migration will be executed cleanly and reach completion. Based on publicly available data, 60-70% of microservices migrations started at companies your size either stall halfway (leaving you with a distributed monolith, the worst of both worlds) or are reverted.
The One Assumption Most Likely to Be Wrong
Assumption #1: that the monolith is the bottleneck.
This is the weakest premise because it's empirically testable, and testing it is cheap. Before committing to an 18-month rewrite, spend 4 weeks investing in the CURRENT monolith's observability and deployment tooling. Specifically:
- Instrument CI/CD to measure the actual bottleneck (test runtime? dependency resolution? container build?)
- Parallelize the slowest 5 test files
- Extract the 3 slowest modules from the bundling step
- Measure onboarding time today vs. after a 2-week "onboarding guide" effort
If after 4 weeks you haven't cut deploy time by at least 50% and onboarding friction by at least 30%, THEN the monolith might genuinely be the issue. If you do hit those improvements, the case for microservices collapses — you've found the real problem and it was tooling, not architecture.
The evidence that would disconfirm your assumption: those 4 weeks of focused tooling work failing to produce material improvement. You should seek that disconfirmation before committing to the migration.
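Step one of that 4-week experiment can be sketched in a few lines. The stage names and timings below are hypothetical stand-ins for what you'd export from your CI provider's logs or API:

```python
from collections import defaultdict

# Hypothetical per-build stage timings in minutes, exported from recent CI runs.
builds = [
    {"checkout": 1.0, "deps": 6.0, "tests": 30.0, "container_build": 8.0},
    {"checkout": 1.2, "deps": 5.5, "tests": 32.0, "container_build": 7.5},
    {"checkout": 0.9, "deps": 6.3, "tests": 29.0, "container_build": 8.2},
]

# Sum each stage across builds, then average to find the dominant cost.
totals = defaultdict(float)
for build in builds:
    for stage, minutes in build.items():
        totals[stage] += minutes

averages = {stage: total / len(builds) for stage, total in totals.items()}
bottleneck = max(averages, key=averages.get)
print(f"bottleneck: {bottleneck} at {averages[bottleneck]:.1f} min average")
```

If the answer is "tests", the fix is parallelization and tooling, not a rewrite — which is precisely the disconfirmation test the critique asks for.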
Three Reframes You Haven't Considered
Reframe 1: Modular Monolith. The modern answer to "monolith pain" isn't microservices — it's modular monoliths. Clean internal boundaries between modules, but deployed as a single unit. Gives you 80% of the ownership clarity with 10% of the operational overhead. Shopify runs on this architecture with thousands of engineers.
Reframe 2: The Problem Isn't Architecture, It's Headcount-Per-Concern. Your 4 product teams fighting over shared deployment is a coordination problem. It can be solved with feature flags, trunk-based development, and deploy ownership rotations — all cheap, low-risk changes. Ask: would this problem disappear if deploy frequency went from weekly to daily? If yes, you have a process problem, not an architecture problem.
Reframe 3: Strangler Fig, Not Migration. If you DO want to move away from the monolith, don't rewrite it. Extract one service — the most painful or highest-growth one — and see how the operational tax looks. Most companies that do this discover the tax is higher than expected and stop at 2-3 services, keeping the monolith for everything else. That's often the correct end state, not the transition step.
Where Your Certainty Exceeds Your Evidence
You stated "I don't think [operational complexity] is a real risk for a team our size." This is exactly the kind of confident-but-unsubstantiated claim a steelman watches for.
The evidence available on this question (industry postmortems, migration reports, and quantitative studies from DORA) consistently points to the opposite conclusion for teams under 200 engineers. Your stated certainty rests on "extensive reading" — but if that reading is primarily Martin Fowler's blog circa 2015 and the Netflix microservices success stories, your sample is skewed toward pro-microservices evidence. The 2022-2025 retrospectives on failed migrations are less publicized, but that's where the honest data lives.
You should be 40-60% confident this is a good decision, not 90%+. The gap between your certainty and the evidence is where the rot usually lives.
The Question That Would Settle This
Run a 6-week experiment: extract ONE service from the monolith — the most logically independent one, with the clearest domain boundary. Deploy it. Operate it for 6 weeks. Track:
1. Feature velocity on the extracted service vs. feature velocity on equivalent monolith work.
2. Incident rate increase (you will have more — how much?).
3. Actual on-call cost to the team.
4. How long it takes a new engineer to understand the extracted service's interactions with the monolith.
If after 6 weeks feature velocity is >90% of previous, incident rate didn't spike, and on-call is manageable — proceed, cautiously. If feature velocity dropped, incidents spiked, or on-call is painful — you've gotten the cheapest possible data to reconsider.
Cost of the experiment: ~$150k of engineering time. Cost of a failed 18-month migration: $15-30M plus 2 years of competitive ground lost.
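That cost asymmetry can be made concrete with a rough expected-value sketch. The inputs are figures quoted in this critique (the ~$150k experiment, the $15-30M failure range, the 60-70% stall rate); the midpoints are illustrative simplifications that ignore the migration's upside and the experiment's false-negative risk:

```python
experiment_cost = 0.15    # $M: the ~$150k, 6-week extraction experiment
failure_cost_mid = 22.5   # $M: midpoint of the quoted $15-30M failure range
p_stall = 0.65            # midpoint of the 60-70% stall/revert rate cited above

# Expected loss from committing to the full migration without the experiment.
expected_loss_blind = p_stall * failure_cost_mid
print(f"expected loss, committing blind: ${expected_loss_blind:.3f}M")
print(f"price of the experiment:         ${experiment_cost:.2f}M")
```

Even under much more charitable assumptions about the stall rate, the experiment's price is roughly two orders of magnitude below the expected loss it insures against.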
Final Honest Read
Your position doesn't survive this critique intact. It's based on industry narratives that don't apply to your scale, assumes your team's operational capability without testing it, and underestimates the cost of the transition window. At minimum, it needs:
1. The 4-week tooling investment to rule out the "monolith tooling is under-invested" hypothesis.
2. The modular monolith option considered as a serious alternative.
3. A strangler-fig pilot before committing to a full migration.
It's possible you're right and microservices are the answer. But your current reasoning doesn't justify the confidence you have, and the cheapest move is to test the weakest assumption before spending 18 months executing.
Common use cases
- Before making a decision you feel certain about
- Preparing for a meeting where your idea will be challenged
- Writing an argumentative essay or op-ed
- Investment decisions (the stronger the conviction, the more you need this)
- Policy / strategy arguments at work
- Personal beliefs or positions you've held a while
Best AI model for this
Claude Opus 4 (honesty about uncertainty, willingness to push back). Do NOT use ChatGPT without extensive prompting — default GPT is too sycophantic for this task.
Pro tips
- The more certain you are, the more you need this. Run it specifically on positions you're unwilling to challenge yourself.
- Include 'what would change my mind' honestly. If your answer is 'nothing', you're not reasoning — and the prompt will surface that.
- Don't argue back against the output. Sit with it for 24 hours. Many users find the steelman they dismissed on day 1 compelling on day 2.
Customization tips
- Be honest about what would change your mind. If your real answer is 'nothing' — the prompt will flag that as evidence of ideology, not reasoning. That's valuable.
- Include the stakes. A low-stakes question ('should I buy this book?') gets different treatment than high-stakes ('should I marry this person?'). The prompt calibrates rigor accordingly.
- Run this on positions you're 95%+ certain about. Those are where the hidden assumptions are most likely to be wrong.
- Don't argue back in the same session. The prompt is designed for one-shot diagnosis. Sit with the output for 24 hours before responding — many users find the steelman they dismissed on day 1 compelling on day 2.
- Not for emotional questions in emotional states. This prompt is ruthless. Don't use it when you're upset — use it when you've had distance from the decision.
Variants
Philosophical Deep-Dive
Extends to 5+ counter-frames, for complex ethical or strategic questions
Dialogue Mode
Presents the critique as a back-and-forth conversation, lets you respond, then counter-critiques your responses
Ally Critique
The counter-position from the perspective of someone who WANTS you to succeed — gentler but just as sharp
Explore more Originals
Hand-crafted 2026-grade prompts that actually change how you work.
← All Promptolis Originals