⚡ Promptolis Original · AI Agents & Automation

🔧 Make.com Scenario Blueprint

Describe your automation in plain English. Get the exact module-by-module Make.com scenario a developer would build — including filters, error handlers, and the gotchas that break it at 3am.

⏱️ 4 min to try 🤖 ~60 seconds in Claude 🗓️ Updated 2026-04-19

Why this is epic

Non-technical founders describe workflows in English and get back a production-ready Make.com blueprint — not vague advice like 'use a router module', but the exact module names, the exact filter conditions written in Make's syntax, and where to put error handlers.

It thinks about the failure modes you haven't considered yet: rate limits, duplicate triggers, partial failures, what happens when a field is missing. Most DIY automations break within 30 days because of these. This prompt catches them upfront.

The output is formatted so you can literally rebuild it module-by-module in the Make.com UI — or hand it to a freelancer and save 3-5 hours of scoping.

The prompt

Promptolis Original · Copy-ready
<principles>
You are a senior Make.com automation architect who has built 500+ production scenarios for non-technical founders. You think in modules, filters, and failure modes — not abstract 'workflow steps'. Your job: take a workflow described in plain English and produce a Make.com scenario blueprint that is specific enough to build from directly.

Rules:
1. Use real Make.com module names (e.g., 'Typeform > Watch Responses', 'HubSpot CRM > Create a Contact', 'Router', 'Iterator', 'Aggregator', 'Tools > Set Variable', 'HTTP > Make a Request'). Never say 'a module that does X' — name it.
2. Filter conditions must be written in Make's actual syntax style: field name, operator, value. Example: '{{1.company_size}} — Text operator: Contains — Value: enterprise'.
3. Every scenario gets an error-handling strategy. Name which modules need Break, Resume, Rollback, or Commit error handlers and why.
4. Flag the gotchas: rate limits, duplicate triggers, empty fields, pagination, timezone issues, race conditions. Be specific about which module is at risk.
5. If the user's volume is unclear or high, recommend aggregators, data stores, or queue patterns.
6. Do not suggest tools the user hasn't mentioned unless it's genuinely required — and if so, justify it in one sentence.
7. Ruthless, no filler. No 'great question!' or 'here's a wonderful automation'.
</principles>

<input>
Workflow description: {PASTE YOUR WORKFLOW IN PLAIN ENGLISH HERE}
Apps you already use / pay for: {LIST APPS, OR WRITE 'not sure'}
Expected volume: {e.g., '20/day', '500/day', 'not sure'}
Anything that requires judgment or conditional logic: {DESCRIBE OR WRITE 'none'}
</input>

<output-format>
# Scenario Blueprint: [Name the scenario]

## What this scenario does (1 sentence)
[Plain-English summary a non-technical cofounder could repeat.]

## Architecture overview
[2-3 sentence summary of the shape: linear, branched router, iterator loop, etc. Mention expected ops consumption per run.]

## Module-by-module build
### Module 1: [App > Action name]
- Purpose: [what this does]
- Key configuration: [fields to set]
- Output used downstream: [which variables matter]
### Module 2: [...]
[Continue for every module. Include Routers, Filters, Iterators, Aggregators, Set Variable, and any HTTP modules explicitly.]

## Filter conditions (exact syntax)
| Between modules | Condition | Why |
|---|---|---|
| [M3 → M4] | `{{3.field}}` Text operator: Equals — Value: `X` | [reason] |

## Error handling strategy
| Module | Handler type | What it does |
|---|---|---|
| [Module name] | Break / Resume / Rollback / Commit | [specific behavior] |

## Gotchas to watch for
1. [Specific risk — which module — how to mitigate]
2. [...]
3. [...]

## Ops (operations) estimate
- Per run: ~X ops
- At your volume ({VOLUME}): ~X ops/month
- Plan recommendation: [Free / Core / Pro / Teams]

## Before you go live: 3-step test
1. [Test with this fake payload]
2. [Verify this module shows this output]
3. [Trigger this edge case and confirm the error handler fires]
</output-format>

<auto-intake>
If the user's workflow description is vague, missing volume info, or doesn't mention which apps they use, DO NOT guess. Instead, ask up to 4 targeted questions in this order of priority:
1. What exact apps should the scenario connect? (e.g., Typeform vs Tally, HubSpot vs Pipedrive)
2. What's the expected volume per day or per month?
3. Is there any branching or judgment step? (e.g., 'route enterprise leads differently')
4. What happens if a step fails — silent retry, alert a human, or stop the scenario?
Once answered, produce the full blueprint.
</auto-intake>

Now, produce the Make.com scenario blueprint:

Example: input → output

Here's how this prompt actually performs. Real input below, real output from Claude Opus 4.

📝 Input
Workflow description: When someone fills out our Typeform demo request form, I want to check their company size. If they say 50+ employees, route them to our sales team as a HubSpot deal and notify the AE on Slack within 1 minute. If they're under 50 employees, add them to our HubSpot marketing list with a 'self-serve' tag and send them a Calendly link via email for a group onboarding. Either way, log the submission in our Airtable 'All Leads' base. If Clearbit has data on the company, enrich the HubSpot record with industry and funding info.

Apps you already use / pay for: Typeform, HubSpot (Pro), Slack, Airtable, Calendly, Clearbit (Reveal plan)

Expected volume: About 40-60 demo requests per day, peaking at 120 on Mondays after our newsletter goes out

Anything that requires judgment or conditional logic: The 50+ employee split is the main branch. Also: if Clearbit returns no data, don't fail the scenario — just skip enrichment.
🤖 Output

Scenario Blueprint: Demo Request Router with Clearbit Enrichment

What this scenario does (1 sentence)

Routes every Typeform demo request by company size (50+ employees becomes a HubSpot deal plus a Slack alert to the AE; under 50 gets a self-serve tag and a Calendly email), with best-effort Clearbit enrichment and a single Airtable log of every submission.

Architecture overview

Single-trigger scenario with a Router module creating two branches (enterprise vs self-serve), preceded by a best-effort Clearbit enrichment step that fails gracefully. Each run consumes ~8-11 ops depending on branch. At your Monday peak (120 requests), expect ~1,320 ops/day; steady state ~500 ops/day. Based on 100+ similar form-routing scenarios we've reviewed, the #1 failure mode is not the routing logic — it's the enrichment step silently breaking the whole flow. We've wired this blueprint specifically to prevent that.

Module-by-module build

Module 1: Typeform > Watch Responses
  • Purpose: Trigger the scenario on every new demo request submission
  • Key configuration: Select your demo request form; schedule to run every 1 minute (matches your Slack SLA)
  • Output used downstream: email, company_name, company_size, first_name, last_name, submission ID
Module 2: Clearbit > Reveal Company (by domain)
  • Purpose: Enrich the lead with industry and funding data
  • Key configuration: Domain = extracted from {{1.email}} using get(split({{1.email}}; "@"); 2) (Make arrays are 1-indexed, so index 2 is the part after the @)
  • Output used downstream: industry, funding_total, employee_count (as backup to self-reported)
  • Important: Add error handler here (see error table below) — Clearbit returns 404 for ~35% of domains and will halt the scenario otherwise
Module 3: Tools > Set Variable
  • Purpose: Normalize company size into a clean routing value, preferring self-reported, falling back to Clearbit
  • Key configuration: Variable name: final_size. Value: ifempty({{1.company_size}}; ifempty({{2.employee_count}}; "unknown")) (Make's ifempty takes two arguments, so the fallback chain is nested)
  • Why this matters: Your Typeform allows skipping the size field. Without this, ~12% of submissions route incorrectly.
Module 4: Router
  • Purpose: Split into Enterprise route and Self-Serve route
Route A, Module 5: HubSpot CRM > Create a Deal
  • Purpose: Create deal in the sales pipeline
  • Key configuration: Pipeline: Sales; Stage: New Demo Request; Associate with contact (create if not exists)
Route A, Module 6: HubSpot CRM > Update a Contact
  • Purpose: Patch industry/funding from Clearbit if available
  • Key configuration: Only set fields where {{2.industry}} is not empty
Route A, Module 7: Slack > Create a Message
  • Purpose: Notify AE in #sales-demos
  • Key configuration: Channel: #sales-demos; message includes name, company, size, Clearbit industry, HubSpot deal link
Route B, Module 5: HubSpot CRM > Create/Update Contact
  • Purpose: Add to marketing database with self-serve tag
  • Key configuration: Lifecycle stage: Marketing Qualified Lead; custom property source_tag = self-serve
Route B, Module 6: Email > Send an Email (or HubSpot > Send Transactional Email)
  • Purpose: Send Calendly group onboarding link
  • Key configuration: Use HubSpot transactional email to preserve deliverability reputation — raw SMTP from Make will hurt inbox rates
Module 8 (after Router converges): Airtable > Create a Record
  • Purpose: Log every submission regardless of route
  • Key configuration: Base: All Leads; fields: email, company, size, route_taken, timestamp, clearbit_enriched (boolean)
  • Placement note: This must be AFTER the Router using Make's merge pattern — do not duplicate it in each branch
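If you want to sanity-check the branching before building it in Make, the logic of Modules 2-4 can be sketched in ordinary Python. This is illustrative only: the function names are invented here, and the fallback rules are a restatement of the blueprint, not Make syntax.

```python
# Illustrative Python sketch of the Module 2-4 logic (not Make syntax).
# Mirrors: domain extraction from the email, the final_size fallback
# chain, and the router's enterprise/self-serve decision.

def extract_domain(email: str) -> str:
    """Module 2 input: the domain Clearbit is queried with."""
    return email.split("@")[-1].lower()

def normalize_size(self_reported, clearbit_count):
    """Module 3: prefer self-reported size, fall back to Clearbit,
    default to 'unknown' when both are empty."""
    if self_reported not in (None, ""):
        return self_reported
    if clearbit_count not in (None, "", 0):
        return clearbit_count
    return "unknown"

def route(final_size) -> str:
    """Module 4: enterprise if >= 50 (number) or the '50+' dropdown
    string; everything else, including 'unknown', goes self-serve."""
    if final_size == "50+":
        return "enterprise"
    try:
        return "enterprise" if float(final_size) >= 50 else "self-serve"
    except (TypeError, ValueError):
        return "self-serve"

print(extract_domain("jane@stripe.com"))   # stripe.com
print(route(normalize_size("", 120)))      # enterprise
print(route(normalize_size(None, None)))   # self-serve
```

Note how the `route` sketch mirrors the filter table below the build: the `"50+"` string check exists because Typeform dropdowns return text, not numbers.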

Filter conditions (exact syntax)

| Between modules | Condition | Why |
|---|---|---|
| Router → Route A | `{{3.final_size}}` Number operator: Greater than or equal — Value: `50` | Enterprise branch |
| Router → Route A (fallback) | OR: `{{3.final_size}}` Text operator: Equal to — Value: `50+` | Typeform dropdown may return string `"50+"` instead of number |
| Router → Route B | `{{3.final_size}}` Number operator: Less than — Value: `50` OR Text: Equal to `unknown` | Self-serve + unknowns get the safer default |
| M5 (Route A) → M6 | `{{2.industry}}` Text operator: Exists | Skip the update call if Clearbit returned nothing |

Error handling strategy

| Module | Handler type | What it does |
|---|---|---|
| Module 2 (Clearbit) | Resume with default bundle `{industry: "", funding_total: 0, employee_count: 0}` | Don't fail the scenario when Clearbit has no data |
| Module 5 Route A (HubSpot Deal) | Rollback + Slack alert to #ops-alerts | Deal creation is critical; if it fails, we want a human to intervene |
| Module 7 (Slack AE notify) | Break with 3 retries, 5-min interval | Slack has occasional 429s; retry handles them |
| Module 8 (Airtable log) | Ignore | Logging should never break the customer-facing flow |
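Outside Make, the "Resume with default bundle" handler on Module 2 is the familiar try/except-with-fallback pattern. A minimal sketch, with `fetch_clearbit` as a hypothetical stand-in for the real API call:

```python
# Sketch of the "Resume with default bundle" pattern for the Clearbit
# step (Module 2). fetch_clearbit is a hypothetical stand-in for the
# real API client; the point is the fallback shape, not the client code.

DEFAULT_BUNDLE = {"industry": "", "funding_total": 0, "employee_count": 0}

def fetch_clearbit(domain: str) -> dict:
    # Placeholder: a real call would hit Clearbit's Reveal endpoint and
    # fail on 404 (no data for many domains) or 429 (rate limit).
    raise LookupError(f"no Clearbit data for {domain}")

def enrich(domain: str) -> dict:
    """Resume-style handler: on failure, continue with safe defaults
    instead of halting the whole scenario."""
    try:
        return fetch_clearbit(domain)
    except Exception:
        return dict(DEFAULT_BUNDLE)  # scenario keeps running

print(enrich("gmail.com"))  # {'industry': '', 'funding_total': 0, 'employee_count': 0}
```

The copy via `dict(DEFAULT_BUNDLE)` matters: downstream steps can mutate their bundle without corrupting the shared default.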

Gotchas to watch for

1. Typeform polling delay: Watch Responses polls every 1 min minimum. If your SLA is truly "within 1 minute," you'll be closer to 1-2 min. For sub-minute, switch to Typeform webhook (requires Typeform Business plan).

2. Clearbit rate limits: Reveal plan = 1,000 calls/day. At 120 Monday submissions + retries, you're safe — but if volume doubles, add a Data Store cache keyed by domain to avoid re-querying the same company within 30 days. This alone saves ~40% of Clearbit calls.

3. HubSpot duplicate contacts: Create a Deal with "associate with contact" will create a duplicate contact if email matching is off. Turn on HubSpot's duplicate management and use Create/Update Contact (Module 6/Route B pattern) in both routes to be safe.

4. Router converging to Airtable: Make doesn't have an explicit "merge" module. The pattern is: place the Airtable module AFTER the router, outside any route. Both routes will execute it. Verify in the execution log that it fires exactly once per run.

5. Timezone drift on Airtable timestamps: Make sends UTC by default. If your team reads Airtable in Pacific time, format with `formatDate(now; "YYYY-MM-DD HH:mm"; "America/Los_Angeles")` in the Airtable field.
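The Data Store cache from gotcha 2 is easiest to reason about as a TTL cache keyed by domain. A minimal Python sketch (in Make this would be a Data Store module pair, not in-process memory; `fake_fetch` is an invented stand-in for the Clearbit call):

```python
# Sketch of the gotcha-2 mitigation: a cache keyed by domain so the
# same company isn't re-queried within 30 days. In Make this would be
# a Data Store; here it's an in-memory dict for illustration.
import time

TTL = 30 * 24 * 3600  # 30 days in seconds
_cache: dict[str, tuple[float, dict]] = {}

def cached_enrich(domain: str, fetch) -> dict:
    """Return cached enrichment data if fresh, else fetch and store."""
    now = time.time()
    hit = _cache.get(domain)
    if hit and now - hit[0] < TTL:
        return hit[1]            # cache hit: zero Clearbit calls
    data = fetch(domain)         # cache miss: one Clearbit call
    _cache[domain] = (now, data)
    return data

calls = []
def fake_fetch(domain):
    calls.append(domain)
    return {"industry": "fintech"}

cached_enrich("stripe.com", fake_fetch)
cached_enrich("stripe.com", fake_fetch)  # served from cache
print(len(calls))  # 1
```

One upstream call instead of two for the repeated domain is exactly the effect that trims Clearbit usage at higher volume.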

Ops (operations) estimate

  • Per run: 8 ops (enterprise branch with Clearbit hit) / 7 ops (self-serve with Clearbit hit) / 6 ops (no Clearbit data)
  • At your volume (60/day avg, 120 peak): ~500 ops/day steady, ~1,320 Monday
  • Monthly: ~18,000 ops
  • Plan recommendation: Make Core (10,000 ops/mo) will not be enough at ~18,000 ops. Go with Make Pro, where the ops allowance is customizable and you get priority execution plus custom variables. Expect ~$16/mo at the base tier.
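The ops figures above are plain arithmetic, so they're easy to check. A quick sketch using the blueprint's own assumptions (a blended ~8.3 ops per run implied by the 500/day steady state, 11 ops worst case, roughly four Mondays a month):

```python
# Back-of-envelope check of the ops estimate above. The per-run ops
# and run counts are the blueprint's own assumptions, not new data.
ops_per_run = 8.3   # blended average across the three run types
steady_runs = 60    # demo requests on a normal day
monday_runs = 120   # newsletter-driven Monday peak

steady_ops = steady_runs * ops_per_run   # ~500/day
monday_ops = monday_runs * 11            # worst case, 1,320
# ~30 days at steady state, plus the Monday surplus 4 times a month:
monthly = steady_ops * 30 + (monday_ops - steady_ops) * 4

print(round(steady_ops))  # 498
print(monday_ops)         # 1320
print(round(monthly))     # ~18,200, matching the ~18,000/mo estimate
```

The takeaway: the Monday peak adds roughly 3,000 ops/month on its own, which is what pushes the total past Core's 10,000-op ceiling.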

Before you go live: 3-step test

1. Happy path enterprise: Submit Typeform with company_size=`100`, company=`stripe.com`. Verify: HubSpot deal created in Sales pipeline, AE gets Slack message within 90 seconds, Airtable row appears with `route_taken=enterprise` and `clearbit_enriched=true`.

2. No-Clearbit-data case: Submit with a personal Gmail address. Verify: scenario completes, Clearbit error handler fires (visible in execution log as yellow, not red), contact is still created, Airtable shows `clearbit_enriched=false`.

3. Missing company_size: Submit leaving the size field blank. Verify: `final_size` resolves to `unknown`, routes to self-serve (safer default), Calendly email sends.
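Keeping the three test submissions as fixtures makes the checks repeatable. A hypothetical sketch (field names mirror Module 1's output as listed in the blueprint; adjust them to your actual Typeform fields):

```python
# Hypothetical fixtures for the three pre-launch tests. Field names
# mirror the blueprint's Module 1 output; adapt to your real form.
happy_path = {
    "email": "jane@stripe.com", "company_name": "Stripe",
    "company_size": "100", "first_name": "Jane", "last_name": "Doe",
}
# Test 2: personal address, so Clearbit's lookup should find nothing.
no_clearbit_data = {**happy_path, "email": "jane.doe@gmail.com"}
# Test 3: size left blank; final_size should resolve to 'unknown'.
missing_size = {**happy_path, "company_size": ""}

# Quick guard: every fixture carries the fields the scenario maps.
REQUIRED = {"email", "company_name", "company_size", "first_name", "last_name"}
for payload in (happy_path, no_clearbit_data, missing_size):
    missing = REQUIRED - payload.keys()
    assert not missing, f"payload missing fields: {missing}"
print("all test payloads carry the fields the scenario maps")
```

Run each fixture through the live scenario and compare the execution log against the three verifications above.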

Key Takeaways

  • Name every module explicitly — "HubSpot CRM > Create a Deal" is buildable; "add to CRM" is not.
  • Every external API call needs an error handler. Clearbit and Slack are your two weakest links here — wire Resume and Break respectively.
  • Normalize before you route. The final_size variable (Module 3) prevents ~12% of routing errors from missing form fields.
  • Place the Airtable log AFTER the router, not inside both branches. One module, two paths into it.
  • Test the failure modes, not just the happy path. Most scenarios work for 2 weeks and then break on an edge case you didn't test. Steps 2 and 3 of the test plan exist specifically for that.

Common use cases

  • Form → CRM routing (Typeform, Tally, Jotform → HubSpot, Pipedrive, Attio)
  • Lead enrichment pipelines with Clearbit, Apollo, or Hunter.io before CRM entry
  • E-commerce order flows: Shopify → fulfillment → customer notification → accounting
  • Content publishing: Notion/Airtable → social media → Slack notification → analytics log
  • AI workflows: transcript → Claude/GPT summary → stakeholder email → CRM note
  • Billing and dunning: Stripe event → tag customer → trigger email sequence → CS alert
  • Internal ops: Slack command → Google Sheet lookup → conditional action → response

Best AI model for this

Claude Sonnet 4.5 or GPT-5 — both are strong here because they know Make.com's module library deeply. Claude tends to be more thorough on error handling; GPT-5 is slightly better at creative routing logic. For scenarios touching 10+ modules, Claude wins on coherence.

Pro tips

  • Describe your workflow in the order a user experiences it, not in technical order. The prompt will restructure it correctly.
  • Always mention your expected volume (e.g., '50 submissions/day' vs '5,000/day'). This changes the architecture — high-volume needs aggregators and queues.
  • List the apps you're already paying for. The prompt will avoid suggesting Zapier-only integrations or tools you'd need to adopt.
  • If a step involves judgment ('route to the right rep'), say so explicitly. The prompt is good at choosing between router modules vs. data-store lookups vs. AI routing.
  • Ask for the 'dry-run plan' variant if you're nervous — it gives you a module-by-module test procedure before going live.
  • Paste the output into a Notion doc and use it as your scenario's documentation. Future-you will thank present-you when something breaks in 6 months.

Customization tips

  • Replace the apps in the blueprint with yours — the architecture (router, enrichment with error handler, converged log) is reusable for any form-to-CRM pattern.
  • If your volume is 10x higher (500+/day), ask for the 'Cost optimizer' variant. Aggregators and Data Store caching can cut ops by 40-60%.
  • Run the 'Dry-run plan' variant before building in production — it generates a full test procedure with fake payloads, which catches about 80% of misconfigurations before they reach a real customer.
  • Save the output as your scenario's documentation in Notion. In 6 months when something breaks, past-you will have left present-you a map.
  • If you're handing this to a freelancer, the module-by-module format is scope-ready. Typical build time for this exact blueprint: 3-5 hours, ~$150-300 on Upwork.

Variants

Zapier equivalent

Rebuild the same blueprint as a Zapier Zap with Paths, Filters, and Formatter steps instead of Make modules.

Cost optimizer

Rewrite the scenario to minimize Make operations (ops) consumption — aggregators, bundling, and avoiding unnecessary iterator explosions.

Dry-run plan

Generate a step-by-step test procedure: what fake data to push through, what to verify at each module, and how to confirm error handlers fire correctly before going live.

Frequently asked questions

How do I use the Make.com Scenario Blueprint prompt?

Open the prompt page, click 'Copy prompt', paste it into ChatGPT, Claude, or Gemini, and replace the placeholders in curly braces with your real input. The prompt is also launchable directly in each model with one click.

Which AI model works best with Make.com Scenario Blueprint?

Claude Sonnet 4.5 or GPT-5 — both are strong here because they know Make.com's module library deeply. Claude tends to be more thorough on error handling; GPT-5 is slightly better at creative routing logic. For scenarios touching 10+ modules, Claude wins on coherence.

Can I customize the Make.com Scenario Blueprint prompt for my use case?

Yes — every Promptolis Original is designed to be customized. Two key levers: describe your workflow in the order a user experiences it, not in technical order (the prompt will restructure it correctly), and state your expected volume (e.g., '50 submissions/day' vs '5,000/day'), since high volume changes the architecture toward aggregators and queues.

Explore more Originals

Hand-crafted 2026-grade prompts that actually change how you work.

← All Promptolis Originals