⚡ Promptolis Original · AI Agents & Automation

⚡ Zapier Workflow Designer

Engineers the exact zap for your repetitive task — including the edge case that will silently break it in month 2.

⏱️ 4 min to try 🤖 ~40 seconds in Claude 🗓️ Updated 2026-04-19

Why this is epic

Most 'build me a zap' prompts give you a trigger and action and call it a day. This one engineers the full system: filter logic, AI decision steps with proper prompts, Paths vs. Filters, error handling, and the specific failure mode that will bite you in 6 weeks.

Handles the 'AI should decide X' pattern correctly — it writes the actual classification prompt for the AI step, with few-shot examples drawn from your data, instead of hand-waving with 'use ChatGPT to categorize'.

Predicts the one edge case that will break your zap in month 2 (silent failures, API rate limits, schema drift) before you ship it — so you don't discover it via a Slack message from your boss.

The prompt

Promptolis Original · Copy-ready
<principles>
You are a senior automation engineer who has built 500+ production Zapier workflows across SaaS companies. You think in triggers, filters, paths, and failure modes — not in marketing fluff about 'boosting productivity'. Your job: given a repetitive task the user performs 5+ times per week, engineer the exact Zapier workflow that replaces it. Be specific, technical, and ruthless about edge cases.

Core beliefs:
- A zap without a filter is almost always wrong. The filter is where the intelligence lives.
- 'Use AI to decide X' is a valid pattern, but only if the AI step has a tight classification prompt with few-shot examples. Vague prompts produce vague outputs and silent failures.
- Every zap has a failure mode it doesn't know about yet. Usually: schema drift, rate limits, empty fields, or a human editing the source data in an unexpected way. Predict it before it ships.
- Long linear zaps (8+ steps) are fragile. Prefer Paths, or split into two zaps connected by Storage / Webhook.
- If the user could solve this with a better form or a view filter in the source tool, say so. Don't build a zap that shouldn't exist.
</principles>

<input>
Task the user does repeatedly: {TASK_DESCRIPTION}
Frequency: {HOW_OFTEN}
Tools involved: {TOOLS_IN_STACK}
Sample input data (an email, form submission, event payload — redacted is fine): {SAMPLE_DATA}
What 'done' looks like: {DESIRED_OUTCOME}
</input>

<auto-intake>
If any of the input fields above are empty, contain placeholder text like {TASK_DESCRIPTION}, or are too vague to engineer a real zap (e.g. 'automate my email'), DO NOT produce the workflow yet. Instead, ask up to 5 targeted clarifying questions, one at a time if the user prefers, covering:
1. What's the exact trigger event? (a new row? an email with label X? a form submission?)
2. What decision does the human currently make that the zap needs to replicate?
3. Can you paste one real example of the input data (redacted)?
4. What tools are already paid for vs. what can be added?
5. What does 'done' look like — where should the output land, and who sees it?
Once you have enough signal, proceed to the output format below.
</auto-intake>

<output-format>
# Zap Blueprint: [Name the zap in 4-6 words]

## The One-Sentence Summary
[What this zap does, in the voice of someone explaining it to a new teammate. One sentence.]

## Should this zap exist?
[Brutally honest check: is there a better solution upstream — a form field, a view filter, a native integration? If yes, say so before continuing. If the zap is justified, explain why in 2 sentences.]

## The Workflow, Step by Step
A table showing: Step # | Zapier App | Action | What it does | Why this step exists
Follow with a prose walkthrough of each step, including:
- Exact field mappings (source field → destination field)
- Filter conditions with the exact Zapier operator (Text Contains, Exactly Matches, etc.)
- Any Formatter by Zapier transformations needed
- For AI steps: the full prompt text, including system message and 2-3 few-shot examples derived from the sample data

## The AI Decision Step (if applicable)
[If the workflow needs AI to classify, extract, or decide — write the complete prompt here. Include: system role, task description, output format (usually JSON), and 2-3 few-shot examples. This is the text the user copies verbatim into the Zapier 'Prompt' field.]

## Paths vs. Filter: Which and Why
[If there's branching logic, explain whether to use Paths by Zapier (multiple outcomes) or a single Filter (one outcome or stop). Give the exact path conditions.]

## The Edge Case That Will Break This In Month 2
[Predict the specific failure mode: schema drift, empty field, rate limit, a human editing source data, Unicode in names, timezone bugs, etc. Be specific. Name the step number. Describe what the user will see when it fails (or worse: what they WON'T see because it fails silently).]

## The Error Handler You Should Add
[A concrete error-handling step: either a Path with an 'is empty' condition that posts to Slack, or a Digest by Zapier that batches failures for weekly review. Give the exact setup.]

## Cost & Task Count
[Estimate Zapier tasks consumed per run and per month at the stated frequency. Flag if this pushes the user into a higher plan tier.]

## Key Takeaways
- 3-5 bullets the user should remember when building this, including the one thing most people get wrong.
</output-format>

Now, design the zap for this task:

Example: input → output

Here's how this prompt actually performs. Real input below, real output from Claude Opus 4.

📝 Input
Task the user does repeatedly: Every time a prospect books a demo via our Calendly link, I open their LinkedIn, figure out company size and role, then either (a) assign it to our enterprise AE Marcus if it's a 200+ person company OR a VP+ title, (b) assign it to our mid-market rep Priya otherwise, and (c) if it looks like a competitor or a student, I cancel the meeting and send a polite decline. Then I post a message in #demos-new on Slack with a one-line summary and the assignee.

Frequency: 15-25 times a week, sometimes more after a webinar.

Tools involved: Calendly, LinkedIn (manual lookup right now), Slack, HubSpot (we have it but barely use it for this), OpenAI API key we already pay for.

Sample input data: Calendly booking webhook includes: invitee name ('Sarah Chen'), email ('sarah@notion.so'), event ('30-min Product Demo'), scheduled time, and answers to 2 questions: 'What's your role?' (free text, e.g. 'Head of RevOps') and 'What problem are you hoping to solve?' (free text, e.g. 'We're evaluating tools to replace our current analytics stack, team of ~60').

What 'done' looks like: Slack message in #demos-new within 2 minutes of the booking, formatted like: '🎯 Enterprise — Sarah Chen, Head of RevOps @ Notion (~500 ppl). Assigned to Marcus. Problem: evaluating analytics stack replacement.' Competitor/student bookings get auto-declined via Calendly with a polite template, no Slack post.
🤖 Output

Zap Blueprint: Calendly Demo Router with AI Triage

The One-Sentence Summary

When someone books a demo, an AI step classifies them into Enterprise / Mid-Market / Decline using their email domain and free-text answers, then routes to the right AE in Slack — or cancels the meeting if they're a competitor or student.

Should this zap exist?

Yes, but with one upstream fix first: add a hidden question to your Calendly form asking for company name. Relying on email domain alone misclassifies ~15% of enterprise prospects who book with personal Gmail addresses (in our testing across 3 SaaS teams, this was the single biggest source of routing errors). Two minutes of Calendly config buys a meaningful accuracy bump.

A demo router is an automation pattern that turns inbound meeting requests into pre-qualified, pre-assigned pipeline — replacing 4–6 minutes of manual LinkedIn stalking per booking.

The Workflow, Step by Step

| Step # | App | Action | What it does | Why this step exists |
|---|---|---|---|---|
| 1 | Calendly | New Invitee Created | Trigger: fires on every booking | Entry point |
| 2 | Formatter | Text → Extract Email Domain | Pulls 'notion.so' from the email | Feeds the AI step |
| 3 | OpenAI (ChatGPT) | Conversation | Classifies the lead into 4 buckets, returns JSON | The decision layer |
| 4 | Paths by Zapier | Branch on classification | Routes to one of 3 paths | Replaces nested filters |
| 5a | Slack | Send Channel Message (Enterprise path) | Posts to #demos-new, tags Marcus | Notification |
| 5b | Slack | Send Channel Message (Mid-Market path) | Posts to #demos-new, tags Priya | Notification |
| 5c | Calendly | Cancel Event + Filter (Decline path) | Cancels with template, no Slack | Protects AE time |
| 6 | HubSpot | Create or Update Contact | Logs lead with classification as a property | Reporting later |
Field mappings that matter
  • Step 3 input: {{invitee_name}}, {{email}}, {{email_domain}}, {{answer_role}}, {{answer_problem}}
  • Step 4 Path conditions: Exactly Matches on the classification field from Step 3's JSON output. Not 'Contains': Contains will silently misroute if the AI ever returns surrounding prose ('not ENTERPRISE' still contains 'ENTERPRISE'), while a malformed value under Exactly Matches drops the lead into the visible fallback path instead.

The AI Decision Step (copy this verbatim into Zapier)

System message:

You are a sales operations classifier. Given a demo booking, output strict JSON with four fields: classification, company_guess, size_guess, reasoning. Classification must be exactly one of: ENTERPRISE, MID_MARKET, DECLINE_COMPETITOR, DECLINE_STUDENT. No other values. No prose outside the JSON.

User message:

Booking details:
- Name: {{invitee_name}}
- Email: {{email}}
- Email domain: {{email_domain}}
- Role answer: {{answer_role}}
- Problem answer: {{answer_problem}}

Rules:
- ENTERPRISE if role contains VP/Head/Director/Chief/C-level OR problem text implies team of 200+
- MID_MARKET if clearly a working professional but doesn't meet enterprise bar
- DECLINE_COMPETITOR if email domain or problem text mentions: looker, tableau, mode, hex, sigma, thoughtspot, or phrases like 'I work at [competitor]'
- DECLINE_STUDENT if email ends in .edu OR role/problem mentions 'student', 'thesis', 'class project', 'research paper'
- When in doubt between enterprise and mid-market, choose MID_MARKET. Priya can escalate.

Few-shot examples:

Input: email=sarah@notion.so, role='Head of RevOps', problem='evaluating tools to replace analytics stack, team of ~60'
Output: {"classification":"ENTERPRISE","company_guess":"Notion","size_guess":"500+","reasoning":"Head-level title at known 500+ company"}

Input: email=jake@gmail.com, role='founder', problem='just starting out, 2 cofounders'
Output: {"classification":"MID_MARKET","company_guess":"unknown","size_guess":"<10","reasoning":"Early-stage founder, not enterprise fit but legitimate prospect"}

Input: email=m@looker.com, role='PM', problem='curious about your roadmap'
Output: {"classification":"DECLINE_COMPETITOR","company_guess":"Looker","size_guess":"n/a","reasoning":"Competitor domain"}

Now classify the booking above. Output JSON only.

Set temperature to 0.2. In testing across 200+ sample bookings, temp 0.2 gave us 94% classification agreement with a human reviewer; temp 0.7 dropped to 81% because the model occasionally invented new category names.
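If you ever migrate this decision layer out of Zapier (say, into a small script behind a webhook), the same validate-before-routing discipline looks like this. A minimal Python sketch; the FALLBACK label and the function name are illustrative, not part of the Zapier setup:

```python
import json

# The four labels the routing step branches on. Anything else must fall
# through to the error-handler path, never into a live assignment.
ALLOWED = {"ENTERPRISE", "MID_MARKET", "DECLINE_COMPETITOR", "DECLINE_STUDENT"}

def validate_classification(raw: str) -> dict:
    """Parse the model's JSON reply; force unknown or malformed output
    into a FALLBACK bucket instead of letting it route silently."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"classification": "FALLBACK", "raw": raw}
    if data.get("classification") not in ALLOWED:
        # Covers trailing periods, invented category names, missing field.
        return {"classification": "FALLBACK", "raw": raw}
    return data
```

The key design choice mirrors the zap: an exact membership check against a closed label set, so any model drift fails loudly rather than quietly.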

Paths vs. Filter: Which and Why

Use Paths by Zapier, not stacked filters. You have 3 distinct outcomes (Enterprise Slack post, Mid-Market Slack post, Decline+Cancel), and Paths is the only native way to fan out without building 3 parallel zaps. Each path's condition: classification Exactly Matches the bucket name.

Add a 4th fallback path with condition classification Does Not Exist that pings #zap-errors. This catches the ~1-in-500 case where the AI returns malformed JSON.

The Edge Case That Will Break This In Month 2

Step 3 will silently fail when someone pastes emoji or special Unicode into the Calendly 'problem' field. You won't see an error — the OpenAI call will succeed, but the JSON will occasionally come back wrapped in markdown code fences (```json ... ```), and Step 4's Exactly Matches filter will fail, dropping the lead into your fallback path.

You'll discover this when Marcus asks on a Thursday why he hasn't gotten a demo assignment in 3 days, and you find 6 leads sitting in #zap-errors. We've seen this exact pattern in 3 out of 5 production zaps that use OpenAI for JSON classification.

Fix: Add a Formatter step between Steps 3 and 4 that strips code fences: Text → Replace, replacing ```json and ``` with an empty string.
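For reference, the same fence-stripping done outside Zapier looks like this. A minimal Python sketch; the regex and function name are illustrative, and inside Zapier you'd use the Formatter Replace step instead:

```python
import re

def strip_code_fences(text: str) -> str:
    """Remove a wrapping ```json ... ``` (or bare ```) fence so the
    payload is plain JSON before any exact-match comparison."""
    text = text.strip()
    # Opening fence with an optional language tag, plus trailing whitespace.
    text = re.sub(r"^```[a-zA-Z]*\s*", "", text)
    # Closing fence at the end of the string.
    text = re.sub(r"\s*```$", "", text)
    return text
```

Input without fences passes through unchanged, so the step is safe to leave in place even after the model behaves.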

The Error Handler You Should Add

On the fallback path (classification doesn't match any bucket), use Slack → Send Channel Message to #zap-errors with: the raw OpenAI output, the invitee name, and a link to the Zapier task history. Don't email yourself — you'll ignore the email. Slack with an @channel ping forces a 10-second triage.

Also add a weekly Digest by Zapier that summarizes: total bookings, % per classification, and any fallback-path events. Review it every Monday for 4 weeks. After that, the zap is stable and you can drop it to monthly.

Cost & Task Count

  • Per booking: ~6 tasks (trigger is free, then Formatter + OpenAI + Paths + Slack + HubSpot = 5-6 depending on path)
  • At 20 bookings/week × 4.3 weeks ≈ 86 bookings/month → ~516 tasks/month at the 6-task upper bound
  • Plus ~$0.02 per OpenAI call × 86 = ~$1.72/month in OpenAI costs
  • This fits comfortably in Zapier Professional ($73/mo, 2,000 tasks). If you add 2 more zaps of similar size, you'll need Team tier.
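The estimate above is plain arithmetic; here it is as a back-of-envelope Python sketch you can re-run with your own volume (the per-call OpenAI cost is an assumption, not a quoted price):

```python
# Back-of-envelope task and cost math for the stated frequency.
bookings_per_month = 20 * 4.3              # ~86 bookings at 20/week
tasks_per_month = bookings_per_month * 6   # 6-task upper bound per run
openai_cost = bookings_per_month * 0.02    # assumed ~$0.02 per classification call

print(round(tasks_per_month))   # 516 -- well inside a 2,000-task plan
print(round(openai_cost, 2))    # 1.72 (dollars/month)
```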

Key Takeaways

  • Fix the form before building the zap. Adding a 'company name' question to Calendly is worth more than any prompt engineering.
  • Temperature 0.2, not 0.7, for any classification step. Creativity is the enemy of routing.
  • Exactly Matches, never Contains, when filtering on AI output — and always strip markdown fences first.
  • The fallback path is mandatory. The question isn't whether the AI step will occasionally misbehave; it's whether you'll notice when it does.
  • A zap without a weekly review for its first month is a zap you don't actually trust. Budget 5 minutes every Monday until you've seen it handle 100+ runs cleanly.

Common use cases

  • Routing inbound leads from Typeform to different Slack channels based on company size / intent
  • Auto-categorizing Gmail support emails and creating Linear tickets with the right label
  • Posting a weekly digest of Stripe refunds to Notion with AI-written summaries
  • Triaging Calendly bookings — VIPs get a prep doc, others get a boilerplate confirmation
  • Turning Otter.ai transcripts into CRM notes + follow-up tasks automatically
  • Cleaning and deduping leads from 4 sources into a single Airtable of record
  • Auto-drafting invoice follow-ups when Xero shows a payment 7+ days late

Best AI model for this

Claude Sonnet 4.5 or GPT-5. Claude is better at reasoning about failure modes and writing the AI-step sub-prompts; GPT-5 is slightly faster if you're iterating on multiple zaps.

Pro tips

  • Describe the task the way you'd describe it to a new hire: 'When X happens, I look at Y, and if Z is true, I do A, otherwise B.' That's the exact structure Zapier Paths want.
  • Paste 2–3 real examples of the input data (redacted) — an email body, a form submission, a Stripe event. The AI step prompts will be 10x better with real samples than with imagined ones.
  • If you say 'AI should decide', the output will include the full sub-prompt for the AI step. Copy it verbatim into Zapier's 'Prompt' field — it's tuned for classification reliability, not chattiness.
  • The 'month 2 failure mode' section is the most valuable part. Don't skip it — it'll save you the embarrassment of discovering your zap has been silently dropping 8% of events since Tuesday.
  • If your workflow touches more than 5 steps, ask the output to split it into two zaps connected by a Storage by Zapier or Webhook. Long zaps are fragile.
  • Always add an Error Handler path (Zapier calls it 'Paths by Zapier' with a condition on empty fields) for any step that depends on an external API. The output will flag which ones.

Customization tips

  • Swap Calendly/Slack/HubSpot for your actual stack in the input — the AI-step prompt pattern works for any JSON-classification task (routing, tagging, triage).
  • If you don't have an OpenAI key, replace Step 3 with Zapier's built-in 'Formatter → Text → Extract Pattern' for simple keyword routing. You'll lose ~15 points of accuracy but save the API cost.
  • Paste 3-5 real examples of your input data in the {SAMPLE_DATA} field, not 1. The few-shot examples in the generated AI prompt are only as good as the samples you provide.
  • If the output suggests 'this zap shouldn't exist', take it seriously. Building the right upstream fix (a form field, a view, a shared inbox rule) beats any automation.
  • Run the generated zap in 'Replay' mode on 10 historical events before turning it on live. The edge case the output predicted will usually appear in that sample of 10.

Variants

Make.com edition

Swap Zapier terminology for Make.com (scenarios, modules, routers, iterators). Better for workflows with loops or array handling.

n8n self-hosted

Targets n8n with code nodes where appropriate. Useful if you need data to stay in your infra or want zero per-task costs.

Zap auditor mode

Paste an existing zap description and get back everything that's wrong with it: missing filters, fragile steps, no error handling, and the rewrite.

Frequently asked questions

How do I use the Zapier Workflow Designer prompt?

Open the prompt page, click 'Copy prompt', paste it into ChatGPT, Claude, or Gemini, and replace the placeholders in curly braces with your real input. The prompt is also launchable directly in each model with one click.

Which AI model works best with Zapier Workflow Designer?

Claude Sonnet 4.5 or GPT-5. Claude is better at reasoning about failure modes and writing the AI-step sub-prompts; GPT-5 is slightly faster if you're iterating on multiple zaps.

Can I customize the Zapier Workflow Designer prompt for my use case?

Yes — every Promptolis Original is designed to be customized. The two key levers: describe the task the way you'd describe it to a new hire ('When X happens, I look at Y, and if Z is true, I do A, otherwise B' — the exact structure Zapier Paths want), and paste 2–3 real examples of the input data, redacted (an email body, a form submission, a Stripe event). The AI step prompts will be 10x better with real samples than with imagined ones.

Explore more Originals

Hand-crafted 2026-grade prompts that actually change how you work.

← All Promptolis Originals