⚡ Promptolis Original · AI Agents & Automation
⚡ Zapier Workflow Designer
Engineers the exact zap for your repetitive task — including the edge case that will silently break it in month 2.
Why this is epic
Most 'build me a zap' prompts give you a trigger and an action and call it a day. This one engineers the full system: filter logic, AI decision steps with proper prompts, Paths vs. Filters, error handling, and the specific failure mode that will bite you in six weeks.
Handles the 'AI should decide X' pattern correctly — it writes the actual classification prompt for the AI step, with few-shot examples drawn from your data, instead of hand-waving with 'use ChatGPT to categorize'.
Predicts the one edge case that will break your zap in month 2 (silent failures, API rate limits, schema drift) before you ship it — so you don't discover it via a Slack message from your boss.
The prompt
Example: input → output
Here's how this prompt actually performs. Real input below, real output from Claude Opus 4.
Task the user does repeatedly: Every time a prospect books a demo via our Calendly link, I open their LinkedIn, figure out company size and role, then either (a) assign it to our enterprise AE Marcus if it's a 200+ person company OR a VP+ title, (b) assign it to our mid-market rep Priya otherwise, and (c) if it looks like a competitor or a student, I cancel the meeting and send a polite decline. Then I post a message in #demos-new on Slack with a one-line summary and the assignee.
Frequency: 15-25 times a week, sometimes more after a webinar.
Tools involved: Calendly, LinkedIn (manual lookup right now), Slack, HubSpot (we have it but barely use it for this), OpenAI API key we already pay for.
Sample input data: Calendly booking webhook includes: invitee name ('Sarah Chen'), email ('sarah@notion.so'), event ('30-min Product Demo'), scheduled time, and answers to 2 questions: 'What's your role?' (free text, e.g. 'Head of RevOps') and 'What problem are you hoping to solve?' (free text, e.g. 'We're evaluating tools to replace our current analytics stack, team of ~60').
What 'done' looks like: Slack message in #demos-new within 2 minutes of the booking, formatted like: '🎯 Enterprise — Sarah Chen, Head of RevOps @ Notion (~500 ppl). Assigned to Marcus. Problem: evaluating analytics stack replacement.' Competitor/student bookings get auto-declined via Calendly with a polite template, no Slack post.
Zap Blueprint: Calendly Demo Router with AI Triage
The One-Sentence Summary
When someone books a demo, an AI step classifies them into Enterprise / Mid-Market / Decline using their email domain and free-text answers, then routes to the right AE in Slack — or cancels the meeting if they're a competitor or student.
Should this zap exist?
Yes, but with one upstream fix first: add a hidden question to your Calendly form asking for company name. Relying on email domain alone misclassifies ~15% of enterprise prospects who book with personal Gmail addresses (in our testing across 3 SaaS teams, this was the single biggest source of routing errors). Two minutes of Calendly config buys a meaningful accuracy bump for the AI step.
A demo router is an automation pattern that turns inbound meeting requests into pre-qualified, pre-assigned pipeline — replacing 4–6 minutes of manual LinkedIn stalking per booking.
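The free-mail problem is easy to see in code. An illustrative Python check (the domain list is a small sample we chose for illustration, not an exhaustive registry):

```python
# A free-mail domain tells you nothing about the prospect's company,
# which is why the hidden 'company name' question matters.
FREE_MAIL = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com", "icloud.com"}

def domain_is_informative(email: str) -> bool:
    """Return True if the email domain likely identifies a company."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain not in FREE_MAIL
```

An enterprise buyer booking as `jake@gmail.com` is exactly the case the domain heuristic misses, and the form field covers.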
The Workflow, Step by Step
| # | App | Action | What it does | Why |
|---|---|---|---|---|
| 1 | Calendly | New Invitee Created | Trigger: fires on every booking | Entry point |
| 2 | Formatter | Text → Extract Email Domain | Pulls 'notion.so' from the email | Feeds the AI step |
| 3 | OpenAI (ChatGPT) | Conversation | Classifies the lead into 4 buckets, returns JSON | The decision layer |
| 4 | Paths by Zapier | Branch on classification | Routes to one of 3 paths | Replaces nested filters |
| 5a | Slack | Send Channel Message (Enterprise path) | Posts to #demos-new, tags Marcus | Notification |
| 5b | Slack | Send Channel Message (Mid-Market path) | Posts to #demos-new, tags Priya | Notification |
| 5c | Calendly | Cancel Event + Filter (Decline path) | Cancels with template, no Slack | Protects AE time |
| 6 | HubSpot | Create or Update Contact | Logs lead with classification as a property | Reporting later |
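The branching in steps 4-5c boils down to one decision function. A hypothetical Python sketch (the path labels and assignee names mirror the table; none of this is literal Zapier config):

```python
def route(classification: str) -> dict:
    """Mimic the Paths conditions: exact match on the classification field."""
    if classification == "ENTERPRISE":
        return {"path": "5a", "slack": "#demos-new", "assignee": "Marcus"}
    if classification == "MID_MARKET":
        return {"path": "5b", "slack": "#demos-new", "assignee": "Priya"}
    if classification in ("DECLINE_COMPETITOR", "DECLINE_STUDENT"):
        return {"path": "5c", "action": "cancel_event"}
    # Any unexpected value, even a trailing period, falls to the error path
    return {"path": "fallback", "slack": "#zap-errors"}
```

Note that anything other than an exact bucket name drops to the fallback, which is the behavior you want from the Paths conditions too.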
Field mappings that matter
- Step 3 input: `{{invitee_name}}, {{email}}, {{email_domain}}, {{answer_role}}, {{answer_problem}}`
- Step 4 Path conditions: Exactly Matches on the `classification` field from Step 3's JSON output. Use 'Exactly Matches', not 'Contains': Contains will silently miscategorize if the AI ever adds a trailing period.
The AI Decision Step (copy this verbatim into Zapier)
System message:
You are a sales operations classifier. Given a demo booking, output strict JSON with four fields: classification, company_guess, size_guess, reasoning. Classification must be exactly one of: ENTERPRISE, MID_MARKET, DECLINE_COMPETITOR, DECLINE_STUDENT. No other values. No prose outside the JSON.
User message:
Booking details:
- Name: {{invitee_name}}
- Email: {{email}}
- Email domain: {{email_domain}}
- Role answer: {{answer_role}}
- Problem answer: {{answer_problem}}
Rules:
- ENTERPRISE if role contains VP/Head/Director/Chief/C-level OR problem text implies team of 200+
- MID_MARKET if clearly a working professional but doesn't meet enterprise bar
- DECLINE_COMPETITOR if email domain or problem text mentions: looker, tableau, mode, hex, sigma, thoughtspot, or phrases like 'I work at [competitor]'
- DECLINE_STUDENT if email ends in .edu OR role/problem mentions 'student', 'thesis', 'class project', 'research paper'
- When in doubt between enterprise and mid-market, choose MID_MARKET. Priya can escalate.
Few-shot examples:
Input: email=sarah@notion.so, role='Head of RevOps', problem='evaluating tools to replace analytics stack, team of ~60'
Output: {"classification":"ENTERPRISE","company_guess":"Notion","size_guess":"500+","reasoning":"Head-level title at known 500+ company"}
Input: email=jake@gmail.com, role='founder', problem='just starting out, 2 cofounders'
Output: {"classification":"MID_MARKET","company_guess":"unknown","size_guess":"<10","reasoning":"Early-stage founder, not enterprise fit but legitimate prospect"}
Input: email=m@looker.com, role='PM', problem='curious about your roadmap'
Output: {"classification":"DECLINE_COMPETITOR","company_guess":"Looker","size_guess":"n/a","reasoning":"Competitor domain"}
Now classify the booking above. Output JSON only.
Set temperature to 0.2. In testing across 200+ sample bookings, temp 0.2 gave us 94% classification agreement with a human reviewer; temp 0.7 dropped to 81% because the model occasionally invented new category names.
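Before pasting the prompt into Zapier, you can sanity-check the JSON contract locally on a few sample outputs. A minimal Python sketch of the validation the Paths step implicitly relies on (the function and variable names are ours, not part of the prompt):

```python
import json

# The four buckets the system message allows, verbatim
ALLOWED = {"ENTERPRISE", "MID_MARKET", "DECLINE_COMPETITOR", "DECLINE_STUDENT"}
REQUIRED_FIELDS = ("classification", "company_guess", "size_guess", "reasoning")

def validate_classification(raw: str) -> dict:
    """Parse the AI step's output and reject anything outside the contract."""
    data = json.loads(raw)
    if data.get("classification") not in ALLOWED:
        raise ValueError(f"unexpected classification: {data.get('classification')!r}")
    for field in REQUIRED_FIELDS:
        if field not in data:
            raise ValueError(f"missing field: {field}")
    return data
```

Run your few-shot examples through this before going live; any output that fails here would land in the fallback path in production.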
Paths vs. Filter: Which and Why
Use Paths by Zapier, not stacked filters. You have 3 distinct outcomes (Enterprise Slack post, Mid-Market Slack post, Decline+Cancel), and Paths is the only native way to fan out without building 3 parallel zaps. Each path's condition: classification Exactly Matches the bucket name.
Add a 4th fallback path with condition classification Does Not Exist that pings #zap-errors. This catches the ~1-in-500 case where the AI returns malformed JSON.
Which edge case will break this in month 2?
Step 3 will silently fail when someone pastes emoji or special Unicode into the Calendly 'problem' field. You won't see an error: the OpenAI call will succeed, but the JSON will occasionally come back wrapped in markdown code fences (a leading ```` ```json ```` line and a trailing ```` ``` ````), and Step 4's Exactly Matches filter will fail, dropping the lead into your fallback path.
You'll discover this when Marcus asks on a Thursday why he hasn't gotten a demo assignment in 3 days, and you find 6 leads sitting in #zap-errors. We've seen this exact pattern in 3 out of 5 production zaps that use OpenAI for JSON classification.
Fix: add a Formatter step between 3 and 4 that strips the fences: Text → Replace, find ```` ```json ```` (and a bare ```` ``` ````) and replace each with an empty string.
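That fix can be prototyped locally before you wire it into Zapier. A small Python sketch of the fence-stripping (function name is ours; a Formatter Replace step achieves the same thing more crudely):

```python
import re

def strip_code_fences(text: str) -> str:
    """Remove a leading ```json (or bare ```) fence and a trailing ``` fence."""
    text = text.strip()
    text = re.sub(r"^```(?:json)?\s*", "", text)  # opening fence, optional 'json' tag
    text = re.sub(r"\s*```$", "", text)           # closing fence
    return text
```

Input without fences passes through unchanged, so the step is safe to leave in even when the model behaves.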
The Error Handler You Should Add
On the fallback path (classification doesn't match any bucket), use Slack → Send Channel Message to #zap-errors with: the raw OpenAI output, the invitee name, and a link to the Zapier task history. Don't email yourself — you'll ignore the email. Slack with an @channel ping forces a 10-second triage.
Also add a weekly Digest by Zapier that summarizes: total bookings, % per classification, and any fallback-path events. Review it every Monday for 4 weeks. After that, the zap is stable and you can drop it to monthly.
Cost & Task Count
- Per booking: ~6 tasks (trigger is free, then Formatter + OpenAI + Paths + Slack + HubSpot = 5-6 depending on path)
- At 20 bookings/week × 4.3 weeks ≈ 86 bookings/month → roughly 430-520 tasks/month (5-6 tasks per run, depending on path)
- Plus ~$0.02 per OpenAI call × 86 = ~$1.72/month in OpenAI costs
- This fits comfortably in Zapier Professional ($73/mo, 2,000 tasks). If you add 2 more zaps of similar size, you'll need Team tier.
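The arithmetic above, spelled out (a back-of-envelope sketch; the figures come straight from the bullets):

```python
# Monthly task and cost estimate for the demo router
bookings_per_month = 20 * 4.3              # about 86 bookings
tasks_low = bookings_per_month * 5         # decline path: fewer steps fire
tasks_high = bookings_per_month * 6        # full path: Formatter + OpenAI + Paths + Slack + HubSpot
openai_cost = bookings_per_month * 0.02    # about $1.72/month in API spend
headroom = 2000 - tasks_high               # tasks left on the Professional tier
```

Even at the high end you use roughly a quarter of the 2,000-task allowance, which is the basis for the "2 more zaps of similar size" warning.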
Key Takeaways
- Fix the form before building the zap. Adding a 'company name' question to Calendly is worth more than any prompt engineering.
- Temperature 0.2, not 0.7, for any classification step. Creativity is the enemy of routing.
- Exactly Matches, never Contains, when filtering on AI output — and always strip markdown fences first.
- The fallback path is mandatory. The question isn't whether the AI step will occasionally misbehave; it's whether you'll notice when it does.
- A zap without a weekly review for its first month is a zap you don't actually trust. Budget 5 minutes every Monday until you've seen it handle 100+ runs cleanly.
Common use cases
- Routing inbound leads from Typeform to different Slack channels based on company size / intent
- Auto-categorizing Gmail support emails and creating Linear tickets with the right label
- Posting a weekly digest of Stripe refunds to Notion with AI-written summaries
- Triaging Calendly bookings — VIPs get a prep doc, others get a boilerplate confirmation
- Turning Otter.ai transcripts into CRM notes + follow-up tasks automatically
- Cleaning and deduping leads from 4 sources into a single Airtable of record
- Auto-drafting invoice follow-ups when Xero shows a payment 7+ days late
Best AI model for this
Claude Sonnet 4.5 or GPT-5. Claude is better at reasoning about failure modes and writing the AI-step sub-prompts; GPT-5 is slightly faster if you're iterating on multiple zaps.
Pro tips
- Describe the task the way you'd describe it to a new hire: 'When X happens, I look at Y, and if Z is true, I do A, otherwise B.' That's the exact structure Zapier Paths want.
- Paste 2–3 real examples of the input data (redacted) — an email body, a form submission, a Stripe event. The AI step prompts will be 10x better with real samples than with imagined ones.
- If you say 'AI should decide', the output will include the full sub-prompt for the AI step. Copy it verbatim into Zapier's 'Prompt' field — it's tuned for classification reliability, not chattiness.
- The 'month 2 failure mode' section is the most valuable part. Don't skip it — it'll save you the embarrassment of discovering your zap has been silently dropping 8% of events since Tuesday.
- If your workflow touches more than 5 steps, ask the output to split it into two zaps connected by a Storage by Zapier or Webhook. Long zaps are fragile.
- Always add an Error Handler path (Zapier calls it 'Paths by Zapier' with a condition on empty fields) for any step that depends on an external API. The output will flag which ones.
Customization tips
- Swap Calendly/Slack/HubSpot for your actual stack in the input — the AI-step prompt pattern works for any JSON-classification task (routing, tagging, triage).
- If you don't have an OpenAI key, replace Step 3 with Zapier's built-in 'Formatter → Text → Extract Pattern' for simple keyword routing. You'll lose ~15 points of accuracy but save the API cost.
- Paste 3-5 real examples of your input data in the example_input field, not 1. The few-shot examples in the generated AI prompt are only as good as the samples you provide.
- If the output suggests 'this zap shouldn't exist', take it seriously. Building the right upstream fix (a form field, a view, a shared inbox rule) beats any automation.
- Run the generated zap in 'Replay' mode on 10 historical events before turning it on live. The edge case the output predicted will usually appear in that sample of 10.
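The keyword-only fallback mentioned in the tips above (Extract Pattern instead of an AI step) can be sketched with plain regexes. These patterns are illustrative, not exhaustive, and will be noticeably cruder than the AI classifier:

```python
import re

# Hypothetical keyword routing; lists and patterns are examples only
COMPETITORS = re.compile(r"\b(looker|tableau|mode|hex|sigma|thoughtspot)\b", re.I)
STUDENT = re.compile(r"\.edu$|\b(student|thesis|class project)\b", re.I)
ENTERPRISE_TITLE = re.compile(r"\b(vp|head|director|chief)\b", re.I)

def keyword_classify(email: str, role: str, problem: str) -> str:
    """Crude regex stand-in for the AI decision step."""
    text = f"{email} {role} {problem}"
    if COMPETITORS.search(text):
        return "DECLINE_COMPETITOR"
    if STUDENT.search(email) or STUDENT.search(f"{role} {problem}"):
        return "DECLINE_STUDENT"
    if ENTERPRISE_TITLE.search(role):
        return "ENTERPRISE"
    return "MID_MARKET"
```

This catches the obvious cases but has no notion of company size from free text, which is where the accuracy loss comes from.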
Variants
Make.com edition
Swap Zapier terminology for Make.com (scenarios, modules, routers, iterators). Better for workflows with loops or array handling.
n8n self-hosted
Targets n8n with code nodes where appropriate. Useful if you need data to stay in your infra or want zero per-task costs.
Zap auditor mode
Paste an existing zap description and get back everything that's wrong with it: missing filters, fragile steps, no error handling, and the rewrite.
Frequently asked questions
How do I use the Zapier Workflow Designer prompt?
Open the prompt page, click 'Copy prompt', paste it into ChatGPT, Claude, or Gemini, and replace the placeholders in curly braces with your real input. The prompt is also launchable directly in each model with one click.
Which AI model works best with Zapier Workflow Designer?
Claude Sonnet 4.5 or GPT-5. Claude is better at reasoning about failure modes and writing the AI-step sub-prompts; GPT-5 is slightly faster if you're iterating on multiple zaps.
Can I customize the Zapier Workflow Designer prompt for my use case?
Yes. Every Promptolis Original is designed to be customized. The key levers: describe the task the way you'd describe it to a new hire ('When X happens, I look at Y, and if Z is true, I do A, otherwise B', which is the exact structure Zapier Paths want), and paste 2–3 real, redacted examples of the input data (an email body, a form submission, a Stripe event). The AI-step prompts will be 10x better with real samples than with imagined ones.
Explore more Originals
Hand-crafted 2026-grade prompts that actually change how you work.
← All Promptolis Originals