⚡ Promptolis Original · AI Agents & Automation
🧠 Personal AI Assistant Setup
Design a real assistant — inbox, calendar, memory, knowledge — with the privacy boundaries most people get wrong and the two agents you should never build.
Why this is epic
Most 'AI assistant' setups are glorified chatbots. This prompt designs an actual multi-surface system — inbox triage, calendar defense, knowledge retrieval, memory layer — tuned to your specific role and risk tolerance.
It names the two agents you should refuse to build: the ones that quietly atrophy the skills you're paid for. Nobody else will tell you this.
You get a staged 4-week rollout with privacy boundaries per data source, not vague 'be careful with personal info' advice.
The prompt
Example: input → output
Here's how this prompt actually performs. Real input below, real output from Claude Opus 4.
Role & context: Senior product manager at a 400-person B2B SaaS company. I manage 2 squads, run ~22 meetings/week, and write strategy docs that go to the exec team. I also do side consulting (2 clients, ~5 hrs/week).
Top 3 tasks you hate doing: (1) Triaging my inbox on Monday mornings — ~180 emails accumulate over the weekend, most are FYI. (2) Writing meeting recap emails after customer calls. (3) Chasing engineers for status updates across Slack, Linear, and standup notes to write my weekly squad update.
Top 3 skills you want to stay sharp at: (1) Product strategy writing — the actual argument construction, not the formatting. (2) Customer discovery conversations. (3) Negotiating roadmap tradeoffs with engineering leads.
Data surfaces you'd give access to: Work Gmail, Google Calendar, Linear, Slack (work), Notion (personal KB), Granola meeting notes.
Privacy tier: SMB — we have customer data in the inbox but no HIPAA/SOC-2-restricted content in what I'd expose.
Budget ceiling per month: $150.
Technical comfort (1-5): 4. I can write basic Python and set up Zapier/n8n, but I won't maintain a self-hosted stack.
Personal AI Assistant Design
The One-Sentence Definition
Your personal AI assistant is a Monday-morning inbox triager, a post-call recap drafter, and a weekly status synthesizer — three narrow agents with memory, not a general-purpose chatbot you talk to.
Your Delegation Profile
In our review of 100+ PM stacks, the pattern you describe — 22 meetings/week, multi-surface status chasing, strategy writing — has a clear delegation fault line. You hate the aggregation work (pulling signal from 180 emails, from 4 Slack channels, from 3 tools into one update). You love the argumentation work (strategy docs, roadmap negotiation, discovery calls).
That's the cleanest split we see. Aggregation is ~40% of your week and produces near-zero career leverage. Argumentation is ~25% of your week and is why you'll get promoted. A correctly designed assistant removes aggregation and refuses to touch argumentation.
The trap: your side consulting work will tempt you to build a "client strategy agent" that drafts recommendations. Don't. More on this below.
The Architecture (4 Layers)
| Layer | Purpose | Specific Tool | Access Level | Monthly Cost |
|---|---|---|---|---|
| Memory | Persistent context about your squads, clients, ongoing threads | Claude Projects (one per squad + one per client) | Read/write, manual curation | $20 (Claude Pro) |
| Knowledge | Searchable long-term KB | Notion + Notion AI for retrieval | Read-only for agents | $10 |
| Surfaces | Where work happens | Gmail, Calendar (scheduling via Reclaim), Linear, Slack, Granola | Scoped per agent (see privacy table) | Granola $18, Reclaim $10 |
| Orchestration | Triggers & workflows | n8n Cloud (self-service, no dev) + Raycast AI for quick local commands | Runs on schedule + webhooks | $20 n8n, $8 Raycast Pro |
Total: ~$86/month. Leaves $64 headroom for a dedicated LLM API key for n8n workflows (~$30-50/mo at your volume).
What Gets Delegated First (Ranked)
1. Monday inbox triage — Trigger: Sunday 10pm. Agent pulls all weekend Gmail, classifies into Reply-Needed / FYI / Newsletter / Kill. Drafts nothing. Produces a 1-screen digest in Notion by Monday 7am. Human checkpoint: you review digest, not inbox. Saves ~90 min/week.
2. Post-call recap drafts — Trigger: Granola transcript finalized. Agent drafts recap email + action items in your voice (fed 10 of your past recaps as style reference). You edit and send. Saves ~75 min/week across ~8 external calls.
3. Weekly squad update synthesis — Trigger: Thursday 4pm. Agent pulls Linear issue status, Slack standup threads, and your Granola notes. Produces a structured draft with: shipped, in-flight, blocked, risks. You rewrite the narrative; it handles the facts. Saves ~60 min/week.
4. Meeting prep briefs — Trigger: 30 min before any external meeting. Agent pulls last email thread, Notion notes on the account, and relevant Linear tickets. Delivers to Raycast. Saves ~40 min/week.
5. FYI email auto-archive with weekly digest — Trigger: real-time. Kills 60-70% of inbox volume. Saves ~25 min/week.
Estimated recovery: ~4.8 hours/week, roughly 18 hours/month.
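The Monday triage agent above boils down to a classify-then-digest pass. Here is a minimal sketch of that shape: `classify_email` uses placeholder keyword rules standing in for the LLM classification call the real n8n workflow would make, and all names (`Email`, `build_digest`, the category labels) are illustrative, not part of any Gmail or n8n API.

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    snippet: str

# Placeholder keyword rules standing in for the LLM classification
# call an n8n workflow node would make. Categories match the digest spec.
def classify_email(email: Email) -> str:
    subject = email.subject.lower()
    snippet = email.snippet.lower()
    if "unsubscribe" in snippet or "newsletter" in subject:
        return "Newsletter"
    if subject.startswith(("re:", "urgent")) or "?" in email.subject:
        return "Reply-Needed"
    if "fyi" in subject or "no action" in snippet:
        return "FYI"
    return "Kill"

def build_digest(emails: list[Email]) -> str:
    """Group weekend email into the four buckets and render the
    one-screen digest the agent would write to Notion."""
    buckets: dict[str, list[Email]] = {
        "Reply-Needed": [], "FYI": [], "Newsletter": [], "Kill": []
    }
    for e in emails:
        buckets[classify_email(e)].append(e)
    lines = []
    for category in ("Reply-Needed", "FYI", "Newsletter", "Kill"):
        lines.append(f"## {category} ({len(buckets[category])})")
        lines.extend(f"- {e.sender}: {e.subject}" for e in buckets[category])
    return "\n".join(lines)
```

The key design point survives the simplification: the agent drafts nothing and sends nothing; it only buckets and summarizes, so the human checkpoint stays intact.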
Which Two Agents Should You Refuse to Build?
1. The Strategy Doc Drafter. You listed "product strategy writing — the actual argument construction" as a protected skill. The moment you let an agent draft the first version of an exec strategy doc, three things happen within 6 months (we've watched it repeatedly): your drafts get flabbier because you start editing instead of thinking, your exec audience detects the voice shift, and you lose the muscle that distinguishes senior PMs from mid-level ones. Allow the agent to research inputs for strategy docs. Never let it draft prose.
2. The Discovery Call Coach / Question Generator. Customer discovery is a listening skill, not a question-generation skill. An agent that pre-writes your discovery questions will make you more scripted, less responsive to what the customer actually says, and worse at the follow-up probe that generates real insight. Keep the agent out of the conversation entirely — before, during, and after (recap is fine; next-call-question-suggestion is not).
Both are seductive because they feel like leverage. They're not. They're cognitive offloading (Kahneman, 2011) on exactly the work your brain should be doing.
How Do You Handle Privacy Per Surface?
| Surface | Sensitivity | Allowed Operations | Forbidden Operations | Retention Policy |
|---|---|---|---|---|
| Work Gmail | Medium (customer data) | Read, classify, summarize | Send on your behalf, forward externally | 30-day agent memory, then purge |
| Calendar | Low | Read, propose holds via Reclaim | Auto-accept external invites | Indefinite |
| Linear | Low | Read issue status | Write/close issues | Indefinite |
| Slack (work) | Medium | Read threads you're in | Post messages, DM people | 14 days |
| Granola notes | High (customer voice) | Read, summarize, draft recaps | Send drafts without your review | 90 days |
| Notion (personal KB) | Low | Read/write | — | Indefinite |
| Side-consulting data | Isolated | Separate Claude Project only | No cross-contamination with work agents | Per-client purge every 90 days |
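The table above is easiest to enforce if it lives as data, not as good intentions. Below is a hedged sketch of the idea: the policy encoded as a dict that an orchestration layer (an n8n function node, say) could consult before any agent action. Surface keys, operation names, and `check_operation` are all illustrative, not a real API.

```python
# Per-surface policy, deny-by-default. Retention of None means indefinite.
PRIVACY_POLICY = {
    "gmail":    {"allowed": {"read", "classify", "summarize"},
                 "forbidden": {"send", "forward"}, "retention_days": 30},
    "calendar": {"allowed": {"read", "propose_hold"},
                 "forbidden": {"auto_accept"}, "retention_days": None},
    "slack":    {"allowed": {"read"},
                 "forbidden": {"post", "dm"}, "retention_days": 14},
    "granola":  {"allowed": {"read", "summarize", "draft"},
                 "forbidden": {"send"}, "retention_days": 90},
}

def check_operation(surface: str, operation: str) -> bool:
    """Gate every agent action: the operation must be explicitly
    allowed, and never forbidden, for the agent to proceed."""
    policy = PRIVACY_POLICY.get(surface)
    if policy is None:
        # Unknown surface: needs a privacy-tier decision before access.
        return False
    return operation in policy["allowed"] and operation not in policy["forbidden"]
```

Deny-by-default is the point: when you add a new tool (the "surface drift" failure mode below), the gate fails closed until you've made an explicit tier decision.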
The 4-Week Rollout
Week 1 — Read-only foundations. Set up Granola, Claude Projects (one per squad). Build the Monday digest agent (read-only, outputs to Notion). Success metric: you read the digest instead of your inbox on 3 of 4 Mondays. Kill criteria: digest misses a P0 email more than once.
Week 2 — Recap drafts. Add the Granola → recap draft pipeline. Human-in-loop on every send for the first 10 recaps. Success metric: <3 min of edits per recap by end of week. Kill criteria: your customers notice a voice change.
Week 3 — Weekly update synthesizer. Build the Linear + Slack + Granola aggregator. Success metric: Thursday update takes you 25 min instead of 85. Kill criteria: an engineer's blocker goes missing from the update.
Week 4 — Meeting briefs + FYI auto-archive. Add the last two. Success metric: total time reclaimed ≥4 hrs this week. Kill criteria: you feel less informed, not more.
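Each week above pairs a success metric with a kill criterion, which means the rollout can be expressed as explicit gates rather than vibes. A sketch for the first two weeks, with function names and thresholds mirroring the plan (all illustrative):

```python
def week1_gate(digest_mondays_read: int, missed_p0_count: int) -> str:
    """Week 1: read the digest instead of the inbox on 3 of 4 Mondays;
    roll back if a P0 email is missed more than once."""
    if missed_p0_count > 1:
        return "rollback"
    return "proceed" if digest_mondays_read >= 3 else "hold"

def week2_gate(avg_edit_minutes: float, customer_noticed_voice: bool) -> str:
    """Week 2: under 3 minutes of edits per recap by end of week;
    roll back on any customer noticing a voice change."""
    if customer_noticed_voice:
        return "rollback"
    return "proceed" if avg_edit_minutes < 3 else "hold"
```

Note the ordering: kill criteria are checked before success metrics, because a failed kill criterion overrides a passing metric.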
Failure Modes to Watch
- Scope creep into argumentation. In 3 out of 5 cases we've reviewed, users quietly expand the recap agent into a "draft my Slack replies" agent within 60 days. Watch for it. Add a monthly audit.
- Memory staleness. Claude Projects accumulate outdated context. Prune squad projects every 4 weeks or the agent will reference a PM who left.
- Surface drift. You'll add new tools (Loom, a new CRM). Every new surface needs a privacy-tier decision before agent access. No exceptions.
- The comfort dip at week 2. You'll feel like the assistant is "not that useful." It's because you're still checking its work. Push through; week 3-4 is where leverage compounds.
Key Takeaways
- Your assistant is three narrow aggregation agents, not a chatbot. Total cost ~$120/month (≈$86 in tools plus ~$30-50 in API usage), recovering ~18 hrs/month.
- Delegate aggregation, protect argumentation. Strategy drafting and discovery calls stay 100% human.
- Privacy is per-surface. Side-consulting data is fully isolated from work agents.
- Rollout is staged with explicit kill criteria per week — if a metric fails, you roll back, not push through.
- Re-run this design doc in 90 days with real usage data. The second pass is where the architecture gets sharp.
Common use cases
- Knowledge workers drowning in email/meetings wanting real delegation, not more tools
- Founders designing an executive assistant stack before hiring a human EA
- Consultants managing 5+ clients who need context-switching automation
- Engineers and PMs building a second brain that actually writes back
- Researchers and writers who want memory across projects, not per-chat amnesia
- Anyone burned by a no-code agent that broke in week two
- People evaluating whether to pay for ChatGPT Team, Claude Projects, or build custom
Best AI model for this
Claude Sonnet 4.5 or GPT-5. Both handle the system-design reasoning well. Avoid smaller/faster models — the value here is in the tradeoff analysis, not speed.
Pro tips
- Be honest about the 3 tasks you actually hate. The assistant design optimizes for your avoidance, not your aspirations.
- Specify your privacy tier explicitly: personal Gmail is different from a regulated work inbox. The prompt will refuse to design around the latter without naming the risk.
- List what you want to stay good at. This is what triggers the 'don't build this agent' warnings.
- Don't skip the calendar section — it's where most setups leak value. The assistant needs write access (even just proposing holds) or it's a read-only view of your schedule.
- Run this once, then run it again 90 days later with your actual usage data. The second pass is where the real architecture emerges.
- If auto-intake kicks in, answer all questions before asking for the plan. Partial answers produce generic output.
Customization tips
- Replace the tool stack with whatever ecosystem you already pay for — the architecture (Memory / Knowledge / Surfaces / Orchestration) generalizes to any vendor.
- If your company restricts LLM data access, answer 'regulated' for privacy tier. The prompt will refuse to route work data through consumer APIs and propose a local/enterprise alternative.
- For the 'protected skills' section, be uncomfortably specific. 'Writing' is too broad. 'Constructing the counterargument section of a strategy memo' produces a sharper 'do not build' warning.
- Run the Solo Founder variant if >50% of your income is self-generated — the delegation ranking shifts heavily toward revenue tasks.
- After 90 days, paste your actual usage data (hours saved, agents you stopped using, one agent you wish you had) and re-run the prompt. The second-pass design is almost always better than the first.
Variants
Solo Founder Edition
Prioritizes revenue-adjacent tasks (sales inbox, investor updates, customer calls) over personal organization.
IC Engineer/PM Edition
Focuses on code review digests, meeting-to-doc pipelines, and Slack triage instead of external email.
Privacy-Max Edition
Designs the entire stack assuming no cloud LLM sees your data — local models, on-device memory, zero retention.
Frequently asked questions
How do I use the Personal AI Assistant Setup prompt?
Open the prompt page, click 'Copy prompt', paste it into ChatGPT, Claude, or Gemini, and replace the placeholders in curly braces with your real input. The prompt is also launchable directly in each model with one click.
Which AI model works best with Personal AI Assistant Setup?
Claude Sonnet 4.5 or GPT-5. Both handle the system-design reasoning well. Avoid smaller/faster models — the value here is in the tradeoff analysis, not speed.
Can I customize the Personal AI Assistant Setup prompt for my use case?
Yes — every Promptolis Original is designed to be customized. Key levers: be honest about the 3 tasks you actually hate (the design optimizes for your avoidance, not your aspirations), and specify your privacy tier explicitly (personal Gmail is different from a regulated work inbox; the prompt will refuse to design around the latter without naming the risk).
Explore more Originals
Hand-crafted 2026-grade prompts that actually change how you work.