⚡ Promptolis Original · Coding & Development
🔌 MCP Server Prompt Engineering (Claude / Cursor / Custom)
Model Context Protocol — Anthropic's open standard (Nov 2024) for AI-tool integration. Pre-built server configs (Linear, Postgres, GitHub, Notion, Slack) + Cursor/Claude Desktop setup + custom server starter. 2026 early-mover.
Setup: 5 min · Best AI: Claude Opus 4.6 — protocol understanding + multi-platform reasoning. · Cost: Free, MIT-licensed.
Why this is epic
MCP is an open standard, NOT ChatGPT plugins, NOT generic function-calling. Most users conflate the three; this prompt clarifies the distinction and gets you set up.
Pre-built server config recipes for common tools (Linear, Postgres, GitHub, Notion, Slack, Stripe). Working JSON.
Security framing: read-only DB users + scoped GitHub PATs + secrets management. Defense in depth, not prompt-only.
📑 Page navigation + Key Takeaways
📌 Key Takeaways
- What it is: Model Context Protocol — Anthropic's open standard (Nov 2024) for AI-tool integration. Pre-built server configs (Linear, Postgres, GitHub, Notion, Slack) + Cursor/Claude Desktop setup + custom server starter. 2026 early-mover.
- Best for: Claude Desktop MCP setup
- Time investment: 5 min setup, 30-45 min full setup + prompt patterns output
- Recommended AI model: Claude Opus 4.6 — protocol understanding + multi-platform reasoning.
- Cost: Free forever — MIT-licensed, no signup, no paywall
📑 On this page
- The prompt (copy-ready)
- How to use it (4 steps)
- Example input + output
- Common use cases
- Pro tips + variants
- FAQ
⚙️ At a glance
- Category:
- Coding & Development
- Setup time:
- 5 min
- Output time:
- 30-45 min full setup + prompt patterns
- Best AI model:
- Claude Opus 4.6 — protocol understanding + multi-platform reasoning.
- License:
- MIT (free commercial use)
- Last reviewed:
📊 Promptolis Original vs generic AI prompts
| Feature | Promptolis | Generic prompts |
|---|---|---|
| Structure | XML + chain-of-thought | Role-play one-liner |
| Example output | Real full example | Rare |
| Variants | 3-7 per prompt | Single |
| Output quality | +30-50% accurate [Anthropic] | Baseline |
On the other hand, generic prompts work fine for simple lookups. Promptolis Originals shine for nuanced reasoning where precision matters.
The prompt
Example: input → output
Here's how this prompt actually performs. Real input below, real output from Claude Opus 4.
<use-case>I want Claude in my Cursor IDE to be able to: (1) search my Linear issues by keyword + status, (2) read my Postgres database schema and run read-only queries, (3) check my GitHub PR statuses across our team's repos. I'm a backend engineer, comfortable with TypeScript + Python.</use-case> <existing-mcp-knowledge>aware-of-it, haven't set up any servers yet</existing-mcp-knowledge> <tooling>Cursor IDE</tooling> <server-target>pre-built where possible, custom only if necessary</server-target> <technical-stack>TypeScript primary, Python secondary</technical-stack> <key-question>How do I set up these 3 integrations in Cursor? Which need custom servers vs pre-built? And what prompt patterns work best when Claude has access to all three?</key-question>
What MCP Actually Is
MCP (Model Context Protocol) is Anthropic's open standard (released November 2024) for connecting AI assistants to external tools, data sources, and services. The key differentiation:
- NOT ChatGPT plugins: OpenAI's plugin system was OpenAI-specific and has been deprecated. MCP is an open spec.
- NOT generic function-calling: Function-calling is at the API level. MCP standardizes the SERVER side — how tools expose capabilities to ANY MCP-aware client.
- NOT magic: MCP is plumbing. It standardizes the connection so any MCP server works with any MCP client. The prompt-engineering work is the same as before.
Three capability types in MCP:
1. Tools: actions Claude can execute (run a query, create a ticket, send a message)
2. Resources: data Claude can read (file contents, API responses, structured data)
3. Prompts: templated instructions the server provides (parameterized prompt templates)
For your use case, you'll mostly use Tools + Resources.
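To make those three types concrete, here are minimal, illustrative examples of the shapes a server advertises for each. Field names follow the MCP spec's tool/resource/prompt listings, but treat this as a sketch and check the current spec before relying on it:

```typescript
// Illustrative shapes only; consult the MCP spec for authoritative fields.

// 1. Tool: an action Claude can execute, with a JSON Schema for its arguments.
const tool = {
  name: "search_issues",
  description: "Search Linear issues by keyword and status",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string" },
      status: { type: "string" },
    },
    required: ["query"],
  },
};

// 2. Resource: data Claude can read, addressed by URI.
const resource = {
  uri: "postgres://mydb/schema/users",
  name: "users table schema",
  mimeType: "application/json",
};

// 3. Prompt: a parameterized template the server provides.
const prompt = {
  name: "triage_issue",
  description: "Summarize and triage a Linear issue",
  arguments: [{ name: "issue_id", description: "Linear issue ID", required: true }],
};

console.log(tool.name, resource.uri, prompt.name);
```

The names (`search_issues`, `triage_issue`, the `postgres://` URI) are hypothetical placeholders, not what the real Linear or Postgres servers expose.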
Your Use Case Mapped
You want three integrations:
1. Linear search: Tool (search issues) + Resource (read issue content). PRE-BUILT MCP SERVER EXISTS.
2. Postgres read-only queries: Tool (execute query) + Resource (schema introspection). PRE-BUILT MCP SERVER EXISTS.
3. GitHub PR statuses: Tool (search PRs, check status). PRE-BUILT MCP SERVER EXISTS.
Good news: all three have pre-built servers. No custom server needed.
Pre-Built Server Recommendations
For Linear:
- `@modelcontextprotocol/server-linear` (community, well-maintained)
- Auth: Linear API key
For Postgres:
- `@modelcontextprotocol/server-postgres` (official Anthropic example)
- Auth: Postgres connection string
- READ-ONLY by default — exactly what you want for safety
For GitHub:
- `@modelcontextprotocol/server-github` (official Anthropic example)
- Auth: GitHub Personal Access Token (PAT)
- Scope it to read-only for safety
All three are TypeScript-based, run via npx. No custom code needed.
Configuration for Cursor IDE
Cursor's MCP config is at ~/.cursor/mcp.json (or per-project .cursor/mcp.json). For your three integrations:
{
"mcpServers": {
"linear": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-linear"],
"env": {
"LINEAR_API_KEY": "lin_api_xxxxx"
}
},
"postgres": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-postgres",
"postgresql://readonly_user:password@localhost:5432/mydb"
]
},
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxxxx"
}
}
}
}
Key notes:
- Use a READ-ONLY Postgres user (create one specifically for this if you don't have one)
- GitHub PAT scope: `repo:status`, `public_repo` minimum; `repo` if you have private repos. Don't grant write scopes unless needed.
- Linear API key has full account scope by default — if your team allows, use a dedicated bot-user account, not your personal one.
After saving config: restart Cursor. The integrations will appear in the MCP indicator.
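Before restarting, it can save a debugging round-trip to sanity-check that the config parses and has the expected shape. A minimal sketch (the validation rules here are my own, not part of MCP; in practice read the file from `~/.cursor/mcp.json` rather than using an inline string):

```typescript
// Inline copy of a config for illustration; in practice, read ~/.cursor/mcp.json.
const raw = `{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://readonly_user:password@localhost:5432/mydb"]
    }
  }
}`;

// Returns a list of problems; an empty list means the basic shape is fine.
function validateMcpConfig(json: string): string[] {
  const errors: string[] = [];
  let config: any;
  try {
    config = JSON.parse(json);
  } catch (e) {
    return [`invalid JSON: ${(e as Error).message}`];
  }
  if (typeof config.mcpServers !== "object" || config.mcpServers === null) {
    return ["missing top-level mcpServers object"];
  }
  for (const [name, server] of Object.entries<any>(config.mcpServers)) {
    if (typeof server.command !== "string") errors.push(`${name}: command must be a string`);
    if (!Array.isArray(server.args)) errors.push(`${name}: args must be an array`);
  }
  return errors;
}

console.log(validateMcpConfig(raw)); // []
```

A malformed entry (missing `command` or `args`) shows up as a clear error message instead of a silent failure inside Cursor.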
Prompt-Engineering for This Setup
With all three connected, Claude in Cursor can be invoked with prompts like:
Pattern 1 — Single-tool query:
'Find all my Linear issues in 'In Progress' status assigned to me. List issue IDs + titles + age.'
Claude knows to use the Linear tool. Output is structured.
Pattern 2 — Multi-tool workflow:
'For each open PR I have on GitHub, find the corresponding Linear issue (search by PR title or branch name) and tell me which ones have stale 'In Progress' Linear status (>3 days since last update).'
Claude orchestrates: GitHub tool to list PRs → Linear tool to search per PR → reasoning to identify stale ones.
Pattern 3 — Investigate + propose:
'Query the users table — show me the schema first, then count users created in last 7 days. Then in Linear, find any issues mentioning 'user growth' and tell me if my query results align with what's claimed in those issues.'
Claude uses Postgres tool for schema + query, Linear tool for issue search, reasoning to compare.
Best practices for MCP-aware prompts:
1. Be explicit about which tool you want used. Claude can guess but is more reliable when told. 'Use the Postgres tool to...' beats ambiguous instruction.
2. Specify output format. 'Return as a markdown table with columns X, Y, Z' produces consistently better results than open-ended.
3. Instruct on failure handling. 'If the Linear query returns no results, say so explicitly rather than inferring.'
4. For multi-step workflows, break into explicit steps. 'First, do X. Then do Y. Then summarize.' beats 'Do X, Y, and Z' which can lead to skipped steps.
5. For Postgres specifically: specify SAFE queries. 'Read-only SELECT only, no UPDATE/DELETE/DROP, limit to 100 rows.' Defense in depth — your DB user should already be read-only, but tell Claude too.
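On point 5, defense in depth can also include a crude application-level guard in front of any query tool you build yourself. This is illustrative only (a keyword check is easy to fool, and the real enforcement is the read-only DB role):

```typescript
// Crude, illustrative guard; NOT a substitute for a read-only DB role.
// A keyword blocklist can be fooled (e.g. a column literally named "update_count"),
// so treat this as one extra layer, never the safety boundary.
function isReadOnlySelect(sql: string): boolean {
  const stmt = sql.trim().replace(/;\s*$/, "");
  if (stmt.includes(";")) return false;               // reject multi-statement input
  if (!/^select\b/i.test(stmt)) return false;         // must start with SELECT
  return !/\b(insert|update|delete|drop|alter|truncate|grant|create)\b/i.test(stmt);
}

console.log(isReadOnlySelect("SELECT id, email FROM users LIMIT 100")); // true
console.log(isReadOnlySelect("DELETE FROM users"));                     // false
console.log(isReadOnlySelect("SELECT 1; DROP TABLE users"));            // false
```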
Custom Server Approach
Not needed for your use case (all three integrations have pre-built servers). But for when you DO need custom:
TypeScript SDK starter: @modelcontextprotocol/sdk package. Anthropic's official docs walk through a basic server. Key abstractions:
- `Server` class for the server itself
- `setRequestHandler` for routing tool/resource/prompt requests
- Schema definitions for tools (using Zod or JSON Schema)
- `StdioServerTransport` for local development; `HttpServerTransport` for hosted
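Put together, a minimal custom server looks roughly like the sketch below. This is an illustrative outline based on the abstractions above; exact import paths and signatures vary across SDK versions, so check the official SDK docs before copying:

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { ListToolsRequestSchema, CallToolRequestSchema } from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "example-server", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

// Advertise one tool with a strict JSON Schema (sloppy schemas → bad tool-calling).
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "ping",
      description: "Health check",
      inputSchema: { type: "object", properties: {}, required: [] },
    },
  ],
}));

// Route tool calls; return MCP-shaped errors rather than throwing raw exceptions.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "ping") {
    return { content: [{ type: "text", text: "pong" }] };
  }
  return {
    content: [{ type: "text", text: `Unknown tool: ${request.params.name}` }],
    isError: true,
  };
});

await server.connect(new StdioServerTransport());
```

The server name, version, and `ping` tool are placeholders; the point is the three-part shape (declare capabilities, list tools, handle calls) that every custom server follows.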
Common gotchas:
- Tool input schemas must be valid JSON Schema or Zod schemas. Sloppy schemas → bad tool-calling.
- Resources should support pagination if data is large.
- Errors must be returned in MCP error format, not raw exceptions.
- For long-running tools: implement progress reporting via the Server's notification API.
Common Failure Modes + Mitigations
Server crashes / hangs:
- Cursor will show stale state or timeout errors
- Mitigation: restart Cursor; check server logs in `~/Library/Logs/Cursor/` (Mac) or equivalent
- Pre-built servers via npx are generally stable; custom servers are where most crashes happen
Auth failures:
- 'Tool returned error: Unauthorized' or similar
- Mitigation: verify API keys / tokens are current. Linear API keys can expire. GitHub PATs expire if you set an expiration date or your org enforces one.
Rate limits:
- GitHub: 5000 req/hour per token
- Linear: 1500 req/hour per API key
- Postgres: depends on your DB config
- Mitigation: Don't have Claude run query loops without limits. 'Search top 10 issues' beats 'fetch all issues'.
Schema-mismatch:
- Tool expected X, Claude sent Y
- Mitigation: pre-built servers usually handle this; for custom servers, fail loud with clear error messages.
Security + Permissions Considerations
For your stack:
1. Postgres user permissions: Create a dedicated `readonly_claude` user with SELECT-only permissions. Revoke INSERT/UPDATE/DELETE/DROP at the role level. Don't rely on prompt-engineering for safety.
2. GitHub PAT scoping: `public_repo + repo:status` if you only need PR status. Add `repo` only if you need private repo access. Avoid `admin:org`, `delete_repo`, etc.
3. Linear API key: scope to a dedicated Linear user/account if possible. Use 'Workspace API key' rather than personal API key when team-shared.
4. Secrets management: Don't commit `mcp.json` to your repo. Add to .gitignore. Consider environment variables or a secrets manager (1Password CLI, etc.) for team setups.
5. Audit log review: Linear, GitHub, Postgres all have audit logs. Periodically review what your MCP tokens have done.
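Point 1 in practice: a sketch of the read-only role setup. Names, password, database, and schema are placeholders throughout; adapt to your environment:

```sql
-- Placeholder names throughout; adapt to your database.
CREATE ROLE readonly_claude WITH LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE mydb TO readonly_claude;
GRANT USAGE ON SCHEMA public TO readonly_claude;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_claude;
-- Cover tables created later, too:
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly_claude;
```

Because the role never receives INSERT/UPDATE/DELETE, a prompt-injected "DROP TABLE" simply fails at the database, regardless of what Claude is told.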
On Your Key Question
Direct:
1. All three integrations are pre-built. No custom servers needed. Use the npx-based configs in the JSON above.
2. Cursor setup: drop the JSON into `~/.cursor/mcp.json`, restart Cursor. Each integration shows up after restart.
3. Best prompt patterns: be explicit about which tool to use, specify output format, instruct on failure handling. Multi-tool workflows benefit from explicit step-breakdown.
4. Safety: read-only Postgres user is the most important security move. Don't trust prompt-engineering to enforce 'no DELETE.' Enforce it at the DB role level.
5. Start with one integration, prove value, then add the others. Setting all three at once + getting them all wrong is harder to debug than incremental.
Specific suggested order: Postgres first (highest leverage for backend engineer + most error-prone), Linear second, GitHub last.
What MCP Won't Solve
MCP is plumbing. It doesn't fix:
- Bad prompt-engineering (vague instructions, no output format)
- Bad data architecture (your Postgres queries are only as good as your schema)
- Bad tool selection (Claude using wrong tool for the job)
- Bad permissions (over-scoped tokens are the same risk as before)
MCP standardizes the CONNECTION. The craft of using AI well still requires the same prompt-engineering, security, and architectural thinking as before. Don't expect MCP to make Claude smarter; expect it to make Claude's TOOLS more standardized.
📋 How to use this prompt (4 steps · under 60 seconds)
1. Copy the prompt above. Click "Copy prompt". The XML-structured prompt is now on your clipboard.
2. Open ChatGPT, Claude, or Gemini. One-click launch above. Recommended: Claude Opus 4.6 — protocol understanding + multi-platform reasoning.
3. Paste + fill placeholders. Replace `{curly braces}` with your context. Specificity = quality.
4. Run + iterate. Setup: 5 min. Output: 30-45 min full setup + prompt patterns.
Common use cases
- Claude Desktop MCP setup
- Cursor IDE MCP integration
- Building custom MCP server (TypeScript / Python)
- Connecting AI to internal APIs
- Multi-tool agent workflows
- B2B AI integration architecture
- Publishing public MCP server
Best AI model for this
Claude Opus 4.6 — protocol understanding + multi-platform reasoning.
Pro tips
- MCP ≠ plugins, ≠ function-calling — it's a PROTOCOL
- Server roles: tools / resources / prompts
- Pre-built before custom — most workflows have existing servers
- Read-only DB users at role level, not prompt level
- Scope GitHub PATs to minimum needed
- Don't commit mcp.json to your repo (it contains secrets)
- MCP is plumbing — bad prompts still produce bad results
Customization tips
- For users new to MCP entirely: start with the conceptual explanation. Most users confuse MCP with ChatGPT plugins or function-calling.
- For users building custom MCP servers: TypeScript SDK is most mature. Python SDK is good. Rust/Go community efforts exist but less mature.
- For team / enterprise use: discuss authentication patterns (OAuth flows for shared tokens), audit log requirements, secrets management.
- For Claude Desktop users (vs Cursor / Windsurf / other): config path is `~/Library/Application Support/Claude/claude_desktop_config.json` (Mac).
- For Windsurf users: Windsurf supports MCP as of 2025. Config is similar to Cursor.
- For users connecting to internal/private APIs: custom MCP server is required. Can wrap existing REST/GraphQL APIs.
- For users wanting to expose data TO Claude users (publishing an MCP server): different concern set — versioning, backward compatibility, public auth flows.
- Premium pack content: 30+ pre-built server config recipes, custom server starter templates (TypeScript + Python), enterprise auth patterns, MCP server publishing guide.
Variants
- Claude Desktop Setup — pre-built servers + config
- Cursor IDE Integration — multi-tool workflow
- Custom MCP Server (TypeScript) — SDK starter + gotchas
- Custom MCP Server (Python) — Python SDK alternative
- Database MCP (Postgres/MySQL) — read-only safety patterns
- Enterprise / Multi-User Setup — OAuth + audit + secrets
- Publishing Public MCP Server — versioning + auth + docs
Frequently asked questions
Common questions about this prompt and how to get the best results from it.
How do I use the MCP Server Prompt Engineering (Claude / Cursor / Custom) prompt?
Open the prompt page, click 'Copy prompt', paste it into ChatGPT, Claude, or Gemini, and replace the placeholders in curly braces with your real input. The prompt is also launchable directly in each model with one click.
Which AI model works best with MCP Server Prompt Engineering (Claude / Cursor / Custom)?
Claude Opus 4.6 — protocol understanding + multi-platform reasoning.
Can I customize the MCP Server Prompt Engineering (Claude / Cursor / Custom) prompt for my use case?
Yes — every Promptolis Original is designed to be customized. Key levers: MCP ≠ plugins, ≠ function-calling — it's a PROTOCOL; Server roles: tools / resources / prompts
What does it cost to use this prompt?
The prompt itself is free, MIT-licensed, with no email signup required. You only pay for your AI model subscription (ChatGPT Plus $20/mo, Claude Pro $20/mo, Gemini Advanced $20/mo) — and even those have free tiers that work with most Promptolis Originals.
How is this different from PromptBase or PromptHero?
PromptBase sells prompts in a marketplace ($2-15 each). PromptHero focuses on image-generation prompts. Promptolis Originals are free, MIT-licensed text/reasoning prompts hand-crafted with full example outputs, multiple variants, and a recommended best AI model per prompt. We don't sell anything.