⚡ Promptolis Original · Data & Analytics

🗄️ SQL Query Optimizer — Find + Fix Slow Queries

A structured SQL query optimization prompt covering query plan analysis, index strategy, join optimization, the 8 common performance killers, and a systematic tuning methodology that turns 30-second queries into sub-second ones.

⏱️ 30 min per query optimization 🤖 ~2 min in Claude 🗓️ Updated 2026-04-20

Why this is epic

Most slow queries have 3-5 specific optimization opportunities: missing indexes, bad join order, unnecessary scans, subquery inefficiencies. This Original produces structured tuning: query plan analysis + optimization + validation.

Names the 8 performance killers (missing indexes, table scans, N+1 queries, suboptimal joins, SELECT *, stale statistics, bad plans, functions on indexed columns) + specific fixes.

Produces systematic optimization methodology + benchmarking approach + production-safe deployment.

The prompt

Promptolis Original · Copy-ready
<role> You are a SQL optimization specialist with 15 years of database experience (Postgres, MySQL, BigQuery, Snowflake). You've tuned 5,000+ queries + mentored database teams. You understand query planners, indexing strategies, join algorithms. You are direct. You will name when queries are obviously bad, when schemas are the real problem, and when optimization isn't worth it (rare queries acceptable as-is). </role> <principles> 1. EXPLAIN plan first. Data-driven, not guess-driven. 2. Indexes are 80% of performance wins. 3. SELECT * hurts performance. 4. N+1 queries = app anti-pattern. 5. Function on indexed column voids index. 6. Subqueries often better as JOINs. 7. Statistics must be current. 8. Test on production-like data size. </principles> <input> <query>{the slow query}</query> <schema>{relevant tables + structure}</schema> <database>{Postgres/MySQL/BigQuery/Snowflake}</database> <current-performance>{execution time, row counts}</current-performance> <indexes>{current indexes}</indexes> <query-plan>{EXPLAIN output if available}</query-plan> <frequency>{how often query runs}</frequency> <constraints>{can't add indexes / schema changes}</constraints> </input> <output-format> # Query Optimization: [Query summary] ## Query Plan Analysis What planner shows. ## Performance Killers Identified Specific issues. ## Optimization Plan Specific changes. ## Index Recommendations New + modified indexes. ## Rewritten Query Optimized version. ## Testing Plan Before production deploy. ## Expected Performance From current to target. ## Key Takeaways 5 bullets. </output-format> <auto-intake> If input incomplete: ask for query, schema, database, performance, indexes, plan, frequency, constraints. </auto-intake> Now, optimize:

Example: input → output

Here's how this prompt actually performs. Real input below, real output from Claude Opus 4.

📝 Input
<query>SELECT u.id, u.email, u.name, 
  (SELECT COUNT(*) FROM orders o WHERE o.user_id = u.id) as order_count,
  (SELECT SUM(total) FROM orders o WHERE o.user_id = u.id AND o.status = 'completed') as total_spent
FROM users u
WHERE u.created_at > '2024-01-01'
  AND u.status = 'active'
ORDER BY u.created_at DESC
LIMIT 100;</query>
<schema>users: id (PK), email, name, created_at, status. 2.5M rows. orders: id (PK), user_id (FK), total, status, created_at. 12M rows.</schema>
<database>PostgreSQL 15.</database>
<current-performance>8.2 seconds execution time. Sequential scans on orders table.</current-performance>
<indexes>users: idx_users_created_at, idx_users_status. orders: idx_orders_user_id, idx_orders_status (separate).</indexes>
<query-plan>EXPLAIN shows: Seq Scan on orders for each users row — correlated subqueries triggering N+1 pattern inside single query.</query-plan>
<frequency>Dashboard query, runs 500 times/hour.</frequency>
<constraints>Can add indexes. Prefer not to change application code if possible.</constraints>
🤖 Output

Query Optimization: User List with Order Counts + Spending, Postgres 15, 8.2s → target <100ms

Your query has 2 major issues (correlated subqueries + missing composite indexes) producing the 8.2s execution time. This plan rewrites the query with a LEFT JOIN + GROUP BY and adds 2 composite indexes, reducing it to ~80-150ms, with production-safe deployment and testing.

Query Plan Analysis

What planner shows:

  • Sequential scan on orders for EACH row in users (correlated subquery pattern)
  • For 100 users × 2 subqueries = 200 orders table scans
  • Orders has 12M rows → 200 full scans = significant I/O
  • Total cost dominated by subquery execution

Index usage:

  • users: idx_users_created_at used for initial filtering
  • orders: idx_orders_user_id NOT being used for subqueries (optimizer choosing scan for small-estimated result sets)

Performance Killers Identified

1. Correlated Subqueries (Primary Issue)

Pattern: each outer row triggers subquery execution.

Impact: 100 users × 2 subqueries = 200 queries against orders table internally.

Fix: rewrite as JOINs with aggregation.

2. Missing Composite Index

Current: idx_orders_user_id + separate idx_orders_status

Problem: for the 'total spent on completed orders' aggregate, the optimizer can't efficiently combine the 2 separate indexes.

Fix: composite index on (user_id, status) with INCLUDE(total).

3. Missing Users Composite

Current: separate indexes on created_at + status

For query filter: WHERE status='active' AND created_at > date

Fix: composite index (status, created_at) with INCLUDE (id, email, name).

Optimization Plan

Priority 1: Rewrite query (biggest impact)

Priority 2: Add composite indexes (support rewrite)

Priority 3: ANALYZE tables (statistics freshness)
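Priority 3 is a one-liner per table in Postgres; a sketch of the statistics refresh using the table names from the example schema:

```sql
-- Refresh planner statistics so the optimizer sees current row counts
ANALYZE users;
ANALYZE orders;
```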

Index Recommendations

New Index 1: Users Composite
CREATE INDEX CONCURRENTLY idx_users_status_created_at
ON users (status, created_at)
INCLUDE (id, email, name);

Why: supports WHERE filter + covers SELECT columns = index-only scan possible.

Estimated size: ~200 MB for 2.5M rows.

New Index 2: Orders Composite
CREATE INDEX CONCURRENTLY idx_orders_user_status_total
ON orders (user_id, status)
INCLUDE (total);

Why: supports JOIN + aggregation + SUM calculation from index alone.

Estimated size: ~800 MB for 12M rows.

Drop Existing (Optional):
-- idx_orders_status can be dropped (rarely used standalone)
-- Verify first via pg_stat_user_indexes usage
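One way to check standalone usage before dropping anything; a sketch against the standard Postgres statistics view:

```sql
-- idx_scan counts how often each index has been used since stats were reset;
-- an index stuck near zero is a drop candidate
SELECT indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE relname = 'orders'
ORDER BY idx_scan;
```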

Rewritten Query

SELECT 
  u.id, 
  u.email, 
  u.name,
  COALESCE(o.order_count, 0) as order_count,
  COALESCE(o.total_spent, 0) as total_spent
FROM users u
LEFT JOIN (
  SELECT 
    user_id,
    COUNT(*) as order_count,
    SUM(CASE WHEN status = 'completed' THEN total ELSE 0 END) as total_spent
  FROM orders
  GROUP BY user_id
) o ON o.user_id = u.id
WHERE u.created_at > '2024-01-01'
  AND u.status = 'active'
ORDER BY u.created_at DESC
LIMIT 100;

Changes:

  • Correlated subqueries → single LEFT JOIN with aggregated subquery
  • Single scan of orders with GROUP BY (uses index)
  • CASE statement for conditional sum (faster than second subquery)
  • COALESCE handles users with no orders
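Postgres also supports the SQL-standard FILTER clause for conditional aggregates, a drop-in alternative to the CASE expression in the aggregated subquery:

```sql
-- Equivalent conditional aggregate using FILTER (PostgreSQL 9.4+)
SELECT user_id,
  COUNT(*) as order_count,
  SUM(total) FILTER (WHERE status = 'completed') as total_spent
FROM orders
GROUP BY user_id;
```

One difference: FILTER yields NULL for users with no completed orders, where the CASE variant yields 0; the outer COALESCE handles either form.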

Alternative (even better if orders table is huge):

-- Only aggregate orders for the 100 matching users (pre-filter via CTE)
WITH recent_users AS (
  SELECT id, email, name, created_at
  FROM users
  WHERE status = 'active'
    AND created_at > '2024-01-01'
  ORDER BY created_at DESC
  LIMIT 100
)
SELECT 
  u.id, u.email, u.name,
  COALESCE(o.order_count, 0) as order_count,
  COALESCE(o.total_spent, 0) as total_spent
FROM recent_users u
LEFT JOIN (
  SELECT 
    user_id,
    COUNT(*) as order_count,
    SUM(CASE WHEN status = 'completed' THEN total ELSE 0 END) as total_spent
  FROM orders
  WHERE user_id IN (SELECT id FROM recent_users)
  GROUP BY user_id
) o ON o.user_id = u.id
ORDER BY u.created_at DESC;

Pre-filters orders to only include relevant user_ids = much faster for 12M row orders table.

Testing Plan

Phase 1: Development Testing

1. Take production backup to staging

2. Add new indexes (CONCURRENTLY to avoid locking)

3. Run EXPLAIN ANALYZE on optimized query

4. Compare execution plans

5. Measure execution time at scale
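Step 3 in practice, applied to the rewritten query (ANALYZE executes the query for real, so run it on staging first; BUFFERS surfaces the I/O per plan node):

```sql
-- Look for Index Only Scan nodes and the absence of Seq Scan on orders
EXPLAIN (ANALYZE, BUFFERS)
SELECT u.id, u.email, u.name,
  COALESCE(o.order_count, 0) as order_count,
  COALESCE(o.total_spent, 0) as total_spent
FROM users u
LEFT JOIN (
  SELECT user_id,
    COUNT(*) as order_count,
    SUM(CASE WHEN status = 'completed' THEN total ELSE 0 END) as total_spent
  FROM orders
  GROUP BY user_id
) o ON o.user_id = u.id
WHERE u.created_at > '2024-01-01'
  AND u.status = 'active'
ORDER BY u.created_at DESC
LIMIT 100;
```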

Phase 2: Validation

1. Verify query results match original

2. Run 100 times to measure average + percentile performance

3. Check index usage via `pg_stat_user_indexes`
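Step 1 of validation can be mechanized: if both versions are correct, the symmetric difference of their result sets is empty. A sketch, with angle-bracket placeholders for the two query texts:

```sql
-- Should return zero rows if old and new queries agree
(SELECT * FROM (<original query>) old
 EXCEPT
 SELECT * FROM (<rewritten query>) new)
UNION ALL
(SELECT * FROM (<rewritten query>) new
 EXCEPT
 SELECT * FROM (<original query>) old);
```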

Phase 3: Production Deploy

1. Add indexes during low-traffic window (CONCURRENTLY still needs resources)

2. Deploy application query change

3. Monitor query performance

4. Monitor overall database load

5. Rollback plan: revert app, drop indexes if needed
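The index half of the rollback is cheap, since the new indexes are purely additive (names taken from the recommendations above):

```sql
-- Non-blocking index removal if the change needs to be reverted
DROP INDEX CONCURRENTLY IF EXISTS idx_users_status_created_at;
DROP INDEX CONCURRENTLY IF EXISTS idx_orders_user_status_total;
```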

Monitoring after deploy:

  • P50/P95/P99 query execution times
  • Overall database CPU/IO
  • Index usage statistics
  • Application error rates

Expected Performance

Current: 8.2 seconds.

After optimization:

  • With composite indexes: ~200-400ms
  • With query rewrite: ~80-150ms
  • Combined with alternative pre-filter version: ~50-100ms

Improvement: up to ~100x faster.

Production impact:

  • 500 executions/hour × 8 seconds saved = 1 hour of cumulative query time saved per hour
  • Database CPU reduction: 15-25%
  • User experience: dashboard loads noticeably faster

Key Takeaways

  • Root cause: correlated subqueries (N+1 within single query). Rewrite as LEFT JOIN + aggregation = ~100x speedup.
  • Add 2 composite indexes: users(status, created_at) INCLUDE (id, email, name) + orders(user_id, status) INCLUDE (total). Enables index-only scans.
  • Alternative pre-filter version (subquery filtering orders by user_ids in filter) is optimal for huge orders table. ~50-100ms vs. 200-400ms base optimization.
  • Deploy CONCURRENTLY to avoid table locks during index creation. Takes longer to build but production-safe.
  • 100x speedup: 8.2 seconds → 80-150ms. At 500 executions/hour, significant database load reduction + dashboard user experience improvement.

Common use cases

  • Developers tuning application queries
  • Data engineers optimizing pipelines
  • DBAs addressing performance issues
  • Analysts speeding up reports
  • Scaling applications with database bottlenecks

Best AI model for this

Claude Opus 4 or Sonnet 4.5. SQL optimization requires database internals + query planning. Top-tier reasoning matters.

Pro tips

  • EXPLAIN plan first. Don't guess; let the query planner show you.
  • Indexes are 80% of performance wins. Missing index = sequential scan pain.
  • SELECT * hurts: unnecessary data transfer + prevents index-only scans.
  • N+1 queries kill apps. Use joins + prefetching.
  • Function on indexed column voids the index.
  • Subqueries often better as JOINs in modern optimizers.
  • Statistics outdated = bad plans. ANALYZE regularly.
  • Test on production-like data size. Dev data may not reveal problems.
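The function-on-indexed-column pitfall in miniature, using hypothetical filters against the example users table:

```sql
-- BAD: wrapping the indexed column in a function voids idx_users_created_at
SELECT * FROM users WHERE DATE(created_at) = '2024-06-01';

-- GOOD: sargable range predicate on the bare column, index usable
SELECT * FROM users
WHERE created_at >= '2024-06-01' AND created_at < '2024-06-02';

-- If the function call is unavoidable, index the expression itself
-- (the function must be IMMUTABLE, e.g. LOWER on text)
CREATE INDEX idx_users_email_lower ON users (LOWER(email));
SELECT * FROM users WHERE LOWER(email) = 'a@example.com';
```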

Customization tips

  • EXPLAIN ANALYZE actually runs the query (not just plans it), so it takes the full execution time. Use it carefully on large tables.
  • pg_stat_statements module in Postgres: tracks slow queries automatically. Review top-10 slow queries weekly.
  • For BigQuery/Snowflake: different optimization patterns. Data warehouse optimizers differ from OLTP.
  • After adding indexes, run ANALYZE on the table. Statistics must match new indexes for optimizer to use them.
  • Index write-cost vs. read-cost: each index slows INSERT/UPDATE. Balance: add for read-heavy queries, skip for write-heavy tables.
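A starting point for the weekly slow-query review, assuming the pg_stat_statements extension is loaded:

```sql
-- Top 10 queries by total time spent (column names as of PostgreSQL 13+)
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```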

Variants

Application Query Tuning

For ORMs + application queries.

Analytics Query Tuning

For reports + dashboards.

Data Pipeline Optimization

For ETL + batch processing.

Schema Redesign

For structural optimization.

Frequently asked questions

How do I use the SQL Query Optimizer — Find + Fix Slow Queries prompt?

Open the prompt page, click 'Copy prompt', paste it into ChatGPT, Claude, or Gemini, and replace the placeholders in curly braces with your real input. The prompt is also launchable directly in each model with one click.

Which AI model works best with SQL Query Optimizer — Find + Fix Slow Queries?

Claude Opus 4 or Sonnet 4.5. SQL optimization requires database internals + query planning. Top-tier reasoning matters.

Can I customize the SQL Query Optimizer — Find + Fix Slow Queries prompt for my use case?

Yes. Every Promptolis Original is designed to be customized. Key levers: run the EXPLAIN plan first instead of guessing, and remember that indexes are 80% of performance wins (a missing index means sequential scan pain).

Explore more Originals

Hand-crafted 2026-grade prompts that actually change how you work.

← All Promptolis Originals