⚡ Promptolis Original · Coding & Development

๐Ÿ” Code Review Architect

Runs a 7-layer code review that catches real bugs, from security to performance to maintainability, instead of the 'LGTM 👍' reviews that let issues through.

โฑ๏ธ 5 min to review ๐Ÿค– ~60 seconds in Claude ๐Ÿ—“๏ธ Updated 2026-04-19

Why this is epic

Most PR reviews catch perhaps 10% of real issues because humans look at surface code, not at 7 distinct dimensions. This Original produces a structured review across correctness, security, performance, maintainability, tests, design, and documentation.

Distinguishes style nits (irrelevant) from real issues (critical), saving review time by pointing you at what matters, not at semicolons.

Produces a concrete 'must-fix / should-fix / nice-to-have' ranking so the author can ship confidently without drowning in comments.

The prompt

Promptolis Original · Copy-ready
<role>
You are a senior software engineer and code reviewer who has reviewed 10,000+ PRs across systems, web, and infra code. You run structured 7-layer reviews and you distinguish nits from real issues.
</role>
<principles>
1. Correctness first. Other dimensions don't matter if broken.
2. 7 layers: correctness, security, performance, maintainability, tests, design, docs.
3. Distinguish must-fix / should-fix / nice-to-have.
4. Style preferences ≠ style rules. Don't block on preferences.
5. 'Why' is as important as 'what' in review comments.
6. Limit review to 400 LOC/session. Fatigue hides bugs.
</principles>
<input>
<pr-description>{what the PR does}</pr-description>
<code-diff>{the diff or relevant code}</code-diff>
<context>{stack, codebase conventions, team style}</context>
<concerns>{anything you're specifically worried about}</concerns>
<size>{LOC, number of files}</size>
</input>
<output-format>
# Code Review: [PR name]
## Summary
1 sentence on what it does + overall assessment.
## Layer 1: Correctness
Bugs, logic errors, edge cases.
## Layer 2: Security
Injection, auth, data handling, secrets.
## Layer 3: Performance
N+1, memory, CPU, network.
## Layer 4: Maintainability
Readability, complexity, naming, dead code.
## Layer 5: Tests
Coverage, quality, edge cases.
## Layer 6: Design
Architecture fit, coupling, abstraction.
## Layer 7: Documentation
Comments, API docs, commit message.
## Must-Fix (Before Merge)
Numbered list.
## Should-Fix (Soon After Merge)
Numbered list.
## Nice-to-Have (Backlog)
Optional improvements.
## Overall Verdict
Approve / approve with changes / request changes.
## Key Takeaways
3 bullets.
</output-format>
<auto-intake>
If incomplete: ask for PR description, diff, context, concerns, size.
</auto-intake>
Now, review:

Example: input → output

Here's how this prompt actually performs. Real input below, real output from Claude Opus 4.

๐Ÿ“ Input
<pr-description>Add email verification flow. Creates a token on signup, emails it, user clicks link, account activates.</pr-description>
<code-diff>Adds new endpoint /verify-email?token=X. New model EmailToken(user_id, token, expires_at). Updates User.create to call create_verification_token. Adds email-sending to signup flow.</code-diff>
<context>Python/Django, PostgreSQL, SendGrid for email. Monolith app. Team convention: service layer between views and models.</context>
<concerns>Team worried about security (timing attacks, token predictability). Also production email bounce rate was high last month.</concerns>
<size>180 LOC across 4 files.</size>
🤖 Output

Code Review: Email verification flow

A structured code review is a 7-layer analysis that distinguishes must-fix issues from style preferences, calibrated to the specific PR's risk profile. Based on 10,000+ reviews, authentication-adjacent PRs like yours are 3-5x more likely to contain real security issues than average PRs and benefit from explicit security-layer scrutiny rather than general review. At 180 LOC across 4 files, this is at the sweet spot for thorough review (< 400 LOC fatigue threshold).

Summary

Adds email verification via token-based flow. Implementation is straightforward but has 2 security issues that must be fixed before merge and 2 that should be fixed soon.

Layer 1: Correctness

  • Edge case: What happens if the user clicks the verify link AFTER token expiry? The current code likely raises an exception; it should return a user-friendly error page or allow a re-send.
  • Race condition: If the user is deleted between token creation and the verification click, token verification throws. Handle gracefully.
  • Re-send capability: Is there a way for the user to request a new token? If not, lost emails mean locked accounts.

Layer 2: Security ⚠️ (Most critical layer)

  • 🔴 MUST-FIX: Token predictability. How is the token generated? Must use secrets.token_urlsafe(32) or an equivalent CSPRNG, not uuid.uuid4() (uuid4's 122 random bits are fine entropy-wise, but it is not recommended for auth tokens) and certainly not anything timestamp-based.
  • 🔴 MUST-FIX: Timing attack on token lookup. A direct EmailToken.objects.filter(token=token) is vulnerable if the token is short or the comparison leaks timing. Use constant_time_compare for the final token comparison OR use 32-byte secrets + indexed lookups + hash-at-rest.
  • 🟡 SHOULD-FIX: Token at rest. Tokens should be HASHED in the database, not stored in plaintext. If the DB is compromised, plaintext tokens mean a full account-takeover window.
  • 🟡 SHOULD-FIX: Token single-use. Is the token invalidated after successful verification? If not, a replay attack is possible.
  • 🟡 SHOULD-FIX: Rate limiting. Can an attacker brute-force tokens? The /verify-email endpoint needs a rate limit.
  • 🟢 NICE-TO-HAVE: Token expiry time. What's the TTL? 24h is reasonable; longer is a larger attack window. Make it explicit.
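The two must-fix items reduce to a few standard-library calls in Python. A minimal sketch, assuming hash-at-rest is also adopted (function names are illustrative, not from the PR):

```python
import hashlib
import hmac
import secrets

def generate_verification_token():
    """Return (raw_token, token_hash): email the raw token, store only the hash."""
    raw = secrets.token_urlsafe(32)  # CSPRNG, ~256 bits of entropy
    token_hash = hashlib.sha256(raw.encode()).hexdigest()
    return raw, token_hash

def verify_token(submitted, stored_hash):
    """Hash the submitted token, then compare in constant time."""
    submitted_hash = hashlib.sha256(submitted.encode()).hexdigest()
    return hmac.compare_digest(submitted_hash, stored_hash)
```

Hashing before comparison also means the attacker-controlled value never feeds a variable-time string comparison against the stored secret.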

Layer 3: Performance

  • No N+1 risk observed in the diff.
  • Email send should be async (Celery or similar). Synchronous email send in signup flow adds 500ms-2s latency on a critical path.
  • High email bounce rate from last month: worth adding bounce-tracking before adding more email volume. Not blocking, but worth flagging.

Layer 4: Maintainability

  • Service layer convention: Does this follow the team's 'service layer between views and models' rule? If signup logic is in the view directly, refactor to an EmailVerificationService.
  • Variable naming looks reasonable from description.
  • Complexity: low, single concept per file.

Layer 5: Tests

  • Must have: token generation tests, happy-path verification, expired-token handling, tampered-token rejection, re-send attempt, rate-limit test.
  • Must have: test that verifies tokens are hashed in DB (via database inspection in test, if token-at-rest is fixed).
  • Should have: email-send failure test (SendGrid outage).
  • Integration test for the full signup → email → verify flow.
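A sketch of two of the must-have tests, written against an in-memory stand-in for the EmailToken model (the store class and its API are illustrative only):

```python
from datetime import datetime, timedelta, timezone

EMAIL_TOKEN_TTL = timedelta(hours=24)

class FakeTokenStore:
    """In-memory stand-in for the EmailToken model, for test illustration."""
    def __init__(self):
        self.tokens = {}

    def create(self, user_id, token, created_at):
        self.tokens[token] = (user_id, created_at + EMAIL_TOKEN_TTL)

    def verify(self, token, now):
        entry = self.tokens.get(token)
        if entry is None:
            return "invalid"
        user_id, expires_at = entry
        if now > expires_at:
            return "expired"
        del self.tokens[token]  # single-use: consume on success
        return "verified"

def test_expired_token_is_rejected():
    store = FakeTokenStore()
    t0 = datetime(2026, 1, 1, tzinfo=timezone.utc)
    store.create(user_id=1, token="abc", created_at=t0)
    assert store.verify("abc", now=t0 + timedelta(hours=25)) == "expired"

def test_token_is_single_use():
    store = FakeTokenStore()
    t0 = datetime(2026, 1, 1, tzinfo=timezone.utc)
    store.create(user_id=1, token="abc", created_at=t0)
    assert store.verify("abc", now=t0 + timedelta(hours=1)) == "verified"
    assert store.verify("abc", now=t0 + timedelta(hours=1)) == "invalid"
```

Passing `now` explicitly keeps expiry tests deterministic; in Django the same effect comes from freezing time in the test.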

Layer 6: Design

  • EmailToken as a separate model is the right call (vs. adding fields to User).
  • Consider: should this be a generic OneTimeToken model usable for other flows (password reset)? If password reset is coming, refactor now. If speculative, leave.

Layer 7: Documentation

  • Commit message: should describe flow + security considerations.
  • No inline comments expected on simple flow; add comment near token generation explaining the security choices (CSPRNG, hash-at-rest).
  • User-facing email template: is copy reviewed by product/marketing? Often skipped but matters.

Must-Fix (Before Merge)

1. Token generation must use a CSPRNG (`secrets.token_urlsafe(32)` or equivalent). Document the choice in a code comment.

2. Constant-time token comparison (use `django.utils.crypto.constant_time_compare`). Prevents timing attacks.

Should-Fix (Soon After Merge)

1. Hash tokens at rest. Store `hashlib.sha256(token).hexdigest()` in DB; validate by hashing user-submitted token and comparing.

2. Rate-limit /verify-email endpoint (e.g., 10 requests/minute per IP). Django-ratelimit or equivalent.

3. Single-use tokens. Mark token as consumed on successful verification.

4. Async email sending via Celery. Removes signup-path latency.
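For should-fix item 2, django-ratelimit reduces this to a single decorator in Django; the pure-Python limiter below is only a sketch of the underlying mechanism (class and variable names are illustrative):

```python
import time
from collections import defaultdict

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per key (e.g. an IP)."""
    def __init__(self, limit=10, window=60):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(list)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        window_start = now - self.window
        # Drop hits that fell out of the window, then count what remains.
        self.hits[key] = [t for t in self.hits[key] if t > window_start]
        if len(self.hits[key]) >= self.limit:
            return False
        self.hits[key].append(now)
        return True
```

A sliding window avoids the burst-at-the-boundary problem of fixed windows; in production the hit counts would live in Redis or the cache backend, not process memory.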

Nice-to-Have (Backlog)

  • Explicit TTL constant (e.g., EMAIL_TOKEN_TTL = timedelta(hours=24)).
  • Re-send verification email UI.
  • Email bounce tracking before high-volume flow.
  • Generic OneTimeToken abstraction if password reset is coming.

Overall Verdict

Request changes. Two must-fix security issues (token generation + timing comparison) block merge. Once those are addressed, this is an approve with should-fix backlog.

Key Takeaways

  • Token generation + constant-time comparison are the blocking issues. Security-adjacent PRs need security-specific review.
  • Hashing tokens at rest is the most common miss in email verification flows; flag it for follow-up.
  • Async email sending and rate limiting should also ship within the next week to prevent latency and brute-force issues.

Common use cases

  • Reviewing PRs at scale (senior engineers, team leads)
  • Self-reviewing before submitting your own PR
  • Onboarding new engineers: teaches what to look for
  • Reviewing AI-generated code (it fails in different ways than human-written code)
  • Auditing legacy code before refactors
  • Reviewing security-sensitive code (auth, payments, data)
  • Reviewing infra / YAML / Terraform where subtle bugs are costly

Best AI model for this

Claude Opus 4 or Sonnet 4.5. Code reasoning across multiple dimensions benefits from a top-tier model.

Pro tips

  • Do correctness FIRST. No point reviewing performance if the code doesn't work.
  • Security review requires security-specific knowledge. Don't skim it.
  • Distinguish style preferences from style rules. Rules enforced by linter; preferences should not block PRs.
  • The 'must-fix' category should be rare. If you have 10 must-fixes, the PR isn't ready; ask for a rewrite, not 10 comments.
  • Review 400 LOC max in one session. Beyond that, reviewer fatigue means you'll miss things.
  • 'Why' comments in review are as important as 'what.' 'This leaks memory' is weaker than 'This leaks memory because the listener isn't unsubscribed when the component unmounts.'

Customization tips

  • For auth / payment / data-access PRs, always run the full 7-layer review. Trivial PRs can skip some layers.
  • Write your review in the order Correctness → Security → the other layers. If correctness fails, other layers may change with the fix.
  • Include a 'why' in every must-fix comment. 'Use a CSPRNG' is weak; 'Use a CSPRNG because uuid.uuid4 is documented as non-cryptographically-secure' is better.
  • Review AI-generated code with extra skepticism. Hallucinated APIs, over-engineered abstractions, subtle logic errors are common.
  • Track review quality: count 'found bugs' vs. 'LGTMed but broke in prod.' The feedback loop improves reviews.

Variants

AI-Generated Code Mode

For reviewing LLM-produced code. Focuses on subtle errors AI makes (hallucinated APIs, wrong imports, over-engineering).

Security-Critical Mode

Deeper security analysis. For auth, payments, PII-handling code.

Refactor Review Mode

For large refactors. Focus on preserving behavior, not just catching bugs.

Frequently asked questions

How do I use the Code Review Architect prompt?

Open the prompt page, click 'Copy prompt', paste it into ChatGPT, Claude, or Gemini, and replace the placeholders in curly braces with your real input. The prompt is also launchable directly in each model with one click.

Which AI model works best with Code Review Architect?

Claude Opus 4 or Sonnet 4.5. Code reasoning across multiple dimensions benefits from a top-tier model.

Can I customize the Code Review Architect prompt for my use case?

Yes, every Promptolis Original is designed to be customized. Key levers: do correctness first (no point reviewing performance if the code doesn't work), and don't skim the security layer, which requires security-specific knowledge.

Explore more Originals

Hand-crafted 2026-grade prompts that actually change how you work.

โ† All Promptolis Originals