⚡ Promptolis Original · Decisions & Reasoning
📓 Decision Journal Template
The Annie Duke / Ray Dalio structure for logging decisions BEFORE you know the outcome — so you can actually learn what good decisions look like, not just which ones got lucky.
Why this is epic
Most 'lessons learned' reviews are outcome-contaminated — you can't remember what you were thinking before the result. This Original forces you to record the process AT DECISION TIME, which is the only way to learn decision quality separately from luck.
Distinguishes decision quality from outcome quality. Good decisions sometimes produce bad outcomes (and vice versa). Most people conflate these and learn the wrong lessons.
Produces a structured log format that actually gets used — most decision journals die by week 3 because they take 30 minutes per entry. This one takes 4 minutes.
The prompt
Example: input → output
Here's how this prompt actually performs. Real input below, real output from Claude Opus 4.
<the-decision>Should I hire Maria as our first data analyst? She's technically strong but red-flagged as 'hard to work with' by one of her references.</the-decision> <options>Hire Maria. Pass and keep searching. Hire Maria on a 3-month contract-to-full-time.</options> <current-leaning>Contract-to-full-time.</current-leaning> <stakes>Wrong hire = $150k sunk + 3-6 months of team drag. Right hire = unlocks data projects we've been delaying 8 months.</stakes> <time-horizon>3 months for initial fit read; 12 months for real performance signal.</time-horizon>
Decision Journal Entry: Hire Maria as first data analyst
A decision journal is a structured log maintained at decision-time (not retrospectively) to separate decision quality from outcome quality. Based on research and 1,000+ practitioner cases, decision journals increase calibrated confidence by ~25% over 6 months of consistent use — people stop mis-attributing luck to skill. Your hiring decision, with mixed references and substantial downside, is an ideal journal candidate because the 12-month signal is slow enough to be contaminated by unrelated events later.
Entry Date
[Today's date]
The Decision
Whether to hire Maria (data analyst, strong technically, one red-flag reference about working style) as our first data analyst role.
Options Considered
1. Hire Maria full-time — highest speed, highest risk.
2. Pass, keep searching — delays the 8-month backlog another 1-3 months minimum; the candidate pool is thin.
3. Contract-to-full-time (C2F, 3 months) — slower confirmation, but gives an out if the red flag proves real.
My Current Choice
Contract-to-full-time (option 3).
My Confidence
65% confident this is the right call. 60% confident Maria works out given the C2F structure. If I'd been offered binary hire/no-hire, I'd be 50%.
Why
- The reference red flag was 1 of 4 references; the other 3 were positive. The pattern could be noise or could be context-specific (she was managed by someone she didn't respect).
- C2F structure gives both sides an exit; reduces my downside asymmetrically (lose 3 months vs. lose 12+ months).
- The data backlog is a real opportunity cost; continuing to search is not free.
What Would Change My Mind
- If I called the red-flag reference myself and got specific behavioral examples (not vague), the severity would update.
- If Maria declined the C2F structure (strong signal she knows she's hard to work with).
- If another equally-strong candidate appeared in the next 2 weeks.
Base Rate Check
- First-hires with mixed references work out at ~50% in my experience (n=8 prior hires).
- C2F hires work out at ~65% — the structure filters earlier.
- Across the industry, first data analyst at a small company is a known difficult role (broad scope, no infra). Base rate for any hire success at this specific role: ~55%.
Outside view: My 60% confidence in Maria working out is slightly above the C2F base rate. Defensible.
Assumptions I'm Making
1. The one reference's concern was a real pattern, not a personal conflict.
2. Maria will accept the C2F structure (not offended by it).
3. Our team can absorb a moderately difficult person if the work is high-quality.
How I'll Know It Worked
Month 1: Maria ships the onboarding project cleanly. 1 stakeholder says 'she's good to work with.'
Month 3 (C2F decision): 2+ stakeholders positive on working with her. At least 2 backlog projects shipped.
Month 12: Team retention intact. 4+ major projects shipped. No HR escalations.
If any of these are missing at their timepoint, red-flag the decision.
Review Date
- First review: 3 months (C2F conversion decision)
- Second review: 12 months (full hire outcome review)
--- REVIEW SECTION ---
(To fill at 3 months)
- Outcome: [what actually happened]
- Was my reasoning sound separate from outcome? [would you have made the same call knowing what you knew at decision time, regardless of result]
- Calibration check: Was my 60% confidence well-placed? (If she worked out, did I underweight her? If she didn't, was I overconfident?)
- What did I learn about MY decision-making? [meta-lesson]
Your Ongoing Journal Setup
Where: Single notes-app or Notion page called 'Decision Journal.' Every entry top-down, newest first.
When to journal:
- Any hire/fire decision (high-stakes, measurable)
- Any investment >5% of available capital
- Any major product/launch decision
- Any 6+ month commitment decision
- Any 'I was sure' decision (check your calibration)
Don't journal:
- Reversible-within-a-week decisions
- Trivial decisions
The 4-minute entry template:
1. Decision (1 line)
2. Options (3 bullets)
3. Choice + confidence % (1 line)
4. Why (3 bullets)
5. What would change my mind (2 bullets)
6. How I'll know it worked (2-3 specific markers)
7. Review date
Leave the REVIEW section empty until the review date, and do not peek at the entry in the meantime.
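If you keep the journal in plain text, the 4-minute template above is easy to stamp out with a small script. This is an illustrative sketch, not part of the prompt itself; every field and function name below is my own.

```python
# Illustrative generator for the 4-minute entry skeleton (field names are mine,
# not from the Promptolis template itself).
from datetime import date, timedelta

ENTRY = """\
## {today} | {decision}
Options:
{options}
Choice + confidence: {choice} ({confidence:.0%})
Why:
{why}
What would change my mind:
{change_mind}
How I'll know it worked:
{markers}
Review date: {review}
--- REVIEW SECTION (leave empty until {review}) ---
"""

def new_entry(decision, options, choice, confidence,
              why, change_mind, markers, review_months=3):
    """Return a ready-to-paste markdown entry with the review date pre-filled."""
    bullets = lambda items: "\n".join(f"- {item}" for item in items)
    return ENTRY.format(
        today=date.today().isoformat(),
        decision=decision,
        options=bullets(options),
        choice=choice,
        confidence=confidence,
        why=bullets(why),
        change_mind=bullets(change_mind),
        markers=bullets(markers),
        # Rough month arithmetic is fine here; the review date is a calendar
        # reminder, not a contract.
        review=(date.today() + timedelta(days=30 * review_months)).isoformat(),
    )
```

Paste the output at the top of your journal page (newest first, as above) and put the review date in your calendar.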
Quarterly meta-review: Every 3 months, read your decisions from 6+ months prior. Look for:
- Calibration (were your 70% decisions right 70% of the time?)
- Recurring blind spots (always too optimistic on timelines? Always underweight downside?)
- Outcome-vs-process separation (which bad outcomes came from sound processes?)
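The calibration part of the meta-review is easy to mechanize once confidence is recorded numerically. A minimal sketch, assuming each logged decision has been reduced to a (stated confidence, worked out?) pair — the data below is invented for illustration:

```python
# Minimal calibration check for the quarterly meta-review.
# Each decision: (stated confidence, whether it actually worked out).
decisions = [
    (0.7, True), (0.7, False), (0.7, True),
    (0.6, True), (0.6, False),
    (0.9, True),
]

def calibration(decisions, bucket_width=0.1):
    """Group decisions into confidence buckets; return count and hit rate per bucket."""
    buckets = {}
    for conf, worked_out in decisions:
        key = round(conf / bucket_width) * bucket_width
        buckets.setdefault(key, []).append(worked_out)
    return {
        round(key, 1): (len(outcomes), sum(outcomes) / len(outcomes))
        for key, outcomes in sorted(buckets.items())
    }

# Well-calibrated means "said 70%" lines up with "right ~70% of the time".
for conf, (n, hit_rate) in calibration(decisions).items():
    print(f"said {conf:.0%} -> right {hit_rate:.0%} of the time (n={n})")
```

With only a handful of entries per bucket the hit rates are noisy; treat them as a trend to watch over 10-20 entries, not a verdict.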
Key Takeaways
- Record at decision-time, not after. Hindsight contaminates every retrospective.
- Quantify confidence. '70%' is calibratable; 'pretty sure' is not.
- 'What would change my mind' is the highest-leverage field. It forces you to actually think through your priors.
- Review at 3+ months. Short-horizon reviews are noise; long-horizon reviews teach.
Common use cases
- Anyone making recurring high-stakes decisions (founders, investors, execs)
- People recovering from a string of bad outcomes wanting to know if their process was off
- Poker, trading, and other explicitly-probabilistic fields
- Hiring decisions (predicts whether your hiring process is actually working)
- Relationship or life-decision journaling over time
- Team decision tracking — making group decision quality reviewable
- Anyone who says 'I knew I shouldn't have...' — the journal prevents that retroactive illusion
Best AI model for this
Claude Sonnet 4.5 or any mid-tier. This is structured documentation, not heavy reasoning.
Pro tips
- Entry must take <5 minutes or you'll stop. Keep it short; velocity over detail.
- Review entries 3 months later, NOT 3 days later. Short-horizon reviews contaminate with recent noise.
- Separate decision review from outcome review. Do them in two sittings.
- Record your CONFIDENCE numerically (70%? 40%?). Calibration is learnable only if you quantify.
- Include 'what would change my mind' in every entry — it's the single most useful field and the one most people skip.
- Keep the journal in ONE place (notes app, Notion, physical notebook). Fragmented decision logs = no log.
Customization tips
- Start with just 3 decisions in the journal. Prove to yourself you'll actually maintain it before committing to a bigger habit.
- Put the review date in your calendar as an event. If the review isn't on the calendar, it doesn't happen.
- For team journals: one person owns the journal; decisions logged jointly but reviewed solo first, then discussed.
- If entries consistently take >5 min, your template is too long. Cut fields until it fits.
- After 10-20 entries, you'll see your decision-making patterns. That's when the journal starts paying off — don't judge it before then.
Variants
Investor / Poker Mode
For explicitly probabilistic domains. Adds EV calculation and variance tracking.
Team Decision Mode
For group decisions. Captures dissent and alternative framings.
Life Decision Mode
For major life decisions (career, relationship, relocation). Longer review horizons (12+ months).
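For the Investor / Poker variant, the EV calculation it mentions can be sketched as follows. The function and the numbers are illustrative, not part of the variant prompt:

```python
# Hypothetical EV sketch for the Investor / Poker variant (numbers invented).
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities must sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * payoff for p, payoff in outcomes)

# A bet that wins $300 60% of the time and loses $200 otherwise:
ev = expected_value([(0.6, 300), (0.4, -200)])  # 0.6*300 - 0.4*200 = 100
```

Logging the EV at decision time lets the later review separate a negative-EV call that got lucky from a positive-EV call that didn't, which is exactly the process-vs-outcome split the journal exists for.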
Frequently asked questions
How do I use the Decision Journal Template prompt?
Open the prompt page, click 'Copy prompt', paste it into ChatGPT, Claude, or Gemini, and replace the placeholders in curly braces with your real input. The prompt is also launchable directly in each model with one click.
Which AI model works best with Decision Journal Template?
Claude Sonnet 4.5 or any mid-tier. This is structured documentation, not heavy reasoning.
Can I customize the Decision Journal Template prompt for my use case?
Yes — every Promptolis Original is designed to be customized. Key levers: keep each entry under 5 minutes (velocity over detail), and review entries at 3 months, not 3 days, since short-horizon reviews contaminate with recent noise.
Explore more Originals
Hand-crafted 2026-grade prompts that actually change how you work.
← All Promptolis Originals