henu-wang/probabilistic-thinking-guide

Probabilistic Thinking Guide

A practical guide to thinking in probabilities instead of certainties — the single most important skill for making better decisions under uncertainty. Covers Bayesian reasoning, calibration, base rates, expected value, and practical exercises.

Why Probabilistic Thinking?

The world is uncertain. Yet most people think in black and white: "It will work" or "It won't work." "The market will go up" or "The market will go down."

Probabilistic thinkers say: "There's a 70% chance it will work, a 20% chance of partial success, and a 10% chance of failure. Given those probabilities, the expected value is positive."

This matters because:

  • Better calibration — You're less often surprised
  • Better decisions — Expected value beats gut feeling
  • Less overconfidence — You acknowledge what you don't know
  • Better communication — "I'm 80% confident" is more useful than "I think so"
  • Faster updating — New information adjusts your probabilities, not your identity

"The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time and still retain the ability to function." — F. Scott Fitzgerald


Core Concepts

1. Think in Ranges, Not Points

Instead of: "This project will take 3 months"
Say: "There's a 50% chance it takes 2-4 months, and a 90% chance it takes 1-6 months."

Instead of: "The stock is worth $50"
Say: "I estimate it's worth $40-$65, with my best guess at $50."

Template:

## Probability Range: [Estimate]

| Confidence | Range |
|:-:|:-:|
| 50% (likely range) | [low] to [high] |
| 80% (wider range) | [low] to [high] |
| 95% (almost certain) | [low] to [high] |
| Best single estimate | [point] |

Why ranges matter:

  • A single point estimate hides your uncertainty
  • Ranges communicate how much you know (and don't)
  • Wide ranges = low confidence = more investigation needed
  • Narrow ranges = high confidence (but check for overconfidence)
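One way to produce such ranges is to simulate rather than guess a single number. The sketch below is illustrative only: the three triangular task distributions are invented assumptions, not data.

```python
import random

def duration_intervals(n_sims=100_000, seed=42):
    """Monte Carlo sketch: sample task durations, then report
    probability ranges instead of a single point estimate."""
    random.seed(seed)
    totals = []
    for _ in range(n_sims):
        # Each task: triangular(low, high, mode) in weeks -- made-up numbers
        design = random.triangular(2, 8, 4)
        build = random.triangular(4, 16, 8)
        test = random.triangular(1, 6, 2)
        totals.append(design + build + test)
    totals.sort()
    pct = lambda p: totals[int(p * n_sims)]
    return {
        "50% range": (pct(0.25), pct(0.75)),
        "90% range": (pct(0.05), pct(0.95)),
        "median": pct(0.50),
    }
```

A 90% range that is much wider than the 50% range signals heavy tails worth investigating before committing to a deadline.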

2. Base Rates First

Before analyzing the specific case, ask: "What usually happens in situations like this?"

The formula:

Start with the base rate
+ Adjust for specific evidence
= Better estimate

Examples:

| Question | Your Intuition | Base Rate | Adjusted Estimate |
|----------|:-:|:-:|:-:|
| Will my startup succeed? | "Probably!" (80%) | ~10% survive 5 years | 15-25% (if you have strong evidence) |
| Will this hire work out? | "Great interview!" (90%) | ~50% of hires succeed long-term | 60-70% |
| Will this project be on time? | "Definitely" (95%) | ~70% of projects are late | 50-60% |
| Will this relationship last? | "Forever!" (99%) | ~40-50% of marriages end in divorce | 60-70% |

The key: Your specific evidence should SHIFT the base rate, not REPLACE it. A great interview doesn't make a hire 90% likely to succeed — it moves the base rate from 50% to maybe 65%.
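In odds form, "shift, don't replace" is just multiplying prior odds by a likelihood ratio. A minimal sketch, where the 2x likelihood ratio for a great interview is an assumption chosen for illustration:

```python
def shift_base_rate(base_rate, likelihood_ratio):
    """Shift (not replace) a base rate using Bayes in odds form:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = base_rate / (1 - base_rate)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# ~50% of hires succeed; assume a great interview is 2x as likely
# for hires who succeed as for those who don't:
print(round(shift_base_rate(0.50, 2.0), 2))  # 0.67 -- not 0.90
```

Even fairly strong evidence moves 50% to the mid-60s, nowhere near the 90% the interview impression suggests.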


3. Bayesian Updating

Named after: Thomas Bayes, an 18th-century English minister and mathematician

Core idea: Update your beliefs proportionally to the strength of new evidence.

Simplified process:

1. Start with a PRIOR probability (your belief before new evidence)
2. Observe new EVIDENCE
3. Ask: "How likely is this evidence if my belief is TRUE?"
4. Ask: "How likely is this evidence if my belief is FALSE?"
5. Calculate the POSTERIOR probability (updated belief)

Intuitive example:

Prior: "There's a 20% chance it will rain today" (based on season)

New evidence: The sky is dark and cloudy.
- How likely are dark clouds if it WILL rain? Very likely (90%)
- How likely are dark clouds if it WON'T rain? Somewhat likely (30%)

Updated belief: ~43% chance of rain

More evidence: The weather app says 80% rain probability.
Updated belief: ~75% chance of rain

Practical rules:

  • Strong evidence (very likely if true, very unlikely if false) → big update
  • Weak evidence (similarly likely whether true or false) → small update
  • Extraordinary claims require extraordinary evidence — a low prior needs very strong evidence to shift it significantly
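The five-step process above can be written as a one-line calculation; applied to the rain example, it reproduces the ~43% figure:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numer = p_evidence_if_true * prior
    denom = numer + p_evidence_if_false * (1 - prior)
    return numer / denom

# Prior 20% rain; dark clouds are 90% likely if rain, 30% if not:
posterior = bayes_update(0.20, 0.90, 0.30)
print(round(posterior, 2))  # 0.43
```

Note how weak evidence barely moves the number: if clouds were 90% likely either way, the posterior would stay at the 20% prior.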

4. Expected Value

Formula:

Expected Value = Σ (Probability × Value) for all outcomes

Decision rule: Choose the option with the highest expected value (adjusted for risk tolerance).

Template:

## Expected Value: [Decision]

### Option A:
| Outcome | Probability | Value | P × V |
|---------|:-:|:-:|:-:|
| Great outcome | __% | +$____ | $____ |
| OK outcome | __% | +$____ | $____ |
| Bad outcome | __% | -$____ | -$____ |
| **Expected Value** | 100% | | **$____** |

### Option B:
| Outcome | Probability | Value | P × V |
|---------|:-:|:-:|:-:|
| Great outcome | __% | +$____ | $____ |
| OK outcome | __% | +$____ | $____ |
| Bad outcome | __% | -$____ | -$____ |
| **Expected Value** | 100% | | **$____** |

### Choose: Option with higher EV = ____

### Risk check: Can I survive the worst case?
- Option A worst case: ____ → Survivable? Y/N
- Option B worst case: ____ → Survivable? Y/N

Important: Expected value maximization works for repeated decisions. For one-time, high-stakes decisions, also consider:

  • Can you survive the worst case? (Ruin probability)
  • How asymmetric are the outcomes? (Upside vs. downside)
  • How reversible is the decision?
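The template above can be mirrored by a small helper with a sanity check that the probabilities cover all outcomes; the option's numbers here are hypothetical:

```python
def expected_value(outcomes):
    """EV = sum(probability * value) over all outcomes.
    outcomes: list of (probability, value) pairs."""
    total_p = sum(p for p, _ in outcomes)
    if abs(total_p - 1.0) > 1e-9:
        raise ValueError(f"probabilities sum to {total_p}, not 1.0")
    return sum(p * v for p, v in outcomes)

# Hypothetical option: 30% great (+$100k), 50% OK (+$20k), 20% bad (-$50k)
option_a = [(0.30, 100_000), (0.50, 20_000), (0.20, -50_000)]
print(round(expected_value(option_a)))  # 30000
```

The explicit sum-to-100% check catches the most common template mistake: scenario probabilities that quietly add to 90% or 120%.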

5. Calibration

Definition: Your confidence levels match reality. When you say "90% sure," you're right ~90% of the time.

Most people are overconfident: When they say 90% confident, they're right only 50-70% of the time.

Calibration exercise:

| Statement | Your Confidence (%) | Actually True? |
|-----------|:-:|:-:|
| "The Earth is closer to the Sun than Mars" | __% | |
| "Brazil has more people than Russia" | __% | |
| "The Eiffel Tower is taller than 300 meters" | __% | |
| [Add your own predictions] | __% | |

Track over time:

## Calibration Log

| Month | Predictions at 90%+ | Actually correct | Calibration |
|-------|:-:|:-:|:-:|
| Jan | 10 | 7 | 70% (overconfident) |
| Feb | 12 | 9 | 75% (still overconfident) |
| Mar | 8 | 7 | 87.5% (improving) |

Tips for better calibration:

  • Widen your confidence intervals (most people's are too narrow)
  • Track predictions in writing
  • Seek feedback on past predictions
  • Practice with calibration games and exercises
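Tracking predictions in writing can be partly automated. The sketch below groups a hypothetical prediction log by stated confidence and reports the hit rate per bucket, the same comparison the calibration log table makes by hand:

```python
def calibration_report(predictions):
    """Compare stated confidence against observed hit rate.
    predictions: list of (stated_confidence, was_correct) pairs."""
    buckets = {}
    for conf, correct in predictions:
        bucket = round(conf, 1)  # group e.g. 0.88 and 0.92 into 0.9
        hits, total = buckets.get(bucket, (0, 0))
        buckets[bucket] = (hits + int(correct), total + 1)
    return {b: hits / total for b, (hits, total) in sorted(buckets.items())}

log = [(0.9, True), (0.9, False), (0.9, True), (0.9, True),
       (0.6, True), (0.6, False)]
print(calibration_report(log))  # {0.6: 0.5, 0.9: 0.75}
```

A 0.75 hit rate in the 0.9 bucket is the overconfidence pattern described above: widen those intervals.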

6. The Pre-Mortem Probability

Before any initiative, assign probabilities to outcomes:

## Pre-Mortem Probability: [Project/Decision]

| Outcome | Probability | If This Happens, We Will... |
|---------|:-:|---|
| Wild success | __% | |
| Moderate success | __% | |
| Break even | __% | |
| Partial failure | __% | |
| Complete failure | __% | |
| **Total** | **100%** | |

### Are we comfortable with this distribution?
### What could we do to shift probability toward success?
### What's our plan for the failure scenarios?

Practical Applications

Business Decisions

## Probabilistic Business Case: [Initiative]

### Revenue Scenarios
| Scenario | Probability | Annual Revenue | EV |
|----------|:-:|:-:|:-:|
| Bull case | __% | $____ | $____ |
| Base case | __% | $____ | $____ |
| Bear case | __% | $____ | $____ |
| **Expected Revenue** | 100% | | **$____** |

### Cost: $____
### Expected Profit: $____ - $____ = $____

### Decision criteria:
- [ ] Expected profit is positive
- [ ] We can survive the bear case
- [ ] The bull case justifies the effort

Investment Decisions

Margin of safety = buying below expected value to account for being wrong.

## Investment Analysis: [Asset]

### Scenario Valuation
| Scenario | Probability | Fair Value | Weighted |
|----------|:-:|:-:|:-:|
| Bull | __% | $____ | $____ |
| Base | __% | $____ | $____ |
| Bear | __% | $____ | $____ |
| **Expected Fair Value** | 100% | | **$____** |

### Current price: $____
### Expected return: (____ - ____) / ____ = ____%
### Margin of safety: ____%

### Kelly Criterion (position sizing):
Kelly % = (bp - q) / b
where b = net odds received (e.g., 2 for a 2-to-1 payoff), p = win probability, q = 1 - p
Suggested position size: ___% of portfolio
(Most practitioners use half-Kelly for safety)
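The Kelly formula above as a function, with the half-Kelly safety convention built in; the 2-to-1, 55% example is hypothetical:

```python
def kelly_fraction(b, p, half=True):
    """Kelly % = (b*p - q) / b, where b = net odds received,
    p = win probability, q = 1 - p. half=True halves it for safety."""
    q = 1 - p
    f = (b * p - q) / b
    f = max(f, 0.0)  # never size a negative-edge wager
    return f / 2 if half else f

# Hypothetical bet: 2-to-1 payoff, 55% win probability
print(round(kelly_fraction(2.0, 0.55, half=False), 4))  # 0.325
print(round(kelly_fraction(2.0, 0.55), 4))              # 0.1625
```

Clamping negative fractions to zero encodes the obvious rule: if the edge is negative, the correct position size is none.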

Everyday Decisions

Even daily decisions benefit from probabilistic thinking:

| Decision | Deterministic Thinking | Probabilistic Thinking |
|----------|------------------------|------------------------|
| Bring umbrella? | "Will it rain? Yes/No" | "30% chance of rain. Umbrella costs ~0 effort. Bring it." |
| Take the highway? | "Highway is faster" | "80% chance highway saves 10 min. 20% chance an accident adds 30 min. EV: +2 min." |
| Accept job offer? | "It's a good company" | "70% chance this is better than my current job, 20% equivalent, 10% worse." |
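The highway entry's expected value checks out as one line of arithmetic:

```python
# 80% chance of saving 10 minutes, 20% chance of losing 30 minutes
ev_highway = 0.80 * 10 + 0.20 * (-30)
print(round(ev_highway, 1))  # 2.0 minutes saved, on average
```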

Common Probability Mistakes

| Mistake | What It Looks Like | Fix |
|---------|--------------------|-----|
| Neglecting base rates | "My startup is different!" | Start with the base rate, then adjust |
| Conjunction fallacy | "She's a feminist bank teller" seems more likely than "she's a bank teller" | A and B together can never be more likely than A alone |
| Gambler's fallacy | "It's been red 5 times, black is due" | Independent events have no memory |
| Ignoring sample size | Drawing conclusions from small samples | Larger samples = more reliable |
| Survivorship bias | "All successful CEOs did X" | What about the failures who also did X? |
| Overconfidence | 90% confidence intervals that are right 50% of the time | Track and calibrate |
| Anchoring | The first number heard distorts the estimate | Generate your own estimate first |
| Binary thinking | "It will work" vs. "It won't work" | Express it as a probability: "65% chance" |
| Availability bias | Vivid events seem more probable | Look up actual frequencies |

Exercises

Exercise 1: Calibration Training

Make 10 predictions this week with explicit confidence levels (e.g., "80% confident the meeting will run over time"). Track results. Adjust.

Exercise 2: Fermi Estimation

Estimate something you don't know, breaking it into components:

  • How many dentists in your city?
  • How many flights take off worldwide each day?
  • How much does your neighbor spend on groceries per year?
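A Fermi estimate is just a chain of multiplied assumptions. Here is a sketch for the dentist question; every number below is an invented placeholder you would replace with your own guesses:

```python
# Fermi decomposition for "how many dentists in your city?"
# All inputs are rough assumptions, not data.
population = 1_000_000
visits_per_person_per_year = 2
visits_per_dentist_per_day = 10
working_days_per_year = 220

visits_needed = population * visits_per_person_per_year
capacity_per_dentist = visits_per_dentist_per_day * working_days_per_year
print(round(visits_needed / capacity_per_dentist))  # 909
```

The point is not the exact answer but that each component is individually checkable, so errors tend to partially cancel rather than compound.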

Exercise 3: Update Practice

Pick a belief. State your current confidence (e.g., "70% confident AI will replace most white-collar jobs within 20 years"). Read one article that disagrees. What's your updated confidence?

Exercise 4: Expected Value in Daily Life

For your next three decisions this week, quickly calculate expected value. Does it change what you choose?

Exercise 5: Pre-Mortem Probabilities

For your current project, assign probabilities to five outcomes (wild success through complete failure). Do the numbers add to 100%? Are you comfortable with them?


Resources

Books:

  • Superforecasting — Philip Tetlock
  • Thinking, Fast and Slow — Daniel Kahneman
  • The Signal and the Noise — Nate Silver
  • How to Measure Anything — Douglas Hubbard
  • Thinking in Bets — Annie Duke
  • The Drunkard's Walk — Leonard Mlodinow

For decision principles that embrace uncertainty and probabilistic reasoning, explore KeepRule — a curated library of mental models from the world's best thinkers, organized for practical application.


Contributing

Have a probabilistic thinking technique or exercise? PRs welcome.

License

MIT License — see LICENSE for details.
