Foundational Thinking Models: Bayesian Updating

Turning New Information into Better Decisions

1. What Is Bayesian Updating?

“When the facts change, I change my mind. What do you do, sir?”
— (often attributed to) John Maynard Keynes

Bayesian Updating is a disciplined way to revise what you believe in light of fresh evidence.
At its heart sits Bayes’ Theorem:

Posterior ∝ Prior × Likelihood

  • Prior – your starting belief about how probable something is.
  • Likelihood – how compatible the new data are with each possible explanation.
  • Posterior – your updated belief after blending prior knowledge with new evidence.

Repeat the cycle whenever new data arrive, and your mental model stays “live”—never frozen in yesterday’s assumptions.
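The update rule above can be sketched in a few lines of Python. All the numbers here are illustrative assumptions, not figures from the text: a 30 % prior that a new feature lifts retention, and a pilot result that is four times more likely if the feature actually works.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability that a hypothesis is true, given one piece of evidence.

    Posterior ∝ Prior × Likelihood; dividing by the total probability of the
    evidence normalises the result back to a probability.
    """
    numerator = prior * likelihood_if_true
    evidence = numerator + (1 - prior) * likelihood_if_false
    return numerator / evidence

# Illustrative prior: 30% chance a new feature materially lifts retention.
# A positive pilot is assumed 4x more likely if the feature works (0.8 vs 0.2).
posterior = bayes_update(prior=0.30, likelihood_if_true=0.80, likelihood_if_false=0.20)
print(f"Posterior after a positive pilot: {posterior:.0%}")
```

Note that a 4-to-1 evidence ratio moves a 30 % prior to roughly a 63 % posterior: strong evidence shifts belief substantially, but nowhere near certainty.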


2. Why Executives Should Care

  1. Faster, clearer pivots. Markets move; so must your conviction levels. Bayesian thinking forces you to quantify how much a new signal should shift your stance—avoiding knee-jerk overreactions and stubborn inertia.
  2. Back-testing intuition. By writing down “priors” (explicit assumptions) you create an audit trail. That transparency builds organisational learning.
  3. Risk-weighted bets. Resource allocation, M&A, pricing experiments—each hinges on the probability of multiple futures. Bayesian updating keeps those probabilities realistic.
  4. Culture of evidence. Teams learn that opinions are provisional until the next data point—lowering ego friction and promoting constructive challenge.

3. Core Principles in Plain English

  • Priors matter – Start with something—even a rough base rate—before new info arrives.
  • Signal vs. noise – Weight evidence by its reliability (sample size, data quality).
  • Continuous refinement – Don’t “flip the switch”; nudge beliefs each time data accrue.
  • Explicit probabilities – Replace vague words (“likely”, “risky”) with numbers or probability ranges.
  • Decision thresholds – Define in advance what posterior probability will trigger action.
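One way to make “signal vs. noise” concrete is to discount the strength of evidence by a reliability score before updating. The shrinkage rule below is an illustrative choice, not a standard formula: it pulls a likelihood ratio toward 1 (“no information”) as reliability falls.

```python
import math

def reliability_weighted_lr(raw_lr, reliability):
    """Shrink a likelihood ratio toward 1 (no information) for noisy evidence.

    reliability = 1.0 keeps the full signal; 0.0 discards it entirely.
    Implemented as raw_lr ** reliability, an illustrative discounting rule.
    """
    return math.exp(reliability * math.log(raw_lr))

# A 4x likelihood ratio from a small, noisy survey (reliability 0.5)
# is treated as only a 2x ratio when updating.
print(reliability_weighted_lr(4.0, 0.5))
```

The exact discounting function matters less than the discipline: weak evidence should move beliefs less than strong evidence, and the weighting should be decided before looking at the result.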

4. A Five-Step Bayesian Playbook for Leaders

  1. Frame the question. What probability actually matters? (e.g., “Probability a pilot market will reach $10 m ARR inside 24 months.”)
  2. Elicit your prior. Use base rates, industry benchmarks, historical internal projects, or expert judgement—but write it down.
  3. Gather evidence & score its reliability. Design experiments, run pilots, or collect market intelligence.
  4. Update numerically (or at least directionally). Even simple scoring (–2 to +2) keeps the process disciplined if formal maths feels heavy.
  5. Act or iterate. Compare the new posterior to your decision thresholds. If still uncertain, design the next evidence-generating loop.
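Steps 2–5 of the playbook can be sketched as a short loop. Everything here is hypothetical: the 25 % prior, the two pieces of evidence and their likelihood ratios, and the 70 %/10 % go/no-go thresholds are made-up numbers standing in for a real elicitation exercise.

```python
def update(prior, likelihood_ratio):
    """Multiply prior odds by a likelihood ratio; return the posterior probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Step 2: a written-down prior for "pilot hits $10m ARR inside 24 months".
prior = 0.25

# Step 3: evidence, each with an assumed likelihood ratio
# (>1 supports the hypothesis, <1 undermines it).
evidence = [
    ("strong pilot sign-ups", 3.0),
    ("weak conversion to paid", 0.6),
]

# Step 4: update numerically, one piece of evidence at a time.
belief = prior
for label, lr in evidence:
    belief = update(belief, lr)
    print(f"after {label}: {belief:.0%}")

# Step 5: act or iterate, against thresholds agreed before the data arrived.
GO, NO_GO = 0.70, 0.10
decision = "go" if belief >= GO else "no-go" if belief <= NO_GO else "gather more evidence"
print(f"decision: {decision}")
```

With these numbers the posterior lands between the two thresholds, so the disciplined answer is neither “go” nor “kill” but “design the next evidence-generating loop”.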

5. Corporate Case Study – Netflix & Original Content

  • 2011 prior belief: In-house originals might boost retention but represented untested creative risk. Prior probability that a flagship show would materially lower churn: ~30 %.
  • Evidence wave 1: Big-data analysis of viewer clusters showed strong latent demand for political thrillers + Kevin Spacey fan overlap. Likelihood of success under a “House of Cards” concept much higher than generic drama. Posterior rose to ~55 %.
  • Evidence wave 2: Pilot marketing test trailers yielded record click-throughs; competitor bidding dynamics signalled urgency. Posterior >80 %.
  • Decision: Green-light $100 m two-season order, pivoting Netflix strategy toward originals—eventually transforming the industry.
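The 30 % / 55 % / >80 % figures above come from the narrative; the implied strength of each evidence wave can be backed out from them. The likelihood ratios below are derived for illustration, not numbers Netflix ever reported.

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def implied_lr(prior, posterior):
    """Likelihood ratio that would move a prior to a given posterior."""
    return odds(posterior) / odds(prior)

lr1 = implied_lr(0.30, 0.55)  # wave 1: viewer-cluster analysis
lr2 = implied_lr(0.55, 0.80)  # wave 2: trailer tests + bidding dynamics
print(f"implied evidence strength: wave 1 ≈ {lr1:.1f}x, wave 2 ≈ {lr2:.1f}x")
```

Reverse-engineering your own past decisions this way is a useful calibration exercise: it reveals how strong you implicitly believed the evidence was, and whether that weighting survives scrutiny.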

6. Common Pitfalls

  • Over-anchoring on priors – Schedule explicit “prior re-sets” at milestones; invite dissenting voices.
  • Data myopia (ignoring outside base rates) – Start with outside-view statistics before adjusting with inside info.
  • False precision – Use ranges (e.g., 40-60 %) when data quality is low; avoid spurious decimals.
  • Confirmation loops – Build dashboards that surface disconfirming evidence by default.

7. Action Checklist

  • Capture one key prior for each strategic initiative in your OKR system.
  • Add a column for “evidence weight” in decision logs.
  • Train product and finance teams in basic Bayesian language.
  • Hold quarterly “belief-update” sessions—red-team style.
  • Reward teams for updating early, not for always being right at the start.

8. Quick Self-Assessment

  1. Can you name the implicit priors behind your next board paper?
  2. Do your dashboards flag how much a KPI change should shift beliefs—or just show the change?
  3. When was the last time you publicly lowered confidence in a pet project?

9. Reflection Prompts for Leadership Teams

  • Which long-held assumption about our customers feels least examined by fresh data?
  • How could we create a “minimum viable dataset” to test it in the next 30 days?
  • What decision thresholds would trigger a pivot—and are they written down?

10. Further Reading

  1. “The Signal and the Noise” – Nate Silver – Accessible intro with business examples.
  2. “Superforecasting” – Tetlock & Gardner – Practical methods for probability calibration.
  3. “Bayesian Statistics the Fun Way” – Will Kurt – Gentle quantitative walkthroughs.
  4. McKinsey Quarterly: “Bias Busters—Updating Beliefs” (2023) – Short case vignettes.
  5. Harvard Business Review: “A Better Way to Think About Risk” – Applying Bayes to corporate strategy.

Closing Thought

Bayesian Updating isn’t about perfect predictions—it’s about becoming less wrong faster than the competition. In an era where agility trumps certainty, the executive who updates early and often turns uncertainty into strategic edge.

Murray Slatter

Strategy, Growth, and Transformation Consultant
