Bayesian Signal Updating

A small interactive page for exploring Bayesian belief updating with a binary state, a noisy binary signal, and live posterior calculations.

Signal structure

[Probability tree: the state is \( \theta=1 \) with probability \( \pi_0 \) and \( \theta=0 \) with probability \( 1-\pi_0 \); given \( \theta=1 \), the signal is \( s=1 \) with probability \( 1-\alpha \) and \( s=0 \) with probability \( \alpha \); given \( \theta=0 \), the signal is \( s=1 \) with probability \( \beta \) and \( s=0 \) with probability \( 1-\beta \).]
The state is \( \theta \in \{0,1\} \), where \( \theta=1 \) means “passed” and \( \theta=0 \) means “failed.” The signal is \( s \in \{0,1\} \), where \( s=1 \) is a signal in favor of passing and \( s=0 \) is a signal in favor of failing.
| | Signal \( s=1 \) | Signal \( s=0 \) |
| --- | --- | --- |
| True state \( \theta=1 \) (passed) | \( 1-\alpha \) | \( \alpha \) |
| True state \( \theta=0 \) (failed) | \( \beta \) | \( 1-\beta \) |

Controls

- Prior \( \pi_0 = \mathbb{P}(\theta=1) \): slider from 0% to 100% (default 50.0%).
- \( \alpha \) (false-negative rate, default 20.0%): \( \alpha = \mathbb{P}(s=0 \mid \theta=1) \), so \( 1-\alpha = \mathbb{P}(s=1 \mid \theta=1) \).
- \( \beta \) (false-positive rate, default 20.0%): \( \beta = \mathbb{P}(s=1 \mid \theta=0) \), so \( 1-\beta = \mathbb{P}(s=0 \mid \theta=0) \).
- Observed signal toggle, with live posteriors \( \pi_1(s=1)=\mathbb{P}(\theta=1\mid s=1) \) and \( \pi_1(s=0)=\mathbb{P}(\theta=1\mid s=0) \). At the default settings these are 80.0% and 20.0%, and the page displays the active posterior for the selected signal (e.g. \( \pi_1 = 80.0\% \) when \( s=1 \) is selected).

Setup

The environment is characterized by a binary outcome, corresponding to whether the decision-maker passes or fails the exam. Formally, the state is \( \theta \in \{0,1\} \), where \( \theta=1 \) means “passed” and \( \theta=0 \) means “failed.” The prior probability of passing is denoted by \( \pi_0 = \mathbb{P}(\theta=1) \), with \( \pi_0 \in (0,1) \).

Before making a decision, the decision-maker receives a binary signal \( s \in \{0,1\} \). We interpret \( s=1 \) as a signal in favor of passing, and \( s=0 \) as a signal in favor of failing. The signal is noisy rather than perfectly revealing, with likelihoods

$$ \mathbb{P}(s=1\mid \theta=1)=1-\alpha,\qquad \mathbb{P}(s=0\mid \theta=1)=\alpha, $$

$$ \mathbb{P}(s=1\mid \theta=0)=\beta,\qquad \mathbb{P}(s=0\mid \theta=0)=1-\beta, $$

where \( \alpha,\beta\in[0,1] \) are the false-negative and false-positive rates, respectively.
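This signal structure can be sketched as a small likelihood function. The code below is a minimal illustration, not part of the app; the function name is ours.

```python
# A sketch of the likelihood P(s | theta), parameterized by the
# false-negative rate alpha and the false-positive rate beta.

def likelihood(s, theta, alpha, beta):
    """Return P(s | theta) for binary signal s and binary state theta."""
    if theta == 1:
        # Correct "pass" signal with prob. 1 - alpha; false negative with prob. alpha.
        return 1 - alpha if s == 1 else alpha
    # False positive with prob. beta; correct "fail" signal with prob. 1 - beta.
    return beta if s == 1 else 1 - beta

# Sanity check: the two signal probabilities sum to 1 in each state.
for theta in (0, 1):
    total = likelihood(0, theta, 0.2, 0.2) + likelihood(1, theta, 0.2, 0.2)
    assert abs(total - 1) < 1e-12
```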

The likelihoods describe how informative the signal is about the true state of the world. They answer a simple question: if a given state were true, how likely would we be to observe a particular signal?
In this setting, two types of mistakes can occur. A false negative arises when the decision-maker actually passed the exam (\( \theta = 1 \)) but receives a “fail” signal (\( s=0 \)); this happens with probability \( \alpha \). A false positive arises when the decision-maker actually failed (\( \theta = 0 \)) but receives a “pass” signal (\( s=1 \)); this happens with probability \( \beta \).
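An intermediate step worth making explicit (added here, using only the definitions above) is the marginal probability of each signal, obtained by the law of total probability:

$$ \mathbb{P}(s=1) = (1-\alpha)\pi_0 + \beta(1-\pi_0), \qquad \mathbb{P}(s=0) = \alpha\pi_0 + (1-\beta)(1-\pi_0). $$

These marginals are exactly the denominators that appear in the posterior formulas of the next section.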

Posterior beliefs

By Bayes’ rule, the posterior probability of passing after signal \( s=1 \) is

$$ \pi_1(s=1) \equiv \mathbb{P}(\theta=1\mid s=1) = \frac{(1-\alpha)\pi_0}{(1-\alpha)\pi_0+\beta(1-\pi_0)}. $$

Similarly, the posterior probability of passing after signal \( s=0 \) is

$$ \pi_1(s=0) \equiv \mathbb{P}(\theta=1\mid s=0) = \frac{\alpha\pi_0}{\alpha\pi_0+(1-\beta)(1-\pi_0)}. $$
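The two posterior formulas are easy to sketch in code. This is an illustrative implementation of Bayes' rule for this setting, not the app's source; the function name is ours.

```python
# Posterior probability of passing, P(theta = 1 | s), by Bayes' rule.

def posterior_pass(pi0, alpha, beta, s):
    """pi0: prior P(theta=1); alpha: false-negative rate; beta: false-positive rate."""
    if s == 1:
        num = (1 - alpha) * pi0            # P(s=1 | theta=1) * P(theta=1)
        den = num + beta * (1 - pi0)       # + P(s=1 | theta=0) * P(theta=0)
    else:
        num = alpha * pi0                  # P(s=0 | theta=1) * P(theta=1)
        den = num + (1 - beta) * (1 - pi0) # + P(s=0 | theta=0) * P(theta=0)
    return num / den

# At the app's default settings (pi0 = 0.5, alpha = beta = 0.2):
print(posterior_pass(0.5, 0.2, 0.2, 1))  # 0.8
print(posterior_pass(0.5, 0.2, 0.2, 0))  # 0.2
```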

Use the sliders to see how the prior \( \pi_0 \), the false-negative rate \( \alpha \), and the false-positive rate \( \beta \) jointly determine posterior beliefs. In particular, a more accurate signal corresponds to lower values of both \( \alpha \) and \( \beta \).
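The effect of signal accuracy can also be seen numerically. The sweep below (an illustration, not part of the app) shrinks the symmetric error rate \( \alpha = \beta \) and shows the posteriors spreading away from the prior of 50%.

```python
# As alpha and beta both shrink, the posteriors move toward 1 and 0:
# the signal becomes more informative, so beliefs react more strongly.

def posterior(pi0, alpha, beta, s):
    """P(theta = 1 | s) by Bayes' rule."""
    num = (1 - alpha) * pi0 if s == 1 else alpha * pi0
    den = num + (beta if s == 1 else 1 - beta) * (1 - pi0)
    return num / den

for err in (0.4, 0.2, 0.05):  # symmetric error rates: alpha = beta = err
    hi = posterior(0.5, err, err, 1)
    lo = posterior(0.5, err, err, 0)
    print(f"alpha = beta = {err}: after s=1 -> {hi:.2f}, after s=0 -> {lo:.2f}")
```

With a uniform prior and symmetric errors, the posterior after \( s=1 \) is simply \( 1-\alpha \) and the posterior after \( s=0 \) is \( \alpha \), which the sweep confirms.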