Introduction to Probability (Part 1)
Learn outcomes, events, distributions, conditional probability, Bayes rule, and independence, with simple examples and diagrams that build a solid foundation for statistics and machine learning.
In mathematics, probability gives us a precise way to reason about uncertainty, allowing us to describe and manipulate uncertain outcomes using clear, consistent rules. In machine learning, statistics, and probabilistic modelling, nearly every concept builds on these foundations, from estimating unknown quantities and making predictions to learning from data and quantifying uncertainty in model outputs.
This post builds our understanding of probability theory from the ground up. We focus on formal mathematical rules, explaining each with simple examples and diagrams.
By the end, you will be comfortable with the fundamentals of probability theory, capable of reading and manipulating mathematical notation, and understand how these ideas form a single, consistent structure that underpins our current tools.
Definitions of Outcomes, Events, and Event Spaces
Before using probability terminology for reasoning and decision-making, we must first define uncertainty. Uncertainty arises with imperfect or unknown information, making outcomes unpredictable.
An experiment is any process with an unknown outcome. Tossing a coin, rolling a die, or drawing a card from a deck are typical examples. These are called experiments because of their randomness. Before performing the experiment, we do not know which outcome will occur.
The outcome space of such an experiment is denoted by $\Omega$, and is the set of all possible outcomes of the experiment. For a fair six-sided die, $\Omega = \{1, 2, 3, 4, 5, 6\}$.
Each element of $\Omega$ represents one complete, mutually exclusive description of the outcome. To illustrate this concept, consider a simple thought experiment: imagine a die with faces numbered $1$ to $6$. It is impossible for the die to land on two different numbers simultaneously; if the die shows a $3$, it cannot, at the same time, show a $5$. This mutual exclusivity is crucial in defining each element in $\Omega$ as representing exactly one outcome. When the experiment is run, exactly one outcome in $\Omega$ occurs.
Importantly, an event is not a single outcome, but a set of outcomes. An event tells us about the result of the experiment. All of the following are examples of events in the dice rolling experiment. Each statement corresponds to an actual result:
- “The die shows an even number” corresponds to the set $\{2, 4, 6\}$.
- “The die shows a six” corresponds to the set $\{6\}$.
- “The die shows a number greater than four” corresponds to the set $\{5, 6\}$.
The empty event, denoted $\emptyset$, contains no outcomes and represents a statement that can never be true for that experiment.
At this point, it is important to specify which sets of outcomes we treat as meaningful and why. In principle, any subset of the outcome space could be considered an event, but in more complex settings, not all subsets are practical or relevant for our purposes. To address this, we introduce the event space $\mathcal{F}$, a collection of events. The event space specifies exactly the sets of outcomes to which we assign probabilities and reflects the questions we are prepared to model and reason about.
For simple, finite cases like dice rolls or cards, the event space usually contains all subsets of $\Omega$. That is, every subset of the outcome space is a meaningful event to which we want to assign a probability. In complex situations, however, $\mathcal{F}$ is chosen more carefully to exclude meaningless questions.
The event space must satisfy three basic properties:
- It contains the empty event $\emptyset$ and the trivial event $\Omega$.
- It is closed under union: if $A$ and $B$ are events, then $A \cup B$ is also an event.
- It is closed under complementation: if $A$ is an event, then $A^{c} = \Omega \setminus A$ is also an event.
These requirements ensure that once we decide which questions are meaningful, we can also ask subsequent questions. If we can ask whether event $A$ happened and whether event $B$ happened, we must also be able to ask whether at least one of them happened, or whether event $A$ did not happen. Closure also guarantees that probability theory is logically stable under such reasoning.
To help solidify this concept, imagine an event space that is not closed under basic operations such as unions or complements. You might define two valid events, yet find that combining them produces a set that is no longer considered an event. This leads to an immediate problem: probabilities cannot be assigned consistently. Statements such as "$A$ or $B$ occurs" or "$A$ does not occur" become ill-defined, even though they arise naturally in reasoning. Closure ensures that whenever we form such combinations, the result remains within the event space, allowing probability to behave coherently and avoiding contradictions.
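To make the closure requirements concrete, here is a minimal Python sketch (an illustration written for this post, not part of any standard library) that checks whether a candidate collection of events over a finite outcome space satisfies the three properties:

```python
from itertools import combinations

def is_event_space(outcomes, events):
    """Check the three event-space properties over a finite outcome space."""
    omega = frozenset(outcomes)
    events = {frozenset(e) for e in events}
    # Must contain the empty event and the trivial event.
    if frozenset() not in events or omega not in events:
        return False
    # Closed under complementation.
    if any(omega - e not in events for e in events):
        return False
    # Closed under pairwise union.
    return all(a | b in events for a, b in combinations(events, 2))

omega = {1, 2, 3, 4, 5, 6}
evens = {2, 4, 6}
good = [set(), omega, evens, omega - evens]  # a valid event space
bad = [set(), omega, evens]                  # missing the complement of evens
```

Here `is_event_space(omega, good)` returns `True`, while `is_event_space(omega, bad)` returns `False` because the complement of the even outcomes is missing.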
Probability Distributions and Axioms
Once we have decided which events we care about, we can quantify their likelihood of occurring. To do that, we first need to understand probability distributions and some of the axioms (fundamental statements or rules accepted as true without proof) of probability. Think of these axioms like the rules of a game: they provide the structure within which everything else operates.
A probability distribution assigns a number to each event in a way that reflects how plausible that event is. Importantly, these numbers are not arbitrary. They must obey a small set of basic rules that prevent contradictions and ensure that probabilities behave sensibly.
Formally, a probability distribution over $\Omega$ is a function $P$ that maps each event in the event space $\mathcal{F}$ to a real number, subject to the following axioms:
$P(A) \ge 0$ for every event $A$, $P(\Omega) = 1$, and if $A$ and $B$ are disjoint events, $P(A \cup B) = P(A) + P(B)$.
The first axiom rules out negative probabilities. The second fixes the probability of the entire outcome space to one, expressing the fact that the experiment must produce some outcome in $\Omega$. The third axiom states that if two events are disjoint, the probability that either occurs is the sum of their probabilities, since there is no overlap to count twice.
Together, these axioms are minimal but sufficient. They do not aim to describe every aspect of probability, only to guarantee internal consistency. With no redundancy and no extra assumptions, they provide just enough structure to support the whole theory.
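As a quick sanity check, we can sketch the fair-die distribution in Python by counting outcomes, and confirm that the three axioms hold (a minimal illustration, using exact fractions to avoid rounding):

```python
from fractions import Fraction

OMEGA = frozenset({1, 2, 3, 4, 5, 6})

def prob(event):
    """Probability of an event under a fair-die distribution: |A| / |Omega|."""
    return Fraction(len(frozenset(event) & OMEGA), len(OMEGA))

A = {2, 4, 6}  # even
B = {1, 3, 5}  # odd; disjoint from A

# Axiom 1: non-negativity.
assert prob(A) >= 0
# Axiom 2: the whole outcome space has probability one.
assert prob(OMEGA) == 1
# Axiom 3: additivity for disjoint events.
assert prob(A | B) == prob(A) + prob(B)
```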
Useful consequences follow from these axioms. In particular, since $\emptyset$ and $\Omega$ are complements, $P(\emptyset) = 1 - P(\Omega) = 0$.
This means that the empty event has probability zero: it can never occur.
Furthermore, for any two events $A$ and $B$, whether disjoint or not, the probability that either or both occur can be calculated as $$P(A \cup B) = P(A) + P(B) - P(A \cap B).$$
This formula arises because outcomes in $A \cap B$ are counted twice: once in $P(A)$ and once in $P(B)$. Subtracting $P(A \cap B)$ removes the duplicate contribution.
To illustrate why this is necessary, consider the following example:
For a fair die, let $A = \{2, 4, 6\}$ (an even number) and $B = \{4, 5, 6\}$ (a number greater than three). Then $P(A) = \tfrac{1}{2}$, $P(B) = \tfrac{1}{2}$, and $P(A \cap B) = P(\{4, 6\}) = \tfrac{1}{3}$. Applying the formula gives $P(A \cup B) = \tfrac{1}{2} + \tfrac{1}{2} - \tfrac{1}{3} = \tfrac{2}{3}$, which corresponds to the four outcomes $\{2, 4, 5, 6\}$.
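The inclusion-exclusion formula above can be verified directly by counting outcomes, as in this small sketch (the event sets mirror the die example, with $B$ taken as the outcomes greater than three):

```python
from fractions import Fraction

OMEGA = frozenset(range(1, 7))

def prob(event):
    # Fair die: probability is the fraction of outcomes in the event.
    return Fraction(len(frozenset(event)), len(OMEGA))

A = frozenset({2, 4, 6})  # even number
B = frozenset({4, 5, 6})  # greater than three

lhs = prob(A | B)                          # direct count of the union
rhs = prob(A) + prob(B) - prob(A & B)      # inclusion-exclusion
```

Both sides evaluate to $2/3$, matching the four outcomes in the union.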
Interpreting Probability
So far, we have defined probability as a numerical system with rules that guide our questions and answers. To understand what probabilities represent, we rely on frequentist or subjective interpretations. A practical decision, such as betting on whether it will rain tomorrow, can highlight the differences between these interpretations.
The frequentist interpretation defines probability in terms of long-run behaviour. If we repeat an experiment many times under identical conditions, the probability of an event is the fraction of times it occurs as the number of repetitions increases. In our rainfall example, a frequentist might consider historical weather data to estimate the probability of rain.
Frequentist probabilities are calculated as follows: if an event $A$ occurs $n_A$ times in $n$ independent repetitions of the experiment, we estimate $$P(A) \approx \frac{n_A}{n}.$$
As $n$ increases, the empirical frequency $\frac{n_A}{n}$ should stabilise and converge to a fixed value that we call the probability of the event $A$.
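This convergence is easy to observe in simulation. The following sketch estimates the probability that a fair die shows an even number; the estimate drifts toward $0.5$ as the number of trials grows (the helper name is ours, and the seed is fixed only for reproducibility):

```python
import random

def empirical_frequency(trials, seed=0):
    """Estimate P(die shows an even number) by simulation."""
    rng = random.Random(seed)
    hits = sum(rng.randint(1, 6) % 2 == 0 for _ in range(trials))
    return hits / trials

# The estimate should approach 0.5 as the number of trials grows.
for n in (100, 10_000, 100_000):
    print(n, empirical_frequency(n))
```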
On the other hand, the subjective interpretation measures the degree of belief an agent holds based on current information. It expresses how plausible an event seems, not how often it would occur if repeated. In the context of predicting tomorrow’s rain, a subjective interpretation might include the meteorologist's insights or recent changes in atmospheric conditions that are not captured in the historical data.
While it may seem that this interpretation is neither useful nor mathematically sound, it is essential when repeating an experiment is impossible or meaningless, such as predicting tomorrow’s weather or assessing whether a specific system will fail.
Both interpretations use the same mathematical rules. The frequentist view relies on repetition. The subjective view treats probability as a tool for reasoning about uncertainty, allowing beliefs to update in light of new information. Humans use this reasoning all the time, even when we do not state beliefs or assign numerical values to them.
Conditional Probability
In many situations, we gain partial information about possible outcomes before an experiment is complete, which, in turn, affects the probabilities we assign to events. Conditional probability formalises how these probabilities change once we restrict attention to outcomes consistent with the partial information we have obtained.
For events $A$ and $B$, the conditional probability of $A$ given $B$ is defined as $$P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad P(B) > 0.$$
This definition may be better understood graphically. The event $B$ restricts the outcome space to a smaller region. Within that restricted space, we ask what fraction of outcomes also belongs to $A$. Consider the following example, in which we calculate the probability that a card drawn from a standard deck is a heart, given that it is red.
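The card example can be worked through by enumerating the deck directly, a sketch of the definition in code (the deck construction here is our own illustration):

```python
from fractions import Fraction
from itertools import product

RANKS = range(1, 14)
SUITS = ["hearts", "diamonds", "clubs", "spades"]
DECK = [(rank, suit) for rank, suit in product(RANKS, SUITS)]

# Conditioning on "red" restricts the outcome space to 26 cards.
red = [card for card in DECK if card[1] in ("hearts", "diamonds")]
hearts_and_red = [card for card in red if card[1] == "hearts"]

# P(heart | red) = P(heart and red) / P(red)
p = Fraction(len(hearts_and_red), len(DECK)) / Fraction(len(red), len(DECK))
```

Since 13 of the 26 red cards are hearts, the result is $\tfrac{13/52}{26/52} = \tfrac{1}{2}$.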
The Chain Rule and Bayes' Rule
Conditional probability is useful as it allows us to break down joint events (two or more events happening at the same time) into separate, sequential pieces. Specifically, this can be achieved through the chain rule, which splits a joint probability into more granular, useful components.
For two events $A$ and $B$, the chain rule can be applied as $$P(A \cap B) = P(A \mid B)\,P(B).$$
This follows directly from the definition of conditional probability (the denominator $P(B)$ is just brought over to the other side of the equation!). The probability that both events occur is the probability that $B$ occurs, multiplied by the probability that $A$ occurs within the subset of outcomes where $B$ is true.
The chain rule can also be applied to scenarios with more than two joint events. For example, for three events $A$, $B$, and $C$, $$P(A \cap B \cap C) = P(A)\,P(B \mid A)\,P(C \mid A \cap B).$$
This decomposition can be extended to any number of events and provides a systematic way to construct complex probabilities from simpler conditional components.
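The three-event chain rule can be checked numerically on die events, as in this sketch (the event sets are our own choices for illustration):

```python
from fractions import Fraction

OMEGA = frozenset(range(1, 7))

def prob(event):
    return Fraction(len(event), len(OMEGA))

def cond(a, given):
    """Conditional probability P(a | given) for a fair die."""
    return prob(a & given) / prob(given)

A = frozenset({2, 4, 6})     # even
B = frozenset({4, 5, 6})     # greater than three
C = frozenset({1, 2, 3, 4})  # at most four

# P(A ∩ B ∩ C) = P(A) · P(B | A) · P(C | A ∩ B)
lhs = prob(A & B & C)
rhs = prob(A) * cond(B, A) * cond(C, A & B)
```

Both sides equal $1/6$, the probability of rolling a 4, the only outcome in all three events.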
Bayes' Rule
One of the most useful consequences of the chain rule is Bayes' rule. The key idea is that the same joint probability can be written in two different ways. From the chain rule, we have both $P(A \cap B) = P(A \mid B)\,P(B)$ and $P(A \cap B) = P(B \mid A)\,P(A)$.
Since these expressions describe the same event, they must be equal. Solving this equality for $P(A \mid B)$ gives Bayes' rule, which shows how to update the probability of $A$ (the hypothesis) after observing $B$ (the evidence), resulting in $$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}.$$
In Bayes' rule, $P(A)$ is called the prior probability. It represents how likely we believe event $A$ is before observing $B$, based only on the information available up to that point. The likelihood term $P(B \mid A)$ measures how compatible the observation $B$ is with the assumption that $A$ is true. Together, these two quantities form the numerator and express how strongly $A$ explains the observed event $B$.
The denominator $P(B)$ acts as a normalising constant. It accounts for all possible ways in which $B$ could occur and rescales the numerator so that the resulting value of $P(A \mid B)$ lies between 0 and 1. In this way, Bayes' rule converts an unnormalised score into a valid probability that reflects our updated belief about $A$ after observing $B$.
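Here is a short sketch of a Bayesian update in the spirit of the rainfall example. All the numbers are hypothetical, chosen only to show the mechanics of prior, likelihood, and normalising constant:

```python
from fractions import Fraction

# Hypothetical numbers, chosen only to illustrate the mechanics.
p_rain = Fraction(3, 10)               # prior: P(rain)
p_cloudy_given_rain = Fraction(9, 10)  # likelihood: P(cloudy morning | rain)
p_cloudy_given_dry = Fraction(2, 10)   # P(cloudy morning | no rain)

# Normalising constant: P(cloudy), summed over both ways it can happen.
p_cloudy = (p_cloudy_given_rain * p_rain
            + p_cloudy_given_dry * (1 - p_rain))

# Bayes' rule: posterior P(rain | cloudy).
p_rain_given_cloudy = p_cloudy_given_rain * p_rain / p_cloudy
```

Observing a cloudy morning raises the belief in rain from the prior $3/10$ to the posterior $27/41$ (about $0.66$).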
Although I can try to explain this here in great detail, the very best explanation I have found for Bayes' rule is 3blue1brown's YouTube video. Please watch it and support him — he has done so much for the mathematics and machine learning community!
Random Variables
Reasoning directly about sets of outcomes quickly becomes tedious, and we need a more succinct way of writing about probabilities. This is exactly where random variables are useful. Specifically, random variables provide a numerical representation of uncertainty, simplifying both notation and analysis.
It is crucial to distinguish between the random outcome (e.g., the physical result of tossing a coin) and the variable $X$, which is a numeric representation of that outcome. This separation helps prevent misconceptions by framing the random event as one thing and its mathematical mapping as another.
A random variable $X$ can be defined as a function $X: \Omega \to \mathbb{R}$ that maps each outcome in $\Omega$ to a real number. For a coin toss, this can be written as $$X(\text{heads}) = 1, \qquad X(\text{tails}) = 0.$$
The randomness in a "random variable" originates in the experiment's outcomes, since it is the experiment that produces random outcomes, not the function or mapping itself. The random variable simply records the outcome numerically.
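This separation between the random experiment and the deterministic mapping is easy to see in code, as in this small sketch of the coin-toss variable:

```python
import random

# A random variable is just a function from outcomes to numbers.
def X(outcome):
    return 1 if outcome == "heads" else 0

# The randomness lives in the experiment, not in X itself.
rng = random.Random(42)
outcome = rng.choice(["heads", "tails"])  # the random part
value = X(outcome)                        # a deterministic mapping of it
```

Calling `X` on the same outcome always gives the same number; only `outcome` is random.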
Once outcomes are expressed numerically, we can finally define probabilities, relationships between variables, and summary quantities such as expected values and variances in a uniform way.
When working with multiple random variables (as in conditional probabilities), we need to describe how they behave both individually and together. This is done through joint, marginal, and conditional distributions.
The joint distribution of two random variables $X$ and $Y$ assigns probabilities to pairs of values. In joint distributions, we are interested in the probability that $X$ takes the specific value $x$ at the same time that $Y$ takes the specific value $y$, which is written as $P(X = x, Y = y)$.
This distribution fully characterises the system. From it, we can recover the behaviour of each variable separately through marginalisation. Marginalisation can be achieved by summing over all possible values of the other variable, effectively collapsing the joint distribution down to a distribution over a single variable by accounting for every possible value the other variable could take on, and is mathematically written as $$P(X = x) = \sum_{y} P(X = x, Y = y).$$
Joint and marginal distributions can be easily understood using a table of discrete probabilities. For a fair die, let $X$ be whether a roll is even or odd, and let $Y$ be whether the roll is small ($\{1, 2, 3\}$) or large ($\{4, 5, 6\}$). The probability table is given below.

| | $Y = \text{small}$ | $Y = \text{large}$ |
|---|---|---|
| $X = \text{even}$ | $1/6$ | $2/6$ |
| $X = \text{odd}$ | $2/6$ | $1/6$ |
This table can be interpreted as follows:
- The entry $P(X = \text{even}, Y = \text{small}) = 1/6$ corresponds to rolling a 2.
- The entry $P(X = \text{even}, Y = \text{large}) = 2/6$ corresponds to rolling a 4 or 6.
- Each row sum gives a marginal distribution of $X$.
- Each column sum gives a marginal distribution of $Y$.
- The total probability sums to 1, as required.
In addition to these distributions, conditional distributions describe how one variable behaves once the value of another is known, and can be calculated as $$P(X = x \mid Y = y) = \frac{P(X = x, Y = y)}{P(Y = y)}.$$
For the example above, if we were to condition on the die roll being large, the probability that the roll is even can be calculated as $$P(X = \text{even} \mid Y = \text{large}) = \frac{P(X = \text{even}, Y = \text{large})}{P(Y = \text{large})} = \frac{2/6}{3/6} = \frac{2}{3},$$
which makes sense, since two out of three possible large rolls are even (4 and 6), but not 5.
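The whole table, its marginals, and the conditional distribution can be captured in a few lines of Python, a sketch whose helper names are our own:

```python
from fractions import Fraction

F = Fraction
# Joint distribution P(X, Y) for a fair die:
# X in {even, odd}, Y in {small, large} with small = {1,2,3}, large = {4,5,6}.
joint = {
    ("even", "small"): F(1, 6),  # {2}
    ("even", "large"): F(2, 6),  # {4, 6}
    ("odd", "small"): F(2, 6),   # {1, 3}
    ("odd", "large"): F(1, 6),   # {5}
}

def marginal_x(x):
    """Marginalise out Y by summing over all of its values."""
    return sum(p for (xv, _), p in joint.items() if xv == x)

def marginal_y(y):
    return sum(p for (_, yv), p in joint.items() if yv == y)

def cond_x_given_y(x, y):
    """P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y)."""
    return joint[(x, y)] / marginal_y(y)
```

For example, `cond_x_given_y("even", "large")` recovers the $2/3$ computed above, and the joint probabilities sum to 1 as required.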
Independence and Conditional Independence
When working with multiple events, such as in joint and conditional distributions, an important question is whether variables genuinely influence one another, or whether their apparent relationship disappears once we account for additional information. This leads to the ideas of independence and conditional independence.
Independence means that knowing the outcome of one event tells us nothing about the outcome of another. Two events $A$ and $B$ are independent if $$P(A \cap B) = P(A)\,P(B).$$
Equivalently, $P(A \mid B) = P(A)$ whenever $P(B) > 0$. Learning that $B$ occurred does not change the probability of $A$.
For random variables $X$ and $Y$, independence means that their joint distribution can be factorised as $$P(X = x, Y = y) = P(X = x)\,P(Y = y) \quad \text{for all } x \text{ and } y.$$
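We can test this factorisation condition on the even/odd versus small/large die example from the previous section, as in this sketch (helper names are our own):

```python
from fractions import Fraction
from itertools import product

F = Fraction
joint = {
    ("even", "small"): F(1, 6), ("even", "large"): F(2, 6),
    ("odd", "small"): F(2, 6), ("odd", "large"): F(1, 6),
}

def marginal(axis, value):
    # axis 0 marginalises out Y; axis 1 marginalises out X.
    return sum(p for k, p in joint.items() if k[axis] == value)

def independent(joint):
    """X and Y are independent iff the joint factorises for every pair."""
    return all(
        joint[(x, y)] == marginal(0, x) * marginal(1, y)
        for x, y in product(("even", "odd"), ("small", "large"))
    )
```

Here `independent(joint)` returns `False`: for instance, $P(\text{even}, \text{small}) = 1/6$, but the product of the marginals is $1/2 \cdot 1/2 = 1/4$, so parity and size of a die roll are not independent.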
Conditional independence weakens the notion of independence by allowing dependence to disappear once additional information is taken into account. Two variables $X$ and $Y$ are conditionally independent given a third variable $Z$ if, after fixing the value of $Z$, knowing $X$ provides no further information about $Y$, and vice versa. In this setting, any apparent relationship between $X$ and $Y$ is fully explained by their shared dependence on $Z$.
Mathematically expressed, $X$ and $Y$ are conditionally independent given $Z$ if $$P(X = x, Y = y \mid Z = z) = P(X = x \mid Z = z)\,P(Y = y \mid Z = z) \quad \text{for all } x, y, \text{ and } z.$$
Conclusion
In this post, we explored and understood basic probability theory from first principles. We started by defining outcomes, events, and event spaces, then introduced probability distributions through a small set of axioms that ensure consistency. From these foundations, we developed conditional probability, the chain rule, Bayes' rule, and the language of random variables and distributions, showing how joint, marginal, and conditional behaviour fit into a single framework. Independence and conditional independence then clarified when variables genuinely interact and when apparent relationships disappear once relevant information is taken into account.
Together, these ideas form the core of probability theory. In the next part, we will extend this framework by learning how to query probability distributions directly, move beyond discrete outcomes to continuous spaces, and introduce expectation and variance as tools for summarising and reasoning about random variables.

