4.4: Compound Events, Independence and Conditional Probability
So far, we’ve worked with probabilities where we are computing the likelihood of one event happening on its own. But in many real-world situations, we want to find the probability that one event happens given that we already know another has occurred. When one event influences or contains information about another, we call these compound events.
Understanding compound events leads us to a core concept in probability: conditional probability. This is the probability of one event happening given that another event has already occurred.
Definition: Conditional Probability
The conditional probability of event A given B is the probability that A occurs assuming that event B has already occurred. It is written as:
\[ P(A \mid B) \]
The conditional probability can be computed with the following formula, though in many cases you can reason it out directly without the formula:
\[ P(A \mid B) = \frac{P(A \cap B)}{P(B)} \quad \text{(assuming \( P(B) > 0 \))} \]
Real-World Example: Flu Testing
Imagine that 20% of people in a population have the flu. A rapid test is administered. Among those who have the flu, 90% test positive. Among those who do not have the flu, 10% still test positive (false positive).
If someone tests positive, what is the probability that they actually have the flu? That’s a conditional probability. It involves the relationship between two events:
- A = person has the flu
- B = person tests positive
We want to find: \( P(A \mid B) \), which asks: “Among people who tested positive, how many actually have the flu?”
These types of questions appear everywhere from healthcare to quality control to legal trials, and accurate answers often depend on understanding whether and how events relate to each other.
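One way to see how the conditional probability formula resolves this question is to compute it directly. Here is a minimal Python sketch using the numbers given above (the variable names are ours, not part of the text):

```python
# Flu-testing example: compute P(flu | positive) from the given rates.
p_flu = 0.20               # P(A): person has the flu
p_pos_given_flu = 0.90     # P(B | A): true positive rate
p_pos_given_no_flu = 0.10  # P(B | not A): false positive rate

# Total probability of testing positive, P(B), splitting over flu / no flu:
p_pos = p_pos_given_flu * p_flu + p_pos_given_no_flu * (1 - p_flu)

# Conditional probability formula: P(A | B) = P(A and B) / P(B)
p_flu_given_pos = (p_pos_given_flu * p_flu) / p_pos

print(round(p_flu_given_pos, 4))  # 0.6923
```

Even though 90% of flu cases test positive, only about 69% of positive tests come from people who actually have the flu, because the healthy majority contributes a substantial number of false positives.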
Statistical Independence
Sometimes two events have no influence on each other. In this case, knowing one occurs doesn’t change the probability of the other. We call this statistical independence.
Definition: Independent Events
Events A and B are independent if:
\[ P(A \mid B) = P(A) \]
\[\text{or equivalently:} \quad P(A \cap B) = P(A) \cdot P(B) \]
In other words: knowing that B happens does not change the probability of A happening.
Intuitively, you can ask: “Does one event happening change the probability that the other event happens?” If not, the events are independent.
Check Your Understanding: Dependent or Independent?
For each of the scenarios below, decide whether the two events are independent or dependent.
Question 1: You randomly select a marble from a bag, and then (without putting it back) select a second marble.
Are these events dependent or independent?
Question 2: You roll a die, and then flip a coin.
Are these events dependent or independent?
Question 3: You draw a card from a deck, note its suit, and then put it back. Later, you draw another card.
Are these events dependent or independent?
Question 4: You take a reading quiz, and then use your score to decide whether to go to tutoring.
Are these events dependent or independent?
Dice Example: Dependent Events
Suppose you roll two standard 6-sided dice (let’s call them Die 1 and Die 2). Each die will have a sample space of: {1, 2, 3, 4, 5, 6}. Let's consider the two possible events below:
- Event \(A\): “The sum of the two dice is 10”
- Event \(B\): “The first die shows a 4"
To determine whether these events are independent, we can check whether \(P(A \mid B) = P(A)\). You may already see intuitively whether this is true.
Let's first find \(P(A)\). This is a little trickier than finding the probability associated with a single die. By the multiplication rule from counting, there are 36 unique outcomes from the two dice rolls. There are exactly three ways for the two rolls to sum to 10: the first die rolls a 4 and the second a 6, denoted \((4, 6)\); both dice roll 5, denoted \((5, 5)\); or the first rolls a 6 and the second a 4, denoted \((6, 4)\). Note that order matters here, since we care about what the first die shows. With three of the 36 outcomes summing to 10, we have \(P(A) = 3/36 = 1/12\).
Let's next find \(P(A \mid B)\). This asks for the probability that the dice sum to 10 given that the first die shows a 4. If die 1 rolled a 4, then only a single roll of die 2 can lead to a sum of 10: the second die must be a 6. This is the outcome \((4, 6)\) listed above. Since there is a 1 in 6 chance of the second die showing a 6, \(P(A \mid B) = 1/6\).
Since \(P(A) \not = P(A\mid B)\), these events are not independent! Event \(A\) is influenced by knowing whether \(B\) has occurred!
While we have the dice out, let's investigate another question. Is \(P(A \mid B) = P(B\mid A)\)? Let's calculate the second conditional probability below.
Assume that we know \(A\) has occurred: the two dice have been rolled and sum to 10. Then there are only three possibilities, as listed above: \((4, 6)\), \((5, 5)\), and \((6, 4)\). Recall, these are the only possibilities because we are taking event \(A\) as known to have happened! Of these three possibilities, only one satisfies event \(B\). Therefore \(P(B \mid A) = 1/3\), which is not equal to \(P(A \mid B)\)!
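All three of the probabilities worked out above can be verified by brute-force enumeration of the 36 outcomes. Here is a short Python sketch (our own check, using exact fractions):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely (die 1, die 2) outcomes.
outcomes = list(product(range(1, 7), repeat=2))

A = [o for o in outcomes if sum(o) == 10]   # event A: sum is 10
B = [o for o in outcomes if o[0] == 4]      # event B: first die shows 4
both = [o for o in A if o in B]             # A and B together: only (4, 6)

p_A = Fraction(len(A), len(outcomes))   # P(A) = 3/36 = 1/12
p_A_given_B = Fraction(len(both), len(B))  # P(A | B) = 1/6
p_B_given_A = Fraction(len(both), len(A))  # P(B | A) = 1/3

print(p_A, p_A_given_B, p_B_given_A)  # 1/12 1/6 1/3
```

Note how each conditional probability is just a count within the reduced sample space: conditioning on \(B\) restricts attention to the 6 outcomes where the first die is 4, while conditioning on \(A\) restricts attention to the 3 outcomes that sum to 10.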
Summary
- Conditional probability: "What is the probability something happens given something else already did?"
- \(P(A \mid B) = P(A \text{ occurring assuming } B \text{ has already occurred})\)
- Independence: One event having occurred does not influence the probability of a different event occurring.
- If independent, \(P(A \mid B) = P(A)\) and \(P(B \mid A) = P(B)\)
- Multiplication rule: \[ P(A \cap B) = P(A \mid B)\cdot P(B) = P(B\mid A)\cdot P(A)\] which becomes \( P(A) \cdot P(B) \) only if A and B are independent.
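The multiplication rule can be checked numerically with the dice example worked out above (a small sketch of our own, using exact fractions):

```python
from fractions import Fraction

# Dice example: A = "sum is 10", B = "first die shows 4".
p_A = Fraction(1, 12)
p_B = Fraction(1, 6)
p_A_given_B = Fraction(1, 6)
p_B_given_A = Fraction(1, 3)
p_A_and_B = Fraction(1, 36)  # only the outcome (4, 6)

# Both forms of the multiplication rule agree:
print(p_A_and_B == p_A_given_B * p_B)  # True
print(p_A_and_B == p_B_given_A * p_A)  # True

# But A and B are dependent, so the independent-events shortcut fails:
print(p_A_and_B == p_A * p_B)          # False
```

The last line makes the caveat concrete: \(P(A) \cdot P(B) = 1/72\), not \(1/36\), precisely because \(A\) and \(B\) are not independent.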
Summary of Probability Rules (So Far)
Here's a growing list of the foundational probability rules and relationships we've learned so far. You’ll continue to use these rules throughout your study of probability and statistics.
- Range of Probability: The probability of any event A is a number between 0 and 1: \[ 0 \leq P(A) \leq 1 \]
- Complement Rule: The probability that event A does not occur is: \[ P(A^c) = 1 - P(A) \]
- Addition Rule (General): For any two events A and B: \[ P(A \cup B) = P(A) + P(B) - P(A \cap B) \]
- Addition Rule (Mutually Exclusive Events): If A and B are mutually exclusive (disjoint): \[ P(A \cup B) = P(A) + P(B) \]
- Conditional Probability: The probability of A given B: \[ P(A \mid B) = \frac{P(A \cap B)}{P(B)} \quad (P(B) > 0) \]
- Multiplication Rule (General): For any two events A and B: \[ P(A \cap B) = P(A) \cdot P(B \mid A) \]
- Multiplication Rule (Independent Events): If A and B are independent: \[ P(A \cap B) = P(A) \cdot P(B) \]
- Definition of Independence: Events A and B are independent if: \[ P(A \mid B) = P(A) \quad \text{or} \quad P(B \mid A) = P(B) \]
Think About It
Can you think of two events from your life or your studies that would be considered dependent? What about two that are truly independent?


