# 2.3: Independent Events

In this section we consider a property of events that relates to conditional probability, namely *independence*. First, we define what it means for a pair of events to be independent, and then we consider collections of more than two events.

## Independence for Pairs of Events

We first give an intuitive definition of independence for two events, and then look at an example that provides a computational way to determine when events are independent.

### Definition \(\PageIndex{1}\)

Events \(A\) and \(B\) are *independent* if knowing that one occurs does not affect the probability that the other occurs, i.e.,

$$P(A\ |\ B) = P(A) \quad\text{and}\quad P(B\ |\ A) = P(B). \label{indep}$$

Using the definition of conditional probability (Definition 2.2.1), we can derive an alternative to Equations \ref{indep} for determining when two events are independent, as the following example demonstrates.

### Example \(\PageIndex{1}\)

Suppose that events \(A\) and \(B\) are independent. We rewrite equations \ref{indep} using the definition of conditional probability:

\begin{align}
P(A\ |\ B) = P(A) \quad & \Rightarrow\quad \frac{P(A\cap B)}{P(B)} = P(A) \\
& \text{and} \notag \\
P(B\ |\ A) = P(B) \quad & \Rightarrow\quad \frac{P(A\cap B)}{P(A)} = P(B)
\end{align}

In each expression on the right-hand side above, we isolate \(P(A\cap B)\):

\begin{align}
\frac{P(A\cap B)}{P(B)} = P(A) \quad & \Rightarrow\quad P(A\cap B) = P(A)P(B) \\
& \text{and} \notag \\
\frac{P(A\cap B)}{P(A)} = P(B) \quad & \Rightarrow\quad P(A\cap B) = P(A)P(B)
\end{align}

Both expressions result in \(P(A\cap B) = P(A)P(B)\). Thus, we have shown that if events \(A\) and \(B\) are independent, then the probability of their intersection is equal to the product of their individual probabilities. We state this fact in the next definition.

### Definition \(\PageIndex{2}\)

Events \(A\) and \(B\) are *independent* if $$P(A\cap B) = P(A)P(B).$$

Definition 2.3.2 is generally an easier condition to verify than Definition 2.3.1 when checking whether two events are independent.
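Definition 2.3.2 is easy to check by computer on a finite sample space with equally likely outcomes. The sketch below is a hypothetical illustration, not from the text: one roll of a fair die, with \(A\) the event "roll is even" and \(B\) the event "roll is at most 4". Exact fractions avoid floating-point round-off.

```python
from fractions import Fraction

# Hypothetical example (not from the text): roll one fair die.
# A = "roll is even", B = "roll is at most 4".
omega = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
B = {1, 2, 3, 4}

def prob(event, sample_space):
    """Probability of an event under equally likely outcomes."""
    return Fraction(len(event & sample_space), len(sample_space))

p_a = prob(A, omega)        # 1/2
p_b = prob(B, omega)        # 2/3
p_ab = prob(A & B, omega)   # A ∩ B = {2, 4}, so 2/6 = 1/3

# Definition 2.3.2: A and B are independent iff P(A∩B) = P(A)P(B).
print(p_ab == p_a * p_b)  # True: 1/3 == (1/2)(2/3)
```

Here \(P(A\cap B) = 1/3 = P(A)P(B)\), so these two events are independent by Definition 2.3.2.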

### Example \(\PageIndex{2}\)

Consider the context of Exercise 2.2.1, where we randomly draw a card from a standard deck of 52 and \(C\) denotes the event of drawing a club, \(K\) the event of drawing a King, and \(B\) the event of drawing a black card.

Are \(C\) and \(K\) independent events? Recall that \(P(C\cap K) = 1/52\), and note that \(P(C) = 13/52\) and \(P(K) = 4/52\). Thus, we have

$$P(C\cap K) = \frac{1}{52} = P(C)P(K) = \frac{13}{52}\times\frac{4}{52},$$

indicating that \(C\) and \(K\) **are independent**.

Are \(C\) and \(B\) independent events? Recall that \(P(C\cap B) = 13/52\), and note that \(P(B) = 26/52\). Thus, we have

$$P(C\cap B) = \frac{13}{52} \neq P(C)P(B) = \frac{13}{52}\times\frac{26}{52},$$

indicating that \(C\) and \(B\) **are not independent**.

Let's think about the results of this example intuitively. To say that \(C\) and \(K\) are independent means that knowing that one of the events occurs does not affect the probability of the other event occurring. In other words, knowing that the card drawn is a King does not influence the probability of the card being a club. The proportion of clubs in the entire deck of 52 is the same as the proportion of clubs in just the collection of Kings: \(1/4\). On the other hand, \(C\) and \(B\) are not independent (i.e., they are *dependent*) because knowing that the card drawn is a club indicates that the card *must be black*, i.e., the probability that the card is black is 1. Alternatively, knowing that the card drawn is black increases the probability that the card is a club, since the proportion of clubs in the entire deck is \(1/4\), but the proportion of clubs in the collection of black cards is \(1/2\).
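The calculations in Example 2.3.2 can be verified by enumerating the deck and applying Definition 2.3.2 directly. This is a sketch, and the variable names are our own, not the text's:

```python
from fractions import Fraction
from itertools import product

# Enumerate a standard 52-card deck as (rank, suit) pairs.
ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['clubs', 'diamonds', 'hearts', 'spades']
deck = [(r, s) for r, s in product(ranks, suits)]

C = {card for card in deck if card[1] == 'clubs'}              # club drawn
K = {card for card in deck if card[0] == 'K'}                  # King drawn
B = {card for card in deck if card[1] in ('clubs', 'spades')}  # black card

def prob(event):
    """Probability under equally likely draws from the deck."""
    return Fraction(len(event), len(deck))

# C and K: P(C∩K) = 1/52 = (13/52)(4/52), so independent.
print(prob(C & K) == prob(C) * prob(K))   # True

# C and B: P(C∩B) = 13/52 ≠ (13/52)(26/52), so dependent.
print(prob(C & B) == prob(C) * prob(B))   # False
```

Note that the check for \(C\) and \(K\) succeeds exactly because \(\frac{13}{52}\times\frac{4}{52} = \frac{1}{52}\), matching the single card (the King of clubs) in the intersection.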

## Independence for 3 or More Events

For collections of 3 or more events, there are two different types of independence.

### Definition \(\PageIndex{3}\)

Let \(A_1, A_2, \ldots, A_k\), where \(k\geq3\), be a collection of events.

- The events are **pairwise independent** if every pair of events in the collection is independent.
- The events are **mutually independent** if every sub-collection of events \(A_{i_1}, A_{i_2}, \ldots, A_{i_n}\) satisfies the following:

$$P(A_{i_1}\cap A_{i_2}\cap \ldots \cap A_{i_n}) = P(A_{i_1})\times P(A_{i_2})\times \ldots\times P(A_{i_n})$$

Mutual independence is the stronger of the two, since it *implies* pairwise independence. But pairwise independence does NOT imply mutual independence, as the following example demonstrates.

### Example \(\PageIndex{3}\)

Consider again the context of Example 1.1.1, i.e., tossing a fair coin twice, and define the following events:

\begin{align*}
A &= \text{first toss is heads}\\
B &= \text{second toss is heads}\\
C &= \text{exactly one head is recorded}
\end{align*}

We show that this collection of events \(A\), \(B\), \(C\) is pairwise independent, but NOT mutually independent. First, we note that the individual probabilities of each event are \(0.5\):

\begin{align*}
P(A) &= P(\{hh, ht\}) = 0.5 \\
P(B) &= P(\{hh, th\}) = 0.5 \\
P(C) &= P(\{ht, th\}) = 0.5
\end{align*}

Next, we look at the probabilities of all pairwise intersections to establish pairwise independence:

\begin{align*}
P(A\cap B) &= P(hh) = 0.25 = P(A)P(B) \\
P(A\cap C) &= P(ht) = 0.25 = P(A)P(C) \\
P(B\cap C) &= P(th) = 0.25 = P(B)P(C)
\end{align*}

However, note that the three events do not have any outcomes in common, i.e., \(A\cap B\cap C = \varnothing\). Thus, we have

$$P(A\cap B\cap C) = 0 \neq P(A)P(B)P(C),\notag$$

and so the events are not mutually independent.
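Example 2.3.3 can be checked mechanically: test every pair for Definition 2.3.2, then test the triple intersection. This is a sketch of that computation with our own variable names:

```python
from fractions import Fraction
from itertools import combinations

# Two tosses of a fair coin; the four outcomes are equally likely.
omega = {'hh', 'ht', 'th', 'tt'}
A = {'hh', 'ht'}   # first toss is heads
B = {'hh', 'th'}   # second toss is heads
C = {'ht', 'th'}   # exactly one head is recorded

def prob(event):
    """Probability under equally likely outcomes in omega."""
    return Fraction(len(event), len(omega))

# Pairwise independence: every pair must factor as a product.
pairwise = all(prob(X & Y) == prob(X) * prob(Y)
               for X, Y in combinations([A, B, C], 2))

# Mutual independence additionally requires the triple to factor.
triple = prob(A & B & C) == prob(A) * prob(B) * prob(C)

print(pairwise)  # True
print(triple)    # False: P(A∩B∩C) = 0 but P(A)P(B)P(C) = 1/8
```

The `combinations` loop confirms all three pairwise products, while the triple check fails because \(A\cap B\cap C = \varnothing\), exactly as shown above.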