
5.1: Joint Distributions

Joint Distributions

    In this chapter we consider two or more random variables defined on the same sample space and discuss how to model their joint probability distribution. We begin by defining the joint cdf.

    Definition\(\PageIndex{1}\)

    The joint behavior of random variables \(X\) and \(Y\) is determined by the joint cumulative distribution function (cdf):
    $$F(x,y) = P(X\leq x\ \text{and}\ Y\leq y).\notag$$

    In the next two sections, we consider the two cases of random variables: discrete and continuous.



    Joint Distributions of Discrete Random Variables

    Having already defined the joint cdf, we begin by looking at the joint frequency function for two discrete random variables.

    Definition\(\PageIndex{2}\)

    If discrete random variables \(X\) and \(Y\) are defined on the same sample space \(\Omega\), then their joint frequency function is given by
    $$p(x,y) = P(X=x\ \text{and}\ Y=y).\notag$$


    In the discrete case, we can obtain the joint cdf of \(X\) and \(Y\) by summing up the joint frequency function:
    $$F(x,y) = \sum_{x_i \leq x} \sum_{y_j \leq y} p(x_i, y_j),\notag$$
    where \(x_i\) denotes possible values of \(X\) and \(y_j\) denotes possible values of \(Y\). From the joint frequency function, we can also obtain the individual probability distributions of \(X\) and \(Y\) separately as shown in the next definition.

    Definition\(\PageIndex{3}\)

    Suppose that discrete random variables \(X\) and \(Y\) have joint frequency function \(p(x,y)\). Let \(x_1, x_2, \ldots, x_i, \ldots\) denote the possible values of \(X\), and let \(y_1, y_2, \ldots, y_j, \ldots\) denote the possible values of \(Y\). The marginal frequency functions of \(X\) and \(Y\) are respectively given by the following:
    \begin{align*}
    p_X(x) &= \sum_j p(x, y_j) \quad(\text{fix}\ x,\ \text{sum over possible values of}\ Y) \\
    p_Y(y) &= \sum_i p(x_i, y) \quad(\text{fix}\ y,\ \text{sum over possible values of}\ X)
    \end{align*}
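    As an illustration of these marginal sums, here is a minimal Python sketch (the toy joint frequency function and the helper names are illustrative, not from the text): fixing one variable and summing over the other reproduces \(p_X\) and \(p_Y\).

```python
from fractions import Fraction

# A toy joint frequency function stored as {(x, y): p(x, y)}.
# (The values are illustrative; any dictionary of this shape works.)
p = {
    (0, 0): Fraction(1, 4), (0, 1): Fraction(1, 4),
    (1, 0): Fraction(1, 8), (1, 1): Fraction(3, 8),
}

def marginal_X(p, x):
    # p_X(x): fix x and sum p(x, y_j) over all possible values y_j of Y.
    return sum(prob for (xi, yj), prob in p.items() if xi == x)

def marginal_Y(p, y):
    # p_Y(y): fix y and sum p(x_i, y) over all possible values x_i of X.
    return sum(prob for (xi, yj), prob in p.items() if yj == y)

print(marginal_X(p, 0))  # 1/2
print(marginal_Y(p, 1))  # 5/8
```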

    Example \(\PageIndex{1}\):

    Consider again the probability experiment of Example 3.4.1, where we toss a fair coin three times and record the sequence of heads \((h)\) and tails \((t)\). Again, we let random variable \(X\) denote the number of heads obtained. We also let random variable \(Y\) denote the winnings earned in a single play of a game with the following rules, based on the outcomes of the probability experiment:

    • player wins $1 if first \(h\) occurs on the first toss
    • player wins $2 if first \(h\) occurs on the second toss
    • player wins $3 if first \(h\) occurs on the third toss
    • player loses $1 if no \(h\) occurs

    Note that the possible values of \(X\) are \(x=0,1,2,3\), and the possible values of \(Y\) are \(y=-1,1,2,3\). We represent the joint frequency function using a table:

    Table 1: Joint frequency function \(p(x,y)\) of \(X\) and \(Y\)

    \(p(x,y)\)     \(x=0\)    \(x=1\)    \(x=2\)    \(x=3\)
    \(y=-1\)        1/8        0          0          0
    \(y=1\)         0          1/8        2/8        1/8
    \(y=2\)         0          1/8        1/8        0
    \(y=3\)         0          1/8        0          0

    The values in Table 1 give the values of \(p(x,y)\). For example, consider \(p(0,-1)\):
    $$p(0,-1) = P(X=0\ \text{and}\ Y=-1) = P(ttt) = 1/8.\notag$$
    Since the outcomes are equally likely, the values of \(p(x,y)\) are found by counting the number of outcomes in the sample space that result in the specified values of the random variables, and then dividing by \(8\), the total number of outcomes. The sample space is given below, color coded to help explain the values of \(p(x,y)\):
    $$\Omega = \{{\color{green}hhh}, {\color{green}hht}, {\color{green}hth}, {\color{green}htt}, {\color{blue}thh}, {\color{blue}tht}, {\color{magenta}tth}, {\color{red} ttt}\}\notag$$

    Given the joint frequency function, we can now find the marginal frequency functions. Note that the marginal frequency function for \(X\) is found by computing sums of the columns in Table 1, and the marginal frequency function for \(Y\) corresponds to the row sums. (Note that we found the frequency function for \(X\) in Example 3.4.1 as well.)

    \(x\)          0      1      2      3
    \(p_X(x)\)     1/8    3/8    3/8    1/8

    \(y\)          -1     1      2      3
    \(p_Y(y)\)     1/8    1/2    1/4    1/8

    Finally, we can find the joint cdf for \(X\) and \(Y\) by summing over values of the joint frequency function. For example, consider \(F(1,1)\):
    $$F(1,1) = P(X\leq1\ \text{and}\ Y\leq1) = p(0,-1) + p(0,1) + p(1,-1) + p(1,1) = \frac{1}{8} + 0 + 0 + \frac{1}{8} = \frac{1}{4}.\notag$$
    Again, we can represent the joint cdf using a table:

    Table 2: Joint cdf \(F(x,y)\) of \(X\) and \(Y\)

    \(F(x,y)\)     \(x=0\)    \(x=1\)    \(x=2\)    \(x=3\)
    \(y=-1\)        1/8        1/8        1/8        1/8
    \(y=1\)         1/8        1/4        1/2        5/8
    \(y=2\)         1/8        3/8        3/4        7/8
    \(y=3\)         1/8        1/2        7/8        1
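    The entries in Tables 1 and 2 can be reproduced programmatically. The following Python sketch (an illustration, not part of the original text) enumerates the eight equally likely outcomes, tabulates \(p(x,y)\), and recovers the marginal frequency functions and \(F(1,1) = 1/4\).

```python
from fractions import Fraction
from itertools import product
from collections import defaultdict

# The 8 equally likely outcomes of three tosses of a fair coin.
outcomes = [''.join(seq) for seq in product('ht', repeat=3)]

def winnings(outcome):
    # Y: $1, $2, or $3 if the first head occurs on toss 1, 2, or 3; -$1 if no heads.
    return outcome.index('h') + 1 if 'h' in outcome else -1

# Joint frequency function p(x, y); each outcome has probability 1/8.
p = defaultdict(Fraction)
for out in outcomes:
    p[(out.count('h'), winnings(out))] += Fraction(1, 8)

# Marginals: column sums over y give p_X, row sums over x give p_Y.
p_X = {x: sum(v for (xi, _), v in p.items() if xi == x) for x in range(4)}
p_Y = {y: sum(v for (_, yj), v in p.items() if yj == y) for y in (-1, 1, 2, 3)}

# Joint cdf at (1, 1): sum p(x, y) over x <= 1 and y <= 1.
F_11 = sum(v for (x, y), v in p.items() if x <= 1 and y <= 1)

print([str(p_X[x]) for x in range(4)])       # ['1/8', '3/8', '3/8', '1/8']
print([str(p_Y[y]) for y in (-1, 1, 2, 3)])  # ['1/8', '1/2', '1/4', '1/8']
print(F_11)                                  # 1/4
```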




    Joint Distributions of Continuous Random Variables

    Definition\(\PageIndex{4}\)

    If continuous random variables \(X\) and \(Y\) are defined on the same sample space \(\Omega\), then their joint density function is a piecewise continuous function, denoted \(f(x,y)\), that satisfies the following.

    1. \(f(x,y)\geq0\), for all \((x,y)\in\mathbb{R}^2\)
    2. \(\displaystyle{\iint\limits_{\mathbb{R}^2}\! f(x,y)\, dx\, dy = 1}\)
    3. \(\displaystyle{P((X,Y)\in A) = \iint\limits_A\! f(x,y)\, dx\, dy}\), for any \(A\subseteq\mathbb{R}^2\)

    As an example of the third condition in Definition \(\PageIndex{4}\), in the continuous case the joint cdf of random variables \(X\) and \(Y\) is obtained by integrating the joint density function over a set \(A\) of the form
    $$A = \{(x,y)\in\mathbb{R}^2\ |\ x\leq a\ \text{and}\ y\leq b\},\notag$$
    where \(a\) and \(b\) are constants. Specifically, if \(A\) is given as above, then the joint cdf of \(X\) and \(Y\), at the point \((a,b)\), is given by
    $$F(a,b) = P(X\leq a\ \text{and}\ Y\leq b) = \int\limits^b_{-\infty}\int\limits^a_{-\infty}\! f(x,y)\, dx\, dy.\notag$$
    Note that probabilities for continuous jointly distributed random variables are now volumes instead of areas as in the case of a single continuous random variable.

    As in the discrete case, we can also obtain the individual probability distributions of \(X\) and \(Y\) from the joint density function.

    Definition\(\PageIndex{5}\)

    Suppose that continuous random variables \(X\) and \(Y\) have joint density function \(f(x,y)\). The marginal density functions of \(X\) and \(Y\) are respectively given by the following.
    \begin{align*}
    f_X(x) &= \int\limits^{\infty}_{-\infty}\! f(x, y)\,dy \quad(\text{fix}\ x,\ \text{integrate over possible values of}\ Y) \\
    f_Y(y) &= \int\limits^{\infty}_{-\infty}\! f(x, y)\,dx \quad(\text{fix}\ y,\ \text{integrate over possible values of}\ X)
    \end{align*}
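    For a concrete symbolic illustration of these integrals, the sketch below uses a hypothetical joint density \(f(x,y) = x + y\) on the unit square (chosen here only for illustration; it does not appear in the text) and integrates out one variable at a time with SymPy.

```python
import sympy as sp

x, y = sp.symbols('x y')

# A hypothetical joint density on the unit square (not from the text),
# chosen only to illustrate the marginal integrals above.
f = x + y

f_X = sp.integrate(f, (y, 0, 1))  # fix x, integrate out y  ->  x + 1/2
f_Y = sp.integrate(f, (x, 0, 1))  # fix y, integrate out x  ->  y + 1/2

print(f_X, f_Y)
print(sp.integrate(f_X, (x, 0, 1)))  # 1, so f is a valid density on the square
```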

    Example \(\PageIndex{2}\):

    Suppose a radioactive particle is contained in a unit square. We can define random variables \(X\) and \(Y\) to denote the \(x\)- and \(y\)-coordinates of the particle's location in the unit square, with the bottom left corner placed at the origin. Radioactive particles follow completely random behavior, meaning that the particle's location should be uniformly distributed over the unit square. This implies that the joint density function of \(X\) and \(Y\) should be constant over the unit square, which we can write as
    $$f(x,y) = \left\{\begin{array}{l l}
    c, & \text{if}\ 0\leq x\leq 1\ \text{and}\ 0\leq y\leq 1 \\
    0, & \text{otherwise},
    \end{array}\right.\notag$$
    where \(c\) is some unknown constant. We can find the value of \(c\) by using the second condition in Definition \(\PageIndex{4}\) and solving the following:
    $$\iint\limits_{\mathbb{R}^2}\! f(x,y)\, dx\, dy = 1 \quad\Rightarrow\quad \int\limits^1_0\!\int\limits^1_0\! c\, dx\, dy = 1 \quad\Rightarrow\quad c \int\limits^1_0\!\int\limits^1_0\! 1\, dx\, dy = 1 \quad\Rightarrow\quad c=1\notag$$

    We can now use the joint density of \(X\) and \(Y\) to compute probabilities that the particle is in some specific region of the unit square. For example, consider the region
    $$A = \{(x,y)\ |\ x-y > 0.5\},\notag$$
    which is the triangular region of the unit square below the line \(y = x - 0.5\), with vertices \((0.5, 0)\), \((1, 0)\), and \((1, 0.5)\).


    Integrating the joint density function over \(A\) gives the following probability:
    $$P(X-Y>0.5) = \iint\limits_A\! f(x,y)\, dx\, dy = \int^{0.5}_0\!\int^{1}_{y+0.5}\! 1\, dx\, dy = \int^{0.5}_0\! (0.5 - y)\, dy = 0.125\notag$$

    Finally, we apply Definition 25(second definition in the section) and find the marginal density functions of \(X\) and \(Y\).
    \begin{align*}
    f_X(x) &= \int\limits^1_0\! 1\, dy = 1, \quad\text{for}\ 0\leq x\leq 1 \\
    f_Y(y) &= \int\limits^1_0\! 1\, dx = 1, \quad\text{for}\ 0\leq y\leq 1
    \end{align*}
    Note that both \(X\) and \(Y\) are individually uniform random variables, each over the interval \([0,1]\). This should not be too surprising. Given that the particle's location was uniformly distributed over the unit square, we should expect that the coordinates would also be uniformly distributed over the unit intervals.
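    A quick simulation supports these results. The sketch below (assuming NumPy is available; it is not part of the original text) samples points uniformly on the unit square and estimates \(P(X-Y>0.5)\); the estimate should be close to 0.125, and the sample mean of each coordinate should be near 0.5, consistent with the uniform marginal densities found above.

```python
import numpy as np

# Sample one million points uniformly on the unit square.
rng = np.random.default_rng(0)
pts = rng.random((1_000_000, 2))
x, y = pts[:, 0], pts[:, 1]

# Fraction of points in the region A = {x - y > 0.5}: should be near 0.125.
print((x - y > 0.5).mean())

# Sample means of each coordinate: both near 0.5, consistent with
# uniform marginal densities on [0, 1].
print(x.mean(), y.mean())
```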

    Example \(\PageIndex{3}\):

    At a gas station, gasoline is stocked in a bulk tank each week. Let random variable \(X\) denote the proportion of the tank's capacity that is stocked in a given week, and let \(Y\) denote the proportion of the tank's capacity that is sold in the same week. Note that the gas station cannot sell more than what was stocked in a given week, which implies that the value of \(Y\) cannot exceed the value of \(X\). A possible joint density function of \(X\) and \(Y\) is given by
    $$f(x,y) = \left\{\begin{array}{l l}
    3x, & \text{if}\ 0\leq y \leq x\leq 1 \\
    0, & \text{otherwise.}
    \end{array}\right.\notag$$
    We find the joint cdf of \(X\) and \(Y\) at \((1/2, 1/3)\). Since \(f(x,y)\) is nonzero only when \(0\leq y\leq x\leq 1\), for each \(y\) in \([0, 1/3]\) we integrate over \(x\) from \(y\) to \(1/2\):
    $$F(1/2, 1/3) = P(X\leq 1/2\ \text{and}\ Y\leq 1/3) = \int^{1/3}_0\!\int^{1/2}_y\! 3x\, dx\, dy = \int^{1/3}_0\!\left(\frac{3}{8} - \frac{3}{2}y^2\right) dy = \frac{1}{8} - \frac{1}{54} = \frac{23}{216}.\notag$$
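    The double integral above can be checked symbolically. The sketch below (SymPy assumed available; not part of the original text) integrates the density \(3x\) over the region \(\{0\leq y\leq x,\ x\leq 1/2,\ y\leq 1/3\}\).

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 3 * x  # joint density on the triangle 0 <= y <= x <= 1

# For each y in [0, 1/3], x runs from y up to 1/2 (inner integral first).
F = sp.integrate(f, (x, y, sp.Rational(1, 2)), (y, 0, sp.Rational(1, 3)))
print(F)  # 23/216
```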



    Independent Random Variables

    In some cases, the probability distribution of one random variable will not be affected by the distribution of another random variable defined on the same sample space. In those cases, the joint distribution functions have a very simple form, and we refer to the random variables as independent.

    Definition\(\PageIndex{6}\)

    Random variables \(X_1, X_2, \ldots, X_n\) are independent if the joint cdf factors into a product of the marginal cdf's:
    $$F(x_1, x_2, \ldots, x_n) = F_{X_1}(x_1)\cdot F_{X_2}(x_2) \cdots F_{X_n}(x_n).\notag$$
    It is equivalent to check that this condition holds for the joint frequency functions in the discrete setting and for the joint density functions in the continuous setting.

    Example \(\PageIndex{4}\):

    Consider the discrete random variables \(X\) and \(Y\) defined in Example \(\PageIndex{1}\). \(X\) and \(Y\) are independent if
    $$p(x,y) = p_X(x)\cdot p_Y(y),\notag$$
    for all pairs \((x,y)\). Note that, for \((0,-1)\), we have
    $$p(0,-1) = \frac{1}{8},\ \ p_X(0) = \frac{1}{8},\ \ p_Y(-1) = \frac{1}{8} \quad\Rightarrow\quad p(0,-1) \neq p_X(0)\cdot p_Y(-1).\notag$$
    Thus, \(X\) and \(Y\) are not independent, or in other words, \(X\) and \(Y\) are dependent. This should make sense given the definition of \(X\) and \(Y\). The winnings earned depend on the number of heads obtained. So the probabilities assigned to the values of \(Y\) will be affected by the values of \(X\).
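    The factorization can also be checked exhaustively over all value pairs. The sketch below (illustrative, not part of the original text) uses the Table 1 probabilities and confirms that \(p(0,-1) \neq p_X(0)\,p_Y(-1)\).

```python
from fractions import Fraction

# Joint frequency function from Table 1, stored as {(x, y): p(x, y)}.
e = Fraction(1, 8)
p = {(0, -1): e,
     (1, 1): e, (2, 1): 2 * e, (3, 1): e,
     (1, 2): e, (2, 2): e,
     (1, 3): e}

xs, ys = (0, 1, 2, 3), (-1, 1, 2, 3)
p_X = {x: sum(p.get((x, y), 0) for y in ys) for x in xs}
p_Y = {y: sum(p.get((x, y), 0) for x in xs) for y in ys}

# Independence requires p(x, y) = p_X(x) * p_Y(y) for every pair (x, y).
independent = all(p.get((x, y), 0) == p_X[x] * p_Y[y] for x in xs for y in ys)
print(independent)                   # False
print(p[(0, -1)], p_X[0] * p_Y[-1])  # 1/8 1/64
```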

    Example \(\PageIndex{5}\):

    Consider the continuous random variables \(X\) and \(Y\) defined in Example \(\PageIndex{2}\). \(X\) and \(Y\) are independent if
    $$f(x,y) = f_X(x)\cdot f_Y(y),\notag$$
    for all \((x,y)\in\mathbb{R}^2\). Note that, for \((x,y)\) in the unit square, we have
    $$f(x,y) = 1,\ \ f_X(x) = 1,\ \ f_Y(y) = 1 \quad\Rightarrow\quad f(x,y) = f_X(x)\cdot f_Y(y).\notag$$
    Outside of the unit square, \(f(x,y) = 0\) and at least one of the marginal density functions, \(f_X\) or \(f_Y\), will equal 0 as well. Thus, \(X\) and \(Y\) are independent.