
4.9: Chapter Review


    4.1 Introduction

    The characteristics of a probability distribution function (PDF) are as follows:

    1. Each probability is between zero and one, inclusive.
    2. The sum of the probabilities is one.

    4.2 Hypergeometric Distribution

    The combinatorial formula gives the number of unique subsets of size \(x\) that can be created from \(n\) unique objects, which helps us calculate probabilities. The combinatorial formula is \(\left(\begin{array}{l}{n} \\ {x}\end{array}\right)=_{n} C_{x}=\frac{n !}{x !(n-x) !}\)
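    The formula can be evaluated directly; a minimal sketch in Python (the values 52 and 5 below are illustrative, not from the text), cross-checked against the standard library's `math.comb`:

    ```python
    from math import comb, factorial

    # The combinatorial formula: nCx = n! / (x! (n - x)!)
    def n_choose_x(n, x):
        return factorial(n) // (factorial(x) * factorial(n - x))

    # Illustrative values: 5-element subsets of 52 unique objects
    print(n_choose_x(52, 5))                 # same value as math.comb(52, 5)
    assert n_choose_x(52, 5) == comb(52, 5)  # cross-check with the stdlib
    ```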

    A hypergeometric experiment is a statistical experiment with the following properties:

    1. You take samples from two groups.
    2. You are concerned with a group of interest, called the first group.
    3. You sample without replacement from the combined groups.
    4. Each pick is not independent, since sampling is without replacement.

    The outcomes of a hypergeometric experiment fit a hypergeometric probability distribution. The random variable \(X =\) the number of items from the group of interest, where \(A\) is the size of the group of interest, \(N\) is the total population size, and \(n\) is the sample size. \(h(x)=\frac{\left(\begin{array}{l}{A} \\ {x}\end{array}\right)\left(\begin{array}{l}{N-A} \\ {n-x}\end{array}\right)}{\left(\begin{array}{l}{N} \\ {n}\end{array}\right)}\).
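    A minimal sketch of this PMF in Python, with illustrative values (\(N = 50\) total items, \(A = 10\) in the group of interest, \(n = 5\) draws without replacement); summing over the support confirms the probabilities add to one:

    ```python
    from math import comb

    # Hypergeometric PMF: h(x) = C(A, x) * C(N - A, n - x) / C(N, n)
    def hypergeom_pmf(x, N, A, n):
        return comb(A, x) * comb(N - A, n - x) / comb(N, n)

    # Illustrative values: N = 50, A = 10, n = 5; x ranges from 0 to 5 here
    probs = [hypergeom_pmf(x, 50, 10, 5) for x in range(6)]
    print(sum(probs))  # probabilities over the support sum to 1
    ```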

    4.3 Binomial Distribution

    A statistical experiment can be classified as a binomial experiment if the following conditions are met:

    1. There are a fixed number of trials, \(n\).
    2. There are only two possible outcomes, called "success" and "failure," for each trial. The letter \(p\) denotes the probability of a success on one trial and \(q\) denotes the probability of a failure on one trial.
    3. The \(n\) trials are independent and are repeated using identical conditions.

    The outcomes of a binomial experiment fit a binomial probability distribution. The random variable \(X =\) the number of successes obtained in the \(n\) independent trials. The mean of \(X\) can be calculated using the formula \(\mu = np\), and the standard deviation is given by the formula \(\sigma=\sqrt{n p q}\).

    The formula for the Binomial probability density function is

    \[P(x)=\frac{n !}{x !(n-x) !} \cdot p^{x} q^{(n-x)}\nonumber\]
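    The PDF, mean, and standard deviation above can be sketched directly (\(n = 20\) and \(p = 0.3\) are illustrative values):

    ```python
    from math import comb, sqrt

    # Binomial PMF: P(x) = C(n, x) * p^x * q^(n - x), with q = 1 - p
    def binom_pmf(x, n, p):
        q = 1 - p
        return comb(n, x) * p**x * q**(n - x)

    n, p = 20, 0.3                 # illustrative values
    mu = n * p                     # mean: mu = np
    sigma = sqrt(n * p * (1 - p))  # standard deviation: sigma = sqrt(npq)

    total = sum(binom_pmf(x, n, p) for x in range(n + 1))
    print(mu, sigma, total)        # probabilities over x = 0..n sum to 1
    ```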

    4.4 Geometric Distribution

    There are three characteristics of a geometric experiment:

    1. There are one or more Bernoulli trials with all failures except the last one, which is a success.
    2. In theory, the number of trials could go on forever. There must be at least one trial.
    3. The probability, \(p\), of a success and the probability, \(q\), of a failure are the same for each trial.

    In a geometric experiment, define the discrete random variable \(X\) as the number of independent trials until the first success. We say that \(X\) has a geometric distribution and write \(X \sim G(p)\) where \(p\) is the probability of success in a single trial.

    The mean of the geometric distribution \(X \sim G(p)\) is \(\mu = 1/p\). Here \(x\) counts the number of trials up to and including the first success, and the probability function is \(P(X=x)=(1-p)^{x-1} p\).

    An alternative formulation of the geometric distribution asks the question: what is the probability of \(x\) failures until the first success? In this formulation the trial that resulted in the first success is not counted. The formula for this presentation of the geometric is:

    \[P(X=x)=p(1-p)^{x}\nonumber\]

    The expected value in this form of the geometric distribution is

    \[\mu=\frac{1-p}{p}\nonumber\]

    The easiest way to keep these two forms of the geometric distribution straight is to remember that \(p\) is the probability of success and \((1−p)\) is the probability of failure. In the formula the exponents simply count the number of successes and number of failures of the desired outcome of the experiment. Of course the sum of these two numbers must add to the number of trials in the experiment.
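    The two forms can be sketched side by side; \(p = 0.25\) is an illustrative value, and the truncated sums below recover the two means, \(1/p = 4\) and \((1-p)/p = 3\):

    ```python
    # Geometric distribution, two parameterizations (p = 0.25 is illustrative)
    p = 0.25

    def pmf_trials(x, p):
        # P(X = x) = (1 - p)^(x - 1) * p, for x = 1, 2, ...
        # x counts trials up to and including the first success
        return (1 - p) ** (x - 1) * p

    def pmf_failures(x, p):
        # P(X = x) = p * (1 - p)^x, for x = 0, 1, ...
        # x counts failures before the first success
        return p * (1 - p) ** x

    # Truncated expected values (the tail beyond 2000 is negligible here)
    mean_trials = sum(x * pmf_trials(x, p) for x in range(1, 2001))
    mean_failures = sum(x * pmf_failures(x, p) for x in range(2001))
    print(mean_trials, mean_failures)  # ~ 1/p = 4 and (1 - p)/p = 3
    ```

    The two means differ by exactly one, since the second form simply stops counting the trial on which the success occurred.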

    4.5 Poisson Distribution

    A Poisson probability distribution of a discrete random variable gives the probability of a number of events occurring in a fixed interval of time or space, if these events happen at a known average rate and independently of the time since the last event. The Poisson distribution may be used to approximate the binomial, if the probability of success is "small" (less than or equal to 0.01) and the number of trials is "large" (greater than or equal to 25). Other rules of thumb are also suggested by different authors, but all recognize that the Poisson distribution is the limiting distribution of the binomial as \(n\) increases and \(p\) approaches zero.

    The formula for computing probabilities that are from a Poisson process is:

    \[P(x)=\frac{\mu^{x} e^{-\mu}}{x !}\nonumber\]

    where \(P(x)\) is the probability of \(x\) successes, \(\mu\) (pronounced mu) is the expected number of successes, \(e\) is the base of the natural logarithm, approximately equal to \(2.718\), and \(x\) is the number of successes per unit, usually per unit of time.
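    A short sketch evaluating this PMF and checking the binomial approximation described above; the values \(n = 1000\) and \(p = 0.005\) are illustrative choices satisfying the rules of thumb, giving \(\mu = np = 5\):

    ```python
    from math import comb, exp, factorial

    # Poisson PMF: P(x) = mu^x * e^(-mu) / x!
    def poisson_pmf(x, mu):
        return mu ** x * exp(-mu) / factorial(x)

    # Illustrative check: with n large and p small, the binomial is
    # closely approximated by a Poisson with mu = n * p
    n, p = 1000, 0.005  # satisfies p <= 0.01 and n >= 25
    mu = n * p          # mu = 5.0

    def binom_pmf(x):
        return comb(n, x) * p ** x * (1 - p) ** (n - x)

    for x in range(4):  # the two PMFs agree to about three decimal places
        print(x, round(binom_pmf(x), 5), round(poisson_pmf(x, mu), 5))
    ```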


    This page titled 4.9: Chapter Review is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by OpenStax via source content that was edited to the style and standards of the LibreTexts platform.
