
4.5: Poisson Distribution


    Another useful probability distribution is the Poisson distribution, or waiting time distribution. This distribution is used to determine how many checkout clerks are needed to keep waiting times in line at specified levels, how many telephone lines are needed to keep a system from overloading, and for many other practical applications. A modification of the Poisson, the Pascal distribution, invented nearly four centuries ago, is used today by telecommunications companies worldwide for load factors, satellite hookup levels, and Internet capacity problems. The distribution gets its name from Siméon Poisson, who presented it in 1837 as an extension of the binomial distribution; we will see that the binomial can be estimated with the Poisson.

    A Poisson experiment has two main characteristics.

    1. The Poisson probability distribution gives the probability of a number of events occurring in a fixed interval of time or space when these events happen at a known average rate.
    2. The events occur independently of the time since the last event. For example, a book editor might be interested in the number of words spelled incorrectly in a particular book. It might be that, on average, there are five misspelled words per 100 pages. The interval is the 100 pages, and it is assumed that there is no relationship between when misspellings occur.

    The random variable \(X\) = the number of occurrences in the interval of interest.
    Example \(\PageIndex{1}\)

    A bank expects to receive six bad checks per day, on average. What is the probability of the bank getting fewer than five bad checks on any given day? Of interest is the number of checks the bank receives in one day, so the time interval of interest is one day. Let \(X\) = the number of bad checks the bank receives in one day. If the bank expects to receive six bad checks per day then the average is six checks per day. Write a mathematical statement for the probability question.


    \(P (X < 5)\)

    Example \(\PageIndex{2}\)

    You notice that a news reporter says "uh," on average, two times per broadcast. What is the probability that the news reporter says "uh" more than two times per broadcast?

    This is a Poisson problem because you are interested in knowing the number of times the news reporter says "uh" during a broadcast.

    a. What is the interval of interest?


    a. one broadcast measured in minutes

    b. What is the average number of times the news reporter says "uh" during one broadcast?


    b. 2

    c. Let \(X\) = ____________. What values does \(X\) take on?


    c. Let \(X\) = the number of times the news reporter says "uh" during one broadcast.
    \(x = 0, 1, 2, 3\), ...

    d. The probability question is \(P\) (______).


    d. \(P (X > 2)\)

    Poisson Probability Distribution Function

    \(X \sim \text{Poisson} (\mu)\)

    Read this as "\(X\) is a random variable with a Poisson distribution." The parameter is \(\mu\); \(\mu\) = the mean for the interval of interest. The mean is the number of occurrences that occur on average during the interval period.

    The formula for computing probabilities from a Poisson process is:

    \[p(x)=\frac{\mu^{x} e^{-\mu}}{x !}\nonumber\]

    where \(p(x)=P(X=x)\) is the probability of \(x\) occurrences, \(\mu\) is the expected number of occurrences based upon historical data, and \(e\) is the base of the natural logarithm, approximately equal to 2.718.
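This formula translates directly into code. The sketch below is a minimal Python implementation of the Poisson PMF using only the standard library; as an illustration, it evaluates the \(P(X < 5)\) question posed in Example 1 (a mean of six bad checks per day).

```python
from math import exp, factorial

def poisson_pmf(x: int, mu: float) -> float:
    """P(X = x) when X ~ Poisson(mu): mu^x * e^(-mu) / x!"""
    return mu ** x * exp(-mu) / factorial(x)

# Example 1's setting: mu = 6 bad checks per day.
# P(X < 5) = p(0) + p(1) + p(2) + p(3) + p(4)
print(round(sum(poisson_pmf(x, 6) for x in range(5)), 4))  # 0.2851
```

Because the Poisson takes on the values \(x = 0, 1, 2, \ldots\) with no upper bound, "fewer than" questions are answered by summing the PMF from zero up to the cutoff.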

    In order to use the Poisson distribution, certain assumptions must hold: the probability of an occurrence is constant within the interval, occurrences cannot happen simultaneously, and occurrences in separate intervals are independent, the same independence assumption as for the binomial distribution.

    In a way, the Poisson distribution can be thought of as a clever way to convert a continuous random variable, usually time, into a discrete random variable by breaking up time into discrete independent intervals. This way of thinking about the Poisson helps us understand why it can be used to estimate the probability for the discrete random variable from the binomial distribution. The Poisson is asking for the probability of a number of successes during a period of time while the binomial is asking for the probability of a certain number of successes for a given number of trials.

    Example \(\PageIndex{3}\)

    Leah's answering machine receives about six telephone calls between 8 a.m. and 10 a.m. What is the probability that Leah receives more than one call in the next 15 minutes?

    Let X = the number of calls Leah receives in 15 minutes. (The interval of interest is 15 minutes or \(\frac{1}{4}\) hour.)

    \(x = 0, 1, 2, 3\), ...

    If Leah receives, on average, six telephone calls in two hours, and there are eight 15-minute intervals in two hours, then Leah receives

    \(\left(\frac{1}{8}\right)(6) = 0.75\) calls in 15 minutes, on average. So, \(\mu = 0.75\) for this problem.

    \(X \sim \text{Poisson} (0.75)\)

    Find \(P (X > 1)\):

    \[P(X > 1) = 1 - P(X \leq 1) = 1 - \left[\frac{0.75^{0} e^{-0.75}}{0!} + \frac{0.75^{1} e^{-0.75}}{1!}\right] = 0.1734\nonumber\]

    The probability that Leah receives more than one telephone call in the next 15 minutes is about 0.1734.
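The arithmetic above can be verified with a few lines of Python; this sketch recomputes \(\mu\) from the two-hour rate and then evaluates \(P(X > 1)\) by complement.

```python
from math import exp, factorial

mu = 6 / 8  # six calls per two hours -> 0.75 calls per 15-minute interval

def pmf(x: int) -> float:
    return mu ** x * exp(-mu) / factorial(x)

# P(X > 1) = 1 - P(X = 0) - P(X = 1)
p_more_than_one = 1 - pmf(0) - pmf(1)
print(round(p_more_than_one, 4))  # 0.1734
```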

    The graph of \(X \sim \text{Poisson} (0.75)\) is:

    This graph shows a Poisson probability distribution. It has five bars that decrease in height from left to right. The x-axis shows values in increments of 1 starting with 0, representing the number of calls Leah receives within 15 minutes. The y-axis ranges from 0 to 0.5 in increments of 0.1.
    Figure \(\PageIndex{1}\)

    The \(y\)-axis contains the probability of \(X\) = the number of calls in 15 minutes.

    Example \(\PageIndex{4}\)

    According to a survey a university professor gets, on average, 7 emails per day. Let X = the number of emails a professor receives per day. The discrete random variable X takes on the values x = 0, 1, 2 …. The random variable X has a Poisson distribution: \(X\sim\text{Poisson}(7)\). The mean is 7 emails.

    1. What is the probability that the professor receives exactly 2 emails per day?
    2. What is the probability that the professor receives at most 2 emails per day?
    3. What is the standard deviation?

    a. \(P(X=2)=p(2)=\frac{\mu^{x} e^{-\mu}}{x!}=\frac{7^{2} e^{-7}}{2!}=0.022\)

    b. \(P(X \leq 2)=p(0)+p(1)+p(2)=\frac{7^{0} e^{-7}}{0!}+\frac{7^{1} e^{-7}}{1!}+\frac{7^{2} e^{-7}}{2!}=0.0296\)

    c. Standard Deviation = \(\sigma=\sqrt{\mu}=\sqrt{7} \approx 2.65\)
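All three answers can be checked with a short Python sketch, again using only the standard library:

```python
from math import exp, factorial, sqrt

mu = 7  # mean emails per day

def pmf(x: int) -> float:
    return mu ** x * exp(-mu) / factorial(x)

print(round(pmf(2), 4))                          # a. P(X = 2)  -> 0.0223
print(round(sum(pmf(x) for x in range(3)), 4))   # b. P(X <= 2) -> 0.0296
print(round(sqrt(mu), 2))                        # c. sigma     -> 2.65
```

Note that for the Poisson the variance equals the mean, so the standard deviation is always \(\sqrt{\mu}\).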

    Example \(\PageIndex{5}\)

    Text message users receive or send an average of 41.5 text messages per day.

    1. How many text messages does a text message user receive or send per hour?
    2. What is the probability that a text message user receives or sends two messages per hour?
    3. What is the probability that a text message user receives or sends more than two messages per hour?

    a. Let \(X\) = the number of texts that a user sends or receives in one hour. The average number of texts received per hour is \(\frac{41.5}{24} \approx 1.7292\).

    b. \(P(X=2)=p(2)=\frac{\mu^{x} e^{-\mu}}{x!}=\frac{1.7292^{2} e^{-1.7292}}{2!}=0.265\)

    c. \(P(X>2)=1-P(X \leq 2)=1-\left[\frac{1.7292^{0} e^{-1.7292}}{0!}+\frac{1.7292^{1} e^{-1.7292}}{1!}+\frac{1.7292^{2} e^{-1.7292}}{2!}\right]=0.250\)
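These three answers can be reproduced with a short sketch that converts the daily rate to an hourly mean and then applies the PMF:

```python
from math import exp, factorial

mu = 41.5 / 24  # average texts per hour

def pmf(x: int) -> float:
    return mu ** x * exp(-mu) / factorial(x)

print(round(mu, 4))                                 # a. -> 1.7292
print(round(pmf(2), 3))                             # b. P(X = 2) -> 0.265
print(round(1 - sum(pmf(x) for x in range(3)), 3))  # c. P(X > 2) -> 0.25
```

Rescaling the mean to match the interval of interest, as in part (a), is the key step; the Poisson mean is always stated per interval.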

    Estimating the Binomial Distribution with the Poisson Distribution

    The Poisson distribution can provide an approximation for the binomial: as \(n\) gets larger and \(p\) gets smaller such that \(np\) stays constant, the binomial distribution approaches the Poisson distribution. There are several rules of thumb for when one can use a Poisson to estimate a binomial. One suggests that \(np\), the mean of the binomial, should be less than 25. Another author suggests that it should be less than 7. And another, noting that the mean and variance of the Poisson are the same, suggests that \(np\) and \(npq\), the mean and variance of the binomial, should both be greater than 5. There is no one broadly accepted rule of thumb for when one can use the Poisson to estimate the binomial.

    As we move through these probability distributions we are getting to more sophisticated distributions that, in a sense, contain the less sophisticated distributions within them. This proposition has been proven by mathematicians. This gets us to the highest level of sophistication in the next probability distribution which can be used as an approximation to all of those that we have discussed so far. This is the normal distribution.

    Example \(\PageIndex{6}\)

    On May 13, 2013, starting at 4:30 PM, the probability of low seismic activity for the next 48 hours in Alaska was reported as about 1.02%. Use this information for the next 200 days to find the probability that there will be low seismic activity in ten of the next 200 days. Use both the binomial and Poisson distributions to calculate the probabilities. Are they close?


    Let X = the number of days with low seismic activity.

    Using the binomial distribution:

    \[P\left(X=10\right)=p(10)=\frac{200 !}{10 !(200-10) !} \times .0102^{10} \times .9898^{190}=0.000039\nonumber\]

    Using the Poisson distribution:

    Calculate \(\mu = np = 200(0.0102) \approx 2.04\)

    \[P\left(X=10\right)=p(10)=\frac{\mu^{x} e^{-\mu}}{x !}=\frac{2.04^{10} e^{-2.04}}{10 !}=0.000045\nonumber \]

    We expect the approximation to be good because \(n\) is large (greater than 20) and \(p\) is small (less than 0.05). The results are close; both probabilities are nearly 0.
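The two calculations above can be reproduced side by side in Python; the sketch below computes the exact binomial probability and its Poisson approximation with \(\mu = np\).

```python
from math import comb, exp, factorial

n, p, k = 200, 0.0102, 10

binom = comb(n, k) * p ** k * (1 - p) ** (n - k)  # exact binomial P(X = 10)
mu = n * p                                        # 2.04
poisson = mu ** k * exp(-mu) / factorial(k)       # Poisson approximation

print(f"binomial: {binom:.6f}")   # 0.000039
print(f"poisson:  {poisson:.6f}") # 0.000045
```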

    Example \(\PageIndex{7}\)

    A survey of 500 seniors in the Price Business School yields the following information. 75% go straight to work after graduation. 15% go on to work on their MBA. 9% stay to get a minor in another program. 1% go on to get a Master's in Finance.

    What is the probability that more than 2 seniors go to graduate school for their Master's in Finance?


    This is clearly a binomial probability distribution problem. The choices are binary when we define the results as "Graduate School in Finance" versus "all other options." The random variable is discrete, and the events are, we could assume, independent. Solving as a binomial problem, we have:

    Binomial Solution

    \[n\cdot p=500\cdot 0.01=5=\mu\nonumber\]

    \[p(0)=\frac{500!}{0!(500-0)!} \cdot 0.01^{0}(1-0.01)^{500-0}=0.00657\nonumber\]

    \[p(1)=\frac{500!}{1!(500-1)!} \cdot 0.01^{1}(1-0.01)^{500-1}=0.03318\nonumber\]

    \[p(2)=\frac{500!}{2!(500-2)!} \cdot 0.01^{2}(1-0.01)^{500-2}=0.08363\nonumber\]

    Adding all three together gives \(P(X \leq 2)=0.12339\), so \(P(X>2)=1-0.12339=0.87661\).


    Poisson approximation

    \[n\cdot p=500\cdot 0.01=5=\mu\nonumber\]

    \[n \cdot p \cdot(1-p)=500 \cdot 0.01 \cdot 0.99=4.95 \approx 5=\mu=\sigma^{2}\nonumber\]

    \[P(X \leq 2)=p(0)+p(1)+p(2)=\frac{e^{-5} \cdot 5^{0}}{0!}+\frac{e^{-5} \cdot 5^{1}}{1!}+\frac{e^{-5} \cdot 5^{2}}{2!}=0.12465\nonumber\]

    \[P(X>2)=1-0.12465=0.87535\nonumber\]



    An approximation that is off by about one one-thousandth is certainly acceptable.
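Both solutions can be verified with a few lines of Python; this sketch recomputes \(P(X \leq 2)\) and \(P(X > 2)\) under each model so the two answers can be compared directly.

```python
from math import comb, exp, factorial

n, p = 500, 0.01
mu = n * p  # 5 seniors expected to pursue the Master's in Finance

# Exact binomial P(X <= 2)
binom_le2 = sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(3))

# Poisson approximation with mu = np
pois_le2 = sum(mu ** k * exp(-mu) / factorial(k) for k in range(3))

print(round(binom_le2, 5), round(1 - binom_le2, 5))  # 0.12339 0.87661
print(round(pois_le2, 5), round(1 - pois_le2, 5))    # 0.12465 0.87535
```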

    This page titled 4.5: Poisson Distribution is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by OpenStax via source content that was edited to the style and standards of the LibreTexts platform.