
4.4: Normal Distributions


    Definition \(\PageIndex{1}\)

    A random variable \(X\) has a normal distribution with parameters \(\mu\) and \(\sigma\), written \(X\sim\text{normal}(\mu,\sigma)\), if its pdf is given by
    $$f(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{-(x-\mu)^2/2\sigma^2}, \quad\text{for}\ x\in\mathbb{R},\notag$$
    where \(\mu\in\mathbb{R}\) and \(\sigma > 0\).

    If a continuous random variable \(X\) has a normal distribution with parameters \(\mu\) and \(\sigma\), then \(\text{E}[X] = \mu\) and \(\text{Var}(X) = \sigma^2\). These facts can be derived using Definition 4.2.1, though the integral calculations require a number of tricks. The normal distribution is the reason the notation \(\mu\) is so often used for the expected value and \(\sigma^2\) for the variance. Thus \(\mu\) gives the center of the normal pdf, and its graph is symmetric about \(\mu\), while \(\sigma\) determines how spread out the graph is. Figure 1 below shows the graphs of two different normal pdf's.
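    As a quick numerical check (an illustration, not part of the original text), the following Python sketch integrates the pdf with scipy's quad routine to confirm that it integrates to 1 and that the mean and variance equal \(\mu\) and \(\sigma^2\); the values \(\mu = 1\) and \(\sigma = 2\) are arbitrary choices.

    import numpy as np
    from scipy import integrate

    mu, sigma = 1.0, 2.0   # arbitrary illustrative parameters

    def f(x):
        # Normal pdf with parameters mu and sigma, as in Definition 4.4.1
        return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

    total, _ = integrate.quad(f, -np.inf, np.inf)                             # should be 1
    mean, _ = integrate.quad(lambda x: x * f(x), -np.inf, np.inf)             # should equal mu
    var, _ = integrate.quad(lambda x: (x - mean)**2 * f(x), -np.inf, np.inf)  # should equal sigma^2

    print(total, mean, var)   # approximately 1.0, 1.0, 4.0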

    Example \(\PageIndex{1}\)

    Suppose \(X_1\sim\text{normal}(0, 2^2)\) and \(X_2\sim\text{normal}(0, 3^2)\). So, \(X_1\) and \(X_2\) are both normally distributed random variables with the same mean, but \(X_2\) has a larger standard deviation. Given our interpretation of standard deviation, this implies that the possible values of \(X_2\) are more "spread out" from the mean. This is easily seen by looking at the graphs of the pdf's corresponding to \(X_1\) and \(X_2\) given in Figure 1.


    Figure 1: Graph of normal pdf's: \(X_1\sim\text{normal}(0,2^2)\) in blue, \(X_2\sim\text{normal}(0,3^2)\) in red
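    To make the comparison concrete, the short sketch below (an illustration, not from the original example) reads the second parameter \(2^2\) and \(3^2\) as the variance, so the standard deviations are 2 and 3, and computes the probability that each random variable falls more than 4 units from the mean.

    from scipy.stats import norm

    # Assumption: the second parameter (2^2, 3^2) is the variance,
    # so the standard deviation (scale) is 2 for X1 and 3 for X2.
    X1 = norm(loc=0, scale=2)
    X2 = norm(loc=0, scale=3)

    # Probability of landing more than 4 units from the mean; by symmetry this is
    # twice the upper-tail probability. It is larger for X2, reflecting its wider spread.
    print(2 * X1.sf(4))   # approximately 0.046
    print(2 * X2.sf(4))   # approximately 0.182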


    The normal distribution is arguably the most important probability distribution. It is used to model the distribution of population characteristics such as weight, height, and IQ. The pdf itself is difficult to work with: integrals involving the normal pdf cannot be solved exactly and must instead be approximated with numerical methods. Because of this, there is no closed form for the corresponding cdf of a normal distribution. Given the importance of the normal distribution, however, many software programs have built-in normal probability calculators. There are also many useful properties of the normal distribution that make it easy to work with. We state these properties without proof below. Note that we also include the connection to expected value and variance given by the parameters.
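    For example, scipy.stats.norm in Python is one such built-in calculator. The snippet below (an illustrative sketch, with \(\mu = 100\) and \(\sigma = 15\) chosen only as an IQ-style example) evaluates the cdf numerically to find the probability of falling within one standard deviation of the mean.

    from scipy.stats import norm

    mu, sigma = 100, 15   # illustrative parameters, not from the text

    # P(85 <= X <= 115): probability of falling within one standard deviation
    # of the mean, computed from the numerically evaluated cdf.
    p = norm.cdf(115, loc=mu, scale=sigma) - norm.cdf(85, loc=mu, scale=sigma)
    print(p)   # approximately 0.6827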

    Properties of the Normal Distribution

    1. If \(X\sim\text{normal}(\mu, \sigma)\) and \(a\neq 0\), then \(aX+b\) also follows a normal distribution, with parameters \(a\mu + b\) and \(|a|\sigma\).
    2. If \(X\sim\text{normal}(\mu, \sigma)\), then \(\displaystyle{\frac{X-\mu}{\sigma}}\) follows the standard normal distribution, i.e., the normal distribution with parameters \(\mu=0\) and \(\sigma = 1\).

    The first property says that any nonconstant linear transformation of a normally distributed random variable is also normally distributed. The second property is a special case of the first, since we can rewrite the transformation on \(X\) as
    $$\frac{X-\mu}{\sigma} = \left(\frac{1}{\sigma}\right)X - \frac{\mu}{\sigma}.\notag$$
    This transformation, subtracting the mean and dividing by the standard deviation, is referred to as standardizing \(X\), since the resulting random variable will always have the standard normal distribution with mean 0 and standard deviation 1. In this way, standardizing a normal random variable has the effect of removing the units. Before the prevalence of calculators and computer software capable of calculating normal probabilities, people would apply the standardizing transformation to the normal random variable and use a table of probabilities for the standard normal distribution.
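    The sketch below (an illustration, not from the text) checks this numerically: with the arbitrary choices \(\mu = 100\), \(\sigma = 15\), and \(x = 120\), the probability \(P(X\leq x)\) computed directly matches the standard normal cdf evaluated at the z-score \((x-\mu)/\sigma\), which is exactly the value one would look up in a standard normal table.

    from scipy.stats import norm

    mu, sigma, x = 100, 15, 120   # illustrative values, not from the text

    z = (x - mu) / sigma                       # standardize: the z-score of x

    direct = norm.cdf(x, loc=mu, scale=sigma)  # P(X <= x) computed directly
    via_z = norm.cdf(z)                        # same probability from the standard normal cdf
    print(direct, via_z)                       # both approximately 0.909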


    This page titled 4.4: Normal Distributions is shared under a not declared license and was authored, remixed, and/or curated by Kristin Kuter.
