
6.3: Central Limit Theorem- Meaning and Implications


    The Central Limit Theorem (CLT) is one of the most powerful and surprising results in all of statistics. It explains why sampling distributions are often bell-shaped even when the original population isn't.

    It sets the stage for confidence intervals and hypothesis tests by helping us understand the behavior of sample means.


    Definition: Central Limit Theorem

    The Central Limit Theorem (CLT) states the following:

    If we take many random samples of size \( n \) from any population (with mean \( \mu \) and standard deviation \( \sigma \)), then:

    The sampling distribution of the sample mean \( \bar{x} \) will follow a normal distribution with mean \( \mu \) and standard deviation \( \frac{\sigma}{\sqrt{n}} \), as long as the sample size \( n \) is large enough.

    A few important notes:

    • This works no matter what shape the original population has.
    • The bigger the sample size, the more normal the sampling distribution becomes.
    • This is most useful when the population is not normal!

    What Does This Mean?

    The CLT lets us work with sample means and know what to expect.

    If we want to estimate a population mean (like average home price or GPA), the CLT tells us that our sample means have a predictable shape and center:

    • Center: \( \mu_{\bar{x}} = \mu \)
    • Spread: \( \sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}} \)
    • Shape: approximately normal (if \( n \) is large enough)

    This gives us the foundation to compute probabilities, find margins of error, and test whether sample results could reasonably occur under specific assumptions.
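    The three facts above can be checked with a short calculation. The population values below (\( \mu = 100 \), \( \sigma = 15 \), \( n = 36 \)) are made up purely for illustration:

    ```python
    import math

    # Hypothetical population values, chosen only to illustrate the formulas
    mu = 100.0    # population mean (assumed)
    sigma = 15.0  # population standard deviation (assumed)
    n = 36        # sample size

    # CLT description of the sampling distribution of the sample mean:
    mu_xbar = mu                       # center: same as the population mean
    sigma_xbar = sigma / math.sqrt(n)  # spread: the standard error

    print(mu_xbar)     # 100.0
    print(sigma_xbar)  # 2.5
    ```

    Notice that the standard error shrinks as \( n \) grows: averaging more observations gives a more stable estimate.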


    Central Limit Theorem: Simulation (n = 30)

    This simulation draws samples of size 30 from a right-skewed population and tracks the sample means. As samples accumulate, their histogram builds into a bell shape, even though the original population is not normal. That’s the power of the Central Limit Theorem.
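    The same experiment can be sketched in a few lines of Python. The exponential population used here is one convenient right-skewed choice (an assumption for illustration, not necessarily the distribution the simulation uses):

    ```python
    import random
    import statistics

    random.seed(42)  # reproducible draws

    # A right-skewed population: exponential with mean 2 (and standard deviation 2)
    def draw_sample(n):
        return [random.expovariate(1 / 2) for _ in range(n)]

    n = 30
    num_samples = 2000
    sample_means = [statistics.mean(draw_sample(n)) for _ in range(num_samples)]

    # CLT prediction: center near mu = 2, spread near sigma / sqrt(n) = 2 / sqrt(30)
    print(round(statistics.mean(sample_means), 2))   # close to 2
    print(round(statistics.stdev(sample_means), 2))  # close to 0.37
    ```

    Plotting a histogram of `sample_means` would show the bell shape emerging, even though each individual draw comes from a skewed distribution.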


    Exploration: What Did You See?

    Use the simulation above to explore how the Central Limit Theorem works in practice. As you interact with it, reflect on these questions:

    • What does the population distribution look like? Is it symmetric? Skewed?
    • After 10, 20, then 50 samples, what does the sampling distribution of the mean start to look like?
    • How does the sampling distribution change as you draw more samples?
    • What would happen if we used a larger sample size (n)? How might that affect shape and spread?
    • How could this simulation help explain why it's reasonable to use normal probability models, even when the data itself isn’t normal?

    Remember, the sampling distribution of sample means becomes more normal even when the population isn't. That’s the surprise, and the power, of the Central Limit Theorem.
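    One way to explore the sample-size question above is to repeat the experiment for several values of \( n \). The exponential population (mean 2, standard deviation 2) is again an illustrative assumption:

    ```python
    import random
    import statistics

    random.seed(0)  # reproducible draws

    # Right-skewed population: exponential with mean 2, standard deviation 2 (assumed)
    def mean_of_sample(n):
        return statistics.mean(random.expovariate(1 / 2) for _ in range(n))

    spread = {}
    for n in (5, 30, 100):
        means = [mean_of_sample(n) for _ in range(1000)]
        spread[n] = statistics.stdev(means)
        # Observed spread of the sample means vs. the CLT's sigma / sqrt(n)
        print(n, round(spread[n], 3), round(2 / n ** 0.5, 3))
    ```

    The observed spread tracks \( \sigma / \sqrt{n} \) closely, and larger samples give a tighter, more normal-looking sampling distribution.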

    Where Does This Fit In?

    This section brings together everything we've built up so far:

    • Chapter 5: We studied distributions (including the normal distribution)
    • Sections 6.1 & 6.2: We explored the idea that statistics from samples vary and form distributions of their own
    • Now (6.3): We learn that these sample statistics often create distributions with very predictable behavior

    The CLT is the key that allows us to apply normal models to problems where we’re working with sample statistics instead of individual data points.


    Example: Average Wait Times at a Coffee Shop

    Suppose the distribution of wait times at a popular coffee shop is right-skewed with an average of 6 minutes and a standard deviation of 3 minutes.

    If we take a random sample of 50 customers and calculate the mean wait time, what will the distribution of the sample mean look like?

    • The individual wait times are skewed
    • But the sampling distribution of the sample mean will be approximately normal (thanks to the CLT)
    • That distribution will be centered at 6 minutes with a standard error: \[ \sigma_{\bar{x}} = \frac{3}{\sqrt{50}} \approx 0.42 \]

    This means that the average wait time from a sample of 50 customers will typically fall within about 0.42 minutes of 6 (roughly 68% of sample means land within one standard error), and the pattern of those averages will resemble a bell curve, even though the individual wait times don’t.
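    The standard error and the "within about 0.42 minutes" claim can be verified numerically; the roughly 68% figure comes from the normal model's one-standard-deviation rule:

    ```python
    import math
    from statistics import NormalDist

    mu, sigma, n = 6, 3, 50          # values from the example
    se = sigma / math.sqrt(n)        # standard error of the sample mean
    print(round(se, 2))              # 0.42

    # Normal model for the sample mean (justified by the CLT since n = 50)
    xbar = NormalDist(mu=mu, sigma=se)

    # Chance the sample mean lands within one standard error of 6 minutes
    p = xbar.cdf(mu + se) - xbar.cdf(mu - se)
    print(round(p, 2))               # 0.68
    ```

    The same `NormalDist` object can answer other questions, such as the chance a sample mean exceeds 7 minutes, which would be extremely small here.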

    In the next section, we'll begin to use the Central Limit Theorem to solve problems: calculating probabilities about sample means, building confidence intervals, and making decisions based on data.


    This page titled 6.3: Central Limit Theorem- Meaning and Implications is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Mathematics Department.
