
9.1: The t-statistic


Last chapter, we were introduced to hypothesis testing using the \(z\)-statistic for sample means that we learned in Unit 1. This was a useful way to link the material and ease us into the new way of looking at data, but it isn't a very common test because it relies on knowing the population standard deviation, \(σ\), which is rarely going to be the case. Instead, we will estimate the parameter \(σ\) using the sample statistic \(s\), in the same way that we estimate \(μ\) using \(M\) (\(μ\) will still appear in our formulas because we suspect something about its value, and that is what we are testing). Our new statistic is called \(t\), and for testing one population mean using a single sample (called a 1-sample \(t\)-test) it takes the form:

    \[t=\dfrac{M-\mu}{s_M}=\dfrac{M-\mu}{s / \sqrt{n}} \]

Notice that \(t\) looks almost identical to \(z\); this is because they test the exact same thing: the value of a sample mean compared to what we expect of the population. The only difference is that the standard error is now denoted \(s_{M}\) to indicate that we use the sample statistic for standard deviation, \(s\), instead of the population parameter \(σ\). The process of using and interpreting the standard error and the full test statistic remains exactly the same.
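To make the formula concrete, here is a minimal Python sketch that computes \(t\) by hand for a small made-up sample; the data values and the hypothesized population mean of 50 are assumptions for illustration only, not an example from the text.

```python
import math

# Hypothetical sample scores and null-hypothesis mean (made up for illustration)
scores = [52, 47, 55, 60, 49, 53, 58, 51]
mu = 50  # the value we suspect for the population mean under the null hypothesis

n = len(scores)
M = sum(scores) / n  # sample mean

# Sample standard deviation: divide by n - 1 (the degrees of freedom)
s = math.sqrt(sum((x - M) ** 2 for x in scores) / (n - 1))

s_M = s / math.sqrt(n)   # estimated standard error of the mean
t = (M - mu) / s_M       # the 1-sample t-statistic

print(f"M = {M:.2f}, s = {s:.2f}, s_M = {s_M:.2f}, t = {t:.2f}")
```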

    In chapter 3 we learned that the formulae for sample standard deviation and population standard deviation differ by one key factor: the denominator for the parameter is \(N\) but the denominator for the statistic is \(N – 1\), also known as degrees of freedom, \(df\). Because we are using a new measure of spread, we can no longer use the standard normal distribution and the \(z\)-table to find our critical values. For \(t\)-tests, we will use the \(t\)-distribution and \(t\)-table to find these values.
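As a quick illustration of the two denominators, the sketch below (assuming NumPy is available) computes both versions using the ddof argument, which controls how much is subtracted from \(N\) in the denominator; the data are hypothetical.

```python
import numpy as np

x = np.array([4, 8, 6, 5, 3, 7])  # hypothetical data

pop_sd = np.std(x, ddof=0)     # divides by N (population formula)
sample_sd = np.std(x, ddof=1)  # divides by N - 1, i.e. the degrees of freedom

print(pop_sd, sample_sd)       # the sample version is always a bit larger
```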

The \(t\)-distribution, like the standard normal distribution, is symmetric and bell-shaped with a mean of 0. However, because the standard error must be estimated from the sample, the exact shape of the distribution depends on the degrees of freedom, so there is a different \(t\)-distribution for every value of \(df\). Luckily, they all work exactly the same way, so in practice this difference is minor.

Figure \(\PageIndex{1}\) shows four curves: a normal distribution curve labeled \(z\), and three \(t\)-distribution curves for 2, 10, and 30 degrees of freedom. Two things should stand out: First, for lower degrees of freedom (e.g. 2), the tails of the distribution are much fatter, meaning that a larger proportion of the area under the curve falls in the tails. This means that we will have to go farther out into the tail to cut off the portion corresponding to 5% or \(α\) = 0.05, which will in turn lead to higher critical values. Second, as the degrees of freedom increase, we get closer and closer to the \(z\) curve. Even the distribution with \(df\) = 30, corresponding to a sample size of just 31 people, is nearly indistinguishable from \(z\). In fact, a \(t\)-distribution with infinite degrees of freedom (theoretically, of course) is exactly the standard normal distribution. Because of this, the bottom row of the \(t\)-table also gives the critical values for \(z\)-tests at the same significance levels. Even though these curves are very close, it is still important to use the correct table and critical values, because small differences can add up quickly.
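One way to see both patterns numerically, assuming SciPy is available, is to ask for the two-tailed critical values at \(α\) = 0.05 for the same degrees of freedom shown in the figure and compare them with the \(z\) critical value:

```python
from scipy import stats

alpha = 0.05  # two-tailed test, so alpha / 2 goes in each tail

for df in (2, 10, 30):
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    print(f"df = {df:>2}: t critical = {t_crit:.3f}")

z_crit = stats.norm.ppf(1 - alpha / 2)
print(f"      z critical = {z_crit:.3f}")  # t critical values shrink toward this as df grows
```

With these settings the critical value drops from about 4.30 at \(df\) = 2 to about 2.04 at \(df\) = 30, approaching the \(z\) value of about 1.96.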

Figure \(\PageIndex{1}\): Distributions comparing effects of degrees of freedom

The \(t\)-distribution table lists critical values for one- and two-tailed tests at several levels of significance, arranged into columns. The rows of the \(t\)-table list degrees of freedom up to \(df\) = 100 so that the appropriate distribution curve can be used. It does not, however, list every possible value of degrees of freedom in this range, because that would take too many rows. Above \(df\) = 40, the rows jump in increments of 10. If a problem requires you to find critical values and the exact degrees of freedom are not listed, you always round down to the next smallest listed value. For example, if you have 48 people in your sample, the degrees of freedom are \(N\) – 1 = 48 – 1 = 47; however, 47 doesn't appear in our table, so we round down and use the critical values for \(df\) = 40, even though 50 is closer. We do this because it avoids inflating Type I Error (false positives, see chapter 7) by using criteria that are too lax.
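The round-down rule is easy to express as a small helper function. The sketch below is purely illustrative: the list of table rows is a hypothetical stand-in for a typical \(t\)-table that jumps by 10 above \(df\) = 40, and the function simply picks the largest listed value that does not exceed the actual degrees of freedom.

```python
# Rows listed in a typical t-table (hypothetical subset): 1-40, then steps of 10
listed_df = list(range(1, 41)) + [50, 60, 70, 80, 90, 100]

def table_row(actual_df):
    """Return the df row to use: the largest listed df that is <= the actual df."""
    return max(d for d in listed_df if d <= actual_df)

print(table_row(47))  # 40 -> round down, even though 50 is numerically closer
```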

    Contributors and Attributions

    • Foster et al. (University of Missouri-St. Louis, Rice University, & University of Houston, Downtown Campus)

