
9.3: t-Test for Means


    Hypothesis testing for means with sample sizes \(n<30\), where \(s\) is used as an estimate for \(\sigma\), is the same as for \(n \geq 30\) except that \(t\), not \(z\), is the test statistic[1]. Specifically, the test statistic is

    \[t_{\rm test} = \frac{\bar{x} - k}{s/\sqrt{n}}\]

    for \(k\) from any of the hypotheses listed in the table you saw in the previous section (one- and two-tailed versions):

    Two-Tailed Test: \(H_{0}: \mu = k\), \(H_{1}: \mu \neq k\)
    Right-Tailed Test: \(H_{0}: \mu \leq k\), \(H_{1}: \mu > k\)
    Left-Tailed Test: \(H_{0}: \mu \geq k\), \(H_{1}: \mu < k\)

    The critical statistic is found in the t Distribution Table with degrees of freedom \(\nu = n-1\).
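    The table lookup can also be done in software. As a minimal sketch using `scipy.stats` (the sample size and \(\alpha\) below are illustrative, not from the text), `t.ppf` inverts the \(t\) cumulative distribution to give the critical value:

    ```python
    from scipy import stats

    n, alpha = 12, 0.05            # illustrative sample size and significance level
    nu = n - 1                     # degrees of freedom

    # One-tailed critical value: all of alpha in one tail.
    t_one = stats.t.ppf(1 - alpha, df=nu)
    # Two-tailed critical value: alpha split between the two tails.
    t_two = stats.t.ppf(1 - alpha / 2, df=nu)

    print(round(t_one, 3), round(t_two, 3))   # matches the table row for nu = 11
    ```

    These reproduce the familiar table entries 1.796 (one-tailed) and 2.201 (two-tailed) for \(\nu = 11\).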

    Example 9.4 : A physician claims that joggers' maximal volume oxygen uptake is greater than the average of all adults. A sample of 15 joggers has a mean of 40.6 ml/kg and a standard deviation of 6 ml/kg. If the average of all adults is 36.7 ml/kg, is there enough evidence to support the claim at \(\alpha = 0.05\)?

    1. Hypothesis.

    \(H_{0} : \mu \leq 36.7\)

    \(H_{1} : \mu > 36.7\) (claim)

    2. Critical statistic.

    In the t Distribution Table, find the column for a one-tailed test at \(\alpha = 0.05\) and the row for degrees of freedom \(\nu = n-1 = 14\). With that find

    \[t_{\rm critical} = 1.761\]

    3. Test statistic.

    To compute this we need \(\bar{x} = 40.6\), \(s = 6\) and \(n = 15\) from the problem statement. From the hypotheses we have \(k = 36.7\). So

    \[t_{\rm test} = \frac{\bar{x} - k}{s/\sqrt{n}} = \frac{40.6 - 36.7}{6/\sqrt{15}} = 2.517\]

    At this point we can estimate the \(p\)-value using the t Distribution Table. That table doesn't have as much information about the \(t\)-distribution as the Standard Normal Distribution Table has about the \(z\)-distribution, so we can only estimate. The procedure is: in the \(\nu = 14\) row, look for \(t\) values that bracket \(t_{\rm test} = 2.517\). They are 2.145 (with \(\alpha = 0.025\) in the column heading for one-tailed tests) and 2.624 (associated with a one-tail \(\alpha = 0.01\)).

    So,

    \[0.010 < p < 0.025\]

    is our estimate[2] for \(p\).

    4. Decision.

    [Figure: \(t\)-distribution with the right-tail rejection region beyond \(t_{\rm critical} = 1.761\); \(t_{\rm test} = 2.517\) falls in the rejection region.]

    Reject \(H_{0}\). We can also base this decision on our \(p\)-value estimate since:

    \[(0.010 < p < 0.025) < (\alpha = 0.05)\]

    5. Interpretation.

    There is enough evidence to support the claim that the joggers’ maximal volume oxygen uptake is greater than 36.7 ml/kg using a \(t\)-test at \(\alpha = 0.05\).
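    The whole of Example 9.4 can be checked with `scipy.stats`, which also gives an exact \(p\)-value rather than the bracketing estimate from the table. A minimal sketch:

    ```python
    from math import sqrt
    from scipy import stats

    # Summary data from Example 9.4
    xbar, s, n = 40.6, 6.0, 15
    k, alpha = 36.7, 0.05
    nu = n - 1

    t_test = (xbar - k) / (s / sqrt(n))       # test statistic
    t_crit = stats.t.ppf(1 - alpha, df=nu)    # right-tailed critical value
    p = stats.t.sf(t_test, df=nu)             # exact right-tail p-value

    print(round(t_test, 3), round(t_crit, 3), round(p, 4))
    # t_test = 2.517 exceeds t_crit = 1.761, and p < alpha, so reject H0
    ```

    The exact \(p\)-value falls inside the interval \(0.010 < p < 0.025\) obtained from the table.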

    Fine point. When we use \(s\) in a \(t\) (or \(z\)) test as an estimate for \(\sigma\), we are actually assuming that the distribution of sample means is normal. The central limit theorem tells us that the distribution of sample means is approximately normal, so generally we don't worry about this restriction. If the population is normal then the distribution of sample means will be exactly normal. Some stats texts state that we need to assume the population is normal for a \(t\)-test to be valid; however, the central limit theorem's conclusion guarantees that the \(t\)-test is robust to violations of that assumption. If the population has a very wild distribution then \(s\) may be a bad estimate for \(\sigma\), because the distribution of sample \(s\) values will not follow the \(\chi^2\) distribution. The chance of this happening becomes smaller as \(n\) grows, again by the central limit theorem.

    Origin of the \(t\)-distribution

    We can easily define the \(t\)-distribution via random variables associated with the following stochastic processes. Let:

    \[\begin{eqnarray*} Z & = & \mbox{ a random variable with a $z$-distribution} \\ X & = & \mbox{ a random variable with a $\chi^{2}$ distribution with $\nu$ degrees of freedom.} \end{eqnarray*}\]

    Then the random variable

    \[ T = \frac{Z}{\sqrt{X/\nu}} \]

    is a random variable that follows a \(t\)-distribution with \(\nu\) degrees of freedom.
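    This construction can be checked by simulation: draw many values of \(Z\) (standard normal) and \(X\) (\(\chi^2\) with \(\nu\) degrees of freedom), form \(T = Z/\sqrt{X/\nu}\), and compare a sample quantile against the exact \(t\)-distribution quantile. A minimal sketch (the seed and sample count are arbitrary choices):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    nu, n = 14, 200_000                 # degrees of freedom; number of draws

    Z = rng.standard_normal(n)          # Z ~ N(0, 1)
    X = rng.chisquare(nu, n)            # X ~ chi-square with nu degrees of freedom
    T = Z / np.sqrt(X / nu)             # T should follow a t-distribution, nu df

    # Compare the simulated 95th percentile with the exact t quantile.
    q_sim = np.quantile(T, 0.95)
    q_exact = stats.t.ppf(0.95, df=nu)
    print(round(q_sim, 2), round(q_exact, 2))
    ```

    With \(\nu = 14\) the exact quantile is the familiar 1.761 from the table, and the simulated quantile lands close to it.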


    1. Again, SPSS applies the \(t\)-test, using \(s\) directly, for any sample size.
    2. If you know how to interpolate then you can find a single value for \(p\).

    This page titled 9.3: t-Test for Means is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Gordon E. Sarty via source content that was edited to the style and standards of the LibreTexts platform.