
8.3: A Single Population Mean using the Student t-Distribution


    In practice, we rarely know the population standard deviation. In the past, when the sample size was large, this did not present a problem to statisticians. They used the sample standard deviation \(s\) as an estimate for \(\sigma\) and proceeded as before to calculate a confidence interval with close enough results. However, statisticians ran into problems when the sample size was small. A small sample size caused inaccuracies in the confidence interval.

    William S. Gosset (1876–1937) of the Guinness brewery in Dublin, Ireland, ran into this problem. His experiments with hops and barley produced very few samples. Just replacing \(\sigma\) with \(s\) did not produce accurate results when he tried to calculate a confidence interval. He realized that he could not use a normal distribution for the calculation; he found that the actual distribution depends on the sample size. This problem led him to "discover" what is called the Student's t-distribution. The name comes from the fact that Gosset wrote under the pen name "Student."

    Up until the mid-1970s, some statisticians used the normal distribution approximation for large sample sizes and used the Student's \(t\)-distribution only for sample sizes of at most 30. With graphing calculators and computers, the practice now is to use the Student's t-distribution whenever \(s\) is used as an estimate for \(\sigma\). If you draw a simple random sample of size \(n\) from a population that has an approximately normal distribution with mean \(\mu\) and unknown population standard deviation \(\sigma\) and calculate the \(t\)-score

    \[t = \dfrac{\bar{x} - \mu}{\left(\dfrac{s}{\sqrt{n}}\right)},\]

    then the \(t\)-scores follow a Student's t-distribution with \(n – 1\) degrees of freedom. The \(t\)-score has the same interpretation as the z-score. It measures how far \(\bar{x}\) is from its mean \(\mu\). For each sample size \(n\), there is a different Student's t-distribution.

    The degrees of freedom, \(n – 1\), come from the calculation of the sample standard deviation \(s\). Previously, we used \(n\) deviations (\(x - \bar{x}\) values) to calculate \(s\). Because the sum of the deviations is zero, we can find the last deviation once we know the other \(n – 1\) deviations. The other \(n – 1\) deviations can change or vary freely. We call the number \(n – 1\) the degrees of freedom (df).
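    As a quick numerical illustration (not part of the original text, which uses TI calculators), the following Python sketch computes \(\bar{x}\), \(s\), the degrees of freedom, and the \(t\)-score for a small hypothetical sample against an assumed population mean. It relies on NumPy, which is an assumption about your software environment.

        import numpy as np

        # Hypothetical sample of n = 5 measurements (illustrative values only)
        sample = np.array([9.2, 8.7, 10.1, 9.8, 9.5])
        mu = 9.0                  # assumed (hypothetical) population mean

        n = sample.size           # sample size
        xbar = sample.mean()      # sample mean
        s = sample.std(ddof=1)    # sample standard deviation (divides by n - 1)
        df = n - 1                # degrees of freedom

        t_score = (xbar - mu) / (s / np.sqrt(n))
        print(f"x-bar = {xbar:.4f}, s = {s:.4f}, df = {df}, t = {t_score:.4f}")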


    Properties of the Student's \(t\)-Distribution
    • The graph for the Student's \(t\)-distribution is similar to the standard normal curve.
    • The mean for the Student's \(t\)-distribution is zero and the distribution is symmetric about zero.
    • The Student's \(t\)-distribution has more probability in its tails than the standard normal distribution because the spread of the \(t\)-distribution is greater than the spread of the standard normal. So the graph of the Student's \(t\)-distribution will be thicker in the tails and shorter in the center than the graph of the standard normal distribution.
    • The exact shape of the Student's \(t\)-distribution depends on the degrees of freedom. As the degrees of freedom increase, the graph of the Student's \(t\)-distribution becomes more like the graph of the standard normal distribution (see the numerical sketch after this list).
    • The underlying population of individual observations is assumed to be normally distributed with unknown population mean \(\mu\) and unknown population standard deviation \(\sigma\). The size of the underlying population is generally not relevant unless it is very small. If the population distribution is bell shaped (normal), then the assumption is met and needs no further discussion. Random sampling is assumed, but that is a completely separate assumption from normality.
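    To see the last two properties numerically, the following sketch compares the right-tail probability \(P(T > 2)\) for several degrees of freedom with the standard normal probability \(P(Z > 2)\). This is an illustration only; it assumes Python with SciPy, which is not part of the original text.

        from scipy import stats

        z_tail = stats.norm.sf(2)      # P(Z > 2) for the standard normal, roughly 0.023
        print(f"P(Z > 2) = {z_tail:.4f}")

        # The t-distribution puts more probability in its tails, but the excess
        # shrinks toward the normal value as the degrees of freedom grow.
        for df in (2, 5, 15, 30, 100):
            t_tail = stats.t.sf(2, df)   # P(T > 2) for a t-distribution with df degrees of freedom
            print(f"df = {df:3d}: P(T > 2) = {t_tail:.4f}")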

    Calculators and computers can easily calculate any Student's \(t\)-probabilities. The TI-83, 83+, and 84+ calculators have a tcdf function to find the probability for given values of \(t\). The syntax for the tcdf command is tcdf(lower bound, upper bound, degrees of freedom). However, for confidence intervals we need the inverse: we must find the value of \(t\) that corresponds to a known probability.

    For the TI-84+ you can use the invT command on the DISTRibution menu. The invT command works similarly to invNorm. The invT command requires two inputs: invT(area to the left, degrees of freedom). The output is the \(t\)-score that corresponds to the area we specified.

    The TI-83 and 83+ do not have the invT command. (The TI-89 has an inverse T command.)

    A probability table for the Student's \(t\)-distribution can also be used. The table gives \(t\)-scores that correspond to the confidence level (column) and degrees of freedom (row). (The TI-86 does not have an invT program or command, so if you are using that calculator, you need to use a probability table for the Student's \(t\)-distribution.) When using a \(t\)-table, note that some tables show the confidence level in the column headings, while others show only the corresponding area in one or both tails.

    A Student's \(t\)-table gives \(t\)-scores for a given number of degrees of freedom and right-tailed probability. Such tables cover only a limited set of values, whereas calculators and computers can compute any Student's \(t\)-probability directly.
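    If you are working in statistical software rather than on a TI calculator, SciPy's \(t\)-distribution functions play the same roles. The sketch below is an illustration under that assumption (it is not part of the original text): a difference of t.cdf values corresponds to tcdf(lower bound, upper bound, degrees of freedom), and t.ppf corresponds to invT(area to the left, degrees of freedom).

        from scipy import stats

        df = 14

        # Analogue of tcdf(lower, upper, df): probability that T lands between two values
        lower, upper = -2.0, 2.0
        prob = stats.t.cdf(upper, df) - stats.t.cdf(lower, df)
        print(f"P({lower} < T < {upper}) with df = {df}: {prob:.4f}")

        # Analogue of invT(area to the left, df): the t-score with that area to its left
        area_left = 0.975
        t_score = stats.t.ppf(area_left, df)
        print(f"invT({area_left}, {df}) = {t_score:.4f}")   # about 2.14, used in Example 1 below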

    The notation for the Student's t-distribution (using T as the random variable) is:

    • \(T \sim t_{df}\) where \(df = n – 1\).
    • For example, if we have a sample of size \(n = 20\) items, then we calculate the degrees of freedom as \(df = n - 1 = 20 - 1 = 19\) and we write the distribution as \(T \sim t_{19}\).

    If the population standard deviation is not known, the error bound for a population mean is:

    • \(EBM = \left(t_{\frac{\alpha}{2}}\right)\left(\frac{s}{\sqrt{n}}\right)\),
    • \(t_{\frac{\alpha}{2}}\) is the \(t\)-score with area to the right equal to \(\frac{\alpha}{2}\),
    • use \(df = n – 1\) degrees of freedom, and
    • \(s =\) sample standard deviation.

    The format for the confidence interval is:

    \[(\bar{x} - EBM, \bar{x} + EBM). \label{confint}\]

    To calculate the confidence interval directly:

    Press STAT.
    Arrow over to TESTS.
    Arrow down to 8:TInterval and press ENTER (or just press 8).
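    If you prefer software to a calculator, the interval can also be computed directly from the formula \(EBM = \left(t_{\frac{\alpha}{2}}\right)\left(\frac{s}{\sqrt{n}}\right)\). The following is a minimal Python sketch, assuming NumPy and SciPy are available; the function name t_confidence_interval is illustrative and not part of the original text.

        import numpy as np
        from scipy import stats

        def t_confidence_interval(data, confidence_level):
            """Confidence interval for a population mean when sigma is unknown."""
            data = np.asarray(data, dtype=float)
            n = data.size
            xbar = data.mean()
            s = data.std(ddof=1)                       # sample standard deviation
            df = n - 1
            alpha = 1 - confidence_level
            t_crit = stats.t.ppf(1 - alpha / 2, df)    # t-score with area alpha/2 to its right
            ebm = t_crit * s / np.sqrt(n)              # error bound for the mean
            return xbar - ebm, xbar + ebm

    Calling t_confidence_interval with the acupuncture data from Example 1 below and a confidence level of 0.95 reproduces the calculator's result.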

    Example \(\PageIndex{1}\): Acupuncture

    Suppose you do a study of acupuncture to determine how effective it is in relieving pain. You measure sensory rates for 15 subjects, with the results given below. Use the sample data to construct a 95% confidence interval for the mean sensory rate for the population (assumed normal) from which you took the data.

    The solution is shown step-by-step and by using the TI-83, 83+, or 84+ calculators.

    8.6; 9.4; 7.9; 6.8; 8.3; 7.3; 9.2; 9.6; 8.7; 11.4; 10.3; 5.4; 8.1; 5.5; 6.9

    Answer

    • The first solution is step-by-step (Solution A).
    • The second solution uses the TI-83+ and TI-84 calculators (Solution B).

    Solution A

    To find the confidence interval, you need the sample mean, \(\bar{x}\), and the \(EBM\).

    \(\bar{x} = 8.2267 \)

    \(s = 1.6722\) \(n = 15\)

    \(df = 15 - 1 = 14\). The confidence level is \(CL = 0.95\), so \(\alpha = 1 - CL = 1 - 0.95 = 0.05\).

    \(\frac{\alpha}{2} = 0.025\), so \(t_{\frac{\alpha}{2}} = t_{0.025}\).

    The area to the right of \(t_{0.025}\) is 0.025, and the area to the left of \(t_{0.025}\) is 1 – 0.025 = 0.975

    \(t_{\frac{\alpha}{2}} = t_{0.025} = 2.14\) using invT(.975,14) on the TI-84+ calculator.

    \[ \begin{align*} EBM &= \left(t_{\frac{\alpha}{2}}\right)\left(\frac{s}{\sqrt{n}}\right) \\[4pt] &= (2.14)\left(\frac{1.6722}{\sqrt{15}}\right) = 0.924 \end{align*}\]

    Now it is just a direct application of Equation \ref{confint}:

    \[ \begin{align*} \bar{x} - EBM &= 8.2267 - 0.9240 = 7.30 \\[4pt] \bar{x} + EBM &= 8.2267 + 0.9240 = 9.15 \end{align*}\]

    The 95% confidence interval is (7.30, 9.15).

    We estimate with 95% confidence that the true population mean sensory rate is between 7.30 and 9.15.

    Solution B

    Press STAT and arrow over to TESTS.

    Arrow down to 8:TInterval and press ENTER (or you can just press 8).
    Arrow to Data and press ENTER.
    Arrow down to List and enter the list name where you put the data.
    There should be a 1 after Freq.
    Arrow down to C-level and enter 0.95
    Arrow down to Calculate and press ENTER.
    The 95% confidence interval is (7.3006, 9.1527)
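    For comparison, here is the same calculation in Python (a sketch assuming NumPy and SciPy; it is not part of the original solution). Using the unrounded critical value \(t_{0.025} \approx 2.1448\) rather than 2.14 reproduces the calculator's interval (7.3006, 9.1527).

        import numpy as np
        from scipy import stats

        data = np.array([8.6, 9.4, 7.9, 6.8, 8.3, 7.3, 9.2, 9.6, 8.7,
                         11.4, 10.3, 5.4, 8.1, 5.5, 6.9])

        n = data.size
        xbar = data.mean()                   # 8.2267
        s = data.std(ddof=1)                 # 1.6722
        t_crit = stats.t.ppf(0.975, n - 1)   # 2.1448, the unrounded invT(.975, 14)
        ebm = t_crit * s / np.sqrt(n)

        print(f"({xbar - ebm:.4f}, {xbar + ebm:.4f})")   # (7.3006, 9.1527)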

    When calculating the error bound, a probability table for the Student's t-distribution can also be used to find the value of \(t\). The table gives \(t\)-scores that correspond to the confidence level (column) and degrees of freedom (row); the \(t\)-score is found where the row and column intersect in the table.

    Exercise \(\PageIndex{1}\)

    You do a study of hypnotherapy to determine how effective it is in increasing the number of hours of sleep subjects get each night. You measure hours of sleep for 12 subjects with the following results. Construct a 95% confidence interval for the mean number of hours slept for the population (assumed normal) from which you took the data.

    8.2; 9.1; 7.7; 8.6; 6.9; 11.2; 10.1; 9.9; 8.9; 9.2; 7.5; 10.5

    Answer

    (8.1634, 9.8032)

    Example \(\PageIndex{2}\): The Human Toxome Project

    The Human Toxome Project (HTP) is working to understand the scope of industrial pollution in the human body. Industrial chemicals may enter the body through pollution or as ingredients in consumer products. In October 2008, the scientists at HTP tested cord blood samples for 20 newborn infants in the United States. The cord blood of the "In utero/newborn" group was tested for 430 industrial compounds, pollutants, and other chemicals, including chemicals linked to brain and nervous system toxicity, immune system toxicity, reproductive toxicity, and fertility problems. There are health concerns about the effects of some chemicals on the brain and nervous system. Table \(\PageIndex{1}\) shows how many of the targeted chemicals were found in each infant’s cord blood.

    Table \(\PageIndex{1}\)
    79 145 147 160 116 100 159 151 156 126
    137 83 156 94 121 144 123 114 139 99

    Use this sample data to construct a 90% confidence interval for the mean number of targeted industrial chemicals found in an infant’s blood.

    Solution A

    From the sample, you can calculate \(\bar{x} = 127.45\) and \(s = 25.965\). There are 20 infants in the sample, so \(n = 20\), and \(df = 20 – 1 = 19\).

    You are asked to calculate a 90% confidence interval: \(CL = 0.90\), so

    \[\alpha = 1 - CL = 1 - 0.90 = 0.10, \quad \frac{\alpha}{2} = 0.05, \quad t_{\frac{\alpha}{2}} = t_{0.05}\]

    By definition, the area to the right of \(t_{0.05}\) is 0.05 and so the area to the left of \(t_{0.05}\) is \(1 – 0.05 = 0.95\).

    Use a table, calculator, or computer to find that \(t_{0.05} = 1.729\).

    \(EBM = t_{\frac{\alpha}{2}}\left(\frac{s}{\sqrt{n}}\right) = 1.729\left(\frac{25.965}{\sqrt{20}}\right) \approx 10.038\)

    \(\bar{x} – EBM = 127.45 – 10.038 = 117.412\)

    \(\bar{x} + EBM = 127.45 + 10.038 = 137.488\)

    We estimate with 90% confidence that the mean number of all targeted industrial chemicals found in cord blood in the United States is between 117.412 and 137.488.

    Solution B

    Enter the data as a list.

    Press STAT and arrow over to TESTS.
    Arrow down to 8:TInterval and press ENTER (or you can just press 8).
    Arrow to Data and press ENTER.
    Arrow down to List and enter the list name where you put the data.
    Arrow down to Freq and enter 1.
    Arrow down to C-level and enter 0.90
    Arrow down to Calculate and press ENTER.

    The 90% confidence interval is (117.41, 137.49).
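    As a software check (an illustrative sketch assuming NumPy and SciPy; it is not part of the original solution), scipy.stats.t.interval returns the same interval in a single call:

        import numpy as np
        from scipy import stats

        data = np.array([79, 145, 147, 160, 116, 100, 159, 151, 156, 126,
                         137, 83, 156, 94, 121, 144, 123, 114, 139, 99], dtype=float)

        n = data.size
        xbar = data.mean()                     # 127.45
        sem = data.std(ddof=1) / np.sqrt(n)    # estimated standard error of the mean

        # 90% confidence interval for the mean, using n - 1 degrees of freedom
        low, high = stats.t.interval(0.90, n - 1, loc=xbar, scale=sem)
        print(f"({low:.2f}, {high:.2f})")      # approximately (117.41, 137.49)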

    Example \(\PageIndex{3}\)

    A random sample of statistics students were asked to estimate the total number of hours they spend watching television in an average week. The responses are recorded in Table \(\PageIndex{2}\). Use this sample data to construct a 98% confidence interval for the mean number of hours statistics students will spend watching television in one week.

    Table \(\PageIndex{2}\)
    0 3 1 20 9
    5 10 1 10 4
    14 2 4 4 5

    Solution A

    • \(\bar{x} = 6.133\),
    • \(s = 5.514\),
    • \(n = 15\), and
    • \(df = 15 – 1 = 14\).

    \(CL = 0.98\), so \(\alpha = 1 - CL = 1 - 0.98 = 0.02\)

    \(\frac{\alpha}{2} = 0.01\), so \(t_{\frac{\alpha}{2}} = t_{0.01} = 2.624\).

    \(EBM = t_{\frac{\alpha}{2}}\left(\frac{s}{\sqrt{n}}\right) = 2.624\left(\frac{5.514}{\sqrt{15}}\right) = 3.736\)

    \(\bar{x} – EBM = 6.133 – 3.736 = 2.397\)

    \(\bar{x} + EBM = 6.133 + 3.736 = 9.869\)

    We estimate with 98% confidence that the mean number of all hours that statistics students spend watching television in one week is between 2.397 and 9.869.

    Solution B

    Enter the data as a list.

    Press STAT and arrow over to TESTS.
    Arrow down to 8:TInterval.
    Press ENTER.
    Arrow to Data and press ENTER.
    Arrow down and enter the name of the list where the data is stored.
    Enter Freq: 1
    Enter C-Level: 0.98
    Arrow down to Calculate and press Enter.
    The 98% confidence interval is (2.3965, 9.8702).

    Reference

    1. “America’s Best Small Companies.” Forbes, 2013. Available online at http://www.forbes.com/best-small-companies/list/ (accessed July 2, 2013).
    2. Data from Microsoft Bookshelf.
    3. Data from http://www.businessweek.com/.
    4. Data from http://www.forbes.com/.
    5. “Disclosure Data Catalog: Leadership PAC and Sponsors Report, 2012.” Federal Election Commission. Available online at www.fec.gov/data/index.jsp (accessed July 2, 2013).
    6. “Human Toxome Project: Mapping the Pollution in People.” Environmental Working Group. Available online at www.ewg.org/sites/humantoxome...tero%2Fnewborn (accessed July 2, 2013).
    7. “Metadata Description of Leadership PAC List.” Federal Election Commission. Available online at www.fec.gov/finance/disclosur...pPacList.shtml (accessed July 2, 2013).

    Glossary

    Degrees of Freedom (\(df\))
    the number of objects in a sample that are free to vary
    Normal Distribution
    a continuous random variable (RV) with pdf \(f(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{-(x-\mu)^2/(2\sigma^{2})}\), where \(\mu\) is the mean of the distribution and \(\sigma\) is the standard deviation; notation: \(X \sim N(\mu,\sigma)\). If \(\mu = 0\) and \(\sigma = 1\), the RV is called the standard normal distribution.
    Standard Deviation
    a number that is equal to the square root of the variance and measures how far data values are from their mean; notation: \(s\) for sample standard deviation and \(\sigma\) for population standard deviation
    Student's t-Distribution
    investigated and reported by William S. Gosset in 1908 and published under the pseudonym Student; the major characteristics of the random variable (RV) are:
    • It is continuous and assumes any real values.
    • The pdf is symmetrical about its mean of zero. However, it is more spread out and flatter at the apex than the normal distribution.
    • It approaches the standard normal distribution as \(n\) gets larger.
    • There is a "family" of t-distributions: each representative of the family is completely defined by the number of degrees of freedom, which is one less than the number of data values.

    This page titled 8.3: A Single Population Mean using the Student t-Distribution is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by OpenStax via source content that was edited to the style and standards of the LibreTexts platform.