
10.2: Confidence Interval for Difference of Means (Large Samples)


    Swapping the roles of sample and population in sampling theory gives the confidence interval corresponding to the hypothesis test of Section 10.1:

    \[(\bar{x}_1 - \bar{x}_2) - E < (\mu_1 - \mu_2) < (\bar{x}_1 - \bar{x}_2) + E\]

    where

    \[E = z_{\cal C}\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}\]
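    The interval can be computed directly from the summary statistics. A minimal sketch in Python (the function name `ci_diff_means` is ours, not part of any library):

    ```python
    import math

    def ci_diff_means(xbar1, s1, n1, xbar2, s2, n2, z):
        """Large-sample confidence interval for mu1 - mu2.

        Returns (lower, upper) where the interval is
        (xbar1 - xbar2) +/- z * sqrt(s1^2/n1 + s2^2/n2).
        """
        diff = xbar1 - xbar2
        E = z * math.sqrt(s1**2 / n1 + s2**2 / n2)  # margin of error
        return diff - E, diff + E
    ```

    With the data of Example 10.1 (\(\bar{x}_1 = 88.42\), \(s_1 = 5.62\), \(n_1 = 50\); \(\bar{x}_2 = 80.61\), \(s_2 = 4.83\), \(n_2 = 50\)) and \(z = 1.960\), this reproduces the interval worked out by hand below.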

    Example 10.2 : Find the 95\(\%\) confidence interval for the difference between the means for the data of Example 10.1.

    Solution : First, recall our data :

    \(\bar{x}_1 = 88.42\), \(s_1 = 5.62\), \(n_1 = 50\)

    \(\bar{x}_2 = 80.61\), \(s_2 = 4.83\), \(n_2 = 50\)

    From the t Distribution Table, look up the \(z\) for the 95\(\%\) confidence interval: \(z_{95\%} = 1.960\). Then compute:

    \[\bar{x}_1 - \bar{x}_2 = 88.42 - 80.61 = 7.81\]

    and

    \[\begin{eqnarray*} E & = & z_{95\%}\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}} \\ & = & 1.960\sqrt{\frac{5.62^2}{50} + \frac{4.83^2}{50}} \\ & = & 2.05 \end{eqnarray*}\]

    so

    \[7.81 - 2.05 < (\mu_1 - \mu_2) < 7.81 + 2.05\]

    or

    \[5.76 < (\mu_1 - \mu_2) < 9.86\]

    with 95\(\%\) confidence. Notice that it is also correct to write \(\mu_{1} - \mu_{2} = 7.81 \pm 2.05\) with 95\(\%\) confidence.

    This is a good point to make an important observation. A two-tailed hypothesis test at a given \(\alpha\) is complementary to a confidence interval of \({\cal{C}} = 1 - \alpha\) in the sense that if 0 is in the confidence interval then the complementary hypothesis test will not reject \(H_{0}\).
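    This complementarity can be checked numerically. The sketch below uses the large-sample \(z\) approximation with simulated data (the function name `test_and_ci`, the seed, and the sample sizes are our choices for illustration); rejecting \(H_{0}: \mu = 0\) is algebraically equivalent to \(0\) falling outside the confidence interval:

    ```python
    import math
    import random
    import statistics

    Z_CRIT = 1.960  # two-tailed critical value for alpha = 0.05

    def test_and_ci(sample):
        """Return (reject H0: mu = 0?, is 0 inside the 95% CI?)."""
        xbar = statistics.mean(sample)
        se = statistics.stdev(sample) / math.sqrt(len(sample))
        reject = abs(xbar / se) > Z_CRIT           # two-tailed z test
        lo, hi = xbar - Z_CRIT * se, xbar + Z_CRIT * se
        return reject, lo < 0.0 < hi

    random.seed(1)
    for mu_true in (0.0, 0.5):                     # H0 true, then H0 false
        sample = [random.gauss(mu_true, 1.0) for _ in range(200)]
        reject, zero_in_ci = test_and_ci(sample)
        # rejecting H0 is exactly equivalent to 0 lying outside the CI
        assert reject == (not zero_in_ci)
    ```

    The equivalence holds because \(|\bar{x}/\mathrm{se}| > z\) is the same inequality as \(|\bar{x}| > z \cdot \mathrm{se}\), which is the same as \(0 \notin (\bar{x} - z\,\mathrm{se},\ \bar{x} + z\,\mathrm{se})\).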

    Let’s illustrate this principle with a one-sample \(t\)-test of \(H_{0}: \mu = k\) with \(k = 0\). (The hypothesized value \(k\) must equal \(0\) for the principle, as stated above, to apply.) Look at the two possible outcomes:

    Case 1 : 0 in the confidence interval, fail to reject \(H_{0}\). In the hypothesis test you would find :

    (Figure 10.1)

    In the confidence interval calculation you would find:

    (Figure 10.2)

    Putting the two pictures together gives:

    (Figure 10.3)

    Notice that \(0\) is in the confidence interval exactly when \(\bar{x}\) is not in the rejection region. The red distribution that defines the confidence interval is just the blue (identical) distribution slid over from \(0\) to \(\bar{x}\). The distance \(A\) is the same because \({\cal{C}} = 1 - \alpha\).

    Case 2 : 0 not in the confidence interval, reject \(H_{0}\). In this case the combined picture looks like:

    (Figure 10.4)

    Before we can consider the independent sample \(t\)-test, we need a tool for checking whether the two population variances are equal. The formula for the \(t\) test statistic depends on whether the two variances are equal or not. So let’s take a look at comparing population variances.


    This page titled 10.2: Confidence Interval for Difference of Means (Large Samples) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Gordon E. Sarty via source content that was edited to the style and standards of the LibreTexts platform.