
10.1: Unpaired z-Test


    We have two populations and two sample sets, one from each population:

                           Sample Mean              Sample std. dev.
    From population 1      \(\overline{x}_{1}\)     \(s_{1}\)
    From population 2      \(\overline{x}_{2}\)     \(s_{2}\)

    The population means are \(\mu_{1}\) and \(\mu_{2}\), and just as with the single-population test, there are 3 possible hypothesis tests:

    Two-tailed                        Right-tailed                      Left-tailed
    \(H_0: \mu_1 = \mu_2\)            \(H_0: \mu_1 \leq \mu_2\)         \(H_0: \mu_1 \geq \mu_2\)
    \(H_1: \mu_1 \neq \mu_2\)         \(H_1: \mu_1 > \mu_2\)            \(H_1: \mu_1 < \mu_2\)
    or                                or                                or
    \(H_0: \mu_1 - \mu_2 = 0\)        \(H_0: \mu_1 - \mu_2 \leq 0\)     \(H_0: \mu_1 - \mu_2 \geq 0\)
    \(H_1: \mu_1 - \mu_2 \neq 0\)     \(H_1: \mu_1 - \mu_2 > 0\)        \(H_1: \mu_1 - \mu_2 < 0\)

    In the second row the hypotheses are written in terms of a difference of means. Whichever form you use, give population 1 priority by writing it first; that way you won't mess up your signs or your interpretation.

    The test statistic to use in all cases[1] is

    \[\begin{equation*} z_{\rm test} = \frac{(\bar{x}_1 - \bar{x}_2)}{\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}} \tag{10.1} \end{equation*}\]

    where \(n_{1}\) = sample set size from population 1 and \(n_{2}\) = sample set size from population 2. This test statistic is based on a distribution of sample means as shown in Figure 10.1.

    Figure 10.1: The distribution of the difference of sample means \(\bar{x}_{1} - \bar{x}_{2}\) under the null hypothesis \(H_{0}: \mu_{1} - \mu_{2} = 0\). A one-tail example is shown here. The test statistic of Equation 10.1 follows from a \(z\)-transformation of this picture.
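
    For readers who want to check the arithmetic by computer, here is a minimal Python sketch of the test statistic above; the function and variable names are my own choices, not part of the text.

    ```python
    from math import sqrt

    def unpaired_z_test(xbar1, s1, n1, xbar2, s2, n2):
        """z test statistic for comparing two population means (unpaired z-test)."""
        # standard error of the difference of sample means
        se = sqrt(s1**2 / n1 + s2**2 / n2)
        return (xbar1 - xbar2) / se
    ```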

    Example 10.1: A researcher hypothesizes that the average number of sports that colleges offer for males is greater than the average number offered for females. Samples of the numbers of sports offered to each sex by randomly selected colleges are given here:

    Males (pop. 1) Females (pop. 2)
    \(n_{1} = 50\) \(n_{2} = 50\)
    \(\bar{x}_{1} = 8.6\) \(\bar{x}_{2} = 7.9\)
    \(s_{1} = 3.3\) \(s_{2} = 3.3\)

    At \(\alpha = 0.10\), is there enough evidence to support the claim?

    Solution:

    1. Hypotheses.

    \[H_{0}: \mu_1 \leq \mu_2 \hspace{.5in} H_{1}: \mu_1 > \mu_2 \mbox{ (claim)}\]

    Note that \(\bar{x}_{1} > \bar{x}_{2}\) (\(8.6 > 7.9\)), so \(H_{1}: \mu_1 > \mu_2\) is true on the face of it. If \(H_{1}\) were not true on the face of it, then \(H_{1}\) would simply be false without the need for any statistical test. With the direction of the hypotheses set correctly, the question becomes: is \(\bar{x}_{1}\) significantly greater than \(\bar{x}_2\)? The term “statistically significant” corresponds to “reject \(H_{0}\)”.

    2. Critical statistic.

    From the t Distribution Table, for a one-tailed test at \(\alpha = 0.10\), we find

    \[z_{\rm crit} = 1.282\]

    Note that \(z_{\rm crit}\) is positive because this is a right-tailed test. For left-tailed tests make \(z_{\rm crit}\) negative; for two-tailed tests use \(\pm z_{\rm crit}\).
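
    If you prefer software to a table lookup, the same critical values come from the standard normal quantile function. A minimal sketch, assuming Python with scipy available (neither is part of the text):

    ```python
    from scipy.stats import norm

    alpha = 0.10
    z_crit_right = norm.ppf(1 - alpha)    # right-tailed: +1.2816
    z_crit_left = norm.ppf(alpha)         # left-tailed:  -1.2816
    z_crit_two = norm.ppf(1 - alpha / 2)  # two-tailed:   +/- 1.6449
    print(round(z_crit_right, 3))         # 1.282
    ```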

    3. Test statistic.

    \[\begin{eqnarray*} z &=& \frac{(\bar{x}_1 - \bar{x}_2)}{\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}} \\ &=& \frac{(8.6 - 7.9)}{\sqrt{\frac{3.3^2}{50} + \frac{3.3^2}{50}}} \\ &=& 1.06 \end{eqnarray*}\]

    Using the Standard Normal Distribution Table, we can find the \(p\)-value. Since \(A(z) = A(1.06) = 0.3554\), \(p = 0.5000 - 0.3554 = 0.1446\).
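
    The same numbers can be checked with a few lines of Python (a sketch assuming scipy is available; the tiny difference from the table value comes from rounding \(z\) to two decimals before the lookup):

    ```python
    from math import sqrt
    from scipy.stats import norm

    n1, xbar1, s1 = 50, 8.6, 3.3  # males (population 1)
    n2, xbar2, s2 = 50, 7.9, 3.3  # females (population 2)

    z_test = (xbar1 - xbar2) / sqrt(s1**2 / n1 + s2**2 / n2)
    p_value = norm.sf(z_test)     # area in the right tail beyond z_test

    print(round(z_test, 2))       # 1.06
    print(round(p_value, 4))      # 0.1444 (0.1446 if z is rounded to 1.06 first)
    ```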

    4. Decision.

    Figure for Example 10.1: \(z_{\rm test} = 1.06\) relative to the rejection region beyond \(z_{\rm crit} = 1.282\).

    Do not reject \(H_{0}\) since \(z_{\rm test}\) is not in the rejection region. The \(p\)-value reflects this:

    \[ (p = 0.1446) > (\alpha = 0.10) \]
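
    As a quick check (mine, not from the text), the critical-value rule and the \(p\)-value rule give the same decision here, again assuming Python with scipy:

    ```python
    from scipy.stats import norm

    alpha, z_test = 0.10, 1.06
    z_crit = norm.ppf(1 - alpha)  # about 1.2816
    p_value = norm.sf(z_test)     # about 0.1446

    print(z_test > z_crit)        # False: z_test is not in the rejection region
    print(p_value < alpha)        # False: p-value exceeds alpha, same conclusion
    ```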

    5. Interpretation.

    There is not enough evidence, at \(\alpha = 0.10\) under a \(z\)-test, to support the claim that colleges offer more sports for males than for females.


    1. You could specify a non-zero null hypothesis, e.g. \(H_{0}: \mu_{1}-\mu_{2} = k\), in which case you would have \(z_{\rm test} = \frac{(\bar{x}_1 - \bar{x}_2) - k}{\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}}\). We won't consider that case in this course.

    This page titled 10.1: Unpaired z-Test is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Gordon E. Sarty via source content that was edited to the style and standards of the LibreTexts platform.