
9.2: Independent Samples t-test Equation


    The test statistic for our independent samples \(t\)-test takes on the same logical structure and format as our other \(t\)-tests: our observed effect (one mean subtracted from the other mean), all divided by the standard error:

    \[t=\dfrac{(\overline{X_{1}}-\overline{X_{2}})}{\text{SE}} \nonumber \]

    Calculating our standard error, as we will see next, is where the biggest difference between this \(t\)-test and the other \(t\)-tests appears. However, once we do calculate it and use it in our test statistic, everything else goes back to normal. Our decision criterion is still to compare our obtained test statistic to our critical value, and our interpretation, based on whether or not we reject the null hypothesis, is unchanged, as is the information needed for a complete conclusion.

    The following explains the conceptual mathematics behind this new denominator. If it makes sense to you, it will help you understand what the \(t\)-test is doing and make the result easier to interpret. However, understanding the mathematical reasoning behind the standard error is not necessary to calculate it. To calculate it, you just need to plug in the correct numbers and follow the order of operations!

    Estimated Standard Error

    If you are here for the reasoning underlying the denominator, here you go!

    Recall that the standard error is the average distance between any given sample mean and the center of its corresponding sampling distribution (a distribution of means from many samples drawn from the same population), and it is a function of the standard deviation and the sample size. This definition and interpretation hold true for our independent samples \(t\)-test as well, but because we are working with two samples drawn from two populations, we must first combine their estimates of standard deviation – or, more accurately, their estimates of variance (variance is the standard deviation squared) – into a single value that we can then use to calculate our standard error.

    The combined estimate of variance using the information from each sample is called the pooled variance and is denoted \(s_{p}^{2}\); the subscript \(p\) serves as a reminder that it is the pooled variance. The term “pooled variance” is a literal name because we are simply pooling, or combining, the information on variance – the Sum of Squares and Degrees of Freedom – from both of our samples into a single number. The result is a weighted average of the observed sample variances, with each weight determined by the sample size, and it will always fall between the two observed variances. The computational formula for the pooled variance is:

    \[s_{p}^{2}=\dfrac{\left(n_{1}-1\right) s_{1}^{2}+\left(n_{2}-1\right) s_{2}^{2}}{n_{1}+n_{2}-2} \nonumber \]

    This formula can look daunting at first, but it is in fact just a weighted average. Note that each subscript (the little number just below and to the right of some of the symbols) indicates which sample the symbol represents (the first sample or the second sample).
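    To make the arithmetic concrete, here is a minimal Python sketch of the pooled variance calculation; the function name and the sample statistics used are hypothetical, not from the text.

```python
# Minimal sketch of the pooled variance formula above (hypothetical values).

def pooled_variance(n1, s1_sq, n2, s2_sq):
    """Weighted average of two sample variances, weighted by each sample's degrees of freedom."""
    return ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)

# Hypothetical samples: n1 = 10 with variance 4.0, n2 = 15 with variance 9.0.
sp2 = pooled_variance(10, 4.0, 15, 9.0)
print(sp2)  # about 7.04, which falls between 4.0 and 9.0, as a weighted average must
```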

    Unfortunately, that is just part of the denominator. Once we have our pooled variance calculated, we can drop it into the equation for our standard error:

    \[\text{SE}=\sqrt{\left[\dfrac{\left(n_{1}-1\right) * s_{1}^{2} + \left(n_{2}-1\right) * s_{2}^{2}}{n_{1}+n_{2}-2}\right] * \left(\dfrac{1}{n_{1}} + \dfrac{1}{n_{2}}\right)} \nonumber \]

    Looking at that, we can now see that, once again, we are simply combining two pieces of information – the pooled variance and the two sample sizes – with no new logic or interpretation required. Once the standard error is calculated, it goes in the denominator of our test statistic:

    \[t=\dfrac{(\overline{X_{1}}-\overline{X_{2}})}{\text{SE}} \nonumber \]
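    Continuing the Python sketch from above, the standard error step is just a square root applied to the pooled variance scaled by the sample sizes; the numbers below are the same hypothetical values as before.

```python
# Continuation of the sketch above: plug the pooled variance into the
# standard error formula for the difference between two means (hypothetical values).
import math

n1, n2 = 10, 15
sp2 = (9 * 4.0 + 14 * 9.0) / 23            # pooled variance from the previous sketch
se = math.sqrt(sp2 * (1 / n1 + 1 / n2))    # estimated standard error of the difference
print(se)  # about 1.08
```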

    Independent t-test Formula

    As you can see, we are once again not done yet! That entire expression is only the denominator. And once again, although this formula looks different, it is accomplishing the same task of standardizing the difference between the means.

    The final formula to compare two independent means with a t-test is:

    \[t=\dfrac{(\overline{X_{1}}-\overline{X_{2}})}{\sqrt{\left[\dfrac{\left(n_{1}-1\right) * s_{1}^{2} + \left(n_{2}-1\right) * s_{2}^{2}}{n_{1}+n_{2}-2}\right] * \left(\dfrac{1}{n_{1}} + \dfrac{1}{n_{2}}\right)}} \nonumber \]

    I promise that this is not as hard as it looks! Let’s see an example in action. If you lose track of this page, remember that all formulas are listed at the back of the book in the Common Formulas page.
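    If you would also like to check your hand calculations with software, here is a minimal Python sketch that applies the full formula to two made-up samples; the data and variable names are hypothetical, not from the text.

```python
# Full independent samples t-test formula applied to hypothetical raw data.
from statistics import mean, variance  # variance() is the sample (n - 1) variance
import math

group1 = [4.0, 5.0, 6.0, 5.5, 4.5]
group2 = [6.0, 7.5, 8.0, 7.0, 6.5, 7.0]

n1, n2 = len(group1), len(group2)
pooled = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
se = math.sqrt(pooled * (1 / n1 + 1 / n2))
t = (mean(group1) - mean(group2)) / se

# Compare t to the critical value for n1 + n2 - 2 = 9 degrees of freedom.
print(round(t, 3))
```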


    This page titled 9.2: Independent Samples t-test Equation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Michelle Oja.