
9.2: The Dependent Samples t-Test Formula


    Before we can use the formula, it is important to understand what it can tell us and how it gets there. The dependent samples t-test formula looks fairly simple compared to some of the other inferential formulas we see in statistics. The obtained t-value tells us how large the difference is between pretest and posttest scores using standard error. Another way to say this is that it tells us how many standard errors apart the scores were at the two times of data collection. It does this by taking the difference in the scores from pretest to posttest and dividing that by the standard error of those differences. The difference is calculated in the numerator of the formula and the standard error is calculated in the denominator of the formula. Therefore, we can understand the formula’s main construction and outcomes as follows:

    \[t= \dfrac{\text{difference from pretest to posttest}}{\text{standard error of the difference}} = \text{how many standard errors of difference are observed from pretest to posttest} \nonumber \]

    The formula contains two symbols (\(n\) and \(d\)) and the six basic mathematical operations (adding, subtracting, squaring, square rooting, multiplying, and dividing). The two symbols \(n\) and \(d\) represent, and thus must be replaced with, specific values. The dependent samples t-test formula is as follows:

    \[t=\dfrac{\Sigma d}{\sqrt{\left[\dfrac{n\left(\Sigma d^2\right)-(\Sigma d)^2}{n-1}\right]}} \nonumber \]

    The two symbols are \(n\) which stands for sample size, and \(d\) which stands for difference from pretest to posttest. Difference (\(d\)) is calculated by subtracting each pretest score from its corresponding posttest score to see how different those two values are for each case. Remember, each case refers to data from one participant.

    Notice that the dependent samples t-test formula is not asking for means. Instead, the focus is on difference from oneself. The numerator asks for the sum of differences. The denominator asks for the sample size and some additional calculations using difference scores. Thus, the formula requires we know three basic things: sample size (there is only one group so there is only one sample size), the sum of differences, and the sum of the squared differences. We also see the \(df\) in the bottom of the formula; the \(df\) for a dependent samples t-test is just \(n – 1\) because there is only one sample of participants being used to produce the two groups of data (i.e. data from that sample at pretest and data from that same sample at posttest).
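    The formula above can be sketched directly in code once the difference scores are known. The function and data below are a minimal illustration with made-up difference scores, not values from the text:

    ```python
    import math

    def dependent_t(diffs):
        """Dependent samples t computed from a list of difference scores (d)."""
        n = len(diffs)                          # sample size
        sum_d = sum(diffs)                      # Σd (the numerator)
        sum_d_sq = sum(d ** 2 for d in diffs)   # Σd²
        # Denominator: the standard error of the difference
        se = math.sqrt((n * sum_d_sq - sum_d ** 2) / (n - 1))
        return sum_d / se

    # Hypothetical difference scores (posttest − pretest) for five cases
    print(round(dependent_t([2, 1, 2, 1, 1]), 3))  # 5.715
    ```

    Notice that only three quantities are needed inside the function: \(n\), \(\Sigma d\), and \(\Sigma d^2\), exactly as the formula requires.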

    Interpreting Obtained t-Values

    Obtained t-values have two components: a magnitude and a direction. The magnitude is the absolute value of t in this test; this value represents how many standard errors the posttest scores were from the pretest scores, on average. Thus, the t-value is used to assess the difference between the posttest mean and the pretest mean, though we don’t overtly see the means in the formula. The larger the t-value, the farther apart the pretest and posttest scores were. As the t-value increases, the evidence for the research hypothesis and against the null hypothesis also increases. Conversely, as the t-value decreases, the evidence for the research hypothesis and against the null hypothesis also decreases. Thus, researchers are generally hoping for larger t-values.

    The other component of t is its direction. When t is positive, it indicates that posttest scores were higher than pretest scores, on average. Conversely, when t is negative, it indicates that posttest scores were lower than pretest scores, on average. Remember, when testing a two-tailed (non-directional) hypothesis, only the magnitude needs to be considered to determine whether a result is statistically significant. However, when testing a one-tailed (directional) hypothesis, both magnitude and direction need to be considered to determine whether a result is significant. The direction of the results must match the direction of the hypothesis when using a one-tailed test of significance. For example, if it was hypothesized that posttest scores would be higher than pretest scores, the hypothesis would only be significantly supported if the differences were sufficiently large (meaning t exceeded the critical value) and the result was positive. Conversely, if it was hypothesized that posttest scores would be lower than pretest scores, the hypothesis would only be significantly supported if the differences were sufficiently large and the result was negative.
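    This two-part decision rule for a one-tailed test can be sketched as follows. The obtained and critical values here are illustrative, not taken from a t-table:

    ```python
    def one_tailed_significant(t_obtained, t_critical, predicted_direction):
        """Check both conditions for a one-tailed dependent samples t-test.

        predicted_direction: +1 if posttest scores were hypothesized to be
        higher than pretest scores, -1 if they were hypothesized to be lower.
        """
        large_enough = abs(t_obtained) > t_critical                       # magnitude check
        right_direction = (t_obtained > 0) == (predicted_direction > 0)   # direction check
        return large_enough and right_direction

    # Hypothetical obtained t of 2.50 against an illustrative critical value of 2.13
    print(one_tailed_significant(2.50, 2.13, +1))   # True: large enough and positive
    print(one_tailed_significant(-2.50, 2.13, +1))  # False: wrong direction
    ```

    A two-tailed test would use only the `large_enough` check; the direction check is what makes the one-tailed rule stricter about sign.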

    Note

    This interpretation of direction only applies when \(d\) is computed as posttest score minus pretest score. The formula, however, works well and will produce the same magnitude of t value if \(d\) is computed as pretest score minus posttest score. When this occurs, a positive result means pretest scores tended to be higher and a negative result means posttest scores tended to be higher.

    Reading Review 9.2

    1. What is being calculated and represented by the numerator of the dependent samples t test formula?
    2. What two things must be checked when determining whether a result from a one-tailed dependent samples t-test was statistically significant?

    Formula Components

    Now that we have taken some time to understand the construction of the dependent samples t-test formula, let’s focus on how to actually use it, starting with identifying its parts.

    In order to solve for t, three things must first be known:

    \(n\) = the sample size

    \(\Sigma d\) = the sum of differences

    \(\Sigma d^2\) = the sum of squared differences

    To find the difference scores, subtract the pretest score from the posttest score for each case. This can be summarized as follows:

    \[d=X_{\text {post }}-X_{\text {pre }} \nonumber \]

    Keep in mind that the symbol in the difference formula is \(X\) (which refers to an individual raw score), not \(\bar{X}\) (which would refer to the mean). Thus, differences are calculated separately for each case using this formula before they are squared and/or summed for use in the dependent samples t-test formula.
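    A small sketch of this case-by-case subtraction, using hypothetical paired scores (the values are invented for illustration):

    ```python
    pretest = [3, 5, 4, 6, 5]   # hypothetical pretest scores, one per case
    posttest = [5, 6, 6, 7, 6]  # hypothetical posttest scores, same cases in order

    # d = X_post − X_pre, computed separately for each case
    d = [post - pre for pre, post in zip(pretest, posttest)]
    print(d)  # [2, 1, 2, 1, 1]
    ```

    Because the test is dependent, the pairing matters: each pretest score must be subtracted from the posttest score of the same participant, so the two lists must stay in the same case order.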

    Formula Steps

    The steps are shown in order and categorized into two sections: A) preparation and B) solving. I recommend using this categorization to help you organize, learn, and properly use all inferential formulas. Preparation steps refer to any calculations that need to be done before values can be plugged into the formula. For the dependent samples t-test this includes finding the three components of the formula: \(n\), \(\Sigma d\), and \(\Sigma d^2\). Once those are known, the steps in section B can be used to yield the obtained value for the formula. The symbol for the obtained value for each t-test is t. Follow these steps, in the specified order, to find t.

    Section A: Preparation

    1. Find \(n\) for the sample.
    2. Find \(\Sigma d \):
      1. Find \(d\) for each member of the sample by subtracting their pretest score from their posttest score.
      2. Then, sum all the difference scores.
    3. Find \(\Sigma d^2\) by squaring each difference score and then summing those squared values.

    Section B: Solving

    1. Write the formula with the values found in section A plugged into their respective locations. The numerator is completed as part of this step so we can move on to the denominator.
    2. Solve for the denominator as follows:
      1. Multiply the sample size by the sum of squared differences as shown in the upper left section of the denominator.
      2. Square the sum of differences as shown in the upper right section of the denominator.
      3. Subtract the squared sum of differences (the result of Step 2b) from the sum of squared differences which has been weighted by the sample size (the result of Step 2a) to complete the top section of the denominator.
      4. Find the \(df\) by subtracting 1 from the sample size, as shown in the bottom of the denominator.
      5. Divide the top part of the denominator (the result of Step 2c) by the bottom of the denominator (the result of Step 2d).
      6. Square root the results of step 2e to get the standard error of the difference. This completes the steps for the denominator.
    3. Divide the sum of differences (the numerator) by the standard error of the differences (the denominator which was completed in step 2f) to get the obtained t value.
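    The preparation and solving steps above can be put together as one sketch. The pretest and posttest scores are hypothetical, chosen only to make the arithmetic easy to follow:

    ```python
    import math

    pretest = [3, 5, 4, 6, 5]   # hypothetical scores at pretest
    posttest = [5, 6, 6, 7, 6]  # hypothetical scores at posttest, same cases

    # Section A: preparation
    n = len(pretest)                                          # A1: sample size
    d = [post - pre for pre, post in zip(pretest, posttest)]  # A2a: d per case
    sum_d = sum(d)                                            # A2b: Σd
    sum_d_sq = sum(x ** 2 for x in d)                         # A3: Σd²

    # Section B: solving
    top = n * sum_d_sq - sum_d ** 2   # B2a–B2c: n(Σd²) − (Σd)²
    df = n - 1                        # B2d: degrees of freedom
    se = math.sqrt(top / df)          # B2e–B2f: standard error of the difference
    t = sum_d / se                    # B3: obtained t

    print(round(t, 3))  # 5.715
    ```

    For these invented data, \(\Sigma d = 7\), \(\Sigma d^2 = 11\), and \(n = 5\), so the denominator is \(\sqrt{(55-49)/4} \approx 1.225\) and \(t \approx 5.715\).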

    This page titled 9.2: The Dependent Samples t-Test Formula is shared under a CC BY-NC-SA 4.0 license.