
9.3: Distinguishing Parts of the Dependent Samples t-Test Formula


    Distinguishing Parts of the Dependent Samples t-Test Formula

Though the dependent samples t-test formula is one of the simpler inferential formulas, it can still cause some confusion. One common mistake is confusing these two components:

    \((\Sigma d)^2\) = the squared sum of differences

    \(\Sigma d^2\) = the sum of squared differences

These look and sound quite similar, but order of operations dictates different steps for each, so they are not the same thing. Here is a reminder of order of operations as it applies to each so you can note the distinction between them:

Component: \((\Sigma d)^2\) = the squared sum of differences

Steps to Solve:

1. Find each difference by subtracting each pretest score from its posttest score.

2. Sum the differences.

3. Square the sum of differences.

Component: \(\Sigma d^2\) = the sum of squared differences

Steps to Solve:

1. Find each difference by subtracting each pretest score from its posttest score.

2. Square each difference.

3. Sum the squared differences.
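The distinction can be made concrete with a short Python sketch (the difference scores here are small illustrative values, not taken from the chapter's data):

```python
# Illustrative difference scores (hypothetical values, not from the text)
d = [2, -1, 3]

# (Σd)²: sum the differences FIRST, then square the sum
squared_sum = sum(d) ** 2  # (2 + -1 + 3)² = 4² = 16

# Σd²: square EACH difference first, then sum the squares
sum_of_squares = sum(x ** 2 for x in d)  # 4 + 1 + 9 = 14

print(squared_sum, sum_of_squares)  # 16 14
```

Because the two values differ (16 versus 14 here), swapping one for the other inside the t-test formula produces a wrong result.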

    Example of How to Test a Hypothesis by Computing t

Let us assume that a researcher believed that employees would have different levels of motivation after a pizza party compared to before. Suppose that motivation was measured on a scale of 1 to 10, where higher scores indicate greater motivation, in a sample of 10 employees, and that data were collected twice: once the day before the company-sponsored pizza party (the pretest) and again the day after the party (the posttest). Assume that Data Set 9.1 includes data from the two waves of measurement. Keep in mind that each participant (i.e., case) will have two scores when using a dependent samples t-test: a pretest score and a posttest score. Each row contains the data from a single case. Let’s use this information to follow the steps in hypothesis testing.

    Data Set 9.1. Motivation Scores (n = 10)
    Posttest Pretest
    6.00 7.00
    5.00 4.00
    4.00 5.00
    7.00 6.00
    5.00 5.00
    4.00 4.00
    6.00 5.00
    3.00 6.00
    5.00 2.00
    8.00 7.00
    Note

    Putting posttest scores in the first column and pretest scores in the second can make calculations of d easier. However, it is also fine to show pretest scores first to reflect the timing of the waves of data collection.

    Steps in Hypothesis Testing

    In order to test a hypothesis, we must follow these steps:

    1. State the hypothesis.

A non-directional hypothesis is the best fit because the goal is to see if there is a difference in motivation without specifying whether it will be higher or lower at posttest. A summary of the research hypothesis and the corresponding null hypothesis, in sentence and symbol format, is shown below. However, researchers often state only the research hypothesis, using a format like this: It is hypothesized that the mean motivation will be different after a pizza party compared to before. If the format shown in the table below is used instead, it must be made clear that what is being stated is a research hypothesis and not a result.

    Non-Directional Hypothesis for a Dependent Samples t-Test
    Research hypothesis The mean motivation after a pizza party will not be equal to the mean motivation before the pizza party. \(H_A: \mu_{\text {post }} \neq \mu_{\text {pre }}\)
    Null hypothesis The mean motivation after a pizza party will be equal to the mean motivation before the pizza party. \(H_0: \mu_{\text {post }}=\mu_{\text {pre }}\)

    2. Choose the inferential test (formula) that best fits the hypothesis.

    The scores of two dependent groups of data from the same sample are being compared so the appropriate test is a dependent samples t-test.

    3. Determine the critical value.

In order to determine the critical value, three things must be identified: 1. the alpha level, 2. whether the hypothesis requires a one-tailed test or a two-tailed test, and 3. the degrees of freedom (\(df\)).

    The alpha level is often set at .05 unless there is reason to adjust it such as when multiple hypotheses are being tested in one study or when a Type I Error could be particularly problematic. The default alpha level can be used for this example because only one hypothesis is being tested and there is no clear indication that a Type I Error would be especially problematic. Thus, alpha can be set to 5%, which can be summarized as \(\alpha \) = .05.

    The hypothesis is non-directional so a two-tailed test should be used.

The \(df\) must also be calculated. Each inferential test has a unique formula for calculating \(df\). The formula for \(df\) for the dependent samples t-test is \(n-1\), which appears in the denominator of the dependent samples t-test formula. There is only one sample being tested twice, so there is only one sample size to consider. There are 10 cases in Data Set 9.1, so \(n = 10\). Thus, \(df\) = 10 – 1, so the \(df\) for this scenario is 9.

These three pieces of information are used to locate the critical value for the test. The full tables of critical values for t-tests are located in Appendix D. Below is an excerpt of the section of the t-table that fits the current hypothesis and data. Under the conditions of an alpha level of .05, a two-tailed test, and 9 degrees of freedom, the critical value is 2.262.

Critical Values for a Two-Tailed Test

Degrees of Freedom   α = 0.05   α = 0.01

9                    2.262      3.250

    The critical value represents the threshold of evidence needed to be confident a hypothesis is likely true. The obtained value (which is called t in a t-test) is the amount of evidence present. When using a two-tailed test, only the absolute value of the critical value must be considered. Thus, in order for the result to significantly support the hypothesis in this example, the absolute value of t needs to exceed the critical value of 2.262.

    Degrees of Freedom for a Dependent Samples t-Test

    Degrees of freedom (\(df\)) tell how much information you have that is free to vary. The degrees of freedom for a dependent samples t-test is equal to the sample size minus 1. It appears in the denominator of the dependent samples t-test like this: \[n-1 \nonumber \]

    This reflects the sample size minus the number of unique groups. Because only one sample is being tested in a dependent samples t-test, there is only one n to consider and only one time the “subtract 1” adjustment must be used.

    4. Calculate the test statistic.

A test statistic can also be referred to as an obtained value. The formula needed to find the test statistic t for this scenario is as follows:

    \[t=\dfrac{\Sigma d}{\sqrt{\left[\dfrac{n\left(\Sigma d^2\right)-(\Sigma d)^2}{n-1}\right]}} \nonumber \]

    Section A: Preparation

    Start each inferential formula by identifying and solving for the pieces that must go into the formula. For the dependent samples t-test, this preparatory work is as follows:

    1. Find \(n\).

    This value is found using Data Set 9.1 and is summarized as \(n\) = 10

    2. Find \(\Sigma d\) for each member of the sample by subtracting their pretest score from their posttest score and then summing those values.

    The \(d\) column shows the result of subtracting the pretest score from the posttest score for each row. The total of these values is shown at the bottom of the table.

    Posttest Pretest \(d\)
    6.00 7.00 -1.00
    5.00 4.00 1.00
    4.00 5.00 -1.00
    7.00 6.00 1.00
    5.00 5.00 0.00
    4.00 4.00 0.00
    6.00 5.00 1.00
    3.00 6.00 -3.00
    5.00 2.00 3.00
    8.00 7.00 1.00
    \(\Sigma d\) = 2.00

    This value is found using Data Set 9.1 and is summarized as \(\Sigma d\) = 2.00

    3. Find \(\Sigma d^2\) by squaring each difference score and then summing those values.

    The \(d^2\) column shows each \(d\) value after it has been squared. Keep in mind that negative numbers become positive when they are squared. The total of these squared \(d\) values is shown at the bottom of the table.

    Posttest Pretest \(d\) \(d^2\)
    6.00 7.00 -1.00 1.00
    5.00 4.00 1.00 1.00
    4.00 5.00 -1.00 1.00
    7.00 6.00 1.00 1.00
    5.00 5.00 0.00 0.00
    4.00 4.00 0.00 0.00
    6.00 5.00 1.00 1.00
    3.00 6.00 -3.00 9.00
    5.00 2.00 3.00 9.00
    8.00 7.00 1.00 1.00
    \(\Sigma d^2\) = 24.00

    This value is found using Data Set 9.1 and is summarized as \(\Sigma d^2\) = 24.00

    Now that the pieces needed for the formula have been found, we can move to Section B.

    Section B: Solving

    Now that the preparatory work is done, the formula can be used to compute the obtained value. For the dependent samples t-test, this work is as follows:

    1. Write the formula with the values found in section A plugged into their respective locations.

    Writing the formula first in symbol format before filling it in with the values can help you recognize and memorize it. Here is the formula with the symbols:

    \[t=\dfrac{\Sigma d}{\sqrt{\left[\dfrac{n\left(\Sigma d^2\right)-(\Sigma d)^2}{n-1}\right]}} \nonumber \]

    Here is the formula with values filled into their appropriate locations in place of their symbols:

    \[t=\dfrac{2.00}{\sqrt{\left[\dfrac{10(24.00)-(2.00)^2}{10-1}\right]}} \nonumber \]

    2. Solve for the denominator as follows:

    Note

    Steps will appear in bold to show when they have occurred.

    1. Multiply the sample size by the sum of squared differences as shown in the upper left section of the denominator. \[t=\dfrac{2.00}{\sqrt{\left[\dfrac{\mathbf{240.00}-(2.00)^2}{10-1}\right]}} \nonumber \]
    2. Square the sum of differences as shown in the upper right section of the denominator. \[t=\dfrac{2.00}{\sqrt{\left[\dfrac{240.00-\mathbf{4.00}}{10-1}\right]}} \nonumber \]
3. Subtract the squared sum of differences (the result of Step 2b) from the sum of squared differences weighted by the sample size (the result of Step 2a) to complete the top section of the denominator. \[t=\dfrac{2.00}{\sqrt{\left[\dfrac{\mathbf{236.00}}{10-1}\right]}} \nonumber \]
    4. Find the \(df\) by subtracting 1 from the sample size, as shown in the bottom of the denominator. \[t=\dfrac{2.00}{\sqrt{\left[\dfrac{236.00}{\mathbf{9}}\right]}} \nonumber \]
    5. Divide the top part of the denominator (the result of Step 2c) by the bottom of the denominator (the result of Step 2d). \[t=\dfrac{2.00}{\sqrt{[\mathbf{26.2222 \ldots ]}}} \nonumber \]
    6. Square root the results of step 2e to get the standard error of the difference. This completes the steps for the denominator. \[t=\dfrac{2.00}{\mathbf{5.1207 \ldots}} \nonumber \]

    3. Divide the sum of differences (the numerator) by the standard error of the differences (the denominator which was completed in step 2f) to get the obtained t value, as follows:

    \[\begin{gathered}
    t=\dfrac{2.00}{5.1207 \ldots} \\
    t=0.3905 \ldots \\
    t \approx 0.39
    \end{gathered} \nonumber \]

The final result can be rounded to the hundredths place. This result, known as a test statistic or t-value, can also be referred to by the general term “obtained value.” The result is positive, meaning that posttest scores were higher than pretest scores, on average. However, the magnitude of the obtained value is quite low, so the differences from pretest to posttest, overall, were small.
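The computation above can be sketched in a few lines of Python (a minimal check using the Data Set 9.1 scores; the variable names are ours, not from the text):

```python
from math import sqrt

# Data Set 9.1 scores
posttest = [6, 5, 4, 7, 5, 4, 6, 3, 5, 8]
pretest = [7, 4, 5, 6, 5, 4, 5, 6, 2, 7]

# Section A: preparatory pieces
d = [post - pre for post, pre in zip(posttest, pretest)]  # difference scores
n = len(d)                         # n = 10
sum_d = sum(d)                     # Σd = 2.00
sum_d_sq = sum(x ** 2 for x in d)  # Σd² = 24.00

# Section B: plug the pieces into the dependent samples t-test formula
t = sum_d / sqrt((n * sum_d_sq - sum_d ** 2) / (n - 1))
print(round(t, 2))  # 0.39
```

Note that the code subtracts each pretest score from its posttest score, matching the order used throughout the chapter.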

    5. Apply a decision rule and determine whether the result is significant.

    Assess whether the obtained value for t exceeds the critical value as follows: The critical value is 2.262.

    The obtained t-value is 0.39

    The obtained t-value does not exceed (i.e. is less than) the critical value. Therefore, the result is not statistically significant and does not support the hypothesis.

    Note

We only needed to check whether the magnitude of the obtained value exceeded that of the critical value because the hypothesis was non-directional and, thus, required a two-tailed test of significance.

    6. Calculate the effect size and/or other relevant secondary analyses.

When a result is significant, an effect size should be computed. When a result is not significant, the effect size is not particularly useful and is generally not reported. Nevertheless, we will practice calculating it for a dependent samples t-test here, despite the non-significant result, so we can learn how to compute it.

The effect size that is appropriate for t-tests under desirable conditions is known as Cohen’s \(d\) (Cohen, 1988). The version of the formula varies depending upon the test used. The formula for Cohen’s \(d\) when working with a dependent samples t-test is as follows:

    \[d=\dfrac{\bar{X}_d}{S_d} \nonumber \]

    Cohen’s \(d\), when used for a dependent samples t-test, requires two parts: the mean of the differences and the standard deviation of the differences. The mean difference is divided by the standard deviation of the differences to yield the effect size.

    First, we must find the mean of differences using the following formula:

    \[\bar{X}_d=\dfrac{\Sigma d}{n} \nonumber \]

    The sum of differences was found earlier to be 2.00. We can see the differences (\(d\)) in the third column below and the sum of those differences at the bottom of that column.

    Posttest Pretest \(d\)
    6.00 7.00 -1.00
    5.00 4.00 1.00
    4.00 5.00 -1.00
    7.00 6.00 1.00
    5.00 5.00 0.00
    4.00 4.00 0.00
    6.00 5.00 1.00
    3.00 6.00 -3.00
    5.00 2.00 3.00
    8.00 7.00 1.00
    \(\Sigma d\) = 2.00

    The sample size is 10. We must plug those into the formula to find the mean of differences as follows:

    \[\bar{X}_d=\dfrac{\Sigma d}{n}=\dfrac{2.00}{10}=0.20 \nonumber \]

    Next, we must find the standard deviation of the differences. The formula to do so is as follows:

    \[S_{d}=\sqrt{\dfrac{\sum\left(d-\bar{X}_{d}\right)^2}{n-1}} \nonumber \]

    To use this standard deviation formula we must follow these steps:

1. Find the difference between each \(d\) value and the mean of the \(d\) values. This is shown in the second column below.
2. Square each of these differences. This is shown in the third column below.
3. Sum the squared deviations. This is shown at the bottom of the third column.
\(d\) \(d-\bar{X}_d\) \((d-\bar{X}_d)^2\)
-1.00 -1.00 – 0.20 = -1.20 \((-1.20)^2\) = 1.44
1.00 1.00 – 0.20 = 0.80 \((0.80)^2\) = 0.64
-1.00 -1.00 – 0.20 = -1.20 \((-1.20)^2\) = 1.44
1.00 1.00 – 0.20 = 0.80 \((0.80)^2\) = 0.64
0.00 0.00 – 0.20 = -0.20 \((-0.20)^2\) = 0.04
0.00 0.00 – 0.20 = -0.20 \((-0.20)^2\) = 0.04
1.00 1.00 – 0.20 = 0.80 \((0.80)^2\) = 0.64
-3.00 -3.00 – 0.20 = -3.20 \((-3.20)^2\) = 10.24
3.00 3.00 – 0.20 = 2.80 \((2.80)^2\) = 7.84
1.00 1.00 – 0.20 = 0.80 \((0.80)^2\) = 0.64
\(\Sigma(d-\bar{X}_d)^2\) = 23.60

    Next, divide the sum of deviations by the adjusted sample size. Then, square root to find the standard deviation as follows:

    \[S_d=\sqrt{\dfrac{\Sigma\left(d-\bar{X}_{d}\right)^2}{n-1}}=\sqrt{\dfrac{23.60}{10-1}}=\sqrt{\dfrac{23.60}{9}}=\sqrt{2.6222 \ldots}=1.6193 \ldots \nonumber \]

    Now we can put the pieces together to find the effect size as follows:

\[\begin{array}{r}
d=\dfrac{\bar{X}_d}{S_d} \\
d=\dfrac{0.20}{1.6193 \ldots} \\
d=0.1235 \ldots
\end{array} \nonumber \]
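The effect size calculation can likewise be sketched in Python (a minimal check using the difference scores from Data Set 9.1; variable names are ours):

```python
from math import sqrt

# Difference scores (d) from Data Set 9.1
d_scores = [-1, 1, -1, 1, 0, 0, 1, -3, 3, 1]
n = len(d_scores)

mean_d = sum(d_scores) / n                     # mean of differences = 0.20
ss = sum((x - mean_d) ** 2 for x in d_scores)  # sum of squared deviations = 23.60
s_d = sqrt(ss / (n - 1))                       # standard deviation of differences

cohens_d = mean_d / s_d
print(round(cohens_d, 2))  # 0.12
```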

    Effect sizes, like most values with decimals, are often rounded and reported to the hundredths place. Thus, this effect size is reported as \(d = 0.12\). Cohen’s d can be interpreted using the following rules of thumb (Cohen, 1988; Navarro, 2014):

    Interpreting Cohen’s d Effect Sizes
    ~0.80 Large effect
    ~0.50 Moderate effect
    ~0.20 Small effect

    The rules of thumb are general guidance and do not dictate precise or required interpretations. However, the rules of thumb are useful in providing an initial guideline for interpreting effect sizes. Following these rules of thumb, the current finding of d = 0.12 would be considered a small effect, at best. This is unsurprising because the result was not significant so we should not expect a particularly large effect size.

7. Report the results in American Psychological Association (APA) format.

    Results for inferential tests are often best summarized using a paragraph that states the following:

    1. the hypothesis and specific inferential test used,
    2. the main results of the test and whether they were significant,
    3. any additional results that clarify or add details about the results,
    4. whether the results support or refute the hypothesis.

    Results should be reported in past tense.

Finally, APA format requires a specific format be used for reporting the results of a test. The descriptive statistics needed are the means and standard deviations for the posttest and for the pretest. The steps to computing these descriptive statistics are not shown in this chapter, though their values are reported in the APA formatted summary example. For a review of how to calculate means, see Chapter 3. For a review of how to calculate standard deviations, see Chapter 4. The information needed from the inferential test includes the degrees of freedom, the obtained value, and the p-value.

    Following this, the results for our hypothesis with Data Set 9.1 can be written as shown in the summary example.

    APA Formatted Summary Example

    A dependent samples t-test was used to test the hypothesis that the mean motivation would be different after a pizza party compared to before. Contrary to the hypothesis, the mean motivation was not significantly different at posttest (\(M\) = 5.30; \(SD\) = 1.49) compared to pretest (\(M\) = 5.10; \(SD\) = 1.52), t(9) = 0.39, \(p > .05\). The Cohen’s \(d\) effect size of 0.12 was very small.

This succinct summary in APA format provides a lot of detail and uses specific symbols in a particular order. To understand how to read and create a summary like this, review the detailed walk-through in Chapter 7. For a brief review of the structure for APA format, see the summary below. Note: When a result is not significant, it is not necessary to report effect sizes. However, it is included in the summary example to show how it would be reported and interpreted for this inferential test, if desired.

    Summary of APA-Formatted Results for the Dependent Samples t-Test

    In your APA write-up for a dependent samples t-test you should state:

    1. Which test was used and the hypothesis which warranted its use.
    2. Whether the aforementioned hypothesis was supported or not. To do so properly, four components must be reported:
      1. The mean and standard deviation for both the posttest scores and the pretest scores
      2. The test results in an evidence string as follows: t(df) = obtained value
  3. The significance portion of the evidence string as \(p < .05\) if significant or \(p > .05\) if not significant
      4. The effect size, if the result was significant

    Anatomy of an Evidence String

    The following breaks down what each part represents in the evidence string for Data Set 9.1:

t(9) = 0.39, \(p > .05\)

  • t — symbol for the test
  • (9) — degrees of freedom
  • 0.39 — obtained value
  • \(p > .05\) — p-value
    Note

    When a result is not significant, the significance portion of the evidence string can be written as “\(p > .05\), ns.” The ns portion is shorthand for “not significant” which can be helpful for audiences who are less familiar with APA-formatted summaries.
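As a sketch, the evidence string could be assembled programmatically (the helper name and structure below are hypothetical, not an APA tool):

```python
def evidence_string(df, t_obtained, critical_value, alpha=.05):
    """Assemble a t-test evidence string of the form t(df) = value, p ..."""
    if abs(t_obtained) > critical_value:
        sig = f"p < {str(alpha).lstrip('0')}"
    else:
        # add "ns" shorthand for "not significant"
        sig = f"p > {str(alpha).lstrip('0')}, ns"
    return f"t({df}) = {t_obtained:.2f}, {sig}"

print(evidence_string(9, 0.39, 2.262))  # t(9) = 0.39, p > .05, ns
```

The comparison uses the absolute value of the obtained t, matching the two-tailed decision rule described in Step 5.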

    Reading Review 9.3

    1. What does \(d\) stand for when computed as part of a dependent samples t-test?
2. What is the difference between the steps for calculating \((\Sigma {d})^2\) and \(\Sigma d^2\)?
    3. Which descriptive statistics are calculated for the APA-formatted summary which are not used in the dependent samples t-test formula?
    4. What set of symbols is used to indicate an inferential result was not statistically significant in an APA-formatted summary?
    5. Under what conditions is reporting Cohen’s \(d\) unnecessary?

    This page titled 9.3: Distinguishing Parts of the Dependent Samples t-Test Formula is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by .
