12.3: Practice with RM ANOVA Summary Table
Let's use a real scenario to practice with the Repeated Measures ANOVA Summary Table.
Scenario
The data are taken from a recent study conducted by Behmer and Crump (a coauthor of this chapter), at Brooklyn College (Behmer & Crump, 2017).
Behmer and Crump (2017) were interested in how people perform sequences of actions. One question is whether people learn individual parts of actions, or the whole larger pattern of a sequence of actions. We looked at these issues in a computer keyboard typing task. One of our questions was whether we would replicate some well-known findings about how people type words and letters.
From prior work we knew that people type words much faster than random letters; but if you make the random letters a bit more English-like, then people who can read English type those letter strings somewhat faster than fully random strings, though still slower than actual words.
In the study, 38 participants sat in front of a computer and typed five-letter strings one at a time. Sometimes the five letters made a word (Normal condition: TRUCK), sometimes they were completely random (Random condition: JWYFG), and sometimes they followed patterns like you find in English but were not actual words (Bigram condition: QUEND). What makes this repeated measures is that each participant received each condition: some trials were a word, some trials were a random string of letters, and some trials were a string of letters that looked like an English word but was not a word. The order of the trials was randomly assigned. We measured every single keystroke that participants made, and we'll look at the reaction times (how long it took participants to start typing the first letter in the string, in milliseconds).
Example \(\PageIndex{1}\)
Answer the following questions to understand the variables and groups that we are working with.
 Who is the sample?
 Who might the population be?
 What is the IV (groups being compared)?
 What is the DV (quantitative variable being measured)?
Solution
 The sample is 38 participants.
 Maybe anyone who types on a keyboard? English-speaking typists? There's not much info in the scenario to determine a specific population.
 The IV is something like "word status" with the three levels being Normal (English word), Random (letter string), and Bigram (Englishlike letter string).
 Reaction time (how long it took for participants to start typing the first letter) in milliseconds.
Step 1: State the Hypotheses
Based on the means from Table \(\PageIndex{1}\), we can see that the means look different. What could be a directional research hypothesis?

Table \(\PageIndex{1}\): Descriptive Statistics by Condition

Condition | N | Mean | SD
Normal (English word) | 38 | 779.00 | 20.40
Bigram (English-like nonword) | 38 | 869.00 | 24.60
Random (nonword) | 38 | 1037.00 | 29.30
Exercise \(\PageIndex{1}\)
Determine the research hypothesis in words and symbols. You can fill in each of the following blanks with the symbol for greater than (>), less than (<), or equal to (=). Just remember, at least one pair of means must be predicted to be different from each other.
Symbols:
 \( \overline{X}_{N} \) _____ \( \overline{X}_{B} \)
 \( \overline{X}_{N} \) _____ \(\overline{X}_{R} \)
 \( \overline{X}_{B} \) _____ \(\overline{X}_{R} \)
 Answer

Based on the means, I might predict that the Normal condition will have the fastest reactions, being significantly faster than the Bigram condition and the Random condition. I might also hypothesize that the Bigram condition will have a significantly shorter reaction time than the Random condition.
Symbols:
 \( \overline{X}_{N} \) < \( \overline{X}_{B} \)
 \( \overline{X}_{N} \) < \(\overline{X}_{R} \)
 \( \overline{X}_{B} \) < \(\overline{X}_{R} \)
Notice that we are predicting that the Normal words will have a smaller reaction time, meaning that they will respond faster and the time to respond will be shorter.
What about the null hypothesis? What might that look like?
Exercise \(\PageIndex{2}\)
State the null hypothesis in words and symbols.
 Answer

The reaction time will be similar for the Normal condition, the Bigram condition, and the Random condition.
\( \overline{X}_{N} = \overline{X}_{B} = \overline{X}_{R}\)
Step 2: Find the Critical Values
Using the sample size information included in Table \(\PageIndex{1}\), you can now find the critical value from the Critical Values of F table in the chapter that first discussed ANOVA, or from the list of critical value tables at the end of this textbook on the Common Critical Value Tables page.
As shown at the bottom of the critical values page, the two Degrees of Freedom that you'll use are still the numerator (Between Groups) and the denominator (Within Groups, or Error), but the denominator's df is calculated slightly differently.
Example \(\PageIndex{2}\)
What is the critical value for this scenario?
Solution
The df for the numerator is still k - 1; 3 - 1 = 2.
The df for the denominator is (k - 1) * (P - 1), which means that we need to figure out P - 1 first. Since P stands for the number of participants, that would be 38 - 1 = 37.
\[(k-1) \times (P-1) = (3-1) \times (38-1) = 2 \times 37 = 74 \nonumber \]
The critical value of F for 2 and 74 in the 0.05 row is 3.15.
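As a check on the hand calculation, the degrees of freedom and critical value can be computed in Python. This is a minimal sketch that assumes SciPy is available; note that `scipy.stats.f.ppf` returns the exact quantile for df = (2, 74), which sits slightly below the 3.15 read from a printed table (tables only list selected denominator df).

```python
from scipy import stats

k = 3   # number of conditions (Normal, Bigram, Random)
P = 38  # number of participants

df_between = k - 1            # numerator df
df_error = (k - 1) * (P - 1)  # denominator df for repeated measures

# Critical F at alpha = .05 is the 95th percentile of F(2, 74)
f_crit = stats.f.ppf(1 - 0.05, df_between, df_error)

print(df_between, df_error)  # 2 74
print(round(f_crit, 2))      # close to the tabled 3.15
```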
Step 3: Compute the Test Statistic
Using the Sums of Squares provided in the following ANOVA Summary Table (Table \(\PageIndex{2}\)) and the information from the scenario about the sample size and number of conditions, fill in the ANOVA Summary Table to determine the calculated F-value.
Table \(\PageIndex{2}\): ANOVA Summary Table

Source | \(SS\) | \(df\) | \(MS\) | \(F\)
Between | 1,424,914.00 | | |
Participants | 2,452,611.90 | | |
Error | | | |
Total | 4,101,175.30 | | |
Example \(\PageIndex{3}\)
Complete the ANOVA Summary Table in Table \(\PageIndex{2}\) to determine the calculated F-score.
Solution
Table \(\PageIndex{3}\): ANOVA Summary Table with Formulas

Source | \(SS\) | \(df\) | \(MS\) | \(F\)
Between | 1,424,914.00 | \(k - 1 = 3 - 1 = 2\) | \(\frac{SS_{B}}{df_{B}} = \frac{1424914}{2} = 712457\) | \(\frac{MS_{B}}{MS_{E}} = \frac{712457}{3022.29} = 235.73\)
Participants | 2,452,611.90 | \(P - 1 = 38 - 1 = 37\) | leave blank | leave blank
Error | \(SS_{E} = SS_{Total} - SS_{B} - SS_{Ps} = 4101175.30 - 1424914.00 - 2452611.90 = 223649.40\) | \((k - 1) \times (P - 1) = 2 \times 37 = 74\) | \(\frac{SS_{E}}{df_{E}} = \frac{223649.40}{74} = 3022.29\) | leave blank
Total | 4,101,175.30 | \(N - 1 = 114 - 1 = 113\) \((N = k \times P)\) | leave blank | leave blank
So the ANOVA Summary Table should end up looking like Table \(\PageIndex{4}\):
Source | \(SS\) | \(df\) | \(MS\) | \(F\)
Between | 1,424,914.00 | 2 | 712,457.00 | 235.73
Participants | 2,452,611.90 | 37 | |
Error | 223,649.40 | 74 | 3,022.29 |
Total | 4,101,175.30 | 113 | |
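The table arithmetic can be double-checked with a few lines of Python; this sketch starts from only the Sums of Squares given in the scenario:

```python
# Given Sums of Squares from the scenario
SS_between = 1424914.00
SS_participants = 2452611.90
SS_total = 4101175.30

k, P = 3, 38  # conditions, participants

# Error SS is what remains after removing condition and participant variance
SS_error = SS_total - SS_between - SS_participants

df_between = k - 1
df_participants = P - 1
df_error = (k - 1) * (P - 1)
df_total = k * P - 1

MS_between = SS_between / df_between
MS_error = SS_error / df_error
F = MS_between / MS_error

print(round(SS_error, 2))    # 223649.4
print(round(MS_between, 2))  # 712457.0
print(round(MS_error, 2))    # 3022.29
print(round(F, 2))           # 235.73
```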
Step 4: Make the Decision
We have the critical value (3.15) and the calculated value (235.73), so we can now make the decision just like we’ve been doing.
Table \(\PageIndex{5}\): Making the Decision

Reject the null hypothesis when:
- The p-value is small (p < .05). A small p-value means a small probability that all of the means are similar, suggesting that at least one of the means is different from at least one other mean.
- We conclude that at least one mean is different from at least one other mean; at least one group is not from the same population as the other groups.
- The calculated F is further from zero (more extreme) than the critical F. In other words, the calculated F is bigger than the critical F. (Draw the F-distribution and mark the calculated F and the critical F to help visualize this.)
- Reject the null hypothesis (which says that all of the means are similar).
- Support the research hypothesis? Maybe; look at the actual means.
- Statistical sentence: F(df) = F-calc, p < .05 (fill in the df and the calculated F).

Retain the null hypothesis when:
- The p-value is large (p > .05). A large p-value means a large probability that all of the means are similar.
- We conclude that the means for all of the groups are similar; all of the groups are from the same population.
- The calculated F is closer to zero (less extreme) than the critical F. In other words, the calculated F is smaller than the critical F. (Draw the F-distribution and mark the calculated F and the critical F to help visualize this.)
- Retain (or fail to reject) the null hypothesis (which says that all of the means are similar).
- Do not support the research hypothesis (because all of the means are similar).
- Statistical sentence: F(df) = F-calc, p > .05 (fill in the df and the calculated F).
Here’s another way to show the info in Table \(\PageIndex{5}\):
Critical \(<\) Calculated \(=\) Reject null \(=\) At least one mean is different from at least one other mean. \(=\) p<.05
Critical \(>\) Calculated \(=\) Retain null \(=\) All of the means are similar. \(=\) p>.05
Exercise \(\PageIndex{3}\)
Should we retain or reject the null hypothesis? Does this mean that we're saying that all of the means are similar, or that at least one mean is different?
 Answer

Because the calculated F-score of 235.73 is so much bigger than the critical value of 3.15, we reject the null hypothesis and say that at least one mean is different from at least one other mean.
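The same decision can also be made from the exact p-value rather than the critical value. A sketch assuming SciPy is available, where `stats.f.sf` gives the upper-tail probability of the calculated F:

```python
from scipy import stats

F_calc, df1, df2 = 235.73, 2, 74

# p-value: probability of an F this large or larger if the null were true
p = stats.f.sf(F_calc, df1, df2)

print(p < 0.05)  # True: reject the null hypothesis
```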
Before we can write up the results for this analysis, we need to determine whether the mean differences are in the hypothesized direction. Behmer and Crump (2017) provided the following t-test results that tested whether pairs of means are different:
 Normal versus Bigram: t(37) = 10.61, p < 0.001
 Normal versus Random: t(37) = 15.78, p < 0.001
 Bigram versus Random: t(37) = 13.49, p < 0.001
Exercise \(\PageIndex{4}\)
What is one problem with using t-tests to check for multiple sets of mean differences in post-hoc analyses?
 Answer

Alpha inflation; each t-test has a 5% chance of committing a Type I Error (rejecting the null hypothesis when there really is no difference between the group means in the population).
Exercise \(\PageIndex{5}\)
What did we learn about in the Between Groups ANOVA chapter to use instead when checking for multiple sets of mean differences in post-hoc analyses?
 Answer

A variety of post-hoc analyses that reduce the chance of a Type I Error in each pairwise comparison so that the total chance of a Type I Error stays at 5%.
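To make the alpha-inflation problem concrete, here is a short sketch; the Bonferroni correction used below is one common example of such a post-hoc adjustment (the scenario itself does not prescribe this particular one, and the familywise rate shown assumes the tests are independent):

```python
alpha = 0.05
n_tests = 3  # Normal vs. Bigram, Normal vs. Random, Bigram vs. Random

# Familywise Type I Error rate if each t-test uses alpha = .05 on its own
# (assuming independent tests)
familywise = 1 - (1 - alpha) ** n_tests
print(round(familywise, 3))  # 0.143, well above .05

# Bonferroni correction: divide alpha by the number of comparisons
alpha_bonferroni = alpha / n_tests
print(round(alpha_bonferroni, 4))  # 0.0167

# All three reported p-values were below .001, so all survive the correction
print(0.001 < alpha_bonferroni)  # True
```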
Writeup
Okay, we now have all we need to complete a conclusion that reports the results while including all of the required components:
 The statistical test is preceded by the descriptive statistics (means).
 The description tells you what the research hypothesis being tested is.
 A "statistical sentence" showing the results is included.
 The results are interpreted in relation to the research hypothesis.
Exercise \(\PageIndex{6}\)
What could the conclusion look like for this scenario?
 Answer

The research hypothesis was that the Normal condition would have the fastest reaction time (M = 779 ms) compared to the Bigram (M = 869 ms) and the Random (M = 1,037 ms) conditions, and that the Bigram condition would also have a faster reaction time than the Random condition. This research hypothesis was fully supported (F(2, 74) = 235.73, p < 0.05), with all pairwise comparisons significant (ps < 0.001).
That's it! Let's try one more example; this time, we'll calculate the Sums of Squares and the pairwise comparisons ourselves.