10.6: Using SPSS
As reviewed in Chapter 2, software such as SPSS can be used to expedite analyses once data have been properly entered into the program. This section focuses on how to enter and analyze data for a one-way ANOVA using SPSS. SPSS version 29 was used for this book; if you are using a different version, you may see some variation from what is shown here.
Entering Data
The one-way ANOVA is bivariate: one variable organizes the data into comparison groups, and the other variable is compared across those groups. The variable being compared must be quantitative and should have been measured using numbers on an interval or ratio scale. If both of these things are true of your data, you are ready to open SPSS and begin entering them.
Open the SPSS software, click “New Dataset,” then click “Open” (or “OK” depending on which is shown in the version of the software you are using). This will create a new blank spreadsheet into which you can enter data. Click on the Variable View tab on the bottom of the spreadsheet. This tab of the spreadsheet has several columns to organize information about the variables. The first column is titled “Name.” Start here and follow these steps:
- Click the first cell of that column and enter the name of your grouping variable using no spaces, special characters, or symbols. You can name this variable “Group” for simplicity. Hit enter and SPSS will automatically fill in the other cells of that row with some default assumptions about the data.
- Click the first cell of the column titled “Type” and then click the three dots that appear on the right side of the cell. Specify that the data for that variable appear as numbers by selecting “Numeric.” For numeric data, SPSS will automatically allow you to enter values that are up to 8 digits in length with decimals shown to the hundredths place, as noted in the “Width” and “Decimals” columns, respectively. You can edit these as needed to fit your data, though the default settings will be appropriate for most variables in the behavioral sciences.
- Click the first cell of the column titled “Label.” This is where you can specify what you want the variable to be called in output, including in tables and graphs. You can use spaces or phrases here, as desired.
- Click on the three dots in the first cell of the column titled “Values.” This is where you can add details about each group. Click the plus sign and specify that the value 1 (for Group 1) refers to the Aggressive Example subsample. Click the plus sign again and specify that the value 2 (for Group 2) refers to the Non-Aggressive Example subsample. Click the plus sign once more and specify that the value 3 (for Group 3) refers to the No Example subsample. Then click “OK.”
- Click on the first cell of the column titled “Measure.” A pull-down menu with three options will allow you to specify the scale of measurement for the variable. Select the “Nominal” option because grouping variables are nominal. Now SPSS is set up for the grouping variable’s data.
- Next we need to set up space for the quantitative variable. In the second cell (row) of the “Name” column, enter the name of your quantitative variable using no spaces, special characters, or symbols. You can name this variable “Aggression” for simplicity. Hit enter and SPSS will automatically fill in the other cells of that row with some default assumptions about the data.
- Click the cell in row 2 of the column titled “Label.” Here we can clarify that Aggression refers to “Acts of Aggression” by stating as such in the label column for this variable.
- Click the cell in row 2 of the column titled “Type” and then click the three dots that appear on the right side of the cell. Specify that the data for that variable appear as numbers by selecting “Numeric.” Again, you can edit the width and decimals as needed to fit your data.
- Click on the cell in the second row of the column titled “Measure.” A pull-down menu with three options will allow you to specify the scale of measurement for the variable. SPSS does not differentiate between interval and ratio scales and, instead, refers to both of these as “Scale.” Select the “Scale” option because, for a one-way ANOVA, your data for this variable should have been measured on the interval or ratio scale.
Here is what the Variable View tab would look like when created for Data Set 10.1:
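The value labels defined in the “Values” step amount to a simple mapping from numeric codes to group names. A minimal Python sketch of that mapping (purely illustrative; the function name is our own, not part of SPSS):

```python
# Value labels for the grouping variable, mirroring the SPSS "Values" dialog
# for Data Set 10.1: each numeric code stands for one subsample.
value_labels = {
    1: "Aggressive Example",
    2: "Non-Aggressive Example",
    3: "No Example",
}

def label_for(code):
    """Return the group label SPSS would print in output for a numeric code."""
    return value_labels[code]

print(label_for(2))  # → Non-Aggressive Example
```

This is why SPSS output shows the readable group names even though the Data View itself only holds the codes 1, 2, and 3.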
Now you are ready to enter your data. Click on the Data View tab toward the bottom of the spreadsheet. This tab of the spreadsheet has several columns into which you can enter the data for each variable. Each column will show the names given to the variables that were entered previously using the Variable View tab. Click the first cell corresponding to the first row of the first column. Start here and follow these steps:
- Enter the data for the grouping variable moving down the rows of the first column. Put a 1 in this column for everyone who is a member of Group 1, a 2 for everyone who is a member of Group 2, and a 3 for everyone who is a member of Group 3. Continue in this fashion if you have more than three groups until all data are entered for the grouping variable.
- Enter the data for the quantitative variable moving down the rows of the second column. If your data are already on your computer in a spreadsheet format such as Excel, you can copy and paste the data in for this variable. Take special care to ensure that the Aggression data for Group 1 appear in the rows coded for Group 1, the data for Group 2 in the rows coded for Group 2, and the data for Group 3 in the rows coded for Group 3.
- Then hit save to ensure your data set will be available for you in the future.
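The layout just described, with group codes in one column and scores in a parallel column, is often called “long” format: one row per participant. A sketch of that structure in Python (the aggression scores below are placeholders for illustration only, not the actual values from Data Set 10.1):

```python
# Long-format layout matching the SPSS Data View: one row per child.
# Group codes: 1 = Aggressive Example, 2 = Non-Aggressive Example, 3 = No Example.
# Scores are made-up placeholders, NOT the real Data Set 10.1 values.
group      = [1, 1, 1, 2, 2, 2, 3, 3, 3]
aggression = [6, 7, 5, 2, 1, 3, 1, 0, 2]

# Pairing the columns row-by-row shows why alignment matters: each score
# must sit in the same row as its owner's group code.
rows = list(zip(group, aggression))
assert len(group) == len(aggression)  # every child has both a code and a score
print(rows[:3])  # → [(1, 6), (1, 7), (1, 5)]
```

If a score lands in the wrong row, it is silently assigned to the wrong group, which is exactly the copy-paste hazard the step above warns about.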
Once all the variables have been specified and the data have been entered, you can begin analyzing the data using SPSS.
Conducting a One-Way ANOVA in SPSS
The steps to running a one-way ANOVA in SPSS are:
- Click Analyze -> Compare Means and Proportions -> One-Way ANOVA in the pull-down menus.
- Drag the name of the quantitative variable from the list on the left into the Dependent List box on the right of the command window. You can also do this by clicking the variable name to highlight it and then clicking the arrow to move it to the desired location. Next, put the grouping variable into the Factor box on the right side of the command window. If the version of SPSS you are using has a check box to estimate effect sizes, check that as well.
- Click the Options tab. Select both “Descriptive” and “Homogeneity of variance test.” Then click “Continue.”
- Click the Post Hoc tab. Once in that section, select “Tukey.” Then click “Continue.”
- Click “OK” to run the analyses.
- The output (which means the page of calculated results) will appear in a new window of SPSS known as an output viewer. The results will appear in six tables as shown below.
Descriptives: Acts of Aggression

| | N | Mean | Std. Deviation | Std. Error | 95% CI Lower Bound | 95% CI Upper Bound | Minimum | Maximum |
|---|---|---|---|---|---|---|---|---|
| Aggressive Example | 7 | 6.0000 | 1.29099 | .48795 | 4.8060 | 7.1940 | 4.00 | 8.00 |
| Non-Aggressive Example | 7 | 2.0000 | 1.29099 | .48795 | .8060 | 3.1940 | .00 | 4.00 |
| No Example | 7 | 1.0000 | 1.00000 | .37796 | .0752 | 1.9248 | .00 | 3.00 |
| Total | 21 | 3.0000 | 2.48998 | .54336 | 1.8666 | 4.1334 | .00 | 8.00 |
Test of Homogeneity of Variances: Acts of Aggression

| | Levene Statistic | df1 | df2 | Sig. |
|---|---|---|---|---|
| Based on Mean | .255 | 2 | 18 | .777 |
| Based on Median | .255 | 2 | 18 | .777 |
| Based on Median and with adjusted df | .255 | 2 | 17.743 | .777 |
| Based on trimmed mean | .217 | 2 | 18 | .807 |
ANOVA: Acts of Aggression

| | Sum of Squares | df | Mean Square | F | Sig. |
|---|---|---|---|---|---|
| Between Groups | 98.000 | 2 | 49.000 | 33.923 | <.001 |
| Within Groups | 26.000 | 18 | 1.444 | | |
| Total | 124.000 | 20 | | | |
ANOVA Effect Sizes: Acts of Aggression

| | Point Estimate | 95% CI Lower | 95% CI Upper |
|---|---|---|---|
| Eta-squared | .790 | .522 | .862 |
| Epsilon-squared | .767 | .469 | .847 |
| Omega-squared, Fixed-effect | .758 | .457 | .840 |
| Omega-squared, Random-effect | .611 | .296 | .725 |

a. Eta-squared and Epsilon-squared are estimated based on the fixed-effect model.
Multiple Comparisons (Tukey HSD). Dependent Variable: Acts of Aggression

| (I) Group | (J) Group | Mean Difference (I-J) | Std. Error | Sig. | 95% CI Lower Bound | 95% CI Upper Bound |
|---|---|---|---|---|---|---|
| Aggressive Example | Non-Aggressive Example | 4.00000* | .64242 | <.001 | 2.3604 | 5.6396 |
| Aggressive Example | No Example | 5.00000* | .64242 | <.001 | 3.3604 | 6.6396 |
| Non-Aggressive Example | Aggressive Example | -4.00000* | .64242 | <.001 | -5.6396 | -2.3604 |
| Non-Aggressive Example | No Example | 1.00000 | .64242 | .289 | -.6396 | 2.6396 |
| No Example | Aggressive Example | -5.00000* | .64242 | <.001 | -6.6396 | -3.3604 |
| No Example | Non-Aggressive Example | -1.00000 | .64242 | .289 | -2.6396 | .6396 |

*. The mean difference is significant at the 0.05 level.
Acts of Aggression: Homogeneous Subsets (Tukey HSD)

| | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05) |
|---|---|---|---|
| No Example | 7 | 1.0000 | |
| Non-Aggressive Example | 7 | 2.0000 | |
| Aggressive Example | 7 | | 6.0000 |
| Sig. | | .289 | 1.000 |

Means for groups in homogeneous subsets are displayed. Uses Harmonic Mean Sample Size = 7.000.
Reading SPSS Output for One-Way ANOVA
The first table shows the descriptive statistics for the test. These include several statistics such as the sample sizes, means, standard deviations, and the standard errors. These match the results from the hand-calculations performed earlier in this chapter for Data Set 10.1. The means and standard deviations are needed for summarizing the results for each group in an APA-formatted summary paragraph.
The second table shows one of the assumption checks: homogeneity of variances. Check the row of Levene’s test that is based on means. When variances are homogeneous enough to meet this assumption, Levene’s test will have a non-significant \(p\)-value (meaning that the group variances are not significantly uneven). In the output we see that Levene’s test has a “Sig.” value (which is what SPSS calls the \(p\)-value) of .777. Because this \(p\)-value is greater than .05, the group variances are not significantly uneven. This is desirable: we have met the assumption of homogeneity of variances and can proceed to reading our results for the ANOVA.
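For reference, the “Based on Mean” version of Levene’s statistic that SPSS reports is simply an ANOVA performed on the absolute deviations of each score from its group mean:

\[ W = \frac{N-k}{k-1} \cdot \frac{\sum_{i=1}^{k} n_i\,(\bar{Z}_{i\cdot}-\bar{Z}_{\cdot\cdot})^2}{\sum_{i=1}^{k}\sum_{j=1}^{n_i} (Z_{ij}-\bar{Z}_{i\cdot})^2}, \qquad Z_{ij} = |X_{ij}-\bar{X}_{i\cdot}| \]

where \(N\) is the total sample size and \(k\) is the number of groups. \(W\) is compared against the \(F\) distribution with \(k-1\) and \(N-k\) degrees of freedom, which is why the table shows df1 = 2 and df2 = 18 for Data Set 10.1.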
The third table shows the main test results, which are needed for the evidence string: the \(F\)-value, the degrees of freedom between (\(df_b\)), the degrees of freedom within (\(df_w\)), and the \(p\)-value for the omnibus test. This table is sometimes referred to as a “Source Table” because it summarizes all the main parts of the ANOVA formula, the result of the formula, and its significance. We can see that the sums of squares, degrees of freedom, mean squares, and \(F\)-value all match the hand-calculations we performed earlier in this chapter using Data Set 10.1. This table also includes the information needed to create the second sentence of the APA-formatted results, including the values needed for its evidence string.
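The source table values can be reproduced directly from the group summary statistics in the descriptives table. A pure-Python sketch (no SPSS needed; uses only the n, mean, and standard deviation of each group from Data Set 10.1):

```python
# Reproduce the ANOVA source table for Data Set 10.1 from group summaries.
# Each tuple: (n, mean, standard deviation) as shown in the Descriptives table.
groups = [(7, 6.0, 1.29099), (7, 2.0, 1.29099), (7, 1.0, 1.0)]

N = sum(n for n, _, _ in groups)                    # total sample size: 21
k = len(groups)                                     # number of groups: 3
grand_mean = sum(n * m for n, m, _ in groups) / N   # 3.0

# Between-groups SS: weighted squared distances of group means from the grand mean.
ss_between = sum(n * (m - grand_mean) ** 2 for n, m, _ in groups)
# Within-groups SS: each group's variance times its degrees of freedom.
ss_within = sum((n - 1) * sd ** 2 for n, _, sd in groups)

df_between, df_within = k - 1, N - k                # 2 and 18
ms_between = ss_between / df_between                # 49.0
ms_within = ss_within / df_within                   # ~1.444
F = ms_between / ms_within                          # ~33.923

print(round(ss_between, 3), round(ss_within, 3), round(F, 3))  # → 98.0 26.0 33.923
```

These values match the Between Groups and Within Groups rows of the SPSS source table above, line for line.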
The fourth table provides the effect sizes. The one of focus for this chapter and Data Set 10.1 appears in the first row of the table, labeled “Eta-squared.” The value of the effect size appears in decimal form under the column titled “Point Estimate.” Here we see the effect size is .79 when rounded to the hundredths place; this matches what was found using hand calculations earlier in this chapter.
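Eta-squared can also be recovered from the source table by hand: it is the proportion of the total sums of squares that is between groups. A quick check using the Data Set 10.1 values:

```python
# Eta-squared = SS_between / SS_total, using the source-table values above.
ss_between, ss_within = 98.0, 26.0
ss_total = ss_between + ss_within          # 124.0
eta_squared = ss_between / ss_total
print(round(eta_squared, 3))               # → 0.79
```

This matches the .790 point estimate SPSS reports in the effect sizes table.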
The fifth and sixth tables provide the results for the post-hoc tests in two formats. The fifth table provides more details and the sixth reiterates some of those details in a different format to accentuate group similarity and dissimilarity. Let’s first look at the fifth table which shows the multiple comparisons. Each pairwise comparison is shown in the fifth table twice in the following order:
- Group 1 vs. Group 2
- Group 1 vs. Group 3
- Group 2 vs. Group 1
- Group 2 vs. Group 3
- Group 3 vs. Group 1
- Group 3 vs. Group 2
You only need to review three of these because the other three are redundant (e.g., comparing Group 1 to Group 2 is the same as comparing Group 2 to Group 1). SPSS calls the first group in each pair “I” and the second “J.” The two most important columns to review are the “Mean Difference” column and the “Sig.” column. The mean difference column subtracts the mean of the second group in the pair (the “J” group) from the mean of the first group in the pair (the “I” group). The larger the value in the mean difference column, the greater the difference in the means before accounting for error. The next column to check, and the most useful, is the “Sig.” column. This is where we see the \(p\)-value for each pairwise comparison. Keep in mind that \(p\) refers to the probability of a Type I Error and that a result is significant when this value is less than .05. Thus, for each pair, a \(p\)-value less than .05 indicates that the means of the two groups are significantly different. With this in mind, we can interpret the results in the table.
Let’s look at the comparison of Group 1 (Aggressive Example Group) to Group 2 (Non-Aggressive Example Group) as an example. The difference in the means of those two groups was 4.00 (when shown to the hundredths place), indicating that the mean of Group 1 was 4.00 units higher than the mean of Group 2. Now we can look at the “Sig.” column to find the \(p\)-value. It shows that the \(p\)-value is so small that it is less than .001, meaning the chance of a Type I Error is very small. Thus, after accounting for error, the difference in the means of these two groups is statistically significant because the \(p\)-value is less than .05.
Following the same logic, we can check the other two pairwise comparisons. We can see that the difference in the means for Group 1 (Aggressive Example Group) and Group 3 (No Example Group) is 5.00 (when shown to the hundredths place). This means the mean of Group 1 was 5.00 units higher than the mean of Group 3. After accounting for error, this is found to be a statistically significant difference because the \(p\)-value is less than .05. Finally, let’s compare Group 2 (Non-Aggressive Example Group) to Group 3 (No Example Group). The difference in the means of these two groups is 1.00 (when shown to the hundredths place) which is noticeably smaller than in the other two pairwise comparisons. The \(p\)-value for this pair is .289, which is higher than .05. Taken together, the difference in the means is small and the chance of a Type I Error if we conclude these two groups are different is unacceptably high. Thus, the difference between Group 2 and Group 3 is not statistically significant.
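The standard error SPSS shows for each Tukey comparison (.64242) also follows from the source table: it is the square root of the within-groups mean square times the sum of the reciprocal group sizes. A sketch using the Data Set 10.1 values (variable names are our own):

```python
import math

# Standard error of a pairwise mean difference with equal group sizes:
# sqrt(MS_within * (1/n_i + 1/n_j)), here with n = 7 per group.
ms_within = 26.0 / 18          # within-groups mean square from the source table
n = 7
se = math.sqrt(ms_within * (1 / n + 1 / n))

# Pairwise mean differences from the group means (6.0, 2.0, 1.0):
means = {"Aggressive": 6.0, "Non-Aggressive": 2.0, "No Example": 1.0}
diff_1_vs_2 = means["Aggressive"] - means["Non-Aggressive"]   # 4.0
diff_1_vs_3 = means["Aggressive"] - means["No Example"]       # 5.0
diff_2_vs_3 = means["Non-Aggressive"] - means["No Example"]   # 1.0

print(round(se, 5))  # → 0.64242
```

This confirms why every row of the multiple comparisons table shows the same standard error: the groups are all the same size, so the error term is identical for each pair.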
The sixth table displays the main results from the multiple comparisons table in a new format. Its title is the name of the quantitative variable being compared between groups; for Data Set 10.1 this is “Acts of Aggression.” It then summarizes each group in its own row, giving just the sample size and mean. The means are placed into columns that distinguish which means are not significantly different (by putting them in the same column) and which are different (by putting them in separate columns). Here we see that the means for the No Example Group (Group 3) and the Non-Aggressive Example Group (Group 2) are in the same column, indicating that they are not significantly different from one another. However, the mean for the Aggressive Example Group (Group 1) appears in a separate column, indicating that it is significantly different from the means of the other two groups. Because the fifth and sixth tables show the same key results in two different ways, you can use whichever you prefer when checking and reporting your post-hoc results.
These results are all consistent with what we found through the hand-calculations earlier in this chapter, as expected. The benefit of doing the calculations by hand is the clarity we gain about what goes into each formula, why, and how it connects to the result. The benefits of using SPSS, of course, are that it is fast and easy to use. However, we must always keep in mind that SPSS cannot think for us; it computes only what we tell it to. It is up to us to know when to use the formula, to check that the assumptions are met, to ensure the data are entered properly, and, finally, to interpret the results appropriately. With this in mind, let’s consider these results as we would if they were part of a real study of aggression.
Real-World Interpretations of One-Way ANOVA
Through this book and our classes, we are learning to use statistics as a tool to measure, test, and ultimately better understand truths about the world. In some behavioral and social sciences, such as psychology, statistics is used to test hypotheses about the experiences and behaviors of humans. In keeping, this chapter focused on an example of whether what children were exposed to might impact their aggressive behaviors. If this were done as a true experiment, where children were randomly assigned to one of three different conditions (i.e., exposure groups) and their behavior was measured and compared across those conditions, a one-way ANOVA could be used to test whether exposure impacted behavior.
In the example in this chapter (which used fake data made up for demonstration and practice purposes), it was hypothesized that children who were shown an adult acting aggressively toward a toy (Group 1), children who were shown an adult playing non-aggressively with the toy (Group 2), and children who were not shown any interactions with it (Group 3) would differ in their mean number of aggressive acts toward the toy. The results can be interpreted to say that the children who were shown the aggressive example engaged in more aggressive behaviors, on average, than those who were shown a non-aggressive example and those who were shown no example. However, the amounts of aggressive behavior engaged in by children shown the non-aggressive example were no different, on average, from those shown no example. Were these results obtained from a real study, they could be compelling evidence for a theory stating that people learn from, and replicate, behaviors they observe in others.
Though our example is fake, it is based on real research. Such a theory does exist and is known as the theory of Observational Learning. This was famously tested by Albert Bandura, Dorothea Ross, and Sheila A. Ross in 1961. Their original study (known by many as the Bobo Doll Experiment) is often presented as having only three groups, much like the example in this chapter. However, the original study also grouped based on the gender of the child and the gender of the adult observed to assess any gender-related differences. In addition, the data in the original study did not have homogeneous variances. For these reasons, the researchers used a Friedman two-way analysis of variance rather than the one-way ANOVA reviewed in this chapter.
Some of the things we should learn from this chapter and this example are:
- Inferential tests such as ANOVA are tools that, when understood and used appropriately, can help us learn new things about our world and
- Real research is often more complicated and messy than we would like it to be (i.e., assumptions may be violated, necessitating adjustments to the analyses or formulas used).
Thus, each new statistical skill we acquire opens up another way of being able to test and understand our world. The examples we learn in class give us a solid foundation onto which we can continue to acquire more tools to deal with the various and complicated aspects of our world.
Reading Review 10.4
- What scale of measurement should be indicated in SPSS for the grouping (factor) variable?
- What information is used in the output to check that the assumption of homogeneity of variances was met?
- Under which table and column of the SPSS output can the \(F\)-value be found?
- Under which table and column of the SPSS output can the omnibus \(p\)-value be found?
- Under which table and column of the SPSS output can the \(\eta^2\)-value be found?
- Under which table and column of the SPSS output can the post-hoc \(p\)-values be found?