
11.1.2: Ratio of Variability


    Now that we know a little bit more about the different kinds of variability, or variances, let's learn what we can do with them.

    ANOVA is a Ratio of Variances

The between-subjects ANOVA is sometimes called a one-factor ANOVA, an independent factor ANOVA, or a one-way ANOVA (which is a bit of a misnomer). The critical ingredient for a one-factor (one IV), between-subjects (multiple groups) ANOVA is that you have one independent variable with at least two levels (at least two different groups in that one IV). You might be thinking, "When you have one IV with two levels, you can run a \(t\)-test." And you would be correct! You could also run an ANOVA. Interestingly, they give you almost exactly the same results. You will get identical \(p\)-values from both tests (they are really doing the same thing under the hood). The \(t\)-test gives a \(t\)-value as the important sample statistic. The ANOVA gives you the \(F\)-value (for Fisher, the inventor of the test) as the important sample statistic. It turns out that \(t^2\) equals \(F\) when there are only two groups in the design. They are the same test. As a side note, it turns out they are all related to Pearson's \(r\), too (which we'll discuss much later in this textbook).
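If you want to see the \(t^2 = F\) equivalence for yourself, here is a minimal sketch in Python using SciPy. The two groups of scores are made up purely for illustration; any two-group data will show the same pattern.

```python
# Minimal check that t-squared equals F when there are only two groups.
# The scores below are hypothetical, for illustration only.
from scipy import stats

group_a = [4, 6, 5, 7, 6]
group_b = [8, 7, 9, 6, 8]

t, p_from_t = stats.ttest_ind(group_a, group_b)  # independent-samples t-test
f, p_from_f = stats.f_oneway(group_a, group_b)   # one-way ANOVA on the same data

print(round(t**2, 4) == round(f, 4))             # True: t^2 = F
print(round(p_from_t, 4) == round(p_from_f, 4))  # True: identical p-values
```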

Remember that \(t\) is the mean difference divided by the standard error of the sample. The idea behind \(F\) is the same basic idea that goes into making \(t\). Here is the general idea behind the formula: it is again a ratio (division) of the effect we are measuring (in the numerator) and the variation associated with sampling error (in the denominator).

    \[\text{F} = \frac{\text{measure of effect}}{\text{measure of error}} \nonumber \]

This idea is the same as with the \(t\)-test. The difference with \(F\) is that we use variances (how different, on average, each score is from the sample mean) to describe both the measure of the effect and the measure of error. So, \(F\) is a ratio of two variances.

When the variance associated with the effect is the same size as the variance associated with sampling error, we will get two of the same numbers, and this will result in an \(F\)-value of 1 (a number divided by itself equals 1). When the variance due to the effect is larger than the variance associated with sampling error, \(F\) will be greater than 1. When the variance associated with the effect is smaller than the variance associated with sampling error, \(F\) will be less than 1. Let's rewrite this in plainer English. We are talking about two concepts that we would like to measure from our data: 1) a measure of what we want to explain using our IV levels, and 2) a measure of error, or the stuff about our data we can't explain by our IV levels. So, the \(F\) formula looks like this:

\[\text{F} = \frac{\text{Can Explain by IV}}{\text{Can't Explain by IV}} \nonumber \]

When what we can explain based on which group participants are in is as much as what we can't explain, \(F = 1\). This isn't a great situation for us to be in; it means we have a lot of uncertainty. When we can explain much more than we can't explain, we are doing a good job, and \(F\) will be greater than 1. When we can explain less than we can't, we really can't explain very much, and \(F\) will be less than 1. That's the concept behind making \(F\).

If you saw an \(F\) in the wild and it was 0.6, then you would automatically know the researchers couldn't explain much of their data. If you saw an \(F\) of 5, then you would know the researchers could explain 5 times more than they couldn't; that's pretty good. The point of this is to give you an intuition about the meaning of an \(F\)-value, even before you know how to compute it.

    Computing the \(F\)-value

Fisher’s ANOVA is considered very elegant. It starts us off with a big problem that we always have with data: we have a lot of numbers, and there is a lot of variation in the numbers. What to do? Wouldn’t it be nice to split up the variation into two kinds, or sources? If we could know what parts of the variation were being caused by our experimental manipulation (IV), and what parts were being caused by sampling error (wiggly samples), we would be making really good progress. We would be able to know whether our IV was causing more change in the data than sampling error, or chance alone. If we could measure those two parts of the total variation, we could make a ratio, and then we would have an \(F\)-value. This is what the ANOVA does. It splits the total variation in the data into two parts. The formula is:

    \[\text{Total Variation} = \text{Variation due to IV levels (groups)} + \text{Variation due to sampling error} \nonumber \]

    This is a nice idea, but it is also vague. We haven’t specified our measure of variation. What should we use?

Remember the sums of squares? That’s what we use. Let’s take another look at the formula, using sums of squares as the measure of variation:

    \[SS_\text{Total} = SS_\text{Effect} + SS_\text{Error} \nonumber \]

We'll look at the exact formula for each sum of squares in the next section. Now, let's look at how we get from these sums of squares to the final calculated \(F\)-value.
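As a preview, here is a small Python sketch of that split, using the definitional idea (distance from a mean, squared, then summed). The nine scores are hypothetical, chosen so that the sums of squares match the example ANOVA table below.

```python
import numpy as np

# Hypothetical data: three groups (A, B, C) of three scores each,
# chosen so the sums of squares match the example ANOVA table.
groups = [np.array([9, 18, 27]),    # group A
          np.array([13, 18, 23]),   # group B
          np.array([21, 24, 27])]   # group C

scores = np.concatenate(groups)
grand_mean = scores.mean()                          # 20.0

# Total variation: every score's squared distance from the grand mean.
ss_total = np.sum((scores - grand_mean) ** 2)       # 302.0

# Variation due to IV levels: each group mean's squared distance from
# the grand mean, weighted by the group's size.
ss_effect = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)  # 72.0

# Variation due to sampling error: each score's squared distance from
# its own group's mean.
ss_error = sum(np.sum((g - g.mean()) ** 2) for g in groups)             # 230.0

print(ss_total, ss_effect + ss_error)   # 302.0 and 72.0 + 230.0 = 302.0
```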

    Introduction to the ANOVA Table

We will go into more detail on the ANOVA Summary Table later, but here is your first introduction to the basic components and how they relate to each other. An ANOVA Summary Table is provided in Table \(\PageIndex{1}\) for an example with three groups, A, B, and C, with three scores in each group. To get from raw data to the calculated \(F\)-value, you plug in the sums of squares for each kind of variation (between the groups, within each group, and the total), which we'll learn how to calculate next, along with the degrees of freedom for each type of variance (with new formulas!), to get the Mean Square for each type of variance. Then the ratio of the Mean Square for Between Groups (MSB) divided by the Mean Square for Within Groups (sometimes called "Error"; MSW) gives the final calculated \(F\)-value. All of these little pieces are conveniently organized in an ANOVA Summary Table.

Table \(\PageIndex{1}\)- Example ANOVA Summary Table

Source                   SS    DF    MS      F
Between Groups           72    2     36.00   0.94
Within Groups (Error)    230   6     38.33   N/A
Total                    302   8     N/A     N/A

There isn’t anything special about the ANOVA table; it’s just a way of organizing all the pieces. After conducting an ANOVA, you would provide this summary table in your write-up so that readers can see important information about your samples, error, and the effect of your IV.

    Let's look through each column.

    Sum of Squares

The SS column stands for Sum of Squares. We will get to the equations in the next section, but the basic idea is the same one that we've talked about since standard deviations: in general, a sum of squares adds up each score's difference from the mean, with each difference squared before the summing to get rid of the negative values.

    Degrees of freedom

\(DF\)s can be fairly simple when we are doing a relatively simple ANOVA like this one, but they can become complicated when designs get more complicated. Notice that each source has different degrees of freedom, which means that you will need to calculate the \(DF\) for each source of variance (Between Groups, Within Groups or Error, and Total).

The formula for the degrees of freedom for \(SS_\text{BG}\) is \(df_\text{BG} = \text{k} - 1\), where k is the number of groups in the design. In the example in Table \(\PageIndex{1}\), there were three groups, so the \(DF\) is 3 - 1 = 2.

The formula for the degrees of freedom for \(SS_\text{WG or Error}\) is \(df_\text{WG or Error} = \text{N} - \text{k}\), or the number of scores minus the number of groups. We have 9 scores and 3 groups, so our \(df\) for the error term is 9 - 3 = 6.

The formula for the degrees of freedom for \(SS_\text{T}\) is \(df_\text{Total} = \text{N} - 1\), or the number of scores minus 1; this is the degrees of freedom that you are used to. We have 9 scores, so our Total \(df\) is 9 - 1 = 8.

We are lucky to have another computation check here because \(df_\text{Total} = df_\text{BG} + df_\text{WG or Error}\). You will be tempted not to calculate one of these, and just use addition (or subtraction) to figure out the others, but I caution you not to do this. There have been plenty of times when I think that I have the \(DF\)s correct, but then try \(df_\text{BG} + df_\text{WG or Error}\) and find that it does not equal \(df_\text{Total}\), which means that I messed up somewhere.
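In code, the three \(df\) formulas and the additivity check look like this (a small sketch using the example's \(k = 3\) groups and \(N = 9\) scores):

```python
# Degrees of freedom for the example: k = 3 groups, N = 9 scores total.
k, N = 3, 9

df_between = k - 1   # 3 - 1 = 2
df_error = N - k     # 9 - 3 = 6
df_total = N - 1     # 9 - 1 = 8

# Computation check: the parts must add up to the total.
assert df_total == df_between + df_error
print(df_between, df_error, df_total)  # 2 6 8
```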

    Mean Squared Error

    The next column is MS, for Mean Square (or Mean Squared Error). To get the MS for each type of variance (between groups and within groups), we divide the \(SS\)es by their respective degrees of freedom. Remember we are trying to accomplish this goal:

    \[\text{F} = \frac{\text{measure of effect}}{\text{measure of error}} \nonumber \]

We want to build a ratio that divides a measure of an effect by a measure of error. Perhaps you noticed that we already have a measure of an effect and error! The \(SS_\text{BG}\) and \(SS_\text{WG or Error}\) represent, respectively, the variation due to the effect and the leftover variation that is unexplained. Why don’t we just do this?

    \[\frac{SS_\text{BG}}{SS_\text{WG or Error}} \nonumber \]

Well, of course you could do that, but the kind of number you would get wouldn’t be readily interpretable the way a \(t\)-value or a \(z\)-score is. The solution is to normalize the \(SS\) terms. Don’t worry; normalize is just a fancy word for taking the average, or finding the mean. Remember, the \(SS\) terms are all sums. And each sum represents a different number of underlying properties.

For example, the \(SS_\text{BG}\) represents the sum of variation for the three means in our study. We might ask the question: what is the average amount of variation for each mean? You might think to divide \(SS_\text{BG}\) by 3, because there are three means, but because we are estimating this property, we divide by the degrees of freedom instead (# of groups - 1 = 3 - 1 = 2). Now we have created something new; it’s called the \(MS_\text{BG}\).

    \[MS_\text{BG} = \frac{SS_\text{BG}}{df_\text{BG}} \nonumber \]

    \[MS_\text{BG} = \frac{72}{2} = 36 \nonumber \]

    This might look alien and seem a bit complicated. But, it’s just another mean. It’s the mean of the sums of squares for the effect of the IV levels between the groups.

The \(SS_\text{WG or Error}\) represents the sum of variation for the nine scores in our study. That’s a lot more scores, so the \(SS_\text{WG or Error}\) is often way bigger than the \(SS_\text{BG}\). If we left our \(SS\)es this way and divided them, we would almost always get numbers less than one, because the \(SS_\text{WG or Error}\) is so big. What we need to do is bring it down to the average size. So, we might want to divide our \(SS_\text{WG or Error}\) by 9; after all, there were nine scores. However, because we are estimating this property, we divide by the degrees of freedom instead (the number of scores minus the number of groups: 9 - 3 = 6). Now we have created something new; it’s called the \(MS_\text{WG or Error}\).

\[MS_\text{WG or Error} = \frac{SS_\text{WG or Error}}{df_\text{WG or Error}} \nonumber \]

    \[MS_\text{WG or Error} = \frac{230}{6} = 38.33 \nonumber \]
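Both mean squares are just a sum of squares divided by its own degrees of freedom. Here is the same arithmetic as a quick Python check, using the numbers from Table \(\PageIndex{1}\):

```python
# Mean squares from the example table: each SS divided by its own df.
ss_between, df_between = 72, 2
ss_error, df_error = 230, 6

ms_between = ss_between / df_between   # 72 / 2 = 36.0
ms_error = ss_error / df_error         # 230 / 6 = 38.33...

print(ms_between, round(ms_error, 2))  # 36.0 38.33
```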

    Calculate F

Now that we have done all of the hard work, calculating \(F\) is easy! Notice that the Mean Square for the effect (36) is placed above the Mean Square for the error (38.33) in the ANOVA Summary Table. That seems natural, because we divide 36 by 38.33 to get the \(F\)-value!

    \[\text{F} = \frac{\text{measure of effect}}{\text{measure of error}} \nonumber \]

    \[\text{F} = \frac{MS_\text{BG}}{MS_\text{WG or Error}} \nonumber \]

\[\text{F} = \frac{36}{38.33} = 0.94 \nonumber \]
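To close the loop, here is a sketch that computes \(F\) by hand from the mean squares and cross-checks it against SciPy's one-way ANOVA, using the same hypothetical scores from earlier (the ones chosen to match Table \(\PageIndex{1}\)):

```python
from scipy import stats

# The hypothetical scores from earlier, chosen to match the table.
group_a = [9, 18, 27]
group_b = [13, 18, 23]
group_c = [21, 24, 27]

f_by_hand = 36 / (230 / 6)                     # MS_BG / MS_WG or Error
f_scipy, p = stats.f_oneway(group_a, group_b, group_c)

print(round(f_by_hand, 2), round(f_scipy, 2))  # 0.94 0.94
```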

    Summary

    So, that's how the ANOVA's F is a ratio of variability, and how you use the ANOVA Summary Table to calculate that ratio. We'll learn how to calculate each sum of squares next, then go back to the ANOVA Summary Table.


    This page titled 11.1.2: Ratio of Variability is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Michelle Oja.