
2.5: Measures of Variation (Differentness)


What did you do when you wrote essays in high school about a book you read? You probably compared and contrasted something, right? When you summarize data, you do the same thing. Measures of central tendency do something like comparing: they tell us about what is the same. Measures of variation do something like contrasting: they tell us about what is different.

    First, we note that whenever you see a bunch of numbers that aren’t the same, you already know there are some differences. This means the numbers vary, and there is variation in the size of the numbers.

    The Range

Consider these 10 numbers, which I have already ordered from smallest to largest for you:

    1 3 4 5 5 6 7 8 9 24

    The numbers have variation, because they are not all the same. We can use the range to describe the width of the variation. The range refers to the minimum (smallest value) and maximum (largest value) in the set. So, the range would be 1 and 24.

    The range is a good way to quickly summarize the boundaries of your data in just two numbers. By computing the range we know that none of the data is larger or smaller than the range. And, it can alert you to outliers. For example, if you are expecting your numbers to be between 1 and 7, but you find the range is 1 - 340,500, then you know you have some big numbers that shouldn’t be there, and then you can try to figure out why those numbers occurred (and potentially remove them if something went wrong).
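
In R, for example, the built-in range() function returns the minimum and maximum in one step (a quick sketch using the 10 numbers from above):

numbers <- c(1, 3, 4, 5, 5, 6, 7, 8, 9, 24)
# range() returns the smallest and largest values
range(numbers) # 1 24
# the width of the range is the difference between them
max(numbers) - min(numbers) # 23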

    The Difference Scores

    It would be nice to summarize the amount of differentness in the data. Here’s why. If you thought that raw data (lots of numbers) is too big to look at, then you will be frightened to contemplate how many differences there are to look at. For example, these 10 numbers are easy to look at:

    1 3 4 5 5 6 7 8 9 24

But what about the differences between the numbers? What do those look like? We can compute the difference scores between each pair of numbers, then put them in a matrix like the one below:

# the 10 numbers from above
numbers <- c(1, 3, 4, 5, 5, 6, 7, 8, 9, 24)
# a 10x10 matrix where each column is a copy of the numbers
mat <- matrix(rep(numbers, 10), ncol = 10)
# subtract: cell [i, j] holds numbers[j] - numbers[i]
differences <- t(mat) - numbers
# label the rows and columns with the original numbers
row.names(differences) <- numbers
colnames(differences) <- numbers
# render the matrix as a table
knitr::kable(differences, row.names = TRUE)
    |   |   1|   3|   4|   5|   5|   6|   7|   8|   9| 24|
    |:--|---:|---:|---:|---:|---:|---:|---:|---:|---:|--:|
    |1  |   0|   2|   3|   4|   4|   5|   6|   7|   8| 23|
    |3  |  -2|   0|   1|   2|   2|   3|   4|   5|   6| 21|
    |4  |  -3|  -1|   0|   1|   1|   2|   3|   4|   5| 20|
    |5  |  -4|  -2|  -1|   0|   0|   1|   2|   3|   4| 19|
    |5  |  -4|  -2|  -1|   0|   0|   1|   2|   3|   4| 19|
    |6  |  -5|  -3|  -2|  -1|  -1|   0|   1|   2|   3| 18|
    |7  |  -6|  -4|  -3|  -2|  -2|  -1|   0|   1|   2| 17|
    |8  |  -7|  -5|  -4|  -3|  -3|  -2|  -1|   0|   1| 16|
    |9  |  -8|  -6|  -5|  -4|  -4|  -3|  -2|  -1|   0| 15|
    |24 | -23| -21| -20| -19| -19| -18| -17| -16| -15|  0|

    We are looking at all of the possible differences between each number and every other number. So, in the top left, the difference between 1 and itself is 0. One column over to the right, the difference between 3 and 1 (3-1) is 2, etc. As you can see, this is a 10x10 matrix, which means there are 100 differences to look at. Not too bad, but if we had 500 numbers, then we would have 500*500 = 250,000 differences to look at (go for it if you like looking at that sort of thing).

    Pause for a simple question. What would this matrix look like if all of the 10 numbers in our data were the same number? It should look like a bunch of 0s right? Good. In that case, we could easily see that the numbers have no variation.

But, when the numbers are different, we get a very large matrix of difference scores. How can we summarize that? How about we apply what we learned from the previous section on measures of central tendency? We have a lot of differences, so we could ask something like: what is the average difference? So, we could just take all of our differences and compute the mean difference, right? What do you think would happen if we did that?

    Let’s try it out on these three numbers:

    1 2 3

|   |  1|  2|  3|
|:--|--:|--:|--:|
|1  |  0|  1|  2|
|2  | -1|  0|  1|
|3  | -2| -1|  0|

    You might already guess what is going to happen. Let’s compute the mean:

    \[ \text{mean of difference scores} = \frac{0+1+2-1+0+1-2-1+0}{9} = \frac{0}{9} = 0 \nonumber \]

    Uh oh, we get zero for the mean of the difference scores. This will always happen whenever you take the mean of the difference scores. We can see that there are some differences between the numbers, so using 0 as the summary value for the variation in the numbers doesn’t make much sense.
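
You can confirm this in R. Here is a quick sketch using the three numbers, with the base function outer() building the matrix of all pairwise differences:

small <- c(1, 2, 3)
# outer() computes every pairwise difference: cell [i, j] holds small[i] - small[j]
differences <- outer(small, small, FUN = "-")
# the negative differences exactly cancel the positive ones
mean(differences) # 0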

    Furthermore, you might also notice that the matrices of difference scores are redundant. The diagonal is always zero, and numbers on one side of the diagonal are the same as the numbers on the other side, except their signs are reversed. So, that’s one reason why the difference scores add up to zero.

These are little problems that can be solved by computing the variance and the standard deviation. For now, the standard deviation is just a trick that we use to avoid getting a zero. But, later we will see it has properties that are important for other reasons.

    The Variance

Variability, variation, variance, vary, variable, varying, variety. Confused yet? Before we describe the variance, we want you to be OK with how this word is used. First, don’t forget the big picture. We know that variability and variation refer to the big idea of differences between numbers. We can even use the word variance in the same way. When numbers are different, they have variance.

    Note

The formulas for variance and standard deviation depend on whether you think your data represents an entire population of numbers, or is a sample from that population. We discuss this issue later on. For now, we divide by N; later we discuss why you will often divide by N-1 instead.

The word variance also refers to a specific summary statistic: the mean of the squared deviations from the mean. Hold on, what? Plain English please. The variance is the sum of the squared difference scores divided by the number of scores, where the difference scores are computed between each score and the mean. What are these scores? The scores are the numbers in the data set. Let’s see the formula in English first:

    \[ \textit{variance} = \frac{\text{Sum of squared difference scores}}{\text{Number of Scores}} \nonumber \]

    Deviations from the mean, Difference scores from the mean

    We got a little bit complicated before when we computed the difference scores between all of the numbers in the data. Let’s do it again, but in a more manageable way. This time, we calculate the difference between each score and the mean. The idea here is

    1. We can figure out how similar our scores are by computing the mean
    2. Then we can figure out how different our scores are from the mean

This could tell us, 1) something about whether our scores are really all very close to the mean (which could help us know if the mean is a good representative number for the data), and 2) something about how much difference there is in the numbers.

    Take a look at this table:

|scores | values| mean| Difference_from_Mean|
|:------|------:|----:|--------------------:|
|1      |      1|  4.5|                 -3.5|
|2      |      6|  4.5|                  1.5|
|3      |      4|  4.5|                 -0.5|
|4      |      2|  4.5|                 -2.5|
|5      |      6|  4.5|                  1.5|
|6      |      8|  4.5|                  3.5|
|Sums   |     27| 27.0|                  0.0|
|Means  |    4.5|  4.5|                  0.0|

The first column shows that we have 6 scores in the data set, and the values column shows each score. The sum of the values and the mean are presented in the last two rows. They were obtained by:

    \[ \frac{1+6+4+2+6+8}{6} = \frac{27}{6} = 4.5 \nonumber \]

The third column, mean, appears a bit silly. We are just listing the mean once for every score. If you think back to our discussion about the meaning of the mean, then you will remember that it equally distributes the total sum across each data point. We can see that here: if we treat each score as the mean, then every score is a 4.5. We can also see that adding up all of the means for each score gives us back 27, which is the sum of the original values. Also, we see that if we find the mean of the mean scores, we get back the mean (4.5 again).
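
As a quick sketch in R, both of those facts about the mean column check out:

values <- c(1, 6, 4, 2, 6, 8)
# replace every score with the mean: the total is preserved
sum(rep(mean(values), length(values))) # 27
# and the mean of the means is still the mean
mean(rep(mean(values), length(values))) # 4.5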

    All of the action is occurring in the fourth column, Difference_from_Mean. Here, we are showing the difference scores from the mean, using \(X_{i}-\bar{X}\). In other words, we subtracted the mean from each score. So, the first score, 1, is -3.5 from the mean, the second score, 6, is +1.5 from the mean, and so on.

Now, we can look at our original scores and we can look at their differences from the mean. Notice, we don’t have a matrix of raw difference scores anymore, so it is much easier to look at. But, we still have a problem:

We can see that there are non-zero values in the difference scores, so we know there are differences in the data. But, when we add them all up, we still get zero, which makes it seem like there are a total of zero differences in the data. Why does this happen, and what can we do about it?
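
Here is the same computation as a quick sketch in R, confirming that the difference scores from the mean sum to zero:

values <- c(1, 6, 4, 2, 6, 8)
# subtract the mean from every score
deviations <- values - mean(values)
deviations # -3.5 1.5 -0.5 -2.5 1.5 3.5
sum(deviations) # 0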

    The mean is the balancing point in the data

One brief pause here to point out another wonderful property of the mean. It is the balancing point in the data. If you take a pen or pencil and try to balance it on your finger so it lies flat, what are you doing? You need to find the center of mass in the pen, so that half of it is on one side, and the other half is on the other side. That’s how balancing works. One side = the other side.

We can think of data as having mass or weight to it. If we put our data on our bathroom scale, we could figure out how heavy it was by summing it up. If we wanted to split the data down the middle so that half of the weight was equal to the other half, then we could balance the data on top of a pin. The mean of the data tells you where to put the pin. It is the location in the data where the differences from that location on one side add up to the same amount as the differences on the other side.

If we think this through, it means that the sum of the difference scores from the mean will always add up to zero. This is because the difference scores on one side of the mean will always add up to \(-x\) (whatever the sum of those negative differences is), and the difference scores on the other side of the mean will always add up to \(+x\) (the same value, only positive). And:

\( -x + x = 0 \), right?

    Right.

    The squared deviations

Some devious someone divined a solution to the fact that difference scores from the mean always add to zero. Can you think of any solutions? For example, what could you do to the difference scores so that you could add them up, and they would weigh something useful, that is, they would not be zero?

The devious solution is to square the numbers. Squaring converts all the negative numbers to positive numbers. For example, \(2^2 = 4\), and \((-2)^2 = 4\). Remember how squaring works: we multiply the number by itself, so \(2^2 = 2 \times 2 = 4\), and \((-2)^2 = -2 \times -2 = 4\). We use the term squared deviations to refer to difference scores that have been squared. Deviations are things that move away from something. The difference scores move away from the mean, so we also call them deviations.

    Let’s look at our table again, but add the squared deviations.

|scores | values| mean| Difference_from_Mean| Squared_Deviations|
|:------|------:|----:|--------------------:|------------------:|
|1      |      1|  4.5|                 -3.5|              12.25|
|2      |      6|  4.5|                  1.5|               2.25|
|3      |      4|  4.5|                 -0.5|               0.25|
|4      |      2|  4.5|                 -2.5|               6.25|
|5      |      6|  4.5|                  1.5|               2.25|
|6      |      8|  4.5|                  3.5|              12.25|
|Sums   |     27| 27.0|                  0.0|              35.50|
|Means  |    4.5|  4.5|                  0.0|          5.9166667|

OK, now we have a new column called Squared_Deviations. These are just the difference scores squared. So, \((-3.5)^2 = 12.25\), and so on. You can confirm this for yourself with your cellphone calculator.

    Now that all of the squared deviations are positive, we can add them up. When we do this we create something very special called the sum of squares (SS), also known as the sum of the squared deviations from the mean. We will talk at length about this SS later on in the ANOVA chapter. So, when you get there, remember that you already know what it is, just some sums of some squared deviations, nothing fancy.
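
In R, the sum of squares takes one line. A quick sketch using the scores from the table:

values <- c(1, 6, 4, 2, 6, 8)
# square each deviation from the mean, then add them up
squared_deviations <- (values - mean(values))^2
squared_deviations # 12.25 2.25 0.25 6.25 2.25 12.25
sum(squared_deviations) # 35.5, the sum of squares (SS)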

    Finally, the variance

    Guess what, we already computed the variance. It already happened, and maybe you didn’t notice. “Wait, I missed that, what happened?”.

First, see if you can remember what we are trying to do here. Take a pause, and see if you can tell yourself what problem we are trying to solve.

    pause

    Without further ado, we are trying to get a summary of the differences in our data. There are just as many difference scores from the mean as there are data points, which can be a lot, so it would be nice to have a single number to look at, something like a mean, that would tell us about the average differences in the data.

If you look at the table, you can see we already computed the mean of the squared deviations. First, we found the sum (SS), then below that we calculated the mean = 5.9166667 (the 6 repeats forever). This is the variance. The variance is the mean of the squared deviations:

    \( \textit{variance} = \frac{SS}{N}\), where SS is the sum of the squared deviations, and N is the number of observations.
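
Here is that formula as a quick sketch in R. Note that R’s built-in var() divides by N-1 rather than N, for the reasons we put off until later:

values <- c(1, 6, 4, 2, 6, 8)
N <- length(values)
SS <- sum((values - mean(values))^2)
# the variance as defined here: SS divided by N
SS / N # 5.916667
# R's built-in var() divides by N - 1 instead
var(values) # 7.1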

OK, now what? What do I do with the variance? What does this number mean? Good question. The variance is often an unhelpful number to look at. Why? Because it is not on the same scale as the original data. This is because we squared the difference scores before taking the mean. Squaring produces large numbers. For example, we see a 12.25 in there. That’s a big difference, bigger than the difference between any two of the original values. What to do? How can we bring the numbers back down to their original unsquared size?

If you are thinking about taking the square root, that’s a ding ding ding, correct answer for you. We can always unsquare anything by taking the square root. So, let’s do that to the variance: \(\sqrt{5.9166667} = 2.4324199\).

    The Standard Deviation

    Oops, we did it again. We already computed the standard deviation, and we didn’t tell you. The standard deviation is the square root of the variance…At least, it is right now, until we complicate matters for you in the next chapter.

    Here is the formula for the standard deviation:

    \[\text{standard deviation} = \sqrt{\textit{Variance}} = \sqrt{\frac{SS}{N}} \nonumber \]

    We could also expand this to say:

\[\text{standard deviation} = \sqrt{\frac{\sum_{i=1}^{N}(x_{i}-\bar{x})^2}{N}} \nonumber \]

    Don’t let those big square root signs put you off. Now, you know what they are doing there. Just bringing our measure of the variance back down to the original size of the data. Let’s look at our table again:

|scores | values| mean| Difference_from_Mean| Squared_Deviations|
|:------|------:|----:|--------------------:|------------------:|
|1      |      1|  4.5|                 -3.5|              12.25|
|2      |      6|  4.5|                  1.5|               2.25|
|3      |      4|  4.5|                 -0.5|               0.25|
|4      |      2|  4.5|                 -2.5|               6.25|
|5      |      6|  4.5|                  1.5|               2.25|
|6      |      8|  4.5|                  3.5|              12.25|
|Sums   |     27| 27.0|                  0.0|              35.50|
|Means  |    4.5|  4.5|                  0.0|          5.9166667|

We measured the standard deviation as \( 2.4324199 \). Notice this number fits right in with the difference scores from the mean. All of the difference scores are kind of in and around + or - \( 2.4324199 \). Whereas, if we looked at the variance, 5.9166667 is just too big; it doesn’t summarize the actual differences very well.

What does all this mean? Well, if someone told you they had some numbers with a mean of 4.5 (like the values in our table), and a standard deviation of \( 2.4324199 \), you would get a pretty good summary of the numbers. You would know that many of the numbers are around 4.5, and you would know that not all of the numbers are 4.5. You would know that the numbers spread around 4.5. You would also know that the spread isn’t super huge, it’s only + or - \( 2.4324199 \) on average. That’s a good starting point for describing numbers.

    If you had loads of numbers, you could reduce them down to the mean and the standard deviation, and still be pretty well off in terms of getting a sense of those numbers.
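
As a closing sketch in R, that reduction looks like this. Again, the built-in sd() divides by N-1, so it returns a slightly bigger number than our divide-by-N formula:

values <- c(1, 6, 4, 2, 6, 8)
N <- length(values)
# summarize the data with two numbers: the mean and the standard deviation
mean(values) # 4.5
sqrt(sum((values - mean(values))^2) / N) # 2.4324199
# R's built-in sd() uses N - 1 in the denominator
sd(values) # 2.664583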


    This page titled 2.5: Measures of Variation (Differentness) is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Matthew J. C. Crump via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.