# 2.8: Measures of the Spread of the Data


An important characteristic of any set of data is the variation in the data. In some data sets, the data values are concentrated closely near the mean; in other data sets, the data values are more widely spread out from the mean. The most common measure of variation, or spread, is the standard deviation. The **standard deviation** is a number that measures how far data values are from their mean.

## The standard deviation

- provides a numerical measure of the overall amount of variation in a data set, and
- can be used to determine whether a particular data value is close to or far from the mean.

### The standard deviation provides a measure of the overall variation in a data set

The standard deviation is always positive or zero. The standard deviation is small when the data are all concentrated close to the mean, exhibiting little variation or spread. The standard deviation is larger when the data values are more spread out from the mean, exhibiting more variation.

Suppose that we are studying the amount of time customers wait in line at the checkout at supermarket \(A\) and supermarket \(B\). The average wait time at both supermarkets is five minutes. At supermarket \(A\), the standard deviation for the wait time is two minutes; at supermarket \(B\), the standard deviation for the wait time is four minutes.

Because supermarket \(B\) has a higher standard deviation, we know that there is more variation in the wait times at supermarket \(B\). Overall, wait times at supermarket \(B\) are more spread out from the average; wait times at supermarket \(A\) are more concentrated near the average.

### Calculating the Standard Deviation

If \(x_i\) is a data value, then the difference "\(x_i\) minus the mean" is called its **deviation**. In a data set, there are as many deviations as there are items in the data set. The deviations are used to calculate the standard deviation. If the numbers belong to a population, in symbols a deviation is \(x_i – \mu\). For sample data, in symbols a deviation is \(x_i – \overline{x}\).

The procedure to calculate the standard deviation depends on whether the numbers are the entire population or are data from a sample. The calculations are similar, but not identical. Therefore the symbol used to represent the standard deviation depends on whether it is calculated from a population or a sample. The lower case letter \(s\) represents the sample standard deviation and the Greek letter \(\sigma\) (sigma, lower case) represents the population standard deviation. If the sample has the same characteristics as the population, then \(s\) should be a good estimate of \(\sigma\).

To calculate the standard deviation, we need to calculate the variance first. The **variance** is the **average of the squares of the deviations** (the \(x_i – \overline{x}\) values for a sample, or the \(x_i – \mu\) values for a population). The symbol \(\sigma^2\) represents the population variance; the population standard deviation \(\sigma\) is the square root of the population variance. The symbol \(s^2\) represents the sample variance; the sample standard deviation \(s\) is the square root of the sample variance. You can think of the standard deviation as a special average of the deviations. Formally, the variance is the second moment of the distribution around the mean (the second central moment). Remember that the mean is the first moment of the distribution.

If the numbers come from a census of the entire **population** and not a sample, when we calculate the average of the squared deviations to find the variance, we divide by \(N\), the number of items in the population. If the data are from a **sample** rather than a population, when we calculate the average of the squared deviations, we divide by \(\bf{n – 1}\), one less than the number of items in the sample.

### Formulas for the Sample Standard Deviation

- \(s=\sqrt{\frac{\sum^n_{i=1}(x_i-\overline{x})^{2}}{n-1}} \text { or } s=\sqrt{\frac{\sum f_i(x_i-\overline{x})^{2}}{n-1}} \text { or } s=\sqrt{\frac{\left(\sum_{i=1}^{n} x_i^{2}\right)-n \overline{x}^{2}}{n-1}}\)
- For the sample standard deviation, the denominator is \(\bf{n – 1}\), that is the sample size minus 1.

### Formulas for the Population Standard Deviation

- \(\sigma=\sqrt{\frac{\sum^N_{i=1}(x_i-\mu)^{2}}{N}} \text { or } \sigma=\sqrt{\frac{\sum f_i(x_i-\mu)^{2}}{N}} \text { or } \sigma=\sqrt{\frac{\sum_{i=1}^{N} x_{i}^{2}}{N}-\mu^{2}}\)
- For the population standard deviation, the denominator is \(N\), the number of items in the population.

In these formulas, \(f_i\) represents the frequency with which a value appears. For example, if a value appears once, \(f_i=1\). If a value appears three times in the data set or population, \(f_i=3\).
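As a quick illustrative check (not part of the original text), Python's standard `statistics` module implements both divisor conventions: `stdev` divides the squared deviations by \(n-1\), while `pstdev` divides by \(N\). The data set below is made up for illustration.

```python
import statistics

# A small made-up data set with mean 5
data = [2, 4, 4, 4, 5, 5, 7, 9]

# Sample standard deviation s: sum of squared deviations divided by n - 1
s = statistics.stdev(data)

# Population standard deviation sigma: sum of squared deviations divided by N
sigma = statistics.pstdev(data)

print(f"s = {s:.4f}, sigma = {sigma:.4f}")
```

Because the sample version divides by a smaller number, \(s\) is always at least as large as \(\sigma\) for the same data.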

Two important observations concerning the variance and standard deviation: the deviations are measured from the mean, and the deviations are squared. In principle, the deviations could be measured from any point; however, our interest is in measuring from the center weight of the data, that is, the "normal" or most usual value of the observation. Later we will be trying to measure the "unusualness" of an observation or a sample mean, and thus we need a measure from the mean. The second observation is that the deviations are squared. This does two things: first, it makes the deviations all positive, and second, it changes the units of measurement from those of the mean and the original observations. If the data are weights, then the mean is measured in pounds, but the variance is measured in pounds-squared. One reason to use the standard deviation is to return to the original units of measurement by taking the square root of the variance. Further, squaring the deviations magnifies their value: a deviation of 10 from the mean when squared is 100, but a deviation of 100 from the mean when squared is 10,000. This places great weight on outliers when calculating the variance.

### Types of Variability in Samples

When trying to study a population, a sample is often used, either for convenience or because it is not possible to access the entire population. The outcomes obtained from different samples or measurements may differ; variability is the term used to describe these differences. Common types of variability include the following:

- Observational or measurement variability
- Natural variability
- Induced variability
- Sample variability

Here are some examples to describe each type of variability.

**Example 1: Measurement variability**

Measurement variability occurs when there are differences in the instruments used to measure or in the people using those instruments. If we are gathering data on how long it takes for a ball to drop from a height by having students measure the time of the drop with a stopwatch, we may experience measurement variability if the two stopwatches used were made by different manufacturers: For example, one stopwatch measures to the nearest second, whereas the other one measures to the nearest tenth of a second. We also may experience measurement variability because two different people are gathering the data. Their reaction times in pressing the button on the stopwatch may differ; thus, the outcomes will vary accordingly. The differences in outcomes may be affected by measurement variability.

**Example 2: Natural variability**

Natural variability arises from the differences that naturally occur because members of a population differ from each other. For example, if we have two identical corn plants and we expose both plants to the same amount of water and sunlight, they may still grow at different rates simply because they are two different corn plants. The difference in outcomes may be explained by natural variability.

**Example 3: Induced variability**

Induced variability is the counterpart to natural variability; this occurs because we have artificially induced an element of variation (that, by definition, was not present naturally): For example, we assign people to two different groups to study memory, and we induce a variable in one group by limiting the amount of sleep they get. The difference in outcomes may be affected by induced variability.

**Example 4: Sample variability**

Sample variability occurs when multiple random samples are taken from the same population. For example, if I conduct four surveys of 50 people randomly selected from a given population, the differences in outcomes may be affected by sample variability.

In a fifth grade class, the teacher was interested in the average age and the sample standard deviation of the ages of her students. The following data are the ages for a SAMPLE of \(n = 20\) fifth grade students. The ages are rounded to the nearest half year:

9; 9.5; 9.5; 10; 10; 10; 10; 10.5; 10.5; 10.5; 10.5; 11; 11; 11; 11; 11; 11; 11.5; 11.5; 11.5;

\[\overline{x}=\frac{9+9.5(2)+10(4)+10.5(4)+11(6)+11.5(3)}{20}=10.525\nonumber\]

The average age is 10.53 years, rounded to two places.

The variance may be calculated by using a table. Then the standard deviation is calculated by taking the square root of the variance. We will explain the parts of the table after calculating \(s\).

Data \(x_i\) | Freq. \(f_i\) | Deviations \((x_i - \overline{x})\) | Deviations\(^2\) \((x_i - \overline{x})^2\) | (Freq.)(Deviations\(^2\)) \(f_i(x_i - \overline{x})^2\) |
---|---|---|---|---|
9 | 1 | \(9 - 10.525 = -1.525\) | \((-1.525)^2 = 2.325625\) | \(1 \times 2.325625 = 2.325625\) |
9.5 | 2 | \(9.5 - 10.525 = -1.025\) | \((-1.025)^2 = 1.050625\) | \(2 \times 1.050625 = 2.101250\) |
10 | 4 | \(10 - 10.525 = -0.525\) | \((-0.525)^2 = 0.275625\) | \(4 \times 0.275625 = 1.102500\) |
10.5 | 4 | \(10.5 - 10.525 = -0.025\) | \((-0.025)^2 = 0.000625\) | \(4 \times 0.000625 = 0.002500\) |
11 | 6 | \(11 - 10.525 = 0.475\) | \((0.475)^2 = 0.225625\) | \(6 \times 0.225625 = 1.353750\) |
11.5 | 3 | \(11.5 - 10.525 = 0.975\) | \((0.975)^2 = 0.950625\) | \(3 \times 0.950625 = 2.851875\) |
 | | | **Total** | **9.7375** |

The sample variance, \(s^2\), is equal to the sum of the last column (9.7375) divided by the total number of data values minus one \((20 – 1)\):

\(s^{2}=\frac{9.7375}{20-1}=0.5125\)

The **sample standard deviation** *s* is equal to the square root of the sample variance:

\(s=\sqrt{0.5125}=0.715891\), which rounds to two decimal places as \(s = 0.72\).
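The table's arithmetic can be verified with a short Python sketch; the ages are the sample from the example, and the `statistics` module is assumed available:

```python
import statistics

# The n = 20 ages from the fifth grade sample
ages = [9] + [9.5] * 2 + [10] * 4 + [10.5] * 4 + [11] * 6 + [11.5] * 3

x_bar = statistics.mean(ages)    # sample mean, 10.525
s2 = statistics.variance(ages)   # sample variance, divides by n - 1 = 19
s = statistics.stdev(ages)       # sample standard deviation

print(round(x_bar, 3), round(s2, 4), round(s, 2))
```

The output matches the hand calculation: a variance of 0.5125 and a standard deviation of 0.72.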

### Explanation of the standard deviation calculation shown in the table

The deviations show how spread out the data are about the mean. The data value 11.5 is farther from the mean than the data value 11, as indicated by the deviations 0.975 and 0.475. A positive deviation occurs when the data value is greater than the mean, whereas a negative deviation occurs when the data value is less than the mean. The deviation is –1.525 for the data value 9. **If you add the deviations, the sum is always zero.** (For Example \(\PageIndex{1}\), there are \(n = 20\) deviations.) So you cannot simply add the deviations to get the spread of the data. By squaring the deviations, you make them positive numbers, and the sum will also be positive. The variance, then, is the average squared deviation. By squaring the deviations we place an extreme penalty on observations that are far from the mean; these observations get greater weight in the calculation of the variance. We will see later that the variance (standard deviation) plays a critical role in determining our conclusions in inferential statistics. We can begin now by using the standard deviation as a measure of "unusualness." "How did you do on the test?" "Terrific! Two standard deviations above the mean." This, we will see, is an unusually good exam grade.

The variance is a squared measure and does not have the same units as the data. Taking the square root solves the problem. The standard deviation measures the spread in the same units as the data.

Notice that instead of dividing by \(n = 20\), the calculation divided by \(n – 1 = 20 – 1 = 19\) because the data is a sample. For the **sample** variance, we divide by the sample size minus one \((n – 1)\). Why not divide by \(n\)? The answer has to do with the population variance. **The sample variance is an estimate of the population variance.** This estimate requires us to use an estimate of the population mean rather than the actual population mean. Based on the theoretical mathematics that lies behind these calculations, dividing by \((n – 1)\) gives a better estimate of the population variance.
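A small simulation, offered here as an illustrative sketch, shows why \(n - 1\) helps: when many small samples are drawn from the same population, dividing the squared deviations by \(n\) systematically underestimates the population variance, while dividing by \(n - 1\) centers the estimates near the true value. The population parameters below are made up for illustration.

```python
import random
import statistics

random.seed(1)
POP_VAR = 25.0  # population is gauss(0, 5), so the true variance is 25

biased, unbiased = [], []
for _ in range(20_000):
    sample = [random.gauss(0, 5) for _ in range(5)]
    m = statistics.mean(sample)
    ss = sum((x - m) ** 2 for x in sample)
    biased.append(ss / 5)    # dividing by n
    unbiased.append(ss / 4)  # dividing by n - 1

print(round(statistics.mean(biased), 1))    # tends to fall near 20, below 25
print(round(statistics.mean(unbiased), 1))  # tends to fall near 25
```

The divide-by-\(n\) estimates average out to roughly \(\frac{n-1}{n}\) of the true variance, which is exactly the shortfall the \(n - 1\) divisor corrects.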

The standard deviation, \(s\) or \(\sigma\), is either zero or larger than zero. Describing the data with reference to the spread is called "variability". The variability in data depends upon the method by which the outcomes are obtained, for example, by measuring or by random sampling. When the standard deviation is zero, there is no spread; that is, all the data values are equal to each other. The standard deviation is small when the data are all concentrated close to the mean, and is larger when the data values show more variation from the mean. When the standard deviation is much larger than zero, the data values are very spread out about the mean; outliers can make \(s\) or \(\sigma\) very large.

Use the following data (first exam scores) from Susan Dean's spring pre-calculus class:

\(33; 42; 49; 49; 53; 55; 55; 61; 63; 67; 68; 68; 69; 69; 72; 73; 74; 78; 80; 83; 88; 88; 88; 90; 92; 94; 94; 94; 94; 96; 100\)

- Create a chart containing the data, frequencies, relative frequencies, and cumulative relative frequencies to three decimal places.
- Calculate the following to one decimal place:
- The sample mean
- The sample standard deviation
- The median
- The first quartile
- The third quartile
- \(IQR\)

**Answer**-
a. See Table \(\PageIndex{2}\)

b.

- The sample mean = 73.5
- The sample standard deviation = 17.9
- The median = 73
- The first quartile = 61
- The third quartile = 90
- \(IQR = 90 – 61 = 29\)

Data | Frequency | Relative frequency | Cumulative relative frequency |
---|---|---|---|
33 | 1 | 0.032 | 0.032 |
42 | 1 | 0.032 | 0.064 |
49 | 2 | 0.065 | 0.129 |
53 | 1 | 0.032 | 0.161 |
55 | 2 | 0.065 | 0.226 |
61 | 1 | 0.032 | 0.258 |
63 | 1 | 0.032 | 0.290 |
67 | 1 | 0.032 | 0.322 |
68 | 2 | 0.065 | 0.387 |
69 | 2 | 0.065 | 0.452 |
72 | 1 | 0.032 | 0.484 |
73 | 1 | 0.032 | 0.516 |
74 | 1 | 0.032 | 0.548 |
78 | 1 | 0.032 | 0.580 |
80 | 1 | 0.032 | 0.612 |
83 | 1 | 0.032 | 0.644 |
88 | 3 | 0.097 | 0.741 |
90 | 1 | 0.032 | 0.773 |
92 | 1 | 0.032 | 0.805 |
94 | 4 | 0.129 | 0.934 |
96 | 1 | 0.032 | 0.966 |
100 | 1 | 0.032 | 0.998 (Why isn't this value 1? Answer: Rounding) |
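As a cross-check of the answers above, the summary statistics can be recomputed in Python. One caution: quartile conventions differ between textbooks and software, so `statistics.quantiles` may not reproduce the hand-computed \(Q_1 = 61\) and \(Q_3 = 90\) exactly.

```python
import statistics

# First-exam scores from the example (n = 31)
scores = [33, 42, 49, 49, 53, 55, 55, 61, 63, 67, 68, 68, 69, 69, 72,
          73, 74, 78, 80, 83, 88, 88, 88, 90, 92, 94, 94, 94, 94, 96, 100]

mean = statistics.mean(scores)     # sample mean
s = statistics.stdev(scores)       # sample standard deviation (divides by n - 1)
median = statistics.median(scores)

# This method interpolates between data points, so Q1 and Q3 may differ
# slightly from the textbook's convention.
q1, _, q3 = statistics.quantiles(scores, n=4, method="inclusive")
iqr = q3 - q1

print(round(mean, 1), round(s, 1), median)
```

To one decimal place this agrees with the answer: a mean of 73.5, a standard deviation of 17.9, and a median of 73.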

## Standard deviation of Grouped Frequency Tables

Recall that for grouped data we do not know individual data values, so we cannot describe the typical value of the data with precision. In other words, we cannot find the exact mean, median, or mode. We can, however, determine the best estimate of the measures of center by finding the mean of the grouped data with the formula: \(\text{ Mean of Frequency Table }=\frac{\sum f_i m_i}{\sum f_i}\)

where \(f_i=\) interval frequencies and \(m_i\) = interval midpoints.

Just as we could not find the exact mean, neither can we find the exact standard deviation. Remember that the standard deviation describes numerically the expected deviation a data value has from the mean. In simple English, the standard deviation allows us to compare how “unusual” an individual data value is compared to the mean.

Find the standard deviation for the data in __Table \(\PageIndex{3}\)__.

Class | Frequency, \(f_i\) | Midpoint, \(m_i\) | \(f_i\cdot m_i\) | \(f_i(m_i-\bar{x})^2\) |
---|---|---|---|---|
0–2 | 1 | 1 | \(1\cdot 1=1\) | \(1(1-6.88)^2=34.57\) |
3–5 | 6 | 4 | \(6\cdot 4=24\) | \(6(4-6.88)^2=49.77\) |
6–8 | 10 | 7 | \(10\cdot 7=70\) | \(10(7-6.88)^2=0.14\) |
9–11 | 7 | 10 | \(7\cdot 10=70\) | \(7(10-6.88)^2=68.14\) |
12–14 | 0 | 13 | \(0\cdot 13=0\) | \(0(13-6.88)^2=0\) |
\(n = 24\) | | | \(\bar{x}=\frac{165}{24}=6.88\) | \(s^2=\frac{152.62}{24-1}=6.64\) |

For this data set, we have the mean, \(\bar{x} = 6.88\), and the standard deviation, \(s_x = 2.58\). This means that a randomly selected data value would be expected to be about 2.58 units from the mean. If we look at the first class, we see that the class midpoint is equal to one. This is more than two standard deviations below the mean. While the formula for calculating the standard deviation is not complicated,

\[s_x=\sqrt{\frac{\sum f_i(m_i−\bar{x})^2}{n−1}}\nonumber\]

where \(s_x =\) sample standard deviation, \(\bar{x} =\) sample mean, the calculations are tedious. It is usually best to use technology when performing the calculations.
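The grouped-data calculation can be sketched in Python using only the midpoints and frequencies from the table:

```python
from math import sqrt

# Midpoints and frequencies taken from the grouped-data table
midpoints = [1, 4, 7, 10, 13]
frequencies = [1, 6, 10, 7, 0]

n = sum(frequencies)  # 24

# Estimated mean: sum of f_i * m_i divided by the total frequency
x_bar = sum(f * m for f, m in zip(frequencies, midpoints)) / n

# Estimated sample standard deviation for grouped data
ss = sum(f * (m - x_bar) ** 2 for f, m in zip(frequencies, midpoints))
s = sqrt(ss / (n - 1))

print(round(x_bar, 2), round(s, 2))
```

This reproduces the table's values of \(\bar{x} = 6.88\) and \(s_x = 2.58\) (the code carries the unrounded mean through, so intermediate sums differ slightly from the rounded table entries).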

## Comparing Values from Different Data Sets

The standard deviation is useful when comparing data values that come from different data sets. If the data sets have different means and standard deviations, then comparing the data values directly can be misleading.

- For each data value x, calculate how many standard deviations away from its mean the value is.
- Use the formula: x = mean + (#of STDEVs)(standard deviation); solve for #of STDEVs.
- \(\# \text { of } S T D E V s=\frac{x-\text { mean }}{\text { standard deviation }}\)
- Compare the results of this calculation.

#of STDEVs is often called a "z-score"; we can use the symbol \(z\). In symbols, the formulas become:

 | Solving for \(x\) | Solving for \(z\) |
---|---|---|
Sample | \(x=\overline{x}+z s\) | \(z=\frac{x-\overline{x}}{s}\) |
Population | \(x=\mu+z \sigma\) | \(z=\frac{x-\mu}{\sigma}\) |

Two students, John and Ali, from different high schools, wanted to find out who had the higher GPA when compared to his school. Which student had the higher GPA relative to his school?

Student | GPA | School mean GPA | School standard deviation |
---|---|---|---|

John | 2.85 | 3.0 | 0.7 |

Ali | 77 | 80 | 10 |

**Answer**-
For each student, determine how many standard deviations (#of STDEVs) his GPA is away from the average, for his school. Pay careful attention to signs when comparing and interpreting the answer.

\(z=\#\text{ of STDEVs}=\frac{\text{value}-\text{mean}}{\text{standard deviation}}=\frac{x-\mu}{\sigma}\)

For John, \(z=\#\text{ of STDEVs}=\frac{2.85-3.0}{0.7}=-0.21\)

For Ali, \(z=\#\text{ of STDEVs}=\frac{77-80}{10}=-0.3\)

John has the better GPA when compared to his school because his GPA is 0.21 standard deviations **below** his school's mean, while Ali's GPA is 0.3 standard deviations **below** his school's mean. John's z-score of –0.21 is higher than Ali's z-score of –0.3. For GPA, higher values are better, so we conclude that John has the better GPA when compared to his school.
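The z-score comparison can be written as a small helper function; the numbers are John's and Ali's from the table above:

```python
def z_score(x, mean, sd):
    """Number of standard deviations x lies from the mean."""
    return (x - mean) / sd

john = z_score(2.85, 3.0, 0.7)  # GPA on a 4-point scale
ali = z_score(77, 80, 10)       # GPA on a percentage scale

print(round(john, 2), round(ali, 2))
```

Because z-scores are unitless, the two GPAs can be compared directly even though one is on a 4-point scale and the other is a percentage.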

The following lists give a few facts that provide a little more insight into what the standard deviation tells us about the distribution of the data.

For ANY data set, no matter what the distribution of the data is:

- At least 75% of the data is within two standard deviations of the mean.
- At least 89% of the data is within three standard deviations of the mean.
- At least 95% of the data is within 4.5 standard deviations of the mean.
- This is known as Chebyshev's Rule.

For data having a Normal Distribution, which we will examine in great detail later:

- Approximately 68% of the data is within one standard deviation of the mean.
- Approximately 95% of the data is within two standard deviations of the mean.
- More than 99% of the data is within three standard deviations of the mean.
- This is known as the Empirical Rule.
- It is important to note that this rule only applies when the shape of the distribution of the data is bell-shaped and symmetric. We will learn more about this when studying the "Normal" or "Gaussian" probability distribution in later chapters.
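Both rules can be illustrated with a simulated bell-shaped data set (a sketch, assuming normally distributed data; the mean of 100 and standard deviation of 15 are arbitrary choices):

```python
import random
import statistics

random.seed(0)
data = [random.gauss(100, 15) for _ in range(10_000)]

mu = statistics.mean(data)
sd = statistics.pstdev(data)

def within(k):
    """Fraction of the data within k standard deviations of the mean."""
    return sum(abs(x - mu) <= k * sd for x in data) / len(data)

print(f"within 1 sd: {within(1):.3f}")  # near 0.68 for bell-shaped data
print(f"within 2 sd: {within(2):.3f}")  # near 0.95; Chebyshev guarantees at least 0.75
print(f"within 3 sd: {within(3):.3f}")  # near 0.997; Chebyshev guarantees at least 0.89
```

Note the contrast: Chebyshev's Rule gives conservative lower bounds that hold for any distribution, while the Empirical Rule gives much tighter approximations that hold only for bell-shaped, symmetric data.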

## Coefficient of Variation

Another useful way to compare distributions besides simple comparisons of means or standard deviations is to adjust for differences in the scale of the data being measured. Quite simply, a large variation in data with a large mean is different than the same variation in data with a small mean. To adjust for the scale of the underlying data the Coefficient of Variation (CV) has been developed. Mathematically:

\[CV=\frac{s}{\overline{x}} \times 100, \text { conditioned upon } \overline{x} \neq 0, \text { where } s \text { is the standard deviation of the data and } \overline{x} \text{ is the mean}\nonumber\]

We can see that this measures the variability of the underlying data as a percentage of the mean value, the center weight of the data set. This measure is useful in comparing risk where an adjustment is warranted because of differences in the scale of two data sets. In effect, the scale is changed to a common scale, percentage differences, which allows direct comparison of the magnitudes of variation of two or more data sets.
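A minimal sketch of the CV calculation, with made-up data sets sharing the same mean but different spreads:

```python
import statistics

def coefficient_of_variation(data):
    """CV = (s / x-bar) * 100: spread expressed as a percentage of the mean."""
    x_bar = statistics.mean(data)
    if x_bar == 0:
        raise ValueError("CV is undefined when the mean is zero")
    return statistics.stdev(data) / x_bar * 100

# Hypothetical data sets with the same mean (5) but different spreads
a = [3, 4, 5, 6, 7]
b = [1, 3, 5, 7, 9]

print(round(coefficient_of_variation(a), 1))  # smaller relative spread
print(round(coefficient_of_variation(b), 1))  # larger relative spread
```

Because both data sets share the same mean, the CV ranking here matches the standard-deviation ranking; the CV becomes genuinely informative when the means differ in scale.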